This is the start of the stable review cycle for the 6.1.141 release. There are 325 patches in this series, all of which will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 04 Jun 2025 13:42:20 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.1.141-rc1...
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.1.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 6.1.141-rc1
Nishanth Menon nm@ti.com net: ethernet: ti: am65-cpsw: Lower random mac address error print to info
Mark Pearson mpearson-lenovo@squebb.ca platform/x86: thinkpad_acpi: Ignore battery threshold change event notification
Valtteri Koskivuori vkoskiv@gmail.com platform/x86: fujitsu-laptop: Support Lifebook S2110 hotkeys
Trond Myklebust trond.myklebust@hammerspace.com NFS: Avoid flushing data while holding directory locks in nfs_rename()
Ilya Guterman amfernusus@gmail.com nvme-pci: add NVME_QUIRK_NO_DEEPEST_PS quirk for SOLIDIGM P44 Pro
Alessandro Grassi alessandro.grassi@mailbox.org spi: spi-sun4i: fix early activation
Masahiro Yamada masahiroy@kernel.org um: let 'make clean' properly clean underlying SUBARCH as well
John Chau johnchau@0atlas.com platform/x86: thinkpad_acpi: Support also NEC Lavie X1475JAS
Jeff Layton jlayton@kernel.org nfs: don't share pNFS DS connections between net namespaces
Milton Barrera miltonjosue2001@gmail.com HID: quirks: Add ADATA XPG alpha wireless mouse support
Christian Brauner brauner@kernel.org coredump: hand a pidfd to the usermode coredump helper
Christian Brauner brauner@kernel.org fork: use pidfd_prepare()
Christian Brauner brauner@kernel.org pid: add pidfd_prepare()
Christian Brauner brauner@kernel.org coredump: fix error handling for replace_fd()
Robin Murphy robin.murphy@arm.com perf/arm-cmn: Initialise cmn->cpu earlier
Robin Murphy robin.murphy@arm.com perf/arm-cmn: Fix REQ2/SNP2 mixup
Pedro Tammela pctammela@mojatatu.com net_sched: hfsc: Address reentrant enqueue adding class to eltree twice
Alok Tiwari alok.a.tiwari@oracle.com arm64: dts: qcom: sm8350: Fix typo in pil_camera_mem node
Shigeru Yoshida syoshida@redhat.com af_unix: Fix uninit-value in __unix_walk_scc()
Michal Luczaj mhal@rbox.co af_unix: Fix garbage collection of embryos carrying OOB with SCM_RIGHTS
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Add dead flag to struct scm_fp_list.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Don't access successor in unix_del_edges() during GC.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Try not to hold unix_gc_lock during accept().
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Remove lock dance in unix_peek_fds().
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Replace garbage collection algorithm.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Detect dead SCC.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Assign a unique index to SCC.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Avoid Tarjan's algorithm if unnecessary.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Skip GC if no cycle exists.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Save O(n) setup of Tarjan's algo.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Fix up unix_edge.successor for embryo socket.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Save listener for embryo socket.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Detect Strongly Connected Components.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Iterate all vertices by DFS.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Bulk update unix_tot_inflight/unix_inflight when queuing skb.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Link struct unix_edge when queuing skb.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Allocate struct unix_edge for each inflight AF_UNIX fd.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Allocate struct unix_vertex for each inflight AF_UNIX fd.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Remove CONFIG_UNIX_SCM.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Remove io_uring code for GC.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Replace BUG_ON() with WARN_ON_ONCE().
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Try to run GC async.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Run GC on only one CPU.
Kuniyuki Iwashima kuniyu@amazon.com af_unix: Return struct unix_sock from unix_get_socket().
Alexander Mikhalitsyn aleksandr.mikhalitsyn@canonical.com af_unix: Kconfig: make CONFIG_UNIX bool
Boris Burkov boris@bur.io btrfs: check folio mapping after unlock in relocate_one_folio()
Frederic Weisbecker frederic@kernel.org hrtimers: Force migrate away hrtimers queued after CPUHP_AP_HRTIMERS_DYING
Ratheesh Kannoth rkannoth@marvell.com octeontx2-pf: Fix page pool frag allocation warning
Ratheesh Kannoth rkannoth@marvell.com octeontx2-pf: Fix page pool cache index corruption.
Ratheesh Kannoth rkannoth@marvell.com octeontx2-pf: fix page_pool creation fail for rings > 32k
Harshit Mogalapalli harshit.m.mogalapalli@oracle.com dmaengine: idxd: Fix passing freed memory in idxd_cdev_open()
Balbir Singh balbirs@nvidia.com x86/mm/init: Handle the special case of device private pages in add_pages(), to not increase max_pfn and trigger dma_addressing_limited() bounce buffers
Nathan Chancellor nathan@kernel.org i3c: master: svc: Fix implicit fallthrough in svc_i3c_master_ibi_work()
Dan Carpenter dan.carpenter@linaro.org pinctrl: tegra: Fix off by one in tegra_pinctrl_get_group()
Geert Uytterhoeven geert+renesas@glider.be serial: sh-sci: Save and restore more registers
Nathan Chancellor nathan@kernel.org kbuild: Disable -Wdefault-const-init-unsafe
Larisa Grigore larisa.grigore@nxp.com spi: spi-fsl-dspi: Reset SR flags before sending a new message
Bogdan-Gabriel Roman bogdan-gabriel.roman@nxp.com spi: spi-fsl-dspi: Halt the module after a new message transfer
Larisa Grigore larisa.grigore@nxp.com spi: spi-fsl-dspi: restrict register range for regmap access
Namjae Jeon linkinjeon@kernel.org ksmbd: fix stream write failure
Jernej Skrabec jernej.skrabec@gmail.com Revert "arm64: dts: allwinner: h6: Use RSB for AXP805 PMIC connection"
Tianyang Zhang zhangtianyang@loongson.cn mm/page_alloc.c: avoid infinite retries caused by cpuset race
Breno Leitao leitao@debian.org memcg: always call cond_resched() after fn()
Mario Limonciello mario.limonciello@amd.com Revert "drm/amd: Keep display off while going into S4"
Wang Zhaolong wangzhaolong1@huawei.com smb: client: Reset all search buffer pointers when releasing buffer
Wang Zhaolong wangzhaolong1@huawei.com smb: client: Fix use-after-free in cifs_fill_dirent
feijuan.li feijuan.li@samsung.com drm/edid: fixed the bug that hdr metadata was not reset
Vladimir Moskovkin Vladimir.Moskovkin@kaspersky.com platform/x86: dell-wmi-sysman: Avoid buffer overflow in current_password_store()
Ilia Gavrilov Ilia.Gavrilov@infotecs.ru llc: fix data loss when reading from a socket in llc_ui_recvmsg()
Ed Burcher git@edburcher.com ALSA: hda/realtek: Add quirk for Lenovo Yoga Pro 7 14ASP10
Takashi Iwai tiwai@suse.de ALSA: pcm: Fix race of buffer access at PCM OSS layer
Oliver Hartkopp socketcan@hartkopp.net can: bcm: add missing rcu read protection for procfs content
Oliver Hartkopp socketcan@hartkopp.net can: bcm: add locking for bcm_op runtime updates
Carlos Sanchez carlossanchez@geotab.com can: slcan: allow reception of short error messages
Dominik Grzegorzek dominik.grzegorzek@oracle.com padata: do not leak refcount in reorder_work
Ivan Pravdin ipravdin.official@gmail.com crypto: algif_hash - fix double free in hash_accept
Geetha sowjanya gakula@marvell.com octeontx2-af: Fix APR entry mapping based on APR_LMT_CFG
Subbaraya Sundeep sbhatta@marvell.com octeontx2-af: Set LMT_ENA bit for APR table entries
Wang Liang wangliang74@huawei.com net/tipc: fix slab-use-after-free Read in tipc_aead_encrypt_done
Suman Ghosh sumang@marvell.com octeontx2-pf: Add AF_XDP non-zero copy support
Ratheesh Kannoth rkannoth@marvell.com octeontx2-pf: Add support for page pool
Cong Wang xiyou.wangcong@gmail.com sch_hfsc: Fix qlen accounting bug when using peek in hfsc_enqueue()
Pavel Begunkov asml.silence@gmail.com io_uring: fix overflow resched cqe reordering
Thangaraj Samynathan thangaraj.s@microchip.com net: lan743x: Restore SGMII CTRL register on resume
Paul Kocialkowski paulk@sys-base.io net: dwmac-sun8i: Use parsed internal PHY address instead of 1
Jacob Keller jacob.e.keller@intel.com ice: fix vf->num_mac count with port representors
Ido Schimmel idosch@nvidia.com bridge: netfilter: Fix forwarding of fragmented packets
Luiz Augusto von Dentz luiz.von.dentz@intel.com Bluetooth: L2CAP: Fix not checking l2cap_chan security level
Dave Jiang dave.jiang@intel.com dmaengine: idxd: Fix ->poll() return value
Paul Chaignon paul.chaignon@gmail.com xfrm: Sanitize marks before insert
Andre Przywara andre.przywara@arm.com clk: sunxi-ng: d1: Add missing divider for MMC mod clocks
Matti Lehtimäki matti.lehtimaki@gmail.com remoteproc: qcom_wcnss: Fix on platforms without fallback regulators
Vinicius Costa Gomes vinicius.gomes@intel.com dmaengine: idxd: Fix allowing write() from different address spaces
Fenghua Yu fenghua.yu@intel.com dmaengine: idxd: add idxd_copy_cr() to copy user completion record during page fault handling
Dave Jiang dave.jiang@intel.com dmaengine: idxd: add per DSA wq workqueue for processing cr faults
Sabrina Dubroca sd@queasysnail.net espintcp: remove encap socket caching to avoid reference leak
Al Viro viro@zeniv.linux.org.uk __legitimize_mnt(): check for MNT_SYNC_UMOUNT should be under mount_lock
Jason Andryuk jason.andryuk@amd.com xenbus: Allow PVH dom0 a non-local xenstore
Johannes Berg johannes.berg@intel.com wifi: iwlwifi: add support for Killer on MTL
Goldwyn Rodrigues rgoldwyn@suse.de btrfs: correct the order of prelim_ref arguments in btrfs__prelim_ref
Jens Axboe axboe@kernel.dk io_uring/fdinfo: annotate racy sq/cq head/tail reads
Alistair Francis alistair.francis@wdc.com nvmet-tcp: don't restore null sk_state_change
Takashi Iwai tiwai@suse.de ALSA: hda/realtek: Add quirk for HP Spectre x360 15-df1xxx
Takashi Iwai tiwai@suse.de ASoC: Intel: bytcr_rt5640: Add DMI quirk for Acer Aspire SW3-013
Martin Blumenstingl martin.blumenstingl@googlemail.com pinctrl: meson: define the pull up/down resistor value as 60 kOhm
Chenyuan Yang chenyuan0y@gmail.com ASoC: imx-card: Adjust over allocation of memory in imx_card_parse_of()
Jessica Zhang quic_jesszhan@quicinc.com drm: Add valid clones check
Douglas Anderson dianders@chromium.org drm/panel-edp: Add Starry 116KHD024006
Simona Vetter simona.vetter@ffwll.ch drm/atomic: clarify the rules around drm_atomic_state->allow_modeset
Rosen Penev rosenp@gmail.com wifi: ath9k: return by of_get_mac_address
Isaac Scott isaac.scott@ideasonboard.com regulator: ad5398: Add device tree support
Sean Anderson sean.anderson@linux.dev spi: zynqmp-gqspi: Always acknowledge interrupts
Ping-Ke Shih pkshih@realtek.com wifi: rtw89: add wiphy_lock() to work that isn't held wiphy_lock() yet
Bitterblue Smith rtl8821cerfe2@gmail.com wifi: rtw88: Don't use static local variable in rtw8822b_set_tx_power_index_by_rate
Soeren Moch smoch@web.de wifi: rtl8xxxu: retry firmware download on error
Ravi Bangoria ravi.bangoria@amd.com perf/amd/ibs: Fix perf_ibs_op.cnt_mask for CurCnt
Viktor Malik vmalik@redhat.com bpftool: Fix readlink usage in get_fd_type
Thomas Zimmermann tzimmermann@suse.de drm/ast: Find VBIOS mode from regular display size
Cezary Rojewski cezary.rojewski@intel.com ASoC: codecs: pcm3168a: Allow for 24-bit in provider mode
junan junan76@163.com HID: usbkbd: Fix the bit shift number for LED_KANA
Kai Mäkisara Kai.Makisara@kolumbus.fi scsi: st: Restore some drive settings after reset
Justin Tee justin.tee@broadcom.com scsi: lpfc: Free phba irq in lpfc_sli4_enable_msi() when pci_irq_vector() fails
Justin Tee justin.tee@broadcom.com scsi: lpfc: Handle duplicate D_IDs in ndlp search-by D_ID routine
Konstantin Taranov kotaranov@microsoft.com net/mana: fix warning in the writer of client oob
Michal Swiatkowski michal.swiatkowski@linux.intel.com ice: count combined queues using Rx/Tx count
Peter Zijlstra (Intel) peterz@infradead.org perf: Avoid the read if the count is already updated
Ankur Arora ankur.a.arora@oracle.com rcu: fix header guard for rcu_all_qs()
Ankur Arora ankur.a.arora@oracle.com rcu: handle unstable rdp in rcu_read_unlock_strict()
Ankur Arora ankur.a.arora@oracle.com rcu: handle quiescent states for PREEMPT_RCU=n, PREEMPT_COUNT=y
Heiner Kallweit hkallweit1@gmail.com r8169: don't scan PHY addresses > 0
Ido Schimmel idosch@nvidia.com vxlan: Annotate FDB data races
Depeng Shao quic_depengs@quicinc.com media: qcom: camss: csid: Only add TPG v4l2 ctrl if TPG hardware is available
Andrey Vatoropin a.vatoropin@crpt.ru hwmon: (xgene-hwmon) use appropriate type for the latency value
Jordan Crouse jorcrous@amazon.com clk: qcom: camcc-sm8250: Use clk_rcg2_shared_ops for some RCGs
Bitterblue Smith rtl8821cerfe2@gmail.com wifi: rtw88: Fix download_firmware_validate() for RTL8814AU
Aleksander Jan Bajkowski olek2@wp.pl r8152: add vendor/device ID pair for Dell Alienware AW1022z
Kuniyuki Iwashima kuniyu@amazon.com ip: fib_rules: Fetch net from fib_rule in fib[46]_rule_configure().
Athira Rajeev atrajeev@linux.vnet.ibm.com arch/powerpc/perf: Check the instruction type before creating sample with perf_mem_data_src
Johannes Berg johannes.berg@intel.com wifi: mac80211: remove misplaced drv_mgd_complete_tx() call
Johannes Berg johannes.berg@intel.com wifi: mac80211: don't unconditionally call drv_mgd_complete_tx()
William Tu witu@nvidia.com net/mlx5e: reduce rep rxq depth to 256 for ECPF
William Tu witu@nvidia.com net/mlx5e: set the tx_queue_len for pfifo_fast
Alexei Lazar alazar@nvidia.com net/mlx5: Extend Ethtool loopback selftest to support non-linear SKB
Alex Deucher alexander.deucher@amd.com drm/amd/display/dm: drop hw_support check in amdgpu_dm_i2c_xfer()
Shiwu Zhang shiwu.zhang@amd.com drm/amdgpu: enlarge the VBIOS binary size limit
Tom Chung chiahsuan.chung@amd.com drm/amd/display: Initial psr_version with correct setting
Jiang Liu gerry@linux.alibaba.com drm/amdgpu: reset psp->cmd to NULL after releasing the buffer
Dmitry Baryshkov dmitry.baryshkov@linaro.org phy: core: don't require set_mode() callback for phy_get_mode() to work
Claudiu Beznea claudiu.beznea.uj@bp.renesas.com serial: sh-sci: Update the suspend/resume support
Krzysztof Kozlowski krzysztof.kozlowski@linaro.org clk: qcom: clk-alpha-pll: Do not use random stack value for recalc rate
Kees Cook kees@kernel.org net/mlx4_core: Avoid impossible mlx4_db_alloc() order value
Brendan Jackman jackmanb@google.com kunit: tool: Use qboot on QEMU x86_64
Konstantin Andreev andreev@swemel.ru smack: recognize ipv4 CIPSO w/o categories
Valentin Caron valentin.caron@foss.st.com pinctrl: devicetree: do not goto err when probing hogs in pinctrl_dt_to_map
Kuninori Morimoto kuninori.morimoto.gx@renesas.com ASoC: soc-dai: check return value at snd_soc_dai_set_tdm_slot()
Hector Martin marcan@marcan.st ASoC: tas2764: Power up/down amp on mute ops
Hector Martin marcan@marcan.st ASoC: tas2764: Mark SW_RESET as volatile
Hector Martin marcan@marcan.st ASoC: tas2764: Add reg defaults for TAS2764_INT_CLK_CFG
Martin Povišer povik+lin@cutebit.org ASoC: ops: Enforce platform maximum on initial value
Shahar Shitrit shshitrit@nvidia.com net/mlx5: Apply rate-limiting to high temperature warning
Shahar Shitrit shshitrit@nvidia.com net/mlx5: Modify LSB bitmask in temperature event to include only the first bit
Hans Verkuil hverkuil@xs4all.nl media: test-drivers: vivid: don't call schedule in loop
Petr Machata petrm@nvidia.com vxlan: Join / leave MC group after remote changes
Xiaofei Tan tanxiaofei@huawei.com ACPI: HED: Always initialize before evged
Ilpo Järvinen ilpo.jarvinen@linux.intel.com PCI: Fix old_size lower bound in calculate_iosize() too
Jakub Kicinski kuba@kernel.org eth: mlx4: don't try to complete XDP frames in netpoll
Krzysztof Kozlowski krzysztof.kozlowski@linaro.org can: c_can: Use of_property_present() to test existence of DT property
Ahmad Fatoum a.fatoum@pengutronix.de pmdomain: imx: gpcv2: use proper helper for property detection
Michael Margolin mrgolin@amazon.com RDMA/core: Fix best page size finding when it can cross SG entries
Alexis Lothoré alexis.lothore@bootlin.com serial: mctrl_gpio: split disable_ms into sync and no_sync APIs
Frank Li Frank.Li@nxp.com i3c: master: svc: Flush FIFO before sending Dynamic Address Assignment(DAA)
Arnd Bergmann arnd@arndb.de EDAC/ie31200: work around false positive build warning
Peter Seiderer ps.report@gmx.net net: pktgen: fix access outside of user given buffer in pktgen_thread_write()
Ping-Ke Shih pkshih@realtek.com wifi: rtw89: fw: propagate error code from rtw89_h2c_tx()
Bitterblue Smith rtl8821cerfe2@gmail.com wifi: rtw88: Fix rtw_desc_to_mcsrate() to handle MCS16-31
Bitterblue Smith rtl8821cerfe2@gmail.com wifi: rtw88: Fix rtw_init_ht_cap() for RTL8814AU
Bitterblue Smith rtl8821cerfe2@gmail.com wifi: rtw88: Fix rtw_init_vht_cap() for RTL8814AU
Shivasharan S shivasharan.srikanteshwara@broadcom.com scsi: mpt3sas: Send a diag reset if target reset fails
Paul Burton paulburton@kernel.org clocksource: mips-gic-timer: Enable counter when CPUs start
Paul Burton paulburton@kernel.org MIPS: pm-cps: Use per-CPU variables as per-CPU, not per-core
Jason Gunthorpe jgg@ziepe.ca genirq/msi: Store the IOMMU IOVA directly in msi_desc instead of iommu_cookie
Bibo Mao maobibo@loongson.cn MIPS: Use arch specific syscall name match function
Balbir Singh balbirs@nvidia.com x86/kaslr: Reduce KASLR entropy on most x86 systems
Jinliang Zheng alexjlzheng@gmail.com dm: fix unconditional IO throttle caused by REQ_PREFLUSH
Nandakumar Edamana nandakumar@nandakumar.co.in libbpf: Fix out-of-bound read
Niklas Söderlund niklas.soderlund+renesas@ragnatech.se media: adv7180: Disable test-pattern control on adv7180
Rafael J. Wysocki rafael.j.wysocki@intel.com cpuidle: menu: Avoid discarding useful information
Waiman Long longman@redhat.com x86/nmi: Add an emergency handler in nmi_desc & use it in nmi_shootdown_cpus()
Yihan Zhu Yihan.Zhu@amd.com drm/amd/display: handle max_downscale_src_width fail check
Nir Lichtman nir@lichtman.org x86/build: Fix broken copy command in genimage.sh when making isoimage
Andrew Davis afd@ti.com soc: ti: k3-socinfo: Do not use syscon helper to build regmap
Hangbin Liu liuhangbin@gmail.com bonding: report duplicate MAC address in all situations
Arnd Bergmann arnd@arndb.de net: xgene-v2: remove incorrect ACPI_PTR annotation
Eric Woudstra ericwouds@gmail.com net: ethernet: mtk_ppe_offload: Allow QinQ, double ETH_P_8021Q only
Yuanjun Gong ruc_gongyuanjun@163.com leds: pwm-multicolor: Add check for fwnode_property_read_u32
Philip Yang Philip.Yang@amd.com drm/amdkfd: KFD release_work possible circular locking
Kevin Krakauer krakauer@google.com selftests/net: have `gro.sh -t` return a correct exit code
Moshe Shemesh moshe@nvidia.com net/mlx5: Avoid report two health errors on same syndrome
Viresh Kumar viresh.kumar@linaro.org firmware: arm_ffa: Set dma_mask for ffa devices
Stanimir Varbanov svarbanov@suse.de PCI: brcmstb: Add a softdep to MIP MSI-X driver
Stanimir Varbanov svarbanov@suse.de PCI: brcmstb: Expand inbound window size up to 64GB
Hector Martin marcan@marcan.st soc: apple: rtkit: Implement OSLog buffers properly
Janne Grunau j@jannau.net soc: apple: rtkit: Use high prio work queue
Kuhanh Murugasen Krishnan kuhanh.murugasen.krishnan@intel.com fpga: altera-cvp: Increase credit timeout
AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com drm/mediatek: mtk_dpi: Add checks for reg_h_fre_con existence
Li Bin bin.li@microchip.com ARM: at91: pm: fix at91_suspend_finish for ZQ calibration
Alexander Stein alexander.stein@ew.tq-group.com hwmon: (gpio-fan) Add missing mutex locks
Breno Leitao leitao@debian.org x86/bugs: Make spectre user default depend on MITIGATION_SPECTRE_V2
Ahmad Fatoum a.fatoum@pengutronix.de clk: imx8mp: inform CCF of maximum frequency of clocks
Ricardo Ribalda ribalda@chromium.org media: uvcvideo: Add sanity check to uvc_ioctl_xu_ctrl_map
Andy Yan andy.yan@rock-chips.com drm/rockchip: vop2: Add uv swap for cluster window
Kuniyuki Iwashima kuniyu@amazon.com ipv4: fib: Move fib_valid_key_len() to rtm_to_fib_config().
Maciej S. Szmigiero mail@maciej.szmigiero.name ALSA: hda/realtek: Enable PC beep passthrough for HP EliteBook 855 G7
Saket Kumar Bhaskar skb99@linux.ibm.com perf/hw_breakpoint: Return EOPNOTSUPP for unsupported breakpoint type
Peter Seiderer ps.report@gmx.net net: pktgen: fix mpls maximum labels list parsing
Alexander Sverdlin alexander.sverdlin@siemens.com net: ethernet: ti: cpsw_new: populate netdev of_node
Artur Weber aweber.kernel@gmail.com pinctrl: bcm281xx: Use "unsigned int" instead of bare "unsigned"
Hans Verkuil hverkuil@xs4all.nl media: cx231xx: set device_caps for 417
Victor Lu victorchengchi.lu@amd.com drm/amdgpu: Do not program AGP BAR regs under SRIOV in gfxhub_v1_0.c
Matti Lehtimäki matti.lehtimaki@gmail.com remoteproc: qcom_wcnss: Handle platforms with only single power domain
Choong Yong Liang yong.liang.choong@linux.intel.com net: phylink: use pl->link_interface in phylink_expects_phy()
Matthew Wilcox (Oracle) willy@infradead.org orangefs: Do not truncate file size
Ming-Hung Tsai mtsai@redhat.com dm cache: prevent BUG_ON by blocking retries on failed device resumes
Markus Elfring elfring@users.sourceforge.net media: c8sectpfe: Call of_node_put(i2c_bus) only once in c8sectpfe_probe()
Svyatoslav Ryhel clamor95@gmail.com ARM: tegra: Switch DSI-B clock parent to PLLD on Tegra114
Andy Shevchenko andriy.shevchenko@linux.intel.com ieee802154: ca8210: Use proper setters and getters for bitwise types
Alexandre Belloni alexandre.belloni@bootlin.com rtc: ds1307: stop disabling alarms on probe
Eric Dumazet edumazet@google.com tcp: bring back NUMA dispersion in inet_ehash_locks_alloc()
Takashi Iwai tiwai@suse.de ALSA: seq: Improve data consistency at polling
Andreas Schwab schwab@linux-m68k.org powerpc/prom_init: Fixup missing #size-cells on PowerBook6,7
Diogo Ivo diogo.ivo@tecnico.ulisboa.pt arm64: tegra: p2597: Fix gpio for vdd-1v8-dis regulator
Herbert Xu herbert@gondor.apana.org.au crypto: lzo - Fix compression buffer overrun
Aaron Kling luceoscutum@gmail.com cpufreq: tegra186: Share policy per cluster
Vasant Hegde vasant.hegde@amd.com iommu/amd/pgtbl_v2: Improve error handling
Alexey Klimov alexey.klimov@linaro.org ASoC: qcom: sm8250: explicitly set format in sm8250_be_hw_params_fixup()
Andy Shevchenko andriy.shevchenko@linux.intel.com auxdisplay: charlcd: Partially revert "Move hwidth and bwidth to struct hd44780_common"
Andreas Gruenbacher agruenba@redhat.com gfs2: Check for empty queue in run_queue
Zhikai Zhai zhikai.zhai@amd.com drm/amd/display: calculate the remain segments for all pipes
Willem de Bruijn willemb@google.com ipv6: save dontfrag in cork
Kurt Borja kuurtb@gmail.com hwmon: (dell-smm) Increment the number of fans
Erick Shepherd erick.shepherd@ni.com mmc: sdhci: Disable SD card clock before changing parameters
Kaustabh Chakraborty kauschluss@disroot.org mmc: dw_mmc: add exynos7870 DW MMC support
Ryan Roberts ryan.roberts@arm.com arm64/mm: Check PUD_TYPE_TABLE in pud_bad()
Nicolas Bouchinet nicolas.bouchinet@ssi.gouv.fr netfilter: conntrack: Bound nf_conntrack sysctl writes
Thomas Weißschuh thomas.weissschuh@linutronix.de timer_list: Don't use %pK through printk()
Eric Dumazet edumazet@google.com posix-timers: Add cond_resched() to posix_timer_add() search loop
Maher Sanalla msanalla@nvidia.com RDMA/uverbs: Propagate errors from rdma_lookup_get_uobject()
Baokun Li libaokun1@huawei.com ext4: reject the 'data_err=abort' option in nojournal mode
Ryan Walklin ryan@testtoast.com ASoC: sun4i-codec: support hp-det-gpios property
Prathamesh Shete pshete@nvidia.com pinctrl-tegra: Restore SFSEL bit when freeing pins
Frediano Ziglio frediano.ziglio@cloud.com xen: Add support for XenServer 6.1 platform device
Guangguan Wang guangguan.wang@linux.alibaba.com net/smc: use the correct ndev to find pnetid by pnetid table
Mikulas Patocka mpatocka@redhat.com dm: restrict dm device size to 2^63-512 bytes
Shashank Gupta shashankg@marvell.com crypto: octeontx2 - suppress auth failure screaming due to negative tests
Seyediman Seyedarab imandevel@gmail.com kbuild: fix argument parsing in scripts/config
Nícolas F. R. A. Prado nfraprado@collabora.com ASoC: mediatek: mt6359: Add stub for mt6359_accdet_enable_jack_detect
Mika Westerberg mika.westerberg@linux.intel.com thunderbolt: Do not add non-active NVM if NVM upgrade is disabled for retimer
Alexandre Belloni alexandre.belloni@bootlin.com rtc: rv3032: fix EERD location
Ilpo Järvinen ij@kernel.org tcp: reorganize tcp_in_ack_event() and tcp_count_delivered()
Mykyta Yatsenko yatsenko@meta.com bpf: Return prog btf_id without capable check
Alex Williamson alex.williamson@redhat.com vfio/pci: Handle INTx IRQ_NOTCONNECTED
Kai Mäkisara Kai.Makisara@kolumbus.fi scsi: st: ERASE does not change tape location
Kai Mäkisara Kai.Makisara@kolumbus.fi scsi: st: Tighten the page format heuristics with MODE SELECT
Christian Göttsche cgzones@googlemail.com ext4: reorder capability check last
Tiwei Bie tiwei.btw@antgroup.com um: Update min_low_pfn to match changes in uml_reserved
Benjamin Berg benjamin@sipsolutions.net um: Store full CSGSFS and SS register from mcontext
Heming Zhao heming.zhao@suse.com dlm: make tcp still work in multi-link env
Stanley Chu yschu@nuvoton.com i3c: master: svc: Fix missing STOP for master request
Jing Zhou Jing.Zhou@amd.com drm/amd/display: Guard against setting dispclk low for dcn31x
Filipe Manana fdmanana@suse.com btrfs: send: return -ENAMETOOLONG when attempting a path that is too long
Filipe Manana fdmanana@suse.com btrfs: get zone unusable bytes while holding lock at btrfs_reclaim_bgs_work()
Filipe Manana fdmanana@suse.com btrfs: fix non-empty delayed iputs list on unmount due to async workers
Qu Wenruo wqu@suse.com btrfs: run btrfs_error_commit_super() early
Mark Harmstone maharmstone@fb.com btrfs: avoid linker error in btrfs_find_create_tree_block()
Boris Burkov boris@bur.io btrfs: make btrfs_discard_workfn() block_group ref explicit
Vitalii Mordan mordan@ispras.ru i2c: pxa: fix call balance of i2c->clk handling routines
Stephan Gerhold stephan.gerhold@kernkonzept.com i2c: qup: Vote for interconnect bandwidth to DRAM
Philip Redkin me@rarity.fan x86/mm: Check return value from memblock_phys_alloc_range()
Erick Shepherd erick.shepherd@ni.com mmc: host: Wait for Vdd to settle on card power off
Robert Richter rrichter@amd.com libnvdimm/labels: Fix divide error in nd_label_data_init()
Roger Pau Monne roger.pau@citrix.com PCI: vmd: Disable MSI remapping bypass under Xen
Trond Myklebust trond.myklebust@hammerspace.com pNFS/flexfiles: Report ENETDOWN as a connection error
Ian Rogers irogers@google.com tools/build: Don't pass test log files to linker
Frank Li Frank.Li@nxp.com PCI: dwc: ep: Ensure proper iteration over outbound map windows
Josh Poimboeuf jpoimboe@kernel.org objtool: Properly disable uaccess validation
Ryo Takakura ryotkkr98@gmail.com lockdep: Fix wait context check on softirq for PREEMPT_RT
Jing Su jingsusu@didiglobal.com dql: Fix dql->limit value when reset.
Alice Guo alice.guo@nxp.com thermal/drivers/qoriq: Power down TMU on system suspend
Trond Myklebust trond.myklebust@hammerspace.com SUNRPC: rpcbind should never reset the port to the value '0'
Trond Myklebust trond.myklebust@hammerspace.com SUNRPC: rpc_clnt_set_transport() must not change the autobind setting
Trond Myklebust trond.myklebust@hammerspace.com NFSv4: Treat ENETUNREACH errors as fatal for state recovery
Pali Rohár pali@kernel.org cifs: Fix establishing NetBIOS session for SMB2+ connection
Zsolt Kajtar soci@c64.rulez.org fbdev: core: tileblit: Implement missing margin clearing for tileblit
Zsolt Kajtar soci@c64.rulez.org fbcon: Use correct erase colour for clearing in fbcon
Shixiong Ou oushixiong@kylinos.cn fbdev: fsl-diu-fb: add missing device_remove_file()
Tudor Ambarus tudor.ambarus@linaro.org mailbox: use error ret code of of_parse_phandle_with_args()
Andy Shevchenko andriy.shevchenko@linux.intel.com tracing: Mark binary printing functions with __printf() attribute
Jinqian Yang yangjinqian1@huawei.com arm64: Add support for HIP09 Spectre-BHB mitigation
Trond Myklebust trond.myklebust@hammerspace.com SUNRPC: Don't allow waiting for exiting tasks
Trond Myklebust trond.myklebust@hammerspace.com NFS: Don't allow waiting for exiting tasks
Trond Myklebust trond.myklebust@hammerspace.com NFSv4: Check for delegation validity in nfs_start_delegation_return_locked()
Matt Johnston matt@codeconstruct.com.au fuse: Return EPERM rather than ENOSYS from link()
Pali Rohár pali@kernel.org cifs: Fix negotiate retry functionality
Pali Rohár pali@kernel.org cifs: Fix querying and creating MF symlinks over SMB1
Pali Rohár pali@kernel.org cifs: Add fallback for SMB2 CREATE without FILE_READ_ATTRIBUTES
Anthony Krowiak akrowiak@linux.ibm.com s390/vfio-ap: Fix no AP queue sharing allowed message written to kernel log
Daniel Gomez da.gomez@samsung.com kconfig: merge_config: use an empty file as initfile
Haoran Jiang jianghaoran@kylinos.cn samples/bpf: Fix compilation failure for samples/bpf on LoongArch Fedora
Brandon Kammerdiener brandon.kammerdiener@intel.com bpf: fix possible endless loop in BPF map iteration
Ihor Solodrai ihor.solodrai@linux.dev selftests/bpf: Mitigate sockmap_ktls disconnect_after_delete failure
Felix Kuehling felix.kuehling@amd.com drm/amdgpu: Allow P2P access through XGMI
Vladimir Oltean vladimir.oltean@nxp.com net: enetc: refactor bulk flipping of RX buffers to separate function
Ranjan Kumar ranjan.kumar@broadcom.com scsi: mpi3mr: Add level check to control event logging
gaoxu gaoxu2@honor.com cgroup: Fix compilation issue due to cgroup_mutex not being exported
Marek Szyprowski m.szyprowski@samsung.com dma-mapping: avoid potential unused data compilation warning
Zhongqiu Han quic_zhonhan@quicinc.com virtio_ring: Fix data race by tagging event_triggered as racy for KCSAN
Dmitry Bogdanov d.bogdanov@yadro.com scsi: target: iscsi: Fix timeout on deleted connection
Claudiu Beznea claudiu.beznea.uj@bp.renesas.com phy: renesas: rcar-gen3-usb2: Assert PLL reset on PHY power off
Claudiu Beznea claudiu.beznea.uj@bp.renesas.com phy: renesas: rcar-gen3-usb2: Lock around hardware registers and driver data
Claudiu Beznea claudiu.beznea.uj@bp.renesas.com phy: renesas: rcar-gen3-usb2: Move IRQ request in probe
Claudiu Beznea claudiu.beznea.uj@bp.renesas.com phy: renesas: rcar-gen3-usb2: Add support to initialize the bus
Emanuele Ghidoli emanuele.ghidoli@toradex.com gpio: pca953x: fix IRQ storm on system wake up
Andy Shevchenko andriy.shevchenko@linux.intel.com gpio: pca953x: Simplify code with cleanup helpers
Andy Shevchenko andriy.shevchenko@linux.intel.com gpio: pca953x: Split pca953x_restore_context() and pca953x_save_context()
Andy Shevchenko andriy.shevchenko@linux.intel.com gpio: pca953x: Add missing header(s)
-------------
Diffstat:
Documentation/admin-guide/kernel-parameters.txt | 2 + Documentation/driver-api/serial/driver.rst | 2 +- Documentation/hwmon/dell-smm-hwmon.rst | 14 +- Makefile | 16 +- arch/arm/boot/dts/tegra114.dtsi | 2 +- arch/arm/mach-at91/pm.c | 21 +- .../boot/dts/allwinner/sun50i-h6-beelink-gs1.dts | 38 +- .../boot/dts/allwinner/sun50i-h6-orangepi-3.dts | 14 +- .../boot/dts/allwinner/sun50i-h6-orangepi.dtsi | 22 +- arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi | 2 +- arch/arm64/boot/dts/qcom/sm8350.dtsi | 2 +- arch/arm64/include/asm/cputype.h | 2 + arch/arm64/include/asm/pgtable.h | 3 +- arch/arm64/kernel/proton-pack.c | 1 + arch/mips/include/asm/ftrace.h | 16 + arch/mips/kernel/pm-cps.c | 30 +- arch/powerpc/kernel/prom_init.c | 4 +- arch/powerpc/perf/core-book3s.c | 20 + arch/powerpc/perf/isa207-common.c | 4 +- arch/um/Makefile | 1 + arch/um/kernel/mem.c | 1 + arch/x86/boot/genimage.sh | 5 +- arch/x86/events/amd/ibs.c | 3 +- arch/x86/include/asm/nmi.h | 2 + arch/x86/include/asm/perf_event.h | 1 + arch/x86/kernel/cpu/bugs.c | 10 +- arch/x86/kernel/nmi.c | 42 ++ arch/x86/kernel/reboot.c | 10 +- arch/x86/mm/init.c | 9 +- arch/x86/mm/init_64.c | 15 +- arch/x86/mm/kaslr.c | 10 +- arch/x86/um/os-Linux/mcontext.c | 3 +- crypto/algif_hash.c | 4 - crypto/lzo-rle.c | 2 +- crypto/lzo.c | 2 +- drivers/acpi/Kconfig | 2 +- drivers/acpi/hed.c | 7 +- drivers/auxdisplay/charlcd.c | 5 +- drivers/auxdisplay/charlcd.h | 5 +- drivers/auxdisplay/hd44780.c | 2 +- drivers/auxdisplay/lcd2s.c | 2 +- drivers/auxdisplay/panel.c | 2 +- drivers/clk/imx/clk-imx8mp.c | 151 ++++ drivers/clk/qcom/camcc-sm8250.c | 56 +- drivers/clk/qcom/clk-alpha-pll.c | 52 +- drivers/clk/sunxi-ng/ccu-sun20i-d1.c | 42 +- drivers/clk/sunxi-ng/ccu_mp.h | 22 + drivers/clocksource/mips-gic-timer.c | 6 +- drivers/cpufreq/tegra186-cpufreq.c | 7 + drivers/cpuidle/governors/menu.c | 13 +- .../crypto/marvell/octeontx2/otx2_cptvf_reqmgr.c | 7 +- drivers/dma/idxd/cdev.c | 128 +++- drivers/dma/idxd/idxd.h | 7 + drivers/dma/idxd/init.c | 2 + drivers/dma/idxd/sysfs.c | 1 + drivers/edac/ie31200_edac.c | 28 +- drivers/firmware/arm_ffa/bus.c | 1 + drivers/fpga/altera-cvp.c | 2 +- drivers/gpio/gpio-pca953x.c | 114 +-- drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 30 +- drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c | 7 +- drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c | 10 +- drivers/gpu/drm/amd/amdkfd/kfd_process.c | 16 +- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 7 +- .../amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c | 20 +- .../amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c | 13 +- drivers/gpu/drm/amd/display/dc/core/dc.c | 1 + drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c | 11 +- .../drm/amd/display/dc/dcn315/dcn315_resource.c | 40 +- drivers/gpu/drm/ast/ast_mode.c | 10 +- drivers/gpu/drm/drm_atomic_helper.c | 28 + drivers/gpu/drm/drm_edid.c | 1 + drivers/gpu/drm/mediatek/mtk_dpi.c | 5 +- drivers/gpu/drm/panel/panel-edp.c | 1 + drivers/gpu/drm/rockchip/rockchip_drm_vop2.c | 6 +- drivers/hid/hid-ids.h | 4 + drivers/hid/hid-quirks.c | 2 + drivers/hid/usbhid/usbkbd.c | 2 +- drivers/hwmon/dell-smm-hwmon.c | 5 +- drivers/hwmon/gpio-fan.c | 16 +- drivers/hwmon/xgene-hwmon.c | 2 +- drivers/i2c/busses/i2c-pxa.c | 5 +- drivers/i2c/busses/i2c-qup.c | 36 + drivers/i3c/master/svc-i3c-master.c | 4 + drivers/infiniband/core/umem.c | 36 +- drivers/infiniband/core/uverbs_cmd.c | 144 ++-- drivers/infiniband/core/verbs.c | 11 +- drivers/iommu/amd/io_pgtable_v2.c | 2 +- drivers/iommu/dma-iommu.c | 28 +- drivers/leds/rgb/leds-pwm-multicolor.c | 5 +- drivers/mailbox/mailbox.c | 7 +- 
drivers/md/dm-cache-target.c | 24 + drivers/md/dm-table.c | 4 + drivers/md/dm.c | 8 +- drivers/media/i2c/adv7180.c | 34 +- drivers/media/platform/qcom/camss/camss-csid.c | 64 +- .../platform/st/sti/c8sectpfe/c8sectpfe-core.c | 3 +- .../media/test-drivers/vivid/vivid-kthread-cap.c | 11 +- .../media/test-drivers/vivid/vivid-kthread-out.c | 11 +- .../media/test-drivers/vivid/vivid-kthread-touch.c | 11 +- drivers/media/test-drivers/vivid/vivid-sdr-cap.c | 11 +- drivers/media/usb/cx231xx/cx231xx-417.c | 2 + drivers/media/usb/uvc/uvc_v4l2.c | 6 + drivers/mmc/host/dw_mmc-exynos.c | 41 +- drivers/mmc/host/sdhci-pci-core.c | 6 +- drivers/mmc/host/sdhci.c | 9 +- drivers/net/bonding/bond_main.c | 2 +- drivers/net/can/c_can/c_can_platform.c | 2 +- drivers/net/can/slcan/slcan-core.c | 26 +- drivers/net/ethernet/apm/xgene-v2/main.c | 4 +- drivers/net/ethernet/freescale/enetc/enetc.c | 16 +- drivers/net/ethernet/intel/ice/ice_ethtool.c | 3 +- drivers/net/ethernet/intel/ice/ice_virtchnl.c | 1 - drivers/net/ethernet/marvell/octeontx2/Kconfig | 1 + .../net/ethernet/marvell/octeontx2/af/rvu_cn10k.c | 24 +- .../ethernet/marvell/octeontx2/af/rvu_debugfs.c | 11 +- drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c | 6 +- drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h | 2 +- .../ethernet/marvell/octeontx2/nic/otx2_common.c | 130 ++-- .../ethernet/marvell/octeontx2/nic/otx2_common.h | 9 +- .../net/ethernet/marvell/octeontx2/nic/otx2_pf.c | 18 +- .../net/ethernet/marvell/octeontx2/nic/otx2_txrx.c | 49 +- .../net/ethernet/marvell/octeontx2/nic/otx2_txrx.h | 7 +- .../net/ethernet/marvell/octeontx2/nic/qos_sq.c | 2 +- drivers/net/ethernet/mediatek/mtk_ppe_offload.c | 22 +- drivers/net/ethernet/mellanox/mlx4/alloc.c | 6 +- drivers/net/ethernet/mellanox/mlx4/en_tx.c | 2 + drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | 5 + .../net/ethernet/mellanox/mlx5/core/en_selftest.c | 3 + drivers/net/ethernet/mellanox/mlx5/core/events.c | 11 +- drivers/net/ethernet/mellanox/mlx5/core/health.c | 1 + drivers/net/ethernet/microchip/lan743x_main.c | 19 +- drivers/net/ethernet/microsoft/mana/gdma_main.c | 2 +- drivers/net/ethernet/realtek/r8169_main.c | 1 + drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c | 2 +- drivers/net/ethernet/ti/am65-cpsw-nuss.c | 2 +- drivers/net/ethernet/ti/cpsw_new.c | 1 + drivers/net/ieee802154/ca8210.c | 9 +- drivers/net/phy/phylink.c | 2 +- drivers/net/usb/r8152.c | 1 + drivers/net/vxlan/vxlan_core.c | 36 +- drivers/net/wireless/ath/ath9k/init.c | 4 +- drivers/net/wireless/intel/iwlwifi/pcie/drv.c | 2 + .../net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c | 17 +- drivers/net/wireless/realtek/rtw88/main.c | 40 +- drivers/net/wireless/realtek/rtw88/reg.h | 3 +- drivers/net/wireless/realtek/rtw88/rtw8822b.c | 14 +- drivers/net/wireless/realtek/rtw88/util.c | 3 +- drivers/net/wireless/realtek/rtw89/fw.c | 2 - drivers/net/wireless/realtek/rtw89/regd.c | 2 + drivers/net/wireless/realtek/rtw89/ser.c | 4 + drivers/nvdimm/label.c | 3 +- drivers/nvme/host/pci.c | 2 + drivers/nvme/target/tcp.c | 3 + drivers/pci/Kconfig | 6 + drivers/pci/controller/dwc/pcie-designware-ep.c | 2 +- drivers/pci/controller/pcie-brcmstb.c | 5 +- drivers/pci/controller/vmd.c | 20 + drivers/pci/setup-bus.c | 6 +- drivers/perf/arm-cmn.c | 10 +- drivers/phy/phy-core.c | 7 +- drivers/phy/renesas/phy-rcar-gen3-usb2.c | 143 ++-- drivers/pinctrl/bcm/pinctrl-bcm281xx.c | 44 +- drivers/pinctrl/devicetree.c | 10 +- drivers/pinctrl/meson/pinctrl-meson.c | 2 +- drivers/pinctrl/tegra/pinctrl-tegra.c | 59 +- drivers/pinctrl/tegra/pinctrl-tegra.h | 
6 + .../x86/dell/dell-wmi-sysman/passobj-attributes.c | 2 +- drivers/platform/x86/fujitsu-laptop.c | 33 +- drivers/platform/x86/thinkpad_acpi.c | 7 + drivers/regulator/ad5398.c | 12 +- drivers/remoteproc/qcom_wcnss.c | 34 +- drivers/rtc/rtc-ds1307.c | 4 +- drivers/rtc/rtc-rv3032.c | 2 +- drivers/s390/crypto/vfio_ap_ops.c | 72 +- drivers/scsi/lpfc/lpfc_hbadisc.c | 17 +- drivers/scsi/lpfc/lpfc_init.c | 2 + drivers/scsi/mpi3mr/mpi3mr_fw.c | 3 + drivers/scsi/mpt3sas/mpt3sas_ctl.c | 12 +- drivers/scsi/st.c | 29 +- drivers/scsi/st.h | 2 + drivers/soc/apple/rtkit-internal.h | 1 + drivers/soc/apple/rtkit.c | 58 +- drivers/soc/imx/gpcv2.c | 2 +- drivers/soc/ti/k3-socinfo.c | 13 +- drivers/spi/spi-fsl-dspi.c | 46 +- drivers/spi/spi-sun4i.c | 5 +- drivers/spi/spi-zynqmp-gqspi.c | 22 +- drivers/target/iscsi/iscsi_target.c | 2 +- drivers/thermal/qoriq_thermal.c | 13 + drivers/thunderbolt/retimer.c | 8 +- drivers/tty/serial/8250/8250_port.c | 2 +- drivers/tty/serial/atmel_serial.c | 2 +- drivers/tty/serial/imx.c | 2 +- drivers/tty/serial/serial_mctrl_gpio.c | 34 +- drivers/tty/serial/serial_mctrl_gpio.h | 17 +- drivers/tty/serial/sh-sci.c | 98 ++- drivers/tty/serial/stm32-usart.c | 2 +- drivers/vfio/pci/vfio_pci_config.c | 3 +- drivers/vfio/pci/vfio_pci_core.c | 10 +- drivers/vfio/pci/vfio_pci_intrs.c | 2 +- drivers/video/fbdev/core/bitblit.c | 5 +- drivers/video/fbdev/core/fbcon.c | 10 +- drivers/video/fbdev/core/fbcon.h | 38 +- drivers/video/fbdev/core/fbcon_ccw.c | 5 +- drivers/video/fbdev/core/fbcon_cw.c | 5 +- drivers/video/fbdev/core/fbcon_ud.c | 5 +- drivers/video/fbdev/core/tileblit.c | 45 +- drivers/video/fbdev/fsl-diu-fb.c | 1 + drivers/virtio/virtio_ring.c | 2 +- drivers/xen/platform-pci.c | 4 + drivers/xen/xenbus/xenbus_probe.c | 14 +- fs/btrfs/block-group.c | 18 +- fs/btrfs/discard.c | 34 +- fs/btrfs/disk-io.c | 28 +- fs/btrfs/extent_io.c | 7 +- fs/btrfs/relocation.c | 6 + fs/btrfs/send.c | 6 +- fs/coredump.c | 81 ++- fs/dlm/lowcomms.c | 4 +- fs/ext4/balloc.c | 4 +- fs/ext4/super.c | 12 + fs/fuse/dir.c | 2 + fs/gfs2/glock.c | 11 +- fs/namespace.c | 6 +- fs/nfs/client.c | 2 + fs/nfs/delegation.c | 3 +- fs/nfs/dir.c | 15 +- fs/nfs/filelayout/filelayoutdev.c | 6 +- fs/nfs/flexfilelayout/flexfilelayout.c | 1 + fs/nfs/flexfilelayout/flexfilelayoutdev.c | 6 +- fs/nfs/inode.c | 2 + fs/nfs/internal.h | 5 + fs/nfs/nfs3proc.c | 2 +- fs/nfs/nfs4proc.c | 9 +- fs/nfs/nfs4state.c | 10 +- fs/nfs/pnfs.h | 4 +- fs/nfs/pnfs_nfs.c | 9 +- fs/orangefs/inode.c | 7 +- fs/smb/client/cifsproto.h | 3 + fs/smb/client/connect.c | 30 +- fs/smb/client/link.c | 8 +- fs/smb/client/readdir.c | 7 +- fs/smb/client/smb1ops.c | 7 - fs/smb/client/smb2file.c | 11 +- fs/smb/client/smb2ops.c | 3 - fs/smb/client/transport.c | 2 +- fs/smb/server/vfs.c | 14 +- include/drm/drm_atomic.h | 23 +- include/linux/coredump.h | 1 + include/linux/dma-mapping.h | 12 +- include/linux/hrtimer.h | 1 + include/linux/ipv6.h | 1 + include/linux/lzo.h | 8 + include/linux/mlx4/device.h | 2 +- include/linux/msi.h | 33 +- include/linux/nfs_fs_sb.h | 12 +- include/linux/perf_event.h | 8 +- include/linux/pid.h | 1 + include/linux/rcupdate.h | 2 +- include/linux/rcutree.h | 2 +- include/linux/trace.h | 4 +- include/linux/trace_seq.h | 8 +- include/linux/usb/r8152.h | 1 + include/net/af_unix.h | 48 +- include/net/scm.h | 11 + include/net/xfrm.h | 1 - include/rdma/uverbs_std_types.h | 2 +- include/sound/hda_codec.h | 1 + include/sound/pcm.h | 2 + include/trace/events/btrfs.h | 2 +- io_uring/fdinfo.c | 4 +- io_uring/io_uring.c | 1 + kernel/bpf/hashtab.c | 2 +- 
kernel/bpf/syscall.c | 4 +- kernel/cgroup/cgroup.c | 2 +- kernel/events/core.c | 33 +- kernel/events/hw_breakpoint.c | 5 +- kernel/events/ring_buffer.c | 1 + kernel/fork.c | 98 ++- kernel/padata.c | 3 +- kernel/pid.c | 19 +- kernel/rcu/tree_plugin.h | 22 +- kernel/softirq.c | 18 + kernel/time/hrtimer.c | 103 ++- kernel/time/posix-timers.c | 1 + kernel/time/timer_list.c | 4 +- kernel/trace/trace.c | 11 +- kernel/trace/trace.h | 16 +- lib/dynamic_queue_limits.c | 2 +- lib/lzo/Makefile | 2 +- lib/lzo/lzo1x_compress.c | 102 ++- lib/lzo/lzo1x_compress_safe.c | 18 + mm/memcontrol.c | 6 +- mm/page_alloc.c | 8 + net/Makefile | 2 +- net/bluetooth/l2cap_core.c | 15 +- net/bridge/br_nf_core.c | 7 +- net/bridge/br_private.h | 1 + net/can/bcm.c | 79 ++- net/core/pktgen.c | 13 +- net/core/scm.c | 17 + net/ipv4/esp4.c | 49 +- net/ipv4/fib_frontend.c | 18 +- net/ipv4/fib_rules.c | 4 +- net/ipv4/fib_trie.c | 22 - net/ipv4/inet_hashtables.c | 37 +- net/ipv4/tcp_input.c | 56 +- net/ipv6/esp6.c | 49 +- net/ipv6/fib6_rules.c | 4 +- net/ipv6/ip6_output.c | 9 +- net/llc/af_llc.c | 8 +- net/mac80211/mlme.c | 4 +- net/netfilter/nf_conntrack_standalone.c | 12 +- net/sched/sch_hfsc.c | 15 +- net/smc/smc_pnet.c | 8 +- net/sunrpc/clnt.c | 3 - net/sunrpc/rpcb_clnt.c | 5 +- net/sunrpc/sched.c | 2 + net/tipc/crypto.c | 5 + net/unix/Kconfig | 11 +- net/unix/Makefile | 2 - net/unix/af_unix.c | 120 ++-- net/unix/garbage.c | 779 ++++++++++++++------- net/unix/scm.c | 154 ---- net/unix/scm.h | 10 - net/xfrm/xfrm_policy.c | 3 + net/xfrm/xfrm_state.c | 6 +- samples/bpf/Makefile | 2 +- scripts/config | 26 +- scripts/kconfig/merge_config.sh | 4 +- security/smack/smackfs.c | 4 + sound/core/oss/pcm_oss.c | 3 +- sound/core/pcm_native.c | 11 + sound/core/seq/seq_clientmgr.c | 5 +- sound/core/seq/seq_memory.c | 1 + sound/pci/hda/hda_beep.c | 15 +- sound/pci/hda/patch_realtek.c | 77 +- sound/soc/codecs/mt6359-accdet.h | 9 + sound/soc/codecs/pcm3168a.c | 6 +- sound/soc/codecs/tas2764.c | 53 +- sound/soc/fsl/imx-card.c | 2 +- sound/soc/intel/boards/bytcr_rt5640.c | 13 + sound/soc/qcom/sm8250.c | 3 + sound/soc/soc-dai.c | 8 +- sound/soc/soc-ops.c | 29 +- sound/soc/sunxi/sun4i-codec.c | 53 ++ tools/bpf/bpftool/common.c | 3 +- tools/build/Makefile.build | 6 +- tools/lib/bpf/libbpf.c | 2 +- tools/objtool/check.c | 11 +- tools/testing/kunit/qemu_configs/x86_64.py | 4 +- .../selftests/bpf/prog_tests/sockmap_ktls.c | 1 - tools/testing/selftests/net/gro.sh | 3 +- 354 files changed, 4230 insertions(+), 2032 deletions(-)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Shevchenko andriy.shevchenko@linux.intel.com
[ Upstream commit c20a395f9bf939ef0587ce5fa14316ac26252e9b ]
Do not imply that some of the generic headers may be always included. Instead, explicitly include what we are a direct user of.
While at it, sort headers alphabetically.
Signed-off-by: Andy Shevchenko andriy.shevchenko@linux.intel.com
Stable-dep-of: 3e38f946062b ("gpio: pca953x: fix IRQ storm on system wake up")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/gpio/gpio-pca953x.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
index 262b3d276df78..caf3bb6cb6b9f 100644
--- a/drivers/gpio/gpio-pca953x.c
+++ b/drivers/gpio/gpio-pca953x.c
@@ -10,8 +10,8 @@
 
 #include <linux/acpi.h>
 #include <linux/bitmap.h>
-#include <linux/gpio/driver.h>
 #include <linux/gpio/consumer.h>
+#include <linux/gpio/driver.h>
 #include <linux/i2c.h>
 #include <linux/init.h>
 #include <linux/interrupt.h>
@@ -20,6 +20,7 @@
 #include <linux/platform_data/pca953x.h>
 #include <linux/regmap.h>
 #include <linux/regulator/consumer.h>
+#include <linux/seq_file.h>
 #include <linux/slab.h>
 
 #include <asm/unaligned.h>
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Shevchenko andriy.shevchenko@linux.intel.com
[ Upstream commit ec5bde62019b0a5300c67bd81b9864a8ea12274e ]
Split regcache handling into the respective helpers. This will make further refactoring easier.
Signed-off-by: Andy Shevchenko andriy.shevchenko@linux.intel.com
Signed-off-by: Bartosz Golaszewski bartosz.golaszewski@linaro.org
Stable-dep-of: 3e38f946062b ("gpio: pca953x: fix IRQ storm on system wake up")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/gpio/gpio-pca953x.c | 44 ++++++++++++++++++++++++-------------
 1 file changed, 29 insertions(+), 15 deletions(-)
diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
index caf3bb6cb6b9f..db4a48558c676 100644
--- a/drivers/gpio/gpio-pca953x.c
+++ b/drivers/gpio/gpio-pca953x.c
@@ -1200,9 +1200,9 @@ static void pca953x_remove(struct i2c_client *client)
 }
 
 #ifdef CONFIG_PM_SLEEP
-static int pca953x_regcache_sync(struct device *dev)
+static int pca953x_regcache_sync(struct pca953x_chip *chip)
 {
-	struct pca953x_chip *chip = dev_get_drvdata(dev);
+	struct device *dev = &chip->client->dev;
 	int ret;
 	u8 regaddr;
 
@@ -1249,13 +1249,37 @@ static int pca953x_regcache_sync(struct device *dev)
 	return 0;
 }
 
-static int pca953x_suspend(struct device *dev)
+static int pca953x_restore_context(struct pca953x_chip *chip)
 {
-	struct pca953x_chip *chip = dev_get_drvdata(dev);
+	int ret;
 
+	mutex_lock(&chip->i2c_lock);
+
+	regcache_cache_only(chip->regmap, false);
+	regcache_mark_dirty(chip->regmap);
+	ret = pca953x_regcache_sync(chip);
+	if (ret) {
+		mutex_unlock(&chip->i2c_lock);
+		return ret;
+	}
+
+	ret = regcache_sync(chip->regmap);
+	mutex_unlock(&chip->i2c_lock);
+	return ret;
+}
+
+static void pca953x_save_context(struct pca953x_chip *chip)
+{
 	mutex_lock(&chip->i2c_lock);
 	regcache_cache_only(chip->regmap, true);
 	mutex_unlock(&chip->i2c_lock);
+}
+
+static int pca953x_suspend(struct device *dev)
+{
+	struct pca953x_chip *chip = dev_get_drvdata(dev);
+
+	pca953x_save_context(chip);
 
 	if (atomic_read(&chip->wakeup_path))
 		device_set_wakeup_path(dev);
@@ -1278,17 +1302,7 @@ static int pca953x_resume(struct device *dev)
 		}
 	}
 
-	mutex_lock(&chip->i2c_lock);
-	regcache_cache_only(chip->regmap, false);
-	regcache_mark_dirty(chip->regmap);
-	ret = pca953x_regcache_sync(dev);
-	if (ret) {
-		mutex_unlock(&chip->i2c_lock);
-		return ret;
-	}
-
-	ret = regcache_sync(chip->regmap);
-	mutex_unlock(&chip->i2c_lock);
+	ret = pca953x_restore_context(chip);
 	if (ret) {
 		dev_err(dev, "Failed to restore register map: %d\n", ret);
 		return ret;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Shevchenko andriy.shevchenko@linux.intel.com
[ Upstream commit 8e471b784a720f6f34f9fb449ba0744359dcaccb ]
Use macros defined in linux/cleanup.h to automate resource lifetime control in gpio-pca953x.
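For readers not familiar with the conversion target, here is a minimal, self-contained sketch of the pattern this patch applies (the chip structure and register numbers below are made up for illustration; guard() and scoped_guard() come from linux/cleanup.h and linux/mutex.h):

#include <linux/bits.h>
#include <linux/cleanup.h>
#include <linux/mutex.h>
#include <linux/regmap.h>

/* Hypothetical driver state, for illustration only. */
struct example_chip {
	struct mutex lock;
	struct regmap *regmap;
};

/* guard() drops the mutex automatically on every return path, so error
 * handling no longer needs an unlock label. */
static int example_set_bit(struct example_chip *chip, unsigned int reg, unsigned int bit)
{
	guard(mutex)(&chip->lock);

	return regmap_write_bits(chip->regmap, reg, BIT(bit), BIT(bit));
}

/* scoped_guard() holds the mutex only for the statement (or block) that
 * follows it, mirroring the lock/read/unlock sequences replaced below. */
static int example_read(struct example_chip *chip, unsigned int reg, unsigned int *val)
{
	int ret;

	scoped_guard(mutex, &chip->lock)
		ret = regmap_read(chip->regmap, reg, val);

	return ret;
}

The diff below is the real conversion in gpio-pca953x.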
Signed-off-by: Andy Shevchenko andriy.shevchenko@linux.intel.com
Signed-off-by: Bartosz Golaszewski bartosz.golaszewski@linaro.org
Stable-dep-of: 3e38f946062b ("gpio: pca953x: fix IRQ storm on system wake up")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/gpio/gpio-pca953x.c | 77 ++++++++++++++-----------------------
 1 file changed, 29 insertions(+), 48 deletions(-)
diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
index db4a48558c676..a5be47a916d3a 100644
--- a/drivers/gpio/gpio-pca953x.c
+++ b/drivers/gpio/gpio-pca953x.c
@@ -10,6 +10,7 @@
 
 #include <linux/acpi.h>
 #include <linux/bitmap.h>
+#include <linux/cleanup.h>
 #include <linux/gpio/consumer.h>
 #include <linux/gpio/driver.h>
 #include <linux/i2c.h>
@@ -523,12 +524,10 @@ static int pca953x_gpio_direction_input(struct gpio_chip *gc, unsigned off)
 	struct pca953x_chip *chip = gpiochip_get_data(gc);
 	u8 dirreg = chip->recalc_addr(chip, chip->regs->direction, off);
 	u8 bit = BIT(off % BANK_SZ);
-	int ret;
 
-	mutex_lock(&chip->i2c_lock);
-	ret = regmap_write_bits(chip->regmap, dirreg, bit, bit);
-	mutex_unlock(&chip->i2c_lock);
-	return ret;
+	guard(mutex)(&chip->i2c_lock);
+
+	return regmap_write_bits(chip->regmap, dirreg, bit, bit);
 }
 
 static int pca953x_gpio_direction_output(struct gpio_chip *gc,
@@ -540,17 +539,15 @@
 	u8 bit = BIT(off % BANK_SZ);
 	int ret;
 
-	mutex_lock(&chip->i2c_lock);
+	guard(mutex)(&chip->i2c_lock);
+
 	/* set output level */
 	ret = regmap_write_bits(chip->regmap, outreg, bit, val ? bit : 0);
 	if (ret)
-		goto exit;
+		return ret;
 
 	/* then direction */
-	ret = regmap_write_bits(chip->regmap, dirreg, bit, 0);
-exit:
-	mutex_unlock(&chip->i2c_lock);
-	return ret;
+	return regmap_write_bits(chip->regmap, dirreg, bit, 0);
 }
 
 static int pca953x_gpio_get_value(struct gpio_chip *gc, unsigned off)
@@ -561,9 +558,8 @@ static int pca953x_gpio_get_value(struct gpio_chip *gc, unsigned off)
 	u32 reg_val;
 	int ret;
 
-	mutex_lock(&chip->i2c_lock);
-	ret = regmap_read(chip->regmap, inreg, &reg_val);
-	mutex_unlock(&chip->i2c_lock);
+	scoped_guard(mutex, &chip->i2c_lock)
+		ret = regmap_read(chip->regmap, inreg, &reg_val);
 	if (ret < 0)
 		return ret;
 
@@ -576,9 +572,9 @@ static void pca953x_gpio_set_value(struct gpio_chip *gc, unsigned off, int val)
 	u8 outreg = chip->recalc_addr(chip, chip->regs->output, off);
 	u8 bit = BIT(off % BANK_SZ);
 
-	mutex_lock(&chip->i2c_lock);
+	guard(mutex)(&chip->i2c_lock);
+
 	regmap_write_bits(chip->regmap, outreg, bit, val ? bit : 0);
-	mutex_unlock(&chip->i2c_lock);
 }
 
 static int pca953x_gpio_get_direction(struct gpio_chip *gc, unsigned off)
@@ -589,9 +585,8 @@ static int pca953x_gpio_get_direction(struct gpio_chip *gc, unsigned off)
 	u32 reg_val;
 	int ret;
 
-	mutex_lock(&chip->i2c_lock);
-	ret = regmap_read(chip->regmap, dirreg, &reg_val);
-	mutex_unlock(&chip->i2c_lock);
+	scoped_guard(mutex, &chip->i2c_lock)
+		ret = regmap_read(chip->regmap, dirreg, &reg_val);
 	if (ret < 0)
 		return ret;
 
@@ -608,9 +603,8 @@ static int pca953x_gpio_get_multiple(struct gpio_chip *gc,
 	DECLARE_BITMAP(reg_val, MAX_LINE);
 	int ret;
 
-	mutex_lock(&chip->i2c_lock);
-	ret = pca953x_read_regs(chip, chip->regs->input, reg_val);
-	mutex_unlock(&chip->i2c_lock);
+	scoped_guard(mutex, &chip->i2c_lock)
+		ret = pca953x_read_regs(chip, chip->regs->input, reg_val);
 	if (ret)
 		return ret;
 
@@ -625,16 +619,15 @@ static void pca953x_gpio_set_multiple(struct gpio_chip *gc,
 	DECLARE_BITMAP(reg_val, MAX_LINE);
 	int ret;
 
-	mutex_lock(&chip->i2c_lock);
+	guard(mutex)(&chip->i2c_lock);
+
 	ret = pca953x_read_regs(chip, chip->regs->output, reg_val);
 	if (ret)
-		goto exit;
+		return;
 
 	bitmap_replace(reg_val, reg_val, bits, mask, gc->ngpio);
 
 	pca953x_write_regs(chip, chip->regs->output, reg_val);
-exit:
-	mutex_unlock(&chip->i2c_lock);
 }
 
 static int pca953x_gpio_set_pull_up_down(struct pca953x_chip *chip,
@@ -642,7 +635,6 @@ static int pca953x_gpio_set_pull_up_down(struct pca953x_chip *chip,
 					 unsigned long config)
 {
 	enum pin_config_param param = pinconf_to_config_param(config);
-
 	u8 pull_en_reg = chip->recalc_addr(chip, PCAL953X_PULL_EN, offset);
 	u8 pull_sel_reg = chip->recalc_addr(chip, PCAL953X_PULL_SEL, offset);
 	u8 bit = BIT(offset % BANK_SZ);
@@ -655,7 +647,7 @@ static int pca953x_gpio_set_pull_up_down(struct pca953x_chip *chip,
 	if (!(chip->driver_data & PCA_PCAL))
 		return -ENOTSUPP;
 
-	mutex_lock(&chip->i2c_lock);
+	guard(mutex)(&chip->i2c_lock);
 
 	/* Configure pull-up/pull-down */
 	if (param == PIN_CONFIG_BIAS_PULL_UP)
@@ -665,17 +657,13 @@ static int pca953x_gpio_set_pull_up_down(struct pca953x_chip *chip,
 	else
 		ret = 0;
 	if (ret)
-		goto exit;
+		return ret;
 
 	/* Disable/Enable pull-up/pull-down */
 	if (param == PIN_CONFIG_BIAS_DISABLE)
-		ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, 0);
+		return regmap_write_bits(chip->regmap, pull_en_reg, bit, 0);
 	else
-		ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, bit);
-
-exit:
-	mutex_unlock(&chip->i2c_lock);
-	return ret;
+		return regmap_write_bits(chip->regmap, pull_en_reg, bit, bit);
 }
 
 static int pca953x_gpio_set_config(struct gpio_chip *gc, unsigned int offset,
@@ -888,10 +876,8 @@ static irqreturn_t pca953x_irq_handler(int irq, void *devid)
 
 	bitmap_zero(pending, MAX_LINE);
 
-	mutex_lock(&chip->i2c_lock);
-	ret = pca953x_irq_pending(chip, pending);
-	mutex_unlock(&chip->i2c_lock);
-
+	scoped_guard(mutex, &chip->i2c_lock)
+		ret = pca953x_irq_pending(chip, pending);
 	if (ret) {
 		ret = 0;
 
@@ -1253,26 +1239,21 @@ static int pca953x_restore_context(struct pca953x_chip *chip)
 {
 	int ret;
 
-	mutex_lock(&chip->i2c_lock);
+	guard(mutex)(&chip->i2c_lock);
 
 	regcache_cache_only(chip->regmap, false);
 	regcache_mark_dirty(chip->regmap);
 	ret = pca953x_regcache_sync(chip);
-	if (ret) {
-		mutex_unlock(&chip->i2c_lock);
+	if (ret)
 		return ret;
-	}
 
-	ret = regcache_sync(chip->regmap);
-	mutex_unlock(&chip->i2c_lock);
-	return ret;
+	return regcache_sync(chip->regmap);
 }
 
 static void pca953x_save_context(struct pca953x_chip *chip)
 {
-	mutex_lock(&chip->i2c_lock);
+	guard(mutex)(&chip->i2c_lock);
 	regcache_cache_only(chip->regmap, true);
-	mutex_unlock(&chip->i2c_lock);
 }
 
 static int pca953x_suspend(struct device *dev)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Emanuele Ghidoli emanuele.ghidoli@toradex.com
[ Upstream commit 3e38f946062b4845961ab86b726651b4457b2af8 ]
If an input changes state during wake-up and is used as an interrupt source, the IRQ handler reads the volatile input register to clear the interrupt mask and deassert the IRQ line. However, the IRQ handler is triggered before access to the register is granted, causing the read operation to fail.
As a result, the IRQ handler enters a loop, repeatedly printing the "failed reading register" message, until `pca953x_resume()` is eventually called, which restores the driver context and enables access to registers.
Fix by disabling the IRQ line before entering suspend mode, and re-enabling it after the driver context is restored in `pca953x_resume()`.
An IRQ can be disabled with disable_irq() and still wake the system as long as the IRQ has wake enabled, so the wake-up functionality is preserved.
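As a rough illustration of that last point (hypothetical helpers with the IRQ number passed explicitly; the real change lives in pca953x_save_context()/pca953x_restore_context() as shown in the diff below):

#include <linux/device.h>
#include <linux/interrupt.h>

/* The line keeps working as a wake source because wake-up was armed with
 * enable_irq_wake(); disable_irq() only prevents the handler from running
 * while the register cache is in cache-only mode. */
static int example_suspend(struct device *dev, int irq)
{
	if (device_may_wakeup(dev))
		enable_irq_wake(irq);

	disable_irq(irq);
	return 0;
}

static int example_resume(struct device *dev, int irq)
{
	enable_irq(irq);

	if (device_may_wakeup(dev))
		disable_irq_wake(irq);

	return 0;
}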
Fixes: b76574300504 ("gpio: pca953x: Restore registers after suspend/resume cycle")
Cc: stable@vger.kernel.org
Signed-off-by: Emanuele Ghidoli emanuele.ghidoli@toradex.com
Signed-off-by: Francesco Dolcini francesco.dolcini@toradex.com
Reviewed-by: Andy Shevchenko andriy.shevchenko@linux.intel.com
Tested-by: Geert Uytterhoeven geert+renesas@glider.be
Link: https://lore.kernel.org/r/20250512095441.31645-1-francesco@dolcini.it
Signed-off-by: Bartosz Golaszewski bartosz.golaszewski@linaro.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/gpio/gpio-pca953x.c | 6 ++++++
 1 file changed, 6 insertions(+)
diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
index a5be47a916d3a..f81d79a297a5c 100644
--- a/drivers/gpio/gpio-pca953x.c
+++ b/drivers/gpio/gpio-pca953x.c
@@ -1241,6 +1241,8 @@ static int pca953x_restore_context(struct pca953x_chip *chip)
 
 	guard(mutex)(&chip->i2c_lock);
 
+	if (chip->client->irq > 0)
+		enable_irq(chip->client->irq);
 	regcache_cache_only(chip->regmap, false);
 	regcache_mark_dirty(chip->regmap);
 	ret = pca953x_regcache_sync(chip);
@@ -1253,6 +1255,10 @@ static int pca953x_restore_context(struct pca953x_chip *chip)
 static void pca953x_save_context(struct pca953x_chip *chip)
 {
 	guard(mutex)(&chip->i2c_lock);
+
+	/* Disable IRQ to prevent early triggering while regmap "cache only" is on */
+	if (chip->client->irq > 0)
+		disable_irq(chip->client->irq);
 	regcache_cache_only(chip->regmap, true);
 }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Claudiu Beznea claudiu.beznea.uj@bp.renesas.com
[ Upstream commit 4eae16375357a2a7e8501be5469532f7636064b3 ]
The Renesas RZ/G3S needs the USB bus to be initialized before transferring data due to a hardware limitation. As the register that needs to be touched for this is in the address space of the USB PHY, and the USB PHY needs to be initialized before any other USB driver handling data transfer, add support for initializing the USB bus.
As the USB PHY is probed before any other USB driver that enables clocks and de-asserts the reset signals, and the bus initialization is done in the probe phase, add code to de-assert the reset signal and runtime-resume the device (which enables its clocks) before accessing the registers.
As the reset signals are not required by the USB PHY driver for the other USB PHY hardware variants, the reset signals and runtime PM are handled only in the function that initializes the USB bus.
The PHY initialization is done right after runtime PM is enabled so that everything is in place when the PHYs are registered.
Signed-off-by: Claudiu Beznea claudiu.beznea.uj@bp.renesas.com Link: https://lore.kernel.org/r/20240822152801.602318-11-claudiu.beznea.uj@bp.rene... Signed-off-by: Vinod Koul vkoul@kernel.org Stable-dep-of: 9ce71e85b29e ("phy: renesas: rcar-gen3-usb2: Assert PLL reset on PHY power off") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/phy/renesas/phy-rcar-gen3-usb2.c | 50 ++++++++++++++++++++++-- 1 file changed, 47 insertions(+), 3 deletions(-)
diff --git a/drivers/phy/renesas/phy-rcar-gen3-usb2.c b/drivers/phy/renesas/phy-rcar-gen3-usb2.c index 3824e338b61e5..d13083f60d897 100644 --- a/drivers/phy/renesas/phy-rcar-gen3-usb2.c +++ b/drivers/phy/renesas/phy-rcar-gen3-usb2.c @@ -21,12 +21,14 @@ #include <linux/platform_device.h> #include <linux/pm_runtime.h> #include <linux/regulator/consumer.h> +#include <linux/reset.h> #include <linux/string.h> #include <linux/usb/of.h> #include <linux/workqueue.h>
/******* USB2.0 Host registers (original offset is +0x200) *******/ #define USB2_INT_ENABLE 0x000 +#define USB2_AHB_BUS_CTR 0x008 #define USB2_USBCTR 0x00c #define USB2_SPD_RSM_TIMSET 0x10c #define USB2_OC_TIMSET 0x110 @@ -42,6 +44,10 @@ #define USB2_INT_ENABLE_USBH_INTB_EN BIT(2) /* For EHCI */ #define USB2_INT_ENABLE_USBH_INTA_EN BIT(1) /* For OHCI */
+/* AHB_BUS_CTR */ +#define USB2_AHB_BUS_CTR_MBL_MASK GENMASK(1, 0) +#define USB2_AHB_BUS_CTR_MBL_INCR4 2 + /* USBCTR */ #define USB2_USBCTR_DIRPD BIT(2) #define USB2_USBCTR_PLL_RST BIT(1) @@ -112,6 +118,7 @@ struct rcar_gen3_chan { struct extcon_dev *extcon; struct rcar_gen3_phy rphys[NUM_OF_PHYS]; struct regulator *vbus; + struct reset_control *rstc; struct work_struct work; struct mutex lock; /* protects rphys[...].powered */ enum usb_dr_mode dr_mode; @@ -126,6 +133,7 @@ struct rcar_gen3_chan { struct rcar_gen3_phy_drv_data { const struct phy_ops *phy_usb2_ops; bool no_adp_ctrl; + bool init_bus; };
/* @@ -647,6 +655,35 @@ static enum usb_dr_mode rcar_gen3_get_dr_mode(struct device_node *np) return candidate; }
+static int rcar_gen3_phy_usb2_init_bus(struct rcar_gen3_chan *channel) +{ + struct device *dev = channel->dev; + int ret; + u32 val; + + channel->rstc = devm_reset_control_array_get_shared(dev); + if (IS_ERR(channel->rstc)) + return PTR_ERR(channel->rstc); + + ret = pm_runtime_resume_and_get(dev); + if (ret) + return ret; + + ret = reset_control_deassert(channel->rstc); + if (ret) + goto rpm_put; + + val = readl(channel->base + USB2_AHB_BUS_CTR); + val &= ~USB2_AHB_BUS_CTR_MBL_MASK; + val |= USB2_AHB_BUS_CTR_MBL_INCR4; + writel(val, channel->base + USB2_AHB_BUS_CTR); + +rpm_put: + pm_runtime_put(dev); + + return ret; +} + static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev) { const struct rcar_gen3_phy_drv_data *phy_data; @@ -700,6 +737,15 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev) goto error; }
+ platform_set_drvdata(pdev, channel); + channel->dev = dev; + + if (phy_data->init_bus) { + ret = rcar_gen3_phy_usb2_init_bus(channel); + if (ret) + goto error; + } + channel->soc_no_adp_ctrl = phy_data->no_adp_ctrl; if (phy_data->no_adp_ctrl) channel->obint_enable_bits = USB2_OBINT_IDCHG_EN; @@ -727,9 +773,6 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev) channel->vbus = NULL; }
- platform_set_drvdata(pdev, channel); - channel->dev = dev; - provider = devm_of_phy_provider_register(dev, rcar_gen3_phy_usb2_xlate); if (IS_ERR(provider)) { dev_err(dev, "Failed to register PHY provider\n"); @@ -756,6 +799,7 @@ static int rcar_gen3_phy_usb2_remove(struct platform_device *pdev) if (channel->is_otg_channel) device_remove_file(&pdev->dev, &dev_attr_role);
+ reset_control_assert(channel->rstc); pm_runtime_disable(&pdev->dev);
return 0;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Claudiu Beznea claudiu.beznea.uj@bp.renesas.com
[ Upstream commit de76809f60cc938d3580bbbd5b04b7d12af6ce3a ]
Commit 08b0ad375ca6 ("phy: renesas: rcar-gen3-usb2: move IRQ registration to init") moved the IRQ request operation from probe to struct phy_ops::phy_init API to avoid triggering interrupts (which lead to register accesses) while the PHY clocks (enabled through runtime PM APIs) are not active. If this happens, it results in a synchronous abort.
One way to reproduce this issue is by enabling CONFIG_DEBUG_SHIRQ, which calls free_irq() on driver removal.
Move the IRQ request and free operations back to probe, and take the runtime PM state into account in the IRQ handler. This is a preparatory change for the subsequent fixes in this series.
Reviewed-by: Yoshihiro Shimoda yoshihiro.shimoda.uh@renesas.com Tested-by: Yoshihiro Shimoda yoshihiro.shimoda.uh@renesas.com Reviewed-by: Lad Prabhakar prabhakar.mahadev-lad.rj@bp.renesas.com Signed-off-by: Claudiu Beznea claudiu.beznea.uj@bp.renesas.com Link: https://lore.kernel.org/r/20250507125032.565017-3-claudiu.beznea.uj@bp.renes... Signed-off-by: Vinod Koul vkoul@kernel.org Stable-dep-of: 9ce71e85b29e ("phy: renesas: rcar-gen3-usb2: Assert PLL reset on PHY power off") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/phy/renesas/phy-rcar-gen3-usb2.c | 46 +++++++++++++----------- 1 file changed, 26 insertions(+), 20 deletions(-)
diff --git a/drivers/phy/renesas/phy-rcar-gen3-usb2.c b/drivers/phy/renesas/phy-rcar-gen3-usb2.c index d13083f60d897..69cc99c60f58d 100644 --- a/drivers/phy/renesas/phy-rcar-gen3-usb2.c +++ b/drivers/phy/renesas/phy-rcar-gen3-usb2.c @@ -122,7 +122,6 @@ struct rcar_gen3_chan { struct work_struct work; struct mutex lock; /* protects rphys[...].powered */ enum usb_dr_mode dr_mode; - int irq; u32 obint_enable_bits; bool extcon_host; bool is_otg_channel; @@ -427,16 +426,25 @@ static irqreturn_t rcar_gen3_phy_usb2_irq(int irq, void *_ch) { struct rcar_gen3_chan *ch = _ch; void __iomem *usb2_base = ch->base; - u32 status = readl(usb2_base + USB2_OBINTSTA); + struct device *dev = ch->dev; irqreturn_t ret = IRQ_NONE; + u32 status;
+ pm_runtime_get_noresume(dev); + + if (pm_runtime_suspended(dev)) + goto rpm_put; + + status = readl(usb2_base + USB2_OBINTSTA); if (status & ch->obint_enable_bits) { - dev_vdbg(ch->dev, "%s: %08x\n", __func__, status); + dev_vdbg(dev, "%s: %08x\n", __func__, status); writel(ch->obint_enable_bits, usb2_base + USB2_OBINTSTA); rcar_gen3_device_recognition(ch); ret = IRQ_HANDLED; }
+rpm_put: + pm_runtime_put_noidle(dev); return ret; }
@@ -446,17 +454,6 @@ static int rcar_gen3_phy_usb2_init(struct phy *p) struct rcar_gen3_chan *channel = rphy->ch; void __iomem *usb2_base = channel->base; u32 val; - int ret; - - if (!rcar_gen3_is_any_rphy_initialized(channel) && channel->irq >= 0) { - INIT_WORK(&channel->work, rcar_gen3_phy_usb2_work); - ret = request_irq(channel->irq, rcar_gen3_phy_usb2_irq, - IRQF_SHARED, dev_name(channel->dev), channel); - if (ret < 0) { - dev_err(channel->dev, "No irq handler (%d)\n", channel->irq); - return ret; - } - }
/* Initialize USB2 part */ val = readl(usb2_base + USB2_INT_ENABLE); @@ -492,9 +489,6 @@ static int rcar_gen3_phy_usb2_exit(struct phy *p) val &= ~USB2_INT_ENABLE_UCOM_INTEN; writel(val, usb2_base + USB2_INT_ENABLE);
- if (channel->irq >= 0 && !rcar_gen3_is_any_rphy_initialized(channel)) - free_irq(channel->irq, channel); - return 0; }
@@ -690,7 +684,7 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev) struct device *dev = &pdev->dev; struct rcar_gen3_chan *channel; struct phy_provider *provider; - int ret = 0, i; + int ret = 0, i, irq;
if (!dev->of_node) { dev_err(dev, "This driver needs device tree\n"); @@ -706,8 +700,6 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev) return PTR_ERR(channel->base);
channel->obint_enable_bits = USB2_OBINT_BITS; - /* get irq number here and request_irq for OTG in phy_init */ - channel->irq = platform_get_irq_optional(pdev, 0); channel->dr_mode = rcar_gen3_get_dr_mode(dev->of_node); if (channel->dr_mode != USB_DR_MODE_UNKNOWN) { channel->is_otg_channel = true; @@ -773,6 +765,20 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev) channel->vbus = NULL; }
+ irq = platform_get_irq_optional(pdev, 0); + if (irq < 0 && irq != -ENXIO) { + ret = irq; + goto error; + } else if (irq > 0) { + INIT_WORK(&channel->work, rcar_gen3_phy_usb2_work); + ret = devm_request_irq(dev, irq, rcar_gen3_phy_usb2_irq, + IRQF_SHARED, dev_name(dev), channel); + if (ret < 0) { + dev_err(dev, "Failed to request irq (%d)\n", irq); + goto error; + } + } + provider = devm_of_phy_provider_register(dev, rcar_gen3_phy_usb2_xlate); if (IS_ERR(provider)) { dev_err(dev, "Failed to register PHY provider\n");
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Claudiu Beznea claudiu.beznea.uj@bp.renesas.com
[ Upstream commit 55a387ebb9219cbe4edfa8ba9996ccb0e7ad4932 ]
The phy-rcar-gen3-usb2 driver exposes four individual PHYs that are requested and configured by PHY users. The struct phy_ops APIs access the same set of registers to configure all PHYs. Additionally, PHY settings can be modified through sysfs or an IRQ handler. While some struct phy_ops APIs are protected by a driver-wide mutex, others rely on individual PHY-specific mutexes.
This approach can lead to various issues, including:
1/ the IRQ handler may interrupt PHY settings in progress, racing with hardware configuration protected by a mutex lock
2/ due to msleep(20) in rcar_gen3_init_otg(), while a configuration thread sleeps to wait for the delay, another thread may try to configure another PHY (with phy_init() + phy_power_on()); re-running phy_init() goes through the exact same configuration code, re-running the same hardware configuration on the same set of registers (and bits), which might impact the result of the msleep for the 1st configuring thread
3/ sysfs can configure the hardware (through role_store()) and it can still race with the phy_init()/phy_power_on() APIs calling into the driver's struct phy_ops
To address these issues, add a spinlock to protect hardware register access and driver private data structures (e.g., calls to rcar_gen3_is_any_rphy_initialized()). Checking driver-specific data remains necessary as all PHY instances share common settings. With this change, the existing mutex protection is removed and the cleanup.h helpers are used.
While at it, to keep the code simpler, do not skip the regulator_enable()/regulator_disable() APIs in rcar_gen3_phy_usb2_power_on()/rcar_gen3_phy_usb2_power_off(), as the regulator enable/disable operations are reference counted anyway.
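As a rough sketch of the cleanup.h helpers the driver is switched to (the lock and counter below are hypothetical): guard() holds the lock until the end of the enclosing scope and scoped_guard() holds it for an explicit block, so no unlock call is needed on any return path.

#include <linux/cleanup.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(foo_lock);		/* hypothetical lock */
static unsigned int foo_events;

static void foo_record_event(void)
{
	guard(spinlock_irqsave)(&foo_lock);	/* released automatically at function exit */
	foo_events++;
}

static unsigned int foo_read_events(void)
{
	unsigned int val = 0;

	scoped_guard(spinlock_irqsave, &foo_lock)	/* held only for this block */
		val = foo_events;

	return val;
}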
Fixes: f3b5a8d9b50d ("phy: rcar-gen3-usb2: Add R-Car Gen3 USB2 PHY driver") Cc: stable@vger.kernel.org Reviewed-by: Yoshihiro Shimoda yoshihiro.shimoda.uh@renesas.com Tested-by: Yoshihiro Shimoda yoshihiro.shimoda.uh@renesas.com Reviewed-by: Lad Prabhakar prabhakar.mahadev-lad.rj@bp.renesas.com Signed-off-by: Claudiu Beznea claudiu.beznea.uj@bp.renesas.com Link: https://lore.kernel.org/r/20250507125032.565017-4-claudiu.beznea.uj@bp.renes... Signed-off-by: Vinod Koul vkoul@kernel.org Stable-dep-of: 9ce71e85b29e ("phy: renesas: rcar-gen3-usb2: Assert PLL reset on PHY power off") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/phy/renesas/phy-rcar-gen3-usb2.c | 49 +++++++++++++----------- 1 file changed, 26 insertions(+), 23 deletions(-)
diff --git a/drivers/phy/renesas/phy-rcar-gen3-usb2.c b/drivers/phy/renesas/phy-rcar-gen3-usb2.c index 69cc99c60f58d..8b1280cdbcef8 100644 --- a/drivers/phy/renesas/phy-rcar-gen3-usb2.c +++ b/drivers/phy/renesas/phy-rcar-gen3-usb2.c @@ -9,6 +9,7 @@ * Copyright (C) 2014 Cogent Embedded, Inc. */
+#include <linux/cleanup.h> #include <linux/extcon-provider.h> #include <linux/interrupt.h> #include <linux/io.h> @@ -120,7 +121,7 @@ struct rcar_gen3_chan { struct regulator *vbus; struct reset_control *rstc; struct work_struct work; - struct mutex lock; /* protects rphys[...].powered */ + spinlock_t lock; /* protects access to hardware and driver data structure. */ enum usb_dr_mode dr_mode; u32 obint_enable_bits; bool extcon_host; @@ -347,6 +348,8 @@ static ssize_t role_store(struct device *dev, struct device_attribute *attr, bool is_b_device; enum phy_mode cur_mode, new_mode;
+ guard(spinlock_irqsave)(&ch->lock); + if (!ch->is_otg_channel || !rcar_gen3_is_any_otg_rphy_initialized(ch)) return -EIO;
@@ -414,7 +417,7 @@ static void rcar_gen3_init_otg(struct rcar_gen3_chan *ch) val = readl(usb2_base + USB2_ADPCTRL); writel(val | USB2_ADPCTRL_IDPULLUP, usb2_base + USB2_ADPCTRL); } - msleep(20); + mdelay(20);
writel(0xffffffff, usb2_base + USB2_OBINTSTA); writel(ch->obint_enable_bits, usb2_base + USB2_OBINTEN); @@ -435,12 +438,14 @@ static irqreturn_t rcar_gen3_phy_usb2_irq(int irq, void *_ch) if (pm_runtime_suspended(dev)) goto rpm_put;
- status = readl(usb2_base + USB2_OBINTSTA); - if (status & ch->obint_enable_bits) { - dev_vdbg(dev, "%s: %08x\n", __func__, status); - writel(ch->obint_enable_bits, usb2_base + USB2_OBINTSTA); - rcar_gen3_device_recognition(ch); - ret = IRQ_HANDLED; + scoped_guard(spinlock, &ch->lock) { + status = readl(usb2_base + USB2_OBINTSTA); + if (status & ch->obint_enable_bits) { + dev_vdbg(dev, "%s: %08x\n", __func__, status); + writel(ch->obint_enable_bits, usb2_base + USB2_OBINTSTA); + rcar_gen3_device_recognition(ch); + ret = IRQ_HANDLED; + } }
rpm_put: @@ -455,6 +460,8 @@ static int rcar_gen3_phy_usb2_init(struct phy *p) void __iomem *usb2_base = channel->base; u32 val;
+ guard(spinlock_irqsave)(&channel->lock); + /* Initialize USB2 part */ val = readl(usb2_base + USB2_INT_ENABLE); val |= USB2_INT_ENABLE_UCOM_INTEN | rphy->int_enable_bits; @@ -481,6 +488,8 @@ static int rcar_gen3_phy_usb2_exit(struct phy *p) void __iomem *usb2_base = channel->base; u32 val;
+ guard(spinlock_irqsave)(&channel->lock); + rphy->initialized = false;
val = readl(usb2_base + USB2_INT_ENABLE); @@ -500,16 +509,17 @@ static int rcar_gen3_phy_usb2_power_on(struct phy *p) u32 val; int ret = 0;
- mutex_lock(&channel->lock); - if (!rcar_gen3_are_all_rphys_power_off(channel)) - goto out; - if (channel->vbus) { ret = regulator_enable(channel->vbus); if (ret) - goto out; + return ret; }
+ guard(spinlock_irqsave)(&channel->lock); + + if (!rcar_gen3_are_all_rphys_power_off(channel)) + goto out; + val = readl(usb2_base + USB2_USBCTR); val |= USB2_USBCTR_PLL_RST; writel(val, usb2_base + USB2_USBCTR); @@ -519,7 +529,6 @@ static int rcar_gen3_phy_usb2_power_on(struct phy *p) out: /* The powered flag should be set for any other phys anyway */ rphy->powered = true; - mutex_unlock(&channel->lock);
return 0; } @@ -530,18 +539,12 @@ static int rcar_gen3_phy_usb2_power_off(struct phy *p) struct rcar_gen3_chan *channel = rphy->ch; int ret = 0;
- mutex_lock(&channel->lock); - rphy->powered = false; - - if (!rcar_gen3_are_all_rphys_power_off(channel)) - goto out; + scoped_guard(spinlock_irqsave, &channel->lock) + rphy->powered = false;
if (channel->vbus) ret = regulator_disable(channel->vbus);
-out: - mutex_unlock(&channel->lock); - return ret; }
@@ -742,7 +745,7 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev) if (phy_data->no_adp_ctrl) channel->obint_enable_bits = USB2_OBINT_IDCHG_EN;
- mutex_init(&channel->lock); + spin_lock_init(&channel->lock); for (i = 0; i < NUM_OF_PHYS; i++) { channel->rphys[i].phy = devm_phy_create(dev, NULL, phy_data->phy_usb2_ops);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Claudiu Beznea claudiu.beznea.uj@bp.renesas.com
[ Upstream commit 9ce71e85b29eb63e48e294479742e670513f03a0 ]
Assert PLL reset on PHY power off. This saves power.
Fixes: f3b5a8d9b50d ("phy: rcar-gen3-usb2: Add R-Car Gen3 USB2 PHY driver") Cc: stable@vger.kernel.org Reviewed-by: Yoshihiro Shimoda yoshihiro.shimoda.uh@renesas.com Tested-by: Yoshihiro Shimoda yoshihiro.shimoda.uh@renesas.com Reviewed-by: Lad Prabhakar prabhakar.mahadev-lad.rj@bp.renesas.com Signed-off-by: Claudiu Beznea claudiu.beznea.uj@bp.renesas.com Link: https://lore.kernel.org/r/20250507125032.565017-5-claudiu.beznea.uj@bp.renes... Signed-off-by: Vinod Koul vkoul@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/phy/renesas/phy-rcar-gen3-usb2.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/phy/renesas/phy-rcar-gen3-usb2.c b/drivers/phy/renesas/phy-rcar-gen3-usb2.c index 8b1280cdbcef8..024cc5ce68a37 100644 --- a/drivers/phy/renesas/phy-rcar-gen3-usb2.c +++ b/drivers/phy/renesas/phy-rcar-gen3-usb2.c @@ -539,9 +539,17 @@ static int rcar_gen3_phy_usb2_power_off(struct phy *p) struct rcar_gen3_chan *channel = rphy->ch; int ret = 0;
- scoped_guard(spinlock_irqsave, &channel->lock) + scoped_guard(spinlock_irqsave, &channel->lock) { rphy->powered = false;
+ if (rcar_gen3_are_all_rphys_power_off(channel)) { + u32 val = readl(channel->base + USB2_USBCTR); + + val |= USB2_USBCTR_PLL_RST; + writel(val, channel->base + USB2_USBCTR); + } + } + if (channel->vbus) ret = regulator_disable(channel->vbus);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dmitry Bogdanov d.bogdanov@yadro.com
[ Upstream commit 7f533cc5ee4c4436cee51dc58e81dfd9c3384418 ]
NOPIN response timer may expire on a deleted connection and crash with such logs:
Did not receive response to NOPIN on CID: 0, failing connection for I_T Nexus (null),i,0x00023d000125,iqn.2017-01.com.iscsi.target,t,0x3d
BUG: Kernel NULL pointer dereference on read at 0x00000000
NIP strlcpy+0x8/0xb0
LR iscsit_fill_cxn_timeout_err_stats+0x5c/0xc0 [iscsi_target_mod]
Call Trace:
 iscsit_handle_nopin_response_timeout+0xfc/0x120 [iscsi_target_mod]
 call_timer_fn+0x58/0x1f0
 run_timer_softirq+0x740/0x860
 __do_softirq+0x16c/0x420
 irq_exit+0x188/0x1c0
 timer_interrupt+0x184/0x410
That is because the nopin response timer may be re-started when the nopin timer expires.
Stop the nopin timer before stopping the nopin response timer, to be sure that neither of them will be re-started.
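As a generic, hypothetical sketch of the ordering rule (not the iSCSI target code): when one timer's handler can re-arm the other, the re-arming timer has to be stopped first, otherwise a late handler run can undo the second del_timer_sync().

#include <linux/jiffies.h>
#include <linux/timer.h>

static struct timer_list ping_timer, reply_timer;	/* hypothetical pair */

static void ping_timer_fn(struct timer_list *t)
{
	/* send a keep-alive, then expect a reply within 5s */
	mod_timer(&reply_timer, jiffies + 5 * HZ);
}

static void reply_timer_fn(struct timer_list *t)
{
	/* no reply arrived: tear the connection down */
}

static void foo_init(void)
{
	timer_setup(&ping_timer, ping_timer_fn, 0);
	timer_setup(&reply_timer, reply_timer_fn, 0);
}

static void foo_shutdown(void)
{
	/* Stop the timer that can re-arm the other one first. */
	del_timer_sync(&ping_timer);
	del_timer_sync(&reply_timer);
}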
Signed-off-by: Dmitry Bogdanov d.bogdanov@yadro.com Link: https://lore.kernel.org/r/20241224101757.32300-1-d.bogdanov@yadro.com Reviewed-by: Maurizio Lombardi mlombard@redhat.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/target/iscsi/iscsi_target.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c index 07e196b44b91d..04d40e76772b3 100644 --- a/drivers/target/iscsi/iscsi_target.c +++ b/drivers/target/iscsi/iscsi_target.c @@ -4314,8 +4314,8 @@ int iscsit_close_connection( spin_unlock(&iscsit_global->ts_bitmap_lock);
iscsit_stop_timers_for_cmds(conn); - iscsit_stop_nopin_response_timer(conn); iscsit_stop_nopin_timer(conn); + iscsit_stop_nopin_response_timer(conn);
if (conn->conn_transport->iscsit_wait_conn) conn->conn_transport->iscsit_wait_conn(conn);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zhongqiu Han quic_zhonhan@quicinc.com
[ Upstream commit 2e2f925fe737576df2373931c95e1a2b66efdfef ]
syzbot reports a data race when accessing event_triggered; here is the simplified stack trace from when the issue occurred:
==================================================================
BUG: KCSAN: data-race in virtqueue_disable_cb / virtqueue_enable_cb_delayed

write to 0xffff8881025bc452 of 1 bytes by task 3288 on cpu 0:
 virtqueue_enable_cb_delayed+0x42/0x3c0 drivers/virtio/virtio_ring.c:2653
 start_xmit+0x230/0x1310 drivers/net/virtio_net.c:3264
 __netdev_start_xmit include/linux/netdevice.h:5151 [inline]
 netdev_start_xmit include/linux/netdevice.h:5160 [inline]
 xmit_one net/core/dev.c:3800 [inline]

read to 0xffff8881025bc452 of 1 bytes by interrupt on cpu 1:
 virtqueue_disable_cb_split drivers/virtio/virtio_ring.c:880 [inline]
 virtqueue_disable_cb+0x92/0x180 drivers/virtio/virtio_ring.c:2566
 skb_xmit_done+0x5f/0x140 drivers/net/virtio_net.c:777
 vring_interrupt+0x161/0x190 drivers/virtio/virtio_ring.c:2715
 __handle_irq_event_percpu+0x95/0x490 kernel/irq/handle.c:158
 handle_irq_event_percpu kernel/irq/handle.c:193 [inline]

value changed: 0x01 -> 0x00
==================================================================
When the data race occurs, the function virtqueue_enable_cb_delayed() sets event_triggered to false, and virtqueue_disable_cb_split/packed() reads it as false due to the race condition. Since event_triggered is an unreliable hint used for optimization, this should only cause the driver to temporarily suggest that the device not send an interrupt notification when the event index is used.
Fix this KCSAN-reported data race by explicitly marking the access with data_race().
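A minimal sketch of the annotation with made-up names: data_race() tells KCSAN that a plain, possibly concurrent access is intentional and benign; it adds no ordering or atomicity of its own.

#include <linux/compiler.h>
#include <linux/types.h>

/* Hypothetical flag used only as an optimization hint, so reading a
 * stale value concurrently is acceptable. */
static bool event_hint;

static void hint_clear(void)
{
	data_race(event_hint = false);	/* benign racy write, not flagged by KCSAN */
}

static bool hint_read(void)
{
	return data_race(event_hint);	/* benign racy read */
}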
Reported-by: syzbot+efe683d57990864b8c8e@syzkaller.appspotmail.com Closes: https://lore.kernel.org/all/67c7761a.050a0220.15b4b9.0018.GAE@google.com/ Signed-off-by: Zhongqiu Han quic_zhonhan@quicinc.com Message-Id: 20250312130412.3516307-1-quic_zhonhan@quicinc.com Signed-off-by: Michael S. Tsirkin mst@redhat.com Acked-by: Jason Wang jasowang@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/virtio/virtio_ring.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index 7d320f799ca1e..06a64c4adc987 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -2415,7 +2415,7 @@ bool virtqueue_enable_cb_delayed(struct virtqueue *_vq) struct vring_virtqueue *vq = to_vvq(_vq);
if (vq->event_triggered) - vq->event_triggered = false; + data_race(vq->event_triggered = false);
return vq->packed_ring ? virtqueue_enable_cb_delayed_packed(_vq) : virtqueue_enable_cb_delayed_split(_vq);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marek Szyprowski m.szyprowski@samsung.com
[ Upstream commit c9b19ea63036fc537a69265acea1b18dabd1cbd3 ]
When CONFIG_NEED_DMA_MAP_STATE is not defined, dma-mapping clients might trigger unused-variable compilation warnings for the arguments of dma_unmap_*() calls. Redefine the macros for those calls to let the compiler see that it is okay when the provided arguments are not used.
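A small sketch of the technique with hypothetical stub macros (not the actual dma-mapping names): binding the argument to a __maybe_unused local inside a statement expression makes the compiler treat the caller's variable as used, even though the no-op configuration discards it.

#include <linux/compiler.h>

/* Hypothetical stub macros illustrating the pattern. */
#define stub_addr(PTR) \
	({ typeof(PTR) __p __maybe_unused = PTR; 0; })
#define stub_addr_set(PTR, VAL) \
	do { typeof(PTR) __p __maybe_unused = PTR; } while (0)

static int demo(void)
{
	int token = 42;			/* referenced only through the stub macros */

	stub_addr_set(token, 1);
	return stub_addr(token);	/* no "set but not used" warning for token */
}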
Reported-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Suggested-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Marek Szyprowski m.szyprowski@samsung.com Tested-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Link: https://lore.kernel.org/r/20250415075659.428549-1-m.szyprowski@samsung.com Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/dma-mapping.h | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index e13050eb97771..af3f39ecc1b87 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -598,10 +598,14 @@ static inline int dma_mmap_wc(struct device *dev, #else #define DEFINE_DMA_UNMAP_ADDR(ADDR_NAME) #define DEFINE_DMA_UNMAP_LEN(LEN_NAME) -#define dma_unmap_addr(PTR, ADDR_NAME) (0) -#define dma_unmap_addr_set(PTR, ADDR_NAME, VAL) do { } while (0) -#define dma_unmap_len(PTR, LEN_NAME) (0) -#define dma_unmap_len_set(PTR, LEN_NAME, VAL) do { } while (0) +#define dma_unmap_addr(PTR, ADDR_NAME) \ + ({ typeof(PTR) __p __maybe_unused = PTR; 0; }) +#define dma_unmap_addr_set(PTR, ADDR_NAME, VAL) \ + do { typeof(PTR) __p __maybe_unused = PTR; } while (0) +#define dma_unmap_len(PTR, LEN_NAME) \ + ({ typeof(PTR) __p __maybe_unused = PTR; 0; }) +#define dma_unmap_len_set(PTR, LEN_NAME, VAL) \ + do { typeof(PTR) __p __maybe_unused = PTR; } while (0) #endif
#endif /* _LINUX_DMA_MAPPING_H */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: gaoxu gaoxu2@honor.com
[ Upstream commit 87c259a7a359e73e6c52c68fcbec79988999b4e6 ]
When adding a folio_memcg() call in the zram module for Android16-6.12, the following error occurs during compilation:
ERROR: modpost: "cgroup_mutex" [../soc-repo/zram.ko] undefined!
This error is caused by the indirect call to lockdep_is_held(&cgroup_mutex) within folio_memcg. The export setting for cgroup_mutex is controlled by the CONFIG_PROVE_RCU macro. If CONFIG_LOCKDEP is enabled while CONFIG_PROVE_RCU is not, this compilation error will occur.
To resolve this issue, make CONFIG_LOCKDEP a parallel condition for the export so that cgroup_mutex is properly exported when needed.
Signed-off-by: gao xu gaoxu2@honor.com Acked-by: Michal Koutný mkoutny@suse.com Signed-off-by: Tejun Heo tj@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/cgroup/cgroup.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c index 6e54f0daebeff..7997c8021b62f 100644 --- a/kernel/cgroup/cgroup.c +++ b/kernel/cgroup/cgroup.c @@ -90,7 +90,7 @@ DEFINE_MUTEX(cgroup_mutex); DEFINE_SPINLOCK(css_set_lock);
-#ifdef CONFIG_PROVE_RCU +#if (defined CONFIG_PROVE_RCU || defined CONFIG_LOCKDEP) EXPORT_SYMBOL_GPL(cgroup_mutex); EXPORT_SYMBOL_GPL(css_set_lock); #endif
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ranjan Kumar ranjan.kumar@broadcom.com
[ Upstream commit b0b7ee3b574a72283399b9232f6190be07f220c0 ]
Ensure event logs are only generated when the debug logging level MPI3_DEBUG_EVENT is enabled. This prevents unnecessary logging.
Signed-off-by: Ranjan Kumar ranjan.kumar@broadcom.com Link: https://lore.kernel.org/r/20250415101546.204018-1-ranjan.kumar@broadcom.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/mpi3mr/mpi3mr_fw.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c index 41636c4c43af0..015a875a46a19 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_fw.c +++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c @@ -174,6 +174,9 @@ static void mpi3mr_print_event_data(struct mpi3mr_ioc *mrioc, char *desc = NULL; u16 event;
+ if (!(mrioc->logging_level & MPI3_DEBUG_EVENT)) + return; + event = event_reply->event;
switch (event) {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vladimir Oltean vladimir.oltean@nxp.com
[ Upstream commit 1d587faa5be7e9785b682cc5f58ba8f4100c13ea ]
This small snippet of code ensures that we do something with the array of RX software buffer descriptor elements after passing the skb to the stack. In this case, we see if the other half of the page is reusable, and if so, we "turn around" the buffers, making them directly usable by enetc_refill_rx_ring() without going to enetc_new_page().
We will need to perform this kind of buffer flipping from a new code path, i.e. from XDP_PASS. Currently, enetc_build_skb() does this buffer by buffer, but in a subsequent change we will stop using enetc_build_skb() for XDP_PASS.
Signed-off-by: Vladimir Oltean vladimir.oltean@nxp.com Reviewed-by: Wei Fang wei.fang@nxp.com Link: https://patch.msgid.link/20250417120005.3288549-3-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/freescale/enetc/enetc.c | 16 +++++++++++----- 1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c index 230b317d93dae..bf49c07c8b513 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc.c +++ b/drivers/net/ethernet/freescale/enetc/enetc.c @@ -1536,6 +1536,16 @@ static void enetc_xdp_drop(struct enetc_bdr *rx_ring, int rx_ring_first, } }
+static void enetc_bulk_flip_buff(struct enetc_bdr *rx_ring, int rx_ring_first, + int rx_ring_last) +{ + while (rx_ring_first != rx_ring_last) { + enetc_flip_rx_buff(rx_ring, + &rx_ring->rx_swbd[rx_ring_first]); + enetc_bdr_idx_inc(rx_ring, &rx_ring_first); + } +} + static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring, struct napi_struct *napi, int work_limit, struct bpf_prog *prog) @@ -1659,11 +1669,7 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring, enetc_xdp_drop(rx_ring, orig_i, i); rx_ring->stats.xdp_redirect_failures++; } else { - while (orig_i != i) { - enetc_flip_rx_buff(rx_ring, - &rx_ring->rx_swbd[orig_i]); - enetc_bdr_idx_inc(rx_ring, &orig_i); - } + enetc_bulk_flip_buff(rx_ring, orig_i, i); xdp_redirect_frm_cnt++; rx_ring->stats.xdp_redirect++; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Felix Kuehling felix.kuehling@amd.com
[ Upstream commit a92741e72f91b904c1d8c3d409ed8dbe9c1f2b26 ]
If peer memory is accessible through XGMI, allow leaving it in VRAM rather than forcing its migration to GTT on DMABuf attachment.
Signed-off-by: Felix Kuehling felix.kuehling@amd.com Tested-by: Hao (Claire) Zhou hao.zhou@amd.com Reviewed-by: Christian König christian.koenig@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com (cherry picked from commit 372c8d72c3680fdea3fbb2d6b089f76b4a6d596a) Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 30 ++++++++++++++++++++- 1 file changed, 29 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c index ab06cb4d7b358..4dcc7de961d08 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c @@ -42,6 +42,29 @@ #include <linux/pci-p2pdma.h> #include <linux/pm_runtime.h>
+static const struct dma_buf_attach_ops amdgpu_dma_buf_attach_ops; + +/** + * dma_buf_attach_adev - Helper to get adev of an attachment + * + * @attach: attachment + * + * Returns: + * A struct amdgpu_device * if the attaching device is an amdgpu device or + * partition, NULL otherwise. + */ +static struct amdgpu_device *dma_buf_attach_adev(struct dma_buf_attachment *attach) +{ + if (attach->importer_ops == &amdgpu_dma_buf_attach_ops) { + struct drm_gem_object *obj = attach->importer_priv; + struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); + + return amdgpu_ttm_adev(bo->tbo.bdev); + } + + return NULL; +} + /** * amdgpu_dma_buf_attach - &dma_buf_ops.attach implementation * @@ -53,12 +76,14 @@ static int amdgpu_dma_buf_attach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach) { + struct amdgpu_device *attach_adev = dma_buf_attach_adev(attach); struct drm_gem_object *obj = dmabuf->priv; struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev); int r;
- if (pci_p2pdma_distance(adev->pdev, attach->dev, false) < 0) + if (!amdgpu_dmabuf_is_xgmi_accessible(attach_adev, bo) && + pci_p2pdma_distance(adev->pdev, attach->dev, false) < 0) attach->peer2peer = false;
r = pm_runtime_get_sync(adev_to_drm(adev)->dev); @@ -479,6 +504,9 @@ bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev, struct drm_gem_object *obj = &bo->tbo.base; struct drm_gem_object *gobj;
+ if (!adev) + return false; + if (obj->import_attach) { struct dma_buf *dma_buf = obj->import_attach->dmabuf;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ihor Solodrai ihor.solodrai@linux.dev
[ Upstream commit f2858f308131a09e33afb766cd70119b5b900569 ]
"sockmap_ktls disconnect_after_delete" test has been failing on BPF CI after recent merges from netdev: * https://github.com/kernel-patches/bpf/actions/runs/14458537639 * https://github.com/kernel-patches/bpf/actions/runs/14457178732
It happens because disconnect has been disabled for TLS [1], and it renders the test case invalid.
Removing all the test code creates a conflict between bpf and bpf-next, so for now only remove the offending assert [2].
The test will be removed later on bpf-next.
[1] https://lore.kernel.org/netdev/20250404180334.3224206-1-kuba@kernel.org/
[2] https://lore.kernel.org/bpf/cfc371285323e1a3f3b006bfcf74e6cf7ad65258@linux.d...
Signed-off-by: Ihor Solodrai ihor.solodrai@linux.dev Signed-off-by: Andrii Nakryiko andrii@kernel.org Reviewed-by: Jiayuan Chen jiayuan.chen@linux.dev Link: https://lore.kernel.org/bpf/20250416170246.2438524-1-ihor.solodrai@linux.dev Signed-off-by: Alexei Starovoitov ast@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/selftests/bpf/prog_tests/sockmap_ktls.c | 1 - 1 file changed, 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_ktls.c b/tools/testing/selftests/bpf/prog_tests/sockmap_ktls.c index 2d0796314862a..0a99fd404f6dc 100644 --- a/tools/testing/selftests/bpf/prog_tests/sockmap_ktls.c +++ b/tools/testing/selftests/bpf/prog_tests/sockmap_ktls.c @@ -68,7 +68,6 @@ static void test_sockmap_ktls_disconnect_after_delete(int family, int map) goto close_cli;
err = disconnect(cli); - ASSERT_OK(err, "disconnect");
close_cli: close(cli);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Brandon Kammerdiener brandon.kammerdiener@intel.com
[ Upstream commit 75673fda0c557ae26078177dd14d4857afbf128d ]
The _safe variant used here gets the next element before running the callback, avoiding the endless loop condition.
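For illustration, a generic safe-iteration sketch with a hypothetical list (not the hashtab code): the _safe form caches the next node before the body runs, so the body may unlink and free the current entry without breaking or looping the walk.

#include <linux/list.h>
#include <linux/slab.h>

struct item {
	struct list_head node;
	int key;
};

static void prune_items(struct list_head *head, int key)
{
	struct item *it, *next;

	/*
	 * list_for_each_entry() would dereference "it" after it was
	 * freed; the _safe variant reads the next pointer up front.
	 */
	list_for_each_entry_safe(it, next, head, node) {
		if (it->key == key) {
			list_del(&it->node);
			kfree(it);
		}
	}
}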
Signed-off-by: Brandon Kammerdiener brandon.kammerdiener@intel.com Link: https://lore.kernel.org/r/20250424153246.141677-2-brandon.kammerdiener@intel... Signed-off-by: Alexei Starovoitov ast@kernel.org Acked-by: Hou Tao houtao1@huawei.com Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/bpf/hashtab.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c index dae9ed02a75be..06bc7f26be06f 100644 --- a/kernel/bpf/hashtab.c +++ b/kernel/bpf/hashtab.c @@ -2172,7 +2172,7 @@ static int bpf_for_each_hash_elem(struct bpf_map *map, bpf_callback_t callback_f b = &htab->buckets[i]; rcu_read_lock(); head = &b->head; - hlist_nulls_for_each_entry_rcu(elem, n, head, hash_node) { + hlist_nulls_for_each_entry_safe(elem, n, head, hash_node) { key = elem->key; if (is_percpu) { /* current cpu value for percpu map */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Haoran Jiang jianghaoran@kylinos.cn
[ Upstream commit 548762f05d19c5542db7590bcdfb9be1fb928376 ]
When building the latest samples/bpf on LoongArch Fedora
make M=samples/bpf
There are compilation errors as follows:
In file included from ./linux/samples/bpf/sockex2_kern.c:2:
In file included from ./include/uapi/linux/in.h:25:
In file included from ./include/linux/socket.h:8:
In file included from ./include/linux/uio.h:9:
In file included from ./include/linux/thread_info.h:60:
In file included from ./arch/loongarch/include/asm/thread_info.h:15:
In file included from ./arch/loongarch/include/asm/processor.h:13:
In file included from ./arch/loongarch/include/asm/cpu-info.h:11:
./arch/loongarch/include/asm/loongarch.h:13:10: fatal error: 'larchintrin.h' file not found
         ^~~~~~~~~~~~~~~
1 error generated.
larchintrin.h is located in /usr/lib64/clang/14.0.6/include, so the header file location has to be specified at compile time.
Test on LoongArch Fedora: https://github.com/fedora-remix-loongarch/releases-info
Signed-off-by: Haoran Jiang jianghaoran@kylinos.cn Signed-off-by: zhangxi zhangxi@kylinos.cn Signed-off-by: Andrii Nakryiko andrii@kernel.org Link: https://lore.kernel.org/bpf/20250425095042.838824-1-jianghaoran@kylinos.cn Signed-off-by: Sasha Levin sashal@kernel.org --- samples/bpf/Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile index 727da3c5879b2..77bf18cfdae7f 100644 --- a/samples/bpf/Makefile +++ b/samples/bpf/Makefile @@ -434,7 +434,7 @@ $(obj)/%.o: $(src)/%.c @echo " CLANG-bpf " $@ $(Q)$(CLANG) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(BPF_EXTRA_CFLAGS) \ -I$(obj) -I$(srctree)/tools/testing/selftests/bpf/ \ - -I$(LIBBPF_INCLUDE) \ + -I$(LIBBPF_INCLUDE) $(CLANG_SYS_INCLUDES) \ -D__KERNEL__ -D__BPF_TRACING__ -Wno-unused-value -Wno-pointer-sign \ -D__TARGET_ARCH_$(SRCARCH) -Wno-compare-distinct-pointer-types \ -Wno-gnu-variable-sized-type-not-at-end \
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Daniel Gomez da.gomez@samsung.com
[ Upstream commit a26fe287eed112b4e21e854f173c8918a6a8596d ]
The scripts/kconfig/merge_config.sh script requires an existing $INITFILE (or the $1 argument) as a base file for merging Kconfig fragments. However, an empty $INITFILE can serve as an initial starting point, later referenced by the KCONFIG_ALLCONFIG Makefile variable if -m is not used. This variable can point to any configuration file containing preset config symbols (the merged output), as stated in Documentation/kbuild/kconfig.rst. When -m is used, $INITFILE will contain just the merged output, requiring the user to run make (i.e. KCONFIG_ALLCONFIG=<$INITFILE> make <allnoconfig/alldefconfig> or make olddefconfig).
Instead of failing when `$INITFILE` is missing, create an empty file and use it as the starting point for merges.
Signed-off-by: Daniel Gomez da.gomez@samsung.com Signed-off-by: Masahiro Yamada masahiroy@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- scripts/kconfig/merge_config.sh | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/scripts/kconfig/merge_config.sh b/scripts/kconfig/merge_config.sh index 72da3b8d6f307..151f9938abaa7 100755 --- a/scripts/kconfig/merge_config.sh +++ b/scripts/kconfig/merge_config.sh @@ -105,8 +105,8 @@ INITFILE=$1 shift;
if [ ! -r "$INITFILE" ]; then - echo "The base file '$INITFILE' does not exist. Exit." >&2 - exit 1 + echo "The base file '$INITFILE' does not exist. Creating one..." >&2 + touch "$INITFILE" fi
MERGE_LIST=$*
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Anthony Krowiak akrowiak@linux.ibm.com
[ Upstream commit d33d729afcc8ad2148d99f9bc499b33fd0c0d73b ]
An erroneous message is written to the kernel log when either of the following actions is taken by a user:
1. Assign an adapter or domain to a vfio_ap mediated device via its sysfs assign_adapter or assign_domain attributes that would result in one or more AP queues being assigned that are already assigned to a different mediated device. Sharing of queues between mdevs is not allowed.
2. Reserve an adapter or domain for the host device driver via the AP bus driver's sysfs apmask or aqmask attribute that would result in providing host access to an AP queue that is in use by a vfio_ap mediated device. Reserving a queue for a host driver that is in use by an mdev is not allowed.
In both cases, the assignment will return an error; however, a message like the following is written to the kernel log:
vfio_ap_mdev e1839397-51a0-4e3c-91e0-c3b9c3d3047d: Userspace may not re-assign queue 00.0028 already assigned to \ e1839397-51a0-4e3c-91e0-c3b9c3d3047d
Notice the mdev reporting the error is the same as the mdev identified in the message as the one to which the queue is being assigned. It is perfectly okay to assign a queue to an mdev to which it is already assigned; the assignment is simply ignored by the vfio_ap device driver.
This patch logs more descriptive and accurate messages for both 1 and 2 above to the kernel log:
Example for 1: vfio_ap_mdev 0fe903a0-a323-44db-9daf-134c68627d61: Userspace may not assign queue 00.0033 to mdev: already assigned to \ 62177883-f1bb-47f0-914d-32a22e3a8804
Example for 2: vfio_ap_mdev 62177883-f1bb-47f0-914d-32a22e3a8804: Can not reserve queue 00.0033 for host driver: in use by mdev
Signed-off-by: Anthony Krowiak akrowiak@linux.ibm.com Link: https://lore.kernel.org/r/20250311103304.1539188-1-akrowiak@linux.ibm.com Signed-off-by: Heiko Carstens hca@linux.ibm.com Signed-off-by: Vasily Gorbik gor@linux.ibm.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/s390/crypto/vfio_ap_ops.c | 72 ++++++++++++++++++++----------- 1 file changed, 46 insertions(+), 26 deletions(-)
diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c index 86a8bd5324899..11fe917fbd9d4 100644 --- a/drivers/s390/crypto/vfio_ap_ops.c +++ b/drivers/s390/crypto/vfio_ap_ops.c @@ -789,48 +789,66 @@ static void vfio_ap_mdev_remove(struct mdev_device *mdev) vfio_put_device(&matrix_mdev->vdev); }
-#define MDEV_SHARING_ERR "Userspace may not re-assign queue %02lx.%04lx " \ - "already assigned to %s" +#define MDEV_SHARING_ERR "Userspace may not assign queue %02lx.%04lx to mdev: already assigned to %s"
-static void vfio_ap_mdev_log_sharing_err(struct ap_matrix_mdev *matrix_mdev, - unsigned long *apm, - unsigned long *aqm) +#define MDEV_IN_USE_ERR "Can not reserve queue %02lx.%04lx for host driver: in use by mdev" + +static void vfio_ap_mdev_log_sharing_err(struct ap_matrix_mdev *assignee, + struct ap_matrix_mdev *assigned_to, + unsigned long *apm, unsigned long *aqm) { unsigned long apid, apqi; - const struct device *dev = mdev_dev(matrix_mdev->mdev); - const char *mdev_name = dev_name(dev);
- for_each_set_bit_inv(apid, apm, AP_DEVICES) + for_each_set_bit_inv(apid, apm, AP_DEVICES) { + for_each_set_bit_inv(apqi, aqm, AP_DOMAINS) { + dev_warn(mdev_dev(assignee->mdev), MDEV_SHARING_ERR, + apid, apqi, dev_name(mdev_dev(assigned_to->mdev))); + } + } +} + +static void vfio_ap_mdev_log_in_use_err(struct ap_matrix_mdev *assignee, + unsigned long *apm, unsigned long *aqm) +{ + unsigned long apid, apqi; + + for_each_set_bit_inv(apid, apm, AP_DEVICES) { for_each_set_bit_inv(apqi, aqm, AP_DOMAINS) - dev_warn(dev, MDEV_SHARING_ERR, apid, apqi, mdev_name); + dev_warn(mdev_dev(assignee->mdev), MDEV_IN_USE_ERR, apid, apqi); + } }
/** * vfio_ap_mdev_verify_no_sharing - verify APQNs are not shared by matrix mdevs * + * @assignee: the matrix mdev to which @mdev_apm and @mdev_aqm are being + * assigned; or, NULL if this function was called by the AP bus + * driver in_use callback to verify none of the APQNs being reserved + * for the host device driver are in use by a vfio_ap mediated device * @mdev_apm: mask indicating the APIDs of the APQNs to be verified * @mdev_aqm: mask indicating the APQIs of the APQNs to be verified * - * Verifies that each APQN derived from the Cartesian product of a bitmap of - * AP adapter IDs and AP queue indexes is not configured for any matrix - * mediated device. AP queue sharing is not allowed. + * Verifies that each APQN derived from the Cartesian product of APIDs + * represented by the bits set in @mdev_apm and the APQIs of the bits set in + * @mdev_aqm is not assigned to a mediated device other than the mdev to which + * the APQN is being assigned (@assignee). AP queue sharing is not allowed. * * Return: 0 if the APQNs are not shared; otherwise return -EADDRINUSE. */ -static int vfio_ap_mdev_verify_no_sharing(unsigned long *mdev_apm, +static int vfio_ap_mdev_verify_no_sharing(struct ap_matrix_mdev *assignee, + unsigned long *mdev_apm, unsigned long *mdev_aqm) { - struct ap_matrix_mdev *matrix_mdev; + struct ap_matrix_mdev *assigned_to; DECLARE_BITMAP(apm, AP_DEVICES); DECLARE_BITMAP(aqm, AP_DOMAINS);
- list_for_each_entry(matrix_mdev, &matrix_dev->mdev_list, node) { + list_for_each_entry(assigned_to, &matrix_dev->mdev_list, node) { /* - * If the input apm and aqm are fields of the matrix_mdev - * object, then move on to the next matrix_mdev. + * If the mdev to which the mdev_apm and mdev_aqm is being + * assigned is the same as the mdev being verified */ - if (mdev_apm == matrix_mdev->matrix.apm && - mdev_aqm == matrix_mdev->matrix.aqm) + if (assignee == assigned_to) continue;
memset(apm, 0, sizeof(apm)); @@ -840,15 +858,16 @@ static int vfio_ap_mdev_verify_no_sharing(unsigned long *mdev_apm, * We work on full longs, as we can only exclude the leftover * bits in non-inverse order. The leftover is all zeros. */ - if (!bitmap_and(apm, mdev_apm, matrix_mdev->matrix.apm, - AP_DEVICES)) + if (!bitmap_and(apm, mdev_apm, assigned_to->matrix.apm, AP_DEVICES)) continue;
- if (!bitmap_and(aqm, mdev_aqm, matrix_mdev->matrix.aqm, - AP_DOMAINS)) + if (!bitmap_and(aqm, mdev_aqm, assigned_to->matrix.aqm, AP_DOMAINS)) continue;
- vfio_ap_mdev_log_sharing_err(matrix_mdev, apm, aqm); + if (assignee) + vfio_ap_mdev_log_sharing_err(assignee, assigned_to, apm, aqm); + else + vfio_ap_mdev_log_in_use_err(assigned_to, apm, aqm);
return -EADDRINUSE; } @@ -877,7 +896,8 @@ static int vfio_ap_mdev_validate_masks(struct ap_matrix_mdev *matrix_mdev) matrix_mdev->matrix.aqm)) return -EADDRNOTAVAIL;
- return vfio_ap_mdev_verify_no_sharing(matrix_mdev->matrix.apm, + return vfio_ap_mdev_verify_no_sharing(matrix_mdev, + matrix_mdev->matrix.apm, matrix_mdev->matrix.aqm); }
@@ -1945,7 +1965,7 @@ int vfio_ap_mdev_resource_in_use(unsigned long *apm, unsigned long *aqm)
mutex_lock(&matrix_dev->guests_lock); mutex_lock(&matrix_dev->mdevs_lock); - ret = vfio_ap_mdev_verify_no_sharing(apm, aqm); + ret = vfio_ap_mdev_verify_no_sharing(NULL, apm, aqm); mutex_unlock(&matrix_dev->mdevs_lock); mutex_unlock(&matrix_dev->guests_lock);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pali Rohár pali@kernel.org
[ Upstream commit e255612b5ed9f179abe8196df7c2ba09dd227900 ]
Some operations, like WRITE, do not require FILE_READ_ATTRIBUTES access.
So when FILE_READ_ATTRIBUTES is not explicitly requested for smb2_open_file(), first try the SMB2 CREATE with FILE_READ_ATTRIBUTES access (as before) and then fall back to an SMB2 CREATE without FILE_READ_ATTRIBUTES access (the less common case).
This change allows a WRITE operation to complete on a file that does not grant FILE_READ_ATTRIBUTES permission when its parent directory does not grant READ_DATA permission (parent directory READ_DATA is an implicit grant of the child's FILE_READ_ATTRIBUTES permission).
Signed-off-by: Pali Rohár pali@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/smb/client/smb2file.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/fs/smb/client/smb2file.c b/fs/smb/client/smb2file.c index a7475bc05cac0..afdc78e92ee9b 100644 --- a/fs/smb/client/smb2file.c +++ b/fs/smb/client/smb2file.c @@ -108,16 +108,25 @@ int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32 int err_buftype = CIFS_NO_BUFFER; struct cifs_fid *fid = oparms->fid; struct network_resiliency_req nr_ioctl_req; + bool retry_without_read_attributes = false;
smb2_path = cifs_convert_path_to_utf16(oparms->path, oparms->cifs_sb); if (smb2_path == NULL) return -ENOMEM;
- oparms->desired_access |= FILE_READ_ATTRIBUTES; + if (!(oparms->desired_access & FILE_READ_ATTRIBUTES)) { + oparms->desired_access |= FILE_READ_ATTRIBUTES; + retry_without_read_attributes = true; + } smb2_oplock = SMB2_OPLOCK_LEVEL_BATCH;
rc = SMB2_open(xid, oparms, smb2_path, &smb2_oplock, smb2_data, NULL, &err_iov, &err_buftype); + if (rc == -EACCES && retry_without_read_attributes) { + oparms->desired_access &= ~FILE_READ_ATTRIBUTES; + rc = SMB2_open(xid, oparms, smb2_path, &smb2_oplock, smb2_data, NULL, &err_iov, + &err_buftype); + } if (rc && data) { struct smb2_hdr *hdr = err_iov.iov_base;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pali Rohár pali@kernel.org
[ Upstream commit 4236ac9fe5b8b42756070d4abfb76fed718e87c2 ]
Old SMB1 servers without CAP_NT_SMBS do not support the CIFS_open() function, and SMBLegacyOpen() needs to be used instead. This logic is already handled in the cifs_open_file() function, which is the server->ops->open callback function.
So for querying and creating MF symlinks, use the open callback function instead of calling CIFS_open() directly.
This change fixes querying and creating new MF symlinks on Windows 98. Currently cifs_query_mf_symlink() is not able to detect an MF symlink, and cifs_create_mf_symlink() fails with an EIO error.
Signed-off-by: Pali Rohár pali@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/smb/client/link.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/fs/smb/client/link.c b/fs/smb/client/link.c index c0f101fc1e5d0..d71feb3fdbd2c 100644 --- a/fs/smb/client/link.c +++ b/fs/smb/client/link.c @@ -269,7 +269,7 @@ cifs_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon, struct cifs_open_parms oparms; struct cifs_io_parms io_parms = {0}; int buf_type = CIFS_NO_BUFFER; - FILE_ALL_INFO file_info; + struct cifs_open_info_data query_data;
oparms = (struct cifs_open_parms) { .tcon = tcon, @@ -281,11 +281,11 @@ cifs_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon, .fid = &fid, };
- rc = CIFS_open(xid, &oparms, &oplock, &file_info); + rc = tcon->ses->server->ops->open(xid, &oparms, &oplock, &query_data); if (rc) return rc;
- if (file_info.EndOfFile != cpu_to_le64(CIFS_MF_SYMLINK_FILE_SIZE)) { + if (query_data.fi.EndOfFile != cpu_to_le64(CIFS_MF_SYMLINK_FILE_SIZE)) { rc = -ENOENT; /* it's not a symlink */ goto out; @@ -324,7 +324,7 @@ cifs_create_mf_symlink(unsigned int xid, struct cifs_tcon *tcon, .fid = &fid, };
- rc = CIFS_open(xid, &oparms, &oplock, NULL); + rc = tcon->ses->server->ops->open(xid, &oparms, &oplock, NULL); if (rc) return rc;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pali Rohár pali@kernel.org
[ Upstream commit e94e882a6d69525c07589222cf3a6ff57ad12b5b ]
The SMB negotiate retry functionality in cifs_negotiate() is currently broken and does not work when doing a socket reconnect. The caller of this function, cifs_negotiate_protocol(), requires that tcpStatus stays in CifsInNegotiate after successful execution of the negotiate callback. But if the CIFSSMBNegotiate() called from cifs_negotiate() fails due to connection issues, then tcpStatus is changed, so a repeated CIFSSMBNegotiate() call does not help.
Fix this problem by moving the retry code from the negotiate callback (which is either cifs_negotiate() or smb2_negotiate()) to cifs_negotiate_protocol(), which is the caller of those callbacks. This allows the transitions between tcpStatus states to be handled and implemented correctly, as cifs_negotiate_protocol() already handles them.
With this change, cifs_negotiate_protocol() now also handles the -EAGAIN error set by the RFC1002_NEGATIVE_SESSION_RESPONSE processing after reconnecting with a NetBIOS session.
Signed-off-by: Pali Rohár pali@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/smb/client/connect.c | 10 ++++++++++ fs/smb/client/smb1ops.c | 7 ------- fs/smb/client/smb2ops.c | 3 --- 3 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c index 6aeb25006db82..0baa64c48c3d1 100644 --- a/fs/smb/client/connect.c +++ b/fs/smb/client/connect.c @@ -4178,11 +4178,13 @@ int cifs_negotiate_protocol(const unsigned int xid, struct cifs_ses *ses, struct TCP_Server_Info *server) { + bool in_retry = false; int rc = 0;
if (!server->ops->need_neg || !server->ops->negotiate) return -ENOSYS;
+retry: /* only send once per connect */ spin_lock(&server->srv_lock); if (server->tcpStatus != CifsGood && @@ -4202,6 +4204,14 @@ cifs_negotiate_protocol(const unsigned int xid, struct cifs_ses *ses, spin_unlock(&server->srv_lock);
rc = server->ops->negotiate(xid, ses, server); + if (rc == -EAGAIN) { + /* Allow one retry attempt */ + if (!in_retry) { + in_retry = true; + goto retry; + } + rc = -EHOSTDOWN; + } if (rc == 0) { spin_lock(&server->srv_lock); if (server->tcpStatus == CifsInNegotiate) diff --git a/fs/smb/client/smb1ops.c b/fs/smb/client/smb1ops.c index 225cc7e0304c2..1489b9d21b609 100644 --- a/fs/smb/client/smb1ops.c +++ b/fs/smb/client/smb1ops.c @@ -426,13 +426,6 @@ cifs_negotiate(const unsigned int xid, { int rc; rc = CIFSSMBNegotiate(xid, ses, server); - if (rc == -EAGAIN) { - /* retry only once on 1st time connection */ - set_credits(server, 1); - rc = CIFSSMBNegotiate(xid, ses, server); - if (rc == -EAGAIN) - rc = -EHOSTDOWN; - } return rc; }
diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c index a62f3e5a7689c..c9b9892b510ea 100644 --- a/fs/smb/client/smb2ops.c +++ b/fs/smb/client/smb2ops.c @@ -422,9 +422,6 @@ smb2_negotiate(const unsigned int xid, server->CurrentMid = 0; spin_unlock(&server->mid_lock); rc = SMB2_negotiate(xid, ses, server); - /* BB we probably don't need to retry with modern servers */ - if (rc == -EAGAIN) - rc = -EHOSTDOWN; return rc; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Matt Johnston matt@codeconstruct.com.au
[ Upstream commit 8344213571b2ac8caf013cfd3b37bc3467c3a893 ]
link() is documented to return EPERM when a filesystem doesn't support the operation, so return that instead.
Link: https://github.com/libfuse/libfuse/issues/925 Signed-off-by: Matt Johnston matt@codeconstruct.com.au Signed-off-by: Miklos Szeredi mszeredi@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/fuse/dir.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c index c431abbf48e66..0dbacdd7bb0d8 100644 --- a/fs/fuse/dir.c +++ b/fs/fuse/dir.c @@ -1068,6 +1068,8 @@ static int fuse_link(struct dentry *entry, struct inode *newdir, else if (err == -EINTR) fuse_invalidate_attr(inode);
+ if (err == -ENOSYS) + err = -EPERM; return err; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust trond.myklebust@hammerspace.com
[ Upstream commit 9e8f324bd44c1fe026b582b75213de4eccfa1163 ]
Check that the delegation is still attached after taking the spin lock in nfs_start_delegation_return_locked().
Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/nfs/delegation.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c index 6363bbc37f425..17b38da17288b 100644 --- a/fs/nfs/delegation.c +++ b/fs/nfs/delegation.c @@ -297,7 +297,8 @@ nfs_start_delegation_return_locked(struct nfs_inode *nfsi) if (delegation == NULL) goto out; spin_lock(&delegation->lock); - if (!test_and_set_bit(NFS_DELEGATION_RETURNING, &delegation->flags)) { + if (delegation->inode && + !test_and_set_bit(NFS_DELEGATION_RETURNING, &delegation->flags)) { clear_bit(NFS_DELEGATION_RETURN_DELAYED, &delegation->flags); /* Refcount matched in nfs_end_delegation_return() */ ret = nfs_get_delegation(delegation);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust trond.myklebust@hammerspace.com
[ Upstream commit 8d3ca331026a7f9700d3747eed59a67b8f828cdc ]
Once a task calls exit_signals() it can no longer be signalled. So do not allow it to do killable waits.
Reviewed-by: Jeff Layton jlayton@kernel.org Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/nfs/inode.c | 2 ++ fs/nfs/internal.h | 5 +++++ fs/nfs/nfs3proc.c | 2 +- fs/nfs/nfs4proc.c | 9 +++++++-- 4 files changed, 15 insertions(+), 3 deletions(-)
diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c index 964df0725f4c2..f2e66b946f4b4 100644 --- a/fs/nfs/inode.c +++ b/fs/nfs/inode.c @@ -74,6 +74,8 @@ nfs_fattr_to_ino_t(struct nfs_fattr *fattr)
int nfs_wait_bit_killable(struct wait_bit_key *key, int mode) { + if (unlikely(nfs_current_task_exiting())) + return -EINTR; schedule(); if (signal_pending_state(mode, current)) return -ERESTARTSYS; diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h index ece517ebcca0b..84361674bffc7 100644 --- a/fs/nfs/internal.h +++ b/fs/nfs/internal.h @@ -832,6 +832,11 @@ static inline u32 nfs_stateid_hash(const nfs4_stateid *stateid) NFS4_STATEID_OTHER_SIZE); }
+static inline bool nfs_current_task_exiting(void) +{ + return (current->flags & PF_EXITING) != 0; +} + static inline bool nfs_error_is_fatal(int err) { switch (err) { diff --git a/fs/nfs/nfs3proc.c b/fs/nfs/nfs3proc.c index 2e7579626cf01..f036d30f7515c 100644 --- a/fs/nfs/nfs3proc.c +++ b/fs/nfs/nfs3proc.c @@ -39,7 +39,7 @@ nfs3_rpc_wrapper(struct rpc_clnt *clnt, struct rpc_message *msg, int flags) __set_current_state(TASK_KILLABLE|TASK_FREEZABLE_UNSAFE); schedule_timeout(NFS_JUKEBOX_RETRY_TIME); res = -ERESTARTSYS; - } while (!fatal_signal_pending(current)); + } while (!fatal_signal_pending(current) && !nfs_current_task_exiting()); return res; }
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c index acef50824d1a2..0f28607c57473 100644 --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -422,6 +422,8 @@ static int nfs4_delay_killable(long *timeout) { might_sleep();
+ if (unlikely(nfs_current_task_exiting())) + return -EINTR; __set_current_state(TASK_KILLABLE|TASK_FREEZABLE_UNSAFE); schedule_timeout(nfs4_update_delay(timeout)); if (!__fatal_signal_pending(current)) @@ -433,6 +435,8 @@ static int nfs4_delay_interruptible(long *timeout) { might_sleep();
+ if (unlikely(nfs_current_task_exiting())) + return -EINTR; __set_current_state(TASK_INTERRUPTIBLE|TASK_FREEZABLE_UNSAFE); schedule_timeout(nfs4_update_delay(timeout)); if (!signal_pending(current)) @@ -1712,7 +1716,8 @@ static void nfs_set_open_stateid_locked(struct nfs4_state *state, rcu_read_unlock(); trace_nfs4_open_stateid_update_wait(state->inode, stateid, 0);
- if (!fatal_signal_pending(current)) { + if (!fatal_signal_pending(current) && + !nfs_current_task_exiting()) { if (schedule_timeout(5*HZ) == 0) status = -EAGAIN; else @@ -3500,7 +3505,7 @@ static bool nfs4_refresh_open_old_stateid(nfs4_stateid *dst, write_sequnlock(&state->seqlock); trace_nfs4_close_stateid_update_wait(state->inode, dst, 0);
- if (fatal_signal_pending(current)) + if (fatal_signal_pending(current) || nfs_current_task_exiting()) status = -EINTR; else if (schedule_timeout(5*HZ) != 0)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust trond.myklebust@hammerspace.com
[ Upstream commit 14e41b16e8cb677bb440dca2edba8b041646c742 ]
Once a task calls exit_signals() it can no longer be signalled. So do not allow it to do killable waits.
Reviewed-by: Jeff Layton jlayton@kernel.org Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/sunrpc/sched.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c index 9b45fbdc90cab..73bc39281ef5f 100644 --- a/net/sunrpc/sched.c +++ b/net/sunrpc/sched.c @@ -276,6 +276,8 @@ EXPORT_SYMBOL_GPL(rpc_destroy_wait_queue);
static int rpc_wait_bit_killable(struct wait_bit_key *key, int mode) { + if (unlikely(current->flags & PF_EXITING)) + return -EINTR; schedule(); if (signal_pending_state(mode, current)) return -ERESTARTSYS;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jinqian Yang yangjinqian1@huawei.com
[ Upstream commit e18c09b204e81702ea63b9f1a81ab003b72e3174 ]
The HIP09 processor is vulnerable to the Spectre-BHB (Branch History Buffer) attack, which can be exploited to leak information through branch prediction side channels. This commit adds the MIDR of HIP09 to the list for software mitigation.
Signed-off-by: Jinqian Yang yangjinqian1@huawei.com Link: https://lore.kernel.org/r/20250325141900.2057314-1-yangjinqian1@huawei.com Signed-off-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/include/asm/cputype.h | 2 ++ arch/arm64/kernel/proton-pack.c | 1 + 2 files changed, 3 insertions(+)
diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h index fe022fe2d4f6b..41612b03af638 100644 --- a/arch/arm64/include/asm/cputype.h +++ b/arch/arm64/include/asm/cputype.h @@ -132,6 +132,7 @@ #define FUJITSU_CPU_PART_A64FX 0x001
#define HISI_CPU_PART_TSV110 0xD01 +#define HISI_CPU_PART_HIP09 0xD02
#define APPLE_CPU_PART_M1_ICESTORM 0x022 #define APPLE_CPU_PART_M1_FIRESTORM 0x023 @@ -201,6 +202,7 @@ #define MIDR_NVIDIA_CARMEL MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_CARMEL) #define MIDR_FUJITSU_A64FX MIDR_CPU_MODEL(ARM_CPU_IMP_FUJITSU, FUJITSU_CPU_PART_A64FX) #define MIDR_HISI_TSV110 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_TSV110) +#define MIDR_HISI_HIP09 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_HIP09) #define MIDR_APPLE_M1_ICESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM) #define MIDR_APPLE_M1_FIRESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM) #define MIDR_APPLE_M1_ICESTORM_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM_PRO) diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c index fcc641f30c93d..4978c466e325d 100644 --- a/arch/arm64/kernel/proton-pack.c +++ b/arch/arm64/kernel/proton-pack.c @@ -916,6 +916,7 @@ static u8 spectre_bhb_loop_affected(void) MIDR_ALL_VERSIONS(MIDR_CORTEX_A77), MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1), MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_4XX_GOLD), + MIDR_ALL_VERSIONS(MIDR_HISI_HIP09), {}, }; static const struct midr_range spectre_bhb_k11_list[] = {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Shevchenko andriy.shevchenko@linux.intel.com
[ Upstream commit 196a062641fe68d9bfe0ad36b6cd7628c99ad22c ]
The binary printing functions use a printf()-style format, and the compiler is not happy about them as is:
kernel/trace/trace.c:3292:9: error: function ‘trace_vbprintk’ might be a candidate for ‘gnu_printf’ format attribute [-Werror=suggest-attribute=format]
kernel/trace/trace_seq.c:182:9: error: function ‘trace_seq_bprintf’ might be a candidate for ‘gnu_printf’ format attribute [-Werror=suggest-attribute=format]
Fix the compilation errors by adding the __printf() attribute.
While at it, move the existing __printf() attributes from the implementations to the declarations. This also fixes the incorrect attribute parameters used for trace_array_printk().
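For readers unfamiliar with the attribute, a minimal sketch of what __printf(fmt_idx, first_vararg_idx) means (the function names below are made up for illustration and are not part of the patch):

    /* illustrative declarations only, assuming the usual kernel headers */
    __printf(2, 3)          /* fmt is argument 2, the "..." starts at argument 3 */
    int example_log(int level, const char *fmt, ...);

    __printf(2, 0)          /* va_list variant: fmt is argument 2, no "..." to check */
    int example_vlog(int level, const char *fmt, va_list args);

The (n, 0) form is the one used for the vprintk-style helpers in this patch, since there is no variadic argument list for the compiler to type-check.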
Signed-off-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Reviewed-by: Kees Cook kees@kernel.org Reviewed-by: Petr Mladek pmladek@suse.com Link: https://lore.kernel.org/r/20250321144822.324050-4-andriy.shevchenko@linux.in... Signed-off-by: Petr Mladek pmladek@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/trace.h | 4 ++-- include/linux/trace_seq.h | 8 ++++---- kernel/trace/trace.c | 11 +++-------- kernel/trace/trace.h | 16 +++++++++------- 4 files changed, 18 insertions(+), 21 deletions(-)
diff --git a/include/linux/trace.h b/include/linux/trace.h index 2a70a447184c9..bb4d84f1c58cc 100644 --- a/include/linux/trace.h +++ b/include/linux/trace.h @@ -72,8 +72,8 @@ static inline int unregister_ftrace_export(struct trace_export *export) static inline void trace_printk_init_buffers(void) { } -static inline int trace_array_printk(struct trace_array *tr, unsigned long ip, - const char *fmt, ...) +static inline __printf(3, 4) +int trace_array_printk(struct trace_array *tr, unsigned long ip, const char *fmt, ...) { return 0; } diff --git a/include/linux/trace_seq.h b/include/linux/trace_seq.h index 5a2c650d9e1c1..c230cbd25aee8 100644 --- a/include/linux/trace_seq.h +++ b/include/linux/trace_seq.h @@ -77,8 +77,8 @@ extern __printf(2, 3) void trace_seq_printf(struct trace_seq *s, const char *fmt, ...); extern __printf(2, 0) void trace_seq_vprintf(struct trace_seq *s, const char *fmt, va_list args); -extern void -trace_seq_bprintf(struct trace_seq *s, const char *fmt, const u32 *binary); +extern __printf(2, 0) +void trace_seq_bprintf(struct trace_seq *s, const char *fmt, const u32 *binary); extern int trace_print_seq(struct seq_file *m, struct trace_seq *s); extern int trace_seq_to_user(struct trace_seq *s, char __user *ubuf, int cnt); @@ -100,8 +100,8 @@ extern int trace_seq_hex_dump(struct trace_seq *s, const char *prefix_str, static inline void trace_seq_printf(struct trace_seq *s, const char *fmt, ...) { } -static inline void -trace_seq_bprintf(struct trace_seq *s, const char *fmt, const u32 *binary) +static inline __printf(2, 0) +void trace_seq_bprintf(struct trace_seq *s, const char *fmt, const u32 *binary) { }
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c index 9da29583dfbc7..9e0b9c9a7dff9 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -3422,10 +3422,9 @@ int trace_vbprintk(unsigned long ip, const char *fmt, va_list args) } EXPORT_SYMBOL_GPL(trace_vbprintk);
-__printf(3, 0) -static int -__trace_array_vprintk(struct trace_buffer *buffer, - unsigned long ip, const char *fmt, va_list args) +static __printf(3, 0) +int __trace_array_vprintk(struct trace_buffer *buffer, + unsigned long ip, const char *fmt, va_list args) { struct trace_event_call *call = &event_print; struct ring_buffer_event *event; @@ -3478,7 +3477,6 @@ __trace_array_vprintk(struct trace_buffer *buffer, return len; }
-__printf(3, 0) int trace_array_vprintk(struct trace_array *tr, unsigned long ip, const char *fmt, va_list args) { @@ -3505,7 +3503,6 @@ int trace_array_vprintk(struct trace_array *tr, * Note, trace_array_init_printk() must be called on @tr before this * can be used. */ -__printf(3, 0) int trace_array_printk(struct trace_array *tr, unsigned long ip, const char *fmt, ...) { @@ -3550,7 +3547,6 @@ int trace_array_init_printk(struct trace_array *tr) } EXPORT_SYMBOL_GPL(trace_array_init_printk);
-__printf(3, 4) int trace_array_printk_buf(struct trace_buffer *buffer, unsigned long ip, const char *fmt, ...) { @@ -3566,7 +3562,6 @@ int trace_array_printk_buf(struct trace_buffer *buffer, return ret; }
-__printf(2, 0) int trace_vprintk(unsigned long ip, const char *fmt, va_list args) { return trace_array_vprintk(&global_trace, ip, fmt, args); diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h index aad7fcd84617c..49b297ca7fc72 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -780,13 +780,15 @@ static inline void __init disable_tracing_selftest(const char *reason)
extern void *head_page(struct trace_array_cpu *data); extern unsigned long long ns2usecs(u64 nsec); -extern int -trace_vbprintk(unsigned long ip, const char *fmt, va_list args); -extern int -trace_vprintk(unsigned long ip, const char *fmt, va_list args); -extern int -trace_array_vprintk(struct trace_array *tr, - unsigned long ip, const char *fmt, va_list args); + +__printf(2, 0) +int trace_vbprintk(unsigned long ip, const char *fmt, va_list args); +__printf(2, 0) +int trace_vprintk(unsigned long ip, const char *fmt, va_list args); +__printf(3, 0) +int trace_array_vprintk(struct trace_array *tr, + unsigned long ip, const char *fmt, va_list args); +__printf(3, 4) int trace_array_printk_buf(struct trace_buffer *buffer, unsigned long ip, const char *fmt, ...); void trace_printk_seq(struct trace_seq *s);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tudor Ambarus tudor.ambarus@linaro.org
[ Upstream commit 24fdd5074b205cfb0ef4cd0751a2d03031455929 ]
In case of error, of_parse_phandle_with_args() returns -EINVAL when the passed index is negative, or -ENOENT when the index is for an empty phandle. The mailbox core overwrote the error return code with a less precise -ENODEV. Use the error code returned by of_parse_phandle_with_args().
Signed-off-by: Tudor Ambarus tudor.ambarus@linaro.org Signed-off-by: Jassi Brar jassisinghbrar@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mailbox/mailbox.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c index 4229b9b5da98f..6f54501dc7762 100644 --- a/drivers/mailbox/mailbox.c +++ b/drivers/mailbox/mailbox.c @@ -350,11 +350,12 @@ struct mbox_chan *mbox_request_channel(struct mbox_client *cl, int index)
mutex_lock(&con_mutex);
- if (of_parse_phandle_with_args(dev->of_node, "mboxes", - "#mbox-cells", index, &spec)) { + ret = of_parse_phandle_with_args(dev->of_node, "mboxes", "#mbox-cells", + index, &spec); + if (ret) { dev_dbg(dev, "%s: can't parse "mboxes" property\n", __func__); mutex_unlock(&con_mutex); - return ERR_PTR(-ENODEV); + return ERR_PTR(ret); }
chan = ERR_PTR(-EPROBE_DEFER);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Shixiong Ou oushixiong@kylinos.cn
[ Upstream commit 86d16cd12efa547ed43d16ba7a782c1251c80ea8 ]
Call device_remove_file() when the driver is removed.
Signed-off-by: Shixiong Ou oushixiong@kylinos.cn Signed-off-by: Helge Deller deller@gmx.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/video/fbdev/fsl-diu-fb.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/video/fbdev/fsl-diu-fb.c b/drivers/video/fbdev/fsl-diu-fb.c index ce3c5b0b8f4ef..53be4ab374cc3 100644 --- a/drivers/video/fbdev/fsl-diu-fb.c +++ b/drivers/video/fbdev/fsl-diu-fb.c @@ -1829,6 +1829,7 @@ static int fsl_diu_remove(struct platform_device *pdev) int i;
data = dev_get_drvdata(&pdev->dev); + device_remove_file(&pdev->dev, &data->dev_attr); disable_lcdc(&data->fsl_diu_info[0]);
free_irq(data->irq, data->diu_reg);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zsolt Kajtar soci@c64.rulez.org
[ Upstream commit 892c788d73fe4a94337ed092cb998c49fa8ecaf4 ]
The erase colour calculation for fbcon clearing should use get_color instead of attr_col_ec, like everything else. The latter is similar but is not correct. For example it's missing the depth dependent remapping and doesn't care about blanking.
The problem can be reproduced by setting up the background colour to grey (vt.color=0x70) and having an fbcon console set to 2bpp (4 shades of gray). Now the background attribute should be 1 (dark gray) on the console.
If the screen is scrolled when pressing enter in a shell prompt at the bottom line, then the new line is cleared using colour 7 instead of 1. That's not something fillrect likes (at 2bpp it expects 0-3), so the result is interesting.
This patch switches from attr_col_ec to get_color with vc_video_erase_char for determining the erase colour. That makes attr_col_ec redundant, as no other users were left.
Use correct erase colour for clearing in fbcon
Signed-off-by: Zsolt Kajtar soci@c64.rulez.org Signed-off-by: Helge Deller deller@gmx.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/video/fbdev/core/bitblit.c | 5 ++-- drivers/video/fbdev/core/fbcon.c | 10 +++++--- drivers/video/fbdev/core/fbcon.h | 38 +--------------------------- drivers/video/fbdev/core/fbcon_ccw.c | 5 ++-- drivers/video/fbdev/core/fbcon_cw.c | 5 ++-- drivers/video/fbdev/core/fbcon_ud.c | 5 ++-- drivers/video/fbdev/core/tileblit.c | 8 +++--- 7 files changed, 18 insertions(+), 58 deletions(-)
diff --git a/drivers/video/fbdev/core/bitblit.c b/drivers/video/fbdev/core/bitblit.c index 8587c9da06700..42e681a78136a 100644 --- a/drivers/video/fbdev/core/bitblit.c +++ b/drivers/video/fbdev/core/bitblit.c @@ -59,12 +59,11 @@ static void bit_bmove(struct vc_data *vc, struct fb_info *info, int sy, }
static void bit_clear(struct vc_data *vc, struct fb_info *info, int sy, - int sx, int height, int width) + int sx, int height, int width, int fg, int bg) { - int bgshift = (vc->vc_hi_font_mask) ? 13 : 12; struct fb_fillrect region;
- region.color = attr_bgcol_ec(bgshift, vc, info); + region.color = bg; region.dx = sx * vc->vc_font.width; region.dy = sy * vc->vc_font.height; region.width = width * vc->vc_font.width; diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c index e6640edec155e..538e932055ca5 100644 --- a/drivers/video/fbdev/core/fbcon.c +++ b/drivers/video/fbdev/core/fbcon.c @@ -1240,7 +1240,7 @@ static void fbcon_clear(struct vc_data *vc, int sy, int sx, int height, { struct fb_info *info = fbcon_info_from_console(vc->vc_num); struct fbcon_ops *ops = info->fbcon_par; - + int fg, bg; struct fbcon_display *p = &fb_display[vc->vc_num]; u_int y_break;
@@ -1261,16 +1261,18 @@ static void fbcon_clear(struct vc_data *vc, int sy, int sx, int height, fbcon_clear_margins(vc, 0); }
+ fg = get_color(vc, info, vc->vc_video_erase_char, 1); + bg = get_color(vc, info, vc->vc_video_erase_char, 0); /* Split blits that cross physical y_wrap boundary */
y_break = p->vrows - p->yscroll; if (sy < y_break && sy + height - 1 >= y_break) { u_int b = y_break - sy; - ops->clear(vc, info, real_y(p, sy), sx, b, width); + ops->clear(vc, info, real_y(p, sy), sx, b, width, fg, bg); ops->clear(vc, info, real_y(p, sy + b), sx, height - b, - width); + width, fg, bg); } else - ops->clear(vc, info, real_y(p, sy), sx, height, width); + ops->clear(vc, info, real_y(p, sy), sx, height, width, fg, bg); }
static void fbcon_putcs(struct vc_data *vc, const unsigned short *s, diff --git a/drivers/video/fbdev/core/fbcon.h b/drivers/video/fbdev/core/fbcon.h index 0eaf54a211516..25691d4b027bf 100644 --- a/drivers/video/fbdev/core/fbcon.h +++ b/drivers/video/fbdev/core/fbcon.h @@ -55,7 +55,7 @@ struct fbcon_ops { void (*bmove)(struct vc_data *vc, struct fb_info *info, int sy, int sx, int dy, int dx, int height, int width); void (*clear)(struct vc_data *vc, struct fb_info *info, int sy, - int sx, int height, int width); + int sx, int height, int width, int fb, int bg); void (*putcs)(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx, int fg, int bg); @@ -116,42 +116,6 @@ static inline int mono_col(const struct fb_info *info) return (~(0xfff << max_len)) & 0xff; }
-static inline int attr_col_ec(int shift, struct vc_data *vc, - struct fb_info *info, int is_fg) -{ - int is_mono01; - int col; - int fg; - int bg; - - if (!vc) - return 0; - - if (vc->vc_can_do_color) - return is_fg ? attr_fgcol(shift,vc->vc_video_erase_char) - : attr_bgcol(shift,vc->vc_video_erase_char); - - if (!info) - return 0; - - col = mono_col(info); - is_mono01 = info->fix.visual == FB_VISUAL_MONO01; - - if (attr_reverse(vc->vc_video_erase_char)) { - fg = is_mono01 ? col : 0; - bg = is_mono01 ? 0 : col; - } - else { - fg = is_mono01 ? 0 : col; - bg = is_mono01 ? col : 0; - } - - return is_fg ? fg : bg; -} - -#define attr_bgcol_ec(bgshift, vc, info) attr_col_ec(bgshift, vc, info, 0) -#define attr_fgcol_ec(fgshift, vc, info) attr_col_ec(fgshift, vc, info, 1) - /* * Scroll Method */ diff --git a/drivers/video/fbdev/core/fbcon_ccw.c b/drivers/video/fbdev/core/fbcon_ccw.c index 2789ace796342..9f4d65478554a 100644 --- a/drivers/video/fbdev/core/fbcon_ccw.c +++ b/drivers/video/fbdev/core/fbcon_ccw.c @@ -78,14 +78,13 @@ static void ccw_bmove(struct vc_data *vc, struct fb_info *info, int sy, }
static void ccw_clear(struct vc_data *vc, struct fb_info *info, int sy, - int sx, int height, int width) + int sx, int height, int width, int fg, int bg) { struct fbcon_ops *ops = info->fbcon_par; struct fb_fillrect region; - int bgshift = (vc->vc_hi_font_mask) ? 13 : 12; u32 vyres = GETVYRES(ops->p, info);
- region.color = attr_bgcol_ec(bgshift,vc,info); + region.color = bg; region.dx = sy * vc->vc_font.height; region.dy = vyres - ((sx + width) * vc->vc_font.width); region.height = width * vc->vc_font.width; diff --git a/drivers/video/fbdev/core/fbcon_cw.c b/drivers/video/fbdev/core/fbcon_cw.c index 86a254c1b2b7b..b18e31886da10 100644 --- a/drivers/video/fbdev/core/fbcon_cw.c +++ b/drivers/video/fbdev/core/fbcon_cw.c @@ -63,14 +63,13 @@ static void cw_bmove(struct vc_data *vc, struct fb_info *info, int sy, }
static void cw_clear(struct vc_data *vc, struct fb_info *info, int sy, - int sx, int height, int width) + int sx, int height, int width, int fg, int bg) { struct fbcon_ops *ops = info->fbcon_par; struct fb_fillrect region; - int bgshift = (vc->vc_hi_font_mask) ? 13 : 12; u32 vxres = GETVXRES(ops->p, info);
- region.color = attr_bgcol_ec(bgshift,vc,info); + region.color = bg; region.dx = vxres - ((sy + height) * vc->vc_font.height); region.dy = sx * vc->vc_font.width; region.height = width * vc->vc_font.width; diff --git a/drivers/video/fbdev/core/fbcon_ud.c b/drivers/video/fbdev/core/fbcon_ud.c index 23bc045769d08..b6b074cfd9dc0 100644 --- a/drivers/video/fbdev/core/fbcon_ud.c +++ b/drivers/video/fbdev/core/fbcon_ud.c @@ -64,15 +64,14 @@ static void ud_bmove(struct vc_data *vc, struct fb_info *info, int sy, }
static void ud_clear(struct vc_data *vc, struct fb_info *info, int sy, - int sx, int height, int width) + int sx, int height, int width, int fg, int bg) { struct fbcon_ops *ops = info->fbcon_par; struct fb_fillrect region; - int bgshift = (vc->vc_hi_font_mask) ? 13 : 12; u32 vyres = GETVYRES(ops->p, info); u32 vxres = GETVXRES(ops->p, info);
- region.color = attr_bgcol_ec(bgshift,vc,info); + region.color = bg; region.dy = vyres - ((sy + height) * vc->vc_font.height); region.dx = vxres - ((sx + width) * vc->vc_font.width); region.width = width * vc->vc_font.width; diff --git a/drivers/video/fbdev/core/tileblit.c b/drivers/video/fbdev/core/tileblit.c index 2768eff247ba4..674ca6a410ec8 100644 --- a/drivers/video/fbdev/core/tileblit.c +++ b/drivers/video/fbdev/core/tileblit.c @@ -32,16 +32,14 @@ static void tile_bmove(struct vc_data *vc, struct fb_info *info, int sy, }
static void tile_clear(struct vc_data *vc, struct fb_info *info, int sy, - int sx, int height, int width) + int sx, int height, int width, int fg, int bg) { struct fb_tilerect rect; - int bgshift = (vc->vc_hi_font_mask) ? 13 : 12; - int fgshift = (vc->vc_hi_font_mask) ? 9 : 8;
rect.index = vc->vc_video_erase_char & ((vc->vc_hi_font_mask) ? 0x1ff : 0xff); - rect.fg = attr_fgcol_ec(fgshift, vc, info); - rect.bg = attr_bgcol_ec(bgshift, vc, info); + rect.fg = fg; + rect.bg = bg; rect.sx = sx; rect.sy = sy; rect.width = width;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zsolt Kajtar soci@c64.rulez.org
[ Upstream commit 76d3ca89981354e1f85a3e0ad9ac4217d351cc72 ]
I was wondering why there's garbage at the bottom of the screen when tile blitting is used with an odd mode like 1080, 600 or 200. Sure there's only space for half a tile but the same area is clean when the buffer is bitmap.
Then later I found that it's supposed to be cleaned but that's not implemented. So I took what's in bitblit and adapted it for tileblit.
This implementation was tested for both the horizontal and vertical case, and now does the same as what's done for bitmap buffers.
If anyone is interested in reproducing the problem then I could bet that'd be on an S3 or Ark. Just set up a mode with an odd line count and make sure that the virtual size covers the complete tile at the bottom. E.g. for 600 lines that's 608 virtual lines for a 16 tall tile. Then the bottom area should be cleaned.
For the right side it's more difficult as there the drivers won't let an odd size happen, unless the code is modified. But once it reports back a few pixel columns short then fbcon won't use the last column. With the patch that column is now clean.
Btw. the virtual size should be rounded up by the driver for both axes (not only the horizontal) so that it's dividable by the tile size. That's a driver bug but correcting it is not in scope for this patch.
Implement missing margin clearing for tileblit
Signed-off-by: Zsolt Kajtar soci@c64.rulez.org Signed-off-by: Helge Deller deller@gmx.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/video/fbdev/core/tileblit.c | 37 ++++++++++++++++++++++++++++- 1 file changed, 36 insertions(+), 1 deletion(-)
diff --git a/drivers/video/fbdev/core/tileblit.c b/drivers/video/fbdev/core/tileblit.c index 674ca6a410ec8..b3aa0c6620c7d 100644 --- a/drivers/video/fbdev/core/tileblit.c +++ b/drivers/video/fbdev/core/tileblit.c @@ -74,7 +74,42 @@ static void tile_putcs(struct vc_data *vc, struct fb_info *info, static void tile_clear_margins(struct vc_data *vc, struct fb_info *info, int color, int bottom_only) { - return; + unsigned int cw = vc->vc_font.width; + unsigned int ch = vc->vc_font.height; + unsigned int rw = info->var.xres - (vc->vc_cols*cw); + unsigned int bh = info->var.yres - (vc->vc_rows*ch); + unsigned int rs = info->var.xres - rw; + unsigned int bs = info->var.yres - bh; + unsigned int vwt = info->var.xres_virtual / cw; + unsigned int vht = info->var.yres_virtual / ch; + struct fb_tilerect rect; + + rect.index = vc->vc_video_erase_char & + ((vc->vc_hi_font_mask) ? 0x1ff : 0xff); + rect.fg = color; + rect.bg = color; + + if ((int) rw > 0 && !bottom_only) { + rect.sx = (info->var.xoffset + rs + cw - 1) / cw; + rect.sy = 0; + rect.width = (rw + cw - 1) / cw; + rect.height = vht; + if (rect.width + rect.sx > vwt) + rect.width = vwt - rect.sx; + if (rect.sx < vwt) + info->tileops->fb_tilefill(info, &rect); + } + + if ((int) bh > 0) { + rect.sx = info->var.xoffset / cw; + rect.sy = (info->var.yoffset + bs) / ch; + rect.width = rs / cw; + rect.height = (bh + ch - 1) / ch; + if (rect.height + rect.sy > vht) + rect.height = vht - rect.sy; + if (rect.sy < vht) + info->tileops->fb_tilefill(info, &rect); + } }
static void tile_cursor(struct vc_data *vc, struct fb_info *info, int mode,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pali Rohár pali@kernel.org
[ Upstream commit 781802aa5a5950f99899f13ff9d760f5db81d36d ]
The ip_rfc1001_connect() function, which establishes the NetBIOS session for SMB connections, currently uses the smb_send() function for sending the NetBIOS Session Request packet. That function expects the passed buffer to be an SMB packet, and for SMB2+ connections it mangles the packet header, which breaks the prepared NetBIOS Session Request packet. The result is that it sends a garbage packet for SMB2+ connections, which an SMB2+ server cannot parse. The function does not mangle packets for SMB1 connections, so it somehow works for SMB1.
Fix this problem by using the smb_send_kvec() function instead of smb_send(); it does not mangle the prepared packet and sends it as is. The only API difference is that it takes a struct msghdr (kvec) instead of a packet buffer.
The [MS-SMB2] specification allows the SMB2 protocol to use NetBIOS as a transport protocol. NetBIOS can be used over TCP via port 139, so this is a valid configuration, just not so common. Even recent Windows versions (e.g. Windows Server 2022) still support this configuration: SMB over TCP port 139, including for the modern SMB2 and SMB3 dialects.
This change fixes SMB2 and SMB3 connections over TCP port 139 which requires establishing of NetBIOS session. Tested that this change fixes establishing of SMB2 and SMB3 connections with Windows Server 2022.
Signed-off-by: Pali Rohár pali@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/smb/client/cifsproto.h | 3 +++ fs/smb/client/connect.c | 20 +++++++++++++++----- fs/smb/client/transport.c | 2 +- 3 files changed, 19 insertions(+), 6 deletions(-)
diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h index d1fd54fb3cc14..9a30425b75a96 100644 --- a/fs/smb/client/cifsproto.h +++ b/fs/smb/client/cifsproto.h @@ -30,6 +30,9 @@ extern void cifs_small_buf_release(void *); extern void free_rsp_buf(int, void *); extern int smb_send(struct TCP_Server_Info *, struct smb_hdr *, unsigned int /* length */); +extern int smb_send_kvec(struct TCP_Server_Info *server, + struct msghdr *msg, + size_t *sent); extern unsigned int _get_xid(void); extern void _free_xid(unsigned int); #define get_xid() \ diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c index 0baa64c48c3d1..76c04c4ed45fc 100644 --- a/fs/smb/client/connect.c +++ b/fs/smb/client/connect.c @@ -2966,8 +2966,10 @@ ip_rfc1001_connect(struct TCP_Server_Info *server) * sessinit is sent but no second negprot */ struct rfc1002_session_packet req = {}; - struct smb_hdr *smb_buf = (struct smb_hdr *)&req; + struct msghdr msg = {}; + struct kvec iov = {}; unsigned int len; + size_t sent;
req.trailer.session_req.called_len = sizeof(req.trailer.session_req.called_name);
@@ -2996,10 +2998,18 @@ ip_rfc1001_connect(struct TCP_Server_Info *server) * As per rfc1002, @len must be the number of bytes that follows the * length field of a rfc1002 session request payload. */ - len = sizeof(req) - offsetof(struct rfc1002_session_packet, trailer.session_req); + len = sizeof(req.trailer.session_req); + req.type = RFC1002_SESSION_REQUEST; + req.flags = 0; + req.length = cpu_to_be16(len); + len += offsetof(typeof(req), trailer.session_req); + iov.iov_base = &req; + iov.iov_len = len; + iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, &iov, 1, len); + rc = smb_send_kvec(server, &msg, &sent); + if (rc < 0 || len != sent) + return (rc == -EINTR || rc == -EAGAIN) ? rc : -ECONNABORTED;
- smb_buf->smb_buf_length = cpu_to_be32((RFC1002_SESSION_REQUEST << 24) | len); - rc = smb_send(server, smb_buf, len); /* * RFC1001 layer in at least one server requires very short break before * negprot presumably because not expecting negprot to follow so fast. @@ -3008,7 +3018,7 @@ ip_rfc1001_connect(struct TCP_Server_Info *server) */ usleep_range(1000, 2000);
- return rc; + return 0; }
static int diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c index 3fdafb9297f13..d2867bd263c55 100644 --- a/fs/smb/client/transport.c +++ b/fs/smb/client/transport.c @@ -178,7 +178,7 @@ delete_mid(struct mid_q_entry *mid) * Our basic "send data to server" function. Should be called with srv_mutex * held. The caller is responsible for handling the results. */ -static int +int smb_send_kvec(struct TCP_Server_Info *server, struct msghdr *smb_msg, size_t *sent) {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust trond.myklebust@hammerspace.com
[ Upstream commit 0af5fb5ed3d2fd9e110c6112271f022b744a849a ]
If a containerised process is killed and causes an ENETUNREACH or ENETDOWN error to be propagated to the state manager, then mark the nfs_client as being dead so that we don't loop in functions that are expecting recovery to succeed.
Reviewed-by: Jeff Layton jlayton@kernel.org Reviewed-by: Benjamin Coddington bcodding@redhat.com Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/nfs/nfs4state.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c index 48ea406604229..80a7c5bd7a476 100644 --- a/fs/nfs/nfs4state.c +++ b/fs/nfs/nfs4state.c @@ -2737,7 +2737,15 @@ static void nfs4_state_manager(struct nfs_client *clp) pr_warn_ratelimited("NFS: state manager%s%s failed on NFSv4 server %s" " with error %d\n", section_sep, section, clp->cl_hostname, -status); - ssleep(1); + switch (status) { + case -ENETDOWN: + case -ENETUNREACH: + nfs_mark_client_ready(clp, -EIO); + break; + default: + ssleep(1); + break; + } out_drain: memalloc_nofs_restore(memflags); nfs4_end_drain_session(clp);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust trond.myklebust@hammerspace.com
[ Upstream commit bf9be373b830a3e48117da5d89bb6145a575f880 ]
The autobind setting was supposed to be determined in rpc_create(), since commit c2866763b402 ("SUNRPC: use sockaddr + size when creating remote transport endpoints").
Reviewed-by: Jeff Layton jlayton@kernel.org Reviewed-by: Benjamin Coddington bcodding@redhat.com Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/sunrpc/clnt.c | 3 --- 1 file changed, 3 deletions(-)
diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c index b6529a9d37d37..a390a4e5592f2 100644 --- a/net/sunrpc/clnt.c +++ b/net/sunrpc/clnt.c @@ -275,9 +275,6 @@ static struct rpc_xprt *rpc_clnt_set_transport(struct rpc_clnt *clnt, old = rcu_dereference_protected(clnt->cl_xprt, lockdep_is_held(&clnt->cl_lock));
- if (!xprt_bound(xprt)) - clnt->cl_autobind = 1; - clnt->cl_timeout = timeout; rcu_assign_pointer(clnt->cl_xprt, xprt); spin_unlock(&clnt->cl_lock);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust trond.myklebust@hammerspace.com
[ Upstream commit 214c13e380ad7636631279f426387f9c4e3c14d9 ]
If we already had a valid port number for the RPC service, then we should not allow the rpcbind client to set it to the invalid value '0'.
Reviewed-by: Jeff Layton jlayton@kernel.org Reviewed-by: Benjamin Coddington bcodding@redhat.com Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/sunrpc/rpcb_clnt.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c index 82afb56695f8d..1ec20163a0b7d 100644 --- a/net/sunrpc/rpcb_clnt.c +++ b/net/sunrpc/rpcb_clnt.c @@ -797,9 +797,10 @@ static void rpcb_getport_done(struct rpc_task *child, void *data) }
trace_rpcb_setport(child, map->r_status, map->r_port); - xprt->ops->set_port(xprt, map->r_port); - if (map->r_port) + if (map->r_port) { + xprt->ops->set_port(xprt, map->r_port); xprt_set_bound(xprt); + } }
/*
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alice Guo alice.guo@nxp.com
[ Upstream commit 229f3feb4b0442835b27d519679168bea2de96c2 ]
Enable power-down of the TMU (Thermal Management Unit) for TMU version 2 during system suspend to save power. This saves approximately 4.3 mW on VDD_ANA_1P8 on i.MX93 platforms.
Signed-off-by: Alice Guo alice.guo@nxp.com Signed-off-by: Frank Li Frank.Li@nxp.com Link: https://lore.kernel.org/r/20241209164859.3758906-2-Frank.Li@nxp.com Signed-off-by: Daniel Lezcano daniel.lezcano@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/thermal/qoriq_thermal.c | 13 +++++++++++++ 1 file changed, 13 insertions(+)
diff --git a/drivers/thermal/qoriq_thermal.c b/drivers/thermal/qoriq_thermal.c index d111e218f362e..b33cb1d880b74 100644 --- a/drivers/thermal/qoriq_thermal.c +++ b/drivers/thermal/qoriq_thermal.c @@ -19,6 +19,7 @@ #define SITES_MAX 16 #define TMR_DISABLE 0x0 #define TMR_ME 0x80000000 +#define TMR_CMD BIT(29) #define TMR_ALPF 0x0c000000 #define TMR_ALPF_V2 0x03000000 #define TMTMIR_DEFAULT 0x0000000f @@ -345,6 +346,12 @@ static int __maybe_unused qoriq_tmu_suspend(struct device *dev) if (ret) return ret;
+ if (data->ver > TMU_VER1) { + ret = regmap_set_bits(data->regmap, REGS_TMR, TMR_CMD); + if (ret) + return ret; + } + clk_disable_unprepare(data->clk);
return 0; @@ -359,6 +366,12 @@ static int __maybe_unused qoriq_tmu_resume(struct device *dev) if (ret) return ret;
+ if (data->ver > TMU_VER1) { + ret = regmap_clear_bits(data->regmap, REGS_TMR, TMR_CMD); + if (ret) + return ret; + } + /* Enable monitoring */ return regmap_update_bits(data->regmap, REGS_TMR, TMR_ME, TMR_ME); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jing Su jingsusu@didiglobal.com
[ Upstream commit 3a17f23f7c36bac3a3584aaf97d3e3e0b2790396 ]
Executing dql_reset after setting a non-zero value for limit_min can lead to an unreasonable situation where dql->limit is less than dql->limit_min.
For instance, after setting /sys/class/net/eth*/queues/tx-0/byte_queue_limits/limit_min, an ifconfig down/up operation might cause the ethernet driver to call netdev_tx_reset_queue, which in turn invokes dql_reset.
In this case, dql->limit is reset to 0 while dql->limit_min remains a non-zero value, which is unexpected. The limit should always be greater than or equal to limit_min.
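For illustration only (the value 3000 is assumed, and in real use struct dql is embedded in a netdev queue rather than declared on the stack), the broken sequence looks roughly like:

    struct dql q = { .min_limit = 3000 };   /* set via .../byte_queue_limits/limit_min */

    dql_reset(&q);      /* e.g. via netdev_tx_reset_queue() on ifconfig down/up */
    /* before the fix: q.limit == 0, violating limit >= limit_min */
    /* after the fix:  q.limit == q.min_limit == 3000 */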
Signed-off-by: Jing Su jingsusu@didiglobal.com Link: https://patch.msgid.link/Z9qHD1s/NEuQBdgH@pilot-ThinkCentre-M930t-N000 Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- lib/dynamic_queue_limits.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/dynamic_queue_limits.c b/lib/dynamic_queue_limits.c index fde0aa2441480..a75a9ca46b594 100644 --- a/lib/dynamic_queue_limits.c +++ b/lib/dynamic_queue_limits.c @@ -116,7 +116,7 @@ EXPORT_SYMBOL(dql_completed); void dql_reset(struct dql *dql) { /* Reset all dynamic values */ - dql->limit = 0; + dql->limit = dql->min_limit; dql->num_queued = 0; dql->num_completed = 0; dql->last_obj_cnt = 0;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ryo Takakura ryotkkr98@gmail.com
[ Upstream commit 61c39d8c83e2077f33e0a2c8980a76a7f323f0ce ]
Since:
0c1d7a2c2d32 ("lockdep: Remove softirq accounting on PREEMPT_RT.")
the wait context test for mutex usage within "in softirq context" fails as it references @softirq_context:
 | wait context tests |
 --------------------------------------------------------------------------
                                    | rcu  | raw  | spin |mutex |
 --------------------------------------------------------------------------
                 in hardirq context:  ok  |  ok  |  ok  |  ok  |
  in hardirq context (not threaded):  ok  |  ok  |  ok  |  ok  |
                 in softirq context:  ok  |  ok  |  ok  |FAILED|
As a fix, add a lockdep map for the BH disabled section. This fixes the issue by letting us catch cases where local_bh_disable() gets called with preemption disabled and local_lock doesn't get acquired. In the case of the "in softirq context" selftest, local_bh_disable() was being called with preemption disabled as it's early in the boot.
[ boqun: Move the lockdep annotations into __local_bh_*() to avoid false positives because of unpaired local_bh_disable() reported by Borislav Petkov and Peter Zijlstra, and make bh_lock_map only exist for PREEMPT_RT. ]
[ mingo: Restored authorship and improved the bh_lock_map definition. ]
Signed-off-by: Ryo Takakura ryotkkr98@gmail.com Signed-off-by: Boqun Feng boqun.feng@gmail.com Signed-off-by: Ingo Molnar mingo@kernel.org Link: https://lore.kernel.org/r/20250321143322.79651-1-boqun.feng@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/softirq.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+)
diff --git a/kernel/softirq.c b/kernel/softirq.c index 6665f5cd60cb0..9ab5ca399a990 100644 --- a/kernel/softirq.c +++ b/kernel/softirq.c @@ -140,6 +140,18 @@ static DEFINE_PER_CPU(struct softirq_ctrl, softirq_ctrl) = { .lock = INIT_LOCAL_LOCK(softirq_ctrl.lock), };
+#ifdef CONFIG_DEBUG_LOCK_ALLOC +static struct lock_class_key bh_lock_key; +struct lockdep_map bh_lock_map = { + .name = "local_bh", + .key = &bh_lock_key, + .wait_type_outer = LD_WAIT_FREE, + .wait_type_inner = LD_WAIT_CONFIG, /* PREEMPT_RT makes BH preemptible. */ + .lock_type = LD_LOCK_PERCPU, +}; +EXPORT_SYMBOL_GPL(bh_lock_map); +#endif + /** * local_bh_blocked() - Check for idle whether BH processing is blocked * @@ -162,6 +174,8 @@ void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
WARN_ON_ONCE(in_hardirq());
+ lock_map_acquire_read(&bh_lock_map); + /* First entry of a task into a BH disabled section? */ if (!current->softirq_disable_cnt) { if (preemptible()) { @@ -225,6 +239,8 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt) WARN_ON_ONCE(in_hardirq()); lockdep_assert_irqs_enabled();
+ lock_map_release(&bh_lock_map); + local_irq_save(flags); curcnt = __this_cpu_read(softirq_ctrl.cnt);
@@ -275,6 +291,8 @@ static inline void ksoftirqd_run_begin(void) /* Counterpart to ksoftirqd_run_begin() */ static inline void ksoftirqd_run_end(void) { + /* pairs with the lock_map_acquire_read() in ksoftirqd_run_begin() */ + lock_map_release(&bh_lock_map); __local_bh_enable(SOFTIRQ_OFFSET, true); WARN_ON_ONCE(in_interrupt()); local_irq_enable();
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Josh Poimboeuf jpoimboe@kernel.org
[ Upstream commit e1a9dda74dbffbc3fa2069ff418a1876dc99fb14 ]
If opts.uaccess isn't set, the uaccess validation is disabled, but only partially: it doesn't read the uaccess_safe_builtin list but still tries to do the validation. Disable it completely to prevent false warnings.
Signed-off-by: Josh Poimboeuf jpoimboe@kernel.org Signed-off-by: Ingo Molnar mingo@kernel.org Cc: Linus Torvalds torvalds@linux-foundation.org Link: https://lore.kernel.org/r/0e95581c1d2107fb5f59418edf2b26bba38b0cbb.174285284... Signed-off-by: Sasha Levin sashal@kernel.org --- tools/objtool/check.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 828c91aaf55bd..bf75628c5389a 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -3083,7 +3083,7 @@ static int handle_insn_ops(struct instruction *insn, if (update_cfi_state(insn, next_insn, &state->cfi, op)) return 1;
- if (!insn->alt_group) + if (!opts.uaccess || !insn->alt_group) continue;
if (op->dest.type == OP_DEST_PUSHF) { @@ -3535,6 +3535,9 @@ static int validate_branch(struct objtool_file *file, struct symbol *func, return 0;
case INSN_STAC: + if (!opts.uaccess) + break; + if (state.uaccess) { WARN_FUNC("recursive UACCESS enable", sec, insn->offset); return 1; @@ -3544,6 +3547,9 @@ static int validate_branch(struct objtool_file *file, struct symbol *func, break;
case INSN_CLAC: + if (!opts.uaccess) + break; + if (!state.uaccess && func) { WARN_FUNC("redundant UACCESS disable", sec, insn->offset); return 1; @@ -3956,7 +3962,8 @@ static int validate_symbol(struct objtool_file *file, struct section *sec, if (!insn || insn->ignore || insn->visited) return 0;
- state->uaccess = sym->uaccess_safe; + if (opts.uaccess) + state->uaccess = sym->uaccess_safe;
ret = validate_branch(file, insn->func, insn, *state); if (ret && opts.backtrace)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Frank Li Frank.Li@nxp.com
[ Upstream commit f3e1dccba0a0833fc9a05fb838ebeb6ea4ca0e1a ]
Most systems' PCIe outbound map windows have non-zero physical addresses, but the possibility of encountering zero increased after the following commit ("PCI: dwc: Use parent_bus_offset").
'ep->outbound_addr[n]', representing 'parent_bus_address', might be 0 on some hardware, which trims high address bits through bus fabric before sending to the PCIe controller.
Replace the iteration logic with 'for_each_set_bit()' to ensure only allocated map windows are iterated when determining the ATU index from a given address.
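A minimal sketch of why this helps (the bitmap value below is assumed for illustration): a plain 0..num_ob_windows loop also visits windows that were never allocated, and such a slot's outbound_addr may be 0 and spuriously match a lookup, whereas for_each_set_bit() only visits windows whose bit is set in ob_window_map:

    unsigned long ob_window_map = 0x5;  /* assume windows 0 and 2 are allocated */
    u32 index;

    for_each_set_bit(index, &ob_window_map, 4)
        pr_info("checking window %u\n", index); /* visits 0 and 2, skips 1 and 3 */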
Link: https://lore.kernel.org/r/20250315201548.858189-12-helgaas@kernel.org Signed-off-by: Frank Li Frank.Li@nxp.com Signed-off-by: Bjorn Helgaas bhelgaas@google.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pci/controller/dwc/pcie-designware-ep.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c index 449ad709495d3..3b3f079d0d2dd 100644 --- a/drivers/pci/controller/dwc/pcie-designware-ep.c +++ b/drivers/pci/controller/dwc/pcie-designware-ep.c @@ -283,7 +283,7 @@ static int dw_pcie_find_index(struct dw_pcie_ep *ep, phys_addr_t addr, u32 index; struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
- for (index = 0; index < pci->num_ob_windows; index++) { + for_each_set_bit(index, ep->ob_window_map, pci->num_ob_windows) { if (ep->outbound_addr[index] != addr) continue; *atu_index = index;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ian Rogers irogers@google.com
[ Upstream commit 935e7cb5bb80106ff4f2fe39640f430134ef8cd8 ]
Separate test log files from object files. Depend on the test log output but don't pass it to the linker.
Reviewed-by: James Clark james.clark@linaro.org Signed-off-by: Ian Rogers irogers@google.com Link: https://lore.kernel.org/r/20250311213628.569562-2-irogers@google.com Signed-off-by: Namhyung Kim namhyung@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- tools/build/Makefile.build | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/tools/build/Makefile.build b/tools/build/Makefile.build index 715092fc6a239..6a043b729b367 100644 --- a/tools/build/Makefile.build +++ b/tools/build/Makefile.build @@ -130,6 +130,10 @@ objprefix := $(subst ./,,$(OUTPUT)$(dir)/) obj-y := $(addprefix $(objprefix),$(obj-y)) subdir-obj-y := $(addprefix $(objprefix),$(subdir-obj-y))
+# Separate out test log files from real build objects. +test-y := $(filter %_log, $(obj-y)) +obj-y := $(filter-out %_log, $(obj-y)) + # Final '$(obj)-in.o' object in-target := $(objprefix)$(obj)-in.o
@@ -140,7 +144,7 @@ $(subdir-y):
$(sort $(subdir-obj-y)): $(subdir-y) ;
-$(in-target): $(obj-y) FORCE +$(in-target): $(obj-y) $(test-y) FORCE $(call rule_mkdir) $(call if_changed,$(host)ld_multi)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust trond.myklebust@hammerspace.com
[ Upstream commit aa42add73ce9b9e3714723d385c254b75814e335 ]
If the client should see an ENETDOWN when trying to connect to the data server, it might still be able to talk to the metadata server through another NIC. If so, report the error.
Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com Reviewed-by: Jeff Layton jlayton@kernel.org Tested-by: Jeff Layton jlayton@kernel.org Acked-by: Chuck Lever chuck.lever@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/nfs/flexfilelayout/flexfilelayout.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c index 8056b05bd8dca..07e5ea64dcd68 100644 --- a/fs/nfs/flexfilelayout/flexfilelayout.c +++ b/fs/nfs/flexfilelayout/flexfilelayout.c @@ -1255,6 +1255,7 @@ static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg, case -ECONNRESET: case -EHOSTDOWN: case -EHOSTUNREACH: + case -ENETDOWN: case -ENETUNREACH: case -EADDRINUSE: case -ENOBUFS:
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Roger Pau Monne roger.pau@citrix.com
[ Upstream commit 6c4d5aadf5df31ea0ac025980670eee9beaf466b ]
MSI remapping bypass (directly configuring MSI entries for devices on the VMD bus) won't work under Xen, as Xen is not aware of devices on such a bus and hence cannot configure the entries using the pIRQ interface in the PV case, and in the PVH case traps won't be set up for MSI entries for such devices.
Until Xen is aware of devices on the VMD bus, prevent the VMD_FEAT_CAN_BYPASS_MSI_REMAP capability from being used when running as any kind of Xen guest.
The MSI remapping bypass is an optional feature of VMD bridges, and hence when running under Xen it will be masked and devices will be forced to have their interrupts redirected through the VMD bridge. That mode of operation must always be supported by VMD bridges and works when Xen is not aware of devices behind the VMD bridge.
Signed-off-by: Roger Pau Monné roger.pau@citrix.com Acked-by: Bjorn Helgaas bhelgaas@google.com Message-ID: 20250219092059.90850-3-roger.pau@citrix.com Signed-off-by: Juergen Gross jgross@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pci/controller/vmd.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+)
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c index 09995b6e73bcc..771ff0f6971f9 100644 --- a/drivers/pci/controller/vmd.c +++ b/drivers/pci/controller/vmd.c @@ -17,6 +17,8 @@ #include <linux/rculist.h> #include <linux/rcupdate.h>
+#include <xen/xen.h> + #include <asm/irqdomain.h>
#define VMD_CFGBAR 0 @@ -919,6 +921,24 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id) struct vmd_dev *vmd; int err;
+ if (xen_domain()) { + /* + * Xen doesn't have knowledge about devices in the VMD bus + * because the config space of devices behind the VMD bridge is + * not known to Xen, and hence Xen cannot discover or configure + * them in any way. + * + * Bypass of MSI remapping won't work in that case as direct + * write by Linux to the MSI entries won't result in functional + * interrupts, as Xen is the entity that manages the host + * interrupt controller and must configure interrupts. However + * multiplexing of interrupts by the VMD bridge will work under + * Xen, so force the usage of that mode which must always be + * supported by VMD bridges. + */ + features &= ~VMD_FEAT_CAN_BYPASS_MSI_REMAP; + } + if (resource_size(&dev->resource[VMD_CFGBAR]) < (1 << 20)) return -ENOMEM;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Robert Richter rrichter@amd.com
[ Upstream commit ef1d3455bbc1922f94a91ed58d3d7db440652959 ]
If a faulty CXL memory device returns a broken zero LSA size in its memory device information (Identify Memory Device (Opcode 4000h), CXL spec. 3.1, 8.2.9.9.1.1), a divide error occurs in the libnvdimm driver:
Oops: divide error: 0000 [#1] PREEMPT SMP NOPTI
RIP: 0010:nd_label_data_init+0x10e/0x800 [libnvdimm]
Code and flow:
1) CXL Command 4000h returns LSA size = 0

2) config_size is assigned to zero LSA size (CXL pmem driver):

   drivers/cxl/pmem.c:     .config_size = mds->lsa_size,

3) max_xfer is set to zero (nvdimm driver):

   drivers/nvdimm/label.c: max_xfer = min_t(size_t, ndd->nsarea.max_xfer, config_size);

4) A subsequent DIV_ROUND_UP() causes a division by zero:

   drivers/nvdimm/label.c: /* Make our initial read size a multiple of max_xfer size */
   drivers/nvdimm/label.c: read_size = min(DIV_ROUND_UP(read_size, max_xfer) * max_xfer,
   drivers/nvdimm/label.c-                 config_size);
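For reference, DIV_ROUND_UP(n, d) expands to ((n) + (d) - 1) / (d), so with a zero config_size the chain above effectively becomes:

    max_xfer  = min_t(size_t, ndd->nsarea.max_xfer, 0);    /* -> 0 */
    read_size = min(DIV_ROUND_UP(read_size, 0) * 0, 0);    /* ((read_size + 0 - 1) / 0) -> #DE */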
Fix this by checking the config size parameter by extending an existing check.
Signed-off-by: Robert Richter rrichter@amd.com Reviewed-by: Pankaj Gupta pankaj.gupta@amd.com Reviewed-by: Ira Weiny ira.weiny@intel.com Link: https://patch.msgid.link/20250320112223.608320-1-rrichter@amd.com Signed-off-by: Ira Weiny ira.weiny@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvdimm/label.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c index 082253a3a9560..04f4a049599a1 100644 --- a/drivers/nvdimm/label.c +++ b/drivers/nvdimm/label.c @@ -442,7 +442,8 @@ int nd_label_data_init(struct nvdimm_drvdata *ndd) if (ndd->data) return 0;
- if (ndd->nsarea.status || ndd->nsarea.max_xfer == 0) { + if (ndd->nsarea.status || ndd->nsarea.max_xfer == 0 || + ndd->nsarea.config_size == 0) { dev_dbg(ndd->dev, "failed to init config data area: (%u:%u)\n", ndd->nsarea.max_xfer, ndd->nsarea.config_size); return -ENXIO;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Erick Shepherd erick.shepherd@ni.com
[ Upstream commit 31e75ed964582257f59156ce6a42860e1ae4cc39 ]
The SD spec version 6.0 section 6.4.1.5 requires that Vdd must be lowered to less than 0.5V for a minimum of 1 ms when powering off a card. Increase wait to 15 ms so that voltage has time to drain down to 0.5V and cards can power off correctly. Issues with voltage drain time were only observed on Apollo Lake and Bay Trail host controllers so this fix is limited to those devices.
Signed-off-by: Erick Shepherd erick.shepherd@ni.com Acked-by: Adrian Hunter adrian.hunter@intel.com Link: https://lore.kernel.org/r/20250314195021.1588090-1-erick.shepherd@ni.com Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mmc/host/sdhci-pci-core.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c index 5a5cc40d4bc37..c71d9956b398d 100644 --- a/drivers/mmc/host/sdhci-pci-core.c +++ b/drivers/mmc/host/sdhci-pci-core.c @@ -613,8 +613,12 @@ static void sdhci_intel_set_power(struct sdhci_host *host, unsigned char mode,
sdhci_set_power(host, mode, vdd);
- if (mode == MMC_POWER_OFF) + if (mode == MMC_POWER_OFF) { + if (slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_APL_SD || + slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_BYT_SD) + usleep_range(15000, 17500); return; + }
/* * Bus power might not enable after D3 -> D0 transition due to the
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Philip Redkin me@rarity.fan
[ Upstream commit 631ca8909fd5c62b9fda9edda93924311a78a9c4 ]
At least with CONFIG_PHYSICAL_START=0x100000, if there is < 4 MiB of contiguous free memory available at this point, the kernel will crash and burn because memblock_phys_alloc_range() returns 0 on failure, which leads memblock_phys_free() to throw the first 4 MiB of physical memory to the wolves.
At a minimum it should fail gracefully with a meaningful diagnostic, but in fact everything seems to work fine without the weird reserve allocation.
Signed-off-by: Philip Redkin me@rarity.fan Signed-off-by: Ingo Molnar mingo@kernel.org Cc: Dave Hansen dave.hansen@linux.intel.com Cc: Rik van Riel riel@surriel.com Cc: "H. Peter Anvin" hpa@zytor.com Link: https://lore.kernel.org/r/94b3e98f-96a7-3560-1f76-349eb95ccf7f@rarity.fan Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/mm/init.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c index ab697ee645288..446bf7fbc3250 100644 --- a/arch/x86/mm/init.c +++ b/arch/x86/mm/init.c @@ -654,8 +654,13 @@ static void __init memory_map_top_down(unsigned long map_start, */ addr = memblock_phys_alloc_range(PMD_SIZE, PMD_SIZE, map_start, map_end); - memblock_phys_free(addr, PMD_SIZE); - real_end = addr + PMD_SIZE; + if (!addr) { + pr_warn("Failed to release memory for alloc_low_pages()"); + real_end = max(map_start, ALIGN_DOWN(map_end, PMD_SIZE)); + } else { + memblock_phys_free(addr, PMD_SIZE); + real_end = addr + PMD_SIZE; + }
/* step_size need to be small so pgt_buf from BRK could cover it */ step_size = PMD_SIZE;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Stephan Gerhold stephan.gerhold@kernkonzept.com
[ Upstream commit d4f35233a6345f62637463ef6e0708f44ffaa583 ]
When the I2C QUP controller is used together with a DMA engine it needs to vote for the interconnect path to the DRAM. Otherwise it may be unable to access the memory quickly enough.
The requested peak bandwidth is dependent on the I2C core clock.
To avoid sending votes too often the bandwidth is always requested when a DMA transfer starts, but dropped only on runtime suspend. Runtime suspend should only happen if no transfer is active. After resumption we can defer the next vote until the first DMA transfer actually happens.
The implementation is largely identical to the one introduced for spi-qup in commit ecdaa9473019 ("spi: qup: Vote for interconnect bandwidth to DRAM") since both drivers represent the same hardware block.
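As a rough worked example (the 19.2 MHz clock below is an assumed value, not taken from the patch), the peak vote is the core clock multiplied by the 8-byte bus width:

    /* assume a 19.2 MHz core clock, with QUP_BUS_WIDTH == 8 */
    needed_peak_bw = Bps_to_icc(19200000 * QUP_BUS_WIDTH);  /* ~153.6 MB/s peak */
    ret = icc_set_bw(qup->icc_path, 0, needed_peak_bw);     /* avg = 0, peak-only vote */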
Signed-off-by: Stephan Gerhold stephan.gerhold@kernkonzept.com Signed-off-by: Andi Shyti andi.shyti@kernel.org Link: https://lore.kernel.org/r/20231128-i2c-qup-dvfs-v1-3-59a0e3039111@kernkonzep... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/i2c/busses/i2c-qup.c | 36 ++++++++++++++++++++++++++++++++++++ 1 file changed, 36 insertions(+)
diff --git a/drivers/i2c/busses/i2c-qup.c b/drivers/i2c/busses/i2c-qup.c index 78682388e02ed..82de4651d18f0 100644 --- a/drivers/i2c/busses/i2c-qup.c +++ b/drivers/i2c/busses/i2c-qup.c @@ -14,6 +14,7 @@ #include <linux/dma-mapping.h> #include <linux/err.h> #include <linux/i2c.h> +#include <linux/interconnect.h> #include <linux/interrupt.h> #include <linux/io.h> #include <linux/module.h> @@ -150,6 +151,8 @@ /* TAG length for DATA READ in RX FIFO */ #define READ_RX_TAGS_LEN 2
+#define QUP_BUS_WIDTH 8 + static unsigned int scl_freq; module_param_named(scl_freq, scl_freq, uint, 0444); MODULE_PARM_DESC(scl_freq, "SCL frequency override"); @@ -227,6 +230,7 @@ struct qup_i2c_dev { int irq; struct clk *clk; struct clk *pclk; + struct icc_path *icc_path; struct i2c_adapter adap;
int clk_ctl; @@ -255,6 +259,10 @@ struct qup_i2c_dev { /* To configure when bus is in run state */ u32 config_run;
+ /* bandwidth votes */ + u32 src_clk_freq; + u32 cur_bw_clk_freq; + /* dma parameters */ bool is_dma; /* To check if the current transfer is using DMA */ @@ -453,6 +461,23 @@ static int qup_i2c_bus_active(struct qup_i2c_dev *qup, int len) return ret; }
+static int qup_i2c_vote_bw(struct qup_i2c_dev *qup, u32 clk_freq) +{ + u32 needed_peak_bw; + int ret; + + if (qup->cur_bw_clk_freq == clk_freq) + return 0; + + needed_peak_bw = Bps_to_icc(clk_freq * QUP_BUS_WIDTH); + ret = icc_set_bw(qup->icc_path, 0, needed_peak_bw); + if (ret) + return ret; + + qup->cur_bw_clk_freq = clk_freq; + return 0; +} + static void qup_i2c_write_tx_fifo_v1(struct qup_i2c_dev *qup) { struct qup_i2c_block *blk = &qup->blk; @@ -840,6 +865,10 @@ static int qup_i2c_bam_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int ret = 0; int idx = 0;
+ ret = qup_i2c_vote_bw(qup, qup->src_clk_freq); + if (ret) + return ret; + enable_irq(qup->irq); ret = qup_i2c_req_dma(qup);
@@ -1645,6 +1674,7 @@ static void qup_i2c_disable_clocks(struct qup_i2c_dev *qup) config = readl(qup->base + QUP_CONFIG); config |= QUP_CLOCK_AUTO_GATE; writel(config, qup->base + QUP_CONFIG); + qup_i2c_vote_bw(qup, 0); clk_disable_unprepare(qup->pclk); }
@@ -1745,6 +1775,11 @@ static int qup_i2c_probe(struct platform_device *pdev) goto fail_dma; } qup->is_dma = true; + + qup->icc_path = devm_of_icc_get(&pdev->dev, NULL); + if (IS_ERR(qup->icc_path)) + return dev_err_probe(&pdev->dev, PTR_ERR(qup->icc_path), + "failed to get interconnect path\n"); }
nodma: @@ -1793,6 +1828,7 @@ static int qup_i2c_probe(struct platform_device *pdev) qup_i2c_enable_clocks(qup); src_clk_freq = clk_get_rate(qup->clk); } + qup->src_clk_freq = src_clk_freq;
/* * Bootloaders might leave a pending interrupt on certain QUP's,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vitalii Mordan mordan@ispras.ru
[ Upstream commit be7113d2e2a6f20cbee99c98d261a1fd6fd7b549 ]
If the clock i2c->clk was not enabled in i2c_pxa_probe(), it should not be disabled in any path.
Found by Linux Verification Center (linuxtesting.org) with Klever.
Signed-off-by: Vitalii Mordan mordan@ispras.ru Signed-off-by: Andi Shyti andi.shyti@kernel.org Link: https://lore.kernel.org/r/20250212172803.1422136-1-mordan@ispras.ru Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/i2c/busses/i2c-pxa.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/i2c/busses/i2c-pxa.c b/drivers/i2c/busses/i2c-pxa.c index ade3f0ea59551..8263e017577de 100644 --- a/drivers/i2c/busses/i2c-pxa.c +++ b/drivers/i2c/busses/i2c-pxa.c @@ -1508,7 +1508,10 @@ static int i2c_pxa_probe(struct platform_device *dev) i2c->adap.name); }
- clk_prepare_enable(i2c->clk); + ret = clk_prepare_enable(i2c->clk); + if (ret) + return dev_err_probe(&dev->dev, ret, + "failed to enable clock\n");
if (i2c->use_pio) { i2c->adap.algo = &i2c_pxa_pio_algorithm;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Boris Burkov boris@bur.io
[ Upstream commit 895c6721d310c036dcfebb5ab845822229fa35eb ]
Currently, the async discard machinery owns a ref to the block_group when the block_group is queued on a discard list. However, to handle races with discard cancellation and the discard workfn, we have a specific logic to detect that the block_group is *currently* running in the workfn, to protect the workfn's usage amidst cancellation.
As far as I can tell, this doesn't have any overt bugs (though finish_discard_pass() and remove_from_discard_list() racing can have a surprising outcome for the caller of remove_from_discard_list(), in that the block group is added back to the end of the list).
But it is needlessly complicated to rely on locking and the nullity of discard_ctl->block_group. Simplify this significantly by just taking a refcount while we are in the workfn and dropping it unconditionally in both the remove and workfn paths, regardless of whether they race.
Reviewed-by: Filipe Manana fdmanana@suse.com Signed-off-by: Boris Burkov boris@bur.io Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/btrfs/discard.c | 34 ++++++++++++++++------------------ 1 file changed, 16 insertions(+), 18 deletions(-)
diff --git a/fs/btrfs/discard.c b/fs/btrfs/discard.c index 7b2f77a8aa982..a90f3cb83c709 100644 --- a/fs/btrfs/discard.c +++ b/fs/btrfs/discard.c @@ -152,13 +152,7 @@ static bool remove_from_discard_list(struct btrfs_discard_ctl *discard_ctl, block_group->discard_eligible_time = 0; queued = !list_empty(&block_group->discard_list); list_del_init(&block_group->discard_list); - /* - * If the block group is currently running in the discard workfn, we - * don't want to deref it, since it's still being used by the workfn. - * The workfn will notice this case and deref the block group when it is - * finished. - */ - if (queued && !running) + if (queued) btrfs_put_block_group(block_group);
spin_unlock(&discard_ctl->lock); @@ -256,9 +250,10 @@ static struct btrfs_block_group *peek_discard_list( block_group->discard_cursor = block_group->start; block_group->discard_state = BTRFS_DISCARD_EXTENTS; } - discard_ctl->block_group = block_group; } if (block_group) { + btrfs_get_block_group(block_group); + discard_ctl->block_group = block_group; *discard_state = block_group->discard_state; *discard_index = block_group->discard_index; } @@ -482,9 +477,20 @@ static void btrfs_discard_workfn(struct work_struct *work)
block_group = peek_discard_list(discard_ctl, &discard_state, &discard_index, now); - if (!block_group || !btrfs_run_discard_work(discard_ctl)) + if (!block_group) return; + if (!btrfs_run_discard_work(discard_ctl)) { + spin_lock(&discard_ctl->lock); + btrfs_put_block_group(block_group); + discard_ctl->block_group = NULL; + spin_unlock(&discard_ctl->lock); + return; + } if (now < block_group->discard_eligible_time) { + spin_lock(&discard_ctl->lock); + btrfs_put_block_group(block_group); + discard_ctl->block_group = NULL; + spin_unlock(&discard_ctl->lock); btrfs_discard_schedule_work(discard_ctl, false); return; } @@ -536,15 +542,7 @@ static void btrfs_discard_workfn(struct work_struct *work) spin_lock(&discard_ctl->lock); discard_ctl->prev_discard = trimmed; discard_ctl->prev_discard_time = now; - /* - * If the block group was removed from the discard list while it was - * running in this workfn, then we didn't deref it, since this function - * still owned that reference. But we set the discard_ctl->block_group - * back to NULL, so we can use that condition to know that now we need - * to deref the block_group. - */ - if (discard_ctl->block_group == NULL) - btrfs_put_block_group(block_group); + btrfs_put_block_group(block_group); discard_ctl->block_group = NULL; __btrfs_discard_schedule_work(discard_ctl, now, false); spin_unlock(&discard_ctl->lock);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mark Harmstone maharmstone@fb.com
[ Upstream commit 7ef3cbf17d2734ca66c4ed8573be45f4e461e7ee ]
The inline function btrfs_is_testing() is hardcoded to return 0 if CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set. Currently we're relying on the compiler optimizing out the call to alloc_test_extent_buffer() in btrfs_find_create_tree_block(), as it's not been defined (it's behind an #ifdef).
Add a stub version of alloc_test_extent_buffer() to avoid linker errors on non-standard optimization levels. This problem was seen on GCC 14 with -O0, which helps to see symbols that would otherwise be optimized out.
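For illustration only, a small stand-alone example of the underlying issue (hypothetical function names, not btrfs code): with optimizations enabled the dead call is folded away before the linker ever sees it, while at -O0 the call survives and linking fails unless a definition (or stub) exists.

extern int only_built_with_tests(void);	/* no definition in this build */

static inline int tests_enabled(void) { return 0; }

int caller(void)
{
	if (tests_enabled())
		return only_built_with_tests();	/* folded away at -O2, kept at -O0 */
	return 0;
}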
Reviewed-by: Qu Wenruo wqu@suse.com Signed-off-by: Mark Harmstone maharmstone@fb.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/btrfs/extent_io.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 72227c0b4b5a1..d5552875f872a 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -4459,10 +4459,10 @@ struct extent_buffer *find_extent_buffer(struct btrfs_fs_info *fs_info, return eb; }
-#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS struct extent_buffer *alloc_test_extent_buffer(struct btrfs_fs_info *fs_info, u64 start) { +#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS struct extent_buffer *eb, *exists = NULL; int ret;
@@ -4498,8 +4498,11 @@ struct extent_buffer *alloc_test_extent_buffer(struct btrfs_fs_info *fs_info, free_eb: btrfs_release_extent_buffer(eb); return exists; -} +#else + /* Stub to avoid linker error when compiled with optimizations turned off. */ + return NULL; #endif +}
static struct extent_buffer *grab_extent_buffer( struct btrfs_fs_info *fs_info, struct page *page)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Qu Wenruo wqu@suse.com
[ Upstream commit df94a342efb451deb0e32b495d1d6cd4bb3a1648 ]
[BUG] Even after all the error fixes related to the "ASSERT(list_empty(&fs_info->delayed_iputs));" in close_ctree(), I can still hit it reliably with my experimental 2K block size.
[CAUSE] In my case, all the errors are triggered after the fs is already in error status.
I find the following call trace to be the cause of the race:
Main thread                                  | endio_write_workers
---------------------------------------------+---------------------------
close_ctree()                                |
|- btrfs_error_commit_super()                |
|  |- btrfs_cleanup_transaction()            |
|  |  |- btrfs_destroy_all_ordered_extents() |
|  |     |- btrfs_wait_ordered_roots()       |
|  |- btrfs_run_delayed_iputs()              |
|                                            | btrfs_finish_ordered_io()
|                                            | |- btrfs_put_ordered_extent()
|                                            |    |- btrfs_add_delayed_iput()
|- ASSERT(list_empty(delayed_iputs))         |
   !!! Triggered !!!
The root cause is that btrfs_wait_ordered_roots() only waits for ordered extents to finish their IO, not for them to be finished and removed.
[FIX] Since btrfs_error_commit_super() will flush and wait for all ordered extents, it should be executed early, before we start flushing the workqueues.
And since btrfs_error_commit_super() now runs early, there is no need to run btrfs_run_delayed_iputs() inside it, so just remove the btrfs_run_delayed_iputs() call from btrfs_error_commit_super().
Reviewed-by: Filipe Manana fdmanana@suse.com Signed-off-by: Qu Wenruo wqu@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/btrfs/disk-io.c | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index de4f590fe30f2..6670188b9eb6b 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -4641,6 +4641,14 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info) /* clear out the rbtree of defraggable inodes */ btrfs_cleanup_defrag_inodes(fs_info);
+ /* + * Handle the error fs first, as it will flush and wait for all ordered + * extents. This will generate delayed iputs, thus we want to handle + * it first. + */ + if (unlikely(BTRFS_FS_ERROR(fs_info))) + btrfs_error_commit_super(fs_info); + /* * Wait for any fixup workers to complete. * If we don't wait for them here and they are still running by the time @@ -4730,9 +4738,6 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info) btrfs_err(fs_info, "commit super ret %d", ret); }
- if (BTRFS_FS_ERROR(fs_info)) - btrfs_error_commit_super(fs_info); - kthread_stop(fs_info->transaction_kthread); kthread_stop(fs_info->cleaner_kthread);
@@ -4888,10 +4893,6 @@ static void btrfs_error_commit_super(struct btrfs_fs_info *fs_info) /* cleanup FS via transaction */ btrfs_cleanup_transaction(fs_info);
- mutex_lock(&fs_info->cleaner_mutex); - btrfs_run_delayed_iputs(fs_info); - mutex_unlock(&fs_info->cleaner_mutex); - down_write(&fs_info->cleanup_work_sem); up_write(&fs_info->cleanup_work_sem); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Filipe Manana fdmanana@suse.com
[ Upstream commit cda76788f8b0f7de3171100e3164ec1ce702292e ]
At close_ctree(), after we have run delayed iputs either explicitly through calling btrfs_run_delayed_iputs() or later during the call to btrfs_commit_super() or btrfs_error_commit_super(), we assert that the delayed iputs list is empty.
We have (another) race where this assertion might fail because we have queued an async write into the fs_info->workers workqueue. Here's how it happens:
1) We are submitting a data bio for an inode that is not the data relocation inode, so we call btrfs_wq_submit_bio();
2) btrfs_wq_submit_bio() submits a work for the fs_info->workers queue that will run run_one_async_done();
3) We enter close_ctree(), flush several work queues except fs_info->workers, explicitly run delayed iputs with a call to btrfs_run_delayed_iputs() and then again shortly after by calling btrfs_commit_super() or btrfs_error_commit_super(), which also run delayed iputs;
4) run_one_async_done() is executed in the work queue, and because there was an IO error (bio->bi_status is not 0) it calls btrfs_bio_end_io(), which drops the final reference on the associated ordered extent by calling btrfs_put_ordered_extent() - and that adds a delayed iput for the inode;
5) At close_ctree() we find that after stopping the cleaner and transaction kthreads the delayed iputs list is not empty, failing the following assertion:
ASSERT(list_empty(&fs_info->delayed_iputs));
Fix this by flushing the fs_info->workers workqueue before running delayed iputs at close_ctree().
David reported this when running generic/648, which exercises IO error paths by using the DM error table.
Reported-by: David Sterba dsterba@suse.com Reviewed-by: Qu Wenruo wqu@suse.com Signed-off-by: Filipe Manana fdmanana@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/btrfs/disk-io.c | 13 +++++++++++++ 1 file changed, 13 insertions(+)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index 6670188b9eb6b..8c0da0025bc71 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -4669,6 +4669,19 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info) */ btrfs_flush_workqueue(fs_info->delalloc_workers);
+ /* + * We can have ordered extents getting their last reference dropped from + * the fs_info->workers queue because for async writes for data bios we + * queue a work for that queue, at btrfs_wq_submit_bio(), that runs + * run_one_async_done() which calls btrfs_bio_end_io() in case the bio + * has an error, and that later function can do the final + * btrfs_put_ordered_extent() on the ordered extent attached to the bio, + * which adds a delayed iput for the inode. So we must flush the queue + * so that we don't have delayed iputs after committing the current + * transaction below and stopping the cleaner and transaction kthreads. + */ + btrfs_flush_workqueue(fs_info->workers); + /* * When finishing a compressed write bio we schedule a work queue item * to finish an ordered extent - btrfs_finish_compressed_write_work()
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Filipe Manana fdmanana@suse.com
[ Upstream commit 1283b8c125a83bf7a7dbe90c33d3472b6d7bf612 ]
At btrfs_reclaim_bgs_work(), we are grabbing a block group's zone unusable bytes while not under the protection of the block group's spinlock, so this can trigger race reports from KCSAN (or similar tools) since that field is typically updated while holding the lock, such as at __btrfs_add_free_space_zoned() for example.
Fix this by grabbing the zone unusable bytes while we are still in the critical section holding the block group's spinlock, which is right above where we are currently grabbing it.
Reviewed-by: Johannes Thumshirn johannes.thumshirn@wdc.com Signed-off-by: Filipe Manana fdmanana@suse.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/btrfs/block-group.c | 18 +++++++++++------- 1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c index 0dcf7fecaf55b..91440ef79a26f 100644 --- a/fs/btrfs/block-group.c +++ b/fs/btrfs/block-group.c @@ -1678,6 +1678,17 @@ void btrfs_reclaim_bgs_work(struct work_struct *work) up_write(&space_info->groups_sem); goto next; } + + /* + * Cache the zone_unusable value before turning the block group + * to read only. As soon as the block group is read only it's + * zone_unusable value gets moved to the block group's read-only + * bytes and isn't available for calculations anymore. We also + * cache it before unlocking the block group, to prevent races + * (reports from KCSAN and such tools) with tasks updating it. + */ + zone_unusable = bg->zone_unusable; + spin_unlock(&bg->lock);
/* @@ -1693,13 +1704,6 @@ void btrfs_reclaim_bgs_work(struct work_struct *work) goto next; }
- /* - * Cache the zone_unusable value before turning the block group - * to read only. As soon as the blog group is read only it's - * zone_unusable value gets moved to the block group's read-only - * bytes and isn't available for calculations anymore. - */ - zone_unusable = bg->zone_unusable; ret = inc_block_group_ro(bg, 0); up_write(&space_info->groups_sem); if (ret < 0)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Filipe Manana fdmanana@suse.com
[ Upstream commit a77749b3e21813566cea050bbb3414ae74562eba ]
When attempting to build a path that is too long, we currently return -ENOMEM, which is very odd and misleading. So update fs_path_ensure_buf() to return -ENAMETOOLONG instead. Also, while at it, move the WARN_ON() into the if statement's expression, as that makes it clear what is being tested and also has the effect of adding 'unlikely' to the statement, which allows the compiler to generate better code since this condition is never expected to happen.
Signed-off-by: Filipe Manana fdmanana@suse.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/btrfs/send.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c index a2b95ccb4cf5c..0735decec99b1 100644 --- a/fs/btrfs/send.c +++ b/fs/btrfs/send.c @@ -431,10 +431,8 @@ static int fs_path_ensure_buf(struct fs_path *p, int len) if (p->buf_len >= len) return 0;
- if (len > PATH_MAX) { - WARN_ON(1); - return -ENOMEM; - } + if (WARN_ON(len > PATH_MAX)) + return -ENAMETOOLONG;
path_len = p->end - p->start; old_buf_len = p->buf_len;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jing Zhou Jing.Zhou@amd.com
[ Upstream commit 9c2f4ae64bb6f6d83a54d88b9ee0f369cdbb9fa8 ]
[WHY] We should never apply a minimum dispclk value while in prepare_bandwidth or while displays are active. This is always an optimization for when all displays are disabled.
[HOW] Defer dispclk optimization until safe_to_lower = true and display_count reaches 0.
Since 0 has a special meaning in this logic (i.e. no dispclk required), we also need to adjust the logic that clamps it for the actual request to PMFW.
Reviewed-by: Charlene Liu charlene.liu@amd.com Reviewed-by: Chris Park chris.park@amd.com Reviewed-by: Eric Yang eric.yang@amd.com Signed-off-by: Jing Zhou Jing.Zhou@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Signed-off-by: Alex Hung alex.hung@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../dc/clk_mgr/dcn315/dcn315_clk_mgr.c | 20 +++++++++++-------- .../dc/clk_mgr/dcn316/dcn316_clk_mgr.c | 13 +++++++++--- 2 files changed, 22 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c index 09eb1bc9aa030..9549f9c152291 100644 --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c @@ -116,7 +116,7 @@ static void dcn315_update_clocks(struct clk_mgr *clk_mgr_base, struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base); struct dc_clocks *new_clocks = &context->bw_ctx.bw.dcn.clk; struct dc *dc = clk_mgr_base->ctx->dc; - int display_count; + int display_count = 0; bool update_dppclk = false; bool update_dispclk = false; bool dpp_clock_lowered = false; @@ -192,15 +192,19 @@ static void dcn315_update_clocks(struct clk_mgr *clk_mgr_base, update_dppclk = true; }
- if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) { - /* No need to apply the w/a if we haven't taken over from bios yet */ - if (clk_mgr_base->clks.dispclk_khz) - dcn315_disable_otg_wa(clk_mgr_base, context, true); + if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz) && + (new_clocks->dispclk_khz > 0 || (safe_to_lower && display_count == 0))) { + int requested_dispclk_khz = new_clocks->dispclk_khz;
+ dcn315_disable_otg_wa(clk_mgr_base, context, true); + + /* Clamp the requested clock to PMFW based on their limit. */ + if (dc->debug.min_disp_clk_khz > 0 && requested_dispclk_khz < dc->debug.min_disp_clk_khz) + requested_dispclk_khz = dc->debug.min_disp_clk_khz; + + dcn315_smu_set_dispclk(clk_mgr, requested_dispclk_khz); clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz; - dcn315_smu_set_dispclk(clk_mgr, clk_mgr_base->clks.dispclk_khz); - if (clk_mgr_base->clks.dispclk_khz) - dcn315_disable_otg_wa(clk_mgr_base, context, false); + dcn315_disable_otg_wa(clk_mgr_base, context, false);
update_dispclk = true; } diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c index 29d2003fb7129..afce15aa2ff10 100644 --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c @@ -153,7 +153,7 @@ static void dcn316_update_clocks(struct clk_mgr *clk_mgr_base, struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base); struct dc_clocks *new_clocks = &context->bw_ctx.bw.dcn.clk; struct dc *dc = clk_mgr_base->ctx->dc; - int display_count; + int display_count = 0; bool update_dppclk = false; bool update_dispclk = false; bool dpp_clock_lowered = false; @@ -226,11 +226,18 @@ static void dcn316_update_clocks(struct clk_mgr *clk_mgr_base, update_dppclk = true; }
- if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) { + if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz) && + (new_clocks->dispclk_khz > 0 || (safe_to_lower && display_count == 0))) { + int requested_dispclk_khz = new_clocks->dispclk_khz; + dcn316_disable_otg_wa(clk_mgr_base, context, safe_to_lower, true);
+ /* Clamp the requested clock to PMFW based on their limit. */ + if (dc->debug.min_disp_clk_khz > 0 && requested_dispclk_khz < dc->debug.min_disp_clk_khz) + requested_dispclk_khz = dc->debug.min_disp_clk_khz; + + dcn316_smu_set_dispclk(clk_mgr, requested_dispclk_khz); clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz; - dcn316_smu_set_dispclk(clk_mgr, clk_mgr_base->clks.dispclk_khz); dcn316_disable_otg_wa(clk_mgr_base, context, safe_to_lower, false);
update_dispclk = true;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Stanley Chu yschu@nuvoton.com
[ Upstream commit 0430bf9bc1ac068c8b8c540eb93e5751872efc51 ]
The controller driver nacked the master request but didn't emit a STOP to end the transaction. The driver should refuse unsupported requests and return the controller to the IDLE state by emitting a STOP.
Signed-off-by: Stanley Chu yschu@nuvoton.com Reviewed-by: Frank Li Frank.Li@nxp.com Link: https://lore.kernel.org/r/20250318053606.3087121-4-yschu@nuvoton.com Signed-off-by: Alexandre Belloni alexandre.belloni@bootlin.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/i3c/master/svc-i3c-master.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
index 38095649ed276..cf0550c6e95f0 100644
--- a/drivers/i3c/master/svc-i3c-master.c
+++ b/drivers/i3c/master/svc-i3c-master.c
@@ -495,6 +495,7 @@ static void svc_i3c_master_ibi_work(struct work_struct *work)
 		queue_work(master->base.wq, &master->hj_work);
 		break;
 	case SVC_I3C_MSTATUS_IBITYPE_MASTER_REQUEST:
+		svc_i3c_master_emit_stop(master);
 	default:
 		break;
 	}
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Heming Zhao heming.zhao@suse.com
[ Upstream commit 03d2b62208a336a3bb984b9465ef6d89a046ea22 ]
This patch bypasses multi-link errors in TCP mode, allowing dlm to operate on the first TCP link.
Signed-off-by: Heming Zhao heming.zhao@suse.com Signed-off-by: David Teigland teigland@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/dlm/lowcomms.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c index 2c797eb519da9..51a641822d6c4 100644 --- a/fs/dlm/lowcomms.c +++ b/fs/dlm/lowcomms.c @@ -1863,8 +1863,8 @@ static int dlm_tcp_listen_validate(void) { /* We don't support multi-homed hosts */ if (dlm_local_count > 1) { - log_print("TCP protocol can't handle multi-homed hosts, try SCTP"); - return -EINVAL; + log_print("Detect multi-homed hosts but use only the first IP address."); + log_print("Try SCTP, if you want to enable multi-link."); }
return 0;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Benjamin Berg benjamin@sipsolutions.net
[ Upstream commit cef721e0d53d2b64f2ba177c63a0dfdd7c0daf17 ]
Doing this allows using registers as retrieved from an mcontext to be pushed to a process using PTRACE_SETREGS.
It is not entirely clear to me why CSGSFS was masked. Doing so creates issues when using the mcontext as process state in seccomp and simply copying the register appears to work perfectly fine for ptrace.
Signed-off-by: Benjamin Berg benjamin@sipsolutions.net Link: https://patch.msgid.link/20250224181827.647129-2-benjamin@sipsolutions.net Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/um/os-Linux/mcontext.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/x86/um/os-Linux/mcontext.c b/arch/x86/um/os-Linux/mcontext.c
index 49c3744cac371..81b9d1f9f4e68 100644
--- a/arch/x86/um/os-Linux/mcontext.c
+++ b/arch/x86/um/os-Linux/mcontext.c
@@ -26,7 +26,6 @@ void get_regs_from_mc(struct uml_pt_regs *regs, mcontext_t *mc)
 	COPY(RIP);
 	COPY2(EFLAGS, EFL);
 	COPY2(CS, CSGSFS);
-	regs->gp[CS / sizeof(unsigned long)] &= 0xffff;
-	regs->gp[CS / sizeof(unsigned long)] |= 3;
+	regs->gp[SS / sizeof(unsigned long)] = mc->gregs[REG_CSGSFS] >> 48;
 #endif
 }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tiwei Bie tiwei.btw@antgroup.com
[ Upstream commit e82cf3051e6193f61e03898f8dba035199064d36 ]
When uml_reserved is updated, min_low_pfn must also be updated accordingly. Otherwise, min_low_pfn will not accurately reflect the lowest available PFN.
Signed-off-by: Tiwei Bie tiwei.btw@antgroup.com Link: https://patch.msgid.link/20250221041855.1156109-1-tiwei.btw@antgroup.com Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/um/kernel/mem.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c
index 38d5a71a579bc..f6c766b2bdf5e 100644
--- a/arch/um/kernel/mem.c
+++ b/arch/um/kernel/mem.c
@@ -68,6 +68,7 @@ void __init mem_init(void)
 	map_memory(brk_end, __pa(brk_end), uml_reserved - brk_end, 1, 1, 0);
 	memblock_free((void *)brk_end, uml_reserved - brk_end);
 	uml_reserved = brk_end;
+	min_low_pfn = PFN_UP(__pa(uml_reserved));
 
 	/* this will put all low memory onto the freelists */
 	memblock_free_all();
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christian Göttsche cgzones@googlemail.com
[ Upstream commit 1b419c889c0767a5b66d0a6c566cae491f1cb0f7 ]
capable() calls refer to the enabled LSMs to decide whether to permit or deny the request. This is relevant in connection with SELinux, where a capability check results in a policy decision and, by default, a denial message is issued on insufficient permission. This can lead to three undesired cases:
1. A denial message is generated even though the operation was an unprivileged one and thus the syscall succeeded, creating noise.
2. To avoid the noise from 1., the policy writer adds a rule to ignore those denial messages, hiding future syscalls where the task performs an actual privileged operation, leading to hidden, limited functionality of that task.
3. To avoid the noise from 1., the policy writer adds a rule to permit the task the requested capability even though it does not need it, violating the principle of least privilege.
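As a rough sketch of the resulting pattern (hypothetical names, not the ext4 code): thanks to short-circuit evaluation, placing capable() last means the LSM is only consulted when none of the cheaper, side-effect-free checks already granted the request.

	/* preferred: decision-only checks first, capable() last */
	if (is_owner || (flags & USE_RESERVED) || capable(CAP_SYS_RESOURCE))
		return true;

	/* avoided: capable() first asks the LSM (and may log a denial)
	 * even when one of the later checks would have sufficed */
	if (capable(CAP_SYS_RESOURCE) || is_owner || (flags & USE_RESERVED))
		return true;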
Signed-off-by: Christian Göttsche cgzones@googlemail.com Reviewed-by: Serge Hallyn serge@hallyn.com Reviewed-by: Jan Kara jack@suse.cz Link: https://patch.msgid.link/20250302160657.127253-2-cgoettsche@seltendoof.de Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ext4/balloc.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c index fbd0329cf254e..9efe97f3721bc 100644 --- a/fs/ext4/balloc.c +++ b/fs/ext4/balloc.c @@ -638,8 +638,8 @@ static int ext4_has_free_clusters(struct ext4_sb_info *sbi, /* Hm, nope. Are (enough) root reserved clusters available? */ if (uid_eq(sbi->s_resuid, current_fsuid()) || (!gid_eq(sbi->s_resgid, GLOBAL_ROOT_GID) && in_group_p(sbi->s_resgid)) || - capable(CAP_SYS_RESOURCE) || - (flags & EXT4_MB_USE_ROOT_BLOCKS)) { + (flags & EXT4_MB_USE_ROOT_BLOCKS) || + capable(CAP_SYS_RESOURCE)) {
if (free_clusters >= (nclusters + dirty_clusters + resv_clusters))
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kai Mäkisara Kai.Makisara@kolumbus.fi
[ Upstream commit 8db816c6f176321e42254badd5c1a8df8bfcfdb4 ]
In the days when SCSI-2 was emerging, some drives did claim SCSI-2 but did not correctly implement it. The st driver first tries MODE SELECT with the page format bit set to set the block descriptor. If not successful, the non-page format is tried.
The test only checks the sense code, and this also triggers on an illegal parameter in the parameter list. The test is limited to "old" devices and made more strict to remove false alarms.
Signed-off-by: Kai Mäkisara Kai.Makisara@kolumbus.fi Link: https://lore.kernel.org/r/20250311112516.5548-4-Kai.Makisara@kolumbus.fi Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/st.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c index 7f107be344236..284c2cf1ae662 100644 --- a/drivers/scsi/st.c +++ b/drivers/scsi/st.c @@ -3074,7 +3074,9 @@ static int st_int_ioctl(struct scsi_tape *STp, unsigned int cmd_in, unsigned lon cmd_in == MTSETDRVBUFFER || cmd_in == SET_DENS_AND_BLK) { if (cmdstatp->sense_hdr.sense_key == ILLEGAL_REQUEST && - !(STp->use_pf & PF_TESTED)) { + cmdstatp->sense_hdr.asc == 0x24 && + (STp->device)->scsi_level <= SCSI_2 && + !(STp->use_pf & PF_TESTED)) { /* Try the other possible state of Page Format if not already tried */ STp->use_pf = (STp->use_pf ^ USE_PF) | PF_TESTED;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kai Mäkisara Kai.Makisara@kolumbus.fi
[ Upstream commit ad77cebf97bd42c93ab4e3bffd09f2b905c1959a ]
The SCSI ERASE command erases from the current position onwards. Don't clear the position variables.
Signed-off-by: Kai Mäkisara Kai.Makisara@kolumbus.fi Link: https://lore.kernel.org/r/20250311112516.5548-3-Kai.Makisara@kolumbus.fi Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/st.c | 1 - 1 file changed, 1 deletion(-)
diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c index 284c2cf1ae662..3ff4e6d44db88 100644 --- a/drivers/scsi/st.c +++ b/drivers/scsi/st.c @@ -2887,7 +2887,6 @@ static int st_int_ioctl(struct scsi_tape *STp, unsigned int cmd_in, unsigned lon timeout = STp->long_timeout * 8;
DEBC_printk(STp, "Erasing tape.\n"); - fileno = blkno = at_sm = 0; break; case MTSETBLK: /* Set block length */ case MTSETDENSITY: /* Set tape density */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Williamson alex.williamson@redhat.com
[ Upstream commit 860be250fc32de9cb24154bf21b4e36f40925707 ]
Some systems report INTx as not routed by setting pdev->irq to IRQ_NOTCONNECTED, resulting in a -ENOTCONN error when trying to setup eventfd signaling. Include this in the set of conditions for which the PIN register is virtualized to zero.
Additionally consolidate vfio_pci_get_irq_count() to use this virtualized value in reporting INTx support via ioctl and sanity checking ioctl paths since pdev->irq is re-used when the device is in MSI mode.
The combination of these results in both the config space of the device and the ioctl interface behaving as if the device does not support INTx.
Reviewed-by: Kevin Tian kevin.tian@intel.com Link: https://lore.kernel.org/r/20250311230623.1264283-1-alex.williamson@redhat.co... Signed-off-by: Alex Williamson alex.williamson@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/vfio/pci/vfio_pci_config.c | 3 ++- drivers/vfio/pci/vfio_pci_core.c | 10 +--------- drivers/vfio/pci/vfio_pci_intrs.c | 2 +- 3 files changed, 4 insertions(+), 11 deletions(-)
diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c index 7902e1ec0fef2..105243d83b2dc 100644 --- a/drivers/vfio/pci/vfio_pci_config.c +++ b/drivers/vfio/pci/vfio_pci_config.c @@ -1806,7 +1806,8 @@ int vfio_config_init(struct vfio_pci_core_device *vdev) cpu_to_le16(PCI_COMMAND_MEMORY); }
- if (!IS_ENABLED(CONFIG_VFIO_PCI_INTX) || vdev->nointx) + if (!IS_ENABLED(CONFIG_VFIO_PCI_INTX) || vdev->nointx || + vdev->pdev->irq == IRQ_NOTCONNECTED) vconfig[PCI_INTERRUPT_PIN] = 0;
ret = vfio_cap_init(vdev); diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c index f357fd157e1ed..aa362b434413a 100644 --- a/drivers/vfio/pci/vfio_pci_core.c +++ b/drivers/vfio/pci/vfio_pci_core.c @@ -719,15 +719,7 @@ EXPORT_SYMBOL_GPL(vfio_pci_core_finish_enable); static int vfio_pci_get_irq_count(struct vfio_pci_core_device *vdev, int irq_type) { if (irq_type == VFIO_PCI_INTX_IRQ_INDEX) { - u8 pin; - - if (!IS_ENABLED(CONFIG_VFIO_PCI_INTX) || - vdev->nointx || vdev->pdev->is_virtfn) - return 0; - - pci_read_config_byte(vdev->pdev, PCI_INTERRUPT_PIN, &pin); - - return pin ? 1 : 0; + return vdev->vconfig[PCI_INTERRUPT_PIN] ? 1 : 0; } else if (irq_type == VFIO_PCI_MSI_IRQ_INDEX) { u8 pos; u16 flags; diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index 5cbcde32ff79e..64d78944efa53 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -207,7 +207,7 @@ static int vfio_intx_enable(struct vfio_pci_core_device *vdev, if (!is_irq_none(vdev)) return -EINVAL;
- if (!pdev->irq) + if (!pdev->irq || pdev->irq == IRQ_NOTCONNECTED) return -ENODEV;
name = kasprintf(GFP_KERNEL_ACCOUNT, "vfio-intx(%s)", pci_name(pdev));
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mykyta Yatsenko yatsenko@meta.com
[ Upstream commit 07651ccda9ff10a8ca427670cdd06ce2c8e4269c ]
Return prog's btf_id from bpf_prog_get_info_by_fd regardless of the capable check. This patch enables the scenario where an freplace program, running from a user namespace, needs to query the target prog's BTF.
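For illustration, a hedged userspace sketch of how an unprivileged process might query the target program's BTF id via libbpf (prog_fd is assumed to be a valid fd for the target program):

#include <bpf/bpf.h>
#include <linux/bpf.h>
#include <stdio.h>
#include <string.h>

static int query_target_btf_id(int prog_fd)
{
	struct bpf_prog_info info;
	__u32 len = sizeof(info);

	memset(&info, 0, sizeof(info));
	if (bpf_obj_get_info_by_fd(prog_fd, &info, &len))
		return -1;

	/* with this patch, btf_id is filled in even without bpf_capable() */
	printf("target btf_id: %u\n", info.btf_id);
	return 0;
}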
Signed-off-by: Mykyta Yatsenko yatsenko@meta.com Signed-off-by: Andrii Nakryiko andrii@kernel.org Acked-by: Yonghong Song yonghong.song@linux.dev Link: https://lore.kernel.org/bpf/20250317174039.161275-3-mykyta.yatsenko5@gmail.c... Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/bpf/syscall.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 27fdf1b2fc469..b145f3ef3695e 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -4005,6 +4005,8 @@ static int bpf_prog_get_info_by_fd(struct file *file, info.recursion_misses = stats.misses;
info.verified_insns = prog->aux->verified_insns; + if (prog->aux->btf) + info.btf_id = btf_obj_id(prog->aux->btf);
if (!bpf_capable()) { info.jited_prog_len = 0; @@ -4151,8 +4153,6 @@ static int bpf_prog_get_info_by_fd(struct file *file, } }
- if (prog->aux->btf) - info.btf_id = btf_obj_id(prog->aux->btf); info.attach_btf_id = prog->aux->attach_btf_id; if (attach_btf) info.attach_btf_obj_id = btf_obj_id(attach_btf);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ilpo Järvinen ij@kernel.org
[ Upstream commit 149dfb31615e22271d2525f078c95ea49bc4db24 ]
- Move tcp_count_delivered() earlier and split tcp_count_delivered_ce() out of it
- Move tcp_in_ack_event() later
- While at it, remove the inline from tcp_in_ack_event() and let the compiler decide
Accurate ECN's heuristics do not know whether there is going to be an ACE field based CE counter increase until after the rtx queue has been processed; only then is the number of ACKed bytes/pkts available. As CE or not affects the presence of FLAG_ECE, that information is not yet available for tcp_in_ack_event() at the old location of the call.
Signed-off-by: Ilpo Järvinen ij@kernel.org Signed-off-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_input.c | 56 +++++++++++++++++++++++++------------------- 1 file changed, 32 insertions(+), 24 deletions(-)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 3b81f6df829ff..db1a99df29d55 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -404,6 +404,20 @@ static bool tcp_ecn_rcv_ecn_echo(const struct tcp_sock *tp, const struct tcphdr return false; }
+static void tcp_count_delivered_ce(struct tcp_sock *tp, u32 ecn_count) +{ + tp->delivered_ce += ecn_count; +} + +/* Updates the delivered and delivered_ce counts */ +static void tcp_count_delivered(struct tcp_sock *tp, u32 delivered, + bool ece_ack) +{ + tp->delivered += delivered; + if (ece_ack) + tcp_count_delivered_ce(tp, delivered); +} + /* Buffer size and advertised window tuning. * * 1. Tuning sk->sk_sndbuf, when connection enters established state. @@ -1119,15 +1133,6 @@ void tcp_mark_skb_lost(struct sock *sk, struct sk_buff *skb) } }
-/* Updates the delivered and delivered_ce counts */ -static void tcp_count_delivered(struct tcp_sock *tp, u32 delivered, - bool ece_ack) -{ - tp->delivered += delivered; - if (ece_ack) - tp->delivered_ce += delivered; -} - /* This procedure tags the retransmission queue when SACKs arrive. * * We have three tag bits: SACKED(S), RETRANS(R) and LOST(L). @@ -3783,12 +3788,23 @@ static void tcp_process_tlp_ack(struct sock *sk, u32 ack, int flag) } }
-static inline void tcp_in_ack_event(struct sock *sk, u32 flags) +static void tcp_in_ack_event(struct sock *sk, int flag) { const struct inet_connection_sock *icsk = inet_csk(sk);
- if (icsk->icsk_ca_ops->in_ack_event) - icsk->icsk_ca_ops->in_ack_event(sk, flags); + if (icsk->icsk_ca_ops->in_ack_event) { + u32 ack_ev_flags = 0; + + if (flag & FLAG_WIN_UPDATE) + ack_ev_flags |= CA_ACK_WIN_UPDATE; + if (flag & FLAG_SLOWPATH) { + ack_ev_flags |= CA_ACK_SLOWPATH; + if (flag & FLAG_ECE) + ack_ev_flags |= CA_ACK_ECE; + } + + icsk->icsk_ca_ops->in_ack_event(sk, ack_ev_flags); + } }
/* Congestion control has updated the cwnd already. So if we're in @@ -3905,12 +3921,8 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag) tcp_snd_una_update(tp, ack); flag |= FLAG_WIN_UPDATE;
- tcp_in_ack_event(sk, CA_ACK_WIN_UPDATE); - NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPHPACKS); } else { - u32 ack_ev_flags = CA_ACK_SLOWPATH; - if (ack_seq != TCP_SKB_CB(skb)->end_seq) flag |= FLAG_DATA; else @@ -3922,19 +3934,12 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag) flag |= tcp_sacktag_write_queue(sk, skb, prior_snd_una, &sack_state);
- if (tcp_ecn_rcv_ecn_echo(tp, tcp_hdr(skb))) { + if (tcp_ecn_rcv_ecn_echo(tp, tcp_hdr(skb))) flag |= FLAG_ECE; - ack_ev_flags |= CA_ACK_ECE; - }
if (sack_state.sack_delivered) tcp_count_delivered(tp, sack_state.sack_delivered, flag & FLAG_ECE); - - if (flag & FLAG_WIN_UPDATE) - ack_ev_flags |= CA_ACK_WIN_UPDATE; - - tcp_in_ack_event(sk, ack_ev_flags); }
/* This is a deviation from RFC3168 since it states that: @@ -3961,6 +3966,8 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
tcp_rack_update_reo_wnd(sk, &rs);
+ tcp_in_ack_event(sk, flag); + if (tp->tlp_high_seq) tcp_process_tlp_ack(sk, ack, flag);
@@ -3992,6 +3999,7 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag) return 1;
no_queue: + tcp_in_ack_event(sk, flag); /* If data was DSACKed, see if we can undo a cwnd reduction. */ if (flag & FLAG_DSACKING_ACK) { tcp_fastretrans_alert(sk, prior_snd_una, num_dupack, &flag,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexandre Belloni alexandre.belloni@bootlin.com
[ Upstream commit b0f9cb4a0706b0356e84d67e48500b77b343debe ]
EERD is bit 2 in CTRL1
Link: https://lore.kernel.org/r/20250306214243.1167692-1-alexandre.belloni@bootlin... Signed-off-by: Alexandre Belloni alexandre.belloni@bootlin.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/rtc/rtc-rv3032.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/rtc/rtc-rv3032.c b/drivers/rtc/rtc-rv3032.c index c3bee305eacc6..9c85ecd9afb8e 100644 --- a/drivers/rtc/rtc-rv3032.c +++ b/drivers/rtc/rtc-rv3032.c @@ -69,7 +69,7 @@ #define RV3032_CLKOUT2_FD_MSK GENMASK(6, 5) #define RV3032_CLKOUT2_OS BIT(7)
-#define RV3032_CTRL1_EERD BIT(3) +#define RV3032_CTRL1_EERD BIT(2) #define RV3032_CTRL1_WADA BIT(5)
#define RV3032_CTRL2_STOP BIT(0)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mika Westerberg mika.westerberg@linux.intel.com
[ Upstream commit ad79c278e478ca8c1a3bf8e7a0afba8f862a48a1 ]
This is only used to write a new NVM in order to upgrade the retimer firmware. It does not make sense to expose it if upgrade is disabled. This also makes it consistent with the router NVM upgrade.
Signed-off-by: Mika Westerberg mika.westerberg@linux.intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/thunderbolt/retimer.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/thunderbolt/retimer.c b/drivers/thunderbolt/retimer.c index 5bd5c22a5085d..d2038337ea03b 100644 --- a/drivers/thunderbolt/retimer.c +++ b/drivers/thunderbolt/retimer.c @@ -89,9 +89,11 @@ static int tb_retimer_nvm_add(struct tb_retimer *rt) if (ret) goto err_nvm;
- ret = tb_nvm_add_non_active(nvm, nvm_write); - if (ret) - goto err_nvm; + if (!rt->no_nvm_upgrade) { + ret = tb_nvm_add_non_active(nvm, nvm_write); + if (ret) + goto err_nvm; + }
rt->nvm = nvm; return 0;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nícolas F. R. A. Prado nfraprado@collabora.com
[ Upstream commit 0116a7d84b32537a10d9bea1fd1bfc06577ef527 ]
Add a stub for mt6359_accdet_enable_jack_detect() to prevent linker failures in the machine sound drivers calling it when CONFIG_SND_SOC_MT6359_ACCDET is not enabled.
Suggested-by: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com Signed-off-by: Nícolas F. R. A. Prado nfraprado@collabora.com Link: https://patch.msgid.link/20250306-mt8188-accdet-v3-3-7828e835ff4b@collabora.... Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/codecs/mt6359-accdet.h | 9 +++++++++ 1 file changed, 9 insertions(+)
diff --git a/sound/soc/codecs/mt6359-accdet.h b/sound/soc/codecs/mt6359-accdet.h index c234f2f4276a1..78ada3a5bfae5 100644 --- a/sound/soc/codecs/mt6359-accdet.h +++ b/sound/soc/codecs/mt6359-accdet.h @@ -123,6 +123,15 @@ struct mt6359_accdet { struct workqueue_struct *jd_workqueue; };
+#if IS_ENABLED(CONFIG_SND_SOC_MT6359_ACCDET) int mt6359_accdet_enable_jack_detect(struct snd_soc_component *component, struct snd_soc_jack *jack); +#else +static inline int +mt6359_accdet_enable_jack_detect(struct snd_soc_component *component, + struct snd_soc_jack *jack) +{ + return -EOPNOTSUPP; +} +#endif #endif
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Seyediman Seyedarab imandevel@gmail.com
[ Upstream commit f757f6011c92b5a01db742c39149bed9e526478f ]
The script previously assumed --file was always the first argument, which caused issues when it appeared later. This patch updates the parsing logic to scan all arguments to find --file, sets the config file correctly, and resets the argument list with the remaining commands.
It also fixes --refresh to respect --file by passing KCONFIG_CONFIG=$FN to make oldconfig.
Signed-off-by: Seyediman Seyedarab imandevel@gmail.com Signed-off-by: Masahiro Yamada masahiroy@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- scripts/config | 26 ++++++++++++++++---------- 1 file changed, 16 insertions(+), 10 deletions(-)
diff --git a/scripts/config b/scripts/config index ff88e2faefd35..ea475c07de283 100755 --- a/scripts/config +++ b/scripts/config @@ -32,6 +32,7 @@ commands: Disable option directly after other option --module-after|-M beforeopt option Turn option into module directly after other option + --refresh Refresh the config using old settings
commands can be repeated multiple times
@@ -124,16 +125,22 @@ undef_var() { txt_delete "^# $name is not set" "$FN" }
-if [ "$1" = "--file" ]; then - FN="$2" - if [ "$FN" = "" ] ; then - usage +FN=.config +CMDS=() +while [[ $# -gt 0 ]]; do + if [ "$1" = "--file" ]; then + if [ "$2" = "" ]; then + usage + fi + FN="$2" + shift 2 + else + CMDS+=("$1") + shift fi - shift 2 -else - FN=.config -fi +done
+set -- "${CMDS[@]}" if [ "$1" = "" ] ; then usage fi @@ -217,9 +224,8 @@ while [ "$1" != "" ] ; do set_var "${CONFIG_}$B" "${CONFIG_}$B=m" "${CONFIG_}$A" ;;
- # undocumented because it ignores --file (fixme) --refresh) - yes "" | make oldconfig + yes "" | make oldconfig KCONFIG_CONFIG=$FN ;;
*)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Shashank Gupta shashankg@marvell.com
[ Upstream commit 64b7871522a4cba99d092e1c849d6f9092868aaa ]
This patch addresses an issue where authentication failures were being erroneously reported due to negative test failures in the "ccm(aes)" selftest. Using pr_debug() suppresses the unnecessary noise from these tests.
Signed-off-by: Shashank Gupta shashankg@marvell.com Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/crypto/marvell/octeontx2/otx2_cptvf_reqmgr.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_reqmgr.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_reqmgr.c index 811ded72ce5fb..798bb40fed68d 100644 --- a/drivers/crypto/marvell/octeontx2/otx2_cptvf_reqmgr.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_reqmgr.c @@ -410,9 +410,10 @@ static int cpt_process_ccode(struct otx2_cptlfs_info *lfs, break; }
- dev_err(&pdev->dev, - "Request failed with software error code 0x%x\n", - cpt_status->s.uc_compcode); + pr_debug("Request failed with software error code 0x%x: algo = %s driver = %s\n", + cpt_status->s.uc_compcode, + info->req->areq->tfm->__crt_alg->cra_name, + info->req->areq->tfm->__crt_alg->cra_driver_name); otx2_cpt_dump_sg_list(pdev, info->req); break; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mikulas Patocka mpatocka@redhat.com
[ Upstream commit 45fc728515c14f53f6205789de5bfd72a95af3b8 ]
The devices with size >= 2^63 bytes can't be used reliably by userspace because the type off_t is a signed 64-bit integer.
Therefore, we limit the maximum size of a device mapper device to 2^63-512 bytes.
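A quick sanity check of the arithmetic behind that limit (assuming 512-byte sectors, SECTOR_SHIFT = 9):
  LLONG_MAX >> 9 = (2^63 - 1) / 512 = 2^54 - 1 sectors
  (2^54 - 1) * 512 bytes = 2^63 - 512 bytes
so the capped size is the largest device whose byte length still fits in a signed 64-bit off_t.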
Signed-off-by: Mikulas Patocka mpatocka@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/md/dm-table.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index a20cf54d12dca..8b23b8bc5a036 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -671,6 +671,10 @@ int dm_table_add_target(struct dm_table *t, const char *type,
 		DMERR("%s: zero-length target", dm_device_name(t->md));
 		return -EINVAL;
 	}
+	if (start + len < start || start + len > LLONG_MAX >> SECTOR_SHIFT) {
+		DMERR("%s: too large device", dm_device_name(t->md));
+		return -EINVAL;
+	}
 
 	ti->type = dm_get_target_type(type);
 	if (!ti->type) {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Guangguan Wang guangguan.wang@linux.alibaba.com
[ Upstream commit bfc6c67ec2d64d0ca4e5cc3e1ac84298a10b8d62 ]
When using smc_pnet in SMC, the pnetid is only searched in the base_ndev of the netdev hierarchy (both the HW PNETID and the user-defined SW pnetid). This may not work for some scenarios when using SMC in a container on a cloud environment.
In a container there are different choices of container network, such as directly using the host network, virtual network IPVLAN, veth, etc. Different choices of container network have different netdev hierarchies. Summarizing the hierarchies (eth0 and eth1 in the host are the netdevs directly related to the physical device):
- directly using host network: the POD's eth0 is the host's eth1 (a base_ndev), while the host's eth0 is the RDMA-attached netdev;
- IPVLAN: the POD's eth0 is an upper netdev whose lower netdev is the host's eth1 (the base_ndev), while the host's eth0 is the RDMA-attached netdev;
- veth: the POD's eth0 (a veth and itself a base_ndev) is paired with a veth base_ndev in the host, while the host's eth0 is the RDMA-attached netdev and the host's eth1 is another base_ndev.
Due to some reasons, eth1 in the host is not the RDMA-attached netdevice, so a pnetid is needed to map eth1 (in the host) to the RDMA device so that the POD can do SMC-R. Because eth1 (in the host) is managed by a CNI plugin (such as Terway, a network management plugin in the container environment), and in a cloud environment the eth (in the host) can dynamically be inserted by the CNI when a POD is created and dynamically removed by the CNI when the POD is destroyed and no POD relates to it anymore, it is hard to configure the pnetid on eth1 (in the host). But it is easy to configure the pnetid on the netdevice that can be seen in the POD.
When doing SMC-R, both the container directly using the host network and the container using a veth network can successfully match the RDMA device, because the configured pnetid netdev is a base_ndev. But the container using IPVLAN cannot match the RDMA device and a 0x03030000 fallback happens, because the configured pnetid netdev is not a base_ndev. Additionally, configuring the pnetid on eth1 (in the host) also does not work for matching the RDMA device when using a veth network and doing SMC-R in the POD.
To resolve the problems listed above, this patch extends the search to the user-defined SW pnetid of the CLC handshake ndev when no pnetid can be found in the base_ndev, with the base_ndev taking precedence over the ndev for backward compatibility. This also unifies the pnetid setup for the different network choices listed above in containers (configure the user-defined SW pnetid on the netdevice that can be seen in the POD).
Signed-off-by: Guangguan Wang guangguan.wang@linux.alibaba.com Reviewed-by: Wenjia Zhang wenjia@linux.ibm.com Reviewed-by: Halil Pasic pasic@linux.ibm.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/smc/smc_pnet.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/net/smc/smc_pnet.c b/net/smc/smc_pnet.c index 399314cfab90a..af12e02152740 100644 --- a/net/smc/smc_pnet.c +++ b/net/smc/smc_pnet.c @@ -1080,14 +1080,16 @@ static void smc_pnet_find_roce_by_pnetid(struct net_device *ndev, struct smc_init_info *ini) { u8 ndev_pnetid[SMC_MAX_PNETID_LEN]; + struct net_device *base_ndev; struct net *net;
- ndev = pnet_find_base_ndev(ndev); + base_ndev = pnet_find_base_ndev(ndev); net = dev_net(ndev); - if (smc_pnetid_by_dev_port(ndev->dev.parent, ndev->dev_port, + if (smc_pnetid_by_dev_port(base_ndev->dev.parent, base_ndev->dev_port, ndev_pnetid) && + smc_pnet_find_ndev_pnetid_by_table(base_ndev, ndev_pnetid) && smc_pnet_find_ndev_pnetid_by_table(ndev, ndev_pnetid)) { - smc_pnet_find_rdma_dev(ndev, ini); + smc_pnet_find_rdma_dev(base_ndev, ini); return; /* pnetid could not be determined */ } _smc_pnet_find_roce_by_pnetid(ndev_pnetid, ini, NULL, net);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Frediano Ziglio frediano.ziglio@cloud.com
[ Upstream commit 2356f15caefc0cc63d9cc5122641754f76ef9b25 ]
On XenServer, Windows machines use a platform device with ID 2 instead of 1.
This device is mainly identical to device 1 but due to some Windows update behaviour it was decided to use a device with a different ID.
This causes compatibility issues with Linux, which expects, if Xen is detected, to find a Xen platform device (5853:0001); otherwise the code will crash due to some missing initialization (specifically grant tables). Specifically, from dmesg:
RIP: 0010:gnttab_expand+0x29/0x210
Code: 90 0f 1f 44 00 00 55 31 d2 48 89 e5 41 57 41 56 41 55 41 89 fd 41 54 53 48 83 ec 10 48 8b 05 7e 9a 49 02 44 8b 35 a7 9a 49 02 <8b> 48 04 8d 44 39 ff f7 f1 45 8d 24 06 89 c3 e8 43 fe ff ff 44 39
RSP: 0000:ffffba34c01fbc88 EFLAGS: 00010086
...
Device 2 is presented by Xapi adding the device specification to the QEMU command line.
Signed-off-by: Frediano Ziglio frediano.ziglio@cloud.com Acked-by: Juergen Gross jgross@suse.com Message-ID: 20250227145016.25350-1-frediano.ziglio@cloud.com Signed-off-by: Juergen Gross jgross@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/xen/platform-pci.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c index 544d3f9010b92..1db82da56db62 100644 --- a/drivers/xen/platform-pci.c +++ b/drivers/xen/platform-pci.c @@ -26,6 +26,8 @@
#define DRV_NAME "xen-platform-pci"
+#define PCI_DEVICE_ID_XEN_PLATFORM_XS61 0x0002 + static unsigned long platform_mmio; static unsigned long platform_mmio_alloc; static unsigned long platform_mmiolen; @@ -174,6 +176,8 @@ static int platform_pci_probe(struct pci_dev *pdev, static const struct pci_device_id platform_pci_tbl[] = { {PCI_VENDOR_ID_XEN, PCI_DEVICE_ID_XEN_PLATFORM, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, + {PCI_VENDOR_ID_XEN, PCI_DEVICE_ID_XEN_PLATFORM_XS61, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, {0,} };
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Prathamesh Shete pshete@nvidia.com
[ Upstream commit c12bfa0fee65940b10ff5187349f76c6f6b1df9c ]
Each pin can be configured as a Special Function IO (SFIO) or GPIO, where the SFIO enables the pin to operate in alternative modes such as I2C, SPI, etc.
The current implementation sets all the pins back to SFIO mode even if they were initially in GPIO mode. This can cause glitches on the pins when pinctrl_gpio_free() is called.
Avoid these undesired glitches by storing the pin's SFIO/GPIO state on GPIO request and restoring it on GPIO free.
Signed-off-by: Prathamesh Shete pshete@nvidia.com Link: https://lore.kernel.org/20250305104939.15168-2-pshete@nvidia.com Signed-off-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pinctrl/tegra/pinctrl-tegra.c | 59 +++++++++++++++++++++++---- drivers/pinctrl/tegra/pinctrl-tegra.h | 6 +++ 2 files changed, 57 insertions(+), 8 deletions(-)
diff --git a/drivers/pinctrl/tegra/pinctrl-tegra.c b/drivers/pinctrl/tegra/pinctrl-tegra.c index 30341c43da59a..ba7bcc876e304 100644 --- a/drivers/pinctrl/tegra/pinctrl-tegra.c +++ b/drivers/pinctrl/tegra/pinctrl-tegra.c @@ -278,8 +278,8 @@ static int tegra_pinctrl_set_mux(struct pinctrl_dev *pctldev, return 0; }
-static const struct tegra_pingroup *tegra_pinctrl_get_group(struct pinctrl_dev *pctldev, - unsigned int offset) +static int tegra_pinctrl_get_group_index(struct pinctrl_dev *pctldev, + unsigned int offset) { struct tegra_pmx *pmx = pinctrl_dev_get_drvdata(pctldev); unsigned int group, num_pins, j; @@ -292,12 +292,35 @@ static const struct tegra_pingroup *tegra_pinctrl_get_group(struct pinctrl_dev * continue; for (j = 0; j < num_pins; j++) { if (offset == pins[j]) - return &pmx->soc->groups[group]; + return group; } }
- dev_err(pctldev->dev, "Pingroup not found for pin %u\n", offset); - return NULL; + return -EINVAL; +} + +static const struct tegra_pingroup *tegra_pinctrl_get_group(struct pinctrl_dev *pctldev, + unsigned int offset, + int group_index) +{ + struct tegra_pmx *pmx = pinctrl_dev_get_drvdata(pctldev); + + if (group_index < 0 || group_index > pmx->soc->ngroups) + return NULL; + + return &pmx->soc->groups[group_index]; +} + +static struct tegra_pingroup_config *tegra_pinctrl_get_group_config(struct pinctrl_dev *pctldev, + unsigned int offset, + int group_index) +{ + struct tegra_pmx *pmx = pinctrl_dev_get_drvdata(pctldev); + + if (group_index < 0) + return NULL; + + return &pmx->pingroup_configs[group_index]; }
static int tegra_pinctrl_gpio_request_enable(struct pinctrl_dev *pctldev, @@ -306,12 +329,15 @@ static int tegra_pinctrl_gpio_request_enable(struct pinctrl_dev *pctldev, { struct tegra_pmx *pmx = pinctrl_dev_get_drvdata(pctldev); const struct tegra_pingroup *group; + struct tegra_pingroup_config *config; + int group_index; u32 value;
if (!pmx->soc->sfsel_in_mux) return 0;
- group = tegra_pinctrl_get_group(pctldev, offset); + group_index = tegra_pinctrl_get_group_index(pctldev, offset); + group = tegra_pinctrl_get_group(pctldev, offset, group_index);
if (!group) return -EINVAL; @@ -319,7 +345,11 @@ static int tegra_pinctrl_gpio_request_enable(struct pinctrl_dev *pctldev, if (group->mux_reg < 0 || group->sfsel_bit < 0) return -EINVAL;
+ config = tegra_pinctrl_get_group_config(pctldev, offset, group_index); + if (!config) + return -EINVAL; value = pmx_readl(pmx, group->mux_bank, group->mux_reg); + config->is_sfsel = (value & BIT(group->sfsel_bit)) != 0; value &= ~BIT(group->sfsel_bit); pmx_writel(pmx, value, group->mux_bank, group->mux_reg);
@@ -332,12 +362,15 @@ static void tegra_pinctrl_gpio_disable_free(struct pinctrl_dev *pctldev, { struct tegra_pmx *pmx = pinctrl_dev_get_drvdata(pctldev); const struct tegra_pingroup *group; + struct tegra_pingroup_config *config; + int group_index; u32 value;
if (!pmx->soc->sfsel_in_mux) return;
- group = tegra_pinctrl_get_group(pctldev, offset); + group_index = tegra_pinctrl_get_group_index(pctldev, offset); + group = tegra_pinctrl_get_group(pctldev, offset, group_index);
if (!group) return; @@ -345,8 +378,12 @@ static void tegra_pinctrl_gpio_disable_free(struct pinctrl_dev *pctldev, if (group->mux_reg < 0 || group->sfsel_bit < 0) return;
+ config = tegra_pinctrl_get_group_config(pctldev, offset, group_index); + if (!config) + return; value = pmx_readl(pmx, group->mux_bank, group->mux_reg); - value |= BIT(group->sfsel_bit); + if (config->is_sfsel) + value |= BIT(group->sfsel_bit); pmx_writel(pmx, value, group->mux_bank, group->mux_reg); }
@@ -799,6 +836,12 @@ int tegra_pinctrl_probe(struct platform_device *pdev, pmx->dev = &pdev->dev; pmx->soc = soc_data;
+ pmx->pingroup_configs = devm_kcalloc(&pdev->dev, + pmx->soc->ngroups, sizeof(*pmx->pingroup_configs), + GFP_KERNEL); + if (!pmx->pingroup_configs) + return -ENOMEM; + /* * Each mux group will appear in 4 functions' list of groups. * This over-allocates slightly, since not all groups are mux groups. diff --git a/drivers/pinctrl/tegra/pinctrl-tegra.h b/drivers/pinctrl/tegra/pinctrl-tegra.h index f8269858eb78a..ec5198d391ea5 100644 --- a/drivers/pinctrl/tegra/pinctrl-tegra.h +++ b/drivers/pinctrl/tegra/pinctrl-tegra.h @@ -8,6 +8,10 @@ #ifndef __PINMUX_TEGRA_H__ #define __PINMUX_TEGRA_H__
+struct tegra_pingroup_config { + bool is_sfsel; +}; + struct tegra_pmx { struct device *dev; struct pinctrl_dev *pctl; @@ -18,6 +22,8 @@ struct tegra_pmx { int nbanks; void __iomem **regs; u32 *backup_regs; + /* Array of size soc->ngroups */ + struct tegra_pingroup_config *pingroup_configs; };
enum tegra_pinconf_param {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ryan Walklin ryan@testtoast.com
[ Upstream commit a149377c033afe6557c50892ebbfc0e8b7e2e253 ]
Add support for GPIO headphone detection with the hp-det-gpios property. In order for this to properly disable the path upon removal of headphones, the output must be labelled Headphone, which is a common sink in the driver.
Describe a headphone jack and detection GPIO in the driver, check for a corresponding device tree node, and enable jack detection in a new machine init function if described.
Signed-off-by: Chris Morgan macromorgan@hotmail.com Signed-off-by: Ryan Walklin ryan@testtoast.com
-- Changelog v1..v2: - Separate DAPM changes into separate patch and add rationale.
Tested-by: Philippe Simons simons.philippe@gmail.com Link: https://patch.msgid.link/20250214220247.10810-4-ryan@testtoast.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/sunxi/sun4i-codec.c | 53 +++++++++++++++++++++++++++++++++++ 1 file changed, 53 insertions(+)
diff --git a/sound/soc/sunxi/sun4i-codec.c b/sound/soc/sunxi/sun4i-codec.c index 835dc34043670..1e310958f8c08 100644 --- a/sound/soc/sunxi/sun4i-codec.c +++ b/sound/soc/sunxi/sun4i-codec.c @@ -25,6 +25,7 @@ #include <linux/gpio/consumer.h>
#include <sound/core.h> +#include <sound/jack.h> #include <sound/pcm.h> #include <sound/pcm_params.h> #include <sound/soc.h> @@ -239,6 +240,7 @@ struct sun4i_codec { struct clk *clk_module; struct reset_control *rst; struct gpio_desc *gpio_pa; + struct gpio_desc *gpio_hp;
/* ADC_FIFOC register is at different offset on different SoCs */ struct regmap_field *reg_adc_fifoc; @@ -1273,6 +1275,49 @@ static struct snd_soc_dai_driver dummy_cpu_dai = { }, };
+static struct snd_soc_jack sun4i_headphone_jack; + +static struct snd_soc_jack_pin sun4i_headphone_jack_pins[] = { + { .pin = "Headphone", .mask = SND_JACK_HEADPHONE }, +}; + +static struct snd_soc_jack_gpio sun4i_headphone_jack_gpio = { + .name = "hp-det", + .report = SND_JACK_HEADPHONE, + .debounce_time = 150, +}; + +static int sun4i_codec_machine_init(struct snd_soc_pcm_runtime *rtd) +{ + struct snd_soc_card *card = rtd->card; + struct sun4i_codec *scodec = snd_soc_card_get_drvdata(card); + int ret; + + if (scodec->gpio_hp) { + ret = snd_soc_card_jack_new_pins(card, "Headphone Jack", + SND_JACK_HEADPHONE, + &sun4i_headphone_jack, + sun4i_headphone_jack_pins, + ARRAY_SIZE(sun4i_headphone_jack_pins)); + if (ret) { + dev_err(rtd->dev, + "Headphone jack creation failed: %d\n", ret); + return ret; + } + + sun4i_headphone_jack_gpio.desc = scodec->gpio_hp; + ret = snd_soc_jack_add_gpios(&sun4i_headphone_jack, 1, + &sun4i_headphone_jack_gpio); + + if (ret) { + dev_err(rtd->dev, "Headphone GPIO not added: %d\n", ret); + return ret; + } + } + + return 0; +} + static struct snd_soc_dai_link *sun4i_codec_create_link(struct device *dev, int *num_links) { @@ -1298,6 +1343,7 @@ static struct snd_soc_dai_link *sun4i_codec_create_link(struct device *dev, link->codecs->name = dev_name(dev); link->platforms->name = dev_name(dev); link->dai_fmt = SND_SOC_DAIFMT_I2S; + link->init = sun4i_codec_machine_init;
*num_links = 1;
@@ -1738,6 +1784,13 @@ static int sun4i_codec_probe(struct platform_device *pdev) return ret; }
+ scodec->gpio_hp = devm_gpiod_get_optional(&pdev->dev, "hp-det", GPIOD_IN); + if (IS_ERR(scodec->gpio_hp)) { + ret = PTR_ERR(scodec->gpio_hp); + dev_err_probe(&pdev->dev, ret, "Failed to get hp-det gpio\n"); + return ret; + } + /* reg_field setup */ scodec->reg_adc_fifoc = devm_regmap_field_alloc(&pdev->dev, scodec->regmap,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Baokun Li libaokun1@huawei.com
[ Upstream commit 26343ca0df715097065b02a6cddb4a029d5b9327 ]
data_err=abort aborts the journal on I/O errors. However, this option is meaningless if the journal is disabled, so it is rejected in nojournal mode to reduce unnecessary checks. Also, this option is ignored upon remount.
Signed-off-by: Baokun Li libaokun1@huawei.com Reviewed-by: Zhang Yi yi.zhang@huawei.com Reviewed-by: Jan Kara jack@suse.cz Link: https://patch.msgid.link/20250122110533.4116662-4-libaokun@huaweicloud.com Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ext4/super.c | 12 ++++++++++++ 1 file changed, 12 insertions(+)
diff --git a/fs/ext4/super.c b/fs/ext4/super.c index 7f0231b349057..f829f989f2b59 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -2741,6 +2741,13 @@ static int ext4_check_opt_consistency(struct fs_context *fc, }
if (is_remount) { + if (!sbi->s_journal && + ctx_test_mount_opt(ctx, EXT4_MOUNT_DATA_ERR_ABORT)) { + ext4_msg(NULL, KERN_WARNING, + "Remounting fs w/o journal so ignoring data_err option"); + ctx_clear_mount_opt(ctx, EXT4_MOUNT_DATA_ERR_ABORT); + } + if (ctx_test_mount_opt(ctx, EXT4_MOUNT_DAX_ALWAYS) && (test_opt(sb, DATA_FLAGS) == EXT4_MOUNT_JOURNAL_DATA)) { ext4_msg(NULL, KERN_ERR, "can't mount with " @@ -5318,6 +5325,11 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb) "data=, fs mounted w/o journal"); goto failed_mount3a; } + if (test_opt(sb, DATA_ERR_ABORT)) { + ext4_msg(sb, KERN_ERR, + "can't mount with data_err=abort, fs mounted w/o journal"); + goto failed_mount3a; + } sbi->s_def_mount_opt &= ~EXT4_MOUNT_JOURNAL_CHECKSUM; clear_opt(sb, JOURNAL_CHECKSUM); clear_opt(sb, DATA_FLAGS);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Maher Sanalla msanalla@nvidia.com
[ Upstream commit 81f8f7454ad9e0bf95efdec6542afdc9a6ab1e24 ]
Currently, the IB uverbs API calls uobj_get_uobj_read(), which in turn uses the rdma_lookup_get_uobject() helper to retrieve user objects. In case of failure, uobj_get_uobj_read() returns NULL, overriding the error code from rdma_lookup_get_uobject(). The IB uverbs API then translates this NULL to -EINVAL, masking the actual error and complicating debugging. For example, an application calling ibv_modify_qp that fails with EBUSY when retrieving the QP uobject will see the overridden error code EINVAL instead, masking the actual error.
Furthermore, based on rdma-core commit "2a22f1ced5f3 ("Merge pull request #1568 from jakemoroni/master")", the kernel's IB uverbs return values are either ignored and passed on as-is to the application, or overridden with other errnos in a few cases.
Thus, to improve error reporting and debuggability, propagate the original error from rdma_lookup_get_uobject() instead of replacing it with EINVAL.
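The change leans on the kernel's ERR_PTR convention, where a negative errno is encoded in the returned pointer instead of being collapsed into NULL. A self-contained sketch of that convention (the helpers are re-implemented here for illustration only):
```
#include <errno.h>
#include <stdio.h>

/* Simplified userspace re-implementations of the kernel helpers. */
static inline void *ERR_PTR(long err)     { return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline int IS_ERR(const void *p)   { return (unsigned long)p >= (unsigned long)-4095; }

static void *lookup_object(int handle)
{
	if (handle == 42)
		return ERR_PTR(-EBUSY);    /* object exists but is busy */
	return ERR_PTR(-ENOENT);           /* no such object */
}

static int modify_object(int handle)
{
	void *obj = lookup_object(handle);

	/* Propagate the real errno instead of flattening it to -EINVAL. */
	if (IS_ERR(obj))
		return PTR_ERR(obj);
	return 0;
}

int main(void)
{
	printf("modify(42) -> %d (EBUSY is %d)\n", modify_object(42), -EBUSY);
	printf("modify(7)  -> %d (ENOENT is %d)\n", modify_object(7), -ENOENT);
	return 0;
}
```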
Signed-off-by: Maher Sanalla msanalla@nvidia.com Link: https://patch.msgid.link/64f9d3711b183984e939962c2f83383904f97dfb.1740577869... Signed-off-by: Leon Romanovsky leon@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/core/uverbs_cmd.c | 144 ++++++++++++++------------- include/rdma/uverbs_std_types.h | 2 +- 2 files changed, 77 insertions(+), 69 deletions(-)
diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c index c6053e82ecf6f..33e2fe0facd52 100644 --- a/drivers/infiniband/core/uverbs_cmd.c +++ b/drivers/infiniband/core/uverbs_cmd.c @@ -718,8 +718,8 @@ static int ib_uverbs_reg_mr(struct uverbs_attr_bundle *attrs) goto err_free;
pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, attrs); - if (!pd) { - ret = -EINVAL; + if (IS_ERR(pd)) { + ret = PTR_ERR(pd); goto err_free; }
@@ -809,8 +809,8 @@ static int ib_uverbs_rereg_mr(struct uverbs_attr_bundle *attrs) if (cmd.flags & IB_MR_REREG_PD) { new_pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, attrs); - if (!new_pd) { - ret = -EINVAL; + if (IS_ERR(new_pd)) { + ret = PTR_ERR(new_pd); goto put_uobjs; } } else { @@ -919,8 +919,8 @@ static int ib_uverbs_alloc_mw(struct uverbs_attr_bundle *attrs) return PTR_ERR(uobj);
pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, attrs); - if (!pd) { - ret = -EINVAL; + if (IS_ERR(pd)) { + ret = PTR_ERR(pd); goto err_free; }
@@ -1127,8 +1127,8 @@ static int ib_uverbs_resize_cq(struct uverbs_attr_bundle *attrs) return ret;
cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, attrs); - if (!cq) - return -EINVAL; + if (IS_ERR(cq)) + return PTR_ERR(cq);
ret = cq->device->ops.resize_cq(cq, cmd.cqe, &attrs->driver_udata); if (ret) @@ -1189,8 +1189,8 @@ static int ib_uverbs_poll_cq(struct uverbs_attr_bundle *attrs) return ret;
cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, attrs); - if (!cq) - return -EINVAL; + if (IS_ERR(cq)) + return PTR_ERR(cq);
/* we copy a struct ib_uverbs_poll_cq_resp to user space */ header_ptr = attrs->ucore.outbuf; @@ -1238,8 +1238,8 @@ static int ib_uverbs_req_notify_cq(struct uverbs_attr_bundle *attrs) return ret;
cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, attrs); - if (!cq) - return -EINVAL; + if (IS_ERR(cq)) + return PTR_ERR(cq);
ib_req_notify_cq(cq, cmd.solicited_only ? IB_CQ_SOLICITED : IB_CQ_NEXT_COMP); @@ -1321,8 +1321,8 @@ static int create_qp(struct uverbs_attr_bundle *attrs, ind_tbl = uobj_get_obj_read(rwq_ind_table, UVERBS_OBJECT_RWQ_IND_TBL, cmd->rwq_ind_tbl_handle, attrs); - if (!ind_tbl) { - ret = -EINVAL; + if (IS_ERR(ind_tbl)) { + ret = PTR_ERR(ind_tbl); goto err_put; }
@@ -1360,8 +1360,10 @@ static int create_qp(struct uverbs_attr_bundle *attrs, if (cmd->is_srq) { srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd->srq_handle, attrs); - if (!srq || srq->srq_type == IB_SRQT_XRC) { - ret = -EINVAL; + if (IS_ERR(srq) || + srq->srq_type == IB_SRQT_XRC) { + ret = IS_ERR(srq) ? PTR_ERR(srq) : + -EINVAL; goto err_put; } } @@ -1371,23 +1373,29 @@ static int create_qp(struct uverbs_attr_bundle *attrs, rcq = uobj_get_obj_read( cq, UVERBS_OBJECT_CQ, cmd->recv_cq_handle, attrs); - if (!rcq) { - ret = -EINVAL; + if (IS_ERR(rcq)) { + ret = PTR_ERR(rcq); goto err_put; } } } }
- if (has_sq) + if (has_sq) { scq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd->send_cq_handle, attrs); + if (IS_ERR(scq)) { + ret = PTR_ERR(scq); + goto err_put; + } + } + if (!ind_tbl && cmd->qp_type != IB_QPT_XRC_INI) rcq = rcq ?: scq; pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd->pd_handle, attrs); - if (!pd || (!scq && has_sq)) { - ret = -EINVAL; + if (IS_ERR(pd)) { + ret = PTR_ERR(pd); goto err_put; }
@@ -1482,18 +1490,18 @@ static int create_qp(struct uverbs_attr_bundle *attrs, err_put: if (!IS_ERR(xrcd_uobj)) uobj_put_read(xrcd_uobj); - if (pd) + if (!IS_ERR_OR_NULL(pd)) uobj_put_obj_read(pd); - if (scq) + if (!IS_ERR_OR_NULL(scq)) rdma_lookup_put_uobject(&scq->uobject->uevent.uobject, UVERBS_LOOKUP_READ); - if (rcq && rcq != scq) + if (!IS_ERR_OR_NULL(rcq) && rcq != scq) rdma_lookup_put_uobject(&rcq->uobject->uevent.uobject, UVERBS_LOOKUP_READ); - if (srq) + if (!IS_ERR_OR_NULL(srq)) rdma_lookup_put_uobject(&srq->uobject->uevent.uobject, UVERBS_LOOKUP_READ); - if (ind_tbl) + if (!IS_ERR_OR_NULL(ind_tbl)) uobj_put_obj_read(ind_tbl);
uobj_alloc_abort(&obj->uevent.uobject, attrs); @@ -1655,8 +1663,8 @@ static int ib_uverbs_query_qp(struct uverbs_attr_bundle *attrs) }
qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs); - if (!qp) { - ret = -EINVAL; + if (IS_ERR(qp)) { + ret = PTR_ERR(qp); goto out; }
@@ -1761,8 +1769,8 @@ static int modify_qp(struct uverbs_attr_bundle *attrs,
qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd->base.qp_handle, attrs); - if (!qp) { - ret = -EINVAL; + if (IS_ERR(qp)) { + ret = PTR_ERR(qp); goto out; }
@@ -2027,8 +2035,8 @@ static int ib_uverbs_post_send(struct uverbs_attr_bundle *attrs) return -ENOMEM;
qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs); - if (!qp) { - ret = -EINVAL; + if (IS_ERR(qp)) { + ret = PTR_ERR(qp); goto out; }
@@ -2065,9 +2073,9 @@ static int ib_uverbs_post_send(struct uverbs_attr_bundle *attrs)
ud->ah = uobj_get_obj_read(ah, UVERBS_OBJECT_AH, user_wr->wr.ud.ah, attrs); - if (!ud->ah) { + if (IS_ERR(ud->ah)) { + ret = PTR_ERR(ud->ah); kfree(ud); - ret = -EINVAL; goto out_put; } ud->remote_qpn = user_wr->wr.ud.remote_qpn; @@ -2304,8 +2312,8 @@ static int ib_uverbs_post_recv(struct uverbs_attr_bundle *attrs) return PTR_ERR(wr);
qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs); - if (!qp) { - ret = -EINVAL; + if (IS_ERR(qp)) { + ret = PTR_ERR(qp); goto out; }
@@ -2355,8 +2363,8 @@ static int ib_uverbs_post_srq_recv(struct uverbs_attr_bundle *attrs) return PTR_ERR(wr);
srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd.srq_handle, attrs); - if (!srq) { - ret = -EINVAL; + if (IS_ERR(srq)) { + ret = PTR_ERR(srq); goto out; }
@@ -2412,8 +2420,8 @@ static int ib_uverbs_create_ah(struct uverbs_attr_bundle *attrs) }
pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, attrs); - if (!pd) { - ret = -EINVAL; + if (IS_ERR(pd)) { + ret = PTR_ERR(pd); goto err; }
@@ -2482,8 +2490,8 @@ static int ib_uverbs_attach_mcast(struct uverbs_attr_bundle *attrs) return ret;
qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs); - if (!qp) - return -EINVAL; + if (IS_ERR(qp)) + return PTR_ERR(qp);
obj = qp->uobject;
@@ -2532,8 +2540,8 @@ static int ib_uverbs_detach_mcast(struct uverbs_attr_bundle *attrs) return ret;
qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs); - if (!qp) - return -EINVAL; + if (IS_ERR(qp)) + return PTR_ERR(qp);
obj = qp->uobject; mutex_lock(&obj->mcast_lock); @@ -2667,8 +2675,8 @@ static int kern_spec_to_ib_spec_action(struct uverbs_attr_bundle *attrs, UVERBS_OBJECT_FLOW_ACTION, kern_spec->action.handle, attrs); - if (!ib_spec->action.act) - return -EINVAL; + if (IS_ERR(ib_spec->action.act)) + return PTR_ERR(ib_spec->action.act); ib_spec->action.size = sizeof(struct ib_flow_spec_action_handle); flow_resources_add(uflow_res, @@ -2685,8 +2693,8 @@ static int kern_spec_to_ib_spec_action(struct uverbs_attr_bundle *attrs, UVERBS_OBJECT_COUNTERS, kern_spec->flow_count.handle, attrs); - if (!ib_spec->flow_count.counters) - return -EINVAL; + if (IS_ERR(ib_spec->flow_count.counters)) + return PTR_ERR(ib_spec->flow_count.counters); ib_spec->flow_count.size = sizeof(struct ib_flow_spec_action_count); flow_resources_add(uflow_res, @@ -2904,14 +2912,14 @@ static int ib_uverbs_ex_create_wq(struct uverbs_attr_bundle *attrs) return PTR_ERR(obj);
pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, attrs); - if (!pd) { - err = -EINVAL; + if (IS_ERR(pd)) { + err = PTR_ERR(pd); goto err_uobj; }
cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, attrs); - if (!cq) { - err = -EINVAL; + if (IS_ERR(cq)) { + err = PTR_ERR(cq); goto err_put_pd; }
@@ -3012,8 +3020,8 @@ static int ib_uverbs_ex_modify_wq(struct uverbs_attr_bundle *attrs) return -EINVAL;
wq = uobj_get_obj_read(wq, UVERBS_OBJECT_WQ, cmd.wq_handle, attrs); - if (!wq) - return -EINVAL; + if (IS_ERR(wq)) + return PTR_ERR(wq);
if (cmd.attr_mask & IB_WQ_FLAGS) { wq_attr.flags = cmd.flags; @@ -3096,8 +3104,8 @@ static int ib_uverbs_ex_create_rwq_ind_table(struct uverbs_attr_bundle *attrs) num_read_wqs++) { wq = uobj_get_obj_read(wq, UVERBS_OBJECT_WQ, wqs_handles[num_read_wqs], attrs); - if (!wq) { - err = -EINVAL; + if (IS_ERR(wq)) { + err = PTR_ERR(wq); goto put_wqs; }
@@ -3252,8 +3260,8 @@ static int ib_uverbs_ex_create_flow(struct uverbs_attr_bundle *attrs) }
qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs); - if (!qp) { - err = -EINVAL; + if (IS_ERR(qp)) { + err = PTR_ERR(qp); goto err_uobj; }
@@ -3399,15 +3407,15 @@ static int __uverbs_create_xsrq(struct uverbs_attr_bundle *attrs, if (ib_srq_has_cq(cmd->srq_type)) { attr.ext.cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd->cq_handle, attrs); - if (!attr.ext.cq) { - ret = -EINVAL; + if (IS_ERR(attr.ext.cq)) { + ret = PTR_ERR(attr.ext.cq); goto err_put_xrcd; } }
pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd->pd_handle, attrs); - if (!pd) { - ret = -EINVAL; + if (IS_ERR(pd)) { + ret = PTR_ERR(pd); goto err_put_cq; }
@@ -3514,8 +3522,8 @@ static int ib_uverbs_modify_srq(struct uverbs_attr_bundle *attrs) return ret;
srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd.srq_handle, attrs); - if (!srq) - return -EINVAL; + if (IS_ERR(srq)) + return PTR_ERR(srq);
attr.max_wr = cmd.max_wr; attr.srq_limit = cmd.srq_limit; @@ -3542,8 +3550,8 @@ static int ib_uverbs_query_srq(struct uverbs_attr_bundle *attrs) return ret;
srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd.srq_handle, attrs); - if (!srq) - return -EINVAL; + if (IS_ERR(srq)) + return PTR_ERR(srq);
ret = ib_query_srq(srq, &attr);
@@ -3668,8 +3676,8 @@ static int ib_uverbs_ex_modify_cq(struct uverbs_attr_bundle *attrs) return -EOPNOTSUPP;
cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, attrs); - if (!cq) - return -EINVAL; + if (IS_ERR(cq)) + return PTR_ERR(cq);
ret = rdma_set_cq_moderation(cq, cmd.attr.cq_count, cmd.attr.cq_period);
diff --git a/include/rdma/uverbs_std_types.h b/include/rdma/uverbs_std_types.h index fe05121169589..555ea3d142a46 100644 --- a/include/rdma/uverbs_std_types.h +++ b/include/rdma/uverbs_std_types.h @@ -34,7 +34,7 @@ static inline void *_uobj_get_obj_read(struct ib_uobject *uobj) { if (IS_ERR(uobj)) - return NULL; + return ERR_CAST(uobj); return uobj->object; } #define uobj_get_obj_read(_object, _type, _id, _attrs) \
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet edumazet@google.com
[ Upstream commit 5f2909c6cd13564a07ae692a95457f52295c4f22 ]
With a large number of POSIX timers the search for a valid ID might cause a soft lockup on PREEMPT_NONE/VOLUNTARY kernels.
Add cond_resched() to the loop to prevent that.
[ tglx: Split out from Eric's series ]
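The fix follows the usual pattern of yielding voluntarily inside a potentially long search loop. A rough userspace analogue, with sched_yield() standing in for cond_resched() and an invented predicate for a free slot:
```
#include <sched.h>
#include <stdio.h>

/* Pretend this scans a large ID space for a free slot. */
static long find_free_id(long start, long end)
{
	for (long id = start; id < end; id++) {
		if (id % 1000003 == 0)     /* stand-in for "this slot is free" */
			return id;
		/*
		 * Without a resched point, a loop like this can hog the CPU
		 * on PREEMPT_NONE/VOLUNTARY kernels; cond_resched() lets
		 * other tasks run. sched_yield() plays that role here.
		 */
		sched_yield();
	}
	return -1;
}

int main(void)
{
	printf("found id %ld\n", find_free_id(1, 10000000));
	return 0;
}
```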
Signed-off-by: Eric Dumazet edumazet@google.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Reviewed-by: Frederic Weisbecker frederic@kernel.org Link: https://lore.kernel.org/all/20250214135911.2037402-2-edumazet@google.com Link: https://lore.kernel.org/all/20250308155623.635612865@linutronix.de Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/time/posix-timers.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c index 2d6cf93ca370a..fc08d4ccdeeb9 100644 --- a/kernel/time/posix-timers.c +++ b/kernel/time/posix-timers.c @@ -161,6 +161,7 @@ static int posix_timer_add(struct k_itimer *timer) return id; } spin_unlock(&hash_lock); + cond_resched(); } /* POSIX return code when no timer ID could be allocated */ return -EAGAIN;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Weißschuh thomas.weissschuh@linutronix.de
[ Upstream commit a52067c24ccf6ee4c85acffa0f155e9714f9adce ]
This reverts commit f590308536db ("timer debug: Hide kernel addresses via %pK in /proc/timer_list")
The timer list helper SEQ_printf() uses either the real seq_printf() for procfs output or vprintk() to print to the kernel log, when invoked from SysRq-q. It uses %pK for printing pointers.
In the past %pK was preferred over %p as it would not leak raw pointer values into the kernel log. Since commit ad67b74d2469 ("printk: hash addresses printed with %p") the regular %p has been improved to avoid this issue.
Furthermore, restricted pointers ("%pK") were never meant to be used through printk(). They can still unintentionally leak raw pointers or acquire sleeping locks in atomic contexts.
Switch to the regular pointer formatting which is safer, easier to reason about and sufficient here.
Signed-off-by: Thomas Weißschuh thomas.weissschuh@linutronix.de Signed-off-by: Thomas Gleixner tglx@linutronix.de Link: https://lore.kernel.org/lkml/20250113171731-dc10e3c1-da64-4af0-b767-7c707046... Link: https://lore.kernel.org/all/20250311-restricted-pointers-timer-v1-1-6626b91e... Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/time/timer_list.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/time/timer_list.c b/kernel/time/timer_list.c index ed7d6ad694fba..20a5e6962b696 100644 --- a/kernel/time/timer_list.c +++ b/kernel/time/timer_list.c @@ -46,7 +46,7 @@ static void print_timer(struct seq_file *m, struct hrtimer *taddr, struct hrtimer *timer, int idx, u64 now) { - SEQ_printf(m, " #%d: <%pK>, %ps", idx, taddr, timer->function); + SEQ_printf(m, " #%d: <%p>, %ps", idx, taddr, timer->function); SEQ_printf(m, ", S:%02x", timer->state); SEQ_printf(m, "\n"); SEQ_printf(m, " # expires at %Lu-%Lu nsecs [in %Ld to %Ld nsecs]\n", @@ -98,7 +98,7 @@ print_active_timers(struct seq_file *m, struct hrtimer_clock_base *base, static void print_base(struct seq_file *m, struct hrtimer_clock_base *base, u64 now) { - SEQ_printf(m, " .base: %pK\n", base); + SEQ_printf(m, " .base: %p\n", base); SEQ_printf(m, " .index: %d\n", base->index);
SEQ_printf(m, " .resolution: %u nsecs\n", hrtimer_resolution);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicolas Bouchinet nicolas.bouchinet@ssi.gouv.fr
[ Upstream commit 8b6861390ffee6b8ed78b9395e3776c16fec6579 ]
The nf_conntrack_max and nf_conntrack_expect_max sysctls accepted any negative value, which would then be stored in the unsigned int variables nf_conntrack_max and nf_ct_expect_max.
The do_proc_dointvec_conv function is supposed to limit values written through the proc_dointvec proc_handler to INT_MAX, but such a negative value stored in an unsigned int wraps around to a very high value, exceeding this limit.
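The underlying pitfall is ordinary integer conversion: a negative int stored into an unsigned int wraps to a huge value. A minimal demonstration:
```
#include <stdio.h>

int main(void)
{
	int written = -1;               /* value delivered by the sysctl write */
	unsigned int nf_conntrack_max;  /* type of the variable backing the sysctl */

	/* Well-defined modular conversion: -1 becomes UINT_MAX. */
	nf_conntrack_max = written;
	printf("%d stored as unsigned becomes %u\n", written, nf_conntrack_max);
	return 0;
}
```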
Moreover, the nf_conntrack_expect_max sysctl documentation specifies the minimum value is 1.
The proc_handlers have thus been updated to proc_dointvec_minmax in order to specify the following write bounds:
* Bound nf_conntrack_max sysctl writes between SYSCTL_ZERO and SYSCTL_INT_MAX.
* Bound nf_conntrack_expect_max sysctl writes between SYSCTL_ONE and SYSCTL_INT_MAX as defined in the sysctl documentation.
With this patch applied, sysctl writes outside the defined bounds will thus lead to a write error:
``` sysctl -w net.netfilter.nf_conntrack_expect_max=-1 sysctl: setting key "net.netfilter.nf_conntrack_expect_max": Invalid argument ```
Signed-off-by: Nicolas Bouchinet nicolas.bouchinet@ssi.gouv.fr Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/netfilter/nf_conntrack_standalone.c | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/net/netfilter/nf_conntrack_standalone.c b/net/netfilter/nf_conntrack_standalone.c index 52245dbfae311..c333132e20799 100644 --- a/net/netfilter/nf_conntrack_standalone.c +++ b/net/netfilter/nf_conntrack_standalone.c @@ -631,7 +631,9 @@ static struct ctl_table nf_ct_sysctl_table[] = { .data = &nf_conntrack_max, .maxlen = sizeof(int), .mode = 0644, - .proc_handler = proc_dointvec, + .proc_handler = proc_dointvec_minmax, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_INT_MAX, }, [NF_SYSCTL_CT_COUNT] = { .procname = "nf_conntrack_count", @@ -667,7 +669,9 @@ static struct ctl_table nf_ct_sysctl_table[] = { .data = &nf_ct_expect_max, .maxlen = sizeof(int), .mode = 0644, - .proc_handler = proc_dointvec, + .proc_handler = proc_dointvec_minmax, + .extra1 = SYSCTL_ONE, + .extra2 = SYSCTL_INT_MAX, }, [NF_SYSCTL_CT_ACCT] = { .procname = "nf_conntrack_acct", @@ -970,7 +974,9 @@ static struct ctl_table nf_ct_netfilter_table[] = { .data = &nf_conntrack_max, .maxlen = sizeof(int), .mode = 0644, - .proc_handler = proc_dointvec, + .proc_handler = proc_dointvec_minmax, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_INT_MAX, }, { } };
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ryan Roberts ryan.roberts@arm.com
[ Upstream commit bfb1d2b9021c21891427acc86eb848ccedeb274e ]
pud_bad() is currently defined in terms of pud_table(), although for some configs pud_table() is hard-coded to true, i.e. when using 64K base pages or when there are fewer than 3 page table levels.
pud_bad() is intended to check that the pud is configured correctly. Hence let's open-code the same check that the full version of pud_table() uses into pud_bad(). Then it always performs the check regardless of the config.
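The open-coded version simply tests the descriptor type bits. A sketch with illustrative mask/value constants (the real PUD_TYPE_* definitions live in the arm64 headers):
```
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative values: assume bits [1:0] of a table descriptor read 0b11. */
#define PUD_TYPE_MASK	0x3ULL
#define PUD_TYPE_TABLE	0x3ULL

static bool pud_bad(uint64_t pud_val)
{
	/* "Bad" means: not a valid table descriptor. */
	return (pud_val & PUD_TYPE_MASK) != PUD_TYPE_TABLE;
}

int main(void)
{
	printf("table descriptor bad? %d\n", pud_bad(0x80000003ULL));  /* 0 */
	printf("block descriptor bad? %d\n", pud_bad(0x80000001ULL));  /* 1 */
	return 0;
}
```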
Cc: Will Deacon will@kernel.org Cc: Ard Biesheuvel ardb@kernel.org Cc: Ryan Roberts ryan.roberts@arm.com Cc: Mark Rutland mark.rutland@arm.com Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Ryan Roberts ryan.roberts@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com Link: https://lore.kernel.org/r/20250221044227.1145393-7-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/include/asm/pgtable.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 1d713cfb0af16..426c3cb3e3bb1 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -677,7 +677,8 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd) pr_err("%s:%d: bad pmd %016llx.\n", __FILE__, __LINE__, pmd_val(e))
#define pud_none(pud) (!pud_val(pud)) -#define pud_bad(pud) (!pud_table(pud)) +#define pud_bad(pud) ((pud_val(pud) & PUD_TYPE_MASK) != \ + PUD_TYPE_TABLE) #define pud_present(pud) pte_present(pud_pte(pud)) #define pud_leaf(pud) (pud_present(pud) && !pud_table(pud)) #define pud_valid(pud) pte_valid(pud_pte(pud))
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kaustabh Chakraborty kauschluss@disroot.org
[ Upstream commit 7cbe799ac10fd8be85af5e0615c4337f81e575f3 ]
Add support for Exynos7870 DW MMC controllers, for both SMU and non-SMU variants. These controllers require a quirk to access the 64-bit FIFO using 32-bit accesses (DW_MMC_QUIRK_FIFO64_32).
Signed-off-by: Kaustabh Chakraborty kauschluss@disroot.org Link: https://lore.kernel.org/r/20250219-exynos7870-mmc-v2-3-b4255a3e39ed@disroot.... Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mmc/host/dw_mmc-exynos.c | 41 +++++++++++++++++++++++++++++++- 1 file changed, 40 insertions(+), 1 deletion(-)
diff --git a/drivers/mmc/host/dw_mmc-exynos.c b/drivers/mmc/host/dw_mmc-exynos.c index 9f20ac524c8b8..2a5c3c822f6af 100644 --- a/drivers/mmc/host/dw_mmc-exynos.c +++ b/drivers/mmc/host/dw_mmc-exynos.c @@ -28,6 +28,8 @@ enum dw_mci_exynos_type { DW_MCI_TYPE_EXYNOS5420_SMU, DW_MCI_TYPE_EXYNOS7, DW_MCI_TYPE_EXYNOS7_SMU, + DW_MCI_TYPE_EXYNOS7870, + DW_MCI_TYPE_EXYNOS7870_SMU, DW_MCI_TYPE_ARTPEC8, };
@@ -70,6 +72,12 @@ static struct dw_mci_exynos_compatible { }, { .compatible = "samsung,exynos7-dw-mshc-smu", .ctrl_type = DW_MCI_TYPE_EXYNOS7_SMU, + }, { + .compatible = "samsung,exynos7870-dw-mshc", + .ctrl_type = DW_MCI_TYPE_EXYNOS7870, + }, { + .compatible = "samsung,exynos7870-dw-mshc-smu", + .ctrl_type = DW_MCI_TYPE_EXYNOS7870_SMU, }, { .compatible = "axis,artpec8-dw-mshc", .ctrl_type = DW_MCI_TYPE_ARTPEC8, @@ -86,6 +94,8 @@ static inline u8 dw_mci_exynos_get_ciu_div(struct dw_mci *host) return EXYNOS4210_FIXED_CIU_CLK_DIV; else if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 || priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU || priv->ctrl_type == DW_MCI_TYPE_ARTPEC8) return SDMMC_CLKSEL_GET_DIV(mci_readl(host, CLKSEL64)) + 1; else @@ -101,7 +111,8 @@ static void dw_mci_exynos_config_smu(struct dw_mci *host) * set for non-ecryption mode at this time. */ if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS5420_SMU || - priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU) { + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU) { mci_writel(host, MPSBEGIN0, 0); mci_writel(host, MPSEND0, SDMMC_ENDING_SEC_NR_MAX); mci_writel(host, MPSCTRL0, SDMMC_MPSCTRL_SECURE_WRITE_BIT | @@ -127,6 +138,12 @@ static int dw_mci_exynos_priv_init(struct dw_mci *host) DQS_CTRL_GET_RD_DELAY(priv->saved_strobe_ctrl); }
+ if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU) { + /* Quirk needed for certain Exynos SoCs */ + host->quirks |= DW_MMC_QUIRK_FIFO64_32; + } + if (priv->ctrl_type == DW_MCI_TYPE_ARTPEC8) { /* Quirk needed for the ARTPEC-8 SoC */ host->quirks |= DW_MMC_QUIRK_EXTENDED_TMOUT; @@ -144,6 +161,8 @@ static void dw_mci_exynos_set_clksel_timing(struct dw_mci *host, u32 timing)
if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 || priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU || priv->ctrl_type == DW_MCI_TYPE_ARTPEC8) clksel = mci_readl(host, CLKSEL64); else @@ -153,6 +172,8 @@ static void dw_mci_exynos_set_clksel_timing(struct dw_mci *host, u32 timing)
if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 || priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU || priv->ctrl_type == DW_MCI_TYPE_ARTPEC8) mci_writel(host, CLKSEL64, clksel); else @@ -223,6 +244,8 @@ static int dw_mci_exynos_resume_noirq(struct device *dev)
if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 || priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU || priv->ctrl_type == DW_MCI_TYPE_ARTPEC8) clksel = mci_readl(host, CLKSEL64); else @@ -231,6 +254,8 @@ static int dw_mci_exynos_resume_noirq(struct device *dev) if (clksel & SDMMC_CLKSEL_WAKEUP_INT) { if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 || priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU || priv->ctrl_type == DW_MCI_TYPE_ARTPEC8) mci_writel(host, CLKSEL64, clksel); else @@ -410,6 +435,8 @@ static inline u8 dw_mci_exynos_get_clksmpl(struct dw_mci *host)
if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 || priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU || priv->ctrl_type == DW_MCI_TYPE_ARTPEC8) return SDMMC_CLKSEL_CCLK_SAMPLE(mci_readl(host, CLKSEL64)); else @@ -423,6 +450,8 @@ static inline void dw_mci_exynos_set_clksmpl(struct dw_mci *host, u8 sample)
if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 || priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU || priv->ctrl_type == DW_MCI_TYPE_ARTPEC8) clksel = mci_readl(host, CLKSEL64); else @@ -430,6 +459,8 @@ static inline void dw_mci_exynos_set_clksmpl(struct dw_mci *host, u8 sample) clksel = SDMMC_CLKSEL_UP_SAMPLE(clksel, sample); if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 || priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU || priv->ctrl_type == DW_MCI_TYPE_ARTPEC8) mci_writel(host, CLKSEL64, clksel); else @@ -444,6 +475,8 @@ static inline u8 dw_mci_exynos_move_next_clksmpl(struct dw_mci *host)
if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 || priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU || priv->ctrl_type == DW_MCI_TYPE_ARTPEC8) clksel = mci_readl(host, CLKSEL64); else @@ -454,6 +487,8 @@ static inline u8 dw_mci_exynos_move_next_clksmpl(struct dw_mci *host)
if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 || priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 || + priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU || priv->ctrl_type == DW_MCI_TYPE_ARTPEC8) mci_writel(host, CLKSEL64, clksel); else @@ -633,6 +668,10 @@ static const struct of_device_id dw_mci_exynos_match[] = { .data = &exynos_drv_data, }, { .compatible = "samsung,exynos7-dw-mshc-smu", .data = &exynos_drv_data, }, + { .compatible = "samsung,exynos7870-dw-mshc", + .data = &exynos_drv_data, }, + { .compatible = "samsung,exynos7870-dw-mshc-smu", + .data = &exynos_drv_data, }, { .compatible = "axis,artpec8-dw-mshc", .data = &artpec_drv_data, }, {},
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Erick Shepherd erick.shepherd@ni.com
[ Upstream commit fb3bbc46c94f261b6156ee863c1b06c84cf157dc ]
Per the SD Host Controller Simplified Specification v4.20 §3.2.3, change the SD card clock parameters only after first disabling the external card clock. Doing this fixes a spurious clock pulse on Baytrail and Apollo Lake SD controllers which otherwise breaks voltage switching with a specific Swissbit SD card.
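In effect this is a gated read-modify-write: clear the card-clock enable bit first, then either leave the clock off (when the requested rate is zero) or reprogram the divider and re-enable it. A register-level sketch with invented helpers standing in for sdhci_readw()/sdhci_writew():
```
#include <stdint.h>
#include <stdio.h>

#define CLOCK_CARD_EN  (1u << 2)        /* illustrative bit position */

static uint16_t clock_ctrl = 0x0107;    /* pretend: card clock enabled, divider set */

static uint16_t reg_read(void)        { return clock_ctrl; }
static void     reg_write(uint16_t v) { clock_ctrl = v; printf("write 0x%04x\n", (unsigned int)v); }

static void set_clock(unsigned int hz)
{
	uint16_t clk = reg_read();

	/* Stop the card clock before touching any clock parameters. */
	if (clk & CLOCK_CARD_EN)
		reg_write(clk & ~CLOCK_CARD_EN);

	if (hz == 0) {
		reg_write(0);           /* clock requested off: leave it off */
		return;
	}

	/* ... compute a new divider and re-enable the clock here ... */
}

int main(void)
{
	set_clock(0);
	return 0;
}
```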
Signed-off-by: Kyle Roeschley kyle.roeschley@ni.com Signed-off-by: Brad Mouring brad.mouring@ni.com Signed-off-by: Erick Shepherd erick.shepherd@ni.com Acked-by: Adrian Hunter adrian.hunter@intel.com Link: https://lore.kernel.org/r/20250211214645.469279-1-erick.shepherd@ni.com Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mmc/host/sdhci.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c index 536d21028a116..6822a3249286c 100644 --- a/drivers/mmc/host/sdhci.c +++ b/drivers/mmc/host/sdhci.c @@ -2049,10 +2049,15 @@ void sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
host->mmc->actual_clock = 0;
- sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL); + clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL); + if (clk & SDHCI_CLOCK_CARD_EN) + sdhci_writew(host, clk & ~SDHCI_CLOCK_CARD_EN, + SDHCI_CLOCK_CONTROL);
- if (clock == 0) + if (clock == 0) { + sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL); return; + }
clk = sdhci_calc_clk(host, clock, &host->mmc->actual_clock); sdhci_enable_clk(host, clk);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kurt Borja kuurtb@gmail.com
[ Upstream commit dbcfcb239b3b452ef8782842c36fb17dd1b9092f ]
Some Alienware laptops that support the SMM interface may have up to 4 fans.
Tested on an Alienware x15 r1.
Signed-off-by: Kurt Borja kuurtb@gmail.com Link: https://lore.kernel.org/r/20250304055249.51940-2-kuurtb@gmail.com Signed-off-by: Guenter Roeck linux@roeck-us.net Signed-off-by: Sasha Levin sashal@kernel.org --- Documentation/hwmon/dell-smm-hwmon.rst | 14 +++++++------- drivers/hwmon/dell-smm-hwmon.c | 5 ++++- 2 files changed, 11 insertions(+), 8 deletions(-)
diff --git a/Documentation/hwmon/dell-smm-hwmon.rst b/Documentation/hwmon/dell-smm-hwmon.rst index d8f1d6859b964..1c12fbba440bc 100644 --- a/Documentation/hwmon/dell-smm-hwmon.rst +++ b/Documentation/hwmon/dell-smm-hwmon.rst @@ -32,12 +32,12 @@ Temperature sensors and fans can be queried and set via the standard =============================== ======= ======================================= Name Perm Description =============================== ======= ======================================= -fan[1-3]_input RO Fan speed in RPM. -fan[1-3]_label RO Fan label. -fan[1-3]_min RO Minimal Fan speed in RPM -fan[1-3]_max RO Maximal Fan speed in RPM -fan[1-3]_target RO Expected Fan speed in RPM -pwm[1-3] RW Control the fan PWM duty-cycle. +fan[1-4]_input RO Fan speed in RPM. +fan[1-4]_label RO Fan label. +fan[1-4]_min RO Minimal Fan speed in RPM +fan[1-4]_max RO Maximal Fan speed in RPM +fan[1-4]_target RO Expected Fan speed in RPM +pwm[1-4] RW Control the fan PWM duty-cycle. pwm1_enable WO Enable or disable automatic BIOS fan control (not supported on all laptops, see below for details). @@ -93,7 +93,7 @@ Again, when you find new codes, we'd be happy to have your patches! ---------------------------
The driver also exports the fans as thermal cooling devices with -``type`` set to ``dell-smm-fan[1-3]``. This allows for easy fan control +``type`` set to ``dell-smm-fan[1-4]``. This allows for easy fan control using one of the thermal governors.
Module parameters diff --git a/drivers/hwmon/dell-smm-hwmon.c b/drivers/hwmon/dell-smm-hwmon.c index 1572b54160158..dbcb8f362061d 100644 --- a/drivers/hwmon/dell-smm-hwmon.c +++ b/drivers/hwmon/dell-smm-hwmon.c @@ -67,7 +67,7 @@ #define I8K_POWER_BATTERY 0x01
#define DELL_SMM_NO_TEMP 10 -#define DELL_SMM_NO_FANS 3 +#define DELL_SMM_NO_FANS 4
struct dell_smm_data { struct mutex i8k_mutex; /* lock for sensors writes */ @@ -940,11 +940,14 @@ static const struct hwmon_channel_info *dell_smm_info[] = { HWMON_F_INPUT | HWMON_F_LABEL | HWMON_F_MIN | HWMON_F_MAX | HWMON_F_TARGET, HWMON_F_INPUT | HWMON_F_LABEL | HWMON_F_MIN | HWMON_F_MAX | + HWMON_F_TARGET, + HWMON_F_INPUT | HWMON_F_LABEL | HWMON_F_MIN | HWMON_F_MAX | HWMON_F_TARGET ), HWMON_CHANNEL_INFO(pwm, HWMON_PWM_INPUT | HWMON_PWM_ENABLE, HWMON_PWM_INPUT, + HWMON_PWM_INPUT, HWMON_PWM_INPUT ), NULL
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Willem de Bruijn willemb@google.com
[ Upstream commit a18dfa9925b9ef6107ea3aa5814ca3c704d34a8a ]
When spanning datagram construction over multiple send calls using MSG_MORE, per datagram settings are configured on the first send.
That is when ip(6)_setup_cork stores these settings for subsequent use in __ip(6)_append_data and others.
The only flag that escaped this was dontfrag. As a result, a datagram could be constructed with df=0 on the first sendmsg, but df=1 on the next, which is what cmsg_ip.sh does in an upcoming MSG_MORE test in the "diff" scenario.
Changing datagram conditions in the middle of constructing an skb makes this already complex code path even more convoluted, and here it is unintentional. Bring this flag in line with expected sockopt/cmsg behavior.
And stop passing ipc6 to __ip6_append_data, to avoid such issues in the future. This is already the case for __ip_append_data.
inet6_cork had a 6 byte hole, so the 1B flag has no impact.
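From userspace the expectation is simply that per-datagram settings are latched when the corked datagram is started. A minimal sketch of spanning one UDP datagram over two send calls with MSG_MORE (loopback discard port chosen only for illustration; error handling trimmed):
```
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in6 dst = { .sin6_family = AF_INET6,
				    .sin6_port   = htons(9) };   /* discard service */
	int on = 1;
	int fd = socket(AF_INET6, SOCK_DGRAM, 0);

	if (fd < 0)
		return 1;
	dst.sin6_addr = in6addr_loopback;
	connect(fd, (struct sockaddr *)&dst, sizeof(dst));

	/* Per-datagram setting, taken when the cork is set up on the first send. */
	setsockopt(fd, IPPROTO_IPV6, IPV6_DONTFRAG, &on, sizeof(on));

	send(fd, "part1", 5, MSG_MORE);   /* starts the corked datagram */
	send(fd, "part2", 5, 0);          /* completes and transmits it */

	close(fd);
	return 0;
}
```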
Signed-off-by: Willem de Bruijn willemb@google.com Reviewed-by: Eric Dumazet edumazet@google.com Link: https://patch.msgid.link/20250307033620.411611-3-willemdebruijn.kernel@gmail... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/ipv6.h | 1 + net/ipv6/ip6_output.c | 9 +++++---- 2 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h index 9a44de45cc1f2..9f27e004127bb 100644 --- a/include/linux/ipv6.h +++ b/include/linux/ipv6.h @@ -199,6 +199,7 @@ struct inet6_cork { struct ipv6_txoptions *opt; u8 hop_limit; u8 tclass; + u8 dontfrag:1; };
/** diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c index f7a225da8525b..cfc276e5a249f 100644 --- a/net/ipv6/ip6_output.c +++ b/net/ipv6/ip6_output.c @@ -1450,6 +1450,7 @@ static int ip6_setup_cork(struct sock *sk, struct inet_cork_full *cork, } v6_cork->hop_limit = ipc6->hlimit; v6_cork->tclass = ipc6->tclass; + v6_cork->dontfrag = ipc6->dontfrag; if (rt->dst.flags & DST_XFRM_TUNNEL) mtu = np->pmtudisc >= IPV6_PMTUDISC_PROBE ? READ_ONCE(rt->dst.dev->mtu) : dst_mtu(&rt->dst); @@ -1483,7 +1484,7 @@ static int __ip6_append_data(struct sock *sk, int getfrag(void *from, char *to, int offset, int len, int odd, struct sk_buff *skb), void *from, size_t length, int transhdrlen, - unsigned int flags, struct ipcm6_cookie *ipc6) + unsigned int flags) { struct sk_buff *skb, *skb_prev = NULL; struct inet_cork *cork = &cork_full->base; @@ -1539,7 +1540,7 @@ static int __ip6_append_data(struct sock *sk, if (headersize + transhdrlen > mtu) goto emsgsize;
- if (cork->length + length > mtu - headersize && ipc6->dontfrag && + if (cork->length + length > mtu - headersize && v6_cork->dontfrag && (sk->sk_protocol == IPPROTO_UDP || sk->sk_protocol == IPPROTO_ICMPV6 || sk->sk_protocol == IPPROTO_RAW)) { @@ -1884,7 +1885,7 @@ int ip6_append_data(struct sock *sk,
return __ip6_append_data(sk, &sk->sk_write_queue, &inet->cork, &np->cork, sk_page_frag(sk), getfrag, - from, length, transhdrlen, flags, ipc6); + from, length, transhdrlen, flags); } EXPORT_SYMBOL_GPL(ip6_append_data);
@@ -2089,7 +2090,7 @@ struct sk_buff *ip6_make_skb(struct sock *sk, err = __ip6_append_data(sk, &queue, cork, &v6_cork, ¤t->task_frag, getfrag, from, length + exthdrlen, transhdrlen + exthdrlen, - flags, ipc6); + flags); if (err) { __ip6_flush_pending_frames(sk, &queue, cork, &v6_cork); return ERR_PTR(err);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zhikai Zhai zhikai.zhai@amd.com
[ Upstream commit d3069feecdb5542604d29b59acfd1fd213bad95b ]
[WHY] If the non-top pipes are not included in the calculation, the remaining de-tile buffer segments can be greater than zero; the override de-tile buffer size is then valid and used, but it makes the de-tile buffer segments ultimately used across all pipes exceed the maximum.
[HOW] Include the non-top pipes when calculating the remaining de-tile buffer segments. If the value would exceed the maximum, don't set the override size, so that the average according to pipe count is used instead.
Reviewed-by: Charlene Liu charlene.liu@amd.com Signed-off-by: Zhikai Zhai zhikai.zhai@amd.com Signed-off-by: Tom Chung chiahsuan.chung@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../amd/display/dc/dcn315/dcn315_resource.c | 42 +++++++++---------- 1 file changed, 20 insertions(+), 22 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn315/dcn315_resource.c b/drivers/gpu/drm/amd/display/dc/dcn315/dcn315_resource.c index 958170fbfece7..9d643c79afea6 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn315/dcn315_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn315/dcn315_resource.c @@ -1717,7 +1717,7 @@ static int dcn315_populate_dml_pipes_from_context( pipes[pipe_cnt].dout.dsc_input_bpc = 0; DC_FP_START(); dcn31_zero_pipe_dcc_fraction(pipes, pipe_cnt); - if (pixel_rate_crb && !pipe->top_pipe && !pipe->prev_odm_pipe) { + if (pixel_rate_crb) { int bpp = source_format_to_bpp(pipes[pipe_cnt].pipe.src.source_format); /* Ceil to crb segment size */ int approx_det_segs_required_for_pstate = dcn_get_approx_det_segs_required_for_pstate( @@ -1768,28 +1768,26 @@ static int dcn315_populate_dml_pipes_from_context( continue; }
- if (!pipe->top_pipe && !pipe->prev_odm_pipe) { - bool split_required = pipe->stream->timing.pix_clk_100hz >= dcn_get_max_non_odm_pix_rate_100hz(&dc->dml.soc) - || (pipe->plane_state && pipe->plane_state->src_rect.width > 5120); - - if (remaining_det_segs > MIN_RESERVED_DET_SEGS && crb_pipes != 0) - pipes[pipe_cnt].pipe.src.det_size_override += (remaining_det_segs - MIN_RESERVED_DET_SEGS) / crb_pipes + - (crb_idx < (remaining_det_segs - MIN_RESERVED_DET_SEGS) % crb_pipes ? 1 : 0); - if (pipes[pipe_cnt].pipe.src.det_size_override > 2 * DCN3_15_MAX_DET_SEGS) { - /* Clamp to 2 pipe split max det segments */ - remaining_det_segs += pipes[pipe_cnt].pipe.src.det_size_override - 2 * (DCN3_15_MAX_DET_SEGS); - pipes[pipe_cnt].pipe.src.det_size_override = 2 * DCN3_15_MAX_DET_SEGS; - } - if (pipes[pipe_cnt].pipe.src.det_size_override > DCN3_15_MAX_DET_SEGS || split_required) { - /* If we are splitting we must have an even number of segments */ - remaining_det_segs += pipes[pipe_cnt].pipe.src.det_size_override % 2; - pipes[pipe_cnt].pipe.src.det_size_override -= pipes[pipe_cnt].pipe.src.det_size_override % 2; - } - /* Convert segments into size for DML use */ - pipes[pipe_cnt].pipe.src.det_size_override *= DCN3_15_CRB_SEGMENT_SIZE_KB; - - crb_idx++; + bool split_required = pipe->stream->timing.pix_clk_100hz >= dcn_get_max_non_odm_pix_rate_100hz(&dc->dml.soc) + || (pipe->plane_state && pipe->plane_state->src_rect.width > 5120); + + if (remaining_det_segs > MIN_RESERVED_DET_SEGS && crb_pipes != 0) + pipes[pipe_cnt].pipe.src.det_size_override += (remaining_det_segs - MIN_RESERVED_DET_SEGS) / crb_pipes + + (crb_idx < (remaining_det_segs - MIN_RESERVED_DET_SEGS) % crb_pipes ? 1 : 0); + if (pipes[pipe_cnt].pipe.src.det_size_override > 2 * DCN3_15_MAX_DET_SEGS) { + /* Clamp to 2 pipe split max det segments */ + remaining_det_segs += pipes[pipe_cnt].pipe.src.det_size_override - 2 * (DCN3_15_MAX_DET_SEGS); + pipes[pipe_cnt].pipe.src.det_size_override = 2 * DCN3_15_MAX_DET_SEGS; + } + if (pipes[pipe_cnt].pipe.src.det_size_override > DCN3_15_MAX_DET_SEGS || split_required) { + /* If we are splitting we must have an even number of segments */ + remaining_det_segs += pipes[pipe_cnt].pipe.src.det_size_override % 2; + pipes[pipe_cnt].pipe.src.det_size_override -= pipes[pipe_cnt].pipe.src.det_size_override % 2; } + /* Convert segments into size for DML use */ + pipes[pipe_cnt].pipe.src.det_size_override *= DCN3_15_CRB_SEGMENT_SIZE_KB; + + crb_idx++; pipe_cnt++; } }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andreas Gruenbacher agruenba@redhat.com
[ Upstream commit d838605fea6eabae3746a276fd448f6719eb3926 ]
In run_queue(), check if the queue of pending requests is empty instead of blindly assuming that it won't be.
Signed-off-by: Andreas Gruenbacher agruenba@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/gfs2/glock.c | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c index 6ba8460f53318..428c1db295fa1 100644 --- a/fs/gfs2/glock.c +++ b/fs/gfs2/glock.c @@ -885,11 +885,12 @@ static void run_queue(struct gfs2_glock *gl, const int nonblock) __releases(&gl->gl_lockref.lock) __acquires(&gl->gl_lockref.lock) { - struct gfs2_holder *gh = NULL; + struct gfs2_holder *gh;
if (test_and_set_bit(GLF_LOCK, &gl->gl_flags)) return;
+ /* While a demote is in progress, the GLF_LOCK flag must be set. */ GLOCK_BUG_ON(gl, test_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags));
if (test_bit(GLF_DEMOTE, &gl->gl_flags) && @@ -901,18 +902,22 @@ __acquires(&gl->gl_lockref.lock) set_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags); GLOCK_BUG_ON(gl, gl->gl_demote_state == LM_ST_EXCLUSIVE); gl->gl_target = gl->gl_demote_state; + do_xmote(gl, NULL, gl->gl_target); + return; } else { if (test_bit(GLF_DEMOTE, &gl->gl_flags)) gfs2_demote_wake(gl); if (do_promote(gl) == 0) goto out_unlock; gh = find_first_waiter(gl); + if (!gh) + goto out_unlock; gl->gl_target = gh->gh_state; if (!(gh->gh_flags & (LM_FLAG_TRY | LM_FLAG_TRY_1CB))) do_error(gl, 0); /* Fail queued try locks */ + do_xmote(gl, gh, gl->gl_target); + return; } - do_xmote(gl, gh, gl->gl_target); - return;
out_sched: clear_bit(GLF_LOCK, &gl->gl_flags);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Shevchenko andriy.shevchenko@linux.intel.com
[ Upstream commit 09965a142078080fe7807bab0f6f1890cb5987a4 ]
Commit 2545c1c948a6 ("auxdisplay: Move hwidth and bwidth to struct hd44780_common") makes charlcd_alloc() argument-less, effectively dropping the single allocation of the struct charlcd_priv object along with the driver-specific one. Restore that behaviour here.
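The restored behaviour is the familiar single-allocation pattern: one kzalloc() sized for the private struct plus a caller-requested tail, with drvdata pointing just past the private part. A userspace sketch of the layout (struct and field names are simplified stand-ins):
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct lcd {
	void *drvdata;                  /* points into the same allocation */
};

struct lcd_priv {
	struct lcd lcd;
	int esc_seq_len;
	unsigned char drvdata[] __attribute__((aligned(8)));  /* driver-specific tail */
};

static struct lcd *lcd_alloc(size_t drvdata_size)
{
	struct lcd_priv *priv = calloc(1, sizeof(*priv) + drvdata_size);

	if (!priv)
		return NULL;
	priv->esc_seq_len = -1;
	priv->lcd.drvdata = priv->drvdata;  /* one allocation serves both structs */
	return &priv->lcd;
}

int main(void)
{
	struct lcd *lcd = lcd_alloc(32);

	if (!lcd)
		return 1;
	memset(lcd->drvdata, 0xab, 32);     /* driver scribbles on its private tail */
	printf("lcd=%p drvdata=%p\n", (void *)lcd, lcd->drvdata);
	free(lcd);                          /* lcd is the first member of priv */
	return 0;
}
```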
Signed-off-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Reviewed-by: Geert Uytterhoeven geert@linux-m68k.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/auxdisplay/charlcd.c | 5 +++-- drivers/auxdisplay/charlcd.h | 5 +++-- drivers/auxdisplay/hd44780.c | 2 +- drivers/auxdisplay/lcd2s.c | 2 +- drivers/auxdisplay/panel.c | 2 +- 5 files changed, 9 insertions(+), 7 deletions(-)
diff --git a/drivers/auxdisplay/charlcd.c b/drivers/auxdisplay/charlcd.c index 6d309e4971b61..e243291a7e77c 100644 --- a/drivers/auxdisplay/charlcd.c +++ b/drivers/auxdisplay/charlcd.c @@ -594,18 +594,19 @@ static int charlcd_init(struct charlcd *lcd) return 0; }
-struct charlcd *charlcd_alloc(void) +struct charlcd *charlcd_alloc(unsigned int drvdata_size) { struct charlcd_priv *priv; struct charlcd *lcd;
- priv = kzalloc(sizeof(*priv), GFP_KERNEL); + priv = kzalloc(sizeof(*priv) + drvdata_size, GFP_KERNEL); if (!priv) return NULL;
priv->esc_seq.len = -1;
lcd = &priv->lcd; + lcd->drvdata = priv->drvdata;
return lcd; } diff --git a/drivers/auxdisplay/charlcd.h b/drivers/auxdisplay/charlcd.h index eed80063a6d20..4bbf106b2dd8a 100644 --- a/drivers/auxdisplay/charlcd.h +++ b/drivers/auxdisplay/charlcd.h @@ -49,7 +49,7 @@ struct charlcd { unsigned long y; } addr;
- void *drvdata; + void *drvdata; /* Set by charlcd_alloc() */ };
/** @@ -93,7 +93,8 @@ struct charlcd_ops { };
void charlcd_backlight(struct charlcd *lcd, enum charlcd_onoff on); -struct charlcd *charlcd_alloc(void); + +struct charlcd *charlcd_alloc(unsigned int drvdata_size); void charlcd_free(struct charlcd *lcd);
int charlcd_register(struct charlcd *lcd); diff --git a/drivers/auxdisplay/hd44780.c b/drivers/auxdisplay/hd44780.c index 8b690f59df27d..ebaf0ff518f4c 100644 --- a/drivers/auxdisplay/hd44780.c +++ b/drivers/auxdisplay/hd44780.c @@ -226,7 +226,7 @@ static int hd44780_probe(struct platform_device *pdev) if (!hdc) return -ENOMEM;
- lcd = charlcd_alloc(); + lcd = charlcd_alloc(0); if (!lcd) goto fail1;
diff --git a/drivers/auxdisplay/lcd2s.c b/drivers/auxdisplay/lcd2s.c index 135831a165149..2b597f226c0c0 100644 --- a/drivers/auxdisplay/lcd2s.c +++ b/drivers/auxdisplay/lcd2s.c @@ -307,7 +307,7 @@ static int lcd2s_i2c_probe(struct i2c_client *i2c) if (err < 0) return err;
- lcd = charlcd_alloc(); + lcd = charlcd_alloc(0); if (!lcd) return -ENOMEM;
diff --git a/drivers/auxdisplay/panel.c b/drivers/auxdisplay/panel.c index eba04c0de7eb3..0f3999b665e70 100644 --- a/drivers/auxdisplay/panel.c +++ b/drivers/auxdisplay/panel.c @@ -835,7 +835,7 @@ static void lcd_init(void) if (!hdc) return;
- charlcd = charlcd_alloc(); + charlcd = charlcd_alloc(0); if (!charlcd) { kfree(hdc); return;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexey Klimov alexey.klimov@linaro.org
[ Upstream commit 89be3c15a58b2ccf31e969223c8ac93ca8932d81 ]
Setting format to s16le is required for compressed playback on compatible soundcards.
Cc: Srinivas Kandagatla srinivas.kandagatla@linaro.org Signed-off-by: Alexey Klimov alexey.klimov@linaro.org Link: https://patch.msgid.link/20250228161430.373961-1-alexey.klimov@linaro.org Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/qcom/sm8250.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/sound/soc/qcom/sm8250.c b/sound/soc/qcom/sm8250.c index 41be09a07ca71..65e51b6b46ff9 100644 --- a/sound/soc/qcom/sm8250.c +++ b/sound/soc/qcom/sm8250.c @@ -7,6 +7,7 @@ #include <sound/soc.h> #include <sound/soc-dapm.h> #include <sound/pcm.h> +#include <sound/pcm_params.h> #include <linux/soundwire/sdw.h> #include <sound/jack.h> #include <linux/input-event-codes.h> @@ -39,9 +40,11 @@ static int sm8250_be_hw_params_fixup(struct snd_soc_pcm_runtime *rtd, SNDRV_PCM_HW_PARAM_RATE); struct snd_interval *channels = hw_param_interval(params, SNDRV_PCM_HW_PARAM_CHANNELS); + struct snd_mask *fmt = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT);
rate->min = rate->max = 48000; channels->min = channels->max = 2; + snd_mask_set_format(fmt, SNDRV_PCM_FORMAT_S16_LE);
return 0; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vasant Hegde vasant.hegde@amd.com
[ Upstream commit 36a1cfd497435ba5e37572fe9463bb62a7b1b984 ]
Return -ENOMEM if v2_alloc_pte() fails to allocate memory.
Signed-off-by: Vasant Hegde vasant.hegde@amd.com Reviewed-by: Jason Gunthorpe jgg@nvidia.com Link: https://lore.kernel.org/r/20250227162320.5805-4-vasant.hegde@amd.com Signed-off-by: Joerg Roedel jroedel@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/iommu/amd/io_pgtable_v2.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c index 232d17bd941fd..c86cbbc21e882 100644 --- a/drivers/iommu/amd/io_pgtable_v2.c +++ b/drivers/iommu/amd/io_pgtable_v2.c @@ -264,7 +264,7 @@ static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova, map_size = get_alloc_page_size(pgsize); pte = v2_alloc_pte(pdom->iop.pgd, iova, map_size, &updated); if (!pte) { - ret = -EINVAL; + ret = -ENOMEM; goto out; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Aaron Kling luceoscutum@gmail.com
[ Upstream commit be4ae8c19492cd6d5de61ccb34ffb3f5ede5eec8 ]
This functionally brings tegra186 in line with tegra210 and tegra194, sharing a cpufreq policy between all cores in a cluster.
Reviewed-by: Sumit Gupta sumitg@nvidia.com Acked-by: Thierry Reding treding@nvidia.com Signed-off-by: Aaron Kling webgeek1234@gmail.com Signed-off-by: Viresh Kumar viresh.kumar@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/cpufreq/tegra186-cpufreq.c | 7 +++++++ 1 file changed, 7 insertions(+)
diff --git a/drivers/cpufreq/tegra186-cpufreq.c b/drivers/cpufreq/tegra186-cpufreq.c index 6c88827f4e625..1d6b543037237 100644 --- a/drivers/cpufreq/tegra186-cpufreq.c +++ b/drivers/cpufreq/tegra186-cpufreq.c @@ -73,11 +73,18 @@ static int tegra186_cpufreq_init(struct cpufreq_policy *policy) { struct tegra186_cpufreq_data *data = cpufreq_get_driver_data(); unsigned int cluster = data->cpus[policy->cpu].bpmp_cluster_id; + u32 cpu;
policy->freq_table = data->clusters[cluster].table; policy->cpuinfo.transition_latency = 300 * 1000; policy->driver_data = NULL;
+ /* set same policy for all cpus in a cluster */ + for (cpu = 0; cpu < ARRAY_SIZE(tegra186_cpus); cpu++) { + if (data->cpus[cpu].bpmp_cluster_id == cluster) + cpumask_set_cpu(cpu, policy->cpus); + } + return 0; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Herbert Xu herbert@gondor.apana.org.au
[ Upstream commit cc47f07234f72cbd8e2c973cdbf2a6730660a463 ]
Unlike the decompression code, the compression code in LZO never checks for output overruns. It instead assumes that the caller always provides enough buffer space, disregarding the buffer length the caller passes in.
Add a safe compression interface that checks for the end of buffer before each write. Use the safe interface in crypto/lzo.
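The safe variant adds exactly one thing: before every write it checks that enough output space remains, and fails with an overrun error instead of writing past the buffer. A distilled sketch of such a bounds-checked output loop (a plain byte copy, not the LZO format itself):
```
#include <stddef.h>
#include <stdio.h>

#define E_OK              0
#define E_OUTPUT_OVERRUN  (-5)

/* Copy src into dst, but never write more than *dst_len bytes. */
static int copy_safe(const unsigned char *src, size_t src_len,
		     unsigned char *dst, size_t *dst_len)
{
	unsigned char *op = dst;
	unsigned char *op_end = dst + *dst_len;
	size_t i;

	for (i = 0; i < src_len; i++) {
		if (op_end - op < 1)           /* NEED_OP(1): room for one more byte? */
			return E_OUTPUT_OVERRUN;
		*op++ = src[i];
	}
	*dst_len = op - dst;                   /* report how much output was produced */
	return E_OK;
}

int main(void)
{
	unsigned char out[4];
	size_t out_len = sizeof(out);
	int ret = copy_safe((const unsigned char *)"hello", 5, out, &out_len);

	printf("ret=%d (overrun expected: %d)\n", ret, E_OUTPUT_OVERRUN);
	return 0;
}
```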
Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Sasha Levin sashal@kernel.org --- crypto/lzo-rle.c | 2 +- crypto/lzo.c | 2 +- include/linux/lzo.h | 8 +++ lib/lzo/Makefile | 2 +- lib/lzo/lzo1x_compress.c | 102 +++++++++++++++++++++++++--------- lib/lzo/lzo1x_compress_safe.c | 18 ++++++ 6 files changed, 106 insertions(+), 28 deletions(-) create mode 100644 lib/lzo/lzo1x_compress_safe.c
diff --git a/crypto/lzo-rle.c b/crypto/lzo-rle.c index 0631d975bfac1..0abc2d87f0420 100644 --- a/crypto/lzo-rle.c +++ b/crypto/lzo-rle.c @@ -55,7 +55,7 @@ static int __lzorle_compress(const u8 *src, unsigned int slen, size_t tmp_len = *dlen; /* size_t(ulong) <-> uint on 64 bit */ int err;
- err = lzorle1x_1_compress(src, slen, dst, &tmp_len, ctx); + err = lzorle1x_1_compress_safe(src, slen, dst, &tmp_len, ctx);
if (err != LZO_E_OK) return -EINVAL; diff --git a/crypto/lzo.c b/crypto/lzo.c index ebda132dd22bf..8338851c7406a 100644 --- a/crypto/lzo.c +++ b/crypto/lzo.c @@ -55,7 +55,7 @@ static int __lzo_compress(const u8 *src, unsigned int slen, size_t tmp_len = *dlen; /* size_t(ulong) <-> uint on 64 bit */ int err;
- err = lzo1x_1_compress(src, slen, dst, &tmp_len, ctx); + err = lzo1x_1_compress_safe(src, slen, dst, &tmp_len, ctx);
if (err != LZO_E_OK) return -EINVAL; diff --git a/include/linux/lzo.h b/include/linux/lzo.h index e95c7d1092b28..4d30e3624acd2 100644 --- a/include/linux/lzo.h +++ b/include/linux/lzo.h @@ -24,10 +24,18 @@ int lzo1x_1_compress(const unsigned char *src, size_t src_len, unsigned char *dst, size_t *dst_len, void *wrkmem);
+/* Same as above but does not write more than dst_len to dst. */ +int lzo1x_1_compress_safe(const unsigned char *src, size_t src_len, + unsigned char *dst, size_t *dst_len, void *wrkmem); + /* This requires 'wrkmem' of size LZO1X_1_MEM_COMPRESS */ int lzorle1x_1_compress(const unsigned char *src, size_t src_len, unsigned char *dst, size_t *dst_len, void *wrkmem);
+/* Same as above but does not write more than dst_len to dst. */ +int lzorle1x_1_compress_safe(const unsigned char *src, size_t src_len, + unsigned char *dst, size_t *dst_len, void *wrkmem); + /* safe decompression with overrun testing */ int lzo1x_decompress_safe(const unsigned char *src, size_t src_len, unsigned char *dst, size_t *dst_len); diff --git a/lib/lzo/Makefile b/lib/lzo/Makefile index 2f58fafbbdddc..fc7b2b7ef4b20 100644 --- a/lib/lzo/Makefile +++ b/lib/lzo/Makefile @@ -1,5 +1,5 @@ # SPDX-License-Identifier: GPL-2.0-only -lzo_compress-objs := lzo1x_compress.o +lzo_compress-objs := lzo1x_compress.o lzo1x_compress_safe.o lzo_decompress-objs := lzo1x_decompress_safe.o
obj-$(CONFIG_LZO_COMPRESS) += lzo_compress.o diff --git a/lib/lzo/lzo1x_compress.c b/lib/lzo/lzo1x_compress.c index 9d31e7126606a..f00dff9b9d4e1 100644 --- a/lib/lzo/lzo1x_compress.c +++ b/lib/lzo/lzo1x_compress.c @@ -18,11 +18,22 @@ #include <linux/lzo.h> #include "lzodefs.h"
-static noinline size_t -lzo1x_1_do_compress(const unsigned char *in, size_t in_len, - unsigned char *out, size_t *out_len, - size_t ti, void *wrkmem, signed char *state_offset, - const unsigned char bitstream_version) +#undef LZO_UNSAFE + +#ifndef LZO_SAFE +#define LZO_UNSAFE 1 +#define LZO_SAFE(name) name +#define HAVE_OP(x) 1 +#endif + +#define NEED_OP(x) if (!HAVE_OP(x)) goto output_overrun + +static noinline int +LZO_SAFE(lzo1x_1_do_compress)(const unsigned char *in, size_t in_len, + unsigned char **out, unsigned char *op_end, + size_t *tp, void *wrkmem, + signed char *state_offset, + const unsigned char bitstream_version) { const unsigned char *ip; unsigned char *op; @@ -30,8 +41,9 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len, const unsigned char * const ip_end = in + in_len - 20; const unsigned char *ii; lzo_dict_t * const dict = (lzo_dict_t *) wrkmem; + size_t ti = *tp;
- op = out; + op = *out; ip = in; ii = ip; ip += ti < 4 ? 4 - ti : 0; @@ -116,25 +128,32 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len, if (t != 0) { if (t <= 3) { op[*state_offset] |= t; + NEED_OP(4); COPY4(op, ii); op += t; } else if (t <= 16) { + NEED_OP(17); *op++ = (t - 3); COPY8(op, ii); COPY8(op + 8, ii + 8); op += t; } else { if (t <= 18) { + NEED_OP(1); *op++ = (t - 3); } else { size_t tt = t - 18; + NEED_OP(1); *op++ = 0; while (unlikely(tt > 255)) { tt -= 255; + NEED_OP(1); *op++ = 0; } + NEED_OP(1); *op++ = tt; } + NEED_OP(t); do { COPY8(op, ii); COPY8(op + 8, ii + 8); @@ -151,6 +170,7 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len, if (unlikely(run_length)) { ip += run_length; run_length -= MIN_ZERO_RUN_LENGTH; + NEED_OP(4); put_unaligned_le32((run_length << 21) | 0xfffc18 | (run_length & 0x7), op); op += 4; @@ -243,10 +263,12 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len, ip += m_len; if (m_len <= M2_MAX_LEN && m_off <= M2_MAX_OFFSET) { m_off -= 1; + NEED_OP(2); *op++ = (((m_len - 1) << 5) | ((m_off & 7) << 2)); *op++ = (m_off >> 3); } else if (m_off <= M3_MAX_OFFSET) { m_off -= 1; + NEED_OP(1); if (m_len <= M3_MAX_LEN) *op++ = (M3_MARKER | (m_len - 2)); else { @@ -254,14 +276,18 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len, *op++ = M3_MARKER | 0; while (unlikely(m_len > 255)) { m_len -= 255; + NEED_OP(1); *op++ = 0; } + NEED_OP(1); *op++ = (m_len); } + NEED_OP(2); *op++ = (m_off << 2); *op++ = (m_off >> 6); } else { m_off -= 0x4000; + NEED_OP(1); if (m_len <= M4_MAX_LEN) *op++ = (M4_MARKER | ((m_off >> 11) & 8) | (m_len - 2)); @@ -282,11 +308,14 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len, m_len -= M4_MAX_LEN; *op++ = (M4_MARKER | ((m_off >> 11) & 8)); while (unlikely(m_len > 255)) { + NEED_OP(1); m_len -= 255; *op++ = 0; } + NEED_OP(1); *op++ = (m_len); } + NEED_OP(2); *op++ = (m_off << 2); *op++ = (m_off >> 6); } @@ -295,14 +324,20 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len, ii = ip; goto next; } - *out_len = op - out; - return in_end - (ii - ti); + *out = op; + *tp = in_end - (ii - ti); + return LZO_E_OK; + +output_overrun: + return LZO_E_OUTPUT_OVERRUN; }
-static int lzogeneric1x_1_compress(const unsigned char *in, size_t in_len, - unsigned char *out, size_t *out_len, - void *wrkmem, const unsigned char bitstream_version) +static int LZO_SAFE(lzogeneric1x_1_compress)( + const unsigned char *in, size_t in_len, + unsigned char *out, size_t *out_len, + void *wrkmem, const unsigned char bitstream_version) { + unsigned char * const op_end = out + *out_len; const unsigned char *ip = in; unsigned char *op = out; unsigned char *data_start; @@ -326,14 +361,18 @@ static int lzogeneric1x_1_compress(const unsigned char *in, size_t in_len, while (l > 20) { size_t ll = min_t(size_t, l, m4_max_offset + 1); uintptr_t ll_end = (uintptr_t) ip + ll; + int err; + if ((ll_end + ((t + ll) >> 5)) <= ll_end) break; BUILD_BUG_ON(D_SIZE * sizeof(lzo_dict_t) > LZO1X_1_MEM_COMPRESS); memset(wrkmem, 0, D_SIZE * sizeof(lzo_dict_t)); - t = lzo1x_1_do_compress(ip, ll, op, out_len, t, wrkmem, - &state_offset, bitstream_version); + err = LZO_SAFE(lzo1x_1_do_compress)( + ip, ll, &op, op_end, &t, wrkmem, + &state_offset, bitstream_version); + if (err != LZO_E_OK) + return err; ip += ll; - op += *out_len; l -= ll; } t += l; @@ -342,20 +381,26 @@ static int lzogeneric1x_1_compress(const unsigned char *in, size_t in_len, const unsigned char *ii = in + in_len - t;
if (op == data_start && t <= 238) { + NEED_OP(1); *op++ = (17 + t); } else if (t <= 3) { op[state_offset] |= t; } else if (t <= 18) { + NEED_OP(1); *op++ = (t - 3); } else { size_t tt = t - 18; + NEED_OP(1); *op++ = 0; while (tt > 255) { tt -= 255; + NEED_OP(1); *op++ = 0; } + NEED_OP(1); *op++ = tt; } + NEED_OP(t); if (t >= 16) do { COPY8(op, ii); COPY8(op + 8, ii + 8); @@ -368,31 +413,38 @@ static int lzogeneric1x_1_compress(const unsigned char *in, size_t in_len, } while (--t > 0); }
+ NEED_OP(3); *op++ = M4_MARKER | 1; *op++ = 0; *op++ = 0;
*out_len = op - out; return LZO_E_OK; + +output_overrun: + return LZO_E_OUTPUT_OVERRUN; }
-int lzo1x_1_compress(const unsigned char *in, size_t in_len, - unsigned char *out, size_t *out_len, - void *wrkmem) +int LZO_SAFE(lzo1x_1_compress)(const unsigned char *in, size_t in_len, + unsigned char *out, size_t *out_len, + void *wrkmem) { - return lzogeneric1x_1_compress(in, in_len, out, out_len, wrkmem, 0); + return LZO_SAFE(lzogeneric1x_1_compress)( + in, in_len, out, out_len, wrkmem, 0); }
-int lzorle1x_1_compress(const unsigned char *in, size_t in_len, - unsigned char *out, size_t *out_len, - void *wrkmem) +int LZO_SAFE(lzorle1x_1_compress)(const unsigned char *in, size_t in_len, + unsigned char *out, size_t *out_len, + void *wrkmem) { - return lzogeneric1x_1_compress(in, in_len, out, out_len, - wrkmem, LZO_VERSION); + return LZO_SAFE(lzogeneric1x_1_compress)( + in, in_len, out, out_len, wrkmem, LZO_VERSION); }
-EXPORT_SYMBOL_GPL(lzo1x_1_compress); -EXPORT_SYMBOL_GPL(lzorle1x_1_compress); +EXPORT_SYMBOL_GPL(LZO_SAFE(lzo1x_1_compress)); +EXPORT_SYMBOL_GPL(LZO_SAFE(lzorle1x_1_compress));
+#ifndef LZO_UNSAFE MODULE_LICENSE("GPL"); MODULE_DESCRIPTION("LZO1X-1 Compressor"); +#endif diff --git a/lib/lzo/lzo1x_compress_safe.c b/lib/lzo/lzo1x_compress_safe.c new file mode 100644 index 0000000000000..371c9f8494928 --- /dev/null +++ b/lib/lzo/lzo1x_compress_safe.c @@ -0,0 +1,18 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * LZO1X Compressor from LZO + * + * Copyright (C) 1996-2012 Markus F.X.J. Oberhumer markus@oberhumer.com + * + * The full LZO package can be found at: + * http://www.oberhumer.com/opensource/lzo/ + * + * Changed for Linux kernel use by: + * Nitin Gupta nitingupta910@gmail.com + * Richard Purdie rpurdie@openedhand.com + */ + +#define LZO_SAFE(name) name##_safe +#define HAVE_OP(x) ((size_t)(op_end - op) >= (size_t)(x)) + +#include "lzo1x_compress.c"
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Diogo Ivo diogo.ivo@tecnico.ulisboa.pt
[ Upstream commit f34621f31e3be81456c903287f7e4c0609829e29 ]
According to the board schematics the enable pin of this regulator is connected to gpio line #9 of the first instance of the TCA9539 GPIO expander, so adjust it.
Signed-off-by: Diogo Ivo diogo.ivo@tecnico.ulisboa.pt Link: https://lore.kernel.org/r/20250224-diogo-gpio_exp-v1-1-80fb84ac48c6@tecnico.... Signed-off-by: Thierry Reding treding@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi b/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi index 634373a423ef6..481a88d83a650 100644 --- a/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi +++ b/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi @@ -1631,7 +1631,7 @@ vdd_1v8_dis: regulator-vdd-1v8-dis { regulator-min-microvolt = <1800000>; regulator-max-microvolt = <1800000>; regulator-always-on; - gpio = <&exp1 14 GPIO_ACTIVE_HIGH>; + gpio = <&exp1 9 GPIO_ACTIVE_HIGH>; enable-active-high; vin-supply = <&vdd_1v8>; };
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andreas Schwab schwab@linux-m68k.org
[ Upstream commit 7e67ef889c9ab7246547db73d524459f47403a77 ]
Similar to the PowerMac3,1, the PowerBook6,7 is missing the #size-cells property on the i2s node.
Depends-on: commit 045b14ca5c36 ("of: WARN on deprecated #address-cells/#size-cells handling") Signed-off-by: Andreas Schwab schwab@linux-m68k.org Acked-by: Rob Herring (Arm) robh@kernel.org [maddy: added "commit" work in depends-on to avoid checkpatch error] Signed-off-by: Madhavan Srinivasan maddy@linux.ibm.com Link: https://patch.msgid.link/875xmizl6a.fsf@igel.home Signed-off-by: Sasha Levin sashal@kernel.org --- arch/powerpc/kernel/prom_init.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c index a6090896f7497..ac669e58e2023 100644 --- a/arch/powerpc/kernel/prom_init.c +++ b/arch/powerpc/kernel/prom_init.c @@ -2974,11 +2974,11 @@ static void __init fixup_device_tree_pmac(void) char type[8]; phandle node;
- // Some pmacs are missing #size-cells on escc nodes + // Some pmacs are missing #size-cells on escc or i2s nodes for (node = 0; prom_next_node(&node); ) { type[0] = '\0'; prom_getprop(node, "device_type", type, sizeof(type)); - if (prom_strcmp(type, "escc")) + if (prom_strcmp(type, "escc") && prom_strcmp(type, "i2s")) continue;
if (prom_getproplen(node, "#size-cells") != PROM_ERROR)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai tiwai@suse.de
[ Upstream commit e3cd33ab17c33bd8f1a9df66ec83a15dd8f7afbb ]
snd_seq_poll() calls snd_seq_write_pool_allocated(), which reads out a field in the client->pool object while it can be updated concurrently via ioctls, as reported by syzbot. The data race itself is harmless, as it's merely a poll() call and the state is volatile. OTOH, reading out the pool object info from the caller side is fragile, and it is better left to snd_seq_pool_poll_wait() alone.
A similar pattern is seen in snd_seq_kernel_client_write_poll(), too, which is called from the OSS sequencer.
This patch drops the pool checks from the caller side and adds the pool->lock in snd_seq_pool_poll_wait() for better data consistency.
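For reference, the helper as it ends up after the hunk below (guard(spinlock_irq) is the scope-based lock helper, so pool->lock is dropped automatically when the function returns):

int snd_seq_pool_poll_wait(struct snd_seq_pool *pool, struct file *file,
			   poll_table *wait)
{
	poll_wait(file, &pool->output_sleep, wait);
	guard(spinlock_irq)(&pool->lock);	/* released on return */
	return snd_seq_output_ok(pool);
}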
Reported-by: syzbot+2d373c9936c00d7e120c@syzkaller.appspotmail.com Closes: https://lore.kernel.org/67c88903.050a0220.15b4b9.0028.GAE@google.com Link: https://patch.msgid.link/20250307084246.29271-1-tiwai@suse.de Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- sound/core/seq/seq_clientmgr.c | 5 +---- sound/core/seq/seq_memory.c | 1 + 2 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c index 2d707afa1ef1c..1252ea7ad55ea 100644 --- a/sound/core/seq/seq_clientmgr.c +++ b/sound/core/seq/seq_clientmgr.c @@ -1140,8 +1140,7 @@ static __poll_t snd_seq_poll(struct file *file, poll_table * wait) if (snd_seq_file_flags(file) & SNDRV_SEQ_LFLG_OUTPUT) {
/* check if data is available in the pool */ - if (!snd_seq_write_pool_allocated(client) || - snd_seq_pool_poll_wait(client->pool, file, wait)) + if (snd_seq_pool_poll_wait(client->pool, file, wait)) mask |= EPOLLOUT | EPOLLWRNORM; }
@@ -2382,8 +2381,6 @@ int snd_seq_kernel_client_write_poll(int clientid, struct file *file, poll_table if (client == NULL) return -ENXIO;
- if (! snd_seq_write_pool_allocated(client)) - return 1; if (snd_seq_pool_poll_wait(client->pool, file, wait)) return 1; return 0; diff --git a/sound/core/seq/seq_memory.c b/sound/core/seq/seq_memory.c index 47ef6bc30c0ee..e30b92d85079b 100644 --- a/sound/core/seq/seq_memory.c +++ b/sound/core/seq/seq_memory.c @@ -366,6 +366,7 @@ int snd_seq_pool_poll_wait(struct snd_seq_pool *pool, struct file *file, poll_table *wait) { poll_wait(file, &pool->output_sleep, wait); + guard(spinlock_irq)(&pool->lock); return snd_seq_output_ok(pool); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet edumazet@google.com
[ Upstream commit f8ece40786c9342249aa0a1b55e148ee23b2a746 ]
We have platforms with 6 NUMA nodes and 480 cpus.
inet_ehash_locks_alloc() currently makes a single 64KB allocation to hold all ehash spinlocks. This adds more pressure on a single node.
Change inet_ehash_locks_alloc() to use vmalloc() to spread the spinlocks on all online nodes, driven by NUMA policies.
At boot time, NUMA policy is interleave=all, meaning that tcp_hashinfo.ehash_locks gets hash dispersion on all nodes.
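A rough worked example of the new sizing, assuming 64-byte cache lines and a 4-byte non-debug spinlock_t on the 480-cpu, 6-node machine above:

/*
 * per-cpu locks  : max(2 * 64 / 4, 1)        = 32
 * scaled by cpus : 32 * 480                  = 15360
 * per-node floor : max(15360, 6 * 4096 / 4)  = 15360
 * power of two   : roundup_pow_of_two(15360) = 16384
 *
 * 16384 locks * 4 bytes = 64KB, now vmalloc()ed and interleaved across
 * nodes (the "pages=16 ... N0=2 N1=3 ..." lines in the Tested output
 * below) instead of sitting on a single node.
 */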
Tested:
lack5:~# grep inet_ehash_locks_alloc /proc/vmallocinfo
0x00000000d9aec4d1-0x00000000a828b652   69632 inet_ehash_locks_alloc+0x90/0x100 pages=16 vmalloc N0=2 N1=3 N2=3 N3=3 N4=3 N5=2

lack5:~# echo 8192 >/proc/sys/net/ipv4/tcp_child_ehash_entries
lack5:~# numactl --interleave=all unshare -n bash -c "grep inet_ehash_locks_alloc /proc/vmallocinfo"
0x000000004e99d30c-0x00000000763f3279   36864 inet_ehash_locks_alloc+0x90/0x100 pages=8 vmalloc N0=1 N1=2 N2=2 N3=1 N4=1 N5=1
0x00000000d9aec4d1-0x00000000a828b652   69632 inet_ehash_locks_alloc+0x90/0x100 pages=16 vmalloc N0=2 N1=3 N2=3 N3=3 N4=3 N5=2

lack5:~# numactl --interleave=0,5 unshare -n bash -c "grep inet_ehash_locks_alloc /proc/vmallocinfo"
0x00000000fd73a33e-0x0000000004b9a177   36864 inet_ehash_locks_alloc+0x90/0x100 pages=8 vmalloc N0=4 N5=4
0x00000000d9aec4d1-0x00000000a828b652   69632 inet_ehash_locks_alloc+0x90/0x100 pages=16 vmalloc N0=2 N1=3 N2=3 N3=3 N4=3 N5=2

lack5:~# echo 1024 >/proc/sys/net/ipv4/tcp_child_ehash_entries
lack5:~# numactl --interleave=all unshare -n bash -c "grep inet_ehash_locks_alloc /proc/vmallocinfo"
0x00000000db07d7a2-0x00000000ad697d29    8192 inet_ehash_locks_alloc+0x90/0x100 pages=1 vmalloc N2=1
0x00000000d9aec4d1-0x00000000a828b652   69632 inet_ehash_locks_alloc+0x90/0x100 pages=16 vmalloc N0=2 N1=3 N2=3 N3=3 N4=3 N5=2
Signed-off-by: Eric Dumazet edumazet@google.com Tested-by: Jason Xing kerneljasonxing@gmail.com Reviewed-by: Kuniyuki Iwashima kuniyu@amazon.com Link: https://patch.msgid.link/20250305130550.1865988-1-edumazet@google.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/inet_hashtables.c | 37 ++++++++++++++++++++++++++----------- 1 file changed, 26 insertions(+), 11 deletions(-)
diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c index 321f509f23473..5e7cdcebd64f8 100644 --- a/net/ipv4/inet_hashtables.c +++ b/net/ipv4/inet_hashtables.c @@ -1218,22 +1218,37 @@ int inet_ehash_locks_alloc(struct inet_hashinfo *hashinfo) { unsigned int locksz = sizeof(spinlock_t); unsigned int i, nblocks = 1; + spinlock_t *ptr = NULL;
- if (locksz != 0) { - /* allocate 2 cache lines or at least one spinlock per cpu */ - nblocks = max(2U * L1_CACHE_BYTES / locksz, 1U); - nblocks = roundup_pow_of_two(nblocks * num_possible_cpus()); + if (locksz == 0) + goto set_mask;
- /* no more locks than number of hash buckets */ - nblocks = min(nblocks, hashinfo->ehash_mask + 1); + /* Allocate 2 cache lines or at least one spinlock per cpu. */ + nblocks = max(2U * L1_CACHE_BYTES / locksz, 1U) * num_possible_cpus();
- hashinfo->ehash_locks = kvmalloc_array(nblocks, locksz, GFP_KERNEL); - if (!hashinfo->ehash_locks) - return -ENOMEM; + /* At least one page per NUMA node. */ + nblocks = max(nblocks, num_online_nodes() * PAGE_SIZE / locksz); + + nblocks = roundup_pow_of_two(nblocks); + + /* No more locks than number of hash buckets. */ + nblocks = min(nblocks, hashinfo->ehash_mask + 1);
- for (i = 0; i < nblocks; i++) - spin_lock_init(&hashinfo->ehash_locks[i]); + if (num_online_nodes() > 1) { + /* Use vmalloc() to allow NUMA policy to spread pages + * on all available nodes if desired. + */ + ptr = vmalloc_array(nblocks, locksz); + } + if (!ptr) { + ptr = kvmalloc_array(nblocks, locksz, GFP_KERNEL); + if (!ptr) + return -ENOMEM; } + for (i = 0; i < nblocks; i++) + spin_lock_init(&ptr[i]); + hashinfo->ehash_locks = ptr; +set_mask: hashinfo->ehash_locks_mask = nblocks - 1; return 0; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexandre Belloni alexandre.belloni@bootlin.com
[ Upstream commit dcec12617ee61beed928e889607bf37e145bf86b ]
It is bad practice to disable alarms on probe or remove, as this prevents alarms from surviving across reboots.
Link: https://lore.kernel.org/r/20250303223744.1135672-1-alexandre.belloni@bootlin... Signed-off-by: Alexandre Belloni alexandre.belloni@bootlin.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/rtc/rtc-ds1307.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c index d51565bcc1896..b7f8b3f9b0595 100644 --- a/drivers/rtc/rtc-ds1307.c +++ b/drivers/rtc/rtc-ds1307.c @@ -1802,10 +1802,8 @@ static int ds1307_probe(struct i2c_client *client, * For some variants, be sure alarms can trigger when we're * running on Vbackup (BBSQI/BBSQW) */ - if (want_irq || ds1307_can_wakeup_device) { + if (want_irq || ds1307_can_wakeup_device) regs[0] |= DS1337_BIT_INTCN | chip->bbsqi_bit; - regs[0] &= ~(DS1337_BIT_A2IE | DS1337_BIT_A1IE); - }
regmap_write(ds1307->regmap, DS1337_REG_CONTROL, regs[0]);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Shevchenko andriy.shevchenko@linux.intel.com
[ Upstream commit 169b2262205836a5d1213ff44dca2962276bece1 ]
Sparse complains that the driver doesn't respect the bitwise types:
drivers/net/ieee802154/ca8210.c:1796:27: warning: incorrect type in assignment (different base types)
drivers/net/ieee802154/ca8210.c:1796:27:    expected restricted __le16 [addressable] [assigned] [usertype] pan_id
drivers/net/ieee802154/ca8210.c:1796:27:    got unsigned short [usertype]
drivers/net/ieee802154/ca8210.c:1801:25: warning: incorrect type in assignment (different base types)
drivers/net/ieee802154/ca8210.c:1801:25:    expected restricted __le16 [addressable] [assigned] [usertype] pan_id
drivers/net/ieee802154/ca8210.c:1801:25:    got unsigned short [usertype]
drivers/net/ieee802154/ca8210.c:1928:28: warning: incorrect type in argument 3 (different base types)
drivers/net/ieee802154/ca8210.c:1928:28:    expected unsigned short [usertype] dst_pan_id
drivers/net/ieee802154/ca8210.c:1928:28:    got restricted __le16 [addressable] [usertype] pan_id
Use proper setters and getters for bitwise types.
Note, in accordance with [1] the protocol is little endian.
Link: https://www.cascoda.com/wp-content/uploads/2018/11/CA-8210_datasheet_0418.pd... [1] Reviewed-by: Miquel Raynal miquel.raynal@bootlin.com Reviewed-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Link: https://lore.kernel.org/20250305105656.2133487-2-andriy.shevchenko@linux.int... Signed-off-by: Stefan Schmidt stefan@datenfreihafen.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ieee802154/ca8210.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ieee802154/ca8210.c b/drivers/net/ieee802154/ca8210.c index 1659bbffdb91c..463be34a4ca4c 100644 --- a/drivers/net/ieee802154/ca8210.c +++ b/drivers/net/ieee802154/ca8210.c @@ -1446,8 +1446,7 @@ static u8 mcps_data_request( command.pdata.data_req.src_addr_mode = src_addr_mode; command.pdata.data_req.dst.mode = dst_address_mode; if (dst_address_mode != MAC_MODE_NO_ADDR) { - command.pdata.data_req.dst.pan_id[0] = LS_BYTE(dst_pan_id); - command.pdata.data_req.dst.pan_id[1] = MS_BYTE(dst_pan_id); + put_unaligned_le16(dst_pan_id, command.pdata.data_req.dst.pan_id); if (dst_address_mode == MAC_MODE_SHORT_ADDR) { command.pdata.data_req.dst.address[0] = LS_BYTE( dst_addr->short_address @@ -1795,12 +1794,12 @@ static int ca8210_skb_rx( } hdr.source.mode = data_ind[0]; dev_dbg(&priv->spi->dev, "srcAddrMode: %#03x\n", hdr.source.mode); - hdr.source.pan_id = *(u16 *)&data_ind[1]; + hdr.source.pan_id = cpu_to_le16(get_unaligned_le16(&data_ind[1])); dev_dbg(&priv->spi->dev, "srcPanId: %#06x\n", hdr.source.pan_id); memcpy(&hdr.source.extended_addr, &data_ind[3], 8); hdr.dest.mode = data_ind[11]; dev_dbg(&priv->spi->dev, "dstAddrMode: %#03x\n", hdr.dest.mode); - hdr.dest.pan_id = *(u16 *)&data_ind[12]; + hdr.dest.pan_id = cpu_to_le16(get_unaligned_le16(&data_ind[12])); dev_dbg(&priv->spi->dev, "dstPanId: %#06x\n", hdr.dest.pan_id); memcpy(&hdr.dest.extended_addr, &data_ind[14], 8);
@@ -1927,7 +1926,7 @@ static int ca8210_skb_tx( status = mcps_data_request( header.source.mode, header.dest.mode, - header.dest.pan_id, + le16_to_cpu(header.dest.pan_id), (union macaddr *)&header.dest.extended_addr, skb->len - mac_len, &skb->data[mac_len],
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Svyatoslav Ryhel clamor95@gmail.com
[ Upstream commit 2b3db788f2f614b875b257cdb079adadedc060f3 ]
PLLD is usually used as parent clock for internal video devices, like DSI for example, while PLLD2 is used as parent for HDMI.
Signed-off-by: Svyatoslav Ryhel clamor95@gmail.com Link: https://lore.kernel.org/r/20250226105615.61087-3-clamor95@gmail.com Signed-off-by: Thierry Reding treding@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/boot/dts/tegra114.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/boot/dts/tegra114.dtsi b/arch/arm/boot/dts/tegra114.dtsi index 09996acad6399..bb23bb39dd5b1 100644 --- a/arch/arm/boot/dts/tegra114.dtsi +++ b/arch/arm/boot/dts/tegra114.dtsi @@ -139,7 +139,7 @@ dsib: dsi@54400000 { reg = <0x54400000 0x00040000>; clocks = <&tegra_car TEGRA114_CLK_DSIB>, <&tegra_car TEGRA114_CLK_DSIBLP>, - <&tegra_car TEGRA114_CLK_PLL_D2_OUT0>; + <&tegra_car TEGRA114_CLK_PLL_D_OUT0>; clock-names = "dsi", "lp", "parent"; resets = <&tegra_car 82>; reset-names = "dsi";
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Markus Elfring elfring@users.sourceforge.net
[ Upstream commit b773530a34df0687020520015057075f8b7b4ac4 ]
An of_node_put(i2c_bus) call was used immediately after a pointer check for an of_find_i2c_adapter_by_node() call in this function implementation. Call of_node_put() only once instead, directly before the check.
This transformation was performed with the Coccinelle software.
Signed-off-by: Markus Elfring elfring@users.sourceforge.net Signed-off-by: Hans Verkuil hverkuil@xs4all.nl Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c index 1dbb89f0ddb8c..b2a977f1ec18a 100644 --- a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c +++ b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c @@ -802,13 +802,12 @@ static int c8sectpfe_probe(struct platform_device *pdev) } tsin->i2c_adapter = of_find_i2c_adapter_by_node(i2c_bus); + of_node_put(i2c_bus); if (!tsin->i2c_adapter) { dev_err(&pdev->dev, "No i2c adapter found\n"); - of_node_put(i2c_bus); ret = -ENODEV; goto err_node_put; } - of_node_put(i2c_bus);
tsin->rst_gpio = of_get_named_gpio(child, "reset-gpios", 0);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ming-Hung Tsai mtsai@redhat.com
[ Upstream commit 5da692e2262b8f81993baa9592f57d12c2703dea ]
A cache device failing to resume due to mapping errors should not be retried, as the failure leaves a partially initialized policy object. Repeating the resume operation risks triggering BUG_ON when reloading cache mappings into the incomplete policy object.
Reproduce steps:
1. create a cache metadata consisting of 512 or more cache blocks, with some mappings stored in the first array block of the mapping array. Here we use cache_restore v1.0 to build the metadata.
cat <<EOF >> cmeta.xml
<superblock uuid="" block_size="128" nr_cache_blocks="512" \
	policy="smq" hint_width="4">
  <mappings>
    <mapping cache_block="0" origin_block="0" dirty="false"/>
  </mappings>
</superblock>
EOF
dmsetup create cmeta --table "0 8192 linear /dev/sdc 0"
cache_restore -i cmeta.xml -o /dev/mapper/cmeta --metadata-version=2
dmsetup remove cmeta
2. wipe the second array block of the mapping array to simulate data degradations.
mapping_root=$(dd if=/dev/sdc bs=1c count=8 skip=192 \
	2>/dev/null | hexdump -e '1/8 "%u\n"')
ablock=$(dd if=/dev/sdc bs=1c count=8 skip=$((4096*mapping_root+2056)) \
	2>/dev/null | hexdump -e '1/8 "%u\n"')
dd if=/dev/zero of=/dev/sdc bs=4k count=1 seek=$ablock
3. try bringing up the cache device. The resume is expected to fail due to the broken array block.
dmsetup create cmeta --table "0 8192 linear /dev/sdc 0"
dmsetup create cdata --table "0 65536 linear /dev/sdc 8192"
dmsetup create corig --table "0 524288 linear /dev/sdc 262144"
dmsetup create cache --notable
dmsetup load cache --table "0 524288 cache /dev/mapper/cmeta \
	/dev/mapper/cdata /dev/mapper/corig 128 2 metadata2 writethrough smq 0"
dmsetup resume cache
4. try resuming the cache again. An unexpected BUG_ON is triggered while loading cache mappings.
dmsetup resume cache
Kernel logs:
(snip)
------------[ cut here ]------------
kernel BUG at drivers/md/dm-cache-policy-smq.c:752!
Oops: invalid opcode: 0000 [#1] PREEMPT SMP KASAN NOPTI
CPU: 0 UID: 0 PID: 332 Comm: dmsetup Not tainted 6.13.4 #3
RIP: 0010:smq_load_mapping+0x3e5/0x570
Fix by disallowing resume operations for devices that failed the initial attempt.
Signed-off-by: Ming-Hung Tsai mtsai@redhat.com Signed-off-by: Mikulas Patocka mpatocka@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/md/dm-cache-target.c | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+)
diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c index e714114d495a9..66608b42ee1ad 100644 --- a/drivers/md/dm-cache-target.c +++ b/drivers/md/dm-cache-target.c @@ -2875,6 +2875,27 @@ static dm_cblock_t get_cache_dev_size(struct cache *cache) return to_cblock(size); }
+static bool can_resume(struct cache *cache) +{ + /* + * Disallow retrying the resume operation for devices that failed the + * first resume attempt, as the failure leaves the policy object partially + * initialized. Retrying could trigger BUG_ON when loading cache mappings + * into the incomplete policy object. + */ + if (cache->sized && !cache->loaded_mappings) { + if (get_cache_mode(cache) != CM_WRITE) + DMERR("%s: unable to resume a failed-loaded cache, please check metadata.", + cache_device_name(cache)); + else + DMERR("%s: unable to resume cache due to missing proper cache table reload", + cache_device_name(cache)); + return false; + } + + return true; +} + static bool can_resize(struct cache *cache, dm_cblock_t new_size) { if (from_cblock(new_size) > from_cblock(cache->cache_size)) { @@ -2923,6 +2944,9 @@ static int cache_preresume(struct dm_target *ti) struct cache *cache = ti->private; dm_cblock_t csize = get_cache_dev_size(cache);
+ if (!can_resume(cache)) + return -EINVAL; + /* * Check to see if the cache has resized. */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Matthew Wilcox (Oracle) willy@infradead.org
[ Upstream commit 062e8093592fb866b8e016641a8b27feb6ac509d ]
'len' is used to store the result of i_size_read(), so making 'len' a size_t results in truncation to 4GiB on 32-bit systems.
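A minimal sketch of the failure mode (illustrative values, not driver code):

loff_t isize = i_size_read(inode);	/* e.g. a 5 GiB file */
size_t len32 = isize;			/* 32-bit size_t: silently wraps to 1 GiB */
loff_t len64 = isize;			/* keeps the full 5 GiB */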
Signed-off-by: "Matthew Wilcox (Oracle)" willy@infradead.org Link: https://lore.kernel.org/r/20250305204734.1475264-2-willy@infradead.org Tested-by: Mike Marshall hubcap@omnibond.com Signed-off-by: Christian Brauner brauner@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/orangefs/inode.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/fs/orangefs/inode.c b/fs/orangefs/inode.c index b3bbb5a5787ac..cc81ff6ac735e 100644 --- a/fs/orangefs/inode.c +++ b/fs/orangefs/inode.c @@ -23,9 +23,9 @@ static int orangefs_writepage_locked(struct page *page, struct orangefs_write_range *wr = NULL; struct iov_iter iter; struct bio_vec bv; - size_t len, wlen; + size_t wlen; ssize_t ret; - loff_t off; + loff_t len, off;
set_page_writeback(page);
@@ -94,8 +94,7 @@ static int orangefs_writepages_work(struct orangefs_writepages *ow, struct orangefs_write_range *wrp, wr; struct iov_iter iter; ssize_t ret; - size_t len; - loff_t off; + loff_t len, off; int i;
len = i_size_read(inode);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Choong Yong Liang yong.liang.choong@linux.intel.com
[ Upstream commit b63263555eaafbf9ab1a82f2020bbee872d83759 ]
The phylink_expects_phy() function allows MAC drivers to check if they are expecting a PHY to attach. The checking condition in phylink_expects_phy() aims to achieve the same result as the checking condition in phylink_attach_phy().
However, the checking condition in phylink_expects_phy() uses pl->link_config.interface, while phylink_attach_phy() uses pl->link_interface.
Initially, both pl->link_interface and pl->link_config.interface are set to SGMII, and pl->cfg_link_an_mode is set to MLO_AN_INBAND.
When the interface switches from SGMII to 2500BASE-X, pl->link_config.interface is updated by phylink_major_config(). At this point, pl->cfg_link_an_mode remains MLO_AN_INBAND, and pl->link_config.interface is set to 2500BASE-X. Subsequently, when the STMMAC interface is taken down administratively and brought back up, it is blocked by phylink_expects_phy().
Since phylink_expects_phy() and phylink_attach_phy() aim to achieve the same result, phylink_expects_phy() should check pl->link_interface, which never changes, instead of pl->link_config.interface, which is updated by phylink_major_config().
Reviewed-by: Russell King (Oracle) rmk+kernel@armlinux.org.uk Signed-off-by: Choong Yong Liang yong.liang.choong@linux.intel.com Link: https://patch.msgid.link/20250227121522.1802832-2-yong.liang.choong@linux.in... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/phy/phylink.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c index fc58e4afb38dd..3069a7df25d3f 100644 --- a/drivers/net/phy/phylink.c +++ b/drivers/net/phy/phylink.c @@ -1566,7 +1566,7 @@ bool phylink_expects_phy(struct phylink *pl) { if (pl->cfg_link_an_mode == MLO_AN_FIXED || (pl->cfg_link_an_mode == MLO_AN_INBAND && - phy_interface_mode_is_8023z(pl->link_config.interface))) + phy_interface_mode_is_8023z(pl->link_interface))) return false; return true; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Matti Lehtimäki matti.lehtimaki@gmail.com
[ Upstream commit 65991ea8a6d1e68effdc01d95ebe39f1653f7b71 ]
Both MSM8974 and MSM8226 have only CX as a power domain, with MX & PX being handled as regulators. Handle this case by reordering pd_names to have CX first, and by handling the fact that the driver core will already have attached the single power domain internally.
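Simplified sketch of what the driver now relies on (the description of the core behaviour is an assumption spelled out here; the code matches the hunk below):

/*
 * For a node with exactly one "power-domains" entry the platform core
 * (platform_probe() -> dev_pm_domain_attach()) attaches that domain
 * before probe, so dev->pm_domain is already set and only runtime PM
 * needs enabling; with several entries nothing is attached
 * automatically and the dev_pm_domain_attach_by_name() loop still runs.
 */
if (dev->pm_domain) {		/* single domain, attached by the core */
	wcnss->pds[0] = dev;
	wcnss->num_pds = 1;
	pm_runtime_enable(dev);
	return 0;
}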
Signed-off-by: Matti Lehtimäki matti.lehtimaki@gmail.com [luca: minor changes] Signed-off-by: Luca Weiss luca@lucaweiss.eu Link: https://lore.kernel.org/r/20250206-wcnss-singlepd-v2-2-9a53ee953dee@lucaweis... [bjorn: Added missing braces to else after multi-statement if] Signed-off-by: Bjorn Andersson andersson@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/remoteproc/qcom_wcnss.c | 33 ++++++++++++++++++++++++++------- 1 file changed, 26 insertions(+), 7 deletions(-)
diff --git a/drivers/remoteproc/qcom_wcnss.c b/drivers/remoteproc/qcom_wcnss.c index 68f37296b1516..ce61e0e7cbeb8 100644 --- a/drivers/remoteproc/qcom_wcnss.c +++ b/drivers/remoteproc/qcom_wcnss.c @@ -117,10 +117,10 @@ static const struct wcnss_data pronto_v1_data = { .pmu_offset = 0x1004, .spare_offset = 0x1088,
- .pd_names = { "mx", "cx" }, + .pd_names = { "cx", "mx" }, .vregs = (struct wcnss_vreg_info[]) { - { "vddmx", 950000, 1150000, 0 }, { "vddcx", .super_turbo = true}, + { "vddmx", 950000, 1150000, 0 }, { "vddpx", 1800000, 1800000, 0 }, }, .num_pd_vregs = 2, @@ -131,10 +131,10 @@ static const struct wcnss_data pronto_v2_data = { .pmu_offset = 0x1004, .spare_offset = 0x1088,
- .pd_names = { "mx", "cx" }, + .pd_names = { "cx", "mx" }, .vregs = (struct wcnss_vreg_info[]) { - { "vddmx", 1287500, 1287500, 0 }, { "vddcx", .super_turbo = true }, + { "vddmx", 1287500, 1287500, 0 }, { "vddpx", 1800000, 1800000, 0 }, }, .num_pd_vregs = 2, @@ -386,8 +386,17 @@ static irqreturn_t wcnss_stop_ack_interrupt(int irq, void *dev) static int wcnss_init_pds(struct qcom_wcnss *wcnss, const char * const pd_names[WCNSS_MAX_PDS]) { + struct device *dev = wcnss->dev; int i, ret;
+ /* Handle single power domain */ + if (dev->pm_domain) { + wcnss->pds[0] = dev; + wcnss->num_pds = 1; + pm_runtime_enable(dev); + return 0; + } + for (i = 0; i < WCNSS_MAX_PDS; i++) { if (!pd_names[i]) break; @@ -407,8 +416,15 @@ static int wcnss_init_pds(struct qcom_wcnss *wcnss,
static void wcnss_release_pds(struct qcom_wcnss *wcnss) { + struct device *dev = wcnss->dev; int i;
+ /* Handle single power domain */ + if (wcnss->num_pds == 1 && dev->pm_domain) { + pm_runtime_disable(dev); + return; + } + for (i = 0; i < wcnss->num_pds; i++) dev_pm_domain_detach(wcnss->pds[i], false); } @@ -426,10 +442,13 @@ static int wcnss_init_regulators(struct qcom_wcnss *wcnss, * the regulators for the power domains. For old device trees we need to * reserve extra space to manage them through the regulator interface. */ - if (wcnss->num_pds) - info += num_pd_vregs; - else + if (wcnss->num_pds) { + info += wcnss->num_pds; + /* Handle single power domain case */ + num_vregs += num_pd_vregs - wcnss->num_pds; + } else { num_vregs += num_pd_vregs; + }
bulk = devm_kcalloc(wcnss->dev, num_vregs, sizeof(struct regulator_bulk_data),
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Victor Lu victorchengchi.lu@amd.com
[ Upstream commit 057fef20b8401110a7bc1c2fe9d804a8a0bf0d24 ]
SRIOV VF does not have write access to AGP BAR regs. Skip the writes to avoid a dmesg warning.
Signed-off-by: Victor Lu victorchengchi.lu@amd.com Acked-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c index ec4d5e15b766a..de74686cb1dbd 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c @@ -92,12 +92,12 @@ static void gfxhub_v1_0_init_system_aperture_regs(struct amdgpu_device *adev) { uint64_t value;
- /* Program the AGP BAR */ - WREG32_SOC15_RLC(GC, 0, mmMC_VM_AGP_BASE, 0); - WREG32_SOC15_RLC(GC, 0, mmMC_VM_AGP_BOT, adev->gmc.agp_start >> 24); - WREG32_SOC15_RLC(GC, 0, mmMC_VM_AGP_TOP, adev->gmc.agp_end >> 24); - if (!amdgpu_sriov_vf(adev) || adev->asic_type <= CHIP_VEGA10) { + /* Program the AGP BAR */ + WREG32_SOC15_RLC(GC, 0, mmMC_VM_AGP_BASE, 0); + WREG32_SOC15_RLC(GC, 0, mmMC_VM_AGP_BOT, adev->gmc.agp_start >> 24); + WREG32_SOC15_RLC(GC, 0, mmMC_VM_AGP_TOP, adev->gmc.agp_end >> 24); + /* Program the system aperture low logical page number. */ WREG32_SOC15_RLC(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR, min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans Verkuil hverkuil@xs4all.nl
[ Upstream commit a79efc44b51432490538a55b9753a721f7d3ea42 ]
The video_device for the MPEG encoder did not set device_caps.
Add this, otherwise the video device can't be registered (you get a WARN_ON instead).
Not seen before since currently 417 support is disabled, but I found this while experimenting with it.
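For context, a simplified sketch of the core check this runs into (paraphrased from __video_register_device(); treat the exact form as an assumption):

/* v4l2-dev.c, simplified: registration is refused when device_caps
 * is left at zero, which is the WARN_ON mentioned above.
 */
if (WARN_ON(!vdev->device_caps))
	return -EINVAL;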
Signed-off-by: Hans Verkuil hverkuil@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/usb/cx231xx/cx231xx-417.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/media/usb/cx231xx/cx231xx-417.c b/drivers/media/usb/cx231xx/cx231xx-417.c index c5e21785fafe2..02343e88cc618 100644 --- a/drivers/media/usb/cx231xx/cx231xx-417.c +++ b/drivers/media/usb/cx231xx/cx231xx-417.c @@ -1722,6 +1722,8 @@ static void cx231xx_video_dev_init( vfd->lock = &dev->lock; vfd->release = video_device_release_empty; vfd->ctrl_handler = &dev->mpeg_ctrl_handler.hdl; + vfd->device_caps = V4L2_CAP_READWRITE | V4L2_CAP_STREAMING | + V4L2_CAP_VIDEO_CAPTURE; video_set_drvdata(vfd, dev); if (dev->tuner_type == TUNER_ABSENT) { v4l2_disable_ioctl(vfd, VIDIOC_G_FREQUENCY);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Artur Weber aweber.kernel@gmail.com
[ Upstream commit 07b5a2a13f4704c5eae3be7277ec54ffdba45f72 ]
Replace uses of bare "unsigned" with "unsigned int" to fix checkpatch warnings. No functional change.
Signed-off-by: Artur Weber aweber.kernel@gmail.com Link: https://lore.kernel.org/20250303-bcm21664-pinctrl-v3-2-5f8b80e4ab51@gmail.co... Signed-off-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pinctrl/bcm/pinctrl-bcm281xx.c | 44 +++++++++++++------------- 1 file changed, 22 insertions(+), 22 deletions(-)
diff --git a/drivers/pinctrl/bcm/pinctrl-bcm281xx.c b/drivers/pinctrl/bcm/pinctrl-bcm281xx.c index bba5496335eeb..c313f01789575 100644 --- a/drivers/pinctrl/bcm/pinctrl-bcm281xx.c +++ b/drivers/pinctrl/bcm/pinctrl-bcm281xx.c @@ -69,7 +69,7 @@ static enum bcm281xx_pin_type hdmi_pin = BCM281XX_PIN_TYPE_HDMI; struct bcm281xx_pin_function { const char *name; const char * const *groups; - const unsigned ngroups; + const unsigned int ngroups; };
/* @@ -81,10 +81,10 @@ struct bcm281xx_pinctrl_data {
/* List of all pins */ const struct pinctrl_pin_desc *pins; - const unsigned npins; + const unsigned int npins;
const struct bcm281xx_pin_function *functions; - const unsigned nfunctions; + const unsigned int nfunctions;
struct regmap *regmap; }; @@ -938,7 +938,7 @@ static struct bcm281xx_pinctrl_data bcm281xx_pinctrl = { };
static inline enum bcm281xx_pin_type pin_type_get(struct pinctrl_dev *pctldev, - unsigned pin) + unsigned int pin) { struct bcm281xx_pinctrl_data *pdata = pinctrl_dev_get_drvdata(pctldev);
@@ -982,7 +982,7 @@ static int bcm281xx_pinctrl_get_groups_count(struct pinctrl_dev *pctldev) }
static const char *bcm281xx_pinctrl_get_group_name(struct pinctrl_dev *pctldev, - unsigned group) + unsigned int group) { struct bcm281xx_pinctrl_data *pdata = pinctrl_dev_get_drvdata(pctldev);
@@ -990,9 +990,9 @@ static const char *bcm281xx_pinctrl_get_group_name(struct pinctrl_dev *pctldev, }
static int bcm281xx_pinctrl_get_group_pins(struct pinctrl_dev *pctldev, - unsigned group, + unsigned int group, const unsigned **pins, - unsigned *num_pins) + unsigned int *num_pins) { struct bcm281xx_pinctrl_data *pdata = pinctrl_dev_get_drvdata(pctldev);
@@ -1004,7 +1004,7 @@ static int bcm281xx_pinctrl_get_group_pins(struct pinctrl_dev *pctldev,
static void bcm281xx_pinctrl_pin_dbg_show(struct pinctrl_dev *pctldev, struct seq_file *s, - unsigned offset) + unsigned int offset) { seq_printf(s, " %s", dev_name(pctldev->dev)); } @@ -1026,7 +1026,7 @@ static int bcm281xx_pinctrl_get_fcns_count(struct pinctrl_dev *pctldev) }
static const char *bcm281xx_pinctrl_get_fcn_name(struct pinctrl_dev *pctldev, - unsigned function) + unsigned int function) { struct bcm281xx_pinctrl_data *pdata = pinctrl_dev_get_drvdata(pctldev);
@@ -1034,9 +1034,9 @@ static const char *bcm281xx_pinctrl_get_fcn_name(struct pinctrl_dev *pctldev, }
static int bcm281xx_pinctrl_get_fcn_groups(struct pinctrl_dev *pctldev, - unsigned function, + unsigned int function, const char * const **groups, - unsigned * const num_groups) + unsigned int * const num_groups) { struct bcm281xx_pinctrl_data *pdata = pinctrl_dev_get_drvdata(pctldev);
@@ -1047,8 +1047,8 @@ static int bcm281xx_pinctrl_get_fcn_groups(struct pinctrl_dev *pctldev, }
static int bcm281xx_pinmux_set(struct pinctrl_dev *pctldev, - unsigned function, - unsigned group) + unsigned int function, + unsigned int group) { struct bcm281xx_pinctrl_data *pdata = pinctrl_dev_get_drvdata(pctldev); const struct bcm281xx_pin_function *f = &pdata->functions[function]; @@ -1079,7 +1079,7 @@ static const struct pinmux_ops bcm281xx_pinctrl_pinmux_ops = { };
static int bcm281xx_pinctrl_pin_config_get(struct pinctrl_dev *pctldev, - unsigned pin, + unsigned int pin, unsigned long *config) { return -ENOTSUPP; @@ -1088,9 +1088,9 @@ static int bcm281xx_pinctrl_pin_config_get(struct pinctrl_dev *pctldev,
/* Goes through the configs and update register val/mask */ static int bcm281xx_std_pin_update(struct pinctrl_dev *pctldev, - unsigned pin, + unsigned int pin, unsigned long *configs, - unsigned num_configs, + unsigned int num_configs, u32 *val, u32 *mask) { @@ -1204,9 +1204,9 @@ static const u16 bcm281xx_pullup_map[] = {
/* Goes through the configs and update register val/mask */ static int bcm281xx_i2c_pin_update(struct pinctrl_dev *pctldev, - unsigned pin, + unsigned int pin, unsigned long *configs, - unsigned num_configs, + unsigned int num_configs, u32 *val, u32 *mask) { @@ -1274,9 +1274,9 @@ static int bcm281xx_i2c_pin_update(struct pinctrl_dev *pctldev,
/* Goes through the configs and update register val/mask */ static int bcm281xx_hdmi_pin_update(struct pinctrl_dev *pctldev, - unsigned pin, + unsigned int pin, unsigned long *configs, - unsigned num_configs, + unsigned int num_configs, u32 *val, u32 *mask) { @@ -1318,9 +1318,9 @@ static int bcm281xx_hdmi_pin_update(struct pinctrl_dev *pctldev, }
static int bcm281xx_pinctrl_pin_config_set(struct pinctrl_dev *pctldev, - unsigned pin, + unsigned int pin, unsigned long *configs, - unsigned num_configs) + unsigned int num_configs) { struct bcm281xx_pinctrl_data *pdata = pinctrl_dev_get_drvdata(pctldev); enum bcm281xx_pin_type pin_type;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexander Sverdlin alexander.sverdlin@siemens.com
[ Upstream commit 7ff1c88fc89688c27f773ba956f65f0c11367269 ]
So that of_find_net_device_by_node() can find CPSW ports and other DSA switches can be stacked downstream. Tested in conjunction with KSZ8873.
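A hedged sketch of the consumer side (simplified; ethernet_np is a made-up name for the DSA switch's "ethernet" phandle target):

/* DSA-style conduit lookup: it can only succeed once the CPSW slave
 * net_device carries the OF node set in the hunk below.
 */
struct net_device *conduit = of_find_net_device_by_node(ethernet_np);
if (!conduit)
	return -EPROBE_DEFER;	/* CPSW port not registered yet */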
Reviewed-by: Siddharth Vadapalli s-vadapalli@ti.com Reviewed-by: Andrew Lunn andrew@lunn.ch Signed-off-by: Alexander Sverdlin alexander.sverdlin@siemens.com Link: https://patch.msgid.link/20250303074703.1758297-1-alexander.sverdlin@siemens... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/ti/cpsw_new.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c index 6e70aa1cc7bf1..42684cb83606a 100644 --- a/drivers/net/ethernet/ti/cpsw_new.c +++ b/drivers/net/ethernet/ti/cpsw_new.c @@ -1411,6 +1411,7 @@ static int cpsw_create_ports(struct cpsw_common *cpsw) ndev->netdev_ops = &cpsw_netdev_ops; ndev->ethtool_ops = &cpsw_ethtool_ops; SET_NETDEV_DEV(ndev, dev); + ndev->dev.of_node = slave_data->slave_node;
if (!napi_ndev) { /* CPSW Host port CPDMA interface is shared between
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Peter Seiderer ps.report@gmx.net
[ Upstream commit 2b15a0693f70d1e8119743ee89edbfb1271b3ea8 ]
Fix the MPLS labels list parsing so that up to MAX_MPLS_LABELS entries are accepted (instead of only up to MAX_MPLS_LABELS - 1).
Addresses the following:
$ echo "mpls 00000f00,00000f01,00000f02,00000f03,00000f04,00000f05,00000f06,00000f07,00000f08,00000f09,00000f0a,00000f0b,00000f0c,00000f0d,00000f0e,00000f0f" > /proc/net/pktgen/lo@0 -bash: echo: write error: Argument list too long
Signed-off-by: Peter Seiderer ps.report@gmx.net Reviewed-by: Simon Horman horms@kernel.org Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/pktgen.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/net/core/pktgen.c b/net/core/pktgen.c index a2fb951996b85..5917820f92c3d 100644 --- a/net/core/pktgen.c +++ b/net/core/pktgen.c @@ -897,6 +897,10 @@ static ssize_t get_labels(const char __user *buffer, struct pktgen_dev *pkt_dev) pkt_dev->nr_labels = 0; do { __u32 tmp; + + if (n >= MAX_MPLS_LABELS) + return -E2BIG; + len = hex32_arg(&buffer[i], 8, &tmp); if (len <= 0) return len; @@ -908,8 +912,6 @@ static ssize_t get_labels(const char __user *buffer, struct pktgen_dev *pkt_dev) return -EFAULT; i++; n++; - if (n >= MAX_MPLS_LABELS) - return -E2BIG; } while (c == ',');
pkt_dev->nr_labels = n;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Saket Kumar Bhaskar skb99@linux.ibm.com
[ Upstream commit 061c991697062f3bf87b72ed553d1d33a0e370dd ]
Currently, __reserve_bp_slot() returns -ENOSPC for unsupported breakpoint types on the architecture. For example, powerpc does not support hardware instruction breakpoints. This causes the perf_skip BPF selftest to fail, as neither ENOENT nor EOPNOTSUPP is returned by perf_event_open for unsupported breakpoint types. As a result, the test that should be skipped for this arch is not correctly identified.
To resolve this, hw_breakpoint_event_init() should exit early by checking for unsupported breakpoint types using hw_breakpoint_slots_cached() and return the appropriate error (-EOPNOTSUPP).
Signed-off-by: Saket Kumar Bhaskar skb99@linux.ibm.com Signed-off-by: Ingo Molnar mingo@kernel.org Cc: Marco Elver elver@google.com Cc: Dmitry Vyukov dvyukov@google.com Cc: Ian Rogers irogers@google.com Cc: Frederic Weisbecker fweisbec@gmail.com Link: https://lore.kernel.org/r/20250303092451.1862862-1-skb99@linux.ibm.com Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/events/hw_breakpoint.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c index c3797701339cb..382a3b04f6d33 100644 --- a/kernel/events/hw_breakpoint.c +++ b/kernel/events/hw_breakpoint.c @@ -978,9 +978,10 @@ static int hw_breakpoint_event_init(struct perf_event *bp) return -ENOENT;
/* - * no branch sampling for breakpoint events + * Check if breakpoint type is supported before proceeding. + * Also, no branch sampling for breakpoint events. */ - if (has_branch_stack(bp)) + if (!hw_breakpoint_slots_cached(find_slot_idx(bp->attr.bp_type)) || has_branch_stack(bp)) return -EOPNOTSUPP;
err = register_perf_hw_breakpoint(bp);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Maciej S. Szmigiero mail@maciej.szmigiero.name
[ Upstream commit aa85822c611aef7cd4dc17d27121d43e21bb82f0 ]
PC speaker works well on this platform in BIOS and in Linux until sound card drivers are loaded. Then it stops working.
There seems to be a beep generator node at 0x1a in this CODEC (ALC269_TYPE_ALC215) but it seems to be only connected to capture mixers at nodes 0x22 and 0x23. If I unmute the mixer input for 0x1a at node 0x23 and start recording from its "ALC285 Analog" capture device I can clearly hear beeps in that recording.
So the beep generator is indeed working properly, however I wasn't able to figure out any way to connect it to speakers.
However, the bits in the "Passthrough Control" register (0x36) seem to work at least partially: by zeroing "B" and "h" and setting "S" I can at least make the PIT PC speaker output appear either in this laptop's speakers or in the headphones (depending on whether they are connected or not).
There are some caveats, however:
* If the CODEC gets runtime-suspended the beeps stop, so it needs the HDA beep device to keep it awake while beeping.
* If the beep generator node is generating any beep the PC beep passthrough seems to be temporarily inhibited, so the HDA beep device has to be prevented from using the actual beep generator node - but the beep device is still necessary due to the previous point.
* In contrast with other platforms, here beep amplification has to be disabled, otherwise the beep output is WAY louder than it was on the pure BIOS setup.
Unless someone (from Realtek, probably) knows how to make the beep generator node output appear in the speakers / headphones, PC beep passthrough seems to be the only way to make PC speaker beeping actually work on this platform.
Signed-off-by: Maciej S. Szmigiero mail@maciej.szmigiero.name Acked-by: kailang@realtek.com Link: https://patch.msgid.link/7461f695b4daed80f2fc4b1463ead47f04f9ad05.1739741254... Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- include/sound/hda_codec.h | 1 + sound/pci/hda/hda_beep.c | 15 +++++++++------ sound/pci/hda/patch_realtek.c | 34 +++++++++++++++++++++++++++++++++- 3 files changed, 43 insertions(+), 7 deletions(-)
diff --git a/include/sound/hda_codec.h b/include/sound/hda_codec.h index bbb7805e85d8e..4ca45d5895dfd 100644 --- a/include/sound/hda_codec.h +++ b/include/sound/hda_codec.h @@ -199,6 +199,7 @@ struct hda_codec { /* beep device */ struct hda_beep *beep; unsigned int beep_mode; + bool beep_just_power_on;
/* widget capabilities cache */ u32 *wcaps; diff --git a/sound/pci/hda/hda_beep.c b/sound/pci/hda/hda_beep.c index e63621bcb2142..1a684e47d4d18 100644 --- a/sound/pci/hda/hda_beep.c +++ b/sound/pci/hda/hda_beep.c @@ -31,8 +31,9 @@ static void generate_tone(struct hda_beep *beep, int tone) beep->power_hook(beep, true); beep->playing = 1; } - snd_hda_codec_write(codec, beep->nid, 0, - AC_VERB_SET_BEEP_CONTROL, tone); + if (!codec->beep_just_power_on) + snd_hda_codec_write(codec, beep->nid, 0, + AC_VERB_SET_BEEP_CONTROL, tone); if (!tone && beep->playing) { beep->playing = 0; if (beep->power_hook) @@ -212,10 +213,12 @@ int snd_hda_attach_beep_device(struct hda_codec *codec, int nid) struct hda_beep *beep; int err;
- if (!snd_hda_get_bool_hint(codec, "beep")) - return 0; /* disabled explicitly by hints */ - if (codec->beep_mode == HDA_BEEP_MODE_OFF) - return 0; /* disabled by module option */ + if (!codec->beep_just_power_on) { + if (!snd_hda_get_bool_hint(codec, "beep")) + return 0; /* disabled explicitly by hints */ + if (codec->beep_mode == HDA_BEEP_MODE_OFF) + return 0; /* disabled by module option */ + }
beep = kzalloc(sizeof(*beep), GFP_KERNEL); if (beep == NULL) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 61b48f2418bf0..2f67cd955d651 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -24,6 +24,7 @@ #include <sound/hda_codec.h> #include "hda_local.h" #include "hda_auto_parser.h" +#include "hda_beep.h" #include "hda_jack.h" #include "hda_generic.h" #include "hda_component.h" @@ -6858,6 +6859,30 @@ static void alc285_fixup_hp_envy_x360(struct hda_codec *codec, } }
+static void alc285_fixup_hp_beep(struct hda_codec *codec, + const struct hda_fixup *fix, int action) +{ + if (action == HDA_FIXUP_ACT_PRE_PROBE) { + codec->beep_just_power_on = true; + } else if (action == HDA_FIXUP_ACT_INIT) { +#ifdef CONFIG_SND_HDA_INPUT_BEEP + /* + * Just enable loopback to internal speaker and headphone jack. + * Disable amplification to get about the same beep volume as + * was on pure BIOS setup before loading the driver. + */ + alc_update_coef_idx(codec, 0x36, 0x7070, BIT(13)); + + snd_hda_enable_beep_device(codec, 1); + +#if !IS_ENABLED(CONFIG_INPUT_PCSPKR) + dev_warn_once(hda_codec_dev(codec), + "enable CONFIG_INPUT_PCSPKR to get PC beeps\n"); +#endif +#endif + } +} + /* for hda_fixup_thinkpad_acpi() */ #include "thinkpad_helper.c"
@@ -7400,6 +7425,7 @@ enum { ALC285_FIXUP_HP_GPIO_LED, ALC285_FIXUP_HP_MUTE_LED, ALC285_FIXUP_HP_SPECTRE_X360_MUTE_LED, + ALC285_FIXUP_HP_BEEP_MICMUTE_LED, ALC236_FIXUP_HP_MUTE_LED_COEFBIT2, ALC236_FIXUP_HP_GPIO_LED, ALC236_FIXUP_HP_MUTE_LED, @@ -8947,6 +8973,12 @@ static const struct hda_fixup alc269_fixups[] = { .type = HDA_FIXUP_FUNC, .v.func = alc285_fixup_hp_spectre_x360_mute_led, }, + [ALC285_FIXUP_HP_BEEP_MICMUTE_LED] = { + .type = HDA_FIXUP_FUNC, + .v.func = alc285_fixup_hp_beep, + .chained = true, + .chain_id = ALC285_FIXUP_HP_MUTE_LED, + }, [ALC236_FIXUP_HP_MUTE_LED_COEFBIT2] = { .type = HDA_FIXUP_FUNC, .v.func = alc236_fixup_hp_mute_led_coefbit2, @@ -9860,7 +9892,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x103c, 0x8730, "HP ProBook 445 G7", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), SND_PCI_QUIRK(0x103c, 0x8735, "HP ProBook 435 G7", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), SND_PCI_QUIRK(0x103c, 0x8736, "HP", ALC285_FIXUP_HP_GPIO_AMP_INIT), - SND_PCI_QUIRK(0x103c, 0x8760, "HP", ALC285_FIXUP_HP_MUTE_LED), + SND_PCI_QUIRK(0x103c, 0x8760, "HP EliteBook 8{4,5}5 G7", ALC285_FIXUP_HP_BEEP_MICMUTE_LED), SND_PCI_QUIRK(0x103c, 0x876e, "HP ENVY x360 Convertible 13-ay0xxx", ALC245_FIXUP_HP_X360_MUTE_LEDS), SND_PCI_QUIRK(0x103c, 0x877a, "HP", ALC285_FIXUP_HP_MUTE_LED), SND_PCI_QUIRK(0x103c, 0x877d, "HP", ALC236_FIXUP_HP_MUTE_LED),
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 254ba7e6032d3fc738050d500b0c1d8197af90ca ]
fib_valid_key_len() is called in the beginning of fib_table_insert() or fib_table_delete() to check if the prefix length is valid.
fib_table_insert() and fib_table_delete() are called from 3 paths
- ip_rt_ioctl() - inet_rtm_newroute() / inet_rtm_delroute() - fib_magic()
In the first ioctl() path, rtentry_to_fib_config() checks the prefix length with bad_mask(). Also, fib_magic() always passes the correct prefix: 32 or ifa->ifa_prefixlen, which is already validated.
Let's move fib_valid_key_len() to the rtnetlink path, rtm_to_fib_config().
While at it, two direct returns in rtm_to_fib_config() are converted to goto to match the other error paths in the same function.
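A small worked example of the checks now done in rtm_to_fib_config() (addresses are made up):

/*
 * RTM_NEWROUTE for 10.0.0.1/24:
 *   fc_dst_len = 24                     -> passes the "> 32" check
 *   ntohl(fc_dst) = 0x0a000001
 *   0x0a000001 << 24 = 0x01000000 != 0  -> host bits are set
 * so the request is rejected with
 * "Invalid prefix for given prefix length".
 * For 10.0.0.0/24 the shift yields 0 and the route is accepted.
 */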
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Reviewed-by: Eric Dumazet edumazet@google.com Reviewed-by: David Ahern dsahern@kernel.org Link: https://patch.msgid.link/20250228042328.96624-12-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/fib_frontend.c | 18 ++++++++++++++++-- net/ipv4/fib_trie.c | 22 ---------------------- 2 files changed, 16 insertions(+), 24 deletions(-)
diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c index 90ce87ffed461..7993ff46de23c 100644 --- a/net/ipv4/fib_frontend.c +++ b/net/ipv4/fib_frontend.c @@ -829,19 +829,33 @@ static int rtm_to_fib_config(struct net *net, struct sk_buff *skb, } }
+ if (cfg->fc_dst_len > 32) { + NL_SET_ERR_MSG(extack, "Invalid prefix length"); + err = -EINVAL; + goto errout; + } + + if (cfg->fc_dst_len < 32 && (ntohl(cfg->fc_dst) << cfg->fc_dst_len)) { + NL_SET_ERR_MSG(extack, "Invalid prefix for given prefix length"); + err = -EINVAL; + goto errout; + } + if (cfg->fc_nh_id) { if (cfg->fc_oif || cfg->fc_gw_family || cfg->fc_encap || cfg->fc_mp) { NL_SET_ERR_MSG(extack, "Nexthop specification and nexthop id are mutually exclusive"); - return -EINVAL; + err = -EINVAL; + goto errout; } }
if (has_gw && has_via) { NL_SET_ERR_MSG(extack, "Nexthop configuration can not contain both GATEWAY and VIA"); - return -EINVAL; + err = -EINVAL; + goto errout; }
if (!cfg->fc_table) diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c index 77b97c48da5ea..fa54b36b241ac 100644 --- a/net/ipv4/fib_trie.c +++ b/net/ipv4/fib_trie.c @@ -1192,22 +1192,6 @@ static int fib_insert_alias(struct trie *t, struct key_vector *tp, return 0; }
-static bool fib_valid_key_len(u32 key, u8 plen, struct netlink_ext_ack *extack) -{ - if (plen > KEYLENGTH) { - NL_SET_ERR_MSG(extack, "Invalid prefix length"); - return false; - } - - if ((plen < KEYLENGTH) && (key << plen)) { - NL_SET_ERR_MSG(extack, - "Invalid prefix for given prefix length"); - return false; - } - - return true; -} - static void fib_remove_alias(struct trie *t, struct key_vector *tp, struct key_vector *l, struct fib_alias *old);
@@ -1228,9 +1212,6 @@ int fib_table_insert(struct net *net, struct fib_table *tb,
key = ntohl(cfg->fc_dst);
- if (!fib_valid_key_len(key, plen, extack)) - return -EINVAL; - pr_debug("Insert table=%u %08x/%d\n", tb->tb_id, key, plen);
fi = fib_create_info(cfg, extack); @@ -1723,9 +1704,6 @@ int fib_table_delete(struct net *net, struct fib_table *tb,
key = ntohl(cfg->fc_dst);
- if (!fib_valid_key_len(key, plen, extack)) - return -EINVAL; - l = fib_find_node(t, &tp, key); if (!l) return -ESRCH;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Yan andy.yan@rock-chips.com
[ Upstream commit e7aae9f6d762139f8d2b86db03793ae0ab3dd802 ]
The Cluster windows of the upcoming VOP on rk3576 also support linear YUV formats, so we need to set the UV swap bit for them as well.
Since the VOP2_WIN_UV_SWAP register offset defined on rk3568/rk3588 is 0xffffffff, this register will not be touched on those two platforms.
Signed-off-by: Andy Yan andy.yan@rock-chips.com Tested-by: Michael Riesch michael.riesch@wolfvision.net # on RK3568 Tested-by: Detlev Casanova detlev.casanova@collabora.com Signed-off-by: Heiko Stuebner heiko@sntech.de Link: https://patchwork.freedesktop.org/patch/msgid/20250303034436.192400-4-andysh... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/rockchip/rockchip_drm_vop2.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c index 955ef2caac89f..6efa0a51b7d65 100644 --- a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c +++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c @@ -1289,10 +1289,8 @@ static void vop2_plane_atomic_update(struct drm_plane *plane,
rb_swap = vop2_win_rb_swap(fb->format->format); vop2_win_write(win, VOP2_WIN_RB_SWAP, rb_swap); - if (!vop2_cluster_window(win)) { - uv_swap = vop2_win_uv_swap(fb->format->format); - vop2_win_write(win, VOP2_WIN_UV_SWAP, uv_swap); - } + uv_swap = vop2_win_uv_swap(fb->format->format); + vop2_win_write(win, VOP2_WIN_UV_SWAP, uv_swap);
if (fb->format->is_yuv) { vop2_win_write(win, VOP2_WIN_UV_VIR, DIV_ROUND_UP(fb->pitches[1], 4));
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ricardo Ribalda ribalda@chromium.org
[ Upstream commit 990262fdfce24d6055df9711424343d94d829e6a ]
Do not process unknown data types.
Tested-by: Yunke Cao yunkec@google.com Reviewed-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Ricardo Ribalda ribalda@chromium.org Link: https://lore.kernel.org/r/20250203-uvc-roi-v17-15-5900a9fed613@chromium.org Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Hans Verkuil hverkuil@xs4all.nl Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/usb/uvc/uvc_v4l2.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c index bd4677a6e653a..0aaa4fce61dae 100644 --- a/drivers/media/usb/uvc/uvc_v4l2.c +++ b/drivers/media/usb/uvc/uvc_v4l2.c @@ -36,6 +36,12 @@ static int uvc_ioctl_ctrl_map(struct uvc_video_chain *chain, unsigned int size; int ret;
+ if (xmap->data_type > UVC_CTRL_DATA_TYPE_BITMASK) { + uvc_dbg(chain->dev, CONTROL, + "Unsupported UVC data type %u\n", xmap->data_type); + return -EINVAL; + } + map = kzalloc(sizeof(*map), GFP_KERNEL); if (map == NULL) return -ENOMEM;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ahmad Fatoum a.fatoum@pengutronix.de
[ Upstream commit 06a61b5cb6a8638fa8823cd09b17233b29696fa2 ]
The IMX8MPCEC datasheet lists maximum frequencies allowed for different modules. Some of these limits are universal, but some depend on whether the SoC is operating in nominal or in overdrive mode.
The imx8mp.dtsi currently assumes overdrive mode and configures some clocks in accordance with this. Boards wishing to make use of nominal mode will need to override some of the clock rates manually.
As operating the clocks outside of their allowed range can lead to difficult-to-debug issues, it makes sense to register the maximum allowed rates in the driver so the CCF can take them into account.
Reviewed-by: Peng Fan peng.fan@nxp.com Signed-off-by: Ahmad Fatoum a.fatoum@pengutronix.de Link: https://lore.kernel.org/r/20250218-imx8m-clk-v4-6-b7697dc2dcd0@pengutronix.d... Signed-off-by: Abel Vesa abel.vesa@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/clk/imx/clk-imx8mp.c | 151 +++++++++++++++++++++++++++++++++++ 1 file changed, 151 insertions(+)
diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c index 444dfd6adfe68..a74b73cd243e3 100644 --- a/drivers/clk/imx/clk-imx8mp.c +++ b/drivers/clk/imx/clk-imx8mp.c @@ -8,6 +8,7 @@ #include <linux/err.h> #include <linux/io.h> #include <linux/module.h> +#include <linux/units.h> #include <linux/of_address.h> #include <linux/platform_device.h> #include <linux/slab.h> @@ -405,11 +406,151 @@ static const char * const imx8mp_clkout_sels[] = {"audio_pll1_out", "audio_pll2_ static struct clk_hw **hws; static struct clk_hw_onecell_data *clk_hw_data;
+struct imx8mp_clock_constraints { + unsigned int clkid; + u32 maxrate; +}; + +/* + * Below tables are taken from IMX8MPCEC Rev. 2.1, 07/2023 + * Table 13. Maximum frequency of modules. + * Probable typos fixed are marked with a comment. + */ +static const struct imx8mp_clock_constraints imx8mp_clock_common_constraints[] = { + { IMX8MP_CLK_A53_DIV, 1000 * HZ_PER_MHZ }, + { IMX8MP_CLK_ENET_AXI, 266666667 }, /* Datasheet claims 266MHz */ + { IMX8MP_CLK_NAND_USDHC_BUS, 266666667 }, /* Datasheet claims 266MHz */ + { IMX8MP_CLK_MEDIA_APB, 200 * HZ_PER_MHZ }, + { IMX8MP_CLK_HDMI_APB, 133333333 }, /* Datasheet claims 133MHz */ + { IMX8MP_CLK_ML_AXI, 800 * HZ_PER_MHZ }, + { IMX8MP_CLK_AHB, 133333333 }, + { IMX8MP_CLK_IPG_ROOT, 66666667 }, + { IMX8MP_CLK_AUDIO_AHB, 400 * HZ_PER_MHZ }, + { IMX8MP_CLK_MEDIA_DISP2_PIX, 170 * HZ_PER_MHZ }, + { IMX8MP_CLK_DRAM_ALT, 666666667 }, + { IMX8MP_CLK_DRAM_APB, 200 * HZ_PER_MHZ }, + { IMX8MP_CLK_CAN1, 80 * HZ_PER_MHZ }, + { IMX8MP_CLK_CAN2, 80 * HZ_PER_MHZ }, + { IMX8MP_CLK_PCIE_AUX, 10 * HZ_PER_MHZ }, + { IMX8MP_CLK_I2C5, 66666667 }, /* Datasheet claims 66MHz */ + { IMX8MP_CLK_I2C6, 66666667 }, /* Datasheet claims 66MHz */ + { IMX8MP_CLK_SAI1, 66666667 }, /* Datasheet claims 66MHz */ + { IMX8MP_CLK_SAI2, 66666667 }, /* Datasheet claims 66MHz */ + { IMX8MP_CLK_SAI3, 66666667 }, /* Datasheet claims 66MHz */ + { IMX8MP_CLK_SAI5, 66666667 }, /* Datasheet claims 66MHz */ + { IMX8MP_CLK_SAI6, 66666667 }, /* Datasheet claims 66MHz */ + { IMX8MP_CLK_ENET_QOS, 125 * HZ_PER_MHZ }, + { IMX8MP_CLK_ENET_QOS_TIMER, 200 * HZ_PER_MHZ }, + { IMX8MP_CLK_ENET_REF, 125 * HZ_PER_MHZ }, + { IMX8MP_CLK_ENET_TIMER, 125 * HZ_PER_MHZ }, + { IMX8MP_CLK_ENET_PHY_REF, 125 * HZ_PER_MHZ }, + { IMX8MP_CLK_NAND, 500 * HZ_PER_MHZ }, + { IMX8MP_CLK_QSPI, 400 * HZ_PER_MHZ }, + { IMX8MP_CLK_USDHC1, 400 * HZ_PER_MHZ }, + { IMX8MP_CLK_USDHC2, 400 * HZ_PER_MHZ }, + { IMX8MP_CLK_I2C1, 66666667 }, /* Datasheet claims 66MHz */ + { IMX8MP_CLK_I2C2, 66666667 }, /* Datasheet claims 66MHz */ + { IMX8MP_CLK_I2C3, 66666667 }, /* Datasheet claims 66MHz */ + { IMX8MP_CLK_I2C4, 66666667 }, /* Datasheet claims 66MHz */ + { IMX8MP_CLK_UART1, 80 * HZ_PER_MHZ }, + { IMX8MP_CLK_UART2, 80 * HZ_PER_MHZ }, + { IMX8MP_CLK_UART3, 80 * HZ_PER_MHZ }, + { IMX8MP_CLK_UART4, 80 * HZ_PER_MHZ }, + { IMX8MP_CLK_ECSPI1, 80 * HZ_PER_MHZ }, + { IMX8MP_CLK_ECSPI2, 80 * HZ_PER_MHZ }, + { IMX8MP_CLK_PWM1, 66666667 }, /* Datasheet claims 66MHz */ + { IMX8MP_CLK_PWM2, 66666667 }, /* Datasheet claims 66MHz */ + { IMX8MP_CLK_PWM3, 66666667 }, /* Datasheet claims 66MHz */ + { IMX8MP_CLK_PWM4, 66666667 }, /* Datasheet claims 66MHz */ + { IMX8MP_CLK_GPT1, 100 * HZ_PER_MHZ }, + { IMX8MP_CLK_GPT2, 100 * HZ_PER_MHZ }, + { IMX8MP_CLK_GPT3, 100 * HZ_PER_MHZ }, + { IMX8MP_CLK_GPT4, 100 * HZ_PER_MHZ }, + { IMX8MP_CLK_GPT5, 100 * HZ_PER_MHZ }, + { IMX8MP_CLK_GPT6, 100 * HZ_PER_MHZ }, + { IMX8MP_CLK_WDOG, 66666667 }, /* Datasheet claims 66MHz */ + { IMX8MP_CLK_IPP_DO_CLKO1, 200 * HZ_PER_MHZ }, + { IMX8MP_CLK_IPP_DO_CLKO2, 200 * HZ_PER_MHZ }, + { IMX8MP_CLK_HDMI_REF_266M, 266 * HZ_PER_MHZ }, + { IMX8MP_CLK_USDHC3, 400 * HZ_PER_MHZ }, + { IMX8MP_CLK_MEDIA_MIPI_PHY1_REF, 300 * HZ_PER_MHZ }, + { IMX8MP_CLK_MEDIA_DISP1_PIX, 250 * HZ_PER_MHZ }, + { IMX8MP_CLK_MEDIA_CAM2_PIX, 277 * HZ_PER_MHZ }, + { IMX8MP_CLK_MEDIA_LDB, 595 * HZ_PER_MHZ }, + { IMX8MP_CLK_MEDIA_MIPI_TEST_BYTE, 200 * HZ_PER_MHZ }, + { IMX8MP_CLK_ECSPI3, 80 * HZ_PER_MHZ }, + { IMX8MP_CLK_PDM, 200 * HZ_PER_MHZ }, + { IMX8MP_CLK_SAI7, 66666667 }, /* Datasheet claims 66MHz */ + { 
IMX8MP_CLK_MAIN_AXI, 400 * HZ_PER_MHZ }, + { /* Sentinel */ } +}; + +static const struct imx8mp_clock_constraints imx8mp_clock_nominal_constraints[] = { + { IMX8MP_CLK_M7_CORE, 600 * HZ_PER_MHZ }, + { IMX8MP_CLK_ML_CORE, 800 * HZ_PER_MHZ }, + { IMX8MP_CLK_GPU3D_CORE, 800 * HZ_PER_MHZ }, + { IMX8MP_CLK_GPU3D_SHADER_CORE, 800 * HZ_PER_MHZ }, + { IMX8MP_CLK_GPU2D_CORE, 800 * HZ_PER_MHZ }, + { IMX8MP_CLK_AUDIO_AXI_SRC, 600 * HZ_PER_MHZ }, + { IMX8MP_CLK_HSIO_AXI, 400 * HZ_PER_MHZ }, + { IMX8MP_CLK_MEDIA_ISP, 400 * HZ_PER_MHZ }, + { IMX8MP_CLK_VPU_BUS, 600 * HZ_PER_MHZ }, + { IMX8MP_CLK_MEDIA_AXI, 400 * HZ_PER_MHZ }, + { IMX8MP_CLK_HDMI_AXI, 400 * HZ_PER_MHZ }, + { IMX8MP_CLK_GPU_AXI, 600 * HZ_PER_MHZ }, + { IMX8MP_CLK_GPU_AHB, 300 * HZ_PER_MHZ }, + { IMX8MP_CLK_NOC, 800 * HZ_PER_MHZ }, + { IMX8MP_CLK_NOC_IO, 600 * HZ_PER_MHZ }, + { IMX8MP_CLK_ML_AHB, 300 * HZ_PER_MHZ }, + { IMX8MP_CLK_VPU_G1, 600 * HZ_PER_MHZ }, + { IMX8MP_CLK_VPU_G2, 500 * HZ_PER_MHZ }, + { IMX8MP_CLK_MEDIA_CAM1_PIX, 400 * HZ_PER_MHZ }, + { IMX8MP_CLK_VPU_VC8000E, 400 * HZ_PER_MHZ }, /* Datasheet claims 500MHz */ + { IMX8MP_CLK_DRAM_CORE, 800 * HZ_PER_MHZ }, + { IMX8MP_CLK_GIC, 400 * HZ_PER_MHZ }, + { /* Sentinel */ } +}; + +static const struct imx8mp_clock_constraints imx8mp_clock_overdrive_constraints[] = { + { IMX8MP_CLK_M7_CORE, 800 * HZ_PER_MHZ}, + { IMX8MP_CLK_ML_CORE, 1000 * HZ_PER_MHZ }, + { IMX8MP_CLK_GPU3D_CORE, 1000 * HZ_PER_MHZ }, + { IMX8MP_CLK_GPU3D_SHADER_CORE, 1000 * HZ_PER_MHZ }, + { IMX8MP_CLK_GPU2D_CORE, 1000 * HZ_PER_MHZ }, + { IMX8MP_CLK_AUDIO_AXI_SRC, 800 * HZ_PER_MHZ }, + { IMX8MP_CLK_HSIO_AXI, 500 * HZ_PER_MHZ }, + { IMX8MP_CLK_MEDIA_ISP, 500 * HZ_PER_MHZ }, + { IMX8MP_CLK_VPU_BUS, 800 * HZ_PER_MHZ }, + { IMX8MP_CLK_MEDIA_AXI, 500 * HZ_PER_MHZ }, + { IMX8MP_CLK_HDMI_AXI, 500 * HZ_PER_MHZ }, + { IMX8MP_CLK_GPU_AXI, 800 * HZ_PER_MHZ }, + { IMX8MP_CLK_GPU_AHB, 400 * HZ_PER_MHZ }, + { IMX8MP_CLK_NOC, 1000 * HZ_PER_MHZ }, + { IMX8MP_CLK_NOC_IO, 800 * HZ_PER_MHZ }, + { IMX8MP_CLK_ML_AHB, 400 * HZ_PER_MHZ }, + { IMX8MP_CLK_VPU_G1, 800 * HZ_PER_MHZ }, + { IMX8MP_CLK_VPU_G2, 700 * HZ_PER_MHZ }, + { IMX8MP_CLK_MEDIA_CAM1_PIX, 500 * HZ_PER_MHZ }, + { IMX8MP_CLK_VPU_VC8000E, 500 * HZ_PER_MHZ }, /* Datasheet claims 400MHz */ + { IMX8MP_CLK_DRAM_CORE, 1000 * HZ_PER_MHZ }, + { IMX8MP_CLK_GIC, 500 * HZ_PER_MHZ }, + { /* Sentinel */ } +}; + +static void imx8mp_clocks_apply_constraints(const struct imx8mp_clock_constraints constraints[]) +{ + const struct imx8mp_clock_constraints *constr; + + for (constr = constraints; constr->clkid; constr++) + clk_hw_set_rate_range(hws[constr->clkid], 0, constr->maxrate); +} + static int imx8mp_clocks_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; struct device_node *np; void __iomem *anatop_base, *ccm_base; + const char *opmode; int err;
np = of_find_compatible_node(NULL, NULL, "fsl,imx8mp-anatop"); @@ -704,6 +845,16 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
imx_check_clk_hws(hws, IMX8MP_CLK_END);
+ imx8mp_clocks_apply_constraints(imx8mp_clock_common_constraints); + + err = of_property_read_string(np, "fsl,operating-mode", &opmode); + if (!err) { + if (!strcmp(opmode, "nominal")) + imx8mp_clocks_apply_constraints(imx8mp_clock_nominal_constraints); + else if (!strcmp(opmode, "overdrive")) + imx8mp_clocks_apply_constraints(imx8mp_clock_overdrive_constraints); + } + err = of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_hw_data); if (err < 0) { dev_err(dev, "failed to register hws for i.MX8MP\n");
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Breno Leitao leitao@debian.org
[ Upstream commit 98fdaeb296f51ef08e727a7cc72e5b5c864c4f4d ]
Change the default value of spectre v2 in user mode to respect the CONFIG_MITIGATION_SPECTRE_V2 config option.
Currently, user mode spectre v2 is set to auto (SPECTRE_V2_USER_CMD_AUTO) by default, even if CONFIG_MITIGATION_SPECTRE_V2 is disabled.
Set the spectre_v2 value to auto (SPECTRE_V2_USER_CMD_AUTO) if the Spectre v2 config (CONFIG_MITIGATION_SPECTRE_V2) is enabled, otherwise set the value to none (SPECTRE_V2_USER_CMD_NONE).
It is important to note that the command-line argument "spectre_v2_user" overrides the default value in both cases.
When CONFIG_MITIGATION_SPECTRE_V2 is not set, users have the flexibility to opt-in for specific mitigations independently. In this scenario, setting spectre_v2= will not enable spectre_v2_user=, and command line options spectre_v2_user and spectre_v2 are independent when CONFIG_MITIGATION_SPECTRE_V2=n.
Signed-off-by: Breno Leitao leitao@debian.org Signed-off-by: Ingo Molnar mingo@kernel.org Reviewed-by: Pawan Gupta pawan.kumar.gupta@linux.intel.com Acked-by: Josh Poimboeuf jpoimboe@kernel.org Cc: Peter Zijlstra peterz@infradead.org Cc: David Kaplan David.Kaplan@amd.com Link: https://lore.kernel.org/r/20241031-x86_bugs_last_v2-v2-2-b7ff1dab840e@debian... Signed-off-by: Sasha Levin sashal@kernel.org --- Documentation/admin-guide/kernel-parameters.txt | 2 ++ arch/x86/kernel/cpu/bugs.c | 10 +++++++--- 2 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 6938c8cd7a6f6..15e40774e9bc9 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -5780,6 +5780,8 @@
Selecting 'on' will also enable the mitigation against user space to user space task attacks. + Selecting specific mitigation does not force enable + user mitigations.
Selecting 'off' will disable both the kernel and the user space protections. diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 766cee7fa9056..7233474c798f7 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1382,9 +1382,13 @@ static __ro_after_init enum spectre_v2_mitigation_cmd spectre_v2_cmd; static enum spectre_v2_user_cmd __init spectre_v2_parse_user_cmdline(void) { + enum spectre_v2_user_cmd mode; char arg[20]; int ret, i;
+ mode = IS_ENABLED(CONFIG_MITIGATION_SPECTRE_V2) ? + SPECTRE_V2_USER_CMD_AUTO : SPECTRE_V2_USER_CMD_NONE; + switch (spectre_v2_cmd) { case SPECTRE_V2_CMD_NONE: return SPECTRE_V2_USER_CMD_NONE; @@ -1397,7 +1401,7 @@ spectre_v2_parse_user_cmdline(void) ret = cmdline_find_option(boot_command_line, "spectre_v2_user", arg, sizeof(arg)); if (ret < 0) - return SPECTRE_V2_USER_CMD_AUTO; + return mode;
for (i = 0; i < ARRAY_SIZE(v2_user_options); i++) { if (match_option(arg, ret, v2_user_options[i].option)) { @@ -1407,8 +1411,8 @@ spectre_v2_parse_user_cmdline(void) } }
- pr_err("Unknown user space protection option (%s). Switching to AUTO select\n", arg); - return SPECTRE_V2_USER_CMD_AUTO; + pr_err("Unknown user space protection option (%s). Switching to default\n", arg); + return mode; }
static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexander Stein alexander.stein@ew.tq-group.com
[ Upstream commit 9fee7d19bab635f89223cc40dfd2c8797fdc4988 ]
set_fan_speed() is expected to be called with fan_data->lock being locked. Add locking for proper synchronization.
Signed-off-by: Alexander Stein alexander.stein@ew.tq-group.com Link: https://lore.kernel.org/r/20250210145934.761280-3-alexander.stein@ew.tq-grou... Signed-off-by: Guenter Roeck linux@roeck-us.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hwmon/gpio-fan.c | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/drivers/hwmon/gpio-fan.c b/drivers/hwmon/gpio-fan.c index ba408942dbe73..f1926b9171e0c 100644 --- a/drivers/hwmon/gpio-fan.c +++ b/drivers/hwmon/gpio-fan.c @@ -392,7 +392,12 @@ static int gpio_fan_set_cur_state(struct thermal_cooling_device *cdev, if (state >= fan_data->num_speed) return -EINVAL;
+ mutex_lock(&fan_data->lock); + set_fan_speed(fan_data, state); + + mutex_unlock(&fan_data->lock); + return 0; }
@@ -488,7 +493,11 @@ MODULE_DEVICE_TABLE(of, of_gpio_fan_match);
static void gpio_fan_stop(void *data) { + struct gpio_fan_data *fan_data = data; + + mutex_lock(&fan_data->lock); set_fan_speed(data, 0); + mutex_unlock(&fan_data->lock); }
static int gpio_fan_probe(struct platform_device *pdev) @@ -561,7 +570,9 @@ static int gpio_fan_suspend(struct device *dev)
if (fan_data->gpios) { fan_data->resume_speed = fan_data->speed_index; + mutex_lock(&fan_data->lock); set_fan_speed(fan_data, 0); + mutex_unlock(&fan_data->lock); }
return 0; @@ -571,8 +582,11 @@ static int gpio_fan_resume(struct device *dev) { struct gpio_fan_data *fan_data = dev_get_drvdata(dev);
- if (fan_data->gpios) + if (fan_data->gpios) { + mutex_lock(&fan_data->lock); set_fan_speed(fan_data, fan_data->resume_speed); + mutex_unlock(&fan_data->lock); + }
return 0; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Li Bin bin.li@microchip.com
[ Upstream commit bc4722c3598d0e2c2dbf9609a3d3198993093e2b ]
For sama7g5 and sama7d65 backup mode, we encountered a "ZQ calibrate error" while recalibrating the impedance in BootStrap. We found that the impedance value saved in at91_suspend_finish() before the DDR entered self-refresh mode did not match the resistor values. The ZDATA field in the DDR3PHY_ZQ0CR0 register uses a modified gray code to select the different impedance settings, but these gray codes were incorrect; a workaround from the design team fixes the bug in the calibration logic. The ZDATA field contains four independent impedance elements, but the algorithm combined the four elements into one. The elements are now stored using properly shifted offsets.
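For illustration, a small sketch (not from the patch; the helper is hypothetical) of how each 5-bit impedance element is now handled independently: extract the element at its bit offset, look up the corrected code, and store it back shifted to the same offset rather than OR-ing unshifted values together:

  /* Illustrative helper, not from the patch: one 5-bit element is read
   * from ZQ0SR0 at its offset and the corrected code is stored back at
   * the same offset, so the four elements no longer collapse into one. */
  static unsigned int fix_zq_element(unsigned int zq0sr0, unsigned int off,
                                     const unsigned char *fix_code)
  {
      unsigned int index = (zq0sr0 >> off) & 0x1f;  /* 5-bit element */

      return (unsigned int)fix_code[index] << off;  /* keep its position */
  }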
Signed-off-by: Li Bin bin.li@microchip.com [nicolas.ferre@microchip.com: fix indentation and combine 2 patches] Signed-off-by: Nicolas Ferre nicolas.ferre@microchip.com Tested-by: Ryan Wanner Ryan.Wanner@microchip.com Tested-by: Durai Manickam KR durai.manickamkr@microchip.com Tested-by: Andrei Simion andrei.simion@microchip.com Signed-off-by: Ryan Wanner Ryan.Wanner@microchip.com Link: https://lore.kernel.org/r/28b33f9bcd0ca60ceba032969fe054d38f2b9577.174067115... Signed-off-by: Claudiu Beznea claudiu.beznea@tuxon.dev Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/mach-at91/pm.c | 21 +++++++++++---------- 1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/arch/arm/mach-at91/pm.c b/arch/arm/mach-at91/pm.c index 4d0d0d49a7442..77aec670f635c 100644 --- a/arch/arm/mach-at91/pm.c +++ b/arch/arm/mach-at91/pm.c @@ -537,11 +537,12 @@ extern u32 at91_pm_suspend_in_sram_sz;
static int at91_suspend_finish(unsigned long val) { - unsigned char modified_gray_code[] = { - 0x00, 0x01, 0x02, 0x03, 0x06, 0x07, 0x04, 0x05, 0x0c, 0x0d, - 0x0e, 0x0f, 0x0a, 0x0b, 0x08, 0x09, 0x18, 0x19, 0x1a, 0x1b, - 0x1e, 0x1f, 0x1c, 0x1d, 0x14, 0x15, 0x16, 0x17, 0x12, 0x13, - 0x10, 0x11, + /* SYNOPSYS workaround to fix a bug in the calibration logic */ + unsigned char modified_fix_code[] = { + 0x00, 0x01, 0x01, 0x06, 0x07, 0x0c, 0x06, 0x07, 0x0b, 0x18, + 0x0a, 0x0b, 0x0c, 0x0d, 0x0d, 0x0a, 0x13, 0x13, 0x12, 0x13, + 0x14, 0x15, 0x15, 0x12, 0x18, 0x19, 0x19, 0x1e, 0x1f, 0x14, + 0x1e, 0x1f, }; unsigned int tmp, index; int i; @@ -552,25 +553,25 @@ static int at91_suspend_finish(unsigned long val) * restore the ZQ0SR0 with the value saved here. But the * calibration is buggy and restoring some values from ZQ0SR0 * is forbidden and risky thus we need to provide processed - * values for these (modified gray code values). + * values for these. */ tmp = readl(soc_pm.data.ramc_phy + DDR3PHY_ZQ0SR0);
/* Store pull-down output impedance select. */ index = (tmp >> DDR3PHY_ZQ0SR0_PDO_OFF) & 0x1f; - soc_pm.bu->ddr_phy_calibration[0] = modified_gray_code[index]; + soc_pm.bu->ddr_phy_calibration[0] = modified_fix_code[index] << DDR3PHY_ZQ0SR0_PDO_OFF;
/* Store pull-up output impedance select. */ index = (tmp >> DDR3PHY_ZQ0SR0_PUO_OFF) & 0x1f; - soc_pm.bu->ddr_phy_calibration[0] |= modified_gray_code[index]; + soc_pm.bu->ddr_phy_calibration[0] |= modified_fix_code[index] << DDR3PHY_ZQ0SR0_PUO_OFF;
/* Store pull-down on-die termination impedance select. */ index = (tmp >> DDR3PHY_ZQ0SR0_PDODT_OFF) & 0x1f; - soc_pm.bu->ddr_phy_calibration[0] |= modified_gray_code[index]; + soc_pm.bu->ddr_phy_calibration[0] |= modified_fix_code[index] << DDR3PHY_ZQ0SR0_PDODT_OFF;
/* Store pull-up on-die termination impedance select. */ index = (tmp >> DDR3PHY_ZQ0SRO_PUODT_OFF) & 0x1f; - soc_pm.bu->ddr_phy_calibration[0] |= modified_gray_code[index]; + soc_pm.bu->ddr_phy_calibration[0] |= modified_fix_code[index] << DDR3PHY_ZQ0SRO_PUODT_OFF;
/* * The 1st 8 words of memory might get corrupted in the process
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com
[ Upstream commit 8c9da7cd0bbcc90ab444454fecf535320456a312 ]
In preparation for adding support for newer DPI instances which do support direct-pin but do not have any H_FRE_CON register, like the ones found in MT8195 and MT8188, add a branch to check whether the reg_h_fre_con variable was declared in the mtk_dpi_conf structure for the probed SoC's DPI version.
As a note, this is only useful for cases in which the support_direct_pin variable is true, so mt8195-dpintf is not affected by any issue.
Reviewed-by: CK Hu ck.hu@mediatek.com Signed-off-by: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com Link: https://patchwork.kernel.org/project/dri-devel/patch/20250217154836.108895-6... Signed-off-by: Chun-Kuang Hu chunkuang.hu@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/mediatek/mtk_dpi.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/mediatek/mtk_dpi.c b/drivers/gpu/drm/mediatek/mtk_dpi.c index 1fa958e8c40a1..5ad9c384046cb 100644 --- a/drivers/gpu/drm/mediatek/mtk_dpi.c +++ b/drivers/gpu/drm/mediatek/mtk_dpi.c @@ -406,12 +406,13 @@ static void mtk_dpi_config_swap_input(struct mtk_dpi *dpi, bool enable)
static void mtk_dpi_config_2n_h_fre(struct mtk_dpi *dpi) { - mtk_dpi_mask(dpi, dpi->conf->reg_h_fre_con, H_FRE_2N, H_FRE_2N); + if (dpi->conf->reg_h_fre_con) + mtk_dpi_mask(dpi, dpi->conf->reg_h_fre_con, H_FRE_2N, H_FRE_2N); }
static void mtk_dpi_config_disable_edge(struct mtk_dpi *dpi) { - if (dpi->conf->edge_sel_en) + if (dpi->conf->edge_sel_en && dpi->conf->reg_h_fre_con) mtk_dpi_mask(dpi, dpi->conf->reg_h_fre_con, 0, EDGE_SEL_EN); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuhanh Murugasen Krishnan kuhanh.murugasen.krishnan@intel.com
[ Upstream commit 0f05886a40fdc55016ba4d9ae0a9c41f8312f15b ]
Increase the timeout for SDM (Secure Device Manager) data credits from 20ms to 40ms. Internal stress tests running for 500 loops failed with the current timeout of 20ms. At the start of an FPGA configuration, the CVP host driver reads the transmit credits from the SDM. It then sends bitstream FPGA data to the SDM based on the total credits. Each credit allows the CVP host driver to send 4 kBytes of data. There were situations during testing where the SDM did not respond in time.
Signed-off-by: Ang Tien Sung tien.sung.ang@intel.com Signed-off-by: Kuhanh Murugasen Krishnan kuhanh.murugasen.krishnan@intel.com Acked-by: Xu Yilun yilun.xu@intel.com Link: https://lore.kernel.org/r/20250212221249.2715929-1-tien.sung.ang@intel.com Signed-off-by: Xu Yilun yilun.xu@linux.intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/fpga/altera-cvp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/fpga/altera-cvp.c b/drivers/fpga/altera-cvp.c index 4ffb9da537d82..5295ff90482bc 100644 --- a/drivers/fpga/altera-cvp.c +++ b/drivers/fpga/altera-cvp.c @@ -52,7 +52,7 @@ /* V2 Defines */ #define VSE_CVP_TX_CREDITS 0x49 /* 8bit */
-#define V2_CREDIT_TIMEOUT_US 20000 +#define V2_CREDIT_TIMEOUT_US 40000 #define V2_CHECK_CREDIT_US 10 #define V2_POLL_TIMEOUT_US 1000000 #define V2_USER_TIMEOUT_US 500000
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Janne Grunau j@jannau.net
[ Upstream commit 22af2fac88fa5dbc310bfe7d0b66d4de3ac47305 ]
rtkit messages used for communication with the DCP firmware, such as framebuffer swaps or input events, are time critical, so use WQ_HIGHPRI to prevent user space CPU load from increasing latency. With kwin_wayland 6's explicit sync mode, user space load was able to delay the IOMFB rtkit communication enough to miss vsync for surface swaps. A minimal test scenario is constantly resizing a glxgears Xwayland window.
Signed-off-by: Janne Grunau j@jannau.net Reviewed-by: Alyssa Rosenzweig alyssa@rosenzweig.io Link: https://lore.kernel.org/r/20250226-apple-soc-misc-v2-3-c3ec37f9021b@svenpete... Signed-off-by: Sven Peter sven@svenpeter.dev Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/soc/apple/rtkit.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/soc/apple/rtkit.c b/drivers/soc/apple/rtkit.c index 8ec74d7539eb4..1ec0c3ba0be22 100644 --- a/drivers/soc/apple/rtkit.c +++ b/drivers/soc/apple/rtkit.c @@ -731,7 +731,7 @@ static struct apple_rtkit *apple_rtkit_init(struct device *dev, void *cookie, rtk->mbox_cl.rx_callback = &apple_rtkit_rx; rtk->mbox_cl.tx_done = &apple_rtkit_tx_done;
- rtk->wq = alloc_ordered_workqueue("rtkit-%s", WQ_MEM_RECLAIM, + rtk->wq = alloc_ordered_workqueue("rtkit-%s", WQ_HIGHPRI | WQ_MEM_RECLAIM, dev_name(rtk->dev)); if (!rtk->wq) { ret = -ENOMEM;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hector Martin marcan@marcan.st
[ Upstream commit a06398687065e0c334dc5fc4d2778b5b87292e43 ]
Apparently nobody can figure out where the old logic came from, but it seems like it has never been actually used on any supported firmware to this day. OSLog buffers were apparently never requested.
But starting with 13.3, we actually need this implemented properly for MTP (and later AOP) to work, so let's actually do that.
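For reference, a user-space sketch of the OSLOG buffer-request layout the patch decodes (helper name and values are illustrative): the type lives in bits 63:56, the size in bits 55:36, and the IOVA page number in bits 35:0, which is shifted left by 12 to obtain a byte address:

  #include <stdint.h>

  /* Illustrative decode of an OSLOG buffer-request message. */
  static void decode_oslog_buffer_request(uint64_t msg)
  {
      uint8_t  type = msg >> 56;                      /* bits 63:56 */
      uint32_t size = (msg >> 36) & 0xfffff;          /* bits 55:36, bytes */
      uint64_t iova = (msg & 0xfffffffffULL) << 12;   /* bits 35:0, page -> bytes */

      (void)type; (void)size; (void)iova;
  }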
Signed-off-by: Hector Martin marcan@marcan.st Reviewed-by: Alyssa Rosenzweig alyssa@rosenzweig.io Link: https://lore.kernel.org/r/20250226-apple-soc-misc-v2-2-c3ec37f9021b@svenpete... Signed-off-by: Sven Peter sven@svenpeter.dev Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/soc/apple/rtkit-internal.h | 1 + drivers/soc/apple/rtkit.c | 56 ++++++++++++++++++------------ 2 files changed, 35 insertions(+), 22 deletions(-)
diff --git a/drivers/soc/apple/rtkit-internal.h b/drivers/soc/apple/rtkit-internal.h index 24bd619ec5e48..1da1dfd9cb199 100644 --- a/drivers/soc/apple/rtkit-internal.h +++ b/drivers/soc/apple/rtkit-internal.h @@ -48,6 +48,7 @@ struct apple_rtkit {
struct apple_rtkit_shmem ioreport_buffer; struct apple_rtkit_shmem crashlog_buffer; + struct apple_rtkit_shmem oslog_buffer;
struct apple_rtkit_shmem syslog_buffer; char *syslog_msg_buffer; diff --git a/drivers/soc/apple/rtkit.c b/drivers/soc/apple/rtkit.c index 1ec0c3ba0be22..968f9f6333936 100644 --- a/drivers/soc/apple/rtkit.c +++ b/drivers/soc/apple/rtkit.c @@ -65,8 +65,9 @@ enum { #define APPLE_RTKIT_SYSLOG_MSG_SIZE GENMASK_ULL(31, 24)
#define APPLE_RTKIT_OSLOG_TYPE GENMASK_ULL(63, 56) -#define APPLE_RTKIT_OSLOG_INIT 1 -#define APPLE_RTKIT_OSLOG_ACK 3 +#define APPLE_RTKIT_OSLOG_BUFFER_REQUEST 1 +#define APPLE_RTKIT_OSLOG_SIZE GENMASK_ULL(55, 36) +#define APPLE_RTKIT_OSLOG_IOVA GENMASK_ULL(35, 0)
#define APPLE_RTKIT_MIN_SUPPORTED_VERSION 11 #define APPLE_RTKIT_MAX_SUPPORTED_VERSION 12 @@ -255,15 +256,21 @@ static int apple_rtkit_common_rx_get_buffer(struct apple_rtkit *rtk, struct apple_rtkit_shmem *buffer, u8 ep, u64 msg) { - size_t n_4kpages = FIELD_GET(APPLE_RTKIT_BUFFER_REQUEST_SIZE, msg); u64 reply; int err;
+ /* The different size vs. IOVA shifts look odd but are indeed correct this way */ + if (ep == APPLE_RTKIT_EP_OSLOG) { + buffer->size = FIELD_GET(APPLE_RTKIT_OSLOG_SIZE, msg); + buffer->iova = FIELD_GET(APPLE_RTKIT_OSLOG_IOVA, msg) << 12; + } else { + buffer->size = FIELD_GET(APPLE_RTKIT_BUFFER_REQUEST_SIZE, msg) << 12; + buffer->iova = FIELD_GET(APPLE_RTKIT_BUFFER_REQUEST_IOVA, msg); + } + buffer->buffer = NULL; buffer->iomem = NULL; buffer->is_mapped = false; - buffer->iova = FIELD_GET(APPLE_RTKIT_BUFFER_REQUEST_IOVA, msg); - buffer->size = n_4kpages << 12;
dev_dbg(rtk->dev, "RTKit: buffer request for 0x%zx bytes at %pad\n", buffer->size, &buffer->iova); @@ -288,11 +295,21 @@ static int apple_rtkit_common_rx_get_buffer(struct apple_rtkit *rtk, }
if (!buffer->is_mapped) { - reply = FIELD_PREP(APPLE_RTKIT_SYSLOG_TYPE, - APPLE_RTKIT_BUFFER_REQUEST); - reply |= FIELD_PREP(APPLE_RTKIT_BUFFER_REQUEST_SIZE, n_4kpages); - reply |= FIELD_PREP(APPLE_RTKIT_BUFFER_REQUEST_IOVA, - buffer->iova); + /* oslog uses different fields and needs a shifted IOVA instead of size */ + if (ep == APPLE_RTKIT_EP_OSLOG) { + reply = FIELD_PREP(APPLE_RTKIT_OSLOG_TYPE, + APPLE_RTKIT_OSLOG_BUFFER_REQUEST); + reply |= FIELD_PREP(APPLE_RTKIT_OSLOG_SIZE, buffer->size); + reply |= FIELD_PREP(APPLE_RTKIT_OSLOG_IOVA, + buffer->iova >> 12); + } else { + reply = FIELD_PREP(APPLE_RTKIT_SYSLOG_TYPE, + APPLE_RTKIT_BUFFER_REQUEST); + reply |= FIELD_PREP(APPLE_RTKIT_BUFFER_REQUEST_SIZE, + buffer->size >> 12); + reply |= FIELD_PREP(APPLE_RTKIT_BUFFER_REQUEST_IOVA, + buffer->iova); + } apple_rtkit_send_message(rtk, ep, reply, NULL, false); }
@@ -474,25 +491,18 @@ static void apple_rtkit_syslog_rx(struct apple_rtkit *rtk, u64 msg) } }
-static void apple_rtkit_oslog_rx_init(struct apple_rtkit *rtk, u64 msg) -{ - u64 ack; - - dev_dbg(rtk->dev, "RTKit: oslog init: msg: 0x%llx\n", msg); - ack = FIELD_PREP(APPLE_RTKIT_OSLOG_TYPE, APPLE_RTKIT_OSLOG_ACK); - apple_rtkit_send_message(rtk, APPLE_RTKIT_EP_OSLOG, ack, NULL, false); -} - static void apple_rtkit_oslog_rx(struct apple_rtkit *rtk, u64 msg) { u8 type = FIELD_GET(APPLE_RTKIT_OSLOG_TYPE, msg);
switch (type) { - case APPLE_RTKIT_OSLOG_INIT: - apple_rtkit_oslog_rx_init(rtk, msg); + case APPLE_RTKIT_OSLOG_BUFFER_REQUEST: + apple_rtkit_common_rx_get_buffer(rtk, &rtk->oslog_buffer, + APPLE_RTKIT_EP_OSLOG, msg); break; default: - dev_warn(rtk->dev, "RTKit: Unknown oslog message: %llx\n", msg); + dev_warn(rtk->dev, "RTKit: Unknown oslog message: %llx\n", + msg); } }
@@ -773,6 +783,7 @@ int apple_rtkit_reinit(struct apple_rtkit *rtk)
apple_rtkit_free_buffer(rtk, &rtk->ioreport_buffer); apple_rtkit_free_buffer(rtk, &rtk->crashlog_buffer); + apple_rtkit_free_buffer(rtk, &rtk->oslog_buffer); apple_rtkit_free_buffer(rtk, &rtk->syslog_buffer);
kfree(rtk->syslog_msg_buffer); @@ -935,6 +946,7 @@ static void apple_rtkit_free(void *data)
apple_rtkit_free_buffer(rtk, &rtk->ioreport_buffer); apple_rtkit_free_buffer(rtk, &rtk->crashlog_buffer); + apple_rtkit_free_buffer(rtk, &rtk->oslog_buffer); apple_rtkit_free_buffer(rtk, &rtk->syslog_buffer);
kfree(rtk->syslog_msg_buffer);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Stanimir Varbanov svarbanov@suse.de
[ Upstream commit 25a98c727015638baffcfa236e3f37b70cedcf87 ]
The BCM2712 memory map can support up to 64GB of system memory, so expand the inbound window size in the calculation helper function.
The change is safe for the currently supported SoCs that have smaller inbound window sizes.
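As a rough illustration of the widened encoding (a user-space sketch, not the driver function itself): sizes from 64 KiB (2^16) up to 64 GiB (2^36) map to log2(size) - 15, so a 64 GiB window now encodes to 21 instead of being rejected:

  #include <stdint.h>

  /* Illustrative user-space version of the inbound-window size encoding. */
  static int encode_ibar_size(uint64_t size)
  {
      int log2_in;

      if (!size)
          return 0;
      log2_in = 63 - __builtin_clzll(size);

      if (log2_in >= 12 && log2_in <= 15)     /* 4 KiB .. 32 KiB */
          return (log2_in - 12) + 0x1c;
      if (log2_in >= 16 && log2_in <= 36)     /* 64 KiB .. 64 GiB */
          return log2_in - 15;
      return 0;                               /* out of range: disable */
  }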
Signed-off-by: Stanimir Varbanov svarbanov@suse.de Reviewed-by: Florian Fainelli florian.fainelli@broadcom.com Reviewed-by: Jim Quinlan james.quinlan@broadcom.com Tested-by: Ivan T. Ivanov iivanov@suse.de Link: https://lore.kernel.org/r/20250224083559.47645-7-svarbanov@suse.de [kwilczynski: commit log] Signed-off-by: Krzysztof Wilczyński kwilczynski@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pci/controller/pcie-brcmstb.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c index 425db793080d4..fe37bd28761a8 100644 --- a/drivers/pci/controller/pcie-brcmstb.c +++ b/drivers/pci/controller/pcie-brcmstb.c @@ -281,8 +281,8 @@ static int brcm_pcie_encode_ibar_size(u64 size) if (log2_in >= 12 && log2_in <= 15) /* Covers 4KB to 32KB (inclusive) */ return (log2_in - 12) + 0x1c; - else if (log2_in >= 16 && log2_in <= 35) - /* Covers 64KB to 32GB, (inclusive) */ + else if (log2_in >= 16 && log2_in <= 36) + /* Covers 64KB to 64GB, (inclusive) */ return log2_in - 15; /* Something is awry so disable */ return 0;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Stanimir Varbanov svarbanov@suse.de
[ Upstream commit 2294059118c550464dd8906286324d90c33b152b ]
When the brcmstb PCIe driver and the MIP MSI-X interrupt controller driver are built as modules, there could be a race in probing.
To avoid this, add a softdep on the MIP driver to guarantee that the MIP driver will be loaded first.
Signed-off-by: Stanimir Varbanov svarbanov@suse.de Reviewed-by: Florian Fainelli florian.fainelli@broadcom.com Tested-by: Ivan T. Ivanov iivanov@suse.de Link: https://lore.kernel.org/r/20250224083559.47645-5-svarbanov@suse.de [kwilczynski: commit log] Signed-off-by: Krzysztof Wilczyński kwilczynski@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pci/controller/pcie-brcmstb.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c index fe37bd28761a8..c89ad1f92a07f 100644 --- a/drivers/pci/controller/pcie-brcmstb.c +++ b/drivers/pci/controller/pcie-brcmstb.c @@ -1619,3 +1619,4 @@ module_platform_driver(brcm_pcie_driver); MODULE_LICENSE("GPL"); MODULE_DESCRIPTION("Broadcom STB PCIe RC driver"); MODULE_AUTHOR("Broadcom"); +MODULE_SOFTDEP("pre: irq_bcm2712_mip");
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Viresh Kumar viresh.kumar@linaro.org
[ Upstream commit cc0aac7ca17e0ea3ca84b552fc79f3e86fd07f53 ]
Set dma_mask for FFA devices; otherwise, DMA allocation using the device pointer leads to the following warning:
WARNING: CPU: 1 PID: 1 at kernel/dma/mapping.c:597 dma_alloc_attrs+0xe0/0x124
Signed-off-by: Viresh Kumar viresh.kumar@linaro.org Message-Id: e3dd8042ac680bd74b6580c25df855d092079c18.1737107520.git.viresh.kumar@linaro.org Signed-off-by: Sudeep Holla sudeep.holla@arm.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/firmware/arm_ffa/bus.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/firmware/arm_ffa/bus.c b/drivers/firmware/arm_ffa/bus.c index 248594b59c64d..5bda5d7ade42d 100644 --- a/drivers/firmware/arm_ffa/bus.c +++ b/drivers/firmware/arm_ffa/bus.c @@ -191,6 +191,7 @@ struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id, dev = &ffa_dev->dev; dev->bus = &ffa_bus_type; dev->release = ffa_release_device; + dev->dma_mask = &dev->coherent_dma_mask; dev_set_name(&ffa_dev->dev, "arm-ffa-%d", id);
ffa_dev->id = id;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Moshe Shemesh moshe@nvidia.com
[ Upstream commit b5d7b2f04ebcff740f44ef4d295b3401aeb029f4 ]
In case the health counter has not increased for a few polling intervals, the miss counter will reach the max misses threshold and a health report will be triggered for the FW health reporter. If a syndrome is found on the same health poll, another health report will be triggered.
Avoid two health reports on the same syndrome by marking this syndrome as already known.
Signed-off-by: Moshe Shemesh moshe@nvidia.com Reviewed-by: Shahar Shitrit shshitrit@nvidia.com Signed-off-by: Tariq Toukan tariqt@nvidia.com Reviewed-by: Kalesh AP kalesh-anakkur.purayil@broadcom.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/health.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c index 65483dab90573..b4faac12789d9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/health.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c @@ -850,6 +850,7 @@ static void poll_health(struct timer_list *t) health->prev = count; if (health->miss_counter == MAX_MISSES) { mlx5_core_err(dev, "device's health compromised - reached miss count\n"); + health->synd = ioread8(&h->synd); print_health_info(dev); queue_work(health->wq, &health->report_work); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kevin Krakauer krakauer@google.com
[ Upstream commit 784e6abd99f24024a8998b5916795f0bec9d2fd9 ]
Modify gro.sh to return a useful exit code when the -t flag is used. It formerly returned 0 no matter what.
Tested: Ran `gro.sh -t large` and verified that test failures return 1. Signed-off-by: Kevin Krakauer krakauer@google.com Reviewed-by: Willem de Bruijn willemb@google.com Link: https://patch.msgid.link/20250226192725.621969-2-krakauer@google.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/selftests/net/gro.sh | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/net/gro.sh b/tools/testing/selftests/net/gro.sh index 342ad27f631b1..e771f5f7faa26 100755 --- a/tools/testing/selftests/net/gro.sh +++ b/tools/testing/selftests/net/gro.sh @@ -95,5 +95,6 @@ trap cleanup EXIT if [[ "${test}" == "all" ]]; then run_all_tests else - run_test "${proto}" "${test}" + exit_code=$(run_test "${proto}" "${test}") + exit $exit_code fi;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Philip Yang Philip.Yang@amd.com
[ Upstream commit 1b9366c601039d60546794c63fbb83ce8e53b978 ]
If we wait for the GPU reset to finish in the KFD release_work, there is a lockdep warning:

  WARNING: possible circular locking dependency detected
  #2  kfd_create_process
        kfd_process_mutex
        flush kfd release work

  #1  kfd release work
        wait for amdgpu reset work

  #0  amdgpu_device_gpu_reset
        kgd2kfd_pre_reset
        kfd_process_mutex
Possible unsafe locking scenario:
        CPU0                                    CPU1
        ----                                    ----
   lock((work_completion)(&p->release_work));
                                           lock((wq_completion)kfd_process_wq);
                                           lock((work_completion)(&p->release_work));
   lock((wq_completion)amdgpu-reset-dev);
To fix this, make kfd_create_process() flush the release work outside of the kfd_process_mutex.
Signed-off-by: Philip Yang Philip.Yang@amd.com Reviewed-by: Felix Kuehling felix.kuehling@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/amdkfd/kfd_process.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c index bc01c5173ab9a..fd7fecaa9254b 100644 --- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c +++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c @@ -813,6 +813,14 @@ struct kfd_process *kfd_create_process(struct file *filep) if (thread->group_leader->mm != thread->mm) return ERR_PTR(-EINVAL);
+ /* If the process just called exec(3), it is possible that the + * cleanup of the kfd_process (following the release of the mm + * of the old process image) is still in the cleanup work queue. + * Make sure to drain any job before trying to recreate any + * resource for this process. + */ + flush_workqueue(kfd_process_wq); + /* * take kfd processes mutex before starting of process creation * so there won't be a case where two threads of the same process @@ -825,14 +833,6 @@ struct kfd_process *kfd_create_process(struct file *filep) if (process) { pr_debug("Process already found\n"); } else { - /* If the process just called exec(3), it is possible that the - * cleanup of the kfd_process (following the release of the mm - * of the old process image) is still in the cleanup work queue. - * Make sure to drain any job before trying to recreate any - * resource for this process. - */ - flush_workqueue(kfd_process_wq); - process = create_process(thread); if (IS_ERR(process)) goto out;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yuanjun Gong ruc_gongyuanjun@163.com
[ Upstream commit 6d91124e7edc109f114b1afe6d00d85d0d0ac174 ]
Add a check on the return value of fwnode_property_read_u32() and handle the case where it fails.
Signed-off-by: Yuanjun Gong ruc_gongyuanjun@163.com Link: https://lore.kernel.org/r/20250223121459.2889484-1-ruc_gongyuanjun@163.com Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/leds/rgb/leds-pwm-multicolor.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/leds/rgb/leds-pwm-multicolor.c b/drivers/leds/rgb/leds-pwm-multicolor.c index da9d2218ae184..97aa06e2ff603 100644 --- a/drivers/leds/rgb/leds-pwm-multicolor.c +++ b/drivers/leds/rgb/leds-pwm-multicolor.c @@ -135,8 +135,11 @@ static int led_pwm_mc_probe(struct platform_device *pdev)
/* init the multicolor's LED class device */ cdev = &priv->mc_cdev.led_cdev; - fwnode_property_read_u32(mcnode, "max-brightness", + ret = fwnode_property_read_u32(mcnode, "max-brightness", &cdev->max_brightness); + if (ret) + goto release_mcnode; + cdev->flags = LED_CORE_SUSPENDRESUME; cdev->brightness_set_blocking = led_pwm_mc_set;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Woudstra ericwouds@gmail.com
[ Upstream commit 7fe0353606d77a32c4c7f2814833dd1c043ebdd2 ]
mtk_foe_entry_set_vlan() in mtk_ppe.c already supports double vlan tagging, but mtk_flow_offload_replace() in mtk_ppe_offload.c only allows for 1 vlan tag, optionally in combination with pppoe and dsa tags.
However, mtk_foe_entry_set_vlan() only allows setting the VLAN id. The protocol cannot be set; it is always ETH_P_8021Q, for both the inner and outer tag. This patch adds QinQ support to mtk_flow_offload_replace(), but only for the case where both the inner and outer tags are ETH_P_8021Q.
Only PPPoE-in-Q (as before) and Q-in-Q are allowed. A combination of PPPoE and Q-in-Q is not allowed.
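A small sketch of the resulting encapsulation rule (illustrative helper, not driver code): at most two stacked encapsulations in total, and PPPoE may only be combined with a single 802.1Q tag, never with Q-in-Q:

  #include <stdbool.h>

  /* Illustrative helper, not driver code. */
  static bool tag_combo_allowed(int num_vlans, int num_pppoe)
  {
      if (num_vlans + num_pppoe > 2)
          return false;       /* a third encapsulation is rejected */
      if (num_pppoe && num_vlans > 1)
          return false;       /* PPPoE + Q-in-Q is not allowed */
      return true;            /* Q-in-Q or PPPoE-in-Q are fine */
  }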
Signed-off-by: Eric Woudstra ericwouds@gmail.com Link: https://patch.msgid.link/20250225201509.20843-1-ericwouds@gmail.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/mediatek/mtk_ppe_offload.c | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c index 6a72687d5b83f..8cb8d47227f51 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c +++ b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c @@ -34,8 +34,10 @@ struct mtk_flow_data { u16 vlan_in;
struct { - u16 id; - __be16 proto; + struct { + u16 id; + __be16 proto; + } vlans[2]; u8 num; } vlan; struct { @@ -321,18 +323,19 @@ mtk_flow_offload_replace(struct mtk_eth *eth, struct flow_cls_offload *f) case FLOW_ACTION_CSUM: break; case FLOW_ACTION_VLAN_PUSH: - if (data.vlan.num == 1 || + if (data.vlan.num + data.pppoe.num == 2 || act->vlan.proto != htons(ETH_P_8021Q)) return -EOPNOTSUPP;
- data.vlan.id = act->vlan.vid; - data.vlan.proto = act->vlan.proto; + data.vlan.vlans[data.vlan.num].id = act->vlan.vid; + data.vlan.vlans[data.vlan.num].proto = act->vlan.proto; data.vlan.num++; break; case FLOW_ACTION_VLAN_POP: break; case FLOW_ACTION_PPPOE_PUSH: - if (data.pppoe.num == 1) + if (data.pppoe.num == 1 || + data.vlan.num == 2) return -EOPNOTSUPP;
data.pppoe.sid = act->pppoe.sid; @@ -422,12 +425,9 @@ mtk_flow_offload_replace(struct mtk_eth *eth, struct flow_cls_offload *f) if (offload_type == MTK_PPE_PKT_TYPE_BRIDGE) foe.bridge.vlan = data.vlan_in;
- if (data.vlan.num == 1) { - if (data.vlan.proto != htons(ETH_P_8021Q)) - return -EOPNOTSUPP; + for (i = 0; i < data.vlan.num; i++) + mtk_foe_entry_set_vlan(eth, &foe, data.vlan.vlans[i].id);
- mtk_foe_entry_set_vlan(eth, &foe, data.vlan.id); - } if (data.pppoe.num == 1) mtk_foe_entry_set_pppoe(eth, &foe, data.pppoe.sid);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Arnd Bergmann arnd@arndb.de
[ Upstream commit 01358e8fe922f716c05d7864ac2213b2440026e7 ]
Building with W=1 shows a warning about xge_acpi_match being unused when CONFIG_ACPI is disabled:
drivers/net/ethernet/apm/xgene-v2/main.c:723:36: error: unused variable 'xge_acpi_match' [-Werror,-Wunused-const-variable]
Signed-off-by: Arnd Bergmann arnd@arndb.de Link: https://patch.msgid.link/20250225163341.4168238-2-arnd@kernel.org Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/apm/xgene-v2/main.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/apm/xgene-v2/main.c b/drivers/net/ethernet/apm/xgene-v2/main.c index 379d19d18dbed..5808e3c73a8f4 100644 --- a/drivers/net/ethernet/apm/xgene-v2/main.c +++ b/drivers/net/ethernet/apm/xgene-v2/main.c @@ -9,8 +9,6 @@
#include "main.h"
-static const struct acpi_device_id xge_acpi_match[]; - static int xge_get_resources(struct xge_pdata *pdata) { struct platform_device *pdev; @@ -733,7 +731,7 @@ MODULE_DEVICE_TABLE(acpi, xge_acpi_match); static struct platform_driver xge_driver = { .driver = { .name = "xgene-enet-v2", - .acpi_match_table = ACPI_PTR(xge_acpi_match), + .acpi_match_table = xge_acpi_match, }, .probe = xge_probe, .remove = xge_remove,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hangbin Liu liuhangbin@gmail.com
[ Upstream commit 28d68d396a1cd21591e8c6d74afbde33a7ea107e ]
Normally, a bond uses the MAC address of the first added slave as the bond's MAC address. The bond will also set the active slave's MAC address to the bond's address if fail_over_mac is set to none (0) or follow (2).
When the first slave is removed, the bond will still use the removed slave’s MAC address, which can lead to a duplicate MAC address and potentially cause issues with the switch. To avoid confusion, let's warn the user in all situations, including when fail_over_mac is set to 2 or not in active-backup mode.
Signed-off-by: Hangbin Liu liuhangbin@gmail.com Reviewed-by: Nikolay Aleksandrov razor@blackwall.org Link: https://patch.msgid.link/20250225033914.18617-1-liuhangbin@gmail.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/bonding/bond_main.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c index ded9e369e4038..3cedadef9c8ab 100644 --- a/drivers/net/bonding/bond_main.c +++ b/drivers/net/bonding/bond_main.c @@ -2431,7 +2431,7 @@ static int __bond_release_one(struct net_device *bond_dev,
RCU_INIT_POINTER(bond->current_arp_slave, NULL);
- if (!all && (!bond->params.fail_over_mac || + if (!all && (bond->params.fail_over_mac != BOND_FOM_ACTIVE || BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP)) { if (ether_addr_equal_64bits(bond_dev->dev_addr, slave->perm_hwaddr) && bond_has_slaves(bond))
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrew Davis afd@ti.com
[ Upstream commit a5caf03188e44388e8c618dcbe5fffad1a249385 ]
The syscon helper device_node_to_regmap() is used to fetch a regmap registered to a device node. It also currently creates this regmap if the node did not already have a regmap associated with it. This should only be used on "syscon" nodes. This driver is not such a device and instead uses device_node_to_regmap() on its own node as a hacky way to create a regmap for itself.
This will not work going forward and so we should create our regmap the normal way by defining our regmap_config, fetching our memory resource, then using the normal regmap_init_mmio() function.
Signed-off-by: Andrew Davis afd@ti.com Link: https://lore.kernel.org/r/20250123181726.597144-1-afd@ti.com Signed-off-by: Nishanth Menon nm@ti.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/soc/ti/k3-socinfo.c | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/drivers/soc/ti/k3-socinfo.c b/drivers/soc/ti/k3-socinfo.c index 91f441ee61752..5b0d8260918d2 100644 --- a/drivers/soc/ti/k3-socinfo.c +++ b/drivers/soc/ti/k3-socinfo.c @@ -60,6 +60,12 @@ k3_chipinfo_partno_to_names(unsigned int partno, return -EINVAL; }
+static const struct regmap_config k3_chipinfo_regmap_cfg = { + .reg_bits = 32, + .val_bits = 32, + .reg_stride = 4, +}; + static int k3_chipinfo_probe(struct platform_device *pdev) { struct device_node *node = pdev->dev.of_node; @@ -67,13 +73,18 @@ static int k3_chipinfo_probe(struct platform_device *pdev) struct device *dev = &pdev->dev; struct soc_device *soc_dev; struct regmap *regmap; + void __iomem *base; u32 partno_id; u32 variant; u32 jtag_id; u32 mfg; int ret;
- regmap = device_node_to_regmap(node); + base = devm_platform_ioremap_resource(pdev, 0); + if (IS_ERR(base)) + return PTR_ERR(base); + + regmap = regmap_init_mmio(dev, base, &k3_chipinfo_regmap_cfg); if (IS_ERR(regmap)) return PTR_ERR(regmap);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nir Lichtman nir@lichtman.org
[ Upstream commit e451630226bd09dc730eedb4e32cab1cc7155ae8 ]
Problem: Currently when running the "make isoimage" command there is an error related to wrong parameters passed to the cp command:
"cp: missing destination file operand after 'arch/x86/boot/isoimage/'"
This is caused by FDINITRDS being an empty array.
Solution: Check if FDINITRDS is empty before executing the "cp" command, similar to how it is done in the case of hdimage.
Signed-off-by: Nir Lichtman nir@lichtman.org Signed-off-by: Ingo Molnar mingo@kernel.org Cc: "H. Peter Anvin" hpa@zytor.com Cc: Ard Biesheuvel ardb@kernel.org Cc: Masahiro Yamada yamada.masahiro@socionext.com Cc: Michal Marek michal.lkml@markovi.net Link: https://lore.kernel.org/r/20250110120500.GA923218@lichtman.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/boot/genimage.sh | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/x86/boot/genimage.sh b/arch/x86/boot/genimage.sh index c9299aeb7333e..3882ead513f74 100644 --- a/arch/x86/boot/genimage.sh +++ b/arch/x86/boot/genimage.sh @@ -22,6 +22,7 @@ # This script requires: # bash # syslinux +# genisoimage # mtools (for fdimage* and hdimage) # edk2/OVMF (for hdimage) # @@ -251,7 +252,9 @@ geniso() { cp "$isolinux" "$ldlinux" "$tmp_dir" cp "$FBZIMAGE" "$tmp_dir"/linux echo default linux "$KCMDLINE" > "$tmp_dir"/isolinux.cfg - cp "${FDINITRDS[@]}" "$tmp_dir"/ + if [ ${#FDINITRDS[@]} -gt 0 ]; then + cp "${FDINITRDS[@]}" "$tmp_dir"/ + fi genisoimage -J -r -appid 'LINUX_BOOT' -input-charset=utf-8 \ -quiet -o "$FIMAGE" -b isolinux.bin \ -c boot.cat -no-emul-boot -boot-load-size 4 \
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yihan Zhu Yihan.Zhu@amd.com
[ Upstream commit 02a940da2ccc0cc0299811379580852b405a0ea2 ]
[WHY] If the max_downscale_src_width check fails, we exit early from the TAP calculation and leave a null value in the scaling data structure, which causes the divide-by-zero in the DML validation.
[HOW] Call the default TAP calculation before the early exit in get_optimal_number_of_taps() when the max downscale limit is exceeded.
Reviewed-by: Samson Tam samson.tam@amd.com Signed-off-by: Yihan Zhu Yihan.Zhu@amd.com Signed-off-by: Zaeem Mohamed zaeem.mohamed@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c index 50dc834046446..4ce45f1bdac0f 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c +++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c @@ -392,11 +392,6 @@ bool dpp3_get_optimal_number_of_taps( int min_taps_y, min_taps_c; enum lb_memory_config lb_config;
- if (scl_data->viewport.width > scl_data->h_active && - dpp->ctx->dc->debug.max_downscale_src_width != 0 && - scl_data->viewport.width > dpp->ctx->dc->debug.max_downscale_src_width) - return false; - /* * Set default taps if none are provided * From programming guide: taps = min{ ceil(2*H_RATIO,1), 8} for downscaling @@ -434,6 +429,12 @@ bool dpp3_get_optimal_number_of_taps( else scl_data->taps.h_taps_c = in_taps->h_taps_c;
+ // Avoid null data in the scl data with this early return, proceed non-adaptive calcualtion first + if (scl_data->viewport.width > scl_data->h_active && + dpp->ctx->dc->debug.max_downscale_src_width != 0 && + scl_data->viewport.width > dpp->ctx->dc->debug.max_downscale_src_width) + return false; + /*Ensure we can support the requested number of vtaps*/ min_taps_y = dc_fixpt_ceil(scl_data->ratios.vert); min_taps_c = dc_fixpt_ceil(scl_data->ratios.vert_c);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Waiman Long longman@redhat.com
[ Upstream commit fe37c699ae3eed6e02ee55fbf5cb9ceb7fcfd76c ]
Depending on the type of panic, it was found that the __register_nmi_handler() function can be called in NMI context from nmi_shootdown_cpus(), leading to a lockdep splat:
  WARNING: inconsistent lock state
  inconsistent {INITIAL USE} -> {IN-NMI} usage.

   lock(&nmi_desc[0].lock);
   <Interrupt>
     lock(&nmi_desc[0].lock);

  Call Trace:
   _raw_spin_lock_irqsave
   __register_nmi_handler
   nmi_shootdown_cpus
   kdump_nmi_shootdown_cpus
   native_machine_crash_shutdown
   __crash_kexec
In this particular case, the following panic message was printed before:
Kernel panic - not syncing: Fatal hardware error!
This message seemed to be given out from __ghes_panic() running in NMI context.
The __register_nmi_handler() function, which takes the nmi_desc lock with irqs disabled, shouldn't be called from NMI context as this can lead to deadlock.
The nmi_shootdown_cpus() function can only be invoked once. After the first invocation, all other CPUs should be stuck in the newly added crash_nmi_callback() and cannot respond to a second NMI.
Fix it by adding a new emergency NMI handler to the nmi_desc structure and providing a new set_emergency_nmi_handler() helper to set crash_nmi_callback() in any context. The new emergency handler will preempt the other handlers in the linked list. That eliminates the need to take any lock and serves the panic-in-NMI use case.
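A user-space sketch of the lock-free pattern described above (names are illustrative; C11 atomics stand in for the plain store plus smp_wmb() used in the patch): the NMI path reads a single emergency-handler pointer first, so installing it never needs the handler-list spinlock:

  #include <stdatomic.h>

  typedef int (*nmi_handler_t)(unsigned int type, void *regs);

  static _Atomic(nmi_handler_t) emerg_handler;

  /* Installing the handler needs no lock; release ordering plays the
   * role of the smp_wmb() in the patch. */
  static void set_emergency_handler(nmi_handler_t h)
  {
      atomic_store_explicit(&emerg_handler, h, memory_order_release);
  }

  static int handle_nmi(unsigned int type, void *regs)
  {
      nmi_handler_t h = atomic_load_explicit(&emerg_handler,
                                             memory_order_acquire);
      if (h)
          return h(type, regs);   /* preempts the normal handler list */
      /* ... otherwise walk the registered handler list ... */
      return 0;
  }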
Signed-off-by: Waiman Long longman@redhat.com Signed-off-by: Ingo Molnar mingo@kernel.org Acked-by: Rik van Riel riel@surriel.com Cc: Thomas Gleixner tglx@linutronix.de Link: https://lore.kernel.org/r/20250206191844.131700-1-longman@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/include/asm/nmi.h | 2 ++ arch/x86/kernel/nmi.c | 42 ++++++++++++++++++++++++++++++++++++++ arch/x86/kernel/reboot.c | 10 +++------ 3 files changed, 47 insertions(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/nmi.h b/arch/x86/include/asm/nmi.h index 5c5f1e56c4048..6f3d145670a95 100644 --- a/arch/x86/include/asm/nmi.h +++ b/arch/x86/include/asm/nmi.h @@ -59,6 +59,8 @@ int __register_nmi_handler(unsigned int, struct nmiaction *);
void unregister_nmi_handler(unsigned int, const char *);
+void set_emergency_nmi_handler(unsigned int type, nmi_handler_t handler); + void stop_nmi(void); void restart_nmi(void); void local_touch_nmi(void); diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c index ed6cce6c39504..b9a128546970f 100644 --- a/arch/x86/kernel/nmi.c +++ b/arch/x86/kernel/nmi.c @@ -38,8 +38,12 @@ #define CREATE_TRACE_POINTS #include <trace/events/nmi.h>
+/* + * An emergency handler can be set in any context including NMI + */ struct nmi_desc { raw_spinlock_t lock; + nmi_handler_t emerg_handler; struct list_head head; };
@@ -121,9 +125,22 @@ static void nmi_check_duration(struct nmiaction *action, u64 duration) static int nmi_handle(unsigned int type, struct pt_regs *regs) { struct nmi_desc *desc = nmi_to_desc(type); + nmi_handler_t ehandler; struct nmiaction *a; int handled=0;
+ /* + * Call the emergency handler, if set + * + * In the case of crash_nmi_callback() emergency handler, it will + * return in the case of the crashing CPU to enable it to complete + * other necessary crashing actions ASAP. Other handlers in the + * linked list won't need to be run. + */ + ehandler = desc->emerg_handler; + if (ehandler) + return ehandler(type, regs); + rcu_read_lock();
/* @@ -213,6 +230,31 @@ void unregister_nmi_handler(unsigned int type, const char *name) } EXPORT_SYMBOL_GPL(unregister_nmi_handler);
+/** + * set_emergency_nmi_handler - Set emergency handler + * @type: NMI type + * @handler: the emergency handler to be stored + * + * Set an emergency NMI handler which, if set, will preempt all the other + * handlers in the linked list. If a NULL handler is passed in, it will clear + * it. It is expected that concurrent calls to this function will not happen + * or the system is screwed beyond repair. + */ +void set_emergency_nmi_handler(unsigned int type, nmi_handler_t handler) +{ + struct nmi_desc *desc = nmi_to_desc(type); + + if (WARN_ON_ONCE(desc->emerg_handler == handler)) + return; + desc->emerg_handler = handler; + + /* + * Ensure the emergency handler is visible to other CPUs before + * function return + */ + smp_wmb(); +} + static void pci_serr_error(unsigned char reason, struct pt_regs *regs) { diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c index 299b970e5f829..d9dbcd1cf75f8 100644 --- a/arch/x86/kernel/reboot.c +++ b/arch/x86/kernel/reboot.c @@ -896,15 +896,11 @@ void nmi_shootdown_cpus(nmi_shootdown_cb callback) shootdown_callback = callback;
atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1); - /* Would it be better to replace the trap vector here? */ - if (register_nmi_handler(NMI_LOCAL, crash_nmi_callback, - NMI_FLAG_FIRST, "crash")) - return; /* Return what? */ + /* - * Ensure the new callback function is set before sending - * out the NMI + * Set emergency handler to preempt other handlers. */ - wmb(); + set_emergency_nmi_handler(NMI_LOCAL, crash_nmi_callback);
apic_send_IPI_allbutself(NMI_VECTOR);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Rafael J. Wysocki rafael.j.wysocki@intel.com
[ Upstream commit 85975daeaa4d6ec560bfcd354fc9c08ad7f38888 ]
When giving up on making a high-confidence prediction, get_typical_interval() always returns UINT_MAX which means that the next idle interval prediction will be based entirely on the time till the next timer. However, the information represented by the most recent intervals may not be completely useless in those cases.
Namely, the largest recent idle interval is an upper bound on the recently observed idle duration, so it is reasonable to assume that the next idle duration is unlikely to exceed it. Moreover, this is still true after eliminating the suspected outliers if the sample set still under consideration is at least as large as 50% of the maximum sample set size.
Accordingly, make get_typical_interval() return the current maximum recent interval value in that case instead of UINT_MAX.
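As a rough standalone sketch of the new decision, modelling only the branch taken once the variance check has already failed (divisor * 4 <= INTERVALS * 3); INTERVALS matches the menu governor's sample-buffer size of 8, the rest is simplified:

/* Sketch of the revised "give up" path in get_typical_interval(). */
#include <limits.h>

#define INTERVALS 8

static unsigned int prediction_on_giveup(unsigned int divisor, unsigned int max)
{
	/*
	 * At least half of the maximum sample set survived outlier
	 * elimination, so the largest remaining interval is a usable
	 * upper bound for the next idle duration.
	 */
	if (divisor >= INTERVALS / 2)
		return max;

	/* Otherwise keep the old behaviour: no prediction at all. */
	return UINT_MAX;
}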
Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Reported-by: Artem Bityutskiy artem.bityutskiy@linux.intel.com Tested-by: Artem Bityutskiy artem.bityutskiy@linux.intel.com Reviewed-by: Christian Loehle christian.loehle@arm.com Tested-by: Christian Loehle christian.loehle@arm.com Tested-by: Aboorva Devarajan aboorvad@linux.ibm.com Link: https://patch.msgid.link/7770672.EvYhyI6sBW@rjwysocki.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/cpuidle/governors/menu.c | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c index c4922684f3058..4edac724983a4 100644 --- a/drivers/cpuidle/governors/menu.c +++ b/drivers/cpuidle/governors/menu.c @@ -249,8 +249,19 @@ static unsigned int get_typical_interval(struct menu_device *data, * This can deal with workloads that have long pauses interspersed * with sporadic activity with a bunch of short pauses. */ - if ((divisor * 4) <= INTERVALS * 3) + if (divisor * 4 <= INTERVALS * 3) { + /* + * If there are sufficiently many data points still under + * consideration after the outliers have been eliminated, + * returning without a prediction would be a mistake because it + * is likely that the next interval will not exceed the current + * maximum, so return the latter in that case. + */ + if (divisor >= INTERVALS / 2) + return max; + return UINT_MAX; + }
thresh = max - 1; goto again;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Niklas Söderlund niklas.soderlund+renesas@ragnatech.se
[ Upstream commit a980bc5f56b0292336e408f657f79e574e8067c0 ]
The register that enables selecting a test-pattern to be outputted in free-run mode (FREE_RUN_PAT_SEL[2:0]) is only available on adv7280 based devices, not the adv7180 based ones.
Add a flag to mark which devices are capable of generating test patterns and which are not, and only register the control on supported devices.
Signed-off-by: Niklas Söderlund niklas.soderlund+renesas@ragnatech.se Signed-off-by: Hans Verkuil hverkuil@xs4all.nl Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/i2c/adv7180.c | 34 ++++++++++++++++++++++------------ 1 file changed, 22 insertions(+), 12 deletions(-)
diff --git a/drivers/media/i2c/adv7180.c b/drivers/media/i2c/adv7180.c index 216fe396973f2..46912a7b671a8 100644 --- a/drivers/media/i2c/adv7180.c +++ b/drivers/media/i2c/adv7180.c @@ -194,6 +194,7 @@ struct adv7180_state; #define ADV7180_FLAG_V2 BIT(1) #define ADV7180_FLAG_MIPI_CSI2 BIT(2) #define ADV7180_FLAG_I2P BIT(3) +#define ADV7180_FLAG_TEST_PATTERN BIT(4)
struct adv7180_chip_info { unsigned int flags; @@ -673,11 +674,15 @@ static int adv7180_init_controls(struct adv7180_state *state) ADV7180_HUE_MAX, 1, ADV7180_HUE_DEF); v4l2_ctrl_new_custom(&state->ctrl_hdl, &adv7180_ctrl_fast_switch, NULL);
- v4l2_ctrl_new_std_menu_items(&state->ctrl_hdl, &adv7180_ctrl_ops, - V4L2_CID_TEST_PATTERN, - ARRAY_SIZE(test_pattern_menu) - 1, - 0, ARRAY_SIZE(test_pattern_menu) - 1, - test_pattern_menu); + if (state->chip_info->flags & ADV7180_FLAG_TEST_PATTERN) { + v4l2_ctrl_new_std_menu_items(&state->ctrl_hdl, + &adv7180_ctrl_ops, + V4L2_CID_TEST_PATTERN, + ARRAY_SIZE(test_pattern_menu) - 1, + 0, + ARRAY_SIZE(test_pattern_menu) - 1, + test_pattern_menu); + }
state->sd.ctrl_handler = &state->ctrl_hdl; if (state->ctrl_hdl.error) { @@ -1209,7 +1214,7 @@ static const struct adv7180_chip_info adv7182_info = { };
static const struct adv7180_chip_info adv7280_info = { - .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_I2P, + .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_I2P | ADV7180_FLAG_TEST_PATTERN, .valid_input_mask = BIT(ADV7182_INPUT_CVBS_AIN1) | BIT(ADV7182_INPUT_CVBS_AIN2) | BIT(ADV7182_INPUT_CVBS_AIN3) | @@ -1223,7 +1228,8 @@ static const struct adv7180_chip_info adv7280_info = { };
static const struct adv7180_chip_info adv7280_m_info = { - .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2 | ADV7180_FLAG_I2P, + .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2 | ADV7180_FLAG_I2P | + ADV7180_FLAG_TEST_PATTERN, .valid_input_mask = BIT(ADV7182_INPUT_CVBS_AIN1) | BIT(ADV7182_INPUT_CVBS_AIN2) | BIT(ADV7182_INPUT_CVBS_AIN3) | @@ -1244,7 +1250,8 @@ static const struct adv7180_chip_info adv7280_m_info = { };
static const struct adv7180_chip_info adv7281_info = { - .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2, + .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2 | + ADV7180_FLAG_TEST_PATTERN, .valid_input_mask = BIT(ADV7182_INPUT_CVBS_AIN1) | BIT(ADV7182_INPUT_CVBS_AIN2) | BIT(ADV7182_INPUT_CVBS_AIN7) | @@ -1259,7 +1266,8 @@ static const struct adv7180_chip_info adv7281_info = { };
static const struct adv7180_chip_info adv7281_m_info = { - .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2, + .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2 | + ADV7180_FLAG_TEST_PATTERN, .valid_input_mask = BIT(ADV7182_INPUT_CVBS_AIN1) | BIT(ADV7182_INPUT_CVBS_AIN2) | BIT(ADV7182_INPUT_CVBS_AIN3) | @@ -1279,7 +1287,8 @@ static const struct adv7180_chip_info adv7281_m_info = { };
static const struct adv7180_chip_info adv7281_ma_info = { - .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2, + .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2 | + ADV7180_FLAG_TEST_PATTERN, .valid_input_mask = BIT(ADV7182_INPUT_CVBS_AIN1) | BIT(ADV7182_INPUT_CVBS_AIN2) | BIT(ADV7182_INPUT_CVBS_AIN3) | @@ -1304,7 +1313,7 @@ static const struct adv7180_chip_info adv7281_ma_info = { };
static const struct adv7180_chip_info adv7282_info = { - .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_I2P, + .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_I2P | ADV7180_FLAG_TEST_PATTERN, .valid_input_mask = BIT(ADV7182_INPUT_CVBS_AIN1) | BIT(ADV7182_INPUT_CVBS_AIN2) | BIT(ADV7182_INPUT_CVBS_AIN7) | @@ -1319,7 +1328,8 @@ static const struct adv7180_chip_info adv7282_info = { };
static const struct adv7180_chip_info adv7282_m_info = { - .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2 | ADV7180_FLAG_I2P, + .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2 | ADV7180_FLAG_I2P | + ADV7180_FLAG_TEST_PATTERN, .valid_input_mask = BIT(ADV7182_INPUT_CVBS_AIN1) | BIT(ADV7182_INPUT_CVBS_AIN2) | BIT(ADV7182_INPUT_CVBS_AIN3) |
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nandakumar Edamana nandakumar@nandakumar.co.in
[ Upstream commit 236d3910117e9f97ebf75e511d8bcc950f1a4e5f ]
In `set_kcfg_value_str`, an untrusted string is accessed with the assumption that it will be at least two characters long due to the presence of checks for opening and closing quotes. But the check for the closing quote (value[len - 1] != '"') misses the fact that it could be checking the opening quote itself in case of an invalid input that consists of just the opening quote.
This commit adds an explicit check to make sure the string is at least two characters long.
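A standalone sketch of the strengthened check (the pr_warn() and extern bookkeeping are omitted); the point is that a value consisting of a lone opening quote no longer passes:

/* Sketch: validate that a kconfig string value is a properly quoted string. */
#include <stdbool.h>
#include <string.h>

static bool kcfg_value_is_quoted_string(const char *value)
{
	size_t len;

	if (value[0] != '"')
		return false;

	len = strlen(value);
	/* A lone opening quote has len == 1; value[len - 1] would then be the
	 * same '"' already checked, so require at least two characters. */
	if (len < 2 || value[len - 1] != '"')
		return false;

	return true;
}

With this, an input of just "\"" is rejected instead of the same character satisfying both quote checks.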
Signed-off-by: Nandakumar Edamana nandakumar@nandakumar.co.in Signed-off-by: Andrii Nakryiko andrii@kernel.org Link: https://lore.kernel.org/bpf/20250221210110.3182084-1-nandakumar@nandakumar.c... Signed-off-by: Sasha Levin sashal@kernel.org --- tools/lib/bpf/libbpf.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index a0fb50718daef..98d5e566e0582 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -1751,7 +1751,7 @@ static int set_kcfg_value_str(struct extern_desc *ext, char *ext_val, }
len = strlen(value); - if (value[len - 1] != '"') { + if (len < 2 || value[len - 1] != '"') { pr_warn("extern (kcfg) '%s': invalid string config '%s'\n", ext->name, value); return -EINVAL;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jinliang Zheng alexjlzheng@gmail.com
[ Upstream commit 88f7f56d16f568f19e1a695af34a7f4a6ce537a6 ]
When a bio with REQ_PREFLUSH is submitted to dm, __send_empty_flush() generates a flush_bio with REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC, which causes the flush_bio to be throttled by wbt_wait().
An example from v5.4, similar problem also exists in upstream:
crash> bt 2091206
PID: 2091206  TASK: ffff2050df92a300  CPU: 109  COMMAND: "kworker/u260:0"
 #0 [ffff800084a2f7f0] __switch_to at ffff80004008aeb8
 #1 [ffff800084a2f820] __schedule at ffff800040bfa0c4
 #2 [ffff800084a2f880] schedule at ffff800040bfa4b4
 #3 [ffff800084a2f8a0] io_schedule at ffff800040bfa9c4
 #4 [ffff800084a2f8c0] rq_qos_wait at ffff8000405925bc
 #5 [ffff800084a2f940] wbt_wait at ffff8000405bb3a0
 #6 [ffff800084a2f9a0] __rq_qos_throttle at ffff800040592254
 #7 [ffff800084a2f9c0] blk_mq_make_request at ffff80004057cf38
 #8 [ffff800084a2fa60] generic_make_request at ffff800040570138
 #9 [ffff800084a2fae0] submit_bio at ffff8000405703b4
#10 [ffff800084a2fb50] xlog_write_iclog at ffff800001280834 [xfs]
#11 [ffff800084a2fbb0] xlog_sync at ffff800001280c3c [xfs]
#12 [ffff800084a2fbf0] xlog_state_release_iclog at ffff800001280df4 [xfs]
#13 [ffff800084a2fc10] xlog_write at ffff80000128203c [xfs]
#14 [ffff800084a2fcd0] xlog_cil_push at ffff8000012846dc [xfs]
#15 [ffff800084a2fda0] xlog_cil_push_work at ffff800001284a2c [xfs]
#16 [ffff800084a2fdb0] process_one_work at ffff800040111d08
#17 [ffff800084a2fe00] worker_thread at ffff8000401121cc
#18 [ffff800084a2fe70] kthread at ffff800040118de4
After commit 2def2845cc33 ("xfs: don't allow log IO to be throttled"), the metadata submitted by xlog_write_iclog() should not be throttled. But due to the existence of the dm layer, throttling flush_bio indirectly causes the metadata bio to be throttled.
Fix this by conditionally adding REQ_IDLE to flush_bio.bi_opf, which makes wbt_should_throttle() return false to avoid wbt_wait().
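A minimal sketch of the flag selection described above; the REQ_* values here are illustrative placeholders rather than the real blk_opf_t encoding:

/* Sketch: only mark the internal flush bio REQ_IDLE when the original bio was
 * itself (REQ_IDLE | REQ_SYNC), so wbt leaves it alone. */
#include <stdint.h>

#define REQ_OP_WRITE	(1u << 0)	/* placeholder flag values */
#define REQ_PREFLUSH	(1u << 1)
#define REQ_SYNC	(1u << 2)
#define REQ_IDLE	(1u << 3)

static uint32_t flush_bio_opf(uint32_t orig_bi_opf)
{
	uint32_t opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC;

	if ((orig_bi_opf & (REQ_IDLE | REQ_SYNC)) == (REQ_IDLE | REQ_SYNC))
		opf |= REQ_IDLE;

	return opf;
}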
Signed-off-by: Jinliang Zheng alexjlzheng@tencent.com Reviewed-by: Tianxiang Peng txpeng@tencent.com Reviewed-by: Hao Peng flyingpeng@tencent.com Signed-off-by: Mikulas Patocka mpatocka@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/md/dm.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c index f70129bc703b8..4767265793de7 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -1523,14 +1523,18 @@ static void __send_empty_flush(struct clone_info *ci) { struct dm_table *t = ci->map; struct bio flush_bio; + blk_opf_t opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC; + + if ((ci->io->orig_bio->bi_opf & (REQ_IDLE | REQ_SYNC)) == + (REQ_IDLE | REQ_SYNC)) + opf |= REQ_IDLE;
/* * Use an on-stack bio for this, it's safe since we don't * need to reference it after submit. It's just used as * the basis for the clone(s). */ - bio_init(&flush_bio, ci->io->md->disk->part0, NULL, 0, - REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC); + bio_init(&flush_bio, ci->io->md->disk->part0, NULL, 0, opf);
ci->bio = &flush_bio; ci->sector_count = 0;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Balbir Singh balbirs@nvidia.com
[ Upstream commit 7ffb791423c7c518269a9aad35039ef824a40adb ]
When CONFIG_PCI_P2PDMA=y (which is basically enabled on all large x86 distros), it maps the PFNs via a ZONE_DEVICE mapping using devm_memremap_pages(). The mapped virtual address range corresponds to the pci_resource_start() of the BAR address and a size corresponding to the BAR length.
When KASLR is enabled, the direct map range of the kernel is reduced to the size of physical memory plus additional padding. If the BAR address is beyond this limit, PCI peer to peer DMA mappings fail.
Fix this by not shrinking the size of the direct map when CONFIG_PCI_P2PDMA=y.
This reduces the total available entropy, but it's better than the current workaround of having to disable KASLR completely.
[ mingo: Clarified the changelog to point out the broad impact ... ]
Signed-off-by: Balbir Singh balbirs@nvidia.com Signed-off-by: Ingo Molnar mingo@kernel.org Reviewed-by: Kees Cook kees@kernel.org Acked-by: Bjorn Helgaas bhelgaas@google.com # drivers/pci/Kconfig Cc: Linus Torvalds torvalds@linux-foundation.org Cc: Peter Zijlstra peterz@infradead.org Cc: Andy Lutomirski luto@kernel.org Link: https://lore.kernel.org/lkml/20250206023201.1481957-1-balbirs@nvidia.com/ Link: https://lore.kernel.org/r/20250206234234.1912585-1-balbirs@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/mm/kaslr.c | 10 ++++++++-- drivers/pci/Kconfig | 6 ++++++ 2 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c index 230f1dee4f095..e0b0ec0f82457 100644 --- a/arch/x86/mm/kaslr.c +++ b/arch/x86/mm/kaslr.c @@ -109,8 +109,14 @@ void __init kernel_randomize_memory(void) memory_tb = DIV_ROUND_UP(max_pfn << PAGE_SHIFT, 1UL << TB_SHIFT) + CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING;
- /* Adapt physical memory region size based on available memory */ - if (memory_tb < kaslr_regions[0].size_tb) + /* + * Adapt physical memory region size based on available memory, + * except when CONFIG_PCI_P2PDMA is enabled. P2PDMA exposes the + * device BAR space assuming the direct map space is large enough + * for creating a ZONE_DEVICE mapping in the direct map corresponding + * to the physical BAR address. + */ + if (!IS_ENABLED(CONFIG_PCI_P2PDMA) && (memory_tb < kaslr_regions[0].size_tb)) kaslr_regions[0].size_tb = memory_tb;
/* diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig index 55c028af4bd94..c4f9ac5c00c12 100644 --- a/drivers/pci/Kconfig +++ b/drivers/pci/Kconfig @@ -184,6 +184,12 @@ config PCI_P2PDMA P2P DMA transactions must be between devices behind the same root port.
+ Enabling this option will reduce the entropy of x86 KASLR memory + regions. For example - on a 46 bit system, the entropy goes down + from 16 bits to 15 bits. The actual reduction in entropy depends + on the physical address bits, on processor features, kernel config + (5 level page table) and physical memory present on the system. + If unsure, say N.
config PCI_LABEL
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bibo Mao maobibo@loongson.cn
[ Upstream commit 756276ce78d5624dc814f9d99f7d16c8fd51076e ]
On MIPS systems, most syscall function names begin with the prefix sys_. Some syscalls, such as clone and fork, are special: their function names begin with __sys_, since scratch registers need to be saved on the stack when these system calls happen.
With the ftrace syscall tracing method, system call functions are declared with SYSCALL_DEFINEx, and the metadata for each syscall symbol name begins with sys_. Here the MIPS-specific function arch_syscall_match_sym_name() is used to compare the function names in sys_call_table[] against the syscall symbol metadata.
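The matching rule boils down to a short string comparison; a hedged standalone version (assuming, as the patch does, that the metadata name always carries the sys_ prefix):

/* Sketch: a symbol "__sys_clone" should match the metadata name "sys_clone". */
#include <stdbool.h>
#include <string.h>

static bool syscall_sym_matches(const char *sym, const char *name)
{
	return !strcmp(sym, name) ||
	       (!strncmp(sym, "__sys_", 6) && !strcmp(sym + 6, name + 4));
}

For example, syscall_sym_matches("__sys_clone", "sys_clone") and syscall_sym_matches("sys_read", "sys_read") both return true.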
Signed-off-by: Bibo Mao maobibo@loongson.cn Signed-off-by: Thomas Bogendoerfer tsbogend@alpha.franken.de Signed-off-by: Sasha Levin sashal@kernel.org --- arch/mips/include/asm/ftrace.h | 16 ++++++++++++++++ 1 file changed, 16 insertions(+)
diff --git a/arch/mips/include/asm/ftrace.h b/arch/mips/include/asm/ftrace.h index db497a8167da2..e3212f44446fa 100644 --- a/arch/mips/include/asm/ftrace.h +++ b/arch/mips/include/asm/ftrace.h @@ -87,4 +87,20 @@ struct dyn_arch_ftrace { #endif /* CONFIG_DYNAMIC_FTRACE */ #endif /* __ASSEMBLY__ */ #endif /* CONFIG_FUNCTION_TRACER */ + +#ifdef CONFIG_FTRACE_SYSCALLS +#ifndef __ASSEMBLY__ +/* + * Some syscall entry functions on mips start with "__sys_" (fork and clone, + * for instance). We should also match the sys_ variant with those. + */ +#define ARCH_HAS_SYSCALL_MATCH_SYM_NAME +static inline bool arch_syscall_match_sym_name(const char *sym, + const char *name) +{ + return !strcmp(sym, name) || + (!strncmp(sym, "__sys_", 6) && !strcmp(sym + 6, name + 4)); +} +#endif /* __ASSEMBLY__ */ +#endif /* CONFIG_FTRACE_SYSCALLS */ #endif /* _ASM_MIPS_FTRACE_H */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jason Gunthorpe jgg@nvidia.com
[ Upstream commit 1f7df3a691740a7736bbc99dc4ed536120eb4746 ]
The IOMMU translation for MSI message addresses has been a 2-step process, separated in time:
1) iommu_dma_prepare_msi(): A cookie pointer containing the IOVA address is stored in the MSI descriptor when an MSI interrupt is allocated.
2) iommu_dma_compose_msi_msg(): this cookie pointer is used to compute a translated message address.
This has an inherent lifetime problem for the pointer stored in the cookie that must remain valid between the two steps. However, there is no locking at the irq layer that helps protect the lifetime. Today, this works under the assumption that the iommu domain is not changed while MSI interrupts are being programmed. This is true for normal DMA API users within the kernel, as the iommu domain is attached before the driver is probed and cannot be changed while a driver is attached.
Classic VFIO type1 also prevented changing the iommu domain while VFIO was running as it does not support changing the "container" after starting up.
However, iommufd has improved this so that the iommu domain can be changed during VFIO operation. This potentially allows userspace to directly race VFIO_DEVICE_ATTACH_IOMMUFD_PT (which calls iommu_attach_group()) and VFIO_DEVICE_SET_IRQS (which calls into iommu_dma_compose_msi_msg()).
This potentially causes both the cookie pointer and the unlocked call to iommu_get_domain_for_dev() on the MSI translation path to become UAFs.
Fix the MSI cookie UAF by removing the cookie pointer. The translated IOVA address is already known during iommu_dma_prepare_msi() and cannot change. Thus, it can simply be stored as an integer in the MSI descriptor.
The other UAF related to iommu_get_domain_for_dev() will be addressed in patch "iommu: Make iommu_dma_prepare_msi() into a generic operation" by using the IOMMU group mutex.
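The cookie removal amounts to integer packing: store the IOVA shifted right by the MSI granule, then recombine it with the low bits of the untranslated address when composing the message. A standalone sketch of that arithmetic (the 58/6 field split mirrors the new msi_desc fields, everything else is illustrative):

/* Sketch: pack/unpack the MSI IOVA without keeping a pointer alive. */
#include <stdint.h>

struct msi_iova_sketch {
	uint64_t iova  : 58;	/* IOVA >> shift */
	uint64_t shift : 6;	/* log2 of the MSI granule; 0 means "not set" */
};

static void set_msi_iova(struct msi_iova_sketch *d, uint64_t iova, unsigned int shift)
{
	d->iova = iova >> shift;
	d->shift = shift;
}

static uint64_t compose_msi_addr(const struct msi_iova_sketch *d, uint64_t orig_addr)
{
	uint64_t msi_iova;

	if (!d->shift)
		return orig_addr;	/* no IOMMU translation recorded */

	msi_iova = (uint64_t)d->iova << d->shift;
	/* keep the offset within the granule from the original address */
	return msi_iova | (orig_addr & ((1ull << d->shift) - 1));
}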
Link: https://patch.msgid.link/r/a4f2cd76b9dc1833ee6c1cf325cba57def22231c.17400149... Signed-off-by: Nicolin Chen nicolinc@nvidia.com Reviewed-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Jason Gunthorpe jgg@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/iommu/dma-iommu.c | 28 +++++++++++++--------------- include/linux/msi.h | 33 ++++++++++++--------------------- 2 files changed, 25 insertions(+), 36 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 3fa66dba0a326..cbf9ec320691a 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -1661,7 +1661,7 @@ int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr) static DEFINE_MUTEX(msi_prepare_lock); /* see below */
if (!domain || !domain->iova_cookie) { - desc->iommu_cookie = NULL; + msi_desc_set_iommu_msi_iova(desc, 0, 0); return 0; }
@@ -1673,11 +1673,12 @@ int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr) mutex_lock(&msi_prepare_lock); msi_page = iommu_dma_get_msi_page(dev, msi_addr, domain); mutex_unlock(&msi_prepare_lock); - - msi_desc_set_iommu_cookie(desc, msi_page); - if (!msi_page) return -ENOMEM; + + msi_desc_set_iommu_msi_iova( + desc, msi_page->iova, + ilog2(cookie_msi_granule(domain->iova_cookie))); return 0; }
@@ -1688,18 +1689,15 @@ int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr) */ void iommu_dma_compose_msi_msg(struct msi_desc *desc, struct msi_msg *msg) { - struct device *dev = msi_desc_to_dev(desc); - const struct iommu_domain *domain = iommu_get_domain_for_dev(dev); - const struct iommu_dma_msi_page *msi_page; +#ifdef CONFIG_IRQ_MSI_IOMMU + if (desc->iommu_msi_shift) { + u64 msi_iova = desc->iommu_msi_iova << desc->iommu_msi_shift;
- msi_page = msi_desc_get_iommu_cookie(desc); - - if (!domain || !domain->iova_cookie || WARN_ON(!msi_page)) - return; - - msg->address_hi = upper_32_bits(msi_page->iova); - msg->address_lo &= cookie_msi_granule(domain->iova_cookie) - 1; - msg->address_lo += lower_32_bits(msi_page->iova); + msg->address_hi = upper_32_bits(msi_iova); + msg->address_lo = lower_32_bits(msi_iova) | + (msg->address_lo & ((1 << desc->iommu_msi_shift) - 1)); + } +#endif }
static int iommu_dma_init(void) diff --git a/include/linux/msi.h b/include/linux/msi.h index e5dfb9cf3aa11..1bf8d126f7928 100644 --- a/include/linux/msi.h +++ b/include/linux/msi.h @@ -129,6 +129,10 @@ struct pci_msi_desc { * @dev: Pointer to the device which uses this descriptor * @msg: The last set MSI message cached for reuse * @affinity: Optional pointer to a cpu affinity mask for this descriptor + * @iommu_msi_iova: Optional shifted IOVA from the IOMMU to override the msi_addr. + * Only used if iommu_msi_shift != 0 + * @iommu_msi_shift: Indicates how many bits of the original address should be + * preserved when using iommu_msi_iova. * @sysfs_attr: Pointer to sysfs device attribute * * @write_msi_msg: Callback that may be called when the MSI message @@ -146,7 +150,8 @@ struct msi_desc { struct msi_msg msg; struct irq_affinity_desc *affinity; #ifdef CONFIG_IRQ_MSI_IOMMU - const void *iommu_cookie; + u64 iommu_msi_iova : 58; + u64 iommu_msi_shift : 6; #endif #ifdef CONFIG_SYSFS struct device_attribute *sysfs_attrs; @@ -214,28 +219,14 @@ struct msi_desc *msi_next_desc(struct device *dev, enum msi_desc_filter filter);
#define msi_desc_to_dev(desc) ((desc)->dev)
-#ifdef CONFIG_IRQ_MSI_IOMMU -static inline const void *msi_desc_get_iommu_cookie(struct msi_desc *desc) -{ - return desc->iommu_cookie; -} - -static inline void msi_desc_set_iommu_cookie(struct msi_desc *desc, - const void *iommu_cookie) -{ - desc->iommu_cookie = iommu_cookie; -} -#else -static inline const void *msi_desc_get_iommu_cookie(struct msi_desc *desc) +static inline void msi_desc_set_iommu_msi_iova(struct msi_desc *desc, u64 msi_iova, + unsigned int msi_shift) { - return NULL; -} - -static inline void msi_desc_set_iommu_cookie(struct msi_desc *desc, - const void *iommu_cookie) -{ -} +#ifdef CONFIG_IRQ_MSI_IOMMU + desc->iommu_msi_iova = msi_iova >> msi_shift; + desc->iommu_msi_shift = msi_shift; #endif +}
#ifdef CONFIG_PCI_MSI struct pci_dev *msi_desc_to_pci_dev(struct msi_desc *desc);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paul Burton paulburton@kernel.org
[ Upstream commit 00a134fc2bb4a5f8fada58cf7ff4259149691d64 ]
The pm-cps code has up until now used per-CPU variables indexed by core, rather than CPU number, in order to share data amongst sibling CPUs (ie. VPs/threads in a core). This works fine for single cluster systems, but with multi-cluster systems a core number is no longer unique in the system, leading to sharing between CPUs that are not actually siblings.
Avoid this issue by using per-CPU variables as they are more generally used - ie. access them using CPU numbers rather than core numbers. Sharing between siblings is then accomplished by: - Assigning the same pointer to entries for each sibling CPU for the nc_asm_enter & ready_count variables, which allow this by virtue of being per-CPU pointers.
- Indexing by the first CPU set in a CPUs cpu_sibling_map in the case of pm_barrier, for which we can't use the previous approach because the per-CPU variable is not a pointer.
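A simplified standalone sketch of that sharing scheme, with plain arrays standing in for per-CPU variables and an illustrative sibling layout:

/* Sketch: all CPUs in one core share the same ready_count allocation, and the
 * non-pointer barrier is always indexed by the first CPU of the sibling mask. */
#include <stdlib.h>

#define NR_CPUS 8

static unsigned int *ready_count[NR_CPUS];	/* per-CPU pointer */
static int pm_barrier[NR_CPUS];			/* per-CPU value */

static int online_cpu(int cpu, const int *siblings, int nr_siblings, int first_sibling)
{
	int i;

	if (!ready_count[cpu]) {
		unsigned int *rc = calloc(1, sizeof(*rc));

		if (!rc)
			return -1;

		/* point every sibling's per-CPU slot at the same allocation */
		for (i = 0; i < nr_siblings; i++)
			ready_count[siblings[i]] = rc;
	}

	/* pm_barrier is not a pointer, so all siblings agree to use the slot
	 * belonging to the first CPU in their sibling mask */
	pm_barrier[first_sibling] = 0;
	return 0;
}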
Signed-off-by: Paul Burton paulburton@kernel.org Signed-off-by: Dragan Mladjenovic dragan.mladjenovic@syrmia.com Signed-off-by: Aleksandar Rikalo arikalo@gmail.com Tested-by: Serge Semin fancer.lancer@gmail.com Tested-by: Gregory CLEMENT gregory.clement@bootlin.com Signed-off-by: Thomas Bogendoerfer tsbogend@alpha.franken.de Signed-off-by: Sasha Levin sashal@kernel.org --- arch/mips/kernel/pm-cps.c | 30 +++++++++++++++++------------- 1 file changed, 17 insertions(+), 13 deletions(-)
diff --git a/arch/mips/kernel/pm-cps.c b/arch/mips/kernel/pm-cps.c index 9bf60d7d44d36..a7bcf2b814c86 100644 --- a/arch/mips/kernel/pm-cps.c +++ b/arch/mips/kernel/pm-cps.c @@ -56,10 +56,7 @@ static DEFINE_PER_CPU_ALIGNED(u32*, ready_count); /* Indicates online CPUs coupled with the current CPU */ static DEFINE_PER_CPU_ALIGNED(cpumask_t, online_coupled);
-/* - * Used to synchronize entry to deep idle states. Actually per-core rather - * than per-CPU. - */ +/* Used to synchronize entry to deep idle states */ static DEFINE_PER_CPU_ALIGNED(atomic_t, pm_barrier);
/* Saved CPU state across the CPS_PM_POWER_GATED state */ @@ -118,9 +115,10 @@ int cps_pm_enter_state(enum cps_pm_state state) cps_nc_entry_fn entry; struct core_boot_config *core_cfg; struct vpe_boot_config *vpe_cfg; + atomic_t *barrier;
/* Check that there is an entry function for this state */ - entry = per_cpu(nc_asm_enter, core)[state]; + entry = per_cpu(nc_asm_enter, cpu)[state]; if (!entry) return -EINVAL;
@@ -156,7 +154,7 @@ int cps_pm_enter_state(enum cps_pm_state state) smp_mb__after_atomic();
/* Create a non-coherent mapping of the core ready_count */ - core_ready_count = per_cpu(ready_count, core); + core_ready_count = per_cpu(ready_count, cpu); nc_addr = kmap_noncoherent(virt_to_page(core_ready_count), (unsigned long)core_ready_count); nc_addr += ((unsigned long)core_ready_count & ~PAGE_MASK); @@ -164,7 +162,8 @@ int cps_pm_enter_state(enum cps_pm_state state)
/* Ensure ready_count is zero-initialised before the assembly runs */ WRITE_ONCE(*nc_core_ready_count, 0); - coupled_barrier(&per_cpu(pm_barrier, core), online); + barrier = &per_cpu(pm_barrier, cpumask_first(&cpu_sibling_map[cpu])); + coupled_barrier(barrier, online);
/* Run the generated entry code */ left = entry(online, nc_core_ready_count); @@ -635,12 +634,14 @@ static void *cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
static int cps_pm_online_cpu(unsigned int cpu) { - enum cps_pm_state state; - unsigned core = cpu_core(&cpu_data[cpu]); + unsigned int sibling, core; void *entry_fn, *core_rc; + enum cps_pm_state state; + + core = cpu_core(&cpu_data[cpu]);
for (state = CPS_PM_NC_WAIT; state < CPS_PM_STATE_COUNT; state++) { - if (per_cpu(nc_asm_enter, core)[state]) + if (per_cpu(nc_asm_enter, cpu)[state]) continue; if (!test_bit(state, state_support)) continue; @@ -652,16 +653,19 @@ static int cps_pm_online_cpu(unsigned int cpu) clear_bit(state, state_support); }
- per_cpu(nc_asm_enter, core)[state] = entry_fn; + for_each_cpu(sibling, &cpu_sibling_map[cpu]) + per_cpu(nc_asm_enter, sibling)[state] = entry_fn; }
- if (!per_cpu(ready_count, core)) { + if (!per_cpu(ready_count, cpu)) { core_rc = kmalloc(sizeof(u32), GFP_KERNEL); if (!core_rc) { pr_err("Failed allocate core %u ready_count\n", core); return -ENOMEM; } - per_cpu(ready_count, core) = core_rc; + + for_each_cpu(sibling, &cpu_sibling_map[cpu]) + per_cpu(ready_count, sibling) = core_rc; }
return 0;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paul Burton paulburton@kernel.org
[ Upstream commit 3128b0a2e0cf6e07aa78e5f8cf7dd9cd59dc8174 ]
In multi-cluster MIPS I6500 systems there is a GIC in each cluster, each with its own counter. When a cluster powers up the counter will be stopped, with the COUNTSTOP bit set in the GIC_CONFIG register.
In single cluster systems, it has been fine to clear COUNTSTOP once in gic_clocksource_of_init() to start the counter. In multi-cluster systems, this will only have started the counter in the boot cluster, and any CPUs in other clusters will find their counter stopped which will break the GIC clock_event_device.
Resolve this by having CPUs clear the COUNTSTOP bit when they come online, using the existing gic_starting_cpu() CPU hotplug callback. This will allow CPUs in secondary clusters to ensure that the cluster's GIC counter is running as expected.
Signed-off-by: Paul Burton paulburton@kernel.org Signed-off-by: Chao-ying Fu cfu@wavecomp.com Signed-off-by: Dragan Mladjenovic dragan.mladjenovic@syrmia.com Signed-off-by: Aleksandar Rikalo arikalo@gmail.com Reviewed-by: Philippe Mathieu-Daudé philmd@linaro.org Tested-by: Serge Semin fancer.lancer@gmail.com Tested-by: Gregory CLEMENT gregory.clement@bootlin.com Acked-by: Daniel Lezcano daniel.lezcano@linaro.org Signed-off-by: Thomas Bogendoerfer tsbogend@alpha.franken.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/clocksource/mips-gic-timer.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/clocksource/mips-gic-timer.c b/drivers/clocksource/mips-gic-timer.c index b3ae38f367205..39c70b5ac44c9 100644 --- a/drivers/clocksource/mips-gic-timer.c +++ b/drivers/clocksource/mips-gic-timer.c @@ -114,6 +114,9 @@ static void gic_update_frequency(void *data)
static int gic_starting_cpu(unsigned int cpu) { + /* Ensure the GIC counter is running */ + clear_gic_config(GIC_CONFIG_COUNTSTOP); + gic_clockevent_cpu_init(cpu, this_cpu_ptr(&gic_clockevent_device)); return 0; } @@ -248,9 +251,6 @@ static int __init gic_clocksource_of_init(struct device_node *node) pr_warn("Unable to register clock notifier\n"); }
- /* And finally start the counter */ - clear_gic_config(GIC_CONFIG_COUNTSTOP); - /* * It's safe to use the MIPS GIC timer as a sched clock source only if * its ticks are stable, which is true on either the platforms with
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Shivasharan S shivasharan.srikanteshwara@broadcom.com
[ Upstream commit 5612d6d51ed2634a033c95de2edec7449409cbb9 ]
When an IOCTL times out and the driver issues a target reset, if the firmware fails the task management request, escalate the recovery by issuing a diag reset to the controller.
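In outline, the recovery path becomes: issue the target reset, and if the task management request does not return SUCCESS, fall back to a controller diag reset. A hedged sketch with placeholder helper names (not the mpt3sas API):

/* Sketch: escalate from target reset to controller hard reset on TM failure. */
enum tm_status { TM_SUCCESS, TM_FAILED };

static enum tm_status issue_target_reset(unsigned int handle)
{
	(void)handle;
	return TM_FAILED;	/* placeholder for the locked TM request */
}

static void issue_controller_diag_reset(void)
{
	/* placeholder for the FORCE_BIG_HAMMER hard reset path */
}

static void recover_timed_out_ioctl(unsigned int handle)
{
	if (issue_target_reset(handle) != TM_SUCCESS)
		issue_controller_diag_reset();
}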
Signed-off-by: Shivasharan S shivasharan.srikanteshwara@broadcom.com Link: https://lore.kernel.org/r/1739410016-27503-5-git-send-email-shivasharan.srik... Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/mpt3sas/mpt3sas_ctl.c | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/mpt3sas/mpt3sas_ctl.c b/drivers/scsi/mpt3sas/mpt3sas_ctl.c index fc5af6a5114e3..863503e8a4d1a 100644 --- a/drivers/scsi/mpt3sas/mpt3sas_ctl.c +++ b/drivers/scsi/mpt3sas/mpt3sas_ctl.c @@ -679,6 +679,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg, size_t data_in_sz = 0; long ret; u16 device_handle = MPT3SAS_INVALID_DEVICE_HANDLE; + int tm_ret;
issue_reset = 0;
@@ -1120,18 +1121,25 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg, if (pcie_device && (!ioc->tm_custom_handling) && (!(mpt3sas_scsih_is_pcie_scsi_device( pcie_device->device_info)))) - mpt3sas_scsih_issue_locked_tm(ioc, + tm_ret = mpt3sas_scsih_issue_locked_tm(ioc, le16_to_cpu(mpi_request->FunctionDependent1), 0, 0, 0, MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 0, 0, pcie_device->reset_timeout, MPI26_SCSITASKMGMT_MSGFLAGS_PROTOCOL_LVL_RST_PCIE); else - mpt3sas_scsih_issue_locked_tm(ioc, + tm_ret = mpt3sas_scsih_issue_locked_tm(ioc, le16_to_cpu(mpi_request->FunctionDependent1), 0, 0, 0, MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 0, 0, 30, MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET); + + if (tm_ret != SUCCESS) { + ioc_info(ioc, + "target reset failed, issue hard reset: handle (0x%04x)\n", + le16_to_cpu(mpi_request->FunctionDependent1)); + mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER); + } } else mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bitterblue Smith rtl8821cerfe2@gmail.com
[ Upstream commit 6be7544d19fcfcb729495e793bc6181f85bb8949 ]
Set the MCS maps and the highest rates according to the number of spatial streams the chip has. For RTL8814AU that is 3.
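The per-stream map construction is plain bit arithmetic; a standalone sketch, assuming the usual 802.11 VHT field encoding (2 = MCS 0-9 supported, 3 = not supported) and the 390 Mbps single-stream long-GI MCS9 rate:

/* Sketch: build the VHT MCS map and highest rate from the number of streams. */
#include <stdint.h>

#define VHT_MCS_SUPPORT_0_9	2	/* per-stream field value: MCS 0-9 */
#define VHT_MCS_NOT_SUPPORTED	3	/* per-stream field value: no support */

static void build_vht_mcs(unsigned int nss, uint16_t *mcs_map, uint16_t *highest_mbps)
{
	uint16_t map = 0;

	for (int i = 0; i < 8; i++) {
		if ((unsigned int)i < nss)
			map |= VHT_MCS_SUPPORT_0_9 << (i * 2);
		else
			map |= VHT_MCS_NOT_SUPPORTED << (i * 2);
	}

	*mcs_map = map;
	*highest_mbps = 390 * nss;	/* long-GI MCS9 rate scales with streams */
}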
Signed-off-by: Bitterblue Smith rtl8821cerfe2@gmail.com Acked-by: Ping-Ke Shih pkshih@realtek.com Signed-off-by: Ping-Ke Shih pkshih@realtek.com Link: https://patch.msgid.link/e86aa009-b5bf-4b3a-8112-ea5e3cd49465@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/realtek/rtw88/main.c | 23 +++++++++-------------- 1 file changed, 9 insertions(+), 14 deletions(-)
diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c index 7c390c2c608d8..8a47cfb358b16 100644 --- a/drivers/net/wireless/realtek/rtw88/main.c +++ b/drivers/net/wireless/realtek/rtw88/main.c @@ -1552,8 +1552,9 @@ static void rtw_init_vht_cap(struct rtw_dev *rtwdev, struct ieee80211_sta_vht_cap *vht_cap) { struct rtw_efuse *efuse = &rtwdev->efuse; - u16 mcs_map; + u16 mcs_map = 0; __le16 highest; + int i;
if (efuse->hw_cap.ptcl != EFUSE_HW_CAP_IGNORE && efuse->hw_cap.ptcl != EFUSE_HW_CAP_PTCL_VHT) @@ -1576,21 +1577,15 @@ static void rtw_init_vht_cap(struct rtw_dev *rtwdev, if (rtw_chip_has_rx_ldpc(rtwdev)) vht_cap->cap |= IEEE80211_VHT_CAP_RXLDPC;
- mcs_map = IEEE80211_VHT_MCS_SUPPORT_0_9 << 0 | - IEEE80211_VHT_MCS_NOT_SUPPORTED << 4 | - IEEE80211_VHT_MCS_NOT_SUPPORTED << 6 | - IEEE80211_VHT_MCS_NOT_SUPPORTED << 8 | - IEEE80211_VHT_MCS_NOT_SUPPORTED << 10 | - IEEE80211_VHT_MCS_NOT_SUPPORTED << 12 | - IEEE80211_VHT_MCS_NOT_SUPPORTED << 14; - if (efuse->hw_cap.nss > 1) { - highest = cpu_to_le16(780); - mcs_map |= IEEE80211_VHT_MCS_SUPPORT_0_9 << 2; - } else { - highest = cpu_to_le16(390); - mcs_map |= IEEE80211_VHT_MCS_NOT_SUPPORTED << 2; + for (i = 0; i < 8; i++) { + if (i < efuse->hw_cap.nss) + mcs_map |= IEEE80211_VHT_MCS_SUPPORT_0_9 << (i * 2); + else + mcs_map |= IEEE80211_VHT_MCS_NOT_SUPPORTED << (i * 2); }
+ highest = cpu_to_le16(390 * efuse->hw_cap.nss); + vht_cap->vht_mcs.rx_mcs_map = cpu_to_le16(mcs_map); vht_cap->vht_mcs.tx_mcs_map = cpu_to_le16(mcs_map); vht_cap->vht_mcs.rx_highest = highest;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bitterblue Smith rtl8821cerfe2@gmail.com
[ Upstream commit c7eea1ba05ca5b0dbf77a27cf2e1e6e2fb3c0043 ]
Set the RX mask and the highest RX rate according to the number of spatial streams the chip can receive. For RTL8814AU that is 3.
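A similar standalone sketch for the HT case, assuming the usual rx_mask layout (one byte of MCS bits per stream, MCS 32 in byte 4) and 150 Mbps per stream as the 40 MHz short-GI peak:

/* Sketch: fill the HT RX MCS mask from the stream count. */
#include <stdint.h>
#include <string.h>

static void build_ht_rx_mcs(unsigned int nss, uint8_t rx_mask[10], uint16_t *highest_mbps)
{
	memset(rx_mask, 0, 10);
	for (unsigned int i = 0; i < nss; i++)
		rx_mask[i] = 0xFF;	/* MCS (8*i)..(8*i + 7) supported */
	rx_mask[4] = 0x01;		/* MCS 32 */
	*highest_mbps = 150 * nss;
}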
Signed-off-by: Bitterblue Smith rtl8821cerfe2@gmail.com Acked-by: Ping-Ke Shih pkshih@realtek.com Signed-off-by: Ping-Ke Shih pkshih@realtek.com Link: https://patch.msgid.link/4e786f50-ed1c-4387-8b28-e6ff00e35e81@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/realtek/rtw88/main.c | 17 ++++++----------- 1 file changed, 6 insertions(+), 11 deletions(-)
diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c index 8a47cfb358b16..0a913cf6a615b 100644 --- a/drivers/net/wireless/realtek/rtw88/main.c +++ b/drivers/net/wireless/realtek/rtw88/main.c @@ -1516,6 +1516,7 @@ static void rtw_init_ht_cap(struct rtw_dev *rtwdev, { const struct rtw_chip_info *chip = rtwdev->chip; struct rtw_efuse *efuse = &rtwdev->efuse; + int i;
ht_cap->ht_supported = true; ht_cap->cap = 0; @@ -1535,17 +1536,11 @@ static void rtw_init_ht_cap(struct rtw_dev *rtwdev, ht_cap->ampdu_factor = IEEE80211_HT_MAX_AMPDU_64K; ht_cap->ampdu_density = chip->ampdu_density; ht_cap->mcs.tx_params = IEEE80211_HT_MCS_TX_DEFINED; - if (efuse->hw_cap.nss > 1) { - ht_cap->mcs.rx_mask[0] = 0xFF; - ht_cap->mcs.rx_mask[1] = 0xFF; - ht_cap->mcs.rx_mask[4] = 0x01; - ht_cap->mcs.rx_highest = cpu_to_le16(300); - } else { - ht_cap->mcs.rx_mask[0] = 0xFF; - ht_cap->mcs.rx_mask[1] = 0x00; - ht_cap->mcs.rx_mask[4] = 0x01; - ht_cap->mcs.rx_highest = cpu_to_le16(150); - } + + for (i = 0; i < efuse->hw_cap.nss; i++) + ht_cap->mcs.rx_mask[i] = 0xFF; + ht_cap->mcs.rx_mask[4] = 0x01; + ht_cap->mcs.rx_highest = cpu_to_le16(150 * efuse->hw_cap.nss); }
static void rtw_init_vht_cap(struct rtw_dev *rtwdev,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bitterblue Smith rtl8821cerfe2@gmail.com
[ Upstream commit 86d04f8f991a0509e318fe886d5a1cf795736c7d ]
This function translates the rate number reported by the hardware into something mac80211 can understand. It was ignoring the 3SS and 4SS HT rates. Translate them too.
Also set *nss to 0 for the HT rates, just to make sure it's initialised.
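A minimal sketch of the HT branch after the change; the descriptor base value is passed in rather than hard-coded, since the real DESC_RATE* enum values are driver-internal:

/* Sketch: the HT branch now spans MCS 0-31 (up to 4 spatial streams) and
 * explicitly zeroes *nss, since for HT mac80211 derives the stream count
 * from the MCS index itself. */
static void ht_desc_to_mcsrate(unsigned int rate, unsigned int desc_ratemcs0,
			       unsigned int *mcs, unsigned int *nss)
{
	if (rate >= desc_ratemcs0 && rate <= desc_ratemcs0 + 31) {
		*nss = 0;
		*mcs = rate - desc_ratemcs0;
	}
}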
Signed-off-by: Bitterblue Smith rtl8821cerfe2@gmail.com Acked-by: Ping-Ke Shih pkshih@realtek.com Signed-off-by: Ping-Ke Shih pkshih@realtek.com Link: https://patch.msgid.link/d0a5a86b-4869-47f6-a5a7-01c0f987cc7f@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/realtek/rtw88/util.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/wireless/realtek/rtw88/util.c b/drivers/net/wireless/realtek/rtw88/util.c index cdfd66a85075a..43cd06aa39b13 100644 --- a/drivers/net/wireless/realtek/rtw88/util.c +++ b/drivers/net/wireless/realtek/rtw88/util.c @@ -101,7 +101,8 @@ void rtw_desc_to_mcsrate(u16 rate, u8 *mcs, u8 *nss) *nss = 4; *mcs = rate - DESC_RATEVHT4SS_MCS0; } else if (rate >= DESC_RATEMCS0 && - rate <= DESC_RATEMCS15) { + rate <= DESC_RATEMCS31) { + *nss = 0; *mcs = rate - DESC_RATEMCS0; } }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ping-Ke Shih pkshih@realtek.com
[ Upstream commit 56e1acaa0f80620b8e2c3410db35b4b975782b0a ]
The error code should be propagated to callers when downloading the firmware header and body. Remove the unnecessary assignment of -1.
Signed-off-by: Ping-Ke Shih pkshih@realtek.com Link: https://patch.msgid.link/20250217064308.43559-4-pkshih@realtek.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/realtek/rtw89/fw.c | 2 -- 1 file changed, 2 deletions(-)
diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c index 1d57a8c5e97df..0f022a5192ac6 100644 --- a/drivers/net/wireless/realtek/rtw89/fw.c +++ b/drivers/net/wireless/realtek/rtw89/fw.c @@ -373,7 +373,6 @@ static int __rtw89_fw_download_hdr(struct rtw89_dev *rtwdev, const u8 *fw, u32 l ret = rtw89_h2c_tx(rtwdev, skb, false); if (ret) { rtw89_err(rtwdev, "failed to send h2c\n"); - ret = -1; goto fail; }
@@ -434,7 +433,6 @@ static int __rtw89_fw_download_main(struct rtw89_dev *rtwdev, ret = rtw89_h2c_tx(rtwdev, skb, true); if (ret) { rtw89_err(rtwdev, "failed to send h2c\n"); - ret = -1; goto fail; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Peter Seiderer ps.report@gmx.net
[ Upstream commit 425e64440ad0a2f03bdaf04be0ae53dededbaa77 ]
Honour the user-given buffer size for the strn_len() calls (otherwise strn_len() will access memory outside of the user-given buffer).
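A standalone sketch of the bound; strn_len() itself is pktgen-internal, so only the length cap is modelled (assuming i <= count, as in the caller):

/* Sketch: bound the token scan by both the destination size and the input left. */
#include <stddef.h>

#define NAME_SIZE 32	/* stand-in for sizeof(name) in pktgen_thread_write() */

static size_t min_size(size_t a, size_t b) { return a < b ? a : b; }

static size_t token_scan_bound(size_t count, size_t i)
{
	/*
	 * Previously the scan length was always NAME_SIZE - 1, which can run
	 * past the 'count' bytes the user actually supplied starting at i.
	 */
	return min_size(NAME_SIZE - 1, count - i);
}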
Signed-off-by: Peter Seiderer ps.report@gmx.net Reviewed-by: Simon Horman horms@kernel.org Link: https://patch.msgid.link/20250219084527.20488-8-ps.report@gmx.net Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/pktgen.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/net/core/pktgen.c b/net/core/pktgen.c index 5917820f92c3d..a2838c15aa9da 100644 --- a/net/core/pktgen.c +++ b/net/core/pktgen.c @@ -1877,8 +1877,8 @@ static ssize_t pktgen_thread_write(struct file *file, i = len;
/* Read variable name */ - - len = strn_len(&user_buffer[i], sizeof(name) - 1); + max = min(sizeof(name) - 1, count - i); + len = strn_len(&user_buffer[i], max); if (len < 0) return len;
@@ -1908,7 +1908,8 @@ static ssize_t pktgen_thread_write(struct file *file, if (!strcmp(name, "add_device")) { char f[32]; memset(f, 0, 32); - len = strn_len(&user_buffer[i], sizeof(f) - 1); + max = min(sizeof(f) - 1, count - i); + len = strn_len(&user_buffer[i], max); if (len < 0) { ret = len; goto out;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Arnd Bergmann arnd@arndb.de
[ Upstream commit c29dfd661fe2f8d1b48c7f00590929c04b25bf40 ]
gcc-14 produces a bogus warning in some configurations:
drivers/edac/ie31200_edac.c: In function 'ie31200_probe1.isra':
drivers/edac/ie31200_edac.c:412:26: error: 'dimm_info' is used uninitialized [-Werror=uninitialized]
  412 | struct dimm_data dimm_info[IE31200_CHANNELS][IE31200_DIMMS_PER_CHANNEL];
      | ^~~~~~~~~
drivers/edac/ie31200_edac.c:412:26: note: 'dimm_info' declared here
  412 | struct dimm_data dimm_info[IE31200_CHANNELS][IE31200_DIMMS_PER_CHANNEL];
      | ^~~~~~~~~
I don't see any way the uninitialized access could really happen here, but I can see why the compiler gets confused by the two loops.
Instead, rework the two nested loops to only read the addr_decode registers and then keep only one instance of the dimm info structure.
[Tony: Qiuxu pointed out that the "populate DIMM info" comment was left behind in the refactor and suggested moving it. I deleted the comment as unnecessary in front of a call to populate_dimm_info(). That seems pretty self-describing.]
Signed-off-by: Arnd Bergmann arnd@arndb.de Acked-by: Jason Baron jbaron@akamai.com Signed-off-by: Tony Luck tony.luck@intel.com Link: https://lore.kernel.org/all/20250122065031.1321015-1-arnd@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/edac/ie31200_edac.c | 28 +++++++++++++--------------- 1 file changed, 13 insertions(+), 15 deletions(-)
diff --git a/drivers/edac/ie31200_edac.c b/drivers/edac/ie31200_edac.c index 56be8ef40f376..e3635fba63b49 100644 --- a/drivers/edac/ie31200_edac.c +++ b/drivers/edac/ie31200_edac.c @@ -405,10 +405,9 @@ static int ie31200_probe1(struct pci_dev *pdev, int dev_idx) int i, j, ret; struct mem_ctl_info *mci = NULL; struct edac_mc_layer layers[2]; - struct dimm_data dimm_info[IE31200_CHANNELS][IE31200_DIMMS_PER_CHANNEL]; void __iomem *window; struct ie31200_priv *priv; - u32 addr_decode, mad_offset; + u32 addr_decode[IE31200_CHANNELS], mad_offset;
/* * Kaby Lake, Coffee Lake seem to work like Skylake. Please re-visit @@ -466,19 +465,10 @@ static int ie31200_probe1(struct pci_dev *pdev, int dev_idx) mad_offset = IE31200_MAD_DIMM_0_OFFSET; }
- /* populate DIMM info */ for (i = 0; i < IE31200_CHANNELS; i++) { - addr_decode = readl(window + mad_offset + + addr_decode[i] = readl(window + mad_offset + (i * 4)); - edac_dbg(0, "addr_decode: 0x%x\n", addr_decode); - for (j = 0; j < IE31200_DIMMS_PER_CHANNEL; j++) { - populate_dimm_info(&dimm_info[i][j], addr_decode, j, - skl); - edac_dbg(0, "size: 0x%x, rank: %d, width: %d\n", - dimm_info[i][j].size, - dimm_info[i][j].dual_rank, - dimm_info[i][j].x16_width); - } + edac_dbg(0, "addr_decode: 0x%x\n", addr_decode[i]); }
/* @@ -489,14 +479,22 @@ static int ie31200_probe1(struct pci_dev *pdev, int dev_idx) */ for (i = 0; i < IE31200_DIMMS_PER_CHANNEL; i++) { for (j = 0; j < IE31200_CHANNELS; j++) { + struct dimm_data dimm_info; struct dimm_info *dimm; unsigned long nr_pages;
- nr_pages = IE31200_PAGES(dimm_info[j][i].size, skl); + populate_dimm_info(&dimm_info, addr_decode[j], i, + skl); + edac_dbg(0, "size: 0x%x, rank: %d, width: %d\n", + dimm_info.size, + dimm_info.dual_rank, + dimm_info.x16_width); + + nr_pages = IE31200_PAGES(dimm_info.size, skl); if (nr_pages == 0) continue;
- if (dimm_info[j][i].dual_rank) { + if (dimm_info.dual_rank) { nr_pages = nr_pages / 2; dimm = edac_get_dimm(mci, (i * 2) + 1, j, 0); dimm->nr_pages = nr_pages;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Frank Li Frank.Li@nxp.com
[ Upstream commit a892ee4cf22a50e1d6988d0464a9a421f3e5db2f ]
Ensure the FIFO is empty before issuing the DAA command to prevent incorrect command data from being sent. Align with other data transfers, such as svc_i3c_master_start_xfer_locked(), which flushes the FIFO before sending a command.
Signed-off-by: Frank Li Frank.Li@nxp.com Reviewed-by: Miquel Raynal miquel.raynal@bootlin.com Link: https://lore.kernel.org/r/20250129162250.3629189-1-Frank.Li@nxp.com Signed-off-by: Alexandre Belloni alexandre.belloni@bootlin.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/i3c/master/svc-i3c-master.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c index cf0550c6e95f0..8cf3aa800d89d 100644 --- a/drivers/i3c/master/svc-i3c-master.c +++ b/drivers/i3c/master/svc-i3c-master.c @@ -833,6 +833,8 @@ static int svc_i3c_master_do_daa_locked(struct svc_i3c_master *master, u32 reg; int ret, i;
+ svc_i3c_master_flush_fifo(master); + while (true) { /* Enter/proceed with DAA */ writel(SVC_I3C_MCTRL_REQUEST_PROC_DAA |
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexis Lothoré alexis.lothore@bootlin.com
[ Upstream commit 1bd2aad57da95f7f2d2bb52f7ad15c0f4993a685 ]
The following splat has been observed on a SAMA5D27 platform using atmel_serial:
BUG: sleeping function called from invalid context at kernel/irq/manage.c:738
in_atomic(): 1, irqs_disabled(): 128, non_block: 0, pid: 27, name: kworker/u5:0
preempt_count: 1, expected: 0
INFO: lockdep is turned off.
irq event stamp: 0
hardirqs last enabled at (0): [<00000000>] 0x0
hardirqs last disabled at (0): [<c01588f0>] copy_process+0x1c4c/0x7bec
softirqs last enabled at (0): [<c0158944>] copy_process+0x1ca0/0x7bec
softirqs last disabled at (0): [<00000000>] 0x0
CPU: 0 UID: 0 PID: 27 Comm: kworker/u5:0 Not tainted 6.13.0-rc7+ #74
Hardware name: Atmel SAMA5
Workqueue: hci0 hci_power_on [bluetooth]
Call trace:
 unwind_backtrace from show_stack+0x18/0x1c
 show_stack from dump_stack_lvl+0x44/0x70
 dump_stack_lvl from __might_resched+0x38c/0x598
 __might_resched from disable_irq+0x1c/0x48
 disable_irq from mctrl_gpio_disable_ms+0x74/0xc0
 mctrl_gpio_disable_ms from atmel_disable_ms.part.0+0x80/0x1f4
 atmel_disable_ms.part.0 from atmel_set_termios+0x764/0x11e8
 atmel_set_termios from uart_change_line_settings+0x15c/0x994
 uart_change_line_settings from uart_set_termios+0x2b0/0x668
 uart_set_termios from tty_set_termios+0x600/0x8ec
 tty_set_termios from ttyport_set_flow_control+0x188/0x1e0
 ttyport_set_flow_control from wilc_setup+0xd0/0x524 [hci_wilc]
 wilc_setup [hci_wilc] from hci_dev_open_sync+0x330/0x203c [bluetooth]
 hci_dev_open_sync [bluetooth] from hci_dev_do_open+0x40/0xb0 [bluetooth]
 hci_dev_do_open [bluetooth] from hci_power_on+0x12c/0x664 [bluetooth]
 hci_power_on [bluetooth] from process_one_work+0x998/0x1a38
 process_one_work from worker_thread+0x6e0/0xfb4
 worker_thread from kthread+0x3d4/0x484
 kthread from ret_from_fork+0x14/0x28
This warning is emitted when trying to toggle, at the highest level, some flow control (with serdev_device_set_flow_control) in a device driver. At the lowest level, the atmel_serial driver uses the serial_mctrl_gpio lib to enable/disable the corresponding IRQs. The warning emitted by CONFIG_DEBUG_ATOMIC_SLEEP is due to disable_irq (called in mctrl_gpio_disable_ms) possibly being called in atomic context (some tty drivers perform modem line configuration in regions protected by the port lock).
Split mctrl_gpio_disable_ms into two different APIs, a non-blocking one and a blocking one. Replace mctrl_gpio_disable_ms calls with the relevant version depending on whether the call is protected by some port lock.
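A sketch of the resulting shape: one internal worker parameterized on whether to wait for in-flight handlers, plus two thin wrappers. disable_irq()/disable_irq_nosync() are the real kernel primitives being wrapped; everything else here is illustrative.

/* Sketch: one worker parameterized on "wait for in-flight handlers or not". */
#include <stdbool.h>

static void disable_one_irq(int irq, bool sync)
{
	(void)irq;
	(void)sync;	/* placeholder for disable_irq() / disable_irq_nosync() */
}

static void disable_ms(const int *irqs, int n, bool sync)
{
	for (int i = 0; i < n; i++)
		if (irqs[i])
			disable_one_irq(irqs[i], sync);
}

/* safe from any context, including under a port lock */
static void disable_ms_no_sync(const int *irqs, int n) { disable_ms(irqs, n, false); }

/* may sleep: waits for pending handlers, so only from sleepable context */
static void disable_ms_sync(const int *irqs, int n) { disable_ms(irqs, n, true); }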
Suggested-by: Jiri Slaby jirislaby@kernel.org Signed-off-by: Alexis Lothoré alexis.lothore@bootlin.com Acked-by: Richard Genoud richard.genoud@bootlin.com Link: https://lore.kernel.org/r/20250217-atomic_sleep_mctrl_serial_gpio-v3-1-59324... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- Documentation/driver-api/serial/driver.rst | 2 +- drivers/tty/serial/8250/8250_port.c | 2 +- drivers/tty/serial/atmel_serial.c | 2 +- drivers/tty/serial/imx.c | 2 +- drivers/tty/serial/serial_mctrl_gpio.c | 34 +++++++++++++++++----- drivers/tty/serial/serial_mctrl_gpio.h | 17 +++++++++-- drivers/tty/serial/sh-sci.c | 2 +- drivers/tty/serial/stm32-usart.c | 2 +- 8 files changed, 47 insertions(+), 16 deletions(-)
diff --git a/Documentation/driver-api/serial/driver.rst b/Documentation/driver-api/serial/driver.rst index 23c6b956cd90d..9436f7c11306b 100644 --- a/Documentation/driver-api/serial/driver.rst +++ b/Documentation/driver-api/serial/driver.rst @@ -100,4 +100,4 @@ Some helpers are provided in order to set/get modem control lines via GPIO. .. kernel-doc:: drivers/tty/serial/serial_mctrl_gpio.c :identifiers: mctrl_gpio_init mctrl_gpio_free mctrl_gpio_to_gpiod mctrl_gpio_set mctrl_gpio_get mctrl_gpio_enable_ms - mctrl_gpio_disable_ms + mctrl_gpio_disable_ms_sync mctrl_gpio_disable_ms_no_sync diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c index 711de54eda989..c1917774e0bb3 100644 --- a/drivers/tty/serial/8250/8250_port.c +++ b/drivers/tty/serial/8250/8250_port.c @@ -1694,7 +1694,7 @@ static void serial8250_disable_ms(struct uart_port *port) if (up->bugs & UART_BUG_NOMSR) return;
- mctrl_gpio_disable_ms(up->gpios); + mctrl_gpio_disable_ms_no_sync(up->gpios);
up->ier &= ~UART_IER_MSI; serial_port_out(port, UART_IER, up->ier); diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c index 6a9310379dc2b..b3463cdd1d4b9 100644 --- a/drivers/tty/serial/atmel_serial.c +++ b/drivers/tty/serial/atmel_serial.c @@ -692,7 +692,7 @@ static void atmel_disable_ms(struct uart_port *port)
atmel_port->ms_irq_enabled = false;
- mctrl_gpio_disable_ms(atmel_port->gpios); + mctrl_gpio_disable_ms_no_sync(atmel_port->gpios);
if (!mctrl_gpio_to_gpiod(atmel_port->gpios, UART_GPIO_CTS)) idr |= ATMEL_US_CTSIC; diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c index 94e0781e00e80..fe22ca009fb3a 100644 --- a/drivers/tty/serial/imx.c +++ b/drivers/tty/serial/imx.c @@ -1586,7 +1586,7 @@ static void imx_uart_shutdown(struct uart_port *port) imx_uart_dma_exit(sport); }
- mctrl_gpio_disable_ms(sport->gpios); + mctrl_gpio_disable_ms_sync(sport->gpios);
spin_lock_irqsave(&sport->port.lock, flags); ucr2 = imx_uart_readl(sport, UCR2); diff --git a/drivers/tty/serial/serial_mctrl_gpio.c b/drivers/tty/serial/serial_mctrl_gpio.c index 7d5aaa8d422b1..d5fb293dd5a93 100644 --- a/drivers/tty/serial/serial_mctrl_gpio.c +++ b/drivers/tty/serial/serial_mctrl_gpio.c @@ -322,11 +322,7 @@ void mctrl_gpio_enable_ms(struct mctrl_gpios *gpios) } EXPORT_SYMBOL_GPL(mctrl_gpio_enable_ms);
-/** - * mctrl_gpio_disable_ms - disable irqs and handling of changes to the ms lines - * @gpios: gpios to disable - */ -void mctrl_gpio_disable_ms(struct mctrl_gpios *gpios) +static void mctrl_gpio_disable_ms(struct mctrl_gpios *gpios, bool sync) { enum mctrl_gpio_idx i;
@@ -342,10 +338,34 @@ void mctrl_gpio_disable_ms(struct mctrl_gpios *gpios) if (!gpios->irq[i]) continue;
- disable_irq(gpios->irq[i]); + if (sync) + disable_irq(gpios->irq[i]); + else + disable_irq_nosync(gpios->irq[i]); } } -EXPORT_SYMBOL_GPL(mctrl_gpio_disable_ms); + +/** + * mctrl_gpio_disable_ms_sync - disable irqs and handling of changes to the ms + * lines, and wait for any pending IRQ to be processed + * @gpios: gpios to disable + */ +void mctrl_gpio_disable_ms_sync(struct mctrl_gpios *gpios) +{ + mctrl_gpio_disable_ms(gpios, true); +} +EXPORT_SYMBOL_GPL(mctrl_gpio_disable_ms_sync); + +/** + * mctrl_gpio_disable_ms_no_sync - disable irqs and handling of changes to the + * ms lines, and return immediately + * @gpios: gpios to disable + */ +void mctrl_gpio_disable_ms_no_sync(struct mctrl_gpios *gpios) +{ + mctrl_gpio_disable_ms(gpios, false); +} +EXPORT_SYMBOL_GPL(mctrl_gpio_disable_ms_no_sync);
void mctrl_gpio_enable_irq_wake(struct mctrl_gpios *gpios) { diff --git a/drivers/tty/serial/serial_mctrl_gpio.h b/drivers/tty/serial/serial_mctrl_gpio.h index fc76910fb105a..79e97838ebe56 100644 --- a/drivers/tty/serial/serial_mctrl_gpio.h +++ b/drivers/tty/serial/serial_mctrl_gpio.h @@ -87,9 +87,16 @@ void mctrl_gpio_free(struct device *dev, struct mctrl_gpios *gpios); void mctrl_gpio_enable_ms(struct mctrl_gpios *gpios);
/* - * Disable gpio interrupts to report status line changes. + * Disable gpio interrupts to report status line changes, and block until + * any corresponding IRQ is processed */ -void mctrl_gpio_disable_ms(struct mctrl_gpios *gpios); +void mctrl_gpio_disable_ms_sync(struct mctrl_gpios *gpios); + +/* + * Disable gpio interrupts to report status line changes, and return + * immediately + */ +void mctrl_gpio_disable_ms_no_sync(struct mctrl_gpios *gpios);
/* * Enable gpio wakeup interrupts to enable wake up source. @@ -148,7 +155,11 @@ static inline void mctrl_gpio_enable_ms(struct mctrl_gpios *gpios) { }
-static inline void mctrl_gpio_disable_ms(struct mctrl_gpios *gpios) +static inline void mctrl_gpio_disable_ms_sync(struct mctrl_gpios *gpios) +{ +} + +static inline void mctrl_gpio_disable_ms_no_sync(struct mctrl_gpios *gpios) { }
diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c index 6182ae5f6fa1e..e2dfca4c2eff8 100644 --- a/drivers/tty/serial/sh-sci.c +++ b/drivers/tty/serial/sh-sci.c @@ -2182,7 +2182,7 @@ static void sci_shutdown(struct uart_port *port) dev_dbg(port->dev, "%s(%d)\n", __func__, port->line);
s->autorts = false; - mctrl_gpio_disable_ms(to_sci_port(port)->gpios); + mctrl_gpio_disable_ms_sync(to_sci_port(port)->gpios);
spin_lock_irqsave(&port->lock, flags); sci_stop_rx(port); diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c index 7d11511c8c12a..8670bb5042c42 100644 --- a/drivers/tty/serial/stm32-usart.c +++ b/drivers/tty/serial/stm32-usart.c @@ -850,7 +850,7 @@ static void stm32_usart_enable_ms(struct uart_port *port)
static void stm32_usart_disable_ms(struct uart_port *port) { - mctrl_gpio_disable_ms(to_stm32_port(port)->gpios); + mctrl_gpio_disable_ms_sync(to_stm32_port(port)->gpios); }
/* Transmit stop */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michael Margolin mrgolin@amazon.com
[ Upstream commit 486055f5e09df959ad4e3aa4ee75b5c91ddeec2e ]
A single scatter-gather entry is limited by a 32-bit "length" field that is practically 4GB - PAGE_SIZE. This means that even when the memory is physically contiguous, we might need more than one entry to represent it. Additionally, when using dmabuf, the sg_table might originate outside the subsystem and be optimized for other needs.
For instance, an SGT for 16GB of contiguous GPU memory might look like this (a real-life example):
dma_address 34401400000, length fffff000
dma_address 345013ff000, length fffff000
dma_address 346013fe000, length fffff000
dma_address 347013fd000, length fffff000
dma_address 348013fc000, length 4000
Since ib_umem_find_best_pgsz() works within SG entries, in the above case we end up with the worst possible 4KB page size.
Fix this by taking into consideration only the alignment of addresses at real discontinuity points, rather than treating every SG entry boundary as such, and adjust the page iterator to correctly handle pages that cross SG entries.
There is currently an assumption that drivers do not ask for pages bigger than the maximal DMA size supported by their devices.
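As a rough userspace sketch of the new idea (an illustration only, not the kernel implementation, and assuming for simplicity that the VA equals the first DMA address): note that 0x34401400000 + 0xfffff000 = 0x345013ff000, so the five entries above describe one physically contiguous region, and only the first start address constrains the page size:

    #include <stdint.h>

    struct seg { uint64_t dma; uint64_t len; };

    static uint64_t best_pgsz(const struct seg *sg, int n, uint64_t va)
    {
            uint64_t mask = 0, curr_base = ~0ULL, curr_len = 0;

            for (int i = 0; i < n; i++) {
                    if (curr_base + curr_len != sg[i].dma) {
                            /* real discontinuity: its start constrains the mask */
                            curr_base = sg[i].dma;
                            curr_len = 0;
                            mask |= curr_base ^ va;
                            /* every chunk but the first must start page aligned */
                            if (i)
                                    mask |= va;
                    }
                    curr_len += sg[i].len;
                    va += sg[i].len;
            }
            /* any page size up to the lowest set bit of mask works; 0 = unconstrained */
            return mask ? 1ULL << __builtin_ctzll(mask) : 0;
    }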
Reviewed-by: Firas Jahjah firasj@amazon.com Reviewed-by: Yonatan Nachum ynachum@amazon.com Signed-off-by: Michael Margolin mrgolin@amazon.com Link: https://patch.msgid.link/20250217141623.12428-1-mrgolin@amazon.com Signed-off-by: Leon Romanovsky leon@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/core/umem.c | 36 ++++++++++++++++++++++++--------- drivers/infiniband/core/verbs.c | 11 +++++----- 2 files changed, 32 insertions(+), 15 deletions(-)
diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c index 8ce569bf7525e..1d154055a335b 100644 --- a/drivers/infiniband/core/umem.c +++ b/drivers/infiniband/core/umem.c @@ -80,9 +80,12 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem, unsigned long pgsz_bitmap, unsigned long virt) { - struct scatterlist *sg; + unsigned long curr_len = 0; + dma_addr_t curr_base = ~0; unsigned long va, pgoff; + struct scatterlist *sg; dma_addr_t mask; + dma_addr_t end; int i;
umem->iova = va = virt; @@ -107,17 +110,30 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem, pgoff = umem->address & ~PAGE_MASK;
for_each_sgtable_dma_sg(&umem->sgt_append.sgt, sg, i) { - /* Walk SGL and reduce max page size if VA/PA bits differ - * for any address. + /* If the current entry is physically contiguous with the previous + * one, no need to take its start addresses into consideration. */ - mask |= (sg_dma_address(sg) + pgoff) ^ va; + if (check_add_overflow(curr_base, curr_len, &end) || + end != sg_dma_address(sg)) { + + curr_base = sg_dma_address(sg); + curr_len = 0; + + /* Reduce max page size if VA/PA bits differ */ + mask |= (curr_base + pgoff) ^ va; + + /* The alignment of any VA matching a discontinuity point + * in the physical memory sets the maximum possible page + * size as this must be a starting point of a new page that + * needs to be aligned. + */ + if (i != 0) + mask |= va; + } + + curr_len += sg_dma_len(sg); va += sg_dma_len(sg) - pgoff; - /* Except for the last entry, the ending iova alignment sets - * the maximum possible page size as the low bits of the iova - * must be zero when starting the next chunk. - */ - if (i != (umem->sgt_append.sgt.nents - 1)) - mask |= va; + pgoff = 0; }
diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index b99b3cc283b65..97a116960f317 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -2959,22 +2959,23 @@ EXPORT_SYMBOL(__rdma_block_iter_start); bool __rdma_block_iter_next(struct ib_block_iter *biter) { unsigned int block_offset; - unsigned int sg_delta; + unsigned int delta;
if (!biter->__sg_nents || !biter->__sg) return false;
biter->__dma_addr = sg_dma_address(biter->__sg) + biter->__sg_advance; block_offset = biter->__dma_addr & (BIT_ULL(biter->__pg_bit) - 1); - sg_delta = BIT_ULL(biter->__pg_bit) - block_offset; + delta = BIT_ULL(biter->__pg_bit) - block_offset;
- if (sg_dma_len(biter->__sg) - biter->__sg_advance > sg_delta) { - biter->__sg_advance += sg_delta; - } else { + while (biter->__sg_nents && biter->__sg && + sg_dma_len(biter->__sg) - biter->__sg_advance <= delta) { + delta -= sg_dma_len(biter->__sg) - biter->__sg_advance; biter->__sg_advance = 0; biter->__sg = sg_next(biter->__sg); biter->__sg_nents--; } + biter->__sg_advance += delta;
return true; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ahmad Fatoum a.fatoum@pengutronix.de
[ Upstream commit 6568cb40e73163fa25e2779f7234b169b2e1a32e ]
Starting with commit c141ecc3cecd7 ("of: Warn when of_property_read_bool() is used on non-boolean properties"), probing the gpcv2 device on i.MX8M SoCs leads to warnings when LOCKDEP is enabled.
Fix this by checking property presence with of_property_present as intended.
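A minimal sketch of the distinction, with a made-up device/node pointer and property names chosen only for illustration: of_property_read_bool() is for flag-style properties whose mere presence means "true", while of_property_present() only checks that a value-carrying property exists:

    /* a true boolean (flag) property: present means true */
    bool can_wake = of_property_read_bool(np, "wakeup-source");

    /* a property carrying a value: only check that it exists */
    if (of_property_present(np, "power-domains"))
            dev_dbg(dev, "device is part of a power domain\n");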
Signed-off-by: Ahmad Fatoum a.fatoum@pengutronix.de Link: https://lore.kernel.org/r/20250218-gpcv2-of-property-present-v1-1-3bb1a97896... Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/soc/imx/gpcv2.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/soc/imx/gpcv2.c b/drivers/soc/imx/gpcv2.c index 88aee59730e39..6d5b6ed36169f 100644 --- a/drivers/soc/imx/gpcv2.c +++ b/drivers/soc/imx/gpcv2.c @@ -1347,7 +1347,7 @@ static int imx_pgc_domain_probe(struct platform_device *pdev) }
if (IS_ENABLED(CONFIG_LOCKDEP) && - of_property_read_bool(domain->dev->of_node, "power-domains")) + of_property_present(domain->dev->of_node, "power-domains")) lockdep_set_subclass(&domain->genpd.mlock, 1);
ret = of_genpd_add_provider_simple(domain->dev->of_node,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org
[ Upstream commit ab1bc2290fd8311d49b87c29f1eb123fcb581bee ]
of_property_read_bool() should be used only on boolean properties.
Cc: Rob Herring robh@kernel.org Signed-off-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org Reviewed-by: Vincent Mailhol mailhol.vincent@wanadoo.fr Link: https://patch.msgid.link/20250212-syscon-phandle-args-can-v2-3-ac9a1253396b@... Signed-off-by: Marc Kleine-Budde mkl@pengutronix.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/can/c_can/c_can_platform.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/can/c_can/c_can_platform.c b/drivers/net/can/c_can/c_can_platform.c index c5d7093d54133..c29862b3bb1f3 100644 --- a/drivers/net/can/c_can/c_can_platform.c +++ b/drivers/net/can/c_can/c_can_platform.c @@ -334,7 +334,7 @@ static int c_can_plat_probe(struct platform_device *pdev) /* Check if we need custom RAMINIT via syscon. Mostly for TI * platforms. Only supported with DT boot. */ - if (np && of_property_read_bool(np, "syscon-raminit")) { + if (np && of_property_present(np, "syscon-raminit")) { u32 id; struct c_can_raminit *raminit = &priv->raminit_sys;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jakub Kicinski kuba@kernel.org
[ Upstream commit 8fdeafd66edaf420ea0063a1f13442fe3470fe70 ]
mlx4 doesn't support ndo_xdp_xmit / XDP_REDIRECT and wasn't using page pool until now, so it could run XDP completions in netpoll (NAPI budget == 0) just fine. Page pool has calling-context requirements; make sure we don't try to call it from what is potentially hard IRQ context.
Reviewed-by: Tariq Toukan tariqt@nvidia.com Link: https://patch.msgid.link/20250213010635.1354034-3-kuba@kernel.org Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx4/en_tx.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c index 7fccf1a79f09b..95eae224bbb4a 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c @@ -446,6 +446,8 @@ int mlx4_en_process_tx_cq(struct net_device *dev,
if (unlikely(!priv->port_up)) return 0; + if (unlikely(!napi_budget) && cq->type == TX_XDP) + return 0;
netdev_txq_bql_complete_prefetchw(ring->tx_queue);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ilpo Järvinen ilpo.jarvinen@linux.intel.com
[ Upstream commit ff61f380de5652e723168341480cc7adf1dd6213 ]
Commit 903534fa7d30 ("PCI: Fix resource double counting on remove & rescan") fixed double counting of mem resources because of old_size being applied too early.
Fix a similar counting bug on the io resource side.
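With a hypothetical set of numbers (not from the commit), where old_size is the window assigned by a previous scan and therefore already covers the children's optional sizes:

    size = 0x2000, add_size = 0, children_add_size = 0x3000,
    old_size = 0x5000 (result of the previous scan), align = 0x1000

    before: size = max(0x2000, old_size) = 0x5000
            ALIGN(max(0x5000, 0) + 0x3000, 0x1000) = 0x8000   <- grows on every rescan
    after:  size = max(0x2000, 0) + 0x3000 = 0x5000
            ALIGN(max(0x5000, old_size), 0x1000)   = 0x5000   <- stable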
Link: https://lore.kernel.org/r/20241216175632.4175-6-ilpo.jarvinen@linux.intel.co... Signed-off-by: Ilpo Järvinen ilpo.jarvinen@linux.intel.com Signed-off-by: Bjorn Helgaas bhelgaas@google.com Tested-by: Xiaochun Lee lixc17@lenovo.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pci/setup-bus.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c index 8c1ad20a21ec5..3ce68adda9b7c 100644 --- a/drivers/pci/setup-bus.c +++ b/drivers/pci/setup-bus.c @@ -806,11 +806,9 @@ static resource_size_t calculate_iosize(resource_size_t size, size = (size & 0xff) + ((size & ~0xffUL) << 2); #endif size = size + size1; - if (size < old_size) - size = old_size;
- size = ALIGN(max(size, add_size) + children_add_size, align); - return size; + size = max(size, add_size) + children_add_size; + return ALIGN(max(size, old_size), align); }
static resource_size_t calculate_memsize(resource_size_t size,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Xiaofei Tan tanxiaofei@huawei.com
[ Upstream commit cccf6ee090c8c133072d5d5b52ae25f3bc907a16 ]
When the HED driver is built-in, it initializes after evged because both are at the same initcall level, so the initialization ordering depends on the Makefile order. However, this prevents RAS records that come in between the evged driver initialization and the HED driver initialization from being handled.
If the number of such RAS records is above the APEI HEST error source number, the HEST resources may be exhausted, and that may affect subsequent RAS error reporting.
To fix this issue, change the initcall level of HED to subsys_initcall and prevent the driver from being built as a module by changing ACPI_HED in Kconfig from "tristate" to "bool".
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com Link: https://patch.msgid.link/20250212063408.927666-1-tanxiaofei@huawei.com [ rjw: Changelog edits ] Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/acpi/Kconfig | 2 +- drivers/acpi/hed.c | 7 ++++++- 2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig index 473241b5193fa..596e96d3b3bdb 100644 --- a/drivers/acpi/Kconfig +++ b/drivers/acpi/Kconfig @@ -438,7 +438,7 @@ config ACPI_SBS the modules will be called sbs and sbshc.
config ACPI_HED - tristate "Hardware Error Device" + bool "Hardware Error Device" help This driver supports the Hardware Error Device (PNP0C33), which is used to report some hardware errors notified via diff --git a/drivers/acpi/hed.c b/drivers/acpi/hed.c index 60a2939cde6c5..e8e9b1ac06b88 100644 --- a/drivers/acpi/hed.c +++ b/drivers/acpi/hed.c @@ -72,7 +72,12 @@ static struct acpi_driver acpi_hed_driver = { .notify = acpi_hed_notify, }, }; -module_acpi_driver(acpi_hed_driver); + +static int __init acpi_hed_driver_init(void) +{ + return acpi_bus_register_driver(&acpi_hed_driver); +} +subsys_initcall(acpi_hed_driver_init);
MODULE_AUTHOR("Huang Ying"); MODULE_DESCRIPTION("ACPI Hardware Error Device Driver");
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Petr Machata petrm@nvidia.com
[ Upstream commit d42d543368343c0449a4e433b5f02e063a86209c ]
When a vxlan netdevice is brought up, if its default remote is a multicast address, the device joins the indicated group.
Therefore, when the multicast remote address changes, the device should leave the current group and subscribe to the new one. The same applies when the interface used for endpoint communication is changed while a multicast remote is configured. This is currently not done.
Both vxlan_igmp_join() and vxlan_igmp_leave() can however fail. So it is possible that with such a fix, the netdevice will end up in an inconsistent situation where the old group is not joined anymore, but joining the new group fails. Should we join the new group first and leave the old one second, we might end up in the opposite situation, where both groups are joined. Undoing any of this during rollback is going to be similarly problematic.
One solution would be to just forbid the change when the netdevice is up. However in vnifilter mode, changing the group address is allowed, and these problems are simply ignored (see vxlan_vni_update_group()):
# ip link add name br up type bridge vlan_filtering 1
# ip link add vx1 up master br type vxlan external vnifilter local 192.0.2.1 dev lo dstport 4789
# bridge vni add dev vx1 vni 200 group 224.0.0.1
# tcpdump -i lo &
# bridge vni add dev vx1 vni 200 group 224.0.0.2
18:55:46.523438 IP 0.0.0.0 > 224.0.0.22: igmp v3 report, 1 group record(s)
18:55:46.943447 IP 0.0.0.0 > 224.0.0.22: igmp v3 report, 1 group record(s)
# bridge vni
dev   vni   group/remote
vx1   200   224.0.0.2
Having two different modes of operation for conceptually the same interface is silly, so in this patch, just do what the vnifilter code does and deal with the errors by crossing fingers real hard.
The vnifilter code leaves old before joining new, and in case of join / leave failures does not roll back the configuration changes that have already been applied, but bails out of joining if it could not leave. Do the same here: leave before join, apply changes unconditionally and do not attempt to join if we couldn't leave.
Signed-off-by: Petr Machata petrm@nvidia.com Reviewed-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Nikolay Aleksandrov razor@blackwall.org Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/vxlan/vxlan_core.c | 18 ++++++++++++++++-- 1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c index 50be5a3c47795..0afd7eb976e6c 100644 --- a/drivers/net/vxlan/vxlan_core.c +++ b/drivers/net/vxlan/vxlan_core.c @@ -4231,6 +4231,7 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[], struct netlink_ext_ack *extack) { struct vxlan_dev *vxlan = netdev_priv(dev); + bool rem_ip_changed, change_igmp; struct net_device *lowerdev; struct vxlan_config conf; struct vxlan_rdst *dst; @@ -4254,8 +4255,13 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[], if (err) return err;
+ rem_ip_changed = !vxlan_addr_equal(&conf.remote_ip, &dst->remote_ip); + change_igmp = vxlan->dev->flags & IFF_UP && + (rem_ip_changed || + dst->remote_ifindex != conf.remote_ifindex); + /* handle default dst entry */ - if (!vxlan_addr_equal(&conf.remote_ip, &dst->remote_ip)) { + if (rem_ip_changed) { u32 hash_index = fdb_head_index(vxlan, all_zeros_mac, conf.vni);
spin_lock_bh(&vxlan->hash_lock[hash_index]); @@ -4299,6 +4305,9 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[], } }
+ if (change_igmp && vxlan_addr_multicast(&dst->remote_ip)) + err = vxlan_multicast_leave(vxlan); + if (conf.age_interval != vxlan->cfg.age_interval) mod_timer(&vxlan->age_timer, jiffies);
@@ -4306,7 +4315,12 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[], if (lowerdev && lowerdev != dst->remote_dev) dst->remote_dev = lowerdev; vxlan_config_apply(dev, &conf, lowerdev, vxlan->net, true); - return 0; + + if (!err && change_igmp && + vxlan_addr_multicast(&dst->remote_ip)) + err = vxlan_multicast_join(vxlan); + + return err; }
static void vxlan_dellink(struct net_device *dev, struct list_head *head)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans Verkuil hverkuil@xs4all.nl
[ Upstream commit e4740118b752005cbed339aec9a1d1c43620e0b9 ]
Artem reported that the CPU load was 100% when capturing from vivid at low resolution with ffmpeg.
This was caused by:
while (time_is_after_jiffies(cur_jiffies + wait_jiffies) &&
       !kthread_should_stop())
        schedule();
If there are no other processes running that can be scheduled, then this is basically a busy-loop.
Change it to wait_event_interruptible_timeout() which doesn't have that problem.
Signed-off-by: Hans Verkuil hverkuil@xs4all.nl Reported-by: Artem S. Tashkinov aros@gmx.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=219570 Reviewed-by: Nicolas Dufresne nicolas.dufresne@collabora.com Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/test-drivers/vivid/vivid-kthread-cap.c | 11 ++++++++--- drivers/media/test-drivers/vivid/vivid-kthread-out.c | 11 ++++++++--- .../media/test-drivers/vivid/vivid-kthread-touch.c | 11 ++++++++--- drivers/media/test-drivers/vivid/vivid-sdr-cap.c | 11 ++++++++--- 4 files changed, 32 insertions(+), 12 deletions(-)
diff --git a/drivers/media/test-drivers/vivid/vivid-kthread-cap.c b/drivers/media/test-drivers/vivid/vivid-kthread-cap.c index 690daada7db4f..54e6a6772035f 100644 --- a/drivers/media/test-drivers/vivid/vivid-kthread-cap.c +++ b/drivers/media/test-drivers/vivid/vivid-kthread-cap.c @@ -894,9 +894,14 @@ static int vivid_thread_vid_cap(void *data) next_jiffies_since_start = jiffies_since_start;
wait_jiffies = next_jiffies_since_start - jiffies_since_start; - while (time_is_after_jiffies(cur_jiffies + wait_jiffies) && - !kthread_should_stop()) - schedule(); + if (!time_is_after_jiffies(cur_jiffies + wait_jiffies)) + continue; + + wait_queue_head_t wait; + + init_waitqueue_head(&wait); + wait_event_interruptible_timeout(wait, kthread_should_stop(), + cur_jiffies + wait_jiffies - jiffies); } dprintk(dev, 1, "Video Capture Thread End\n"); return 0; diff --git a/drivers/media/test-drivers/vivid/vivid-kthread-out.c b/drivers/media/test-drivers/vivid/vivid-kthread-out.c index 0833e021bb11d..8a17a01e6e426 100644 --- a/drivers/media/test-drivers/vivid/vivid-kthread-out.c +++ b/drivers/media/test-drivers/vivid/vivid-kthread-out.c @@ -235,9 +235,14 @@ static int vivid_thread_vid_out(void *data) next_jiffies_since_start = jiffies_since_start;
wait_jiffies = next_jiffies_since_start - jiffies_since_start; - while (time_is_after_jiffies(cur_jiffies + wait_jiffies) && - !kthread_should_stop()) - schedule(); + if (!time_is_after_jiffies(cur_jiffies + wait_jiffies)) + continue; + + wait_queue_head_t wait; + + init_waitqueue_head(&wait); + wait_event_interruptible_timeout(wait, kthread_should_stop(), + cur_jiffies + wait_jiffies - jiffies); } dprintk(dev, 1, "Video Output Thread End\n"); return 0; diff --git a/drivers/media/test-drivers/vivid/vivid-kthread-touch.c b/drivers/media/test-drivers/vivid/vivid-kthread-touch.c index fa711ee36a3fb..c862689786b69 100644 --- a/drivers/media/test-drivers/vivid/vivid-kthread-touch.c +++ b/drivers/media/test-drivers/vivid/vivid-kthread-touch.c @@ -135,9 +135,14 @@ static int vivid_thread_touch_cap(void *data) next_jiffies_since_start = jiffies_since_start;
wait_jiffies = next_jiffies_since_start - jiffies_since_start; - while (time_is_after_jiffies(cur_jiffies + wait_jiffies) && - !kthread_should_stop()) - schedule(); + if (!time_is_after_jiffies(cur_jiffies + wait_jiffies)) + continue; + + wait_queue_head_t wait; + + init_waitqueue_head(&wait); + wait_event_interruptible_timeout(wait, kthread_should_stop(), + cur_jiffies + wait_jiffies - jiffies); } dprintk(dev, 1, "Touch Capture Thread End\n"); return 0; diff --git a/drivers/media/test-drivers/vivid/vivid-sdr-cap.c b/drivers/media/test-drivers/vivid/vivid-sdr-cap.c index 0ae5628b86c95..abccd1d0109ec 100644 --- a/drivers/media/test-drivers/vivid/vivid-sdr-cap.c +++ b/drivers/media/test-drivers/vivid/vivid-sdr-cap.c @@ -206,9 +206,14 @@ static int vivid_thread_sdr_cap(void *data) next_jiffies_since_start = jiffies_since_start;
wait_jiffies = next_jiffies_since_start - jiffies_since_start; - while (time_is_after_jiffies(cur_jiffies + wait_jiffies) && - !kthread_should_stop()) - schedule(); + if (!time_is_after_jiffies(cur_jiffies + wait_jiffies)) + continue; + + wait_queue_head_t wait; + + init_waitqueue_head(&wait); + wait_event_interruptible_timeout(wait, kthread_should_stop(), + cur_jiffies + wait_jiffies - jiffies); } dprintk(dev, 1, "SDR Capture Thread End\n"); return 0;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Shahar Shitrit shshitrit@nvidia.com
[ Upstream commit 633f16d7e07c129a36b882c05379e01ce5bdb542 ]
In the sensor_count field of the MTEWE register, bits 1-62 are supported only for unmanaged switches, not for NICs, and bit 63 is reserved for internal use.
To prevent confusing output that may include set bits that are not relevant to NIC sensors, we update the bitmask to retain only the first bit, which corresponds to the sensor ASIC.
Signed-off-by: Shahar Shitrit shshitrit@nvidia.com Signed-off-by: Tariq Toukan tariqt@nvidia.com Reviewed-by: Mateusz Polchlopek mateusz.polchlopek@intel.com Link: https://patch.msgid.link/20250213094641.226501-4-tariqt@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/events.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/events.c b/drivers/net/ethernet/mellanox/mlx5/core/events.c index 9459e56ee90a6..68b92927c74e9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/events.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/events.c @@ -163,6 +163,10 @@ static int temp_warn(struct notifier_block *nb, unsigned long type, void *data) u64 value_msb;
value_lsb = be64_to_cpu(eqe->data.temp_warning.sensor_warning_lsb); + /* bit 1-63 are not supported for NICs, + * hence read only bit 0 (asic) from lsb. + */ + value_lsb &= 0x1; value_msb = be64_to_cpu(eqe->data.temp_warning.sensor_warning_msb);
mlx5_core_warn(events->dev,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Shahar Shitrit shshitrit@nvidia.com
[ Upstream commit 9dd3d5d258aceb37bdf09c8b91fa448f58ea81f0 ]
Wrap the high temperature warning in a temperature event with a call to net_ratelimit() to prevent flooding the kernel log with repeated warning messages when temperature exceeds the threshold multiple times within a short duration.
Signed-off-by: Shahar Shitrit shshitrit@nvidia.com Signed-off-by: Tariq Toukan tariqt@nvidia.com Reviewed-by: Mateusz Polchlopek mateusz.polchlopek@intel.com Link: https://patch.msgid.link/20250213094641.226501-2-tariqt@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/events.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/events.c b/drivers/net/ethernet/mellanox/mlx5/core/events.c index 68b92927c74e9..6aa96d33c210b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/events.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/events.c @@ -169,9 +169,10 @@ static int temp_warn(struct notifier_block *nb, unsigned long type, void *data) value_lsb &= 0x1; value_msb = be64_to_cpu(eqe->data.temp_warning.sensor_warning_msb);
- mlx5_core_warn(events->dev, - "High temperature on sensors with bit set %llx %llx", - value_msb, value_lsb); + if (net_ratelimit()) + mlx5_core_warn(events->dev, + "High temperature on sensors with bit set %llx %llx", + value_msb, value_lsb);
return NOTIFY_OK; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Martin Povišer povik+lin@cutebit.org
[ Upstream commit 783db6851c1821d8b983ffb12b99c279ff64f2ee ]
Lower the volume if it is violating the platform maximum at its initial value (i.e. at the time of the 'snd_soc_limit_volume' call).
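For context, a hedged usage sketch of the affected API (the control name and limit value are made up); with this change the control's current value is clipped at the time of the call instead of only on the next write:

    /* e.g. in a machine driver's late_probe() callback */
    ret = snd_soc_limit_volume(card, "Speaker Playback Volume", 0x11);
    if (ret)
            dev_warn(card->dev, "failed to limit volume: %d\n", ret);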
Signed-off-by: Martin Povišer povik+lin@cutebit.org [Cherry picked from the Asahi kernel with fixups -- broonie] Signed-off-by: Mark Brown broonie@kernel.org Link: https://patch.msgid.link/20250208-asoc-volume-limit-v1-1-b98fcf4cdbad@kernel... Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/soc-ops.c | 29 ++++++++++++++++++++++++++++- 1 file changed, 28 insertions(+), 1 deletion(-)
diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c index b4cfc34d00ee6..eff1355cc3df0 100644 --- a/sound/soc/soc-ops.c +++ b/sound/soc/soc-ops.c @@ -638,6 +638,33 @@ int snd_soc_get_volsw_range(struct snd_kcontrol *kcontrol, } EXPORT_SYMBOL_GPL(snd_soc_get_volsw_range);
+static int snd_soc_clip_to_platform_max(struct snd_kcontrol *kctl) +{ + struct soc_mixer_control *mc = (struct soc_mixer_control *)kctl->private_value; + struct snd_ctl_elem_value uctl; + int ret; + + if (!mc->platform_max) + return 0; + + ret = kctl->get(kctl, &uctl); + if (ret < 0) + return ret; + + if (uctl.value.integer.value[0] > mc->platform_max) + uctl.value.integer.value[0] = mc->platform_max; + + if (snd_soc_volsw_is_stereo(mc) && + uctl.value.integer.value[1] > mc->platform_max) + uctl.value.integer.value[1] = mc->platform_max; + + ret = kctl->put(kctl, &uctl); + if (ret < 0) + return ret; + + return 0; +} + /** * snd_soc_limit_volume - Set new limit to an existing volume control. * @@ -662,7 +689,7 @@ int snd_soc_limit_volume(struct snd_soc_card *card, struct soc_mixer_control *mc = (struct soc_mixer_control *)kctl->private_value; if (max <= mc->max - mc->min) { mc->platform_max = max; - ret = 0; + ret = snd_soc_clip_to_platform_max(kctl); } } return ret;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hector Martin marcan@marcan.st
[ Upstream commit d64c4c3d1c578f98d70db1c5e2535b47adce9d07 ]
Signed-off-by: Hector Martin marcan@marcan.st Signed-off-by: Mark Brown broonie@kernel.org Link: https://patch.msgid.link/20250208-asoc-tas2764-v1-4-dbab892a69b5@kernel.org Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/codecs/tas2764.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c index fc8479d3d2852..72db361ac3611 100644 --- a/sound/soc/codecs/tas2764.c +++ b/sound/soc/codecs/tas2764.c @@ -636,6 +636,7 @@ static const struct reg_default tas2764_reg_defaults[] = { { TAS2764_TDM_CFG2, 0x0a }, { TAS2764_TDM_CFG3, 0x10 }, { TAS2764_TDM_CFG5, 0x42 }, + { TAS2764_INT_CLK_CFG, 0x19 }, };
static const struct regmap_range_cfg tas2764_regmap_ranges[] = {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hector Martin marcan@marcan.st
[ Upstream commit f37f1748564ac51d32f7588bd7bfc99913ccab8e ]
Since the bit is self-clearing.
Signed-off-by: Hector Martin marcan@marcan.st Signed-off-by: Mark Brown broonie@kernel.org Link: https://patch.msgid.link/20250208-asoc-tas2764-v1-3-dbab892a69b5@kernel.org Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/codecs/tas2764.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c index 72db361ac3611..94428487a8855 100644 --- a/sound/soc/codecs/tas2764.c +++ b/sound/soc/codecs/tas2764.c @@ -654,6 +654,7 @@ static const struct regmap_range_cfg tas2764_regmap_ranges[] = { static bool tas2764_volatile_register(struct device *dev, unsigned int reg) { switch (reg) { + case TAS2764_SW_RST: case TAS2764_INT_LTCH0 ... TAS2764_INT_LTCH4: case TAS2764_INT_CLK_CFG: return true;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hector Martin marcan@marcan.st
[ Upstream commit 1c3b5f37409682184669457a5bdf761268eafbe5 ]
The ASoC convention is that clocks are removed after codec mute, and power up/down is more about top level power management. For these chips, the "mute" state still expects a TDM clock, and yanking the clock in this state will trigger clock errors. So, do the full shutdown<->mute<->active transition on the mute operation, so the amp is in software shutdown by the time the clocks are removed.
This fixes TDM clock errors when streams are stopped.
Signed-off-by: Hector Martin marcan@marcan.st Signed-off-by: Mark Brown broonie@kernel.org Link: https://patch.msgid.link/20250208-asoc-tas2764-v1-1-dbab892a69b5@kernel.org Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/codecs/tas2764.c | 51 ++++++++++++++++---------------------- 1 file changed, 21 insertions(+), 30 deletions(-)
diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c index 94428487a8855..10f0f07b90ff2 100644 --- a/sound/soc/codecs/tas2764.c +++ b/sound/soc/codecs/tas2764.c @@ -182,33 +182,6 @@ static SOC_ENUM_SINGLE_DECL( static const struct snd_kcontrol_new tas2764_asi1_mux = SOC_DAPM_ENUM("ASI1 Source", tas2764_ASI1_src_enum);
-static int tas2764_dac_event(struct snd_soc_dapm_widget *w, - struct snd_kcontrol *kcontrol, int event) -{ - struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm); - struct tas2764_priv *tas2764 = snd_soc_component_get_drvdata(component); - int ret; - - switch (event) { - case SND_SOC_DAPM_POST_PMU: - tas2764->dac_powered = true; - ret = tas2764_update_pwr_ctrl(tas2764); - break; - case SND_SOC_DAPM_PRE_PMD: - tas2764->dac_powered = false; - ret = tas2764_update_pwr_ctrl(tas2764); - break; - default: - dev_err(tas2764->dev, "Unsupported event\n"); - return -EINVAL; - } - - if (ret < 0) - return ret; - - return 0; -} - static const struct snd_kcontrol_new isense_switch = SOC_DAPM_SINGLE("Switch", TAS2764_PWR_CTRL, TAS2764_ISENSE_POWER_EN, 1, 1); static const struct snd_kcontrol_new vsense_switch = @@ -221,8 +194,7 @@ static const struct snd_soc_dapm_widget tas2764_dapm_widgets[] = { 1, &isense_switch), SND_SOC_DAPM_SWITCH("VSENSE", TAS2764_PWR_CTRL, TAS2764_VSENSE_POWER_EN, 1, &vsense_switch), - SND_SOC_DAPM_DAC_E("DAC", NULL, SND_SOC_NOPM, 0, 0, tas2764_dac_event, - SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD), + SND_SOC_DAPM_DAC("DAC", NULL, SND_SOC_NOPM, 0, 0), SND_SOC_DAPM_OUTPUT("OUT"), SND_SOC_DAPM_SIGGEN("VMON"), SND_SOC_DAPM_SIGGEN("IMON") @@ -243,9 +215,28 @@ static int tas2764_mute(struct snd_soc_dai *dai, int mute, int direction) { struct tas2764_priv *tas2764 = snd_soc_component_get_drvdata(dai->component); + int ret; + + if (!mute) { + tas2764->dac_powered = true; + ret = tas2764_update_pwr_ctrl(tas2764); + if (ret) + return ret; + }
tas2764->unmuted = !mute; - return tas2764_update_pwr_ctrl(tas2764); + ret = tas2764_update_pwr_ctrl(tas2764); + if (ret) + return ret; + + if (mute) { + tas2764->dac_powered = false; + ret = tas2764_update_pwr_ctrl(tas2764); + if (ret) + return ret; + } + + return 0; }
static int tas2764_set_bitwidth(struct tas2764_priv *tas2764, int bitwidth)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuninori Morimoto kuninori.morimoto.gx@renesas.com
[ Upstream commit 7f1186a8d738661b941b298fd6d1d5725ed71428 ]
snd_soc_dai_set_tdm_slot() calls .xlate_tdm_slot_mask() or snd_soc_xlate_tdm_slot_mask(), but doesn't check the return value. Let's check it.
This patch might break existing drivers. In that case, let's make each such function return void instead of int.
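For illustration, a hedged sketch of a caller that now sees the propagated error (the DAI, masks and slot geometry are made up):

    /* four 32-bit slots, transmit and receive on all of them */
    ret = snd_soc_dai_set_tdm_slot(cpu_dai, 0xf, 0xf, 4, 32);
    if (ret < 0)
            dev_err(card->dev, "failed to configure TDM slots: %d\n", ret);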
Signed-off-by: Kuninori Morimoto kuninori.morimoto.gx@renesas.com Link: https://patch.msgid.link/87o6z7yk61.wl-kuninori.morimoto.gx@renesas.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/soc-dai.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/sound/soc/soc-dai.c b/sound/soc/soc-dai.c index 49752af0e205d..ba38b6e6b2649 100644 --- a/sound/soc/soc-dai.c +++ b/sound/soc/soc-dai.c @@ -270,10 +270,11 @@ int snd_soc_dai_set_tdm_slot(struct snd_soc_dai *dai,
if (dai->driver->ops && dai->driver->ops->xlate_tdm_slot_mask) - dai->driver->ops->xlate_tdm_slot_mask(slots, - &tx_mask, &rx_mask); + ret = dai->driver->ops->xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask); else - snd_soc_xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask); + ret = snd_soc_xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask); + if (ret) + goto err;
dai->tx_mask = tx_mask; dai->rx_mask = rx_mask; @@ -282,6 +283,7 @@ int snd_soc_dai_set_tdm_slot(struct snd_soc_dai *dai, dai->driver->ops->set_tdm_slot) ret = dai->driver->ops->set_tdm_slot(dai, tx_mask, rx_mask, slots, slot_width); +err: return soc_dai_ret(dai, ret); } EXPORT_SYMBOL_GPL(snd_soc_dai_set_tdm_slot);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Valentin Caron valentin.caron@foss.st.com
[ Upstream commit c98868e816209e568c9d72023ba0bc1e4d96e611 ]
A corner case in the pinctrl framework makes it impossible to use a hogged pin and another, non-hogged, pin within the same device-tree node. For example, with this simplified device tree:
&pinctrl {
	pinctrl_pin_1: pinctrl-pin-1 {
		pins = "dummy-pinctrl-pin";
	};
};

&rtc {
	pinctrl-names = "default";
	pinctrl-0 = <&pinctrl_pin_1 &rtc_pin_1>;

	rtc_pin_1: rtc-pin-1 {
		pins = "dummy-rtc-pin";
	};
};
"pinctrl_pin_1" configuration is never set. This produces this path in the code:
really_probe()
  pinctrl_bind_pins()
  |  devm_pinctrl_get()
  |  pinctrl_get()
  |  create_pinctrl()
  |  pinctrl_dt_to_map()
  |  // Hog pin create an abort for all pins of the node
  |  ret = dt_to_map_one_config()
  |  |  /* Do not defer probing of hogs (circular loop) */
  |  |  if (np_pctldev == p->dev->of_node)
  |  |          return -ENODEV;
  |  if (ret)
  |          goto err
  |
  call_driver_probe()
    stm32_rtc_probe()
      pinctrl_enable()
        pinctrl_claim_hogs()
          create_pinctrl()
            for_each_maps(maps_node, i, map)
              // Not hog pin is skipped
              if (pctldev && strcmp(dev_name(pctldev->dev), map->ctrl_dev_name))
                      continue;
At the first call of create_pinctrl() the hogged pin produces an abort to avoid a defer of hogged pins. All other pin configurations are trashed.
At the second call, create_pinctrl() is called with the pctldev parameter to get hogs, but in this context only hogs are set and other pins are skipped.

To handle this, do not produce an abort in the first call of create_pinctrl(). The classic pin configuration will be set in the pinctrl_bind_pins() context, and the hogged pin configuration will be set in the pinctrl_claim_hogs() context.
Signed-off-by: Valentin Caron valentin.caron@foss.st.com Link: https://lore.kernel.org/20250116170009.2075544-1-valentin.caron@foss.st.com Signed-off-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pinctrl/devicetree.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/pinctrl/devicetree.c b/drivers/pinctrl/devicetree.c index 5ee746cb81f59..6520b88db1105 100644 --- a/drivers/pinctrl/devicetree.c +++ b/drivers/pinctrl/devicetree.c @@ -143,10 +143,14 @@ static int dt_to_map_one_config(struct pinctrl *p, pctldev = get_pinctrl_dev_from_of_node(np_pctldev); if (pctldev) break; - /* Do not defer probing of hogs (circular loop) */ + /* + * Do not defer probing of hogs (circular loop) + * + * Return 1 to let the caller catch the case. + */ if (np_pctldev == p->dev->of_node) { of_node_put(np_pctldev); - return -ENODEV; + return 1; } } of_node_put(np_pctldev); @@ -265,6 +269,8 @@ int pinctrl_dt_to_map(struct pinctrl *p, struct pinctrl_dev *pctldev) ret = dt_to_map_one_config(p, pctldev, statename, np_config); of_node_put(np_config); + if (ret == 1) + continue; if (ret < 0) goto err; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Konstantin Andreev andreev@swemel.ru
[ Upstream commit a158a937d864d0034fea14913c1f09c6d5f574b8 ]
If a SMACK label has a CIPSO representation without categories, e.g.:
| # cat /smack/cipso2
| foo 10
| @ 250/2
| ...
then SMACK does not recognize such CIPSO in incoming IPv4 packets and substitutes the '*' label instead. Audit records may look like:
| lsm=SMACK fn=smack_socket_sock_rcv_skb action=denied
| subject="*" object="_" requested=w pid=0 comm="swapper/1" ...
This happens in two steps:
1) security/smack/smackfs.c`smk_set_cipso does not clear NETLBL_SECATTR_MLS_CAT from (struct smack_known *)skp->smk_netlabel.flags on assigning CIPSO w/o categories:
| rcu_assign_pointer(skp->smk_netlabel.attr.mls.cat, ncats.attr.mls.cat);
| skp->smk_netlabel.attr.mls.lvl = ncats.attr.mls.lvl;
2) security/smack/smack_lsm.c`smack_from_secattr cannot match skp->smk_netlabel with the input packet's struct netlbl_lsm_secattr *sap, because sap->flags does not have NETLBL_SECATTR_MLS_CAT (which is correct) but skp->smk_netlabel.flags does (which is incorrect):
| if ((sap->flags & NETLBL_SECATTR_MLS_CAT) == 0) {
|         if ((skp->smk_netlabel.flags &
|              NETLBL_SECATTR_MLS_CAT) == 0)
|                 found = 1;
|         break;
| }
This commit sets/clears NETLBL_SECATTR_MLS_CAT in skp->smk_netlabel.flags according to the presence of CIPSO categories. The update of smk_netlabel is not atomic, so input packet processing may still be incorrect for a short time while the update proceeds.
Signed-off-by: Konstantin Andreev andreev@swemel.ru Signed-off-by: Casey Schaufler casey@schaufler-ca.com Signed-off-by: Sasha Levin sashal@kernel.org --- security/smack/smackfs.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c index d955f3dcb3a5e..9dca3672d82b4 100644 --- a/security/smack/smackfs.c +++ b/security/smack/smackfs.c @@ -922,6 +922,10 @@ static ssize_t smk_set_cipso(struct file *file, const char __user *buf, if (rc >= 0) { old_cat = skp->smk_netlabel.attr.mls.cat; rcu_assign_pointer(skp->smk_netlabel.attr.mls.cat, ncats.attr.mls.cat); + if (ncats.attr.mls.cat) + skp->smk_netlabel.flags |= NETLBL_SECATTR_MLS_CAT; + else + skp->smk_netlabel.flags &= ~(u32)NETLBL_SECATTR_MLS_CAT; skp->smk_netlabel.attr.mls.lvl = ncats.attr.mls.lvl; synchronize_rcu(); netlbl_catmap_free(old_cat);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Brendan Jackman jackmanb@google.com
[ Upstream commit 08fafac4c9f289a9d9a22d838921e4b3eb22c664 ]
As noted in [0], SeaBIOS (the QEMU default) makes a mess of the terminal; qboot does not.
It turns out this is actually useful with kunit.py, since the user is exposed to this issue if they set --raw_output=all.
qboot is also faster than SeaBIOS, but the difference is marginal for this use case.
[0] https://lore.kernel.org/all/CA+i-1C0wYb-gZ8Mwh3WSVpbk-LF-Uo+njVbASJPe1WXDURo...
Both SeaBIOS and qboot are x86-specific.
Link: https://lore.kernel.org/r/20250124-kunit-qboot-v1-1-815e4d4c6f7c@google.com Signed-off-by: Brendan Jackman jackmanb@google.com Reviewed-by: David Gow davidgow@google.com Signed-off-by: Shuah Khan skhan@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/kunit/qemu_configs/x86_64.py | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/tools/testing/kunit/qemu_configs/x86_64.py b/tools/testing/kunit/qemu_configs/x86_64.py index dc79490768630..4a6bf4e048f5b 100644 --- a/tools/testing/kunit/qemu_configs/x86_64.py +++ b/tools/testing/kunit/qemu_configs/x86_64.py @@ -7,4 +7,6 @@ CONFIG_SERIAL_8250_CONSOLE=y''', qemu_arch='x86_64', kernel_path='arch/x86/boot/bzImage', kernel_command_line='console=ttyS0', - extra_qemu_params=[]) + # qboot is faster than SeaBIOS and doesn't mess up + # the terminal. + extra_qemu_params=['-bios', 'qboot.rom'])
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kees Cook kees@kernel.org
[ Upstream commit 4a6f18f28627e121bd1f74b5fcc9f945d6dbeb1e ]
GCC can see that the value range for "order" is capped, but this leads it to consider that it might be negative, leading to a false positive warning (with GCC 15 with -Warray-bounds -fdiagnostics-details):
../drivers/net/ethernet/mellanox/mlx4/alloc.c:691:47: error: array subscript -1 is below array bounds of 'long unsigned int *[2]' [-Werror=array-bounds=]
  691 |         i = find_first_bit(pgdir->bits[o], MLX4_DB_PER_PAGE >> o);
      |                            ~~~~~~~~~~~^~~
  'mlx4_alloc_db_from_pgdir': events 1-2
  691 |         i = find_first_bit(pgdir->bits[o], MLX4_DB_PER_PAGE >> o);
      |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |             |                          |
      |             |                          (2) out of array bounds here
      |             (1) when the condition is evaluated to true
In file included from ../drivers/net/ethernet/mellanox/mlx4/mlx4.h:53,
                 from ../drivers/net/ethernet/mellanox/mlx4/alloc.c:42:
../include/linux/mlx4/device.h:664:33: note: while referencing 'bits'
  664 |         unsigned long *bits[2];
      |                        ^~~~
Switch the argument to unsigned int, which removes the compiler's need to consider negative values.
Signed-off-by: Kees Cook kees@kernel.org Link: https://patch.msgid.link/20250210174504.work.075-kees@kernel.org Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx4/alloc.c | 6 +++--- include/linux/mlx4/device.h | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx4/alloc.c b/drivers/net/ethernet/mellanox/mlx4/alloc.c index b330020dc0d67..f2bded847e61d 100644 --- a/drivers/net/ethernet/mellanox/mlx4/alloc.c +++ b/drivers/net/ethernet/mellanox/mlx4/alloc.c @@ -682,9 +682,9 @@ static struct mlx4_db_pgdir *mlx4_alloc_db_pgdir(struct device *dma_device) }
static int mlx4_alloc_db_from_pgdir(struct mlx4_db_pgdir *pgdir, - struct mlx4_db *db, int order) + struct mlx4_db *db, unsigned int order) { - int o; + unsigned int o; int i;
for (o = order; o <= 1; ++o) { @@ -712,7 +712,7 @@ static int mlx4_alloc_db_from_pgdir(struct mlx4_db_pgdir *pgdir, return 0; }
-int mlx4_db_alloc(struct mlx4_dev *dev, struct mlx4_db *db, int order) +int mlx4_db_alloc(struct mlx4_dev *dev, struct mlx4_db *db, unsigned int order) { struct mlx4_priv *priv = mlx4_priv(dev); struct mlx4_db_pgdir *pgdir; diff --git a/include/linux/mlx4/device.h b/include/linux/mlx4/device.h index 6646634a0b9d4..0cb296f0f8d1d 100644 --- a/include/linux/mlx4/device.h +++ b/include/linux/mlx4/device.h @@ -1115,7 +1115,7 @@ int mlx4_write_mtt(struct mlx4_dev *dev, struct mlx4_mtt *mtt, int mlx4_buf_write_mtt(struct mlx4_dev *dev, struct mlx4_mtt *mtt, struct mlx4_buf *buf);
-int mlx4_db_alloc(struct mlx4_dev *dev, struct mlx4_db *db, int order); +int mlx4_db_alloc(struct mlx4_dev *dev, struct mlx4_db *db, unsigned int order); void mlx4_db_free(struct mlx4_dev *dev, struct mlx4_db *db);
int mlx4_alloc_hwq_res(struct mlx4_dev *dev, struct mlx4_hwq_resources *wqres,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org
[ Upstream commit 7a243e1b814a02ab40793026ef64223155d86395 ]
If regmap_read() fails, a random stack value is used when calculating the new frequency in the recalc_rate() callbacks. Such a failure is not really expected, as these are all MMIO reads, but the code should still be correct and bail out. This also avoids a possible warning about an uninitialized value.
Signed-off-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org Link: https://lore.kernel.org/r/20250212-b4-clk-qcom-clean-v3-1-499f37444f5d@linar... Signed-off-by: Bjorn Andersson andersson@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/clk/qcom/clk-alpha-pll.c | 52 ++++++++++++++++++++++---------- 1 file changed, 36 insertions(+), 16 deletions(-)
diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c index e63a90db1505a..c591fa1ad802d 100644 --- a/drivers/clk/qcom/clk-alpha-pll.c +++ b/drivers/clk/qcom/clk-alpha-pll.c @@ -561,14 +561,19 @@ clk_alpha_pll_recalc_rate(struct clk_hw *hw, unsigned long parent_rate) struct clk_alpha_pll *pll = to_clk_alpha_pll(hw); u32 alpha_width = pll_alpha_width(pll);
- regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l); + if (regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l)) + return 0; + + if (regmap_read(pll->clkr.regmap, PLL_USER_CTL(pll), &ctl)) + return 0;
- regmap_read(pll->clkr.regmap, PLL_USER_CTL(pll), &ctl); if (ctl & PLL_ALPHA_EN) { - regmap_read(pll->clkr.regmap, PLL_ALPHA_VAL(pll), &low); + if (regmap_read(pll->clkr.regmap, PLL_ALPHA_VAL(pll), &low)) + return 0; if (alpha_width > 32) { - regmap_read(pll->clkr.regmap, PLL_ALPHA_VAL_U(pll), - &high); + if (regmap_read(pll->clkr.regmap, PLL_ALPHA_VAL_U(pll), + &high)) + return 0; a = (u64)high << 32 | low; } else { a = low & GENMASK(alpha_width - 1, 0); @@ -760,8 +765,11 @@ alpha_pll_huayra_recalc_rate(struct clk_hw *hw, unsigned long parent_rate) struct clk_alpha_pll *pll = to_clk_alpha_pll(hw); u32 l, alpha = 0, ctl, alpha_m, alpha_n;
- regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l); - regmap_read(pll->clkr.regmap, PLL_USER_CTL(pll), &ctl); + if (regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l)) + return 0; + + if (regmap_read(pll->clkr.regmap, PLL_USER_CTL(pll), &ctl)) + return 0;
if (ctl & PLL_ALPHA_EN) { regmap_read(pll->clkr.regmap, PLL_ALPHA_VAL(pll), &alpha); @@ -955,8 +963,11 @@ clk_trion_pll_recalc_rate(struct clk_hw *hw, unsigned long parent_rate) struct clk_alpha_pll *pll = to_clk_alpha_pll(hw); u32 l, frac, alpha_width = pll_alpha_width(pll);
- regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l); - regmap_read(pll->clkr.regmap, PLL_ALPHA_VAL(pll), &frac); + if (regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l)) + return 0; + + if (regmap_read(pll->clkr.regmap, PLL_ALPHA_VAL(pll), &frac)) + return 0;
return alpha_pll_calc_rate(parent_rate, l, frac, alpha_width); } @@ -1014,7 +1025,8 @@ clk_alpha_pll_postdiv_recalc_rate(struct clk_hw *hw, unsigned long parent_rate) struct clk_alpha_pll_postdiv *pll = to_clk_alpha_pll_postdiv(hw); u32 ctl;
- regmap_read(pll->clkr.regmap, PLL_USER_CTL(pll), &ctl); + if (regmap_read(pll->clkr.regmap, PLL_USER_CTL(pll), &ctl)) + return 0;
ctl >>= PLL_POST_DIV_SHIFT; ctl &= PLL_POST_DIV_MASK(pll); @@ -1230,8 +1242,11 @@ static unsigned long alpha_pll_fabia_recalc_rate(struct clk_hw *hw, struct clk_alpha_pll *pll = to_clk_alpha_pll(hw); u32 l, frac, alpha_width = pll_alpha_width(pll);
- regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l); - regmap_read(pll->clkr.regmap, PLL_FRAC(pll), &frac); + if (regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l)) + return 0; + + if (regmap_read(pll->clkr.regmap, PLL_FRAC(pll), &frac)) + return 0;
return alpha_pll_calc_rate(parent_rate, l, frac, alpha_width); } @@ -1381,7 +1396,8 @@ clk_trion_pll_postdiv_recalc_rate(struct clk_hw *hw, unsigned long parent_rate) struct regmap *regmap = pll->clkr.regmap; u32 i, div = 1, val;
- regmap_read(regmap, PLL_USER_CTL(pll), &val); + if (regmap_read(regmap, PLL_USER_CTL(pll), &val)) + return 0;
val >>= pll->post_div_shift; val &= PLL_POST_DIV_MASK(pll); @@ -2254,9 +2270,12 @@ static unsigned long alpha_pll_lucid_evo_recalc_rate(struct clk_hw *hw, struct regmap *regmap = pll->clkr.regmap; u32 l, frac;
- regmap_read(regmap, PLL_L_VAL(pll), &l); + if (regmap_read(regmap, PLL_L_VAL(pll), &l)) + return 0; l &= LUCID_EVO_PLL_L_VAL_MASK; - regmap_read(regmap, PLL_ALPHA_VAL(pll), &frac); + + if (regmap_read(regmap, PLL_ALPHA_VAL(pll), &frac)) + return 0;
return alpha_pll_calc_rate(parent_rate, l, frac, pll_alpha_width(pll)); } @@ -2331,7 +2350,8 @@ static unsigned long clk_rivian_evo_pll_recalc_rate(struct clk_hw *hw, struct clk_alpha_pll *pll = to_clk_alpha_pll(hw); u32 l;
- regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l); + if (regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l)) + return 0;
return parent_rate * l; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Claudiu Beznea claudiu.beznea.uj@bp.renesas.com
[ Upstream commit 22a6984c5b5df8eab864d7f3e8b94d5a554d31ab ]
The Renesas RZ/G3S supports a power saving mode where power to most of the SoC components is turned off. When returning from this power saving mode, SoC components need to be re-configured.
The SCIFs on the Renesas RZ/G3S need to be re-configured as well when returning from this power saving mode. The sh-sci code already configures the SCIF clocks, power domain and registers by calling uart_resume_port() in sci_resume(). On the suspend path the SCIF UART ports are suspended accordingly (by calling uart_suspend_port() in sci_suspend()). The only missing setting is the reset signal. For this, assert/de-assert the reset signal on driver suspend/resume.
In case no_console_suspend is specified by the user, the registers need to be saved on the suspend path and restored on the resume path. To do this, the sci_console_save()/sci_console_restore() functions were added. There is no need to cache/restore the status or FIFO registers, only the control registers. The registers that are saved/restored on suspend/resume are specified by the struct sci_suspend_regs data structure.
Signed-off-by: Claudiu Beznea claudiu.beznea.uj@bp.renesas.com Reviewed-by: Geert Uytterhoeven geert+renesas@glider.be Link: https://lore.kernel.org/r/20250207113313.545432-1-claudiu.beznea.uj@bp.renes... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/tty/serial/sh-sci.c | 71 +++++++++++++++++++++++++++++++++++-- 1 file changed, 69 insertions(+), 2 deletions(-)
diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c index e2dfca4c2eff8..191136dcb94d0 100644 --- a/drivers/tty/serial/sh-sci.c +++ b/drivers/tty/serial/sh-sci.c @@ -105,6 +105,15 @@ struct plat_sci_reg { u8 offset, size; };
+struct sci_suspend_regs { + u16 scsmr; + u16 scscr; + u16 scfcr; + u16 scsptr; + u8 scbrr; + u8 semr; +}; + struct sci_port_params { const struct plat_sci_reg regs[SCIx_NR_REGS]; unsigned int fifosize; @@ -135,6 +144,8 @@ struct sci_port { struct dma_chan *chan_tx; struct dma_chan *chan_rx;
+ struct reset_control *rstc; + #ifdef CONFIG_SERIAL_SH_SCI_DMA struct dma_chan *chan_tx_saved; struct dma_chan *chan_rx_saved; @@ -154,6 +165,7 @@ struct sci_port { int rx_trigger; struct timer_list rx_fifo_timer; int rx_fifo_timeout; + struct sci_suspend_regs suspend_regs; u16 hscif_tot;
bool has_rtscts; @@ -3252,6 +3264,7 @@ static struct plat_sci_port *sci_parse_dt(struct platform_device *pdev, }
sp = &sci_ports[id]; + sp->rstc = rstc; *dev_id = id;
p->type = SCI_OF_TYPE(data); @@ -3400,13 +3413,57 @@ static int sci_probe(struct platform_device *dev) return 0; }
+static void sci_console_save(struct sci_port *s) +{ + struct sci_suspend_regs *regs = &s->suspend_regs; + struct uart_port *port = &s->port; + + if (sci_getreg(port, SCSMR)->size) + regs->scsmr = sci_serial_in(port, SCSMR); + if (sci_getreg(port, SCSCR)->size) + regs->scscr = sci_serial_in(port, SCSCR); + if (sci_getreg(port, SCFCR)->size) + regs->scfcr = sci_serial_in(port, SCFCR); + if (sci_getreg(port, SCSPTR)->size) + regs->scsptr = sci_serial_in(port, SCSPTR); + if (sci_getreg(port, SCBRR)->size) + regs->scbrr = sci_serial_in(port, SCBRR); + if (sci_getreg(port, SEMR)->size) + regs->semr = sci_serial_in(port, SEMR); +} + +static void sci_console_restore(struct sci_port *s) +{ + struct sci_suspend_regs *regs = &s->suspend_regs; + struct uart_port *port = &s->port; + + if (sci_getreg(port, SCSMR)->size) + sci_serial_out(port, SCSMR, regs->scsmr); + if (sci_getreg(port, SCSCR)->size) + sci_serial_out(port, SCSCR, regs->scscr); + if (sci_getreg(port, SCFCR)->size) + sci_serial_out(port, SCFCR, regs->scfcr); + if (sci_getreg(port, SCSPTR)->size) + sci_serial_out(port, SCSPTR, regs->scsptr); + if (sci_getreg(port, SCBRR)->size) + sci_serial_out(port, SCBRR, regs->scbrr); + if (sci_getreg(port, SEMR)->size) + sci_serial_out(port, SEMR, regs->semr); +} + static __maybe_unused int sci_suspend(struct device *dev) { struct sci_port *sport = dev_get_drvdata(dev);
- if (sport) + if (sport) { uart_suspend_port(&sci_uart_driver, &sport->port);
+ if (!console_suspend_enabled && uart_console(&sport->port)) + sci_console_save(sport); + else + return reset_control_assert(sport->rstc); + } + return 0; }
@@ -3414,8 +3471,18 @@ static __maybe_unused int sci_resume(struct device *dev) { struct sci_port *sport = dev_get_drvdata(dev);
- if (sport) + if (sport) { + if (!console_suspend_enabled && uart_console(&sport->port)) { + sci_console_restore(sport); + } else { + int ret = reset_control_deassert(sport->rstc); + + if (ret) + return ret; + } + uart_resume_port(&sci_uart_driver, &sport->port); + }
return 0; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dmitry Baryshkov dmitry.baryshkov@linaro.org
[ Upstream commit d58c04e305afbaa9dda7969151f06c4efe2c98b0 ]
As reported by Damon Ding, the phy_get_mode() call doesn't work as expected unless the PHY driver has a .set_mode() call. This prompts PHY drivers to have empty stubs for .set_mode() for the sake of being able to get the mode.
Make the .set_mode() callback truly optional and update the PHY's mode even if there is none.
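As a hedged illustration of what this enables (driver and callback names are hypothetical), a PHY driver no longer needs an empty .set_mode() stub just to have the mode recorded:

    static const struct phy_ops example_phy_ops = {
            .power_on  = example_phy_power_on,
            .power_off = example_phy_power_off,
            /* no .set_mode(): phy_set_mode_ext() still updates phy->attrs.mode */
            .owner     = THIS_MODULE,
    };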
Cc: Damon Ding damon.ding@rock-chips.com Link: https://lore.kernel.org/r/96f8310f-93f1-4bcb-8637-137e1159ff83@rock-chips.co... Tested-by: Damon Ding damon.ding@rock-chips.com Signed-off-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Link: https://lore.kernel.org/r/20250209-phy-fix-set-moe-v2-1-76e248503856@linaro.... Signed-off-by: Vinod Koul vkoul@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/phy/phy-core.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/phy/phy-core.c b/drivers/phy/phy-core.c index 0730fe80dc3c1..069bcf49ee8f7 100644 --- a/drivers/phy/phy-core.c +++ b/drivers/phy/phy-core.c @@ -398,13 +398,14 @@ EXPORT_SYMBOL_GPL(phy_power_off);
int phy_set_mode_ext(struct phy *phy, enum phy_mode mode, int submode) { - int ret; + int ret = 0;
- if (!phy || !phy->ops->set_mode) + if (!phy) return 0;
mutex_lock(&phy->mutex); - ret = phy->ops->set_mode(phy, mode, submode); + if (phy->ops->set_mode) + ret = phy->ops->set_mode(phy, mode, submode); if (!ret) phy->attrs.mode = mode; mutex_unlock(&phy->mutex);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jiang Liu gerry@linux.alibaba.com
[ Upstream commit e92f3f94cad24154fd3baae30c6dfb918492278d ]
Reset psp->cmd to NULL after releasing the buffer in function psp_sw_fini().
Reviewed-by: Lijo Lazar lijo.lazar@amd.com Signed-off-by: Jiang Liu gerry@linux.alibaba.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c index f8740ad08af41..a176b1da03bd3 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c @@ -484,7 +484,6 @@ static int psp_sw_fini(void *handle) { struct amdgpu_device *adev = (struct amdgpu_device *)handle; struct psp_context *psp = &adev->psp; - struct psp_gfx_cmd_resp *cmd = psp->cmd;
psp_memory_training_fini(psp); if (psp->sos_fw) { @@ -511,8 +510,8 @@ static int psp_sw_fini(void *handle) adev->ip_versions[MP0_HWIP][0] == IP_VERSION(11, 0, 7)) psp_sysfs_fini(adev);
- kfree(cmd); - cmd = NULL; + kfree(psp->cmd); + psp->cmd = NULL;
psp_free_shared_bufs(psp);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tom Chung chiahsuan.chung@amd.com
[ Upstream commit d8c782cac5007e68e7484d420168f12d3490def6 ]
[Why & How] The initial setting for psr_version is not correct when creating a virtual link.
The default psr_version should be DC_PSR_VERSION_UNSUPPORTED.
Reviewed-by: Roman Li roman.li@amd.com Signed-off-by: Tom Chung chiahsuan.chung@amd.com Signed-off-by: Zaeem Mohamed zaeem.mohamed@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/core/dc.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c index 2721842af8067..10672bb90a029 100644 --- a/drivers/gpu/drm/amd/display/dc/core/dc.c +++ b/drivers/gpu/drm/amd/display/dc/core/dc.c @@ -267,6 +267,7 @@ static bool create_links( link->link_id.type = OBJECT_TYPE_CONNECTOR; link->link_id.id = CONNECTOR_ID_VIRTUAL; link->link_id.enum_id = ENUM_ID_1; + link->psr_settings.psr_version = DC_PSR_VERSION_UNSUPPORTED; link->link_enc = kzalloc(sizeof(*link->link_enc), GFP_KERNEL);
if (!link->link_enc) {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Shiwu Zhang shiwu.zhang@amd.com
[ Upstream commit 667b96134c9e206aebe40985650bf478935cbe04 ]
Some chips have a larger VBIOS file so raise the size limit to support the flashing tool.
Signed-off-by: Shiwu Zhang shiwu.zhang@amd.com Reviewed-by: Hawking Zhang Hawking.Zhang@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c index a176b1da03bd3..ae6643c8ade6c 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c @@ -43,7 +43,7 @@ #include "amdgpu_securedisplay.h" #include "amdgpu_atomfirmware.h"
-#define AMD_VBIOS_FILE_MAX_SIZE_B (1024*1024*3) +#define AMD_VBIOS_FILE_MAX_SIZE_B (1024*1024*16)
static int psp_sysfs_init(struct amdgpu_device *adev); static void psp_sysfs_fini(struct amdgpu_device *adev);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Deucher alexander.deucher@amd.com
[ Upstream commit 33da70bd1e115d7d73f45fb1c09f5ecc448f3f13 ]
DC supports SW i2c as well. Drop the check.
Reviewed-by: Harry Wentland harry.wentland@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index 0d8c020cd1216..998dde73ecc67 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -7281,7 +7281,7 @@ static int amdgpu_dm_i2c_xfer(struct i2c_adapter *i2c_adap, int i; int result = -EIO;
- if (!ddc_service->ddc_pin || !ddc_service->ddc_pin->hw_info.hw_supported) + if (!ddc_service->ddc_pin) return result;
cmd.payloads = kcalloc(num, sizeof(struct i2c_payload), GFP_KERNEL);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexei Lazar alazar@nvidia.com
[ Upstream commit 95b9606b15bb3ce1198d28d2393dd0e1f0a5f3e9 ]
The current loopback test validation ignores the non-linear SKB case when accessing the SKB, which can lead to failures in scenarios such as when HW GRO is enabled. Linearize the SKB so that both cases are handled.
Signed-off-by: Alexei Lazar alazar@nvidia.com Reviewed-by: Dragos Tatulea dtatulea@nvidia.com Signed-off-by: Tariq Toukan tariqt@nvidia.com Link: https://patch.msgid.link/20250209101716.112774-15-tariqt@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c b/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c index 08a75654f5f18..c170503b3aace 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c @@ -165,6 +165,9 @@ mlx5e_test_loopback_validate(struct sk_buff *skb, struct udphdr *udph; struct iphdr *iph;
+ if (skb_linearize(skb)) + goto out; + /* We are only going to peek, no need to clone the SKB */ if (MLX5E_TEST_PKT_SIZE - ETH_HLEN > skb_headlen(skb)) goto out;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: William Tu witu@nvidia.com
[ Upstream commit a38cc5706fb9f7dc4ee3a443f61de13ce1e410ed ]
By default, the mq netdev creates a pfifo_fast qdisc. On a system with 16 cores, pfifo_fast with 3 bands consumes 16 * 3 * 8 (pointer size) * 1024 (default tx queue len) = 393KB. The patch sets the tx qlen to the representor default value, 128 (1 << MLX5E_REP_PARAMS_DEF_LOG_SQ_SIZE), which consumes 16 * 3 * 8 * 128 = 49KB, saving 344KB for each representor at the ECPF.
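For reference, a standalone userspace sketch of the arithmetic above (core count, band count and queue lengths are taken from the commit text; this is not driver code):

#include <stdio.h>

int main(void)
{
	unsigned long cores = 16, bands = 3, ptr_size = 8;
	unsigned long default_qlen = 1024;	/* default tx queue len */
	unsigned long rep_qlen = 128;		/* 1 << MLX5E_REP_PARAMS_DEF_LOG_SQ_SIZE, per the commit text */

	/* ~393KB and ~49KB respectively, using the commit's decimal rounding */
	printf("default qdisc memory: %lu bytes\n", cores * bands * ptr_size * default_qlen);
	printf("rep qdisc memory:     %lu bytes\n", cores * bands * ptr_size * rep_qlen);
	return 0;
}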
Signed-off-by: William Tu witu@nvidia.com Reviewed-by: Daniel Jurgens danielj@nvidia.com Signed-off-by: Tariq Toukan tariqt@nvidia.com Reviewed-by: Michal Swiatkowski michal.swiatkowski@linux.intel.com Link: https://patch.msgid.link/20250209101716.112774-9-tariqt@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c index 5aeca9534f15a..837524d1d2258 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c @@ -726,6 +726,8 @@ static void mlx5e_build_rep_netdev(struct net_device *netdev, netdev->ethtool_ops = &mlx5e_rep_ethtool_ops;
netdev->watchdog_timeo = 15 * HZ; + if (mlx5_core_is_ecpf(mdev)) + netdev->tx_queue_len = 1 << MLX5E_REP_PARAMS_DEF_LOG_SQ_SIZE;
#if IS_ENABLED(CONFIG_MLX5_CLS_ACT) netdev->hw_features |= NETIF_F_HW_TC;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: William Tu witu@nvidia.com
[ Upstream commit b9cc8f9d700867aaa77aedddfea85e53d5e5d584 ]
Experiments show that a single-queue representor netdev consumes around 2.8MB of kernel memory, 1.8MB of which is due to the page pool for the RXQ. Scaling to a thousand representors consumes 2.8GB, which becomes a memory pressure issue for embedded devices such as the BlueField-2 (16GB memory) / BlueField-3 (32GB memory).
Since representor netdevs mostly handle miss traffic, and ideally most of the traffic will be offloaded, reduce the default non-uplink rep netdev's RXQ depth from 1024 to 256 if mdev is the ECPF eswitch manager. This saves around 1MB of memory per regular RQ, (1024 - 256) * 2KB, allocated from the page pool.
With rxq depth of 256, the netlink page pool tool reports:

$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
	--dump page-pool-get
[{'id': 277, 'ifindex': 9, 'inflight': 128, 'inflight-mem': 786432, 'napi-id': 775}]
This is because an MTU of 1500 plus headroom consumes half a page, so 256 rxq entries consume around 128 pages (thus creating a page pool of size 128), as shown above in 'inflight'.
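A minimal sketch of the page accounting described above (illustrative only; assumes a 4K page holding two 1500-byte-MTU buffers):

#include <stdio.h>

int main(void)
{
	unsigned int rxq_entries = 256;		/* new default depth for ECPF reps */
	unsigned int entries_per_page = 2;	/* mtu 1500 + headroom ~= half a 4K page */

	/* 128 pages, matching the 'inflight' value in the page-pool dump above */
	printf("pages in pool: %u\n", rxq_entries / entries_per_page);
	return 0;
}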
Note that each netdev has multiple types of RQs, including the regular RQ, XSK, PTP, Drop, and Trap RQs. Since a non-uplink representor only supports the regular RQ, this patch only changes the regular RQ's default depth.
Signed-off-by: William Tu witu@nvidia.com Reviewed-by: Bodong Wang bodong@nvidia.com Reviewed-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Tariq Toukan tariqt@nvidia.com Reviewed-by: Michal Swiatkowski michal.swiatkowski@linux.intel.com Link: https://patch.msgid.link/20250209101716.112774-8-tariqt@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c index 837524d1d2258..b4980245b50b2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c @@ -61,6 +61,7 @@ #define MLX5E_REP_PARAMS_DEF_LOG_SQ_SIZE \ max(0x7, MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE) #define MLX5E_REP_PARAMS_DEF_NUM_CHANNELS 1 +#define MLX5E_REP_PARAMS_DEF_LOG_RQ_SIZE 0x8
static const char mlx5e_rep_driver_name[] = "mlx5e_rep";
@@ -705,6 +706,8 @@ static void mlx5e_build_rep_params(struct net_device *netdev)
/* RQ */ mlx5e_build_rq_params(mdev, params); + if (!mlx5e_is_uplink_rep(priv) && mlx5_core_is_ecpf(mdev)) + params->log_rq_mtu_frames = MLX5E_REP_PARAMS_DEF_LOG_RQ_SIZE;
/* CQ moderation params */ params->rx_dim_enabled = MLX5_CAP_GEN(mdev, cq_moderation);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Johannes Berg johannes.berg@intel.com
[ Upstream commit 1798271b3604b902d45033ec569f2bf77e94ecc2 ]
We might not have called drv_mgd_prepare_tx(), so only call drv_mgd_complete_tx() under the same conditions.
Signed-off-by: Johannes Berg johannes.berg@intel.com Reviewed-by: Emmanuel Grumbach emmanuel.grumbach@intel.com Signed-off-by: Miri Korenblit miriam.rachel.korenblit@intel.com Link: https://patch.msgid.link/20250205110958.e091fc39a351.Ie6a3cdca070612a0aa4b3c... Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/mac80211/mlme.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c index 9a5530ca2f6b2..8f0e6d7fe7167 100644 --- a/net/mac80211/mlme.c +++ b/net/mac80211/mlme.c @@ -2896,7 +2896,8 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata, if (tx) ieee80211_flush_queues(local, sdata, false);
- drv_mgd_complete_tx(sdata->local, sdata, &info); + if (tx || frame_buf) + drv_mgd_complete_tx(sdata->local, sdata, &info);
/* clear AP addr only after building the needed mgmt frames */ eth_zero_addr(sdata->deflink.u.mgd.bssid);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Johannes Berg johannes.berg@intel.com
[ Upstream commit f4995cdc4d02d0abc8e9fcccad5c71ce676c1e3f ]
In the original commit 15fae3410f1d ("mac80211: notify driver on mgd TX completion") I evidently made a mistake and placed the call in the "associated" if, rather than the "assoc_data". Later I noticed the missing call and placed it in commit c042600c17d8 ("wifi: mac80211: adding missing drv_mgd_complete_tx() call"), but didn't remove the wrong one. Remove it now.
Signed-off-by: Johannes Berg johannes.berg@intel.com Reviewed-by: Emmanuel Grumbach emmanuel.grumbach@intel.com Signed-off-by: Miri Korenblit miriam.rachel.korenblit@intel.com Link: https://patch.msgid.link/20250205110958.6ed954179bbf.Id8ef8835b7e6da3bf913c7... Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/mac80211/mlme.c | 1 - 1 file changed, 1 deletion(-)
diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c index 8f0e6d7fe7167..b300972c31500 100644 --- a/net/mac80211/mlme.c +++ b/net/mac80211/mlme.c @@ -7312,7 +7312,6 @@ int ieee80211_mgd_deauth(struct ieee80211_sub_if_data *sdata, ieee80211_report_disconnect(sdata, frame_buf, sizeof(frame_buf), true, req->reason_code, false); - drv_mgd_complete_tx(sdata->local, sdata, &info); return 0; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Athira Rajeev atrajeev@linux.vnet.ibm.com
[ Upstream commit 2ffb26afa64261139e608bf087a0c1fe24d76d4d ]
perf mem report sometimes aborts as below (in some corner cases) on powerpc:
# ./perf mem report 1>out
*** stack smashing detected ***: terminated
Aborted (core dumped)
The backtrace is as below:
__pthread_kill_implementation ()
raise ()
abort ()
__libc_message
__fortify_fail
__stack_chk_fail
hist_entry.lvl_snprintf
__sort__hpp_entry
__hist_entry__snprintf
hists.fprintf
cmd_report
cmd_mem
Snippet of code which triggers the issue from tools/perf/util/sort.c
static int hist_entry__lvl_snprintf(struct hist_entry *he, char *bf,
				    size_t size, unsigned int width)
{
	char out[64];

	perf_mem__lvl_scnprintf(out, sizeof(out), he->mem_info);
	return repsep_snprintf(bf, size, "%-*s", width, out);
}
The value of "out" is filled from perf_mem_data_src value. Debugging this further showed that for some corner cases, the value of "data_src" was pointing to wrong value. This resulted in bigger size of string and causing stack check fail.
The perf mem data source values are captured in the sample via isa207_get_mem_data_src function. The initial check is to fetch the type of sampled instruction. If the type of instruction is not valid (not a load/store instruction), the function returns.
Since 'commit e16fd7f2cb1a ("perf: Use sample_flags for data_src")', the data_src field is not initialized by the perf_sample_data_init() function. If the PMU driver doesn't set data_src to zero when the type is not valid, this results in an uninitialised value for data_src. The uninitialised data_src in turn caused the stack check failure followed by the abort of "perf mem report".
When requesting data source information in the sample, the instruction type is expected to be a load or store instruction. In ISA v3.0, due to a hardware limitation, there are corner cases where an instruction type other than load or store is observed. In ISA v3.0 and before, the values "0" and "7" are considered reserved. In ISA v3.1, value "7" is used to indicate "larx/stcx". Drop the sample if the instruction type holds a reserved value for this field, using an ISA version check. Initialize data_src to zero in isa207_get_mem_data_src() if the instruction type is not load/store.
Reported-by: Disha Goel disgoel@linux.vnet.ibm.com Signed-off-by: Athira Rajeev atrajeev@linux.vnet.ibm.com Signed-off-by: Madhavan Srinivasan maddy@linux.ibm.com Link: https://patch.msgid.link/20250121131621.39054-1-atrajeev@linux.vnet.ibm.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/powerpc/perf/core-book3s.c | 20 ++++++++++++++++++++ arch/powerpc/perf/isa207-common.c | 4 +++- 2 files changed, 23 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c index e3c31c771ce91..470d7715ecf4b 100644 --- a/arch/powerpc/perf/core-book3s.c +++ b/arch/powerpc/perf/core-book3s.c @@ -2229,6 +2229,10 @@ static struct pmu power_pmu = { #define PERF_SAMPLE_ADDR_TYPE (PERF_SAMPLE_ADDR | \ PERF_SAMPLE_PHYS_ADDR | \ PERF_SAMPLE_DATA_PAGE_SIZE) + +#define SIER_TYPE_SHIFT 15 +#define SIER_TYPE_MASK (0x7ull << SIER_TYPE_SHIFT) + /* * A counter has overflowed; update its count and record * things if requested. Note that interrupts are hard-disabled @@ -2297,6 +2301,22 @@ static void record_and_restart(struct perf_event *event, unsigned long val, is_kernel_addr(mfspr(SPRN_SIAR))) record = 0;
+ /* + * SIER[46-48] presents instruction type of the sampled instruction. + * In ISA v3.0 and before values "0" and "7" are considered reserved. + * In ISA v3.1, value "7" has been used to indicate "larx/stcx". + * Drop the sample if "type" has reserved values for this field with a + * ISA version check. + */ + if (event->attr.sample_type & PERF_SAMPLE_DATA_SRC && + ppmu->get_mem_data_src) { + val = (regs->dar & SIER_TYPE_MASK) >> SIER_TYPE_SHIFT; + if (val == 0 || (val == 7 && !cpu_has_feature(CPU_FTR_ARCH_31))) { + record = 0; + atomic64_inc(&event->lost_samples); + } + } + /* * Finally record data if requested. */ diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c index 56301b2bc8ae8..031a2b63c171d 100644 --- a/arch/powerpc/perf/isa207-common.c +++ b/arch/powerpc/perf/isa207-common.c @@ -321,8 +321,10 @@ void isa207_get_mem_data_src(union perf_mem_data_src *dsrc, u32 flags,
sier = mfspr(SPRN_SIER); val = (sier & ISA207_SIER_TYPE_MASK) >> ISA207_SIER_TYPE_SHIFT; - if (val != 1 && val != 2 && !(val == 7 && cpu_has_feature(CPU_FTR_ARCH_31))) + if (val != 1 && val != 2 && !(val == 7 && cpu_has_feature(CPU_FTR_ARCH_31))) { + dsrc->val = 0; return; + }
idx = (sier & ISA207_SIER_LDST_MASK) >> ISA207_SIER_LDST_SHIFT; sub_idx = (sier & ISA207_SIER_DATA_SRC_MASK) >> ISA207_SIER_DATA_SRC_SHIFT;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 5a1ccffd30a08f5a2428cd5fbb3ab03e8eb6c66d ]
The following patch will not set skb->sk from VRF path.
Let's fetch net from fib_rule->fr_net instead of sock_net(skb->sk) in fib[46]_rule_configure().
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Reviewed-by: Eric Dumazet edumazet@google.com Reviewed-by: Ido Schimmel idosch@nvidia.com Tested-by: Ido Schimmel idosch@nvidia.com Link: https://patch.msgid.link/20250207072502.87775-5-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/fib_rules.c | 4 ++-- net/ipv6/fib6_rules.c | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/net/ipv4/fib_rules.c b/net/ipv4/fib_rules.c index 513f475c6a534..298a9944a3d1e 100644 --- a/net/ipv4/fib_rules.c +++ b/net/ipv4/fib_rules.c @@ -222,9 +222,9 @@ static int fib4_rule_configure(struct fib_rule *rule, struct sk_buff *skb, struct nlattr **tb, struct netlink_ext_ack *extack) { - struct net *net = sock_net(skb->sk); + struct fib4_rule *rule4 = (struct fib4_rule *)rule; + struct net *net = rule->fr_net; int err = -EINVAL; - struct fib4_rule *rule4 = (struct fib4_rule *) rule;
if (!inet_validate_dscp(frh->tos)) { NL_SET_ERR_MSG(extack, diff --git a/net/ipv6/fib6_rules.c b/net/ipv6/fib6_rules.c index 6eeab21512ba9..e0f0c5f8cccda 100644 --- a/net/ipv6/fib6_rules.c +++ b/net/ipv6/fib6_rules.c @@ -350,9 +350,9 @@ static int fib6_rule_configure(struct fib_rule *rule, struct sk_buff *skb, struct nlattr **tb, struct netlink_ext_ack *extack) { + struct fib6_rule *rule6 = (struct fib6_rule *)rule; + struct net *net = rule->fr_net; int err = -EINVAL; - struct net *net = sock_net(skb->sk); - struct fib6_rule *rule6 = (struct fib6_rule *) rule;
if (!inet_validate_dscp(frh->tos)) { NL_SET_ERR_MSG(extack,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Aleksander Jan Bajkowski olek2@wp.pl
[ Upstream commit 848b09d53d923b4caee5491f57a5c5b22d81febc ]
The Dell AW1022z is an RTL8156B based 2.5G Ethernet controller.
Add the vendor and product ID values to the driver. This makes Ethernet work with the adapter.
Signed-off-by: Aleksander Jan Bajkowski olek2@wp.pl Link: https://patch.msgid.link/20250206224033.980115-1-olek2@wp.pl Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/usb/r8152.c | 1 + include/linux/usb/r8152.h | 1 + 2 files changed, 2 insertions(+)
diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c index 061a7a9afad04..c2b715541989b 100644 --- a/drivers/net/usb/r8152.c +++ b/drivers/net/usb/r8152.c @@ -9880,6 +9880,7 @@ static const struct usb_device_id rtl8152_table[] = { { USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff) }, { USB_DEVICE(VENDOR_ID_TPLINK, 0x0601) }, { USB_DEVICE(VENDOR_ID_DLINK, 0xb301) }, + { USB_DEVICE(VENDOR_ID_DELL, 0xb097) }, { USB_DEVICE(VENDOR_ID_ASUS, 0x1976) }, {} }; diff --git a/include/linux/usb/r8152.h b/include/linux/usb/r8152.h index 33a4c146dc19c..2ca60828f28bb 100644 --- a/include/linux/usb/r8152.h +++ b/include/linux/usb/r8152.h @@ -30,6 +30,7 @@ #define VENDOR_ID_NVIDIA 0x0955 #define VENDOR_ID_TPLINK 0x2357 #define VENDOR_ID_DLINK 0x2001 +#define VENDOR_ID_DELL 0x413c #define VENDOR_ID_ASUS 0x0b05
#if IS_REACHABLE(CONFIG_USB_RTL8152)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bitterblue Smith rtl8821cerfe2@gmail.com
[ Upstream commit 9e8243025cc06abc975c876dffda052073207ab3 ]
After the firmware is uploaded, download_firmware_validate() checks some bits in REG_MCUFW_CTRL to see if everything went okay. The RTL8814AU power on sequence sets bits 13 and 12 to 2, which this function does not expect, so it thinks the firmware upload failed.
Make download_firmware_validate() ignore bits 13 and 12.
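A quick standalone check of the resulting mask value (illustrative; BIT() is redefined here for userspace):

#include <stdio.h>

#define BIT(n) (1u << (n))

int main(void)
{
	unsigned int bit_cpu_clk_sel = BIT(12) | BIT(13);
	unsigned int fw_ready_mask = 0xffff & ~bit_cpu_clk_sel;

	/* 0xcfff: bits 13 and 12 are no longer part of the ready check */
	printf("FW_READY_MASK = 0x%x\n", fw_ready_mask);
	return 0;
}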
Signed-off-by: Bitterblue Smith rtl8821cerfe2@gmail.com Acked-by: Ping-Ke Shih pkshih@realtek.com Signed-off-by: Ping-Ke Shih pkshih@realtek.com Link: https://patch.msgid.link/049d2887-22fc-47b7-9e59-62627cb525f8@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/realtek/rtw88/reg.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/wireless/realtek/rtw88/reg.h b/drivers/net/wireless/realtek/rtw88/reg.h index 03bd8dc53f72a..08628ba3419da 100644 --- a/drivers/net/wireless/realtek/rtw88/reg.h +++ b/drivers/net/wireless/realtek/rtw88/reg.h @@ -107,6 +107,7 @@ #define BIT_SHIFT_ROM_PGE 16 #define BIT_FW_INIT_RDY BIT(15) #define BIT_FW_DW_RDY BIT(14) +#define BIT_CPU_CLK_SEL (BIT(12) | BIT(13)) #define BIT_RPWM_TOGGLE BIT(7) #define BIT_RAM_DL_SEL BIT(7) /* legacy only */ #define BIT_DMEM_CHKSUM_OK BIT(6) @@ -124,7 +125,7 @@ BIT_CHECK_SUM_OK) #define FW_READY_LEGACY (BIT_MCUFWDL_RDY | BIT_FWDL_CHK_RPT | \ BIT_WINTINI_RDY | BIT_RAM_DL_SEL) -#define FW_READY_MASK 0xffff +#define FW_READY_MASK (0xffff & ~BIT_CPU_CLK_SEL)
#define REG_MCU_TST_CFG 0x84 #define VAL_FW_TRIGGER 0x1
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jordan Crouse jorcrous@amazon.com
[ Upstream commit 52b10b591f83dc6d9a1d6c2dc89433470a787ecd ]
Update some RCGs on the sm8250 camera clock controller to use clk_rcg2_shared_ops. The shared_ops ensure the RCGs get parked to the XO during clock disable to prevent the clocks from locking up when the GDSC is enabled. These mirror similar fixes for other controllers such as commit e5c359f70e4b ("clk: qcom: camcc: Update the clock ops for the SC7180").
Signed-off-by: Jordan Crouse jorcrous@amazon.com Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Reviewed-by: Bryan O'Donoghue bryan.odonoghue@linaro.org Link: https://lore.kernel.org/r/20250122222612.32351-1-jorcrous@amazon.com Signed-off-by: Bjorn Andersson andersson@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/clk/qcom/camcc-sm8250.c | 56 ++++++++++++++++----------------- 1 file changed, 28 insertions(+), 28 deletions(-)
diff --git a/drivers/clk/qcom/camcc-sm8250.c b/drivers/clk/qcom/camcc-sm8250.c index 9b32c56a5bc5a..e29706d782870 100644 --- a/drivers/clk/qcom/camcc-sm8250.c +++ b/drivers/clk/qcom/camcc-sm8250.c @@ -411,7 +411,7 @@ static struct clk_rcg2 cam_cc_bps_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -433,7 +433,7 @@ static struct clk_rcg2 cam_cc_camnoc_axi_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -454,7 +454,7 @@ static struct clk_rcg2 cam_cc_cci_0_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -469,7 +469,7 @@ static struct clk_rcg2 cam_cc_cci_1_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -490,7 +490,7 @@ static struct clk_rcg2 cam_cc_cphy_rx_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -511,7 +511,7 @@ static struct clk_rcg2 cam_cc_csi0phytimer_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -526,7 +526,7 @@ static struct clk_rcg2 cam_cc_csi1phytimer_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -556,7 +556,7 @@ static struct clk_rcg2 cam_cc_csi3phytimer_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -571,7 +571,7 @@ static struct clk_rcg2 cam_cc_csi4phytimer_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -586,7 +586,7 @@ static struct clk_rcg2 cam_cc_csi5phytimer_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -611,7 +611,7 @@ static struct clk_rcg2 cam_cc_fast_ahb_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -634,7 +634,7 @@ static struct clk_rcg2 cam_cc_fd_core_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -649,7 +649,7 @@ static struct clk_rcg2 cam_cc_icp_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -673,7 +673,7 @@ static struct clk_rcg2 cam_cc_ife_0_clk_src = { .parent_data = cam_cc_parent_data_2, .num_parents = ARRAY_SIZE(cam_cc_parent_data_2), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -710,7 +710,7 @@ static struct clk_rcg2 cam_cc_ife_0_csid_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -734,7 +734,7 @@ static struct clk_rcg2 cam_cc_ife_1_clk_src = { .parent_data = cam_cc_parent_data_3, .num_parents = ARRAY_SIZE(cam_cc_parent_data_3), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -749,7 +749,7 @@ static struct clk_rcg2 cam_cc_ife_1_csid_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -771,7 +771,7 @@ static struct clk_rcg2 cam_cc_ife_lite_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -786,7 +786,7 @@ static struct clk_rcg2 cam_cc_ife_lite_csid_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -810,7 +810,7 @@ static struct clk_rcg2 cam_cc_ipe_0_clk_src = { .parent_data = cam_cc_parent_data_4, .num_parents = ARRAY_SIZE(cam_cc_parent_data_4), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -825,7 +825,7 @@ static struct clk_rcg2 cam_cc_jpeg_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -847,7 +847,7 @@ static struct clk_rcg2 cam_cc_mclk0_clk_src = { .parent_data = cam_cc_parent_data_1, .num_parents = ARRAY_SIZE(cam_cc_parent_data_1), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -862,7 +862,7 @@ static struct clk_rcg2 cam_cc_mclk1_clk_src = { .parent_data = cam_cc_parent_data_1, .num_parents = ARRAY_SIZE(cam_cc_parent_data_1), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -877,7 +877,7 @@ static struct clk_rcg2 cam_cc_mclk2_clk_src = { .parent_data = cam_cc_parent_data_1, .num_parents = ARRAY_SIZE(cam_cc_parent_data_1), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -892,7 +892,7 @@ static struct clk_rcg2 cam_cc_mclk3_clk_src = { .parent_data = cam_cc_parent_data_1, .num_parents = ARRAY_SIZE(cam_cc_parent_data_1), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -907,7 +907,7 @@ static struct clk_rcg2 cam_cc_mclk4_clk_src = { .parent_data = cam_cc_parent_data_1, .num_parents = ARRAY_SIZE(cam_cc_parent_data_1), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -922,7 +922,7 @@ static struct clk_rcg2 cam_cc_mclk5_clk_src = { .parent_data = cam_cc_parent_data_1, .num_parents = ARRAY_SIZE(cam_cc_parent_data_1), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
@@ -993,7 +993,7 @@ static struct clk_rcg2 cam_cc_slow_ahb_clk_src = { .parent_data = cam_cc_parent_data_0, .num_parents = ARRAY_SIZE(cam_cc_parent_data_0), .flags = CLK_SET_RATE_PARENT, - .ops = &clk_rcg2_ops, + .ops = &clk_rcg2_shared_ops, }, };
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrey Vatoropin a.vatoropin@crpt.ru
[ Upstream commit 8df0f002827e18632dcd986f7546c1abf1953a6f ]
The expression PCC_NUM_RETRIES * pcc_chan->latency is currently being evaluated using 32-bit arithmetic.
A value of type 'u64' is used to store the eventual result, but the result is later passed to usecs_to_jiffies(), whose input parameter is an unsigned int, so the current data type is wider than needed for ctx->usecs_lat.
Change the data type of "usecs_lat" to a more suitable (narrower) type.
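A standalone sketch of the computation in question (the retry count and latency below are hypothetical values, not taken from the driver):

#include <stdio.h>

int main(void)
{
	unsigned int pcc_num_retries = 500;	/* hypothetical */
	unsigned int latency_us = 1000;		/* hypothetical PCC channel latency */

	/* fits comfortably in 32 bits; usecs_to_jiffies() takes an unsigned int anyway */
	unsigned int usecs_lat = pcc_num_retries * latency_us;

	printf("usecs_lat = %u\n", usecs_lat);
	return 0;
}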
Found by Linux Verification Center (linuxtesting.org) with SVACE.
Signed-off-by: Andrey Vatoropin a.vatoropin@crpt.ru Link: https://lore.kernel.org/r/20250204095400.95013-1-a.vatoropin@crpt.ru Signed-off-by: Guenter Roeck linux@roeck-us.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hwmon/xgene-hwmon.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/hwmon/xgene-hwmon.c b/drivers/hwmon/xgene-hwmon.c index 207084d55044a..6768dbf390390 100644 --- a/drivers/hwmon/xgene-hwmon.c +++ b/drivers/hwmon/xgene-hwmon.c @@ -111,7 +111,7 @@ struct xgene_hwmon_dev {
phys_addr_t comm_base_addr; void *pcc_comm_addr; - u64 usecs_lat; + unsigned int usecs_lat; };
/*
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Depeng Shao quic_depengs@quicinc.com
[ Upstream commit 2f1361f862a68063f37362f1beb400e78e289581 ]
There is no CSID TPG on some SoCs, so the v4l2 ctrl in the CSID driver shouldn't be registered there. Check the supported TPG modes to determine whether the TPG hardware exists, and register the v4l2 ctrl for CSID only when the TPG hardware is present.
Signed-off-by: Depeng Shao quic_depengs@quicinc.com Signed-off-by: Hans Verkuil hverkuil@xs4all.nl Signed-off-by: Sasha Levin sashal@kernel.org --- .../media/platform/qcom/camss/camss-csid.c | 60 +++++++++++-------- 1 file changed, 35 insertions(+), 25 deletions(-)
diff --git a/drivers/media/platform/qcom/camss/camss-csid.c b/drivers/media/platform/qcom/camss/camss-csid.c index 6360314f04a63..b90e2e690f3aa 100644 --- a/drivers/media/platform/qcom/camss/camss-csid.c +++ b/drivers/media/platform/qcom/camss/camss-csid.c @@ -239,11 +239,13 @@ static int csid_set_stream(struct v4l2_subdev *sd, int enable) int ret;
if (enable) { - ret = v4l2_ctrl_handler_setup(&csid->ctrls); - if (ret < 0) { - dev_err(csid->camss->dev, - "could not sync v4l2 controls: %d\n", ret); - return ret; + if (csid->testgen.nmodes != CSID_PAYLOAD_MODE_DISABLED) { + ret = v4l2_ctrl_handler_setup(&csid->ctrls); + if (ret < 0) { + dev_err(csid->camss->dev, + "could not sync v4l2 controls: %d\n", ret); + return ret; + } }
if (!csid->testgen.enabled && @@ -318,7 +320,8 @@ static void csid_try_format(struct csid_device *csid, break;
case MSM_CSID_PAD_SRC: - if (csid->testgen_mode->cur.val == 0) { + if (csid->testgen.nmodes == CSID_PAYLOAD_MODE_DISABLED || + csid->testgen_mode->cur.val == 0) { /* Test generator is disabled, */ /* keep pad formats in sync */ u32 code = fmt->code; @@ -368,7 +371,8 @@ static int csid_enum_mbus_code(struct v4l2_subdev *sd,
code->code = csid->formats[code->index].code; } else { - if (csid->testgen_mode->cur.val == 0) { + if (csid->testgen.nmodes == CSID_PAYLOAD_MODE_DISABLED || + csid->testgen_mode->cur.val == 0) { struct v4l2_mbus_framefmt *sink_fmt;
sink_fmt = __csid_get_format(csid, sd_state, @@ -750,7 +754,8 @@ static int csid_link_setup(struct media_entity *entity,
/* If test generator is enabled */ /* do not allow a link from CSIPHY to CSID */ - if (csid->testgen_mode->cur.val != 0) + if (csid->testgen.nmodes != CSID_PAYLOAD_MODE_DISABLED && + csid->testgen_mode->cur.val != 0) return -EBUSY;
sd = media_entity_to_v4l2_subdev(remote->entity); @@ -843,24 +848,27 @@ int msm_csid_register_entity(struct csid_device *csid, MSM_CSID_NAME, csid->id); v4l2_set_subdevdata(sd, csid);
- ret = v4l2_ctrl_handler_init(&csid->ctrls, 1); - if (ret < 0) { - dev_err(dev, "Failed to init ctrl handler: %d\n", ret); - return ret; - } + if (csid->testgen.nmodes != CSID_PAYLOAD_MODE_DISABLED) { + ret = v4l2_ctrl_handler_init(&csid->ctrls, 1); + if (ret < 0) { + dev_err(dev, "Failed to init ctrl handler: %d\n", ret); + return ret; + }
- csid->testgen_mode = v4l2_ctrl_new_std_menu_items(&csid->ctrls, - &csid_ctrl_ops, V4L2_CID_TEST_PATTERN, - csid->testgen.nmodes, 0, 0, - csid->testgen.modes); + csid->testgen_mode = + v4l2_ctrl_new_std_menu_items(&csid->ctrls, + &csid_ctrl_ops, V4L2_CID_TEST_PATTERN, + csid->testgen.nmodes, 0, 0, + csid->testgen.modes);
- if (csid->ctrls.error) { - dev_err(dev, "Failed to init ctrl: %d\n", csid->ctrls.error); - ret = csid->ctrls.error; - goto free_ctrl; - } + if (csid->ctrls.error) { + dev_err(dev, "Failed to init ctrl: %d\n", csid->ctrls.error); + ret = csid->ctrls.error; + goto free_ctrl; + }
- csid->subdev.ctrl_handler = &csid->ctrls; + csid->subdev.ctrl_handler = &csid->ctrls; + }
ret = csid_init_formats(sd, NULL); if (ret < 0) { @@ -891,7 +899,8 @@ int msm_csid_register_entity(struct csid_device *csid, media_cleanup: media_entity_cleanup(&sd->entity); free_ctrl: - v4l2_ctrl_handler_free(&csid->ctrls); + if (csid->testgen.nmodes != CSID_PAYLOAD_MODE_DISABLED) + v4l2_ctrl_handler_free(&csid->ctrls);
return ret; } @@ -904,5 +913,6 @@ void msm_csid_unregister_entity(struct csid_device *csid) { v4l2_device_unregister_subdev(&csid->subdev); media_entity_cleanup(&csid->subdev.entity); - v4l2_ctrl_handler_free(&csid->ctrls); + if (csid->testgen.nmodes != CSID_PAYLOAD_MODE_DISABLED) + v4l2_ctrl_handler_free(&csid->ctrls); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit f6205f8215f12a96518ac9469ff76294ae7bd612 ]
The 'used' and 'updated' fields in the FDB entry structure can be accessed concurrently by multiple threads, leading to reports such as [1]. This can be reproduced using [2].
Suppress these reports by annotating these accesses using READ_ONCE() / WRITE_ONCE(); a minimal sketch of the pattern follows the reports below.
[1] BUG: KCSAN: data-race in vxlan_xmit / vxlan_xmit
write to 0xffff942604d263a8 of 8 bytes by task 286 on cpu 0:
 vxlan_xmit+0xb29/0x2380
 dev_hard_start_xmit+0x84/0x2f0
 __dev_queue_xmit+0x45a/0x1650
 packet_xmit+0x100/0x150
 packet_sendmsg+0x2114/0x2ac0
 __sys_sendto+0x318/0x330
 __x64_sys_sendto+0x76/0x90
 x64_sys_call+0x14e8/0x1c00
 do_syscall_64+0x9e/0x1a0
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
read to 0xffff942604d263a8 of 8 bytes by task 287 on cpu 2:
 vxlan_xmit+0xadf/0x2380
 dev_hard_start_xmit+0x84/0x2f0
 __dev_queue_xmit+0x45a/0x1650
 packet_xmit+0x100/0x150
 packet_sendmsg+0x2114/0x2ac0
 __sys_sendto+0x318/0x330
 __x64_sys_sendto+0x76/0x90
 x64_sys_call+0x14e8/0x1c00
 do_syscall_64+0x9e/0x1a0
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
value changed: 0x00000000fffbac6e -> 0x00000000fffbac6f
Reported by Kernel Concurrency Sanitizer on:
CPU: 2 UID: 0 PID: 287 Comm: mausezahn Not tainted 6.13.0-rc7-01544-gb4b270f11a02 #5
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-3.fc41 04/01/2014
[2]
#!/bin/bash

set +H
echo whitelist > /sys/kernel/debug/kcsan
echo !vxlan_xmit > /sys/kernel/debug/kcsan

ip link add name vx0 up type vxlan id 10010 dstport 4789 local 192.0.2.1
bridge fdb add 00:11:22:33:44:55 dev vx0 self static dst 198.51.100.1
taskset -c 0 mausezahn vx0 -a own -b 00:11:22:33:44:55 -c 0 -q &
taskset -c 2 mausezahn vx0 -a own -b 00:11:22:33:44:55 -c 0 -q &
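A minimal userspace sketch of the READ_ONCE()/WRITE_ONCE() pattern applied by this patch (simplified macros, illustrative only; the real definitions live in the kernel and the struct below is a stand-in, not the vxlan FDB code):

#include <stdio.h>

#define READ_ONCE(x)		(*(volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, val)	(*(volatile typeof(x) *)&(x) = (val))

struct fdb_entry {
	unsigned long used;	/* stand-in for the FDB 'used' field */
};

int main(void)
{
	struct fdb_entry f = { .used = 0 };
	unsigned long now = 12345;	/* stand-in for jiffies */

	/* lockless fast path: marked read and marked write, as in vxlan_find_mac() */
	if (READ_ONCE(f.used) != now)
		WRITE_ONCE(f.used, now);

	printf("used=%lu\n", f.used);
	return 0;
}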
Reviewed-by: Petr Machata petrm@nvidia.com Signed-off-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Eric Dumazet edumazet@google.com Reviewed-by: Nikolay Aleksandrov razor@blackwall.org Link: https://patch.msgid.link/20250204145549.1216254-2-idosch@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/vxlan/vxlan_core.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c index 0afd7eb976e6c..ef61eab81707c 100644 --- a/drivers/net/vxlan/vxlan_core.c +++ b/drivers/net/vxlan/vxlan_core.c @@ -274,9 +274,9 @@ static int vxlan_fdb_info(struct sk_buff *skb, struct vxlan_dev *vxlan, be32_to_cpu(fdb->vni))) goto nla_put_failure;
- ci.ndm_used = jiffies_to_clock_t(now - fdb->used); + ci.ndm_used = jiffies_to_clock_t(now - READ_ONCE(fdb->used)); ci.ndm_confirmed = 0; - ci.ndm_updated = jiffies_to_clock_t(now - fdb->updated); + ci.ndm_updated = jiffies_to_clock_t(now - READ_ONCE(fdb->updated)); ci.ndm_refcnt = 0;
if (nla_put(skb, NDA_CACHEINFO, sizeof(ci), &ci)) @@ -482,8 +482,8 @@ static struct vxlan_fdb *vxlan_find_mac(struct vxlan_dev *vxlan, struct vxlan_fdb *f;
f = __vxlan_find_mac(vxlan, mac, vni); - if (f && f->used != jiffies) - f->used = jiffies; + if (f && READ_ONCE(f->used) != jiffies) + WRITE_ONCE(f->used, jiffies);
return f; } @@ -1057,12 +1057,12 @@ static int vxlan_fdb_update_existing(struct vxlan_dev *vxlan, !(f->flags & NTF_VXLAN_ADDED_BY_USER)) { if (f->state != state) { f->state = state; - f->updated = jiffies; + WRITE_ONCE(f->updated, jiffies); notify = 1; } if (f->flags != fdb_flags) { f->flags = fdb_flags; - f->updated = jiffies; + WRITE_ONCE(f->updated, jiffies); notify = 1; } } @@ -1096,7 +1096,7 @@ static int vxlan_fdb_update_existing(struct vxlan_dev *vxlan, }
if (ndm_flags & NTF_USE) - f->used = jiffies; + WRITE_ONCE(f->used, jiffies);
if (notify) { if (rd == NULL) @@ -1525,7 +1525,7 @@ static bool vxlan_snoop(struct net_device *dev, src_mac, &rdst->remote_ip.sa, &src_ip->sa);
rdst->remote_ip = *src_ip; - f->updated = jiffies; + WRITE_ONCE(f->updated, jiffies); vxlan_fdb_notify(vxlan, f, rdst, RTM_NEWNEIGH, true, NULL); } else { u32 hash_index = fdb_head_index(vxlan, src_mac, vni); @@ -2936,7 +2936,7 @@ static void vxlan_cleanup(struct timer_list *t) if (f->flags & NTF_EXT_LEARNED) continue;
- timeout = f->used + vxlan->cfg.age_interval * HZ; + timeout = READ_ONCE(f->used) + vxlan->cfg.age_interval * HZ; if (time_before_eq(timeout, jiffies)) { netdev_dbg(vxlan->dev, "garbage collect %pM\n",
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Heiner Kallweit hkallweit1@gmail.com
[ Upstream commit faac69a4ae5abb49e62c79c66b51bb905c9aa5ec ]
The PHY address is a dummy, because r8169 PHY access registers don't support a PHY address. Therefore scan address 0 only.
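A standalone sketch of the mask arithmetic (GENMASK() is redefined here in a simplified 32-bit form; in mdiobus, a set bit in phy_mask means that address is skipped during the scan):

#include <stdio.h>

#define GENMASK(h, l)	((~0u << (l)) & (~0u >> (31 - (h))))

int main(void)
{
	unsigned int phy_mask = GENMASK(31, 1);	/* bits 1..31 set */

	/* 0xfffffffe: every address except 0 is masked off, so only address 0 is scanned */
	printf("phy_mask = 0x%x\n", phy_mask);
	return 0;
}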
Signed-off-by: Heiner Kallweit hkallweit1@gmail.com Reviewed-by: Andrew Lunn andrew@lunn.ch Link: https://patch.msgid.link/830637dd-4016-4a68-92b3-618fcac6589d@gmail.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/realtek/r8169_main.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c index 4b461e93ffe9d..6346821d480bd 100644 --- a/drivers/net/ethernet/realtek/r8169_main.c +++ b/drivers/net/ethernet/realtek/r8169_main.c @@ -5156,6 +5156,7 @@ static int r8169_mdio_register(struct rtl8169_private *tp) new_bus->priv = tp; new_bus->parent = &pdev->dev; new_bus->irq[0] = PHY_MAC_INTERRUPT; + new_bus->phy_mask = GENMASK(31, 1); snprintf(new_bus->id, MII_BUS_ID_SIZE, "r8169-%x-%x", pci_domain_nr(pdev->bus), pci_dev_id(pdev));
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ankur Arora ankur.a.arora@oracle.com
[ Upstream commit 83b28cfe796464ebbde1cf7916c126da6d572685 ]
With PREEMPT_RCU=n, cond_resched() provides urgently needed quiescent states for read-side critical sections via rcu_all_qs(). One reason why this was needed: lacking preempt-count, the tick handler has no way of knowing whether it is executing in a read-side critical section or not.
With (PREEMPT_LAZY=y, PREEMPT_DYNAMIC=n), we get (PREEMPT_COUNT=y, PREEMPT_RCU=n). In this configuration cond_resched() is a stub and does not provide quiescent states via rcu_all_qs(). (PREEMPT_RCU=y provides this information via rcu_read_unlock() and its nesting counter.)
So, use the availability of preempt_count() to report quiescent states in rcu_flavor_sched_clock_irq().
Suggested-by: Paul E. McKenney paulmck@kernel.org Reviewed-by: Sebastian Andrzej Siewior bigeasy@linutronix.de Signed-off-by: Ankur Arora ankur.a.arora@oracle.com Reviewed-by: Frederic Weisbecker frederic@kernel.org Signed-off-by: Paul E. McKenney paulmck@kernel.org Signed-off-by: Boqun Feng boqun.feng@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/rcu/tree_plugin.h | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h index 044026abfdd7f..4f45562be7b54 100644 --- a/kernel/rcu/tree_plugin.h +++ b/kernel/rcu/tree_plugin.h @@ -963,13 +963,16 @@ static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp) */ static void rcu_flavor_sched_clock_irq(int user) { - if (user || rcu_is_cpu_rrupt_from_idle()) { + if (user || rcu_is_cpu_rrupt_from_idle() || + (IS_ENABLED(CONFIG_PREEMPT_COUNT) && + (preempt_count() == HARDIRQ_OFFSET))) {
/* * Get here if this CPU took its interrupt from user - * mode or from the idle loop, and if this is not a - * nested interrupt. In this case, the CPU is in - * a quiescent state, so note it. + * mode, from the idle loop without this being a nested + * interrupt, or while not holding the task preempt count + * (with PREEMPT_COUNT=y). In this case, the CPU is in a + * quiescent state, so note it. * * No memory barrier is required here because rcu_qs() * references only CPU-local variables that other CPUs
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ankur Arora ankur.a.arora@oracle.com
[ Upstream commit fcf0e25ad4c8d14d2faab4d9a17040f31efce205 ]
rcu_read_unlock_strict() can be called with preemption enabled which can make for an unstable rdp and a racy norm value.
Fix this by dropping the preempt-count in __rcu_read_unlock() after the call to rcu_read_unlock_strict(), adjusting the preempt-count check appropriately.
Suggested-by: Frederic Weisbecker frederic@kernel.org Signed-off-by: Ankur Arora ankur.a.arora@oracle.com Reviewed-by: Frederic Weisbecker frederic@kernel.org Signed-off-by: Paul E. McKenney paulmck@kernel.org Signed-off-by: Boqun Feng boqun.feng@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/rcupdate.h | 2 +- kernel/rcu/tree_plugin.h | 11 ++++++++++- 2 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index aef8c7304d45d..d1c35009831b6 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -97,9 +97,9 @@ static inline void __rcu_read_lock(void)
static inline void __rcu_read_unlock(void) { - preempt_enable(); if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD)) rcu_read_unlock_strict(); + preempt_enable(); }
static inline int rcu_preempt_depth(void) diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h index 4f45562be7b54..3929ef8148c10 100644 --- a/kernel/rcu/tree_plugin.h +++ b/kernel/rcu/tree_plugin.h @@ -821,8 +821,17 @@ void rcu_read_unlock_strict(void) { struct rcu_data *rdp;
- if (irqs_disabled() || preempt_count() || !rcu_state.gp_kthread) + if (irqs_disabled() || in_atomic_preempt_off() || !rcu_state.gp_kthread) return; + + /* + * rcu_report_qs_rdp() can only be invoked with a stable rdp and + * from the local CPU. + * + * The in_atomic_preempt_off() check ensures that we come here holding + * the last preempt_count (which will get dropped once we return to + * __rcu_read_unlock(). + */ rdp = this_cpu_ptr(&rcu_data); rdp->cpu_no_qs.b.norm = false; rcu_report_qs_rdp(rdp);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ankur Arora ankur.a.arora@oracle.com
[ Upstream commit ad6b5b73ff565e88aca7a7d1286788d80c97ba71 ]
rcu_all_qs() is defined for !CONFIG_PREEMPT_RCU but the declaration is conditioned on CONFIG_PREEMPTION.
With CONFIG_PREEMPT_LAZY, CONFIG_PREEMPTION=y does not imply CONFIG_PREEMPT_RCU=y.
Decouple the two.
Cc: Paul E. McKenney paulmck@kernel.org Reviewed-by: Frederic Weisbecker frederic@kernel.org Reviewed-by: Sebastian Andrzej Siewior bigeasy@linutronix.de Signed-off-by: Ankur Arora ankur.a.arora@oracle.com Signed-off-by: Paul E. McKenney paulmck@kernel.org Signed-off-by: Boqun Feng boqun.feng@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/rcutree.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h index 5efb51486e8af..54483d5e6f918 100644 --- a/include/linux/rcutree.h +++ b/include/linux/rcutree.h @@ -105,7 +105,7 @@ extern int rcu_scheduler_active; void rcu_end_inkernel_boot(void); bool rcu_inkernel_boot_has_ended(void); bool rcu_is_watching(void); -#ifndef CONFIG_PREEMPTION +#ifndef CONFIG_PREEMPT_RCU void rcu_all_qs(void); #endif
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Peter Zijlstra (Intel) peterz@infradead.org
[ Upstream commit 8ce939a0fa194939cc1f92dbd8bc1a7806e7d40a ]
The event may have been updated in the PMU-specific implementation, e.g., Intel PEBS counters snapshotting. The common code should not read and overwrite the value.
The PERF_SAMPLE_READ flag in data->sample_type can be used to detect whether the PMU-specific value is available. If so, avoid the pmu->read() in the common code. Add a new flag, skip_read, to track this case.
Factor out a perf_pmu_read() to clean up the code.
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Kan Liang kan.liang@linux.intel.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Link: https://lkml.kernel.org/r/20250121152303.3128733-3-kan.liang@linux.intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/perf_event.h | 8 +++++++- kernel/events/core.c | 33 ++++++++++++++++----------------- kernel/events/ring_buffer.c | 1 + 3 files changed, 24 insertions(+), 18 deletions(-)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h index 27b694552d58b..41ff70f315a92 100644 --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -935,7 +935,13 @@ struct perf_output_handle { struct perf_buffer *rb; unsigned long wakeup; unsigned long size; - u64 aux_flags; + union { + u64 flags; /* perf_output*() */ + u64 aux_flags; /* perf_aux_output*() */ + struct { + u64 skip_read : 1; + }; + }; union { void *addr; unsigned long head; diff --git a/kernel/events/core.c b/kernel/events/core.c index 8fc2bc5646ee2..552bb00bfceb0 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -1180,6 +1180,12 @@ static void perf_event_ctx_deactivate(struct perf_event_context *ctx) list_del_init(&ctx->active_ctx_list); }
+static inline void perf_pmu_read(struct perf_event *event) +{ + if (event->state == PERF_EVENT_STATE_ACTIVE) + event->pmu->read(event); +} + static void get_ctx(struct perf_event_context *ctx) { refcount_inc(&ctx->refcount); @@ -3389,8 +3395,7 @@ static void __perf_event_sync_stat(struct perf_event *event, * we know the event must be on the current CPU, therefore we * don't need to use it. */ - if (event->state == PERF_EVENT_STATE_ACTIVE) - event->pmu->read(event); + perf_pmu_read(event);
perf_event_update_time(event);
@@ -4444,15 +4449,8 @@ static void __perf_event_read(void *info)
pmu->read(event);
- for_each_sibling_event(sub, event) { - if (sub->state == PERF_EVENT_STATE_ACTIVE) { - /* - * Use sibling's PMU rather than @event's since - * sibling could be on different (eg: software) PMU. - */ - sub->pmu->read(sub); - } - } + for_each_sibling_event(sub, event) + perf_pmu_read(sub);
data->ret = pmu->commit_txn(pmu);
@@ -7101,9 +7099,8 @@ static void perf_output_read_group(struct perf_output_handle *handle, if (read_format & PERF_FORMAT_TOTAL_TIME_RUNNING) values[n++] = running;
- if ((leader != event) && - (leader->state == PERF_EVENT_STATE_ACTIVE)) - leader->pmu->read(leader); + if ((leader != event) && !handle->skip_read) + perf_pmu_read(leader);
values[n++] = perf_event_count(leader); if (read_format & PERF_FORMAT_ID) @@ -7116,9 +7113,8 @@ static void perf_output_read_group(struct perf_output_handle *handle, for_each_sibling_event(sub, leader) { n = 0;
- if ((sub != event) && - (sub->state == PERF_EVENT_STATE_ACTIVE)) - sub->pmu->read(sub); + if ((sub != event) && !handle->skip_read) + perf_pmu_read(sub);
values[n++] = perf_event_count(sub); if (read_format & PERF_FORMAT_ID) @@ -7173,6 +7169,9 @@ void perf_output_sample(struct perf_output_handle *handle, { u64 sample_type = data->type;
+ if (data->sample_flags & PERF_SAMPLE_READ) + handle->skip_read = 1; + perf_output_put(handle, *header);
if (sample_type & PERF_SAMPLE_IDENTIFIER) diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c index 3e1655374c2ed..4c4894de6e5d1 100644 --- a/kernel/events/ring_buffer.c +++ b/kernel/events/ring_buffer.c @@ -181,6 +181,7 @@ __perf_output_begin(struct perf_output_handle *handle,
handle->rb = rb; handle->event = event; + handle->flags = 0;
have_lost = local_read(&rb->lost); if (unlikely(have_lost)) {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michal Swiatkowski michal.swiatkowski@linux.intel.com
[ Upstream commit c3a392bdd31adc474f1009ee85c13fdd01fe800d ]
The previous implementation assumed a 1:1 mapping between vectors and queues, which isn't always true.
Take the minimum of the Rx and Tx queue counts per vector to determine the number of combined queues.
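A standalone sketch of the per-vector counting described above (the ring counts are hypothetical and only illustrate a mapping that is not 1:1):

#include <stdio.h>

struct q_vector {
	unsigned int num_ring_tx;
	unsigned int num_ring_rx;
};

int main(void)
{
	struct q_vector vecs[] = { { 2, 1 }, { 1, 1 }, { 1, 0 } };	/* hypothetical */
	unsigned int i, combined = 0;

	for (i = 0; i < sizeof(vecs) / sizeof(vecs[0]); i++) {
		unsigned int tx = vecs[i].num_ring_tx, rx = vecs[i].num_ring_rx;

		combined += tx < rx ? tx : rx;	/* min(tx, rx), as in ice_get_combined_cnt() */
	}

	printf("combined = %u\n", combined);	/* 2 with these values */
	return 0;
}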
Reviewed-by: Jacob Keller jacob.e.keller@intel.com Tested-by: Pucha Himasekhar Reddy himasekharx.reddy.pucha@intel.com Signed-off-by: Michal Swiatkowski michal.swiatkowski@linux.intel.com Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/intel/ice/ice_ethtool.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c index a163e7717a534..1f62d11831567 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool.c +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c @@ -3373,8 +3373,7 @@ static u32 ice_get_combined_cnt(struct ice_vsi *vsi) ice_for_each_q_vector(vsi, q_idx) { struct ice_q_vector *q_vector = vsi->q_vectors[q_idx];
- if (q_vector->rx.rx_ring && q_vector->tx.tx_ring) - combined++; + combined += min(q_vector->num_ring_tx, q_vector->num_ring_rx); }
return combined;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Konstantin Taranov kotaranov@microsoft.com
[ Upstream commit 5ec7e1c86c441c46a374577bccd9488abea30037 ]
Do not warn on missing pad_data when oob is in sgl.
Signed-off-by: Konstantin Taranov kotaranov@microsoft.com Link: https://patch.msgid.link/1737394039-28772-9-git-send-email-kotaranov@linux.m... Reviewed-by: Shiraz Saleem shirazsaleem@microsoft.com Reviewed-by: Long Li longli@microsoft.com Signed-off-by: Leon Romanovsky leon@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/microsoft/mana/gdma_main.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c index d674ebda2053d..9e55679796d93 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c @@ -995,7 +995,7 @@ static u32 mana_gd_write_client_oob(const struct gdma_wqe_request *wqe_req, header->inline_oob_size_div4 = client_oob_size / sizeof(u32);
if (oob_in_sgl) { - WARN_ON_ONCE(!pad_data || wqe_req->num_sge < 2); + WARN_ON_ONCE(wqe_req->num_sge < 2);
header->client_oob_in_sgl = 1;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Justin Tee justin.tee@broadcom.com
[ Upstream commit 56c3d809b7b450379162d0b8a70bbe71ab8db706 ]
After a port swap between separate fabrics, there may be multiple nodes in the vport's fc_nodes list with the same fabric well-known address. The duplication is temporary and eventually resolves itself after dev_loss_tmo expires, but nameserver queries may still occur before dev_loss_tmo. This can result in returning stale fabric ndlp objects. Fix by adding an nlp_state check to ensure the ndlp search routine returns the correct, newly allocated fabric ndlp object.
Signed-off-by: Justin Tee justin.tee@broadcom.com Link: https://lore.kernel.org/r/20250131000524.163662-5-justintee8345@gmail.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/lpfc/lpfc_hbadisc.c | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-)
diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c index 57be02f8d5c18..b04112c77fcd1 100644 --- a/drivers/scsi/lpfc/lpfc_hbadisc.c +++ b/drivers/scsi/lpfc/lpfc_hbadisc.c @@ -5619,6 +5619,7 @@ static struct lpfc_nodelist * __lpfc_findnode_did(struct lpfc_vport *vport, uint32_t did) { struct lpfc_nodelist *ndlp; + struct lpfc_nodelist *np = NULL; uint32_t data1;
list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) { @@ -5633,14 +5634,20 @@ __lpfc_findnode_did(struct lpfc_vport *vport, uint32_t did) ndlp, ndlp->nlp_DID, ndlp->nlp_flag, data1, ndlp->nlp_rpi, ndlp->active_rrqs_xri_bitmap); - return ndlp; + + /* Check for new or potentially stale node */ + if (ndlp->nlp_state != NLP_STE_UNUSED_NODE) + return ndlp; + np = ndlp; } }
- /* FIND node did <did> NOT FOUND */ - lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE, - "0932 FIND node did x%x NOT FOUND.\n", did); - return NULL; + if (!np) + /* FIND node did <did> NOT FOUND */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE, + "0932 FIND node did x%x NOT FOUND.\n", did); + + return np; }
struct lpfc_nodelist *
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Justin Tee justin.tee@broadcom.com
[ Upstream commit f0842902b383982d1f72c490996aa8fc29a7aa0d ]
Fix smatch warning regarding missed calls to free_irq(). Free the phba IRQ in the failed pci_irq_vector cases.
lpfc_init.c: lpfc_sli4_enable_msi() warn: 'phba->pcidev->irq' from request_irq() not released.
Signed-off-by: Justin Tee justin.tee@broadcom.com Link: https://lore.kernel.org/r/20250131000524.163662-3-justintee8345@gmail.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/lpfc/lpfc_init.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c index 1a0bafde34d86..97f3c5240d572 100644 --- a/drivers/scsi/lpfc/lpfc_init.c +++ b/drivers/scsi/lpfc/lpfc_init.c @@ -13204,6 +13204,7 @@ lpfc_sli4_enable_msi(struct lpfc_hba *phba) eqhdl = lpfc_get_eq_hdl(0); rc = pci_irq_vector(phba->pcidev, 0); if (rc < 0) { + free_irq(phba->pcidev->irq, phba); pci_free_irq_vectors(phba->pcidev); lpfc_printf_log(phba, KERN_WARNING, LOG_INIT, "0496 MSI pci_irq_vec failed (%d)\n", rc); @@ -13284,6 +13285,7 @@ lpfc_sli4_enable_intr(struct lpfc_hba *phba, uint32_t cfg_mode) eqhdl = lpfc_get_eq_hdl(0); retval = pci_irq_vector(phba->pcidev, 0); if (retval < 0) { + free_irq(phba->pcidev->irq, phba); lpfc_printf_log(phba, KERN_WARNING, LOG_INIT, "0502 INTR pci_irq_vec failed (%d)\n", retval);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kai Mäkisara Kai.Makisara@kolumbus.fi
[ Upstream commit 7081dc75df79696d8322d01821c28e53416c932c ]
Some of the allowed operations put the tape into a known position and continue operation assuming that only the tape position has changed. But a reset sets the partition, density and block size to drive default values. These should be restored to the values they had before the reset.
Normally the current block size and density are stored by the drive. If the settings have been changed, the changed values have to be saved by the driver across reset.
Signed-off-by: Kai Mäkisara Kai.Makisara@kolumbus.fi Link: https://lore.kernel.org/r/20250120194925.44432-2-Kai.Makisara@kolumbus.fi Reviewed-by: John Meneghini jmeneghi@redhat.com Tested-by: John Meneghini jmeneghi@redhat.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/st.c | 24 +++++++++++++++++++++--- drivers/scsi/st.h | 2 ++ 2 files changed, 23 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c index 3ff4e6d44db88..9ba5ad106b653 100644 --- a/drivers/scsi/st.c +++ b/drivers/scsi/st.c @@ -950,7 +950,6 @@ static void reset_state(struct scsi_tape *STp) STp->partition = find_partition(STp); if (STp->partition < 0) STp->partition = 0; - STp->new_partition = STp->partition; } } @@ -2919,14 +2918,17 @@ static int st_int_ioctl(struct scsi_tape *STp, unsigned int cmd_in, unsigned lon if (cmd_in == MTSETDENSITY) { (STp->buffer)->b_data[4] = arg; STp->density_changed = 1; /* At least we tried ;-) */ + STp->changed_density = arg; } else if (cmd_in == SET_DENS_AND_BLK) (STp->buffer)->b_data[4] = arg >> 24; else (STp->buffer)->b_data[4] = STp->density; if (cmd_in == MTSETBLK || cmd_in == SET_DENS_AND_BLK) { ltmp = arg & MT_ST_BLKSIZE_MASK; - if (cmd_in == MTSETBLK) + if (cmd_in == MTSETBLK) { STp->blksize_changed = 1; /* At least we tried ;-) */ + STp->changed_blksize = arg; + } } else ltmp = STp->block_size; (STp->buffer)->b_data[9] = (ltmp >> 16); @@ -3627,9 +3629,25 @@ static long st_ioctl(struct file *file, unsigned int cmd_in, unsigned long arg) retval = (-EIO); goto out; } - reset_state(STp); + reset_state(STp); /* Clears pos_unknown */ /* remove this when the midlevel properly clears was_reset */ STp->device->was_reset = 0; + + /* Fix the device settings after reset, ignore errors */ + if (mtc.mt_op == MTREW || mtc.mt_op == MTSEEK || + mtc.mt_op == MTEOM) { + if (STp->can_partitions) { + /* STp->new_partition contains the + * latest partition set + */ + STp->partition = 0; + switch_partition(STp); + } + if (STp->density_changed) + st_int_ioctl(STp, MTSETDENSITY, STp->changed_density); + if (STp->blksize_changed) + st_int_ioctl(STp, MTSETBLK, STp->changed_blksize); + } }
if (mtc.mt_op != MTNOP && mtc.mt_op != MTSETBLK && diff --git a/drivers/scsi/st.h b/drivers/scsi/st.h index 7a68eaba7e810..2105c6a5b4586 100644 --- a/drivers/scsi/st.h +++ b/drivers/scsi/st.h @@ -165,12 +165,14 @@ struct scsi_tape { unsigned char compression_changed; unsigned char drv_buffer; unsigned char density; + unsigned char changed_density; unsigned char door_locked; unsigned char autorew_dev; /* auto-rewind device */ unsigned char rew_at_close; /* rewind necessary at close */ unsigned char inited; unsigned char cleaning_req; /* cleaning requested? */ int block_size; + int changed_blksize; int min_block; int max_block; int recover_count; /* From tape opening */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: junan junan76@163.com
[ Upstream commit d73a4bfa2881a6859b384b75a414c33d4898b055 ]
Since "LED_KANA" was defined as "0x04", the shift number should be "4".
Signed-off-by: junan junan76@163.com Signed-off-by: Jiri Kosina jkosina@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hid/usbhid/usbkbd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/hid/usbhid/usbkbd.c b/drivers/hid/usbhid/usbkbd.c index c439ed2f16dbc..af6bc76dbf649 100644 --- a/drivers/hid/usbhid/usbkbd.c +++ b/drivers/hid/usbhid/usbkbd.c @@ -160,7 +160,7 @@ static int usb_kbd_event(struct input_dev *dev, unsigned int type, return -1;
spin_lock_irqsave(&kbd->leds_lock, flags); - kbd->newleds = (!!test_bit(LED_KANA, dev->led) << 3) | (!!test_bit(LED_COMPOSE, dev->led) << 3) | + kbd->newleds = (!!test_bit(LED_KANA, dev->led) << 4) | (!!test_bit(LED_COMPOSE, dev->led) << 3) | (!!test_bit(LED_SCROLLL, dev->led) << 2) | (!!test_bit(LED_CAPSL, dev->led) << 1) | (!!test_bit(LED_NUML, dev->led));
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Cezary Rojewski cezary.rojewski@intel.com
[ Upstream commit 7d92a38d67e5d937b64b20aa4fd14451ee1772f3 ]
As per codec device specification, 24-bit is allowed in provider mode. Update the code to reflect that.
Signed-off-by: Cezary Rojewski cezary.rojewski@intel.com Link: https://patch.msgid.link/20250203141051.2361323-4-cezary.rojewski@intel.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/codecs/pcm3168a.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/sound/soc/codecs/pcm3168a.c b/sound/soc/codecs/pcm3168a.c index 9d6431338fb71..329549936bd5c 100644 --- a/sound/soc/codecs/pcm3168a.c +++ b/sound/soc/codecs/pcm3168a.c @@ -494,9 +494,9 @@ static int pcm3168a_hw_params(struct snd_pcm_substream *substream, } break; case 24: - if (provider_mode || (format == SND_SOC_DAIFMT_DSP_A) || - (format == SND_SOC_DAIFMT_DSP_B)) { - dev_err(component->dev, "24-bit slots not supported in provider mode, or consumer mode using DSP\n"); + if (!provider_mode && ((format == SND_SOC_DAIFMT_DSP_A) || + (format == SND_SOC_DAIFMT_DSP_B))) { + dev_err(component->dev, "24-bit slots not supported in consumer mode using DSP\n"); return -EINVAL; } break;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Zimmermann tzimmermann@suse.de
[ Upstream commit c81202906b5cd56db403e95db3d29c9dfc8c74c1 ]
The ast driver looks up supplied display modes from an internal list of display modes supported by the VBIOS.
Do not use the crtc_-prefixed display values from struct drm_display_mode for looking up the VBIOS mode. The fields contain raw values that the driver programs to hardware. They are affected by display settings like double-scan or interlace.
Instead use the regular vdisplay and hdisplay fields for lookup. As the programmed values can now differ from the values used for lookup, set struct drm_display_mode.crtc_vdisplay and .crtc_hdisplay from the VBIOS mode.
Signed-off-by: Thomas Zimmermann tzimmermann@suse.de Reviewed-by: Jocelyn Falempe jfalempe@redhat.com Link: https://patchwork.freedesktop.org/patch/msgid/20250131092257.115596-9-tzimme... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/ast/ast_mode.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c index 1bc0220e6783e..9fe856fd8a84f 100644 --- a/drivers/gpu/drm/ast/ast_mode.c +++ b/drivers/gpu/drm/ast/ast_mode.c @@ -103,7 +103,7 @@ static bool ast_get_vbios_mode_info(const struct drm_format_info *format, return false; }
- switch (mode->crtc_hdisplay) { + switch (mode->hdisplay) { case 640: vbios_mode->enh_table = &res_640x480[refresh_rate_index]; break; @@ -117,7 +117,7 @@ static bool ast_get_vbios_mode_info(const struct drm_format_info *format, vbios_mode->enh_table = &res_1152x864[refresh_rate_index]; break; case 1280: - if (mode->crtc_vdisplay == 800) + if (mode->vdisplay == 800) vbios_mode->enh_table = &res_1280x800[refresh_rate_index]; else vbios_mode->enh_table = &res_1280x1024[refresh_rate_index]; @@ -129,7 +129,7 @@ static bool ast_get_vbios_mode_info(const struct drm_format_info *format, vbios_mode->enh_table = &res_1440x900[refresh_rate_index]; break; case 1600: - if (mode->crtc_vdisplay == 900) + if (mode->vdisplay == 900) vbios_mode->enh_table = &res_1600x900[refresh_rate_index]; else vbios_mode->enh_table = &res_1600x1200[refresh_rate_index]; @@ -138,7 +138,7 @@ static bool ast_get_vbios_mode_info(const struct drm_format_info *format, vbios_mode->enh_table = &res_1680x1050[refresh_rate_index]; break; case 1920: - if (mode->crtc_vdisplay == 1080) + if (mode->vdisplay == 1080) vbios_mode->enh_table = &res_1920x1080[refresh_rate_index]; else vbios_mode->enh_table = &res_1920x1200[refresh_rate_index]; @@ -182,6 +182,7 @@ static bool ast_get_vbios_mode_info(const struct drm_format_info *format, hborder = (vbios_mode->enh_table->flags & HBorder) ? 8 : 0; vborder = (vbios_mode->enh_table->flags & VBorder) ? 8 : 0;
+ adjusted_mode->crtc_hdisplay = vbios_mode->enh_table->hde; adjusted_mode->crtc_htotal = vbios_mode->enh_table->ht; adjusted_mode->crtc_hblank_start = vbios_mode->enh_table->hde + hborder; adjusted_mode->crtc_hblank_end = vbios_mode->enh_table->ht - hborder; @@ -191,6 +192,7 @@ static bool ast_get_vbios_mode_info(const struct drm_format_info *format, vbios_mode->enh_table->hfp + vbios_mode->enh_table->hsync);
+ adjusted_mode->crtc_vdisplay = vbios_mode->enh_table->vde; adjusted_mode->crtc_vtotal = vbios_mode->enh_table->vt; adjusted_mode->crtc_vblank_start = vbios_mode->enh_table->vde + vborder; adjusted_mode->crtc_vblank_end = vbios_mode->enh_table->vt - vborder;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Viktor Malik vmalik@redhat.com
[ Upstream commit 0053f7d39d491b6138d7c526876d13885cbb65f1 ]
The `readlink(path, buf, sizeof(buf))` call reads at most sizeof(buf) bytes and *does not* append a null terminator to buf. Given that, fix two issues in get_fd_type():
1. Change the truncation check to use sizeof(buf) rather than sizeof(path). 2. Append a null terminator to buf.
Reported by Coverity.
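For illustration (not part of the patch), a minimal userspace sketch of the pattern the fix restores; the function and buffer names are made up for the example:

/* Treat a return value equal to the buffer size as possible truncation,
 * and null-terminate manually, since readlink() never does. Error
 * handling is reduced to the essentials. */
#include <stdio.h>
#include <unistd.h>

static int read_link_target(const char *path, char *buf, size_t bufsz)
{
	ssize_t n = readlink(path, buf, bufsz);

	if (n < 0)
		return -1;		/* readlink() failed */
	if ((size_t)n == bufsz)
		return -1;		/* possibly truncated: path too long */
	buf[n] = '\0';			/* readlink() does not terminate */
	return 0;
}

int main(void)
{
	char buf[64];

	if (read_link_target("/proc/self/exe", buf, sizeof(buf)) == 0)
		printf("%s\n", buf);
	return 0;
}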
Signed-off-by: Viktor Malik vmalik@redhat.com Signed-off-by: Andrii Nakryiko andrii@kernel.org Reviewed-by: Quentin Monnet qmo@kernel.org Link: https://lore.kernel.org/bpf/20250129071857.75182-1-vmalik@redhat.com Signed-off-by: Alexei Starovoitov ast@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- tools/bpf/bpftool/common.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/tools/bpf/bpftool/common.c b/tools/bpf/bpftool/common.c index db02b000fbebd..eea00bc15b5cc 100644 --- a/tools/bpf/bpftool/common.c +++ b/tools/bpf/bpftool/common.c @@ -384,10 +384,11 @@ int get_fd_type(int fd) p_err("can't read link type: %s", strerror(errno)); return -1; } - if (n == sizeof(path)) { + if (n == sizeof(buf)) { p_err("can't read link type: path too long!"); return -1; } + buf[n] = '\0';
if (strstr(buf, "bpf-map")) return BPF_OBJ_MAP;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ravi Bangoria ravi.bangoria@amd.com
[ Upstream commit 46dcf85566170d4528b842bf83ffc350d71771fa ]
IBS Op uses two counters: MaxCnt and CurCnt. MaxCnt is programmed with the desired sample period, and the IBS hardware generates a sample when CurCnt reaches MaxCnt. These counters used to be 20 bits wide but were later extended to 27 bits. The 7-bit extension is indicated by CPUID Fn8000_001B_EAX[6 / OpCntExt].
The perf_ibs->cnt_mask variable contains the bit masks for MaxCnt and CurCnt, but the IBS driver does not set the upper 7 bits of CurCnt in cnt_mask even when the OpCntExt CPUID bit is set. Fix this.
The IBS driver uses the cnt_mask[CurCnt] bits only while disabling an event. Fortunately, the CurCnt bits are not read from the MSR while re-enabling the event; instead, MaxCnt is programmed with the desired period and CurCnt is set to 0. Hence, no issues have been seen so far.
Signed-off-by: Ravi Bangoria ravi.bangoria@amd.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Acked-by: Namhyung Kim namhyung@kernel.org Link: https://lkml.kernel.org/r/20250115054438.1021-5-ravi.bangoria@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/events/amd/ibs.c | 3 ++- arch/x86/include/asm/perf_event.h | 1 + 2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c index 37cbbc5c659a5..8c385e231e070 100644 --- a/arch/x86/events/amd/ibs.c +++ b/arch/x86/events/amd/ibs.c @@ -1216,7 +1216,8 @@ static __init int perf_ibs_op_init(void) if (ibs_caps & IBS_CAPS_OPCNTEXT) { perf_ibs_op.max_period |= IBS_OP_MAX_CNT_EXT_MASK; perf_ibs_op.config_mask |= IBS_OP_MAX_CNT_EXT_MASK; - perf_ibs_op.cnt_mask |= IBS_OP_MAX_CNT_EXT_MASK; + perf_ibs_op.cnt_mask |= (IBS_OP_MAX_CNT_EXT_MASK | + IBS_OP_CUR_CNT_EXT_MASK); }
if (ibs_caps & IBS_CAPS_ZEN4) diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h index 4d810b9478a43..fa3576ef19daa 100644 --- a/arch/x86/include/asm/perf_event.h +++ b/arch/x86/include/asm/perf_event.h @@ -456,6 +456,7 @@ struct pebs_xmm { */ #define IBS_OP_CUR_CNT (0xFFF80ULL<<32) #define IBS_OP_CUR_CNT_RAND (0x0007FULL<<32) +#define IBS_OP_CUR_CNT_EXT_MASK (0x7FULL<<52) #define IBS_OP_CNT_CTL (1ULL<<19) #define IBS_OP_VAL (1ULL<<18) #define IBS_OP_ENABLE (1ULL<<17)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Soeren Moch smoch@web.de
[ Upstream commit 3d3e28feca7ac8c6cf2a390dbbe1f97e3feb7f36 ]
Occasionally there is an EPROTO error during firmware download. This error is converted to EAGAIN in the download function, but nothing retries the download, so device probe fails.
Implement download retry to fix this.
This error was observed (and fix tested) on a tbs2910 board [1] with an embedded RTL8188EU (0bda:8179) device behind a USB hub.
[1] arch/arm/boot/dts/nxp/imx/imx6q-tbs2910.dts
Signed-off-by: Soeren Moch smoch@web.de Acked-by: Ping-Ke Shih pkshih@realtek.com Signed-off-by: Ping-Ke Shih pkshih@realtek.com Link: https://patch.msgid.link/20250127194828.599379-1-smoch@web.de Signed-off-by: Sasha Levin sashal@kernel.org --- .../wireless/realtek/rtl8xxxu/rtl8xxxu_core.c | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-)
diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c index 9ccf8550a0679..cd22c756acc69 100644 --- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c +++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c @@ -798,9 +798,10 @@ rtl8xxxu_writeN(struct rtl8xxxu_priv *priv, u16 addr, u8 *buf, u16 len) return len;
write_error: - dev_info(&udev->dev, - "%s: Failed to write block at addr: %04x size: %04x\n", - __func__, addr, blocksize); + if (rtl8xxxu_debug & RTL8XXXU_DEBUG_REG_WRITE) + dev_info(&udev->dev, + "%s: Failed to write block at addr: %04x size: %04x\n", + __func__, addr, blocksize); return -EAGAIN; }
@@ -3920,8 +3921,14 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw) */ rtl8xxxu_write16(priv, REG_TRXFF_BNDY + 2, fops->trxff_boundary);
- ret = rtl8xxxu_download_firmware(priv); - dev_dbg(dev, "%s: download_firmware %i\n", __func__, ret); + for (int retry = 5; retry >= 0 ; retry--) { + ret = rtl8xxxu_download_firmware(priv); + dev_dbg(dev, "%s: download_firmware %i\n", __func__, ret); + if (ret != -EAGAIN) + break; + if (retry) + dev_dbg(dev, "%s: retry firmware download\n", __func__); + } if (ret) goto exit; ret = rtl8xxxu_start_firmware(priv);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bitterblue Smith rtl8821cerfe2@gmail.com
[ Upstream commit 00451eb3bec763f708e7e58326468c1e575e5a66 ]
Some users want to plug in two identical USB devices at the same time. This static variable could theoretically cause them to use incorrect TX power values.
Move the variable to the caller and pass a pointer to it to rtw8822b_set_tx_power_index_by_rate().
Signed-off-by: Bitterblue Smith rtl8821cerfe2@gmail.com Acked-by: Ping-Ke Shih pkshih@realtek.com Signed-off-by: Ping-Ke Shih pkshih@realtek.com Link: https://patch.msgid.link/8a60f581-0ab5-4d98-a97d-dd83b605008f@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/realtek/rtw88/rtw8822b.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822b.c b/drivers/net/wireless/realtek/rtw88/rtw8822b.c index 690e35c98f6e5..0b071a116c58e 100644 --- a/drivers/net/wireless/realtek/rtw88/rtw8822b.c +++ b/drivers/net/wireless/realtek/rtw88/rtw8822b.c @@ -957,11 +957,11 @@ static void rtw8822b_query_rx_desc(struct rtw_dev *rtwdev, u8 *rx_desc, }
static void -rtw8822b_set_tx_power_index_by_rate(struct rtw_dev *rtwdev, u8 path, u8 rs) +rtw8822b_set_tx_power_index_by_rate(struct rtw_dev *rtwdev, u8 path, + u8 rs, u32 *phy_pwr_idx) { struct rtw_hal *hal = &rtwdev->hal; static const u32 offset_txagc[2] = {0x1d00, 0x1d80}; - static u32 phy_pwr_idx; u8 rate, rate_idx, pwr_index, shift; int j;
@@ -969,12 +969,12 @@ rtw8822b_set_tx_power_index_by_rate(struct rtw_dev *rtwdev, u8 path, u8 rs) rate = rtw_rate_section[rs][j]; pwr_index = hal->tx_pwr_tbl[path][rate]; shift = rate & 0x3; - phy_pwr_idx |= ((u32)pwr_index << (shift * 8)); + *phy_pwr_idx |= ((u32)pwr_index << (shift * 8)); if (shift == 0x3) { rate_idx = rate & 0xfc; rtw_write32(rtwdev, offset_txagc[path] + rate_idx, - phy_pwr_idx); - phy_pwr_idx = 0; + *phy_pwr_idx); + *phy_pwr_idx = 0; } } } @@ -982,11 +982,13 @@ rtw8822b_set_tx_power_index_by_rate(struct rtw_dev *rtwdev, u8 path, u8 rs) static void rtw8822b_set_tx_power_index(struct rtw_dev *rtwdev) { struct rtw_hal *hal = &rtwdev->hal; + u32 phy_pwr_idx = 0; int rs, path;
for (path = 0; path < hal->rf_path_num; path++) { for (rs = 0; rs < RTW_RATE_SECTION_MAX; rs++) - rtw8822b_set_tx_power_index_by_rate(rtwdev, path, rs); + rtw8822b_set_tx_power_index_by_rate(rtwdev, path, rs, + &phy_pwr_idx); } }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ping-Ke Shih pkshih@realtek.com
[ Upstream commit ebfc9199df05d37b67f4d1b7ee997193f3d2e7c8 ]
Ensure that sections currently protected by the driver mutex are also protected by wiphy_lock(), so that the driver mutex can be removed safely afterward.
Signed-off-by: Ping-Ke Shih pkshih@realtek.com Link: https://patch.msgid.link/20250122060310.31976-2-pkshih@realtek.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/realtek/rtw89/regd.c | 2 ++ drivers/net/wireless/realtek/rtw89/ser.c | 4 ++++ 2 files changed, 6 insertions(+)
diff --git a/drivers/net/wireless/realtek/rtw89/regd.c b/drivers/net/wireless/realtek/rtw89/regd.c index 6e5a740b128f0..2d31193fcc87d 100644 --- a/drivers/net/wireless/realtek/rtw89/regd.c +++ b/drivers/net/wireless/realtek/rtw89/regd.c @@ -334,6 +334,7 @@ void rtw89_regd_notifier(struct wiphy *wiphy, struct regulatory_request *request struct ieee80211_hw *hw = wiphy_to_ieee80211_hw(wiphy); struct rtw89_dev *rtwdev = hw->priv;
+ wiphy_lock(wiphy); mutex_lock(&rtwdev->mutex); rtw89_leave_ps_mode(rtwdev);
@@ -350,4 +351,5 @@ void rtw89_regd_notifier(struct wiphy *wiphy, struct regulatory_request *request
exit: mutex_unlock(&rtwdev->mutex); + wiphy_unlock(wiphy); } diff --git a/drivers/net/wireless/realtek/rtw89/ser.c b/drivers/net/wireless/realtek/rtw89/ser.c index afb1b41e1a9a5..f5dacdc4d11ab 100644 --- a/drivers/net/wireless/realtek/rtw89/ser.c +++ b/drivers/net/wireless/realtek/rtw89/ser.c @@ -153,9 +153,11 @@ static void ser_state_run(struct rtw89_ser *ser, u8 evt) rtw89_debug(rtwdev, RTW89_DBG_SER, "ser: %s receive %s\n", ser_st_name(ser), ser_ev_name(ser, evt));
+ wiphy_lock(rtwdev->hw->wiphy); mutex_lock(&rtwdev->mutex); rtw89_leave_lps(rtwdev); mutex_unlock(&rtwdev->mutex); + wiphy_unlock(rtwdev->hw->wiphy);
ser->st_tbl[ser->state].st_func(ser, evt); } @@ -624,9 +626,11 @@ static void ser_l2_reset_st_hdl(struct rtw89_ser *ser, u8 evt)
switch (evt) { case SER_EV_STATE_IN: + wiphy_lock(rtwdev->hw->wiphy); mutex_lock(&rtwdev->mutex); ser_l2_reset_st_pre_hdl(ser); mutex_unlock(&rtwdev->mutex); + wiphy_unlock(rtwdev->hw->wiphy);
ieee80211_restart_hw(rtwdev->hw); ser_set_alarm(ser, SER_RECFG_TIMEOUT, SER_EV_L2_RECFG_TIMEOUT);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sean Anderson sean.anderson@linux.dev
[ Upstream commit 89785306453ce6d949e783f6936821a0b7649ee2 ]
RXEMPTY can cause an IRQ, even though we may not do anything about it (such as if we are waiting for more received data). We must still handle these IRQs because we can tell they were caused by the device.
Signed-off-by: Sean Anderson sean.anderson@linux.dev Link: https://patch.msgid.link/20250116224130.2684544-6-sean.anderson@linux.dev Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/spi/spi-zynqmp-gqspi.c | 20 ++++++++------------ 1 file changed, 8 insertions(+), 12 deletions(-)
diff --git a/drivers/spi/spi-zynqmp-gqspi.c b/drivers/spi/spi-zynqmp-gqspi.c index c89544ae5ed91..fde7c38103596 100644 --- a/drivers/spi/spi-zynqmp-gqspi.c +++ b/drivers/spi/spi-zynqmp-gqspi.c @@ -698,7 +698,6 @@ static void zynqmp_process_dma_irq(struct zynqmp_qspi *xqspi) static irqreturn_t zynqmp_qspi_irq(int irq, void *dev_id) { struct zynqmp_qspi *xqspi = (struct zynqmp_qspi *)dev_id; - irqreturn_t ret = IRQ_NONE; u32 status, mask, dma_status = 0;
status = zynqmp_gqspi_read(xqspi, GQSPI_ISR_OFST); @@ -713,27 +712,24 @@ static irqreturn_t zynqmp_qspi_irq(int irq, void *dev_id) dma_status); }
- if (mask & GQSPI_ISR_TXNOT_FULL_MASK) { + if (!mask && !dma_status) + return IRQ_NONE; + + if (mask & GQSPI_ISR_TXNOT_FULL_MASK) zynqmp_qspi_filltxfifo(xqspi, GQSPI_TX_FIFO_FILL); - ret = IRQ_HANDLED; - }
- if (dma_status & GQSPI_QSPIDMA_DST_I_STS_DONE_MASK) { + if (dma_status & GQSPI_QSPIDMA_DST_I_STS_DONE_MASK) zynqmp_process_dma_irq(xqspi); - ret = IRQ_HANDLED; - } else if (!(mask & GQSPI_IER_RXEMPTY_MASK) && - (mask & GQSPI_IER_GENFIFOEMPTY_MASK)) { + else if (!(mask & GQSPI_IER_RXEMPTY_MASK) && + (mask & GQSPI_IER_GENFIFOEMPTY_MASK)) zynqmp_qspi_readrxfifo(xqspi, GQSPI_RX_FIFO_FILL); - ret = IRQ_HANDLED; - }
if (xqspi->bytes_to_receive == 0 && xqspi->bytes_to_transfer == 0 && ((status & GQSPI_IRQ_MASK) == GQSPI_IRQ_MASK)) { zynqmp_gqspi_write(xqspi, GQSPI_IDR_OFST, GQSPI_ISR_IDR_MASK); complete(&xqspi->data_completion); - ret = IRQ_HANDLED; } - return ret; + return IRQ_HANDLED; }
/**
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Isaac Scott isaac.scott@ideasonboard.com
[ Upstream commit 5a6a461079decea452fdcae955bccecf92e07e97 ]
Previously, the ad5398 driver used only platform_data, which is deprecated in favour of device tree. This caused the AD5398 to fail to probe as it could not load its init_data. If the AD5398 has a device tree node, pull the init_data from there using of_get_regulator_init_data.
Signed-off-by: Isaac Scott isaac.scott@ideasonboard.com Acked-by: Michael Hennerich michael.hennerich@analog.com Link: https://patch.msgid.link/20250128173143.959600-4-isaac.scott@ideasonboard.co... Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/regulator/ad5398.c | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/drivers/regulator/ad5398.c b/drivers/regulator/ad5398.c index 75f432f61e919..f4d6e62bd963e 100644 --- a/drivers/regulator/ad5398.c +++ b/drivers/regulator/ad5398.c @@ -14,6 +14,7 @@ #include <linux/platform_device.h> #include <linux/regulator/driver.h> #include <linux/regulator/machine.h> +#include <linux/regulator/of_regulator.h>
#define AD5398_CURRENT_EN_MASK 0x8000
@@ -221,15 +222,20 @@ static int ad5398_probe(struct i2c_client *client, const struct ad5398_current_data_format *df = (struct ad5398_current_data_format *)id->driver_data;
- if (!init_data) - return -EINVAL; - chip = devm_kzalloc(&client->dev, sizeof(*chip), GFP_KERNEL); if (!chip) return -ENOMEM;
config.dev = &client->dev; + if (client->dev.of_node) + init_data = of_get_regulator_init_data(&client->dev, + client->dev.of_node, + &ad5398_reg); + if (!init_data) + return -EINVAL; + config.init_data = init_data; + config.of_node = client->dev.of_node; config.driver_data = chip;
chip->client = client;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Rosen Penev rosenp@gmail.com
[ Upstream commit dfffb317519f88534bb82797f055f0a2fd867e7b ]
When using nvmem, ath9k could potentially be loaded before nvmem, which loads after mtd. This is an issue if the DT contains an nvmem MAC address.
If nvmem is not ready in time for ath9k, -EPROBE_DEFER is returned. Pass it to _probe so that ath9k can properly grab a potentially present MAC address.
Signed-off-by: Rosen Penev rosenp@gmail.com Acked-by: Toke Høiland-Jørgensen toke@toke.dk Link: https://patch.msgid.link/20241105222326.194417-1-rosenp@gmail.com Signed-off-by: Jeff Johnson jeff.johnson@oss.qualcomm.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/ath/ath9k/init.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/wireless/ath/ath9k/init.c b/drivers/net/wireless/ath/ath9k/init.c index 4f00400c7ffb8..58386906598a7 100644 --- a/drivers/net/wireless/ath/ath9k/init.c +++ b/drivers/net/wireless/ath/ath9k/init.c @@ -691,7 +691,9 @@ static int ath9k_of_init(struct ath_softc *sc) ah->ah_flags |= AH_NO_EEP_SWAP; }
- of_get_mac_address(np, common->macaddr); + ret = of_get_mac_address(np, common->macaddr); + if (ret == -EPROBE_DEFER) + return ret;
return 0; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Simona Vetter simona.vetter@ffwll.ch
[ Upstream commit c5e3306a424b52e38ad2c28c7f3399fcd03e383d ]
msm is automagically upgrading normal commits to full modesets, and that's a big no-no:
- for one this results in full on->off->on transitions on all these crtc, at least if you're using the usual helpers. Which seems to be the case, and is breaking uapi
- further even if the ctm change itself would not result in flicker, this can hide modesets for other reasons. Which again breaks the uapi
v2: I forgot the case of adding unrelated crtc state. Add that case and link to the existing kerneldoc explainers. This has come up in an irc discussion with Manasi and Ville about intel's bigjoiner mode. Also cc everyone involved in the msm irc discussion, more people joined after I sent out v1.
v3: Wording polish from Pekka and Thomas
Acked-by: Pekka Paalanen pekka.paalanen@collabora.com Acked-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Cc: Maarten Lankhorst maarten.lankhorst@linux.intel.com Cc: Maxime Ripard mripard@kernel.org Cc: Thomas Zimmermann tzimmermann@suse.de Cc: David Airlie airlied@gmail.com Cc: Daniel Vetter daniel@ffwll.ch Cc: Pekka Paalanen pekka.paalanen@collabora.com Cc: Rob Clark robdclark@gmail.com Cc: Simon Ser contact@emersion.fr Cc: Manasi Navare navaremanasi@google.com Cc: Ville Syrjälä ville.syrjala@linux.intel.com Cc: Abhinav Kumar quic_abhinavk@quicinc.com Cc: Dmitry Baryshkov dmitry.baryshkov@linaro.org Signed-off-by: Simona Vetter simona.vetter@intel.com Signed-off-by: Simona Vetter simona.vetter@ffwll.ch Link: https://patchwork.freedesktop.org/patch/msgid/20250108172417.160831-1-simona... Signed-off-by: Sasha Levin sashal@kernel.org --- include/drm/drm_atomic.h | 23 +++++++++++++++++++++-- 1 file changed, 21 insertions(+), 2 deletions(-)
diff --git a/include/drm/drm_atomic.h b/include/drm/drm_atomic.h index 10b1990bc1f68..36225aedf6138 100644 --- a/include/drm/drm_atomic.h +++ b/include/drm/drm_atomic.h @@ -372,8 +372,27 @@ struct drm_atomic_state { * * Allow full modeset. This is used by the ATOMIC IOCTL handler to * implement the DRM_MODE_ATOMIC_ALLOW_MODESET flag. Drivers should - * never consult this flag, instead looking at the output of - * drm_atomic_crtc_needs_modeset(). + * generally not consult this flag, but instead look at the output of + * drm_atomic_crtc_needs_modeset(). The detailed rules are: + * + * - Drivers must not consult @allow_modeset in the atomic commit path. + * Use drm_atomic_crtc_needs_modeset() instead. + * + * - Drivers must consult @allow_modeset before adding unrelated struct + * drm_crtc_state to this commit by calling + * drm_atomic_get_crtc_state(). See also the warning in the + * documentation for that function. + * + * - Drivers must never change this flag, it is under the exclusive + * control of userspace. + * + * - Drivers may consult @allow_modeset in the atomic check path, if + * they have the choice between an optimal hardware configuration + * which requires a modeset, and a less optimal configuration which + * can be committed without a modeset. An example would be suboptimal + * scanout FIFO allocation resulting in increased idle power + * consumption. This allows userspace to avoid flickering and delays + * for the normal composition loop at reasonable cost. */ bool allow_modeset : 1; /**
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Douglas Anderson dianders@chromium.org
[ Upstream commit 749b5b279e5636cdcef51e15d67b77162cca6caa ]
We have a few reports of sc7180-trogdor-pompom devices that have a panel in them that IDs as STA 0x0004 and has the following raw EDID:
00 ff ff ff ff ff ff 00 4e 81 04 00 00 00 00 00 10 20 01 04 a5 1a 0e 78 0a dc dd 96 5b 5b 91 28 1f 52 54 00 00 00 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 8e 1c 56 a0 50 00 1e 30 28 20 55 00 00 90 10 00 00 18 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fe 00 31 31 36 4b 48 44 30 32 34 30 30 36 0a 00 e6
We've been unable to locate a datasheet for this panel and our partner has not been responsive, but all Starry eDP datasheets that we can find agree on the same timing (delay_100_500_e200) so it should be safe to use that here instead of the super conservative timings. We'll still go a little extra conservative and allow `hpd_absent` of 200 instead of 100 because that won't add any real-world delay in most cases.
We'll associate the string from the EDID ("116KHD024006") with this panel. Given that the ID is the suspicious value of 0x0004 it seems likely that Starry doesn't always update their IDs but the string will still work to differentiate if we ever need to in the future.
Reviewed-by: Neil Armstrong neil.armstrong@linaro.org Signed-off-by: Douglas Anderson dianders@chromium.org Link: https://patchwork.freedesktop.org/patch/msgid/20250109142853.1.Ibcc3009933fd... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/panel/panel-edp.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/panel/panel-edp.c b/drivers/gpu/drm/panel/panel-edp.c index 2c14779a39e88..1ef1b4c966d2e 100644 --- a/drivers/gpu/drm/panel/panel-edp.c +++ b/drivers/gpu/drm/panel/panel-edp.c @@ -1944,6 +1944,7 @@ static const struct edp_panel_entry edp_panels[] = { EDP_PANEL_ENTRY('S', 'H', 'P', 0x1523, &sharp_lq140m1jw46.delay, "LQ140M1JW46"), EDP_PANEL_ENTRY('S', 'H', 'P', 0x154c, &delay_200_500_p2e100, "LQ116M1JW10"),
+ EDP_PANEL_ENTRY('S', 'T', 'A', 0x0004, &delay_200_500_e200, "116KHD024006"), EDP_PANEL_ENTRY('S', 'T', 'A', 0x0100, &delay_100_500_e200, "2081116HHD028001-51D"),
{ /* sentinal */ }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jessica Zhang quic_jesszhan@quicinc.com
[ Upstream commit 41b4b11da02157c7474caf41d56baae0e941d01a ]
Check that all encoders attached to a given CRTC are valid possible_clones of each other.
Signed-off-by: Jessica Zhang quic_jesszhan@quicinc.com Reviewed-by: Maxime Ripard mripard@kernel.org Link: https://patchwork.freedesktop.org/patch/msgid/20241216-concurrent-wb-v4-3-fe... Signed-off-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/drm_atomic_helper.c | 28 ++++++++++++++++++++++++++++ 1 file changed, 28 insertions(+)
diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c index 66d223c2d9ab9..e737e45a3a702 100644 --- a/drivers/gpu/drm/drm_atomic_helper.c +++ b/drivers/gpu/drm/drm_atomic_helper.c @@ -573,6 +573,30 @@ mode_valid(struct drm_atomic_state *state) return 0; }
+static int drm_atomic_check_valid_clones(struct drm_atomic_state *state, + struct drm_crtc *crtc) +{ + struct drm_encoder *drm_enc; + struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state, + crtc); + + drm_for_each_encoder_mask(drm_enc, crtc->dev, crtc_state->encoder_mask) { + if (!drm_enc->possible_clones) { + DRM_DEBUG("enc%d possible_clones is 0\n", drm_enc->base.id); + continue; + } + + if ((crtc_state->encoder_mask & drm_enc->possible_clones) != + crtc_state->encoder_mask) { + DRM_DEBUG("crtc%d failed valid clone check for mask 0x%x\n", + crtc->base.id, crtc_state->encoder_mask); + return -EINVAL; + } + } + + return 0; +} + /** * drm_atomic_helper_check_modeset - validate state object for modeset changes * @dev: DRM device @@ -744,6 +768,10 @@ drm_atomic_helper_check_modeset(struct drm_device *dev, ret = drm_atomic_add_affected_planes(state, crtc); if (ret != 0) return ret; + + ret = drm_atomic_check_valid_clones(state, crtc); + if (ret != 0) + return ret; }
/*
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Chenyuan Yang chenyuan0y@gmail.com
[ Upstream commit a9a69c3b38c89d7992fb53db4abb19104b531d32 ]
Incorrect types are used as sizeof() arguments in devm_kcalloc(). It should be sizeof(dai_link_data) for link_data instead of sizeof(snd_soc_dai_link).
This is found by our static analysis tool.
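For illustration (not part of the patch), a small userspace sketch of the allocation-sizing idiom involved; the struct names and sizes below are stand-ins, not the real ASoC types:

/* Size each allocation from the pointer it is assigned to, so the element
 * size stays tied to what is actually being allocated. The bug fixed above
 * sized the link_data allocation from a different object's type. */
#include <stdlib.h>

struct dai_link  { int a; char pad[64]; };	/* stand-in types only */
struct link_data { int b; };

int main(void)
{
	size_t num_links = 8;
	struct dai_link *link;
	struct link_data *link_data;

	link = calloc(num_links, sizeof(*link));		/* element = struct dai_link  */
	link_data = calloc(num_links, sizeof(*link_data));	/* element = struct link_data,
								 * not sizeof(*link) */
	if (!link || !link_data)
		return 1;

	free(link);
	free(link_data);
	return 0;
}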
Signed-off-by: Chenyuan Yang chenyuan0y@gmail.com Link: https://patch.msgid.link/20250406210854.149316-1-chenyuan0y@gmail.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/fsl/imx-card.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sound/soc/fsl/imx-card.c b/sound/soc/fsl/imx-card.c index c6d55b21f9496..11430f9f49968 100644 --- a/sound/soc/fsl/imx-card.c +++ b/sound/soc/fsl/imx-card.c @@ -517,7 +517,7 @@ static int imx_card_parse_of(struct imx_card_data *data) if (!card->dai_link) return -ENOMEM;
- data->link_data = devm_kcalloc(dev, num_links, sizeof(*link), GFP_KERNEL); + data->link_data = devm_kcalloc(dev, num_links, sizeof(*link_data), GFP_KERNEL); if (!data->link_data) return -ENOMEM;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Martin Blumenstingl martin.blumenstingl@googlemail.com
[ Upstream commit e56088a13708757da68ad035269d69b93ac8c389 ]
The public datasheets of the following Amlogic SoCs describe a typical resistor value for the built-in pull up/down resistor: - Meson8/8b/8m2: not documented - GXBB (S905): 60 kOhm - GXL (S905X): 60 kOhm - GXM (S912): 60 kOhm - G12B (S922X): 60 kOhm - SM1 (S905D3): 60 kOhm
The public G12B and SM1 datasheets additionally state min and max values: - min value: 50 kOhm for both, pull-up and pull-down - max value for the pull-up: 70 kOhm - max value for the pull-down: 130 kOhm
Use 60 kOhm in the pinctrl-meson driver as well so it's shown in the debugfs output. It may not be accurate for Meson8/8b/8m2 but in reality 60 kOhm is closer to the actual value than 1 Ohm.
Signed-off-by: Martin Blumenstingl martin.blumenstingl@googlemail.com Reviewed-by: Neil Armstrong neil.armstrong@linaro.org Link: https://lore.kernel.org/20250329190132.855196-1-martin.blumenstingl@googlema... Signed-off-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pinctrl/meson/pinctrl-meson.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/pinctrl/meson/pinctrl-meson.c b/drivers/pinctrl/meson/pinctrl-meson.c index 530f3f934e196..1f05f7f1a9aee 100644 --- a/drivers/pinctrl/meson/pinctrl-meson.c +++ b/drivers/pinctrl/meson/pinctrl-meson.c @@ -487,7 +487,7 @@ static int meson_pinconf_get(struct pinctrl_dev *pcdev, unsigned int pin, case PIN_CONFIG_BIAS_PULL_DOWN: case PIN_CONFIG_BIAS_PULL_UP: if (meson_pinconf_get_pull(pc, pin) == param) - arg = 1; + arg = 60000; else return -EINVAL; break;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai tiwai@suse.de
[ Upstream commit a549b927ea3f5e50b1394209b64e6e17e31d4db8 ]
Acer Aspire SW3-013 requires the very same quirk as other Acer Aspire models to make it work.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=220011 Signed-off-by: Takashi Iwai tiwai@suse.de Link: https://patch.msgid.link/20250420085716.12095-1-tiwai@suse.de Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/intel/boards/bytcr_rt5640.c | 13 +++++++++++++ 1 file changed, 13 insertions(+)
diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c index 67b343632a10d..b00a9fdd7a9cc 100644 --- a/sound/soc/intel/boards/bytcr_rt5640.c +++ b/sound/soc/intel/boards/bytcr_rt5640.c @@ -576,6 +576,19 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = { BYT_RT5640_SSP0_AIF2 | BYT_RT5640_MCLK_EN), }, + { /* Acer Aspire SW3-013 */ + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Acer"), + DMI_MATCH(DMI_PRODUCT_NAME, "Aspire SW3-013"), + }, + .driver_data = (void *)(BYT_RT5640_DMIC1_MAP | + BYT_RT5640_JD_SRC_JD2_IN4N | + BYT_RT5640_OVCD_TH_2000UA | + BYT_RT5640_OVCD_SF_0P75 | + BYT_RT5640_DIFF_MIC | + BYT_RT5640_SSP0_AIF1 | + BYT_RT5640_MCLK_EN), + }, { .matches = { DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai tiwai@suse.de
[ Upstream commit be0c40da888840fe91b45474cb70779e6cbaf7ca ]
HP Spectre x360 15-df1xxx with SSID 103c:863e requires workarounds similar to those applied to other HP Spectre x360 models; it has a mute LED only, no micmute LED, and needs the speaker GPIO setup.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=220054 Link: https://patch.msgid.link/20250427081035.11567-1-tiwai@suse.de Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- sound/pci/hda/patch_realtek.c | 42 +++++++++++++++++++++++++++++++++++ 1 file changed, 42 insertions(+)
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 2f67cd955d651..682ae18e211c5 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -6787,6 +6787,41 @@ static void alc285_fixup_hp_spectre_x360_eb1(struct hda_codec *codec, } }
+/* GPIO1 = amplifier on/off */ +static void alc285_fixup_hp_spectre_x360_df1(struct hda_codec *codec, + const struct hda_fixup *fix, + int action) +{ + struct alc_spec *spec = codec->spec; + static const hda_nid_t conn[] = { 0x02 }; + static const struct hda_pintbl pincfgs[] = { + { 0x14, 0x90170110 }, /* front/high speakers */ + { 0x17, 0x90170130 }, /* back/bass speakers */ + { } + }; + + // enable mute led + alc285_fixup_hp_mute_led_coefbit(codec, fix, action); + + switch (action) { + case HDA_FIXUP_ACT_PRE_PROBE: + /* needed for amp of back speakers */ + spec->gpio_mask |= 0x01; + spec->gpio_dir |= 0x01; + snd_hda_apply_pincfgs(codec, pincfgs); + /* share DAC to have unified volume control */ + snd_hda_override_conn_list(codec, 0x14, ARRAY_SIZE(conn), conn); + snd_hda_override_conn_list(codec, 0x17, ARRAY_SIZE(conn), conn); + break; + case HDA_FIXUP_ACT_INIT: + /* need to toggle GPIO to enable the amp of back speakers */ + alc_update_gpio_data(codec, 0x01, true); + msleep(100); + alc_update_gpio_data(codec, 0x01, false); + break; + } +} + static void alc285_fixup_hp_spectre_x360(struct hda_codec *codec, const struct hda_fixup *fix, int action) { @@ -7326,6 +7361,7 @@ enum { ALC280_FIXUP_HP_9480M, ALC245_FIXUP_HP_X360_AMP, ALC285_FIXUP_HP_SPECTRE_X360_EB1, + ALC285_FIXUP_HP_SPECTRE_X360_DF1, ALC285_FIXUP_HP_ENVY_X360, ALC288_FIXUP_DELL_HEADSET_MODE, ALC288_FIXUP_DELL1_MIC_NO_PRESENCE, @@ -9322,6 +9358,10 @@ static const struct hda_fixup alc269_fixups[] = { .type = HDA_FIXUP_FUNC, .v.func = alc285_fixup_hp_spectre_x360_eb1 }, + [ALC285_FIXUP_HP_SPECTRE_X360_DF1] = { + .type = HDA_FIXUP_FUNC, + .v.func = alc285_fixup_hp_spectre_x360_df1 + }, [ALC285_FIXUP_HP_ENVY_X360] = { .type = HDA_FIXUP_FUNC, .v.func = alc285_fixup_hp_envy_x360, @@ -9882,6 +9922,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x103c, 0x86c1, "HP Laptop 15-da3001TU", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2), SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO), SND_PCI_QUIRK(0x103c, 0x86e7, "HP Spectre x360 15-eb0xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1), + SND_PCI_QUIRK(0x103c, 0x863e, "HP Spectre x360 15-df1xxx", ALC285_FIXUP_HP_SPECTRE_X360_DF1), SND_PCI_QUIRK(0x103c, 0x86e8, "HP Spectre x360 15-eb0xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1), SND_PCI_QUIRK(0x103c, 0x86f9, "HP Spectre x360 13-aw0xxx", ALC285_FIXUP_HP_SPECTRE_X360_MUTE_LED), SND_PCI_QUIRK(0x103c, 0x8716, "HP Elite Dragonfly G2 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT), @@ -10591,6 +10632,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = { {.id = ALC295_FIXUP_HP_OMEN, .name = "alc295-hp-omen"}, {.id = ALC285_FIXUP_HP_SPECTRE_X360, .name = "alc285-hp-spectre-x360"}, {.id = ALC285_FIXUP_HP_SPECTRE_X360_EB1, .name = "alc285-hp-spectre-x360-eb1"}, + {.id = ALC285_FIXUP_HP_SPECTRE_X360_DF1, .name = "alc285-hp-spectre-x360-df1"}, {.id = ALC285_FIXUP_HP_ENVY_X360, .name = "alc285-hp-envy-x360"}, {.id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, .name = "alc287-ideapad-bass-spk-amp"}, {.id = ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN, .name = "alc287-yoga9-bass-spk-pin"},
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alistair Francis alistair.francis@wdc.com
[ Upstream commit 46d22b47df2741996af277a2838b95f130436c13 ]
queue->state_change is set as part of nvmet_tcp_set_queue_sock(), but if the TCP connection isn't established when nvmet_tcp_set_queue_sock() is called then queue->state_change isn't set and sock->sk->sk_state_change isn't replaced.
As such we don't need to restore sock->sk->sk_state_change if queue->state_change is NULL.
This avoids NULL pointer dereferences such as this:
[ 286.462026][ C0] BUG: kernel NULL pointer dereference, address: 0000000000000000 [ 286.462814][ C0] #PF: supervisor instruction fetch in kernel mode [ 286.463796][ C0] #PF: error_code(0x0010) - not-present page [ 286.464392][ C0] PGD 8000000140620067 P4D 8000000140620067 PUD 114201067 PMD 0 [ 286.465086][ C0] Oops: Oops: 0010 [#1] SMP KASAN PTI [ 286.465559][ C0] CPU: 0 UID: 0 PID: 1628 Comm: nvme Not tainted 6.15.0-rc2+ #11 PREEMPT(voluntary) [ 286.466393][ C0] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-3.fc41 04/01/2014 [ 286.467147][ C0] RIP: 0010:0x0 [ 286.467420][ C0] Code: Unable to access opcode bytes at 0xffffffffffffffd6. [ 286.467977][ C0] RSP: 0018:ffff8883ae008580 EFLAGS: 00010246 [ 286.468425][ C0] RAX: 0000000000000000 RBX: ffff88813fd34100 RCX: ffffffffa386cc43 [ 286.469019][ C0] RDX: 1ffff11027fa68b6 RSI: 0000000000000008 RDI: ffff88813fd34100 [ 286.469545][ C0] RBP: ffff88813fd34160 R08: 0000000000000000 R09: ffffed1027fa682c [ 286.470072][ C0] R10: ffff88813fd34167 R11: 0000000000000000 R12: ffff88813fd344c3 [ 286.470585][ C0] R13: ffff88813fd34112 R14: ffff88813fd34aec R15: ffff888132cdd268 [ 286.471070][ C0] FS: 00007fe3c04c7d80(0000) GS:ffff88840743f000(0000) knlGS:0000000000000000 [ 286.471644][ C0] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 286.472543][ C0] CR2: ffffffffffffffd6 CR3: 000000012daca000 CR4: 00000000000006f0 [ 286.473500][ C0] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 286.474467][ C0] DR3: 0000000000000000 DR6: 00000000ffff07f0 DR7: 0000000000000400 [ 286.475453][ C0] Call Trace: [ 286.476102][ C0] <IRQ> [ 286.476719][ C0] tcp_fin+0x2bb/0x440 [ 286.477429][ C0] tcp_data_queue+0x190f/0x4e60 [ 286.478174][ C0] ? __build_skb_around+0x234/0x330 [ 286.478940][ C0] ? rcu_is_watching+0x11/0xb0 [ 286.479659][ C0] ? __pfx_tcp_data_queue+0x10/0x10 [ 286.480431][ C0] ? tcp_try_undo_loss+0x640/0x6c0 [ 286.481196][ C0] ? seqcount_lockdep_reader_access.constprop.0+0x82/0x90 [ 286.482046][ C0] ? kvm_clock_get_cycles+0x14/0x30 [ 286.482769][ C0] ? ktime_get+0x66/0x150 [ 286.483433][ C0] ? rcu_is_watching+0x11/0xb0 [ 286.484146][ C0] tcp_rcv_established+0x6e4/0x2050 [ 286.484857][ C0] ? rcu_is_watching+0x11/0xb0 [ 286.485523][ C0] ? ipv4_dst_check+0x160/0x2b0 [ 286.486203][ C0] ? __pfx_tcp_rcv_established+0x10/0x10 [ 286.486917][ C0] ? lock_release+0x217/0x2c0 [ 286.487595][ C0] tcp_v4_do_rcv+0x4d6/0x9b0 [ 286.488279][ C0] tcp_v4_rcv+0x2af8/0x3e30 [ 286.488904][ C0] ? raw_local_deliver+0x51b/0xad0 [ 286.489551][ C0] ? rcu_is_watching+0x11/0xb0 [ 286.490198][ C0] ? __pfx_tcp_v4_rcv+0x10/0x10 [ 286.490813][ C0] ? __pfx_raw_local_deliver+0x10/0x10 [ 286.491487][ C0] ? __pfx_nf_confirm+0x10/0x10 [nf_conntrack] [ 286.492275][ C0] ? rcu_is_watching+0x11/0xb0 [ 286.492900][ C0] ip_protocol_deliver_rcu+0x8f/0x370 [ 286.493579][ C0] ip_local_deliver_finish+0x297/0x420 [ 286.494268][ C0] ip_local_deliver+0x168/0x430 [ 286.494867][ C0] ? __pfx_ip_local_deliver+0x10/0x10 [ 286.495498][ C0] ? __pfx_ip_local_deliver_finish+0x10/0x10 [ 286.496204][ C0] ? ip_rcv_finish_core+0x19a/0x1f20 [ 286.496806][ C0] ? lock_release+0x217/0x2c0 [ 286.497414][ C0] ip_rcv+0x455/0x6e0 [ 286.497945][ C0] ? __pfx_ip_rcv+0x10/0x10 [ 286.498550][ C0] ? rcu_is_watching+0x11/0xb0 [ 286.499137][ C0] ? __pfx_ip_rcv_finish+0x10/0x10 [ 286.499763][ C0] ? lock_release+0x217/0x2c0 [ 286.500327][ C0] ? dl_scaled_delta_exec+0xd1/0x2c0 [ 286.500922][ C0] ? 
__pfx_ip_rcv+0x10/0x10 [ 286.501480][ C0] __netif_receive_skb_one_core+0x166/0x1b0 [ 286.502173][ C0] ? __pfx___netif_receive_skb_one_core+0x10/0x10 [ 286.502903][ C0] ? lock_acquire+0x2b2/0x310 [ 286.503487][ C0] ? process_backlog+0x372/0x1350 [ 286.504087][ C0] ? lock_release+0x217/0x2c0 [ 286.504642][ C0] process_backlog+0x3b9/0x1350 [ 286.505214][ C0] ? process_backlog+0x372/0x1350 [ 286.505779][ C0] __napi_poll.constprop.0+0xa6/0x490 [ 286.506363][ C0] net_rx_action+0x92e/0xe10 [ 286.506889][ C0] ? __pfx_net_rx_action+0x10/0x10 [ 286.507437][ C0] ? timerqueue_add+0x1f0/0x320 [ 286.507977][ C0] ? sched_clock_cpu+0x68/0x540 [ 286.508492][ C0] ? lock_acquire+0x2b2/0x310 [ 286.509043][ C0] ? kvm_sched_clock_read+0xd/0x20 [ 286.509607][ C0] ? handle_softirqs+0x1aa/0x7d0 [ 286.510187][ C0] handle_softirqs+0x1f2/0x7d0 [ 286.510754][ C0] ? __pfx_handle_softirqs+0x10/0x10 [ 286.511348][ C0] ? irqtime_account_irq+0x181/0x290 [ 286.511937][ C0] ? __dev_queue_xmit+0x85d/0x3450 [ 286.512510][ C0] do_softirq.part.0+0x89/0xc0 [ 286.513100][ C0] </IRQ> [ 286.513548][ C0] <TASK> [ 286.513953][ C0] __local_bh_enable_ip+0x112/0x140 [ 286.514522][ C0] ? __dev_queue_xmit+0x85d/0x3450 [ 286.515072][ C0] __dev_queue_xmit+0x872/0x3450 [ 286.515619][ C0] ? nft_do_chain+0xe16/0x15b0 [nf_tables] [ 286.516252][ C0] ? __pfx___dev_queue_xmit+0x10/0x10 [ 286.516817][ C0] ? selinux_ip_postroute+0x43c/0xc50 [ 286.517433][ C0] ? __pfx_selinux_ip_postroute+0x10/0x10 [ 286.518061][ C0] ? rcu_is_watching+0x11/0xb0 [ 286.518606][ C0] ? ip_output+0x164/0x4a0 [ 286.519149][ C0] ? rcu_is_watching+0x11/0xb0 [ 286.519671][ C0] ? ip_finish_output2+0x17d5/0x1fb0 [ 286.520258][ C0] ip_finish_output2+0xb4b/0x1fb0 [ 286.520787][ C0] ? __pfx_ip_finish_output2+0x10/0x10 [ 286.521355][ C0] ? __ip_finish_output+0x15d/0x750 [ 286.521890][ C0] ip_output+0x164/0x4a0 [ 286.522372][ C0] ? __pfx_ip_output+0x10/0x10 [ 286.522872][ C0] ? rcu_is_watching+0x11/0xb0 [ 286.523402][ C0] ? _raw_spin_unlock_irqrestore+0x4c/0x60 [ 286.524031][ C0] ? __pfx_ip_finish_output+0x10/0x10 [ 286.524605][ C0] ? __ip_queue_xmit+0x999/0x2260 [ 286.525200][ C0] ? rcu_is_watching+0x11/0xb0 [ 286.525744][ C0] ? ipv4_dst_check+0x16a/0x2b0 [ 286.526279][ C0] ? lock_release+0x217/0x2c0 [ 286.526793][ C0] __ip_queue_xmit+0x1883/0x2260 [ 286.527324][ C0] ? __skb_clone+0x54c/0x730 [ 286.527827][ C0] __tcp_transmit_skb+0x209b/0x37a0 [ 286.528374][ C0] ? __pfx___tcp_transmit_skb+0x10/0x10 [ 286.528952][ C0] ? rcu_is_watching+0x11/0xb0 [ 286.529472][ C0] ? seqcount_lockdep_reader_access.constprop.0+0x82/0x90 [ 286.530152][ C0] ? trace_hardirqs_on+0x12/0x120 [ 286.530691][ C0] tcp_write_xmit+0xb81/0x88b0 [ 286.531224][ C0] ? mod_memcg_state+0x4d/0x60 [ 286.531736][ C0] ? rcu_is_watching+0x11/0xb0 [ 286.532253][ C0] __tcp_push_pending_frames+0x90/0x320 [ 286.532826][ C0] tcp_send_fin+0x141/0xb50 [ 286.533352][ C0] ? __pfx_tcp_send_fin+0x10/0x10 [ 286.533908][ C0] ? __local_bh_enable_ip+0xab/0x140 [ 286.534495][ C0] inet_shutdown+0x243/0x320 [ 286.535077][ C0] nvme_tcp_alloc_queue+0xb3b/0x2590 [nvme_tcp] [ 286.535709][ C0] ? do_raw_spin_lock+0x129/0x260 [ 286.536314][ C0] ? __pfx_nvme_tcp_alloc_queue+0x10/0x10 [nvme_tcp] [ 286.536996][ C0] ? do_raw_spin_unlock+0x54/0x1e0 [ 286.537550][ C0] ? _raw_spin_unlock+0x29/0x50 [ 286.538127][ C0] ? do_raw_spin_lock+0x129/0x260 [ 286.538664][ C0] ? __pfx_do_raw_spin_lock+0x10/0x10 [ 286.539249][ C0] ? nvme_tcp_alloc_admin_queue+0xd5/0x340 [nvme_tcp] [ 286.539892][ C0] ? 
__wake_up+0x40/0x60 [ 286.540392][ C0] nvme_tcp_alloc_admin_queue+0xd5/0x340 [nvme_tcp] [ 286.541047][ C0] ? rcu_is_watching+0x11/0xb0 [ 286.541589][ C0] nvme_tcp_setup_ctrl+0x8b/0x7a0 [nvme_tcp] [ 286.542254][ C0] ? _raw_spin_unlock_irqrestore+0x4c/0x60 [ 286.542887][ C0] ? __pfx_nvme_tcp_setup_ctrl+0x10/0x10 [nvme_tcp] [ 286.543568][ C0] ? trace_hardirqs_on+0x12/0x120 [ 286.544166][ C0] ? _raw_spin_unlock_irqrestore+0x35/0x60 [ 286.544792][ C0] ? nvme_change_ctrl_state+0x196/0x2e0 [nvme_core] [ 286.545477][ C0] nvme_tcp_create_ctrl+0x839/0xb90 [nvme_tcp] [ 286.546126][ C0] nvmf_dev_write+0x3db/0x7e0 [nvme_fabrics] [ 286.546775][ C0] ? rw_verify_area+0x69/0x520 [ 286.547334][ C0] vfs_write+0x218/0xe90 [ 286.547854][ C0] ? do_syscall_64+0x9f/0x190 [ 286.548408][ C0] ? trace_hardirqs_on_prepare+0xdb/0x120 [ 286.549037][ C0] ? syscall_exit_to_user_mode+0x93/0x280 [ 286.549659][ C0] ? __pfx_vfs_write+0x10/0x10 [ 286.550259][ C0] ? do_syscall_64+0x9f/0x190 [ 286.550840][ C0] ? syscall_exit_to_user_mode+0x8e/0x280 [ 286.551516][ C0] ? trace_hardirqs_on_prepare+0xdb/0x120 [ 286.552180][ C0] ? syscall_exit_to_user_mode+0x93/0x280 [ 286.552834][ C0] ? ksys_read+0xf5/0x1c0 [ 286.553386][ C0] ? __pfx_ksys_read+0x10/0x10 [ 286.553964][ C0] ksys_write+0xf5/0x1c0 [ 286.554499][ C0] ? __pfx_ksys_write+0x10/0x10 [ 286.555072][ C0] ? trace_hardirqs_on_prepare+0xdb/0x120 [ 286.555698][ C0] ? syscall_exit_to_user_mode+0x93/0x280 [ 286.556319][ C0] ? do_syscall_64+0x54/0x190 [ 286.556866][ C0] do_syscall_64+0x93/0x190 [ 286.557420][ C0] ? rcu_read_unlock+0x17/0x60 [ 286.557986][ C0] ? rcu_is_watching+0x11/0xb0 [ 286.558526][ C0] ? lock_release+0x217/0x2c0 [ 286.559087][ C0] ? rcu_is_watching+0x11/0xb0 [ 286.559659][ C0] ? count_memcg_events.constprop.0+0x4a/0x60 [ 286.560476][ C0] ? exc_page_fault+0x7a/0x110 [ 286.561064][ C0] ? rcu_is_watching+0x11/0xb0 [ 286.561647][ C0] ? lock_release+0x217/0x2c0 [ 286.562257][ C0] ? do_user_addr_fault+0x171/0xa00 [ 286.562839][ C0] ? do_user_addr_fault+0x4a2/0xa00 [ 286.563453][ C0] ? irqentry_exit_to_user_mode+0x84/0x270 [ 286.564112][ C0] ? rcu_is_watching+0x11/0xb0 [ 286.564677][ C0] ? irqentry_exit_to_user_mode+0x84/0x270 [ 286.565317][ C0] ? trace_hardirqs_on_prepare+0xdb/0x120 [ 286.565922][ C0] entry_SYSCALL_64_after_hwframe+0x76/0x7e [ 286.566542][ C0] RIP: 0033:0x7fe3c05e6504 [ 286.567102][ C0] Code: c7 00 16 00 00 00 b8 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 f3 0f 1e fa 80 3d c5 8b 10 00 00 74 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 c3 0f 1f 00 55 48 89 e5 48 83 ec 20 48 89 [ 286.568931][ C0] RSP: 002b:00007fff76444f58 EFLAGS: 00000202 ORIG_RAX: 0000000000000001 [ 286.569807][ C0] RAX: ffffffffffffffda RBX: 000000003b40d930 RCX: 00007fe3c05e6504 [ 286.570621][ C0] RDX: 00000000000000cf RSI: 000000003b40d930 RDI: 0000000000000003 [ 286.571443][ C0] RBP: 0000000000000003 R08: 00000000000000cf R09: 000000003b40d930 [ 286.572246][ C0] R10: 0000000000000000 R11: 0000000000000202 R12: 000000003b40cd60 [ 286.573069][ C0] R13: 00000000000000cf R14: 00007fe3c07417f8 R15: 00007fe3c073502e [ 286.573886][ C0] </TASK>
Closes: https://lore.kernel.org/linux-nvme/5hdonndzoqa265oq3bj6iarwtfk5dewxxjtbjvn5u... Signed-off-by: Alistair Francis alistair.francis@wdc.com Reviewed-by: Sagi Grimberg sagi@grimberg.me Tested-by: Shin'ichiro Kawasaki shinichiro.kawasaki@wdc.com Signed-off-by: Christoph Hellwig hch@lst.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/target/tcp.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c index 125e22bd34e2a..eee052dbf80c1 100644 --- a/drivers/nvme/target/tcp.c +++ b/drivers/nvme/target/tcp.c @@ -1417,6 +1417,9 @@ static void nvmet_tcp_restore_socket_callbacks(struct nvmet_tcp_queue *queue) { struct socket *sock = queue->sock;
+ if (!queue->state_change) + return; + write_lock_bh(&sock->sk->sk_callback_lock); sock->sk->sk_data_ready = queue->data_ready; sock->sk->sk_state_change = queue->state_change;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jens Axboe axboe@kernel.dk
[ Upstream commit f024d3a8ded0d8d2129ae123d7a5305c29ca44ce ]
syzbot complains about the cached sq head read, and it's totally right. But we don't need to care: it's just reading fdinfo, and reading the CQ or SQ tail/head entries is known to be racy, in that they are just a view of that very instant and may of course be outdated by the time they are reported.
Annotate both the SQ head and CQ tail read with data_race() to avoid this syzbot complaint.
Link: https://lore.kernel.org/io-uring/6811f6dc.050a0220.39e3a1.0d0e.GAE@google.co... Reported-by: syzbot+3e77fd302e99f5af9394@syzkaller.appspotmail.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- io_uring/fdinfo.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/io_uring/fdinfo.c b/io_uring/fdinfo.c index ea2c2ded4e412..4a50531699777 100644 --- a/io_uring/fdinfo.c +++ b/io_uring/fdinfo.c @@ -79,11 +79,11 @@ static __cold void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, seq_printf(m, "SqMask:\t0x%x\n", sq_mask); seq_printf(m, "SqHead:\t%u\n", sq_head); seq_printf(m, "SqTail:\t%u\n", sq_tail); - seq_printf(m, "CachedSqHead:\t%u\n", ctx->cached_sq_head); + seq_printf(m, "CachedSqHead:\t%u\n", data_race(ctx->cached_sq_head)); seq_printf(m, "CqMask:\t0x%x\n", cq_mask); seq_printf(m, "CqHead:\t%u\n", cq_head); seq_printf(m, "CqTail:\t%u\n", cq_tail); - seq_printf(m, "CachedCqTail:\t%u\n", ctx->cached_cq_tail); + seq_printf(m, "CachedCqTail:\t%u\n", data_race(ctx->cached_cq_tail)); seq_printf(m, "SQEs:\t%u\n", sq_tail - sq_head); sq_entries = min(sq_tail - sq_head, ctx->sq_entries); for (i = 0; i < sq_entries; i++) {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Goldwyn Rodrigues rgoldwyn@suse.de
[ Upstream commit bc7e0975093567f51be8e1bdf4aa5900a3cf0b1e ]
The btrfs__prelim_ref trace event class lists the old and new reference arguments in the incorrect order (TP_ARGS swaps oldref and newref relative to TP_PROTO). This causes a NULL pointer dereference because oldref is passed as NULL to trace_btrfs_prelim_ref_insert().
Note, trace_btrfs_prelim_ref_insert() is being called with newref as oldref (and oldref as NULL) on purpose in order to print out the values of newref.
To reproduce: echo 1 > /sys/kernel/debug/tracing/events/btrfs/btrfs_prelim_ref_insert/enable
Perform some writeback operations.
Backtrace: BUG: kernel NULL pointer dereference, address: 0000000000000018 #PF: supervisor read access in kernel mode #PF: error_code(0x0000) - not-present page PGD 115949067 P4D 115949067 PUD 11594a067 PMD 0 Oops: Oops: 0000 [#1] SMP NOPTI CPU: 1 UID: 0 PID: 1188 Comm: fsstress Not tainted 6.15.0-rc2-tester+ #47 PREEMPT(voluntary) 7ca2cef72d5e9c600f0c7718adb6462de8149622 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.3-2-gc13ff2cd-prebuilt.qemu.org 04/01/2014 RIP: 0010:trace_event_raw_event_btrfs__prelim_ref+0x72/0x130 Code: e8 43 81 9f ff 48 85 c0 74 78 4d 85 e4 0f 84 8f 00 00 00 49 8b 94 24 c0 06 00 00 48 8b 0a 48 89 48 08 48 8b 52 08 48 89 50 10 <49> 8b 55 18 48 89 50 18 49 8b 55 20 48 89 50 20 41 0f b6 55 28 88 RSP: 0018:ffffce44820077a0 EFLAGS: 00010286 RAX: ffff8c6b403f9014 RBX: ffff8c6b55825730 RCX: 304994edf9cf506b RDX: d8b11eb7f0fdb699 RSI: ffff8c6b403f9010 RDI: ffff8c6b403f9010 RBP: 0000000000000001 R08: 0000000000000001 R09: 0000000000000010 R10: 00000000ffffffff R11: 0000000000000000 R12: ffff8c6b4e8fb000 R13: 0000000000000000 R14: ffffce44820077a8 R15: ffff8c6b4abd1540 FS: 00007f4dc6813740(0000) GS:ffff8c6c1d378000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000018 CR3: 000000010eb42000 CR4: 0000000000750ef0 PKRU: 55555554 Call Trace: <TASK> prelim_ref_insert+0x1c1/0x270 find_parent_nodes+0x12a6/0x1ee0 ? __entry_text_end+0x101f06/0x101f09 ? srso_alias_return_thunk+0x5/0xfbef5 ? srso_alias_return_thunk+0x5/0xfbef5 ? srso_alias_return_thunk+0x5/0xfbef5 ? srso_alias_return_thunk+0x5/0xfbef5 btrfs_is_data_extent_shared+0x167/0x640 ? fiemap_process_hole+0xd0/0x2c0 extent_fiemap+0xa5c/0xbc0 ? __entry_text_end+0x101f05/0x101f09 btrfs_fiemap+0x7e/0xd0 do_vfs_ioctl+0x425/0x9d0 __x64_sys_ioctl+0x75/0xc0
Signed-off-by: Goldwyn Rodrigues rgoldwyn@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- include/trace/events/btrfs.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h index 7a6c5a870d33c..31847ccae4936 100644 --- a/include/trace/events/btrfs.h +++ b/include/trace/events/btrfs.h @@ -1847,7 +1847,7 @@ DECLARE_EVENT_CLASS(btrfs__prelim_ref, TP_PROTO(const struct btrfs_fs_info *fs_info, const struct prelim_ref *oldref, const struct prelim_ref *newref, u64 tree_size), - TP_ARGS(fs_info, newref, oldref, tree_size), + TP_ARGS(fs_info, oldref, newref, tree_size),
TP_STRUCT__entry_btrfs( __field( u64, root_id )
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Johannes Berg johannes.berg@intel.com
[ Upstream commit ebedf8b7f05b9c886d68d63025db8d1b12343157 ]
For now, we need another entry for these devices; this will be changed completely for 6.16.
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=219926 Link: https://patch.msgid.link/20250506214258.2efbdc9e9a82.I31915ec252bd1c74bd53b8... Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/intel/iwlwifi/pcie/drv.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c index 7f30e6add9933..39ac9d81d10d6 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c @@ -552,6 +552,8 @@ static const struct iwl_dev_info iwl_dev_info_table[] = { IWL_DEV_INFO(0x7A70, 0x1692, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690i_name), IWL_DEV_INFO(0x7AF0, 0x1691, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690s_name), IWL_DEV_INFO(0x7AF0, 0x1692, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690i_name), + IWL_DEV_INFO(0x7F70, 0x1691, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690s_name), + IWL_DEV_INFO(0x7F70, 0x1692, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690i_name),
IWL_DEV_INFO(0x271C, 0x0214, iwl9260_2ac_cfg, iwl9260_1_name), IWL_DEV_INFO(0x7E40, 0x1691, iwl_cfg_ma_a0_gf4_a0, iwl_ax411_killer_1690s_name),
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jason Andryuk jason.andryuk@amd.com
[ Upstream commit 90989869baae47ee2aa3bcb6f6eb9fbbe4287958 ]
Make xenbus_init() allow a non-local xenstore for a PVH dom0 - it is currently forced to XS_LOCAL. With Hyperlaunch booting dom0 and a xenstore stubdom, dom0 can be handled as a regular XS_HVM following the late init path.
Ideally we'd drop the use of xen_initial_domain() and just check for the event channel instead. However, ARM has a xen,enhanced no-xenstore mode, where the event channel and PFN would both be 0. Retain the xen_initial_domain() check, and use that for an additional check when the event channel is 0.
Check the full 64bit HVM_PARAM_STORE_EVTCHN value to catch the off chance that high bits are set for the 32bit event channel.
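As a minimal standalone sketch (plain C with a made-up value, not the xenbus code), truncating such a value to int before the zero check would make a present event channel look absent, while checking the full 64-bit value does not:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Hypothetical HVM_PARAM_STORE_EVTCHN value with a bit set above the low 32 bits. */
	uint64_t v = 0x100000000ULL;
	int evtchn = (int)v;	/* 32-bit truncation: 0 */

	printf("truncated check: %s\n", !evtchn ? "looks absent" : "present");
	printf("full 64-bit check: %s\n", !v ? "looks absent" : "present");
	return 0;
}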
Signed-off-by: Jason Andryuk jason.andryuk@amd.com Change-Id: I5506da42e4c6b8e85079fefb2f193c8de17c7437 Reviewed-by: Stefano Stabellini sstabellini@kernel.org Signed-off-by: Juergen Gross jgross@suse.com Message-ID: 20250506204456.5220-1-jason.andryuk@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/xen/xenbus/xenbus_probe.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c index 25164d56c9d99..d3b6908110c6f 100644 --- a/drivers/xen/xenbus/xenbus_probe.c +++ b/drivers/xen/xenbus/xenbus_probe.c @@ -966,9 +966,15 @@ static int __init xenbus_init(void) if (xen_pv_domain()) xen_store_domain_type = XS_PV; if (xen_hvm_domain()) + { xen_store_domain_type = XS_HVM; - if (xen_hvm_domain() && xen_initial_domain()) - xen_store_domain_type = XS_LOCAL; + err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v); + if (err) + goto out_error; + xen_store_evtchn = (int)v; + if (!v && xen_initial_domain()) + xen_store_domain_type = XS_LOCAL; + } if (xen_pv_domain() && !xen_start_info->store_evtchn) xen_store_domain_type = XS_LOCAL; if (xen_pv_domain() && xen_start_info->store_evtchn) @@ -987,10 +993,6 @@ static int __init xenbus_init(void) xen_store_interface = gfn_to_virt(xen_store_gfn); break; case XS_HVM: - err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v); - if (err) - goto out_error; - xen_store_evtchn = (int)v; err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v); if (err) goto out_error;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Al Viro viro@zeniv.linux.org.uk
[ Upstream commit 250cf3693060a5f803c5f1ddc082bb06b16112a9 ]
... or we risk stealing final mntput from sync umount - raising mnt_count after umount(2) has verified that victim is not busy, but before it has set MNT_SYNC_UMOUNT; in that case __legitimize_mnt() doesn't see that it's safe to quietly undo mnt_count increment and leaves dropping the reference to caller, where it'll be a full-blown mntput().
Check under mount_lock is needed; leaving the current one done before taking that makes no sense - it's nowhere near common enough to bother with.
Reviewed-by: Christian Brauner brauner@kernel.org Signed-off-by: Al Viro viro@zeniv.linux.org.uk Signed-off-by: Sasha Levin sashal@kernel.org --- fs/namespace.c | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/fs/namespace.c b/fs/namespace.c index 0dcd57a75ad49..211a81240680d 100644 --- a/fs/namespace.c +++ b/fs/namespace.c @@ -632,12 +632,8 @@ int __legitimize_mnt(struct vfsmount *bastard, unsigned seq) smp_mb(); // see mntput_no_expire() and do_umount() if (likely(!read_seqretry(&mount_lock, seq))) return 0; - if (bastard->mnt_flags & MNT_SYNC_UMOUNT) { - mnt_add_count(mnt, -1); - return 1; - } lock_mount_hash(); - if (unlikely(bastard->mnt_flags & MNT_DOOMED)) { + if (unlikely(bastard->mnt_flags & (MNT_SYNC_UMOUNT | MNT_DOOMED))) { mnt_add_count(mnt, -1); unlock_mount_hash(); return 1;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sabrina Dubroca sd@queasysnail.net
[ Upstream commit 028363685bd0b7a19b4a820f82dd905b1dc83999 ]
The current scheme for caching the encap socket can lead to reference leaks when we try to delete the netns.
The reference chain is: xfrm_state -> encap_sk -> netns
Since the encap socket is a userspace socket, it holds a reference on the netns. If we delete the espintcp state (through flush or individual delete) before removing the netns, the reference on the socket is dropped and the netns is correctly deleted. Otherwise, the netns may not be reachable anymore (if all processes within the ns have terminated), so we cannot delete the xfrm state to drop its reference on the socket.
This patch results in a small (~2% in my tests) performance regression.
A GC-type mechanism could be added for the socket cache, to clear references if the state hasn't been used "recently", but it's a lot more complex than just not caching the socket.
Fixes: e27cca96cd68 ("xfrm: add espintcp (RFC 8229)") Signed-off-by: Sabrina Dubroca sd@queasysnail.net Reviewed-by: Simon Horman horms@kernel.org Signed-off-by: Steffen Klassert steffen.klassert@secunet.com Signed-off-by: Sasha Levin sashal@kernel.org --- include/net/xfrm.h | 1 - net/ipv4/esp4.c | 49 ++++--------------------------------------- net/ipv6/esp6.c | 49 ++++--------------------------------------- net/xfrm/xfrm_state.c | 3 --- 4 files changed, 8 insertions(+), 94 deletions(-)
diff --git a/include/net/xfrm.h b/include/net/xfrm.h index bf670929622dc..64911162ab5f4 100644 --- a/include/net/xfrm.h +++ b/include/net/xfrm.h @@ -212,7 +212,6 @@ struct xfrm_state {
/* Data for encapsulator */ struct xfrm_encap_tmpl *encap; - struct sock __rcu *encap_sk;
/* Data for care-of address */ xfrm_address_t *coaddr; diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c index 419969b268225..8f5417ff355d7 100644 --- a/net/ipv4/esp4.c +++ b/net/ipv4/esp4.c @@ -118,47 +118,16 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp) }
#ifdef CONFIG_INET_ESPINTCP -struct esp_tcp_sk { - struct sock *sk; - struct rcu_head rcu; -}; - -static void esp_free_tcp_sk(struct rcu_head *head) -{ - struct esp_tcp_sk *esk = container_of(head, struct esp_tcp_sk, rcu); - - sock_put(esk->sk); - kfree(esk); -} - static struct sock *esp_find_tcp_sk(struct xfrm_state *x) { struct xfrm_encap_tmpl *encap = x->encap; struct net *net = xs_net(x); - struct esp_tcp_sk *esk; __be16 sport, dport; - struct sock *nsk; struct sock *sk;
- sk = rcu_dereference(x->encap_sk); - if (sk && sk->sk_state == TCP_ESTABLISHED) - return sk; - spin_lock_bh(&x->lock); sport = encap->encap_sport; dport = encap->encap_dport; - nsk = rcu_dereference_protected(x->encap_sk, - lockdep_is_held(&x->lock)); - if (sk && sk == nsk) { - esk = kmalloc(sizeof(*esk), GFP_ATOMIC); - if (!esk) { - spin_unlock_bh(&x->lock); - return ERR_PTR(-ENOMEM); - } - RCU_INIT_POINTER(x->encap_sk, NULL); - esk->sk = sk; - call_rcu(&esk->rcu, esp_free_tcp_sk); - } spin_unlock_bh(&x->lock);
sk = inet_lookup_established(net, net->ipv4.tcp_death_row.hashinfo, x->id.daddr.a4, @@ -171,20 +140,6 @@ static struct sock *esp_find_tcp_sk(struct xfrm_state *x) return ERR_PTR(-EINVAL); }
- spin_lock_bh(&x->lock); - nsk = rcu_dereference_protected(x->encap_sk, - lockdep_is_held(&x->lock)); - if (encap->encap_sport != sport || - encap->encap_dport != dport) { - sock_put(sk); - sk = nsk ?: ERR_PTR(-EREMCHG); - } else if (sk == nsk) { - sock_put(sk); - } else { - rcu_assign_pointer(x->encap_sk, sk); - } - spin_unlock_bh(&x->lock); - return sk; }
@@ -207,6 +162,8 @@ static int esp_output_tcp_finish(struct xfrm_state *x, struct sk_buff *skb) err = espintcp_push_skb(sk, skb); bh_unlock_sock(sk);
+ sock_put(sk); + out: rcu_read_unlock(); return err; @@ -391,6 +348,8 @@ static struct ip_esp_hdr *esp_output_tcp_encap(struct xfrm_state *x, if (IS_ERR(sk)) return ERR_CAST(sk);
+ sock_put(sk); + *lenp = htons(len); esph = (struct ip_esp_hdr *)(lenp + 1);
diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c index a021c88d3d9b8..085a83b807afd 100644 --- a/net/ipv6/esp6.c +++ b/net/ipv6/esp6.c @@ -135,47 +135,16 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp) }
#ifdef CONFIG_INET6_ESPINTCP -struct esp_tcp_sk { - struct sock *sk; - struct rcu_head rcu; -}; - -static void esp_free_tcp_sk(struct rcu_head *head) -{ - struct esp_tcp_sk *esk = container_of(head, struct esp_tcp_sk, rcu); - - sock_put(esk->sk); - kfree(esk); -} - static struct sock *esp6_find_tcp_sk(struct xfrm_state *x) { struct xfrm_encap_tmpl *encap = x->encap; struct net *net = xs_net(x); - struct esp_tcp_sk *esk; __be16 sport, dport; - struct sock *nsk; struct sock *sk;
- sk = rcu_dereference(x->encap_sk); - if (sk && sk->sk_state == TCP_ESTABLISHED) - return sk; - spin_lock_bh(&x->lock); sport = encap->encap_sport; dport = encap->encap_dport; - nsk = rcu_dereference_protected(x->encap_sk, - lockdep_is_held(&x->lock)); - if (sk && sk == nsk) { - esk = kmalloc(sizeof(*esk), GFP_ATOMIC); - if (!esk) { - spin_unlock_bh(&x->lock); - return ERR_PTR(-ENOMEM); - } - RCU_INIT_POINTER(x->encap_sk, NULL); - esk->sk = sk; - call_rcu(&esk->rcu, esp_free_tcp_sk); - } spin_unlock_bh(&x->lock);
sk = __inet6_lookup_established(net, net->ipv4.tcp_death_row.hashinfo, &x->id.daddr.in6, @@ -188,20 +157,6 @@ static struct sock *esp6_find_tcp_sk(struct xfrm_state *x) return ERR_PTR(-EINVAL); }
- spin_lock_bh(&x->lock); - nsk = rcu_dereference_protected(x->encap_sk, - lockdep_is_held(&x->lock)); - if (encap->encap_sport != sport || - encap->encap_dport != dport) { - sock_put(sk); - sk = nsk ?: ERR_PTR(-EREMCHG); - } else if (sk == nsk) { - sock_put(sk); - } else { - rcu_assign_pointer(x->encap_sk, sk); - } - spin_unlock_bh(&x->lock); - return sk; }
@@ -224,6 +179,8 @@ static int esp_output_tcp_finish(struct xfrm_state *x, struct sk_buff *skb) err = espintcp_push_skb(sk, skb); bh_unlock_sock(sk);
+ sock_put(sk); + out: rcu_read_unlock(); return err; @@ -427,6 +384,8 @@ static struct ip_esp_hdr *esp6_output_tcp_encap(struct xfrm_state *x, if (IS_ERR(sk)) return ERR_CAST(sk);
+ sock_put(sk); + *lenp = htons(len); esph = (struct ip_esp_hdr *)(lenp + 1);
diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c index 2f4cf976b59a3..b5047a94c7d01 100644 --- a/net/xfrm/xfrm_state.c +++ b/net/xfrm/xfrm_state.c @@ -694,9 +694,6 @@ int __xfrm_state_delete(struct xfrm_state *x) net->xfrm.state_num--; spin_unlock(&net->xfrm.xfrm_state_lock);
- if (x->encap_sk) - sock_put(rcu_dereference_raw(x->encap_sk)); - xfrm_dev_state_delete(x);
/* All xfrm_state objects are created by xfrm_state_alloc.
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dave Jiang dave.jiang@intel.com
[ Upstream commit 2f30decd2f23a376d2ed73dfe4c601421edf501a ]
Add a workqueue for user submitted completion record fault processing. The workqueue creation and destruction lifetime will be tied to the user sub-driver since it will only be used when the wq is a user type.
Tested-by: Tony Zhu tony.zhu@intel.com Signed-off-by: Dave Jiang dave.jiang@intel.com Co-developed-by: Fenghua Yu fenghua.yu@intel.com Signed-off-by: Fenghua Yu fenghua.yu@intel.com Link: https://lore.kernel.org/r/20230407203143.2189681-7-fenghua.yu@intel.com Signed-off-by: Vinod Koul vkoul@kernel.org Stable-dep-of: 8dfa57aabff6 ("dmaengine: idxd: Fix allowing write() from different address spaces") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/dma/idxd/cdev.c | 11 +++++++++++ drivers/dma/idxd/idxd.h | 1 + 2 files changed, 12 insertions(+)
diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c index 9f8adb7013eba..e2a89873c6e1a 100644 --- a/drivers/dma/idxd/cdev.c +++ b/drivers/dma/idxd/cdev.c @@ -408,6 +408,13 @@ static int idxd_user_drv_probe(struct idxd_dev *idxd_dev) }
mutex_lock(&wq->wq_lock); + + wq->wq = create_workqueue(dev_name(wq_confdev(wq))); + if (!wq->wq) { + rc = -ENOMEM; + goto wq_err; + } + wq->type = IDXD_WQT_USER; rc = drv_enable_wq(wq); if (rc < 0) @@ -426,7 +433,9 @@ static int idxd_user_drv_probe(struct idxd_dev *idxd_dev) err_cdev: drv_disable_wq(wq); err: + destroy_workqueue(wq->wq); wq->type = IDXD_WQT_NONE; +wq_err: mutex_unlock(&wq->wq_lock); return rc; } @@ -439,6 +448,8 @@ static void idxd_user_drv_remove(struct idxd_dev *idxd_dev) idxd_wq_del_cdev(wq); drv_disable_wq(wq); wq->type = IDXD_WQT_NONE; + destroy_workqueue(wq->wq); + wq->wq = NULL; mutex_unlock(&wq->wq_lock); }
diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h index 14c6ef987fede..5dbb67ff1c0cb 100644 --- a/drivers/dma/idxd/idxd.h +++ b/drivers/dma/idxd/idxd.h @@ -185,6 +185,7 @@ struct idxd_wq { struct idxd_dev idxd_dev; struct idxd_cdev *idxd_cdev; struct wait_queue_head err_queue; + struct workqueue_struct *wq; struct idxd_device *idxd; int id; struct idxd_irq_entry ie;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Fenghua Yu fenghua.yu@intel.com
[ Upstream commit b022f59725f0ae846191abbd6d2e611d7f60f826 ]
Define idxd_copy_cr() to copy the completion record to the fault address in the user address space that is found by work queue (wq) and PASID.
It will be used to write the user's completion record that the hardware device is not able to write due to a user completion record page fault.
An xarray is added to associate the PASID and mm with the struct idxd_user_context so mm can be found by PASID and wq.
It is called when handling the completion record fault in a kernel thread context. Switch to the mm using kthread_use_mm() and copy the completion record to the mm via copy_to_user(). Once the copy is completed, switch back to the current mm using kthread_unuse_mm().
Suggested-by: Christoph Hellwig hch@infradead.org Suggested-by: Jason Gunthorpe jgg@nvidia.com Suggested-by: Tony Luck tony.luck@intel.com Tested-by: Tony Zhu tony.zhu@intel.com Signed-off-by: Fenghua Yu fenghua.yu@intel.com Reviewed-by: Dave Jiang dave.jiang@intel.com Link: https://lore.kernel.org/r/20230407203143.2189681-9-fenghua.yu@intel.com Signed-off-by: Vinod Koul vkoul@kernel.org Stable-dep-of: 8dfa57aabff6 ("dmaengine: idxd: Fix allowing write() from different address spaces") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/dma/idxd/cdev.c | 107 +++++++++++++++++++++++++++++++++++++-- drivers/dma/idxd/idxd.h | 6 +++ drivers/dma/idxd/init.c | 2 + drivers/dma/idxd/sysfs.c | 1 + 4 files changed, 111 insertions(+), 5 deletions(-)
diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c index e2a89873c6e1a..c7aa47f01df02 100644 --- a/drivers/dma/idxd/cdev.c +++ b/drivers/dma/idxd/cdev.c @@ -12,7 +12,9 @@ #include <linux/fs.h> #include <linux/poll.h> #include <linux/iommu.h> +#include <linux/highmem.h> #include <uapi/linux/idxd.h> +#include <linux/xarray.h> #include "registers.h" #include "idxd.h"
@@ -35,6 +37,7 @@ struct idxd_user_context { struct idxd_wq *wq; struct task_struct *task; unsigned int pasid; + struct mm_struct *mm; unsigned int flags; struct iommu_sva *sva; }; @@ -69,6 +72,19 @@ static inline struct idxd_wq *inode_wq(struct inode *inode) return idxd_cdev->wq; }
+static void idxd_xa_pasid_remove(struct idxd_user_context *ctx) +{ + struct idxd_wq *wq = ctx->wq; + void *ptr; + + mutex_lock(&wq->uc_lock); + ptr = xa_cmpxchg(&wq->upasid_xa, ctx->pasid, ctx, NULL, GFP_KERNEL); + if (ptr != (void *)ctx) + dev_warn(&wq->idxd->pdev->dev, "xarray cmpxchg failed for pasid %u\n", + ctx->pasid); + mutex_unlock(&wq->uc_lock); +} + static int idxd_cdev_open(struct inode *inode, struct file *filp) { struct idxd_user_context *ctx; @@ -109,20 +125,26 @@ static int idxd_cdev_open(struct inode *inode, struct file *filp)
pasid = iommu_sva_get_pasid(sva); if (pasid == IOMMU_PASID_INVALID) { - iommu_sva_unbind_device(sva); rc = -EINVAL; - goto failed; + goto failed_get_pasid; }
ctx->sva = sva; ctx->pasid = pasid; + ctx->mm = current->mm; + + mutex_lock(&wq->uc_lock); + rc = xa_insert(&wq->upasid_xa, pasid, ctx, GFP_KERNEL); + mutex_unlock(&wq->uc_lock); + if (rc < 0) + dev_warn(dev, "PASID entry already exist in xarray.\n");
if (wq_dedicated(wq)) { rc = idxd_wq_set_pasid(wq, pasid); if (rc < 0) { iommu_sva_unbind_device(sva); dev_err(dev, "wq set pasid failed: %d\n", rc); - goto failed; + goto failed_set_pasid; } } } @@ -131,7 +153,13 @@ static int idxd_cdev_open(struct inode *inode, struct file *filp) mutex_unlock(&wq->wq_lock); return 0;
- failed: +failed_set_pasid: + if (device_user_pasid_enabled(idxd)) + idxd_xa_pasid_remove(ctx); +failed_get_pasid: + if (device_user_pasid_enabled(idxd)) + iommu_sva_unbind_device(sva); +failed: mutex_unlock(&wq->wq_lock); kfree(ctx); return rc; @@ -162,8 +190,10 @@ static int idxd_cdev_release(struct inode *node, struct file *filep) } }
- if (ctx->sva) + if (ctx->sva) { iommu_sva_unbind_device(ctx->sva); + idxd_xa_pasid_remove(ctx); + } kfree(ctx); mutex_lock(&wq->wq_lock); idxd_wq_put(wq); @@ -496,3 +526,70 @@ void idxd_cdev_remove(void) ida_destroy(&ictx[i].minor_ida); } } + +/** + * idxd_copy_cr - copy completion record to user address space found by wq and + * PASID + * @wq: work queue + * @pasid: PASID + * @addr: user fault address to write + * @cr: completion record + * @len: number of bytes to copy + * + * This is called by a work that handles completion record fault. + * + * Return: number of bytes copied. + */ +int idxd_copy_cr(struct idxd_wq *wq, ioasid_t pasid, unsigned long addr, + void *cr, int len) +{ + struct device *dev = &wq->idxd->pdev->dev; + int left = len, status_size = 1; + struct idxd_user_context *ctx; + struct mm_struct *mm; + + mutex_lock(&wq->uc_lock); + + ctx = xa_load(&wq->upasid_xa, pasid); + if (!ctx) { + dev_warn(dev, "No user context\n"); + goto out; + } + + mm = ctx->mm; + /* + * The completion record fault handling work is running in kernel + * thread context. It temporarily switches to the mm to copy cr + * to addr in the mm. + */ + kthread_use_mm(mm); + left = copy_to_user((void __user *)addr + status_size, cr + status_size, + len - status_size); + /* + * Copy status only after the rest of completion record is copied + * successfully so that the user gets the complete completion record + * when a non-zero status is polled. + */ + if (!left) { + u8 status; + + /* + * Ensure that the completion record's status field is written + * after the rest of the completion record has been written. + * This ensures that the user receives the correct completion + * record information once polling for a non-zero status. + */ + wmb(); + status = *(u8 *)cr; + if (put_user(status, (u8 __user *)addr)) + left += status_size; + } else { + left += status_size; + } + kthread_unuse_mm(mm); + +out: + mutex_unlock(&wq->uc_lock); + + return len - left; +} diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h index 5dbb67ff1c0cb..c3ace4aed0fc5 100644 --- a/drivers/dma/idxd/idxd.h +++ b/drivers/dma/idxd/idxd.h @@ -215,6 +215,10 @@ struct idxd_wq { char name[WQ_NAME_SIZE + 1]; u64 max_xfer_bytes; u32 max_batch_size; + + /* Lock to protect upasid_xa access. */ + struct mutex uc_lock; + struct xarray upasid_xa; };
struct idxd_engine { @@ -666,6 +670,8 @@ void idxd_cdev_remove(void); int idxd_cdev_get_major(struct idxd_device *idxd); int idxd_wq_add_cdev(struct idxd_wq *wq); void idxd_wq_del_cdev(struct idxd_wq *wq); +int idxd_copy_cr(struct idxd_wq *wq, ioasid_t pasid, unsigned long addr, + void *buf, int len);
/* perfmon */ #if IS_ENABLED(CONFIG_INTEL_IDXD_PERFMON) diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c index 7cb76db5ad600..ea651d5cf332d 100644 --- a/drivers/dma/idxd/init.c +++ b/drivers/dma/idxd/init.c @@ -218,6 +218,8 @@ static int idxd_setup_wqs(struct idxd_device *idxd) } bitmap_copy(wq->opcap_bmap, idxd->opcap_bmap, IDXD_MAX_OPCAP_BITS); } + mutex_init(&wq->uc_lock); + xa_init(&wq->upasid_xa); idxd->wqs[i] = wq; }
diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c index c811757d0f97f..0689464c4816a 100644 --- a/drivers/dma/idxd/sysfs.c +++ b/drivers/dma/idxd/sysfs.c @@ -1315,6 +1315,7 @@ static void idxd_conf_wq_release(struct device *dev)
bitmap_free(wq->opcap_bmap); kfree(wq->wqcfg); + xa_destroy(&wq->upasid_xa); kfree(wq); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vinicius Costa Gomes vinicius.gomes@intel.com
[ Upstream commit 8dfa57aabff625bf445548257f7711ef294cd30e ]
Check whether the process submitting the descriptor belongs to the same address space as the one that opened the file, and reject the request otherwise.
Fixes: 6827738dc684 ("dmaengine: idxd: add a write() method for applications to submit work") Signed-off-by: Vinicius Costa Gomes vinicius.gomes@intel.com Signed-off-by: Dave Jiang dave.jiang@intel.com Link: https://lore.kernel.org/r/20250421170337.3008875-1-dave.jiang@intel.com Signed-off-by: Vinod Koul vkoul@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/dma/idxd/cdev.c | 9 +++++++++ 1 file changed, 9 insertions(+)
diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c index c7aa47f01df02..186f005bfa8fd 100644 --- a/drivers/dma/idxd/cdev.c +++ b/drivers/dma/idxd/cdev.c @@ -240,6 +240,9 @@ static int idxd_cdev_mmap(struct file *filp, struct vm_area_struct *vma) if (!idxd->user_submission_safe && !capable(CAP_SYS_RAWIO)) return -EPERM;
+ if (current->mm != ctx->mm) + return -EPERM; + rc = check_vma(wq, vma, __func__); if (rc < 0) return rc; @@ -306,6 +309,9 @@ static ssize_t idxd_cdev_write(struct file *filp, const char __user *buf, size_t ssize_t written = 0; int i;
+ if (current->mm != ctx->mm) + return -EPERM; + for (i = 0; i < len/sizeof(struct dsa_hw_desc); i++) { int rc = idxd_submit_user_descriptor(ctx, udesc + i);
@@ -326,6 +332,9 @@ static __poll_t idxd_cdev_poll(struct file *filp, struct idxd_device *idxd = wq->idxd; __poll_t out = 0;
+ if (current->mm != ctx->mm) + return -EPERM; + poll_wait(filp, &wq->err_queue, wait); spin_lock(&idxd->dev_lock); if (idxd->sw_err.valid)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Matti Lehtimäki matti.lehtimaki@gmail.com
[ Upstream commit 4ca45af0a56d00b86285d6fdd720dca3215059a7 ]
A recent change to handle platforms with only a single power domain broke pronto-v3, which requires power domains and doesn't have fallback voltage regulators in case power domains are missing. Add a check to verify the number of fallback voltage regulators before using the code which handles the single power domain situation.
Fixes: 65991ea8a6d1 ("remoteproc: qcom_wcnss: Handle platforms with only single power domain") Signed-off-by: Matti Lehtimäki matti.lehtimaki@gmail.com Tested-by: Luca Weiss luca.weiss@fairphone.com # sdm632-fairphone-fp3 Link: https://lore.kernel.org/r/20250511234026.94735-1-matti.lehtimaki@gmail.com Signed-off-by: Bjorn Andersson andersson@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/remoteproc/qcom_wcnss.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/remoteproc/qcom_wcnss.c b/drivers/remoteproc/qcom_wcnss.c index ce61e0e7cbeb8..af96541c9b69a 100644 --- a/drivers/remoteproc/qcom_wcnss.c +++ b/drivers/remoteproc/qcom_wcnss.c @@ -445,7 +445,8 @@ static int wcnss_init_regulators(struct qcom_wcnss *wcnss, if (wcnss->num_pds) { info += wcnss->num_pds; /* Handle single power domain case */ - num_vregs += num_pd_vregs - wcnss->num_pds; + if (wcnss->num_pds < num_pd_vregs) + num_vregs += num_pd_vregs - wcnss->num_pds; } else { num_vregs += num_pd_vregs; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andre Przywara andre.przywara@arm.com
[ Upstream commit 98e6da673cc6dd46ca9a599802bd2c8f83606710 ]
The D1/R528/T113 SoCs have a hidden divider of 2 in the MMC mod clocks, just like other recent SoCs. So far we did not describe that, which led to the resulting MMC clock rate being only half of its intended value.
Use a macro that allows describing a fixed post-divider, to compensate for that divisor.
This brings the MMC performance on those SoCs to its expected level, so about 23 MB/s for SD cards, instead of the 11 MB/s measured so far.
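As a small numeric sketch (plain C with made-up example rates, not the ccu driver), the effect of the hidden divider and of modelling it as a fixed post-divider of 2 looks like this:

#include <stdio.h>

int main(void)
{
	unsigned long requested = 50000000UL;	/* example: 50 MHz MMC mod clock */
	unsigned long hidden_div = 2;		/* undocumented divider in the SoC */

	/* Without describing the post-divider: the chain is programmed for the
	 * requested rate, but the hidden /2 halves what the controller sees. */
	unsigned long before = requested / hidden_div;

	/* With a fixed post-divider of 2 in the clock model: the mod clock is
	 * programmed to twice the requested rate and the hidden /2 cancels out. */
	unsigned long after = (requested * 2) / hidden_div;

	printf("before: %lu Hz, after: %lu Hz\n", before, after);
	return 0;
}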
Fixes: 35b97bb94111 ("clk: sunxi-ng: Add support for the D1 SoC clocks") Reported-by: Kuba Szczodrzyński kuba@szczodrzynski.pl Signed-off-by: Andre Przywara andre.przywara@arm.com Link: https://patch.msgid.link/20250501120631.837186-1-andre.przywara@arm.com Signed-off-by: Chen-Yu Tsai wens@csie.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/clk/sunxi-ng/ccu-sun20i-d1.c | 44 ++++++++++++++++------------ drivers/clk/sunxi-ng/ccu_mp.h | 22 ++++++++++++++ 2 files changed, 47 insertions(+), 19 deletions(-)
diff --git a/drivers/clk/sunxi-ng/ccu-sun20i-d1.c b/drivers/clk/sunxi-ng/ccu-sun20i-d1.c index cb4bf038e17f5..89d8bf4a30a26 100644 --- a/drivers/clk/sunxi-ng/ccu-sun20i-d1.c +++ b/drivers/clk/sunxi-ng/ccu-sun20i-d1.c @@ -412,19 +412,23 @@ static const struct clk_parent_data mmc0_mmc1_parents[] = { { .hw = &pll_periph0_2x_clk.common.hw }, { .hw = &pll_audio1_div2_clk.common.hw }, }; -static SUNXI_CCU_MP_DATA_WITH_MUX_GATE(mmc0_clk, "mmc0", mmc0_mmc1_parents, 0x830, - 0, 4, /* M */ - 8, 2, /* P */ - 24, 3, /* mux */ - BIT(31), /* gate */ - 0); - -static SUNXI_CCU_MP_DATA_WITH_MUX_GATE(mmc1_clk, "mmc1", mmc0_mmc1_parents, 0x834, - 0, 4, /* M */ - 8, 2, /* P */ - 24, 3, /* mux */ - BIT(31), /* gate */ - 0); +static SUNXI_CCU_MP_DATA_WITH_MUX_GATE_POSTDIV(mmc0_clk, "mmc0", + mmc0_mmc1_parents, 0x830, + 0, 4, /* M */ + 8, 2, /* P */ + 24, 3, /* mux */ + BIT(31), /* gate */ + 2, /* post-div */ + 0); + +static SUNXI_CCU_MP_DATA_WITH_MUX_GATE_POSTDIV(mmc1_clk, "mmc1", + mmc0_mmc1_parents, 0x834, + 0, 4, /* M */ + 8, 2, /* P */ + 24, 3, /* mux */ + BIT(31), /* gate */ + 2, /* post-div */ + 0);
static const struct clk_parent_data mmc2_parents[] = { { .fw_name = "hosc" }, @@ -433,12 +437,14 @@ static const struct clk_parent_data mmc2_parents[] = { { .hw = &pll_periph0_800M_clk.common.hw }, { .hw = &pll_audio1_div2_clk.common.hw }, }; -static SUNXI_CCU_MP_DATA_WITH_MUX_GATE(mmc2_clk, "mmc2", mmc2_parents, 0x838, - 0, 4, /* M */ - 8, 2, /* P */ - 24, 3, /* mux */ - BIT(31), /* gate */ - 0); +static SUNXI_CCU_MP_DATA_WITH_MUX_GATE_POSTDIV(mmc2_clk, "mmc2", mmc2_parents, + 0x838, + 0, 4, /* M */ + 8, 2, /* P */ + 24, 3, /* mux */ + BIT(31), /* gate */ + 2, /* post-div */ + 0);
static SUNXI_CCU_GATE_HWS(bus_mmc0_clk, "bus-mmc0", psi_ahb_hws, 0x84c, BIT(0), 0); diff --git a/drivers/clk/sunxi-ng/ccu_mp.h b/drivers/clk/sunxi-ng/ccu_mp.h index 6e50f3728fb5f..7d836a9fb3db3 100644 --- a/drivers/clk/sunxi-ng/ccu_mp.h +++ b/drivers/clk/sunxi-ng/ccu_mp.h @@ -52,6 +52,28 @@ struct ccu_mp { } \ }
+#define SUNXI_CCU_MP_DATA_WITH_MUX_GATE_POSTDIV(_struct, _name, _parents, \ + _reg, \ + _mshift, _mwidth, \ + _pshift, _pwidth, \ + _muxshift, _muxwidth, \ + _gate, _postdiv, _flags)\ + struct ccu_mp _struct = { \ + .enable = _gate, \ + .m = _SUNXI_CCU_DIV(_mshift, _mwidth), \ + .p = _SUNXI_CCU_DIV(_pshift, _pwidth), \ + .mux = _SUNXI_CCU_MUX(_muxshift, _muxwidth), \ + .fixed_post_div = _postdiv, \ + .common = { \ + .reg = _reg, \ + .features = CCU_FEATURE_FIXED_POSTDIV, \ + .hw.init = CLK_HW_INIT_PARENTS_DATA(_name, \ + _parents, \ + &ccu_mp_ops, \ + _flags), \ + } \ + } + #define SUNXI_CCU_MP_WITH_MUX_GATE(_struct, _name, _parents, _reg, \ _mshift, _mwidth, \ _pshift, _pwidth, \
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paul Chaignon paul.chaignon@gmail.com
[ Upstream commit 0b91fda3a1f044141e1e615456ff62508c32b202 ]
Prior to this patch, the mark is sanitized (applying the state's mask to the state's value) only on inserts when checking if a conflicting XFRM state or policy exists.
We discovered in Cilium that this same sanitization does not occur in the hot-path __xfrm_state_lookup. In the hot-path, the sk_buff's mark is simply compared to the state's value:
if ((mark & x->mark.m) != x->mark.v) continue;
Therefore, users can define unsanitized marks (e.g., 0xf42/0xf00) which will never match any packet.
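As a minimal sketch of that footgun (plain C, using the example mark above, not the kernel lookup code):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t v = 0xf42;	/* configured mark value (unsanitized) */
	uint32_t m = 0xf00;	/* configured mark mask */
	int matches = 0;

	/* The lookup compares (skb->mark & m) == v.  Masking with 0xf00 clears
	 * the low byte, so the left-hand side can only be one of 0x000, 0x100,
	 * ..., 0xf00 and can never equal 0xf42: the state matches no packet. */
	for (uint32_t masked = 0; masked <= 0xf00; masked += 0x100)
		if (masked == v)
			matches++;

	printf("possible matches: %d\n", matches);			/* 0 */
	printf("sanitized value stored by the fix: 0x%x\n", v & m);	/* 0xf00 */
	return 0;
}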
This commit updates __xfrm_state_insert and xfrm_policy_insert to store the sanitized marks, thus removing this footgun.
This has the side effect of changing the ip output, as the returned mark will have the mask applied to it when printed.
Fixes: 3d6acfa7641f ("xfrm: SA lookups with mark") Signed-off-by: Paul Chaignon paul.chaignon@gmail.com Signed-off-by: Louis DeLosSantos louis.delos.devel@gmail.com Co-developed-by: Louis DeLosSantos louis.delos.devel@gmail.com Signed-off-by: Steffen Klassert steffen.klassert@secunet.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/xfrm/xfrm_policy.c | 3 +++ net/xfrm/xfrm_state.c | 3 +++ 2 files changed, 6 insertions(+)
diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c index a022f49846879..e015ff225b27a 100644 --- a/net/xfrm/xfrm_policy.c +++ b/net/xfrm/xfrm_policy.c @@ -1597,6 +1597,9 @@ int xfrm_policy_insert(int dir, struct xfrm_policy *policy, int excl) struct xfrm_policy *delpol; struct hlist_head *chain;
+ /* Sanitize mark before store */ + policy->mark.v &= policy->mark.m; + spin_lock_bh(&net->xfrm.xfrm_policy_lock); chain = policy_hash_bysel(net, &policy->selector, policy->family, dir); if (chain) diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c index b5047a94c7d01..58c53bb1c5838 100644 --- a/net/xfrm/xfrm_state.c +++ b/net/xfrm/xfrm_state.c @@ -1275,6 +1275,9 @@ static void __xfrm_state_insert(struct xfrm_state *x)
list_add(&x->km.all, &net->xfrm.state_all);
+ /* Sanitize mark before store */ + x->mark.v &= x->mark.m; + h = xfrm_dst_hash(net, &x->id.daddr, &x->props.saddr, x->props.reqid, x->props.family); hlist_add_head_rcu(&x->bydst, net->xfrm.state_bydst + h);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dave Jiang dave.jiang@intel.com
[ Upstream commit ae74cd15ade833adc289279b5c6f12e78f64d4d7 ]
The fix blocking access from a different address space did not return a correct value for the ->poll() change. The kernel test robot reported that a return value of type __poll_t is expected rather than int. Fix this by returning POLLNVAL to indicate an invalid request.
Fixes: 8dfa57aabff6 ("dmaengine: idxd: Fix allowing write() from different address spaces") Reported-by: kernel test robot lkp@intel.com Closes: https://lore.kernel.org/oe-kbuild-all/202505081851.rwD7jVxg-lkp@intel.com/ Signed-off-by: Dave Jiang dave.jiang@intel.com Link: https://lore.kernel.org/r/20250508170548.2747425-1-dave.jiang@intel.com Signed-off-by: Vinod Koul vkoul@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/dma/idxd/cdev.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c index 186f005bfa8fd..d736ab15ade24 100644 --- a/drivers/dma/idxd/cdev.c +++ b/drivers/dma/idxd/cdev.c @@ -333,7 +333,7 @@ static __poll_t idxd_cdev_poll(struct file *filp, __poll_t out = 0;
if (current->mm != ctx->mm) - return -EPERM; + return POLLNVAL;
poll_wait(filp, &wq->err_queue, wait); spin_lock(&idxd->dev_lock);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Luiz Augusto von Dentz luiz.von.dentz@intel.com
[ Upstream commit 7af8479d9eb4319b4ba7b47a8c4d2c55af1c31e1 ]
l2cap_check_enc_key_size shall check the security level of the l2cap_chan rather than the hci_conn, since for an incoming connection request it may be different, as the hci_conn may already have been encrypted using a different security level.
Fixes: 522e9ed157e3 ("Bluetooth: l2cap: Check encryption key size on incoming connection") Signed-off-by: Luiz Augusto von Dentz luiz.von.dentz@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/bluetooth/l2cap_core.c | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c index 222105e24d2d8..cb9b1edfcea2a 100644 --- a/net/bluetooth/l2cap_core.c +++ b/net/bluetooth/l2cap_core.c @@ -1561,7 +1561,8 @@ static void l2cap_request_info(struct l2cap_conn *conn) sizeof(req), &req); }
-static bool l2cap_check_enc_key_size(struct hci_conn *hcon) +static bool l2cap_check_enc_key_size(struct hci_conn *hcon, + struct l2cap_chan *chan) { /* The minimum encryption key size needs to be enforced by the * host stack before establishing any L2CAP connections. The @@ -1575,7 +1576,7 @@ static bool l2cap_check_enc_key_size(struct hci_conn *hcon) int min_key_size = hcon->hdev->min_enc_key_size;
/* On FIPS security level, key size must be 16 bytes */ - if (hcon->sec_level == BT_SECURITY_FIPS) + if (chan->sec_level == BT_SECURITY_FIPS) min_key_size = 16;
return (!test_bit(HCI_CONN_ENCRYPT, &hcon->flags) || @@ -1603,7 +1604,7 @@ static void l2cap_do_start(struct l2cap_chan *chan) !__l2cap_no_conn_pending(chan)) return;
- if (l2cap_check_enc_key_size(conn->hcon)) + if (l2cap_check_enc_key_size(conn->hcon, chan)) l2cap_start_connection(chan); else __set_chan_timer(chan, L2CAP_DISC_TIMEOUT); @@ -1685,7 +1686,7 @@ static void l2cap_conn_start(struct l2cap_conn *conn) continue; }
- if (l2cap_check_enc_key_size(conn->hcon)) + if (l2cap_check_enc_key_size(conn->hcon, chan)) l2cap_start_connection(chan); else l2cap_chan_close(chan, ECONNREFUSED); @@ -4187,7 +4188,7 @@ static struct l2cap_chan *l2cap_connect(struct l2cap_conn *conn, /* Check if the ACL is secure enough (if not SDP) */ if (psm != cpu_to_le16(L2CAP_PSM_SDP) && (!hci_conn_check_link_mode(conn->hcon) || - !l2cap_check_enc_key_size(conn->hcon))) { + !l2cap_check_enc_key_size(conn->hcon, pchan))) { conn->disc_reason = HCI_ERROR_AUTH_FAILURE; result = L2CAP_CR_SEC_BLOCK; goto response; @@ -8418,7 +8419,7 @@ static void l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt) }
if (chan->state == BT_CONNECT) { - if (!status && l2cap_check_enc_key_size(hcon)) + if (!status && l2cap_check_enc_key_size(hcon, chan)) l2cap_start_connection(chan); else __set_chan_timer(chan, L2CAP_DISC_TIMEOUT); @@ -8428,7 +8429,7 @@ static void l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt) struct l2cap_conn_rsp rsp; __u16 res, stat;
- if (!status && l2cap_check_enc_key_size(hcon)) { + if (!status && l2cap_check_enc_key_size(hcon, chan)) { if (test_bit(FLAG_DEFER_SETUP, &chan->flags)) { res = L2CAP_CR_PEND; stat = L2CAP_CS_AUTHOR_PEND;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit 91b6dbced0ef1d680afdd69b14fc83d50ebafaf3 ]
When netfilter defrag hooks are loaded (due to the presence of conntrack rules, for example), fragmented packets entering the bridge will be defragged by the bridge's pre-routing hook (br_nf_pre_routing() -> ipv4_conntrack_defrag()).
Later on, in the bridge's post-routing hook, the defragged packet will be fragmented again. If the size of the largest fragment is larger than what the kernel has determined as the destination MTU (using ip_skb_dst_mtu()), the defragged packet will be dropped.
Before commit ac6627a28dbf ("net: ipv4: Consolidate ipv4_mtu and ip_dst_mtu_maybe_forward"), ip_skb_dst_mtu() would return dst_mtu() as the destination MTU. Assuming the dst entry attached to the packet is the bridge's fake rtable one, this would simply be the bridge's MTU (see fake_mtu()).
However, after above mentioned commit, ip_skb_dst_mtu() ends up returning the route's MTU stored in the dst entry's metrics. Ideally, in case the dst entry is the bridge's fake rtable one, this should be the bridge's MTU as the bridge takes care of updating this metric when its MTU changes (see br_change_mtu()).
Unfortunately, the last operation is a no-op given the metrics attached to the fake rtable entry are marked as read-only. Therefore, ip_skb_dst_mtu() ends up returning 1500 (the initial MTU value) and defragged packets are dropped during fragmentation when dealing with large fragments and high MTU (e.g., 9k).
Fix by moving the fake rtable entry's metrics to be per-bridge (in a similar fashion to the fake rtable entry itself) and marking them as writable, thereby allowing MTU changes to be reflected.
Fixes: 62fa8a846d7d ("net: Implement read-only protection and COW'ing of metrics.") Fixes: 33eb9873a283 ("bridge: initialize fake_rtable metrics") Reported-by: Venkat Venkatsubra venkat.x.venkatsubra@oracle.com Closes: https://lore.kernel.org/netdev/PH0PR10MB4504888284FF4CBA648197D0ACB82@PH0PR1... Tested-by: Venkat Venkatsubra venkat.x.venkatsubra@oracle.com Signed-off-by: Ido Schimmel idosch@nvidia.com Acked-by: Nikolay Aleksandrov razor@blackwall.org Link: https://patch.msgid.link/20250515084848.727706-1-idosch@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/bridge/br_nf_core.c | 7 ++----- net/bridge/br_private.h | 1 + 2 files changed, 3 insertions(+), 5 deletions(-)
diff --git a/net/bridge/br_nf_core.c b/net/bridge/br_nf_core.c index 8c69f0c95a8ed..b8c8deb87407d 100644 --- a/net/bridge/br_nf_core.c +++ b/net/bridge/br_nf_core.c @@ -65,17 +65,14 @@ static struct dst_ops fake_dst_ops = { * ipt_REJECT needs it. Future netfilter modules might * require us to fill additional fields. */ -static const u32 br_dst_default_metrics[RTAX_MAX] = { - [RTAX_MTU - 1] = 1500, -}; - void br_netfilter_rtable_init(struct net_bridge *br) { struct rtable *rt = &br->fake_rtable;
atomic_set(&rt->dst.__refcnt, 1); rt->dst.dev = br->dev; - dst_init_metrics(&rt->dst, br_dst_default_metrics, true); + dst_init_metrics(&rt->dst, br->metrics, false); + dst_metric_set(&rt->dst, RTAX_MTU, br->dev->mtu); rt->dst.flags = DST_NOXFRM | DST_FAKE_RTABLE; rt->dst.ops = &fake_dst_ops; } diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index 940de95167689..19fb505492521 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -478,6 +478,7 @@ struct net_bridge { struct rtable fake_rtable; struct rt6_info fake_rt6_info; }; + u32 metrics[RTAX_MAX]; #endif u16 group_fwd_mask; u16 group_fwd_mask_required;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jacob Keller jacob.e.keller@intel.com
[ Upstream commit bbd95160a03dbfcd01a541f25c27ddb730dfbbd5 ]
The ice_vc_repr_add_mac() function indicates that it does not store the MAC address filters in the firmware. However, it still increments vf->num_mac. This is incorrect, as vf->num_mac should represent the number of MAC filters currently programmed to firmware.
Indeed, we only perform this increment if the requested filter is a unicast address that doesn't match the existing vf->hw_lan_addr. In addition, ice_vc_repr_del_mac() does not decrement the vf->num_mac counter. This results in the counter becoming out of sync with the actual count.
As it turns out, vf->num_mac is currently only used in legacy mode without port representors. The single place where the value is checked is for enforcing a filter limit on untrusted VFs.
Upcoming patches to support VF Live Migration will use this value when determining the size of the TLV for MAC address filters. Fix the representor mode function to stop incrementing the counter incorrectly.
Fixes: ac19e03ef780 ("ice: allow process VF opcodes in different ways") Signed-off-by: Jacob Keller jacob.e.keller@intel.com Reviewed-by: Michal Swiatkowski michal.swiatkowski@linux.intel.com Reviewed-by: Simon Horman horms@kernel.org Tested-by: Sujai Buvaneswaran sujai.buvaneswaran@intel.com Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/intel/ice/ice_virtchnl.c | 1 - 1 file changed, 1 deletion(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c index 42d8e5e771b7e..fa9d928081d63 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c @@ -3551,7 +3551,6 @@ static int ice_vc_repr_add_mac(struct ice_vf *vf, u8 *msg) }
ice_vfhw_mac_add(vf, &al->list[i]); - vf->num_mac++; break; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paul Kocialkowski paulk@sys-base.io
[ Upstream commit 47653e4243f2b0a26372e481ca098936b51ec3a8 ]
While the MDIO address of the internal PHY on Allwinner sun8i chips is generally 1, of_mdio_parse_addr is used to cleanly parse the address from the device-tree instead of hardcoding it.
A commit reworking the code ditched the parsed value and hardcoded the value 1 instead, which didn't really break anything but is more fragile and not future-proof.
Restore the initial behavior using the parsed address returned from the helper.
Fixes: 634db83b8265 ("net: stmmac: dwmac-sun8i: Handle integrated/external MDIOs") Signed-off-by: Paul Kocialkowski paulk@sys-base.io Reviewed-by: Andrew Lunn andrew@lunn.ch Acked-by: Corentin LABBE clabbe.montjoie@gmail.com Tested-by: Corentin LABBE clabbe.montjoie@gmail.com Link: https://patch.msgid.link/20250519164936.4172658-1-paulk@sys-base.io Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c index f834472599f75..0921b78c6244f 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c @@ -948,7 +948,7 @@ static int sun8i_dwmac_set_syscon(struct device *dev, /* of_mdio_parse_addr returns a valid (0 ~ 31) PHY * address. No need to mask it again. */ - reg |= 1 << H3_EPHY_ADDR_SHIFT; + reg |= ret << H3_EPHY_ADDR_SHIFT; } else { /* For SoCs without internal PHY the PHY selection bit should be * set to 0 (external PHY).
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thangaraj Samynathan thangaraj.s@microchip.com
[ Upstream commit 293e38ff4e4c2ba53f3fd47d8a4a9f0f0414a7a6 ]
The SGMII_CTRL register, which specifies the active interface, was not properly restored when resuming from suspend. This led to incorrect interface selection after resume, particularly in scenarios involving the FPGA.
To fix this:
- Move the SGMII_CTRL setup out of the probe function.
- Initialize the register in the hardware initialization helper function, which is called during both device initialization and resume.
This ensures the interface configuration is consistently restored after suspend/resume cycles.
Fixes: a46d9d37c4f4f ("net: lan743x: Add support for SGMII interface") Signed-off-by: Thangaraj Samynathan thangaraj.s@microchip.com Link: https://patch.msgid.link/20250516035719.117960-1-thangaraj.s@microchip.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/microchip/lan743x_main.c | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-)
diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c index 2e69ba0143b15..fd35554191793 100644 --- a/drivers/net/ethernet/microchip/lan743x_main.c +++ b/drivers/net/ethernet/microchip/lan743x_main.c @@ -3253,6 +3253,7 @@ static int lan743x_hardware_init(struct lan743x_adapter *adapter, struct pci_dev *pdev) { struct lan743x_tx *tx; + u32 sgmii_ctl; int index; int ret;
@@ -3265,6 +3266,15 @@ static int lan743x_hardware_init(struct lan743x_adapter *adapter, spin_lock_init(&adapter->eth_syslock_spinlock); mutex_init(&adapter->sgmii_rw_lock); pci11x1x_set_rfe_rd_fifo_threshold(adapter); + sgmii_ctl = lan743x_csr_read(adapter, SGMII_CTL); + if (adapter->is_sgmii_en) { + sgmii_ctl |= SGMII_CTL_SGMII_ENABLE_; + sgmii_ctl &= ~SGMII_CTL_SGMII_POWER_DN_; + } else { + sgmii_ctl &= ~SGMII_CTL_SGMII_ENABLE_; + sgmii_ctl |= SGMII_CTL_SGMII_POWER_DN_; + } + lan743x_csr_write(adapter, SGMII_CTL, sgmii_ctl); } else { adapter->max_tx_channels = LAN743X_MAX_TX_CHANNELS; adapter->used_tx_channels = LAN743X_USED_TX_CHANNELS; @@ -3313,7 +3323,6 @@ static int lan743x_hardware_init(struct lan743x_adapter *adapter,
static int lan743x_mdiobus_init(struct lan743x_adapter *adapter) { - u32 sgmii_ctl; int ret;
adapter->mdiobus = devm_mdiobus_alloc(&adapter->pdev->dev); @@ -3325,10 +3334,6 @@ static int lan743x_mdiobus_init(struct lan743x_adapter *adapter) adapter->mdiobus->priv = (void *)adapter; if (adapter->is_pci11x1x) { if (adapter->is_sgmii_en) { - sgmii_ctl = lan743x_csr_read(adapter, SGMII_CTL); - sgmii_ctl |= SGMII_CTL_SGMII_ENABLE_; - sgmii_ctl &= ~SGMII_CTL_SGMII_POWER_DN_; - lan743x_csr_write(adapter, SGMII_CTL, sgmii_ctl); netif_dbg(adapter, drv, adapter->netdev, "SGMII operation\n"); adapter->mdiobus->probe_capabilities = MDIOBUS_C22_C45; @@ -3338,10 +3343,6 @@ static int lan743x_mdiobus_init(struct lan743x_adapter *adapter) netif_dbg(adapter, drv, adapter->netdev, "lan743x-mdiobus-c45\n"); } else { - sgmii_ctl = lan743x_csr_read(adapter, SGMII_CTL); - sgmii_ctl &= ~SGMII_CTL_SGMII_ENABLE_; - sgmii_ctl |= SGMII_CTL_SGMII_POWER_DN_; - lan743x_csr_write(adapter, SGMII_CTL, sgmii_ctl); netif_dbg(adapter, drv, adapter->netdev, "RGMII operation\n"); // Only C22 support when RGMII I/F
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pavel Begunkov asml.silence@gmail.com
[ Upstream commit a7d755ed9ce9738af3db602eb29d32774a180bc7 ]
Leaving the CQ critical section in the middle of an overflow flush can cause CQE reordering, since the cached CQ pointers are reset and any new CQE emitters that might get called in between are not going to be forced into io_cqe_cache_refill().
Fixes: eac2ca2d682f9 ("io_uring: check if we need to reschedule during overflow flush") Signed-off-by: Pavel Begunkov asml.silence@gmail.com Link: https://lore.kernel.org/r/90ba817f1a458f091f355f407de1c911d2b93bbf.174748378... Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- io_uring/io_uring.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c index f39d66589180e..ad462724246a7 100644 --- a/io_uring/io_uring.c +++ b/io_uring/io_uring.c @@ -627,6 +627,7 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force) * to care for a non-real case. */ if (need_resched()) { + ctx->cqe_sentinel = ctx->cqe_cached; io_cq_unlock_post(ctx); mutex_unlock(&ctx->uring_lock); cond_resched();
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Cong Wang xiyou.wangcong@gmail.com
[ Upstream commit 3f981138109f63232a5fb7165938d4c945cc1b9d ]
When enqueuing the first packet to an HFSC class, hfsc_enqueue() calls the child qdisc's peek() operation before incrementing sch->q.qlen and sch->qstats.backlog. If the child qdisc uses qdisc_peek_dequeued(), this may trigger an immediate dequeue and potential packet drop. In such cases, qdisc_tree_reduce_backlog() is called, but the HFSC qdisc's qlen and backlog have not yet been updated, leading to inconsistent queue accounting. This can leave an empty HFSC class in the active list, causing further consequences like use-after-free.
This patch fixes the bug by moving the increment of sch->q.qlen and sch->qstats.backlog before the call to the child qdisc's peek() operation. This ensures that queue length and backlog are always accurate when packet drops or dequeues are triggered during the peek.
Fixes: 12d0ad3be9c3 ("net/sched/sch_hfsc.c: handle corner cases where head may change invalidating calculated deadline") Reported-by: Mingi Cho mincho@theori.io Signed-off-by: Cong Wang xiyou.wangcong@gmail.com Reviewed-by: Simon Horman horms@kernel.org Link: https://patch.msgid.link/20250518222038.58538-2-xiyou.wangcong@gmail.com Reviewed-by: Jamal Hadi Salim jhs@mojatatu.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/sched/sch_hfsc.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c index fc1370c293730..ec6ee45100132 100644 --- a/net/sched/sch_hfsc.c +++ b/net/sched/sch_hfsc.c @@ -1568,6 +1568,9 @@ hfsc_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free) return err; }
+ sch->qstats.backlog += len; + sch->q.qlen++; + if (first && !cl->cl_nactive) { if (cl->cl_flags & HFSC_RSC) init_ed(cl, len); @@ -1583,9 +1586,6 @@ hfsc_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
}
- sch->qstats.backlog += len; - sch->q.qlen++; - return NET_XMIT_SUCCESS; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ratheesh Kannoth rkannoth@marvell.com
[ Upstream commit b2e3406a38f0f48b1dfb81e5bb73d243ff6af179 ]
A page pool for each rx queue enhances rx-side performance by reclaiming buffers back to each queue-specific pool. DMA mapping is done only for the first allocation of a buffer. As subsequent buffer allocations avoid DMA mapping, this results in a performance improvement.
Image            | Performance
---------------- | -----------
Vanilla          | 3 Mpps
With this change | 42 Mpps
Signed-off-by: Ratheesh Kannoth rkannoth@marvell.com Link: https://lore.kernel.org/r/20230522020404.152020-1-rkannoth@marvell.com Signed-off-by: Paolo Abeni pabeni@redhat.com Stable-dep-of: b4164de5041b ("octeontx2-pf: Add AF_XDP non-zero copy support") Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/marvell/octeontx2/Kconfig | 1 + .../marvell/octeontx2/nic/otx2_common.c | 78 ++++++++++++++++--- .../marvell/octeontx2/nic/otx2_common.h | 6 +- .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 11 ++- .../marvell/octeontx2/nic/otx2_txrx.c | 19 +++-- .../marvell/octeontx2/nic/otx2_txrx.h | 1 + .../ethernet/marvell/octeontx2/nic/qos_sq.c | 2 +- 7 files changed, 96 insertions(+), 22 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/Kconfig b/drivers/net/ethernet/marvell/octeontx2/Kconfig index 993ac180a5db8..a32d85d6f599f 100644 --- a/drivers/net/ethernet/marvell/octeontx2/Kconfig +++ b/drivers/net/ethernet/marvell/octeontx2/Kconfig @@ -32,6 +32,7 @@ config OCTEONTX2_PF tristate "Marvell OcteonTX2 NIC Physical Function driver" select OCTEONTX2_MBOX select NET_DEVLINK + select PAGE_POOL depends on (64BIT && COMPILE_TEST) || ARM64 select DIMLIB depends on PCI diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c index d05f91f97a9af..5e11599d13223 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -513,11 +513,32 @@ void otx2_config_irq_coalescing(struct otx2_nic *pfvf, int qidx) (pfvf->hw.cq_ecount_wait - 1)); }
+static int otx2_alloc_pool_buf(struct otx2_nic *pfvf, struct otx2_pool *pool, + dma_addr_t *dma) +{ + unsigned int offset = 0; + struct page *page; + size_t sz; + + sz = SKB_DATA_ALIGN(pool->rbsize); + sz = ALIGN(sz, OTX2_ALIGN); + + page = page_pool_alloc_frag(pool->page_pool, &offset, sz, GFP_ATOMIC); + if (unlikely(!page)) + return -ENOMEM; + + *dma = page_pool_get_dma_addr(page) + offset; + return 0; +} + static int __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool, dma_addr_t *dma) { u8 *buf;
+ if (pool->page_pool) + return otx2_alloc_pool_buf(pfvf, pool, dma); + buf = napi_alloc_frag_align(pool->rbsize, OTX2_ALIGN); if (unlikely(!buf)) return -ENOMEM; @@ -1206,10 +1227,31 @@ void otx2_sq_free_sqbs(struct otx2_nic *pfvf) } }
+void otx2_free_bufs(struct otx2_nic *pfvf, struct otx2_pool *pool, + u64 iova, int size) +{ + struct page *page; + u64 pa; + + pa = otx2_iova_to_phys(pfvf->iommu_domain, iova); + page = virt_to_head_page(phys_to_virt(pa)); + + if (pool->page_pool) { + page_pool_put_full_page(pool->page_pool, page, true); + } else { + dma_unmap_page_attrs(pfvf->dev, iova, size, + DMA_FROM_DEVICE, + DMA_ATTR_SKIP_CPU_SYNC); + + put_page(page); + } +} + void otx2_free_aura_ptr(struct otx2_nic *pfvf, int type) { int pool_id, pool_start = 0, pool_end = 0, size = 0; - u64 iova, pa; + struct otx2_pool *pool; + u64 iova;
if (type == AURA_NIX_SQ) { pool_start = otx2_get_pool_idx(pfvf, type, 0); @@ -1225,15 +1267,13 @@ void otx2_free_aura_ptr(struct otx2_nic *pfvf, int type) /* Free SQB and RQB pointers from the aura pool */ for (pool_id = pool_start; pool_id < pool_end; pool_id++) { iova = otx2_aura_allocptr(pfvf, pool_id); + pool = &pfvf->qset.pool[pool_id]; while (iova) { if (type == AURA_NIX_RQ) iova -= OTX2_HEAD_ROOM;
- pa = otx2_iova_to_phys(pfvf->iommu_domain, iova); - dma_unmap_page_attrs(pfvf->dev, iova, size, - DMA_FROM_DEVICE, - DMA_ATTR_SKIP_CPU_SYNC); - put_page(virt_to_page(phys_to_virt(pa))); + otx2_free_bufs(pfvf, pool, iova, size); + iova = otx2_aura_allocptr(pfvf, pool_id); } } @@ -1251,6 +1291,8 @@ void otx2_aura_pool_free(struct otx2_nic *pfvf) pool = &pfvf->qset.pool[pool_id]; qmem_free(pfvf->dev, pool->stack); qmem_free(pfvf->dev, pool->fc_addr); + page_pool_destroy(pool->page_pool); + pool->page_pool = NULL; } devm_kfree(pfvf->dev, pfvf->qset.pool); pfvf->qset.pool = NULL; @@ -1334,8 +1376,9 @@ int otx2_aura_init(struct otx2_nic *pfvf, int aura_id, }
int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id, - int stack_pages, int numptrs, int buf_size) + int stack_pages, int numptrs, int buf_size, int type) { + struct page_pool_params pp_params = { 0 }; struct npa_aq_enq_req *aq; struct otx2_pool *pool; int err; @@ -1379,6 +1422,22 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id, aq->ctype = NPA_AQ_CTYPE_POOL; aq->op = NPA_AQ_INSTOP_INIT;
+ if (type != AURA_NIX_RQ) { + pool->page_pool = NULL; + return 0; + } + + pp_params.flags = PP_FLAG_PAGE_FRAG | PP_FLAG_DMA_MAP; + pp_params.pool_size = numptrs; + pp_params.nid = NUMA_NO_NODE; + pp_params.dev = pfvf->dev; + pp_params.dma_dir = DMA_FROM_DEVICE; + pool->page_pool = page_pool_create(&pp_params); + if (IS_ERR(pool->page_pool)) { + netdev_err(pfvf->netdev, "Creation of page pool failed\n"); + return PTR_ERR(pool->page_pool); + } + return 0; }
@@ -1413,7 +1472,7 @@ int otx2_sq_aura_pool_init(struct otx2_nic *pfvf)
/* Initialize pool context */ err = otx2_pool_init(pfvf, pool_id, stack_pages, - num_sqbs, hw->sqb_size); + num_sqbs, hw->sqb_size, AURA_NIX_SQ); if (err) goto fail; } @@ -1476,7 +1535,7 @@ int otx2_rq_aura_pool_init(struct otx2_nic *pfvf) } for (pool_id = 0; pool_id < hw->rqpool_cnt; pool_id++) { err = otx2_pool_init(pfvf, pool_id, stack_pages, - num_ptrs, pfvf->rbsize); + num_ptrs, pfvf->rbsize, AURA_NIX_RQ); if (err) goto fail; } @@ -1660,7 +1719,6 @@ int otx2_nix_config_bp(struct otx2_nic *pfvf, bool enable) req->bpid_per_chan = 0; #endif
- return otx2_sync_mbox_msg(&pfvf->mbox); } EXPORT_SYMBOL(otx2_nix_config_bp); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h index c15d1864a6371..4f0ac8158ed12 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h @@ -934,7 +934,7 @@ int otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool, int otx2_rxtx_enable(struct otx2_nic *pfvf, bool enable); void otx2_ctx_disable(struct mbox *mbox, int type, bool npa); int otx2_nix_config_bp(struct otx2_nic *pfvf, bool enable); -void otx2_cleanup_rx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq); +void otx2_cleanup_rx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq, int qidx); void otx2_cleanup_tx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq); int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura); int otx2_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura); @@ -942,7 +942,7 @@ int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura); int otx2_alloc_buffer(struct otx2_nic *pfvf, struct otx2_cq_queue *cq, dma_addr_t *dma); int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id, - int stack_pages, int numptrs, int buf_size); + int stack_pages, int numptrs, int buf_size, int type); int otx2_aura_init(struct otx2_nic *pfvf, int aura_id, int pool_id, int numptrs);
@@ -1012,6 +1012,8 @@ u16 otx2_get_max_mtu(struct otx2_nic *pfvf); int otx2_handle_ntuple_tc_features(struct net_device *netdev, netdev_features_t features); int otx2_smq_flush(struct otx2_nic *pfvf, int smq); +void otx2_free_bufs(struct otx2_nic *pfvf, struct otx2_pool *pool, + u64 iova, int size);
/* tc support */ int otx2_init_tc(struct otx2_nic *nic); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index 6b7fb324e756e..8385b46736934 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -1591,7 +1591,9 @@ static void otx2_free_hw_resources(struct otx2_nic *pf) struct nix_lf_free_req *free_req; struct mbox *mbox = &pf->mbox; struct otx2_cq_queue *cq; + struct otx2_pool *pool; struct msg_req *req; + int pool_id; int qidx;
/* Ensure all SQE are processed */ @@ -1618,7 +1620,7 @@ static void otx2_free_hw_resources(struct otx2_nic *pf) for (qidx = 0; qidx < qset->cq_cnt; qidx++) { cq = &qset->cq[qidx]; if (cq->cq_type == CQ_RX) - otx2_cleanup_rx_cqes(pf, cq); + otx2_cleanup_rx_cqes(pf, cq, qidx); else otx2_cleanup_tx_cqes(pf, cq); } @@ -1629,6 +1631,13 @@ static void otx2_free_hw_resources(struct otx2_nic *pf) /* Free RQ buffer pointers*/ otx2_free_aura_ptr(pf, AURA_NIX_RQ);
+ for (qidx = 0; qidx < pf->hw.rx_queues; qidx++) { + pool_id = otx2_get_pool_idx(pf, AURA_NIX_RQ, qidx); + pool = &pf->qset.pool[pool_id]; + page_pool_destroy(pool->page_pool); + pool->page_pool = NULL; + } + otx2_free_cq_res(pf);
/* Free all ingress bandwidth profiles allocated */ diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c index e579183e52392..cc704cd3b5ae1 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c @@ -218,9 +218,6 @@ static bool otx2_skb_add_frag(struct otx2_nic *pfvf, struct sk_buff *skb, skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, va - page_address(page) + off, len - off, pfvf->rbsize); - - otx2_dma_unmap_page(pfvf, iova - OTX2_HEAD_ROOM, - pfvf->rbsize, DMA_FROM_DEVICE); return true; }
@@ -383,6 +380,8 @@ static void otx2_rcv_pkt_handler(struct otx2_nic *pfvf, if (pfvf->netdev->features & NETIF_F_RXCSUM) skb->ip_summed = CHECKSUM_UNNECESSARY;
+ skb_mark_for_recycle(skb); + napi_gro_frags(napi); }
@@ -1191,11 +1190,13 @@ bool otx2_sq_append_skb(struct net_device *netdev, struct otx2_snd_queue *sq, } EXPORT_SYMBOL(otx2_sq_append_skb);
-void otx2_cleanup_rx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq) +void otx2_cleanup_rx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq, int qidx) { struct nix_cqe_rx_s *cqe; + struct otx2_pool *pool; int processed_cqe = 0; - u64 iova, pa; + u16 pool_id; + u64 iova;
if (pfvf->xdp_prog) xdp_rxq_info_unreg(&cq->xdp_rxq); @@ -1203,6 +1204,9 @@ void otx2_cleanup_rx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq) if (otx2_nix_cq_op_status(pfvf, cq) || !cq->pend_cqe) return;
+ pool_id = otx2_get_pool_idx(pfvf, AURA_NIX_RQ, qidx); + pool = &pfvf->qset.pool[pool_id]; + while (cq->pend_cqe) { cqe = (struct nix_cqe_rx_s *)otx2_get_next_cqe(cq); processed_cqe++; @@ -1215,9 +1219,8 @@ void otx2_cleanup_rx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq) continue; } iova = cqe->sg.seg_addr - OTX2_HEAD_ROOM; - pa = otx2_iova_to_phys(pfvf->iommu_domain, iova); - otx2_dma_unmap_page(pfvf, iova, pfvf->rbsize, DMA_FROM_DEVICE); - put_page(virt_to_page(phys_to_virt(pa))); + + otx2_free_bufs(pfvf, pool, iova, pfvf->rbsize); }
/* Free CQEs to HW */ diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h index 7ab6db9a986fa..b5d689eeff80b 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h @@ -118,6 +118,7 @@ struct otx2_cq_poll { struct otx2_pool { struct qmem *stack; struct qmem *fc_addr; + struct page_pool *page_pool; u16 rbsize; };
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c index e142d43f5a62c..95a2c8e616bd8 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c @@ -63,7 +63,7 @@ static int otx2_qos_sq_aura_pool_init(struct otx2_nic *pfvf, int qidx)
/* Initialize pool context */ err = otx2_pool_init(pfvf, pool_id, stack_pages, - num_sqbs, hw->sqb_size); + num_sqbs, hw->sqb_size, AURA_NIX_SQ); if (err) goto aura_free;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Suman Ghosh sumang@marvell.com
[ Upstream commit b4164de5041b51cda3438e75bce668e2556057c3 ]
Set the XDP RX ring memory type to MEM_TYPE_PAGE_POOL so that AF_XDP works. This is needed since xdp_return_frame() internally uses page pools.
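For illustration only (not part of the patch below), a minimal sketch of registering an RX queue's memory model as MEM_TYPE_PAGE_POOL so that xdp_return_frame() can hand buffers back to the page pool; the function and variable names are made up, only xdp_rxq_info_reg() and xdp_rxq_info_reg_mem_model() are real API:

#include <linux/netdevice.h>
#include <net/xdp.h>
#include <net/page_pool.h>

static int my_register_rxq(struct net_device *netdev, struct xdp_rxq_info *rxq,
			   struct page_pool *pool, u32 queue_index)
{
	int err;

	err = xdp_rxq_info_reg(rxq, netdev, queue_index, 0);
	if (err)
		return err;

	/* Without this the default MEM_TYPE_PAGE_SHARED is assumed and
	 * xdp_return_frame() will not release pages into the pool.
	 */
	return xdp_rxq_info_reg_mem_model(rxq, MEM_TYPE_PAGE_POOL, pool);
}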
Fixes: 06059a1a9a4a ("octeontx2-pf: Add XDP support to netdev PF") Signed-off-by: Suman Ghosh sumang@marvell.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c index 5e11599d13223..59a7e6f376f47 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -988,6 +988,7 @@ static int otx2_cq_init(struct otx2_nic *pfvf, u16 qidx) int err, pool_id, non_xdp_queues; struct nix_aq_enq_req *aq; struct otx2_cq_queue *cq; + struct otx2_pool *pool;
cq = &qset->cq[qidx]; cq->cq_idx = qidx; @@ -996,8 +997,13 @@ static int otx2_cq_init(struct otx2_nic *pfvf, u16 qidx) cq->cq_type = CQ_RX; cq->cint_idx = qidx; cq->cqe_cnt = qset->rqe_cnt; - if (pfvf->xdp_prog) + if (pfvf->xdp_prog) { + pool = &qset->pool[qidx]; xdp_rxq_info_reg(&cq->xdp_rxq, pfvf->netdev, qidx, 0); + xdp_rxq_info_reg_mem_model(&cq->xdp_rxq, + MEM_TYPE_PAGE_POOL, + pool->page_pool); + } } else if (qidx < non_xdp_queues) { cq->cq_type = CQ_TX; cq->cint_idx = qidx - pfvf->hw.rx_queues;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wang Liang wangliang74@huawei.com
[ Upstream commit e279024617134c94fd3e37470156534d5f2b3472 ]
Syzbot reported a slab-use-after-free with the following call trace:
==================================================================
BUG: KASAN: slab-use-after-free in tipc_aead_encrypt_done+0x4bd/0x510 net/tipc/crypto.c:840
Read of size 8 at addr ffff88807a733000 by task kworker/1:0/25

Call Trace:
 kasan_report+0xd9/0x110 mm/kasan/report.c:601
 tipc_aead_encrypt_done+0x4bd/0x510 net/tipc/crypto.c:840
 crypto_request_complete include/crypto/algapi.h:266
 aead_request_complete include/crypto/internal/aead.h:85
 cryptd_aead_crypt+0x3b8/0x750 crypto/cryptd.c:772
 crypto_request_complete include/crypto/algapi.h:266
 cryptd_queue_worker+0x131/0x200 crypto/cryptd.c:181
 process_one_work+0x9fb/0x1b60 kernel/workqueue.c:3231

Allocated by task 8355:
 kzalloc_noprof include/linux/slab.h:778
 tipc_crypto_start+0xcc/0x9e0 net/tipc/crypto.c:1466
 tipc_init_net+0x2dd/0x430 net/tipc/core.c:72
 ops_init+0xb9/0x650 net/core/net_namespace.c:139
 setup_net+0x435/0xb40 net/core/net_namespace.c:343
 copy_net_ns+0x2f0/0x670 net/core/net_namespace.c:508
 create_new_namespaces+0x3ea/0xb10 kernel/nsproxy.c:110
 unshare_nsproxy_namespaces+0xc0/0x1f0 kernel/nsproxy.c:228
 ksys_unshare+0x419/0x970 kernel/fork.c:3323
 __do_sys_unshare kernel/fork.c:3394

Freed by task 63:
 kfree+0x12a/0x3b0 mm/slub.c:4557
 tipc_crypto_stop+0x23c/0x500 net/tipc/crypto.c:1539
 tipc_exit_net+0x8c/0x110 net/tipc/core.c:119
 ops_exit_list+0xb0/0x180 net/core/net_namespace.c:173
 cleanup_net+0x5b7/0xbf0 net/core/net_namespace.c:640
 process_one_work+0x9fb/0x1b60 kernel/workqueue.c:3231
After the tipc_crypto tx has been freed by deleting the namespace, tipc_aead_encrypt_done() may still access it from the cryptd_queue_worker workqueue.
I reproduce this issue by:

  ip netns add ns1
  ip link add veth1 type veth peer name veth2
  ip link set veth1 netns ns1
  ip netns exec ns1 tipc bearer enable media eth dev veth1
  ip netns exec ns1 tipc node set key this_is_a_master_key master
  ip netns exec ns1 tipc bearer disable media eth dev veth1
  ip netns del ns1
The key to reproducing this is that simd_aead_encrypt() is interrupted, which makes crypto_simd_usable() return false. As a result, cryptd_queue_worker is triggered and the freed tipc_crypto tx is accessed.
tipc_disc_timeout
  tipc_bearer_xmit_skb
    tipc_crypto_xmit
      tipc_aead_encrypt
        crypto_aead_encrypt              // encrypt()
          simd_aead_encrypt              // crypto_simd_usable() is false
            child = &ctx->cryptd_tfm->base;

simd_aead_encrypt
  crypto_aead_encrypt                    // encrypt()
    cryptd_aead_encrypt_enqueue
      cryptd_aead_enqueue
        cryptd_enqueue_request           // trigger cryptd_queue_worker
          queue_work_on(smp_processor_id(), cryptd_wq, &cpu_queue->work)
Fix this by holding a reference on the net namespace before the encrypt operation and dropping it once the operation has completed.
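For illustration only (not part of the patch below), a rough sketch of the reference pattern the fix applies: pin the netns before starting an asynchronous AEAD operation and drop the reference either on the synchronous path or in the completion callback. Everything except get_net()/put_net()/crypto_aead_encrypt() is made up:

#include <net/net_namespace.h>
#include <crypto/aead.h>

static int example_encrypt(struct net *net, struct aead_request *req)
{
	int rc;

	get_net(net);			/* keep the netns (and its tipc_crypto) alive */

	rc = crypto_aead_encrypt(req);
	if (rc == -EINPROGRESS || rc == -EBUSY)
		return rc;		/* async: the completion callback drops the ref */

	put_net(net);			/* synchronous completion: drop it here */
	return rc;
}

static void example_encrypt_done(struct net *net, int err)
{
	/* ... handle the completed request ... */
	put_net(net);			/* async completion: drop the reference */
}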
Reported-by: syzbot+55c12726619ff85ce1f6@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=55c12726619ff85ce1f6 Fixes: fc1b6d6de220 ("tipc: introduce TIPC encryption & authentication") Signed-off-by: Wang Liang wangliang74@huawei.com Link: https://patch.msgid.link/20250520101404.1341730-1-wangliang74@huawei.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/tipc/crypto.c | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c index 25c18f8783ce9..a9c02fac039b5 100644 --- a/net/tipc/crypto.c +++ b/net/tipc/crypto.c @@ -817,12 +817,16 @@ static int tipc_aead_encrypt(struct tipc_aead *aead, struct sk_buff *skb, goto exit; }
+ /* Get net to avoid freed tipc_crypto when delete namespace */ + get_net(aead->crypto->net); + /* Now, do encrypt */ rc = crypto_aead_encrypt(req); if (rc == -EINPROGRESS || rc == -EBUSY) return rc;
tipc_bearer_put(b); + put_net(aead->crypto->net);
exit: kfree(ctx); @@ -860,6 +864,7 @@ static void tipc_aead_encrypt_done(struct crypto_async_request *base, int err) kfree(tx_ctx); tipc_bearer_put(b); tipc_aead_put(aead); + put_net(net); }
/**
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit 0eefa27b493306928d88af6368193b134c98fd64 ]
This patch enables the LMT line for a PF/VF by setting the LMT_ENA bit in the APR_LMT_MAP_ENTRY_S structure.
Additionally, it simplifies the logic for calculating the LMTST table index by consistently using the maximum number of hardware-supported VFs (i.e., 256).
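As a rough worked example of the index math (plain C, not kernel code; the helper name is made up), with LMT_MAX_VFS fixed at 256 every PF gets a fixed-size slot in the map table regardless of how many VFs are actually enabled:

#include <stdio.h>

#define LMT_MAX_VFS		256
#define LMT_MAPTBL_ENTRY_SIZE	16

static unsigned int lmtst_tbl_index(unsigned int pf, unsigned int func)
{
	return (pf * LMT_MAX_VFS + func) * LMT_MAPTBL_ENTRY_SIZE;
}

int main(void)
{
	/* PF1's own entry (func 0) and its first VF (func 1) */
	printf("PF1:    0x%x\n", lmtst_tbl_index(1, 0));	/* 0x1000 */
	printf("PF1VF0: 0x%x\n", lmtst_tbl_index(1, 1));	/* 0x1010 */
	return 0;
}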
Fixes: 873a1e3d207a ("octeontx2-af: cn10k: Setting up lmtst map table"). Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Geetha sowjanya gakula@marvell.com Reviewed-by: Michal Swiatkowski michal.swiatkowski@linux.intel.com Link: https://patch.msgid.link/20250521060834.19780-2-gakula@marvell.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/marvell/octeontx2/af/rvu_cn10k.c | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c index f9faa5b23bb9d..6ec0609074dca 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c @@ -15,13 +15,17 @@ #define LMT_TBL_OP_WRITE 1 #define LMT_MAP_TABLE_SIZE (128 * 1024) #define LMT_MAPTBL_ENTRY_SIZE 16 +#define LMT_MAX_VFS 256 + +#define LMT_MAP_ENTRY_ENA BIT_ULL(20) +#define LMT_MAP_ENTRY_LINES GENMASK_ULL(18, 16)
/* Function to perform operations (read/write) on lmtst map table */ static int lmtst_map_table_ops(struct rvu *rvu, u32 index, u64 *val, int lmt_tbl_op) { void __iomem *lmt_map_base; - u64 tbl_base; + u64 tbl_base, cfg;
tbl_base = rvu_read64(rvu, BLKADDR_APR, APR_AF_LMT_MAP_BASE);
@@ -35,6 +39,13 @@ static int lmtst_map_table_ops(struct rvu *rvu, u32 index, u64 *val, *val = readq(lmt_map_base + index); } else { writeq((*val), (lmt_map_base + index)); + + cfg = FIELD_PREP(LMT_MAP_ENTRY_ENA, 0x1); + /* 2048 LMTLINES */ + cfg |= FIELD_PREP(LMT_MAP_ENTRY_LINES, 0x6); + + writeq(cfg, (lmt_map_base + (index + 8))); + /* Flushing the AP interceptor cache to make APR_LMT_MAP_ENTRY_S * changes effective. Write 1 for flush and read is being used as a * barrier and sets up a data dependency. Write to 0 after a write @@ -52,7 +63,7 @@ static int lmtst_map_table_ops(struct rvu *rvu, u32 index, u64 *val, #define LMT_MAP_TBL_W1_OFF 8 static u32 rvu_get_lmtst_tbl_index(struct rvu *rvu, u16 pcifunc) { - return ((rvu_get_pf(pcifunc) * rvu->hw->total_vfs) + + return ((rvu_get_pf(pcifunc) * LMT_MAX_VFS) + (pcifunc & RVU_PFVF_FUNC_MASK)) * LMT_MAPTBL_ENTRY_SIZE; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Geetha sowjanya gakula@marvell.com
[ Upstream commit a6ae7129819ad20788e610261246e71736543b8b ]
The current implementation maps the APR table using a fixed size, which can lead to incorrect mapping when the number of PFs and VFs varies. This patch corrects the mapping by calculating the APR table size dynamically based on the values configured in the APR_LMT_CFG register, ensuring accurate representation of APR entries in debugfs.
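As a rough worked example (plain C, not kernel code) of the size calculation the patch switches to, decoding the PF/VF counts from an APR_LMT_CFG-style value; the helper name and sample cfg value are made up, the field positions match the diff below:

#include <stdio.h>

#define LMT_MAPTBL_ENTRY_SIZE	16

static unsigned long apr_table_size(unsigned long long cfg)
{
	unsigned int vfs = 1u << (cfg & 0xF);		/* bits [3:0]: log2(VFs) */
	unsigned int pfs = 1u << ((cfg >> 4) & 0x7);	/* bits [6:4]: log2(PFs) */

	return (unsigned long)pfs * vfs * LMT_MAPTBL_ENTRY_SIZE;
}

int main(void)
{
	/* e.g. a cfg encoding 16 PFs and 256 VFs -> 16 * 256 * 16 = 64 KiB */
	printf("%lu bytes\n", apr_table_size((4ULL << 4) | 8ULL));
	return 0;
}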
Fixes: 0daa55d033b0 ("octeontx2-af: cn10k: debugfs for dumping LMTST map table"). Signed-off-by: Geetha sowjanya gakula@marvell.com Link: https://patch.msgid.link/20250521060834.19780-3-gakula@marvell.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c | 9 ++++++--- .../net/ethernet/marvell/octeontx2/af/rvu_debugfs.c | 11 ++++++++--- 2 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c index 6ec0609074dca..5cd45846237e2 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c @@ -13,7 +13,6 @@ /* RVU LMTST */ #define LMT_TBL_OP_READ 0 #define LMT_TBL_OP_WRITE 1 -#define LMT_MAP_TABLE_SIZE (128 * 1024) #define LMT_MAPTBL_ENTRY_SIZE 16 #define LMT_MAX_VFS 256
@@ -26,10 +25,14 @@ static int lmtst_map_table_ops(struct rvu *rvu, u32 index, u64 *val, { void __iomem *lmt_map_base; u64 tbl_base, cfg; + int pfs, vfs;
tbl_base = rvu_read64(rvu, BLKADDR_APR, APR_AF_LMT_MAP_BASE); + cfg = rvu_read64(rvu, BLKADDR_APR, APR_AF_LMT_CFG); + vfs = 1 << (cfg & 0xF); + pfs = 1 << ((cfg >> 4) & 0x7);
- lmt_map_base = ioremap_wc(tbl_base, LMT_MAP_TABLE_SIZE); + lmt_map_base = ioremap_wc(tbl_base, pfs * vfs * LMT_MAPTBL_ENTRY_SIZE); if (!lmt_map_base) { dev_err(rvu->dev, "Failed to setup lmt map table mapping!!\n"); return -ENOMEM; @@ -80,7 +83,7 @@ static int rvu_get_lmtaddr(struct rvu *rvu, u16 pcifunc,
mutex_lock(&rvu->rsrc_lock); rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_ADDR_REQ, iova); - pf = rvu_get_pf(pcifunc) & 0x1F; + pf = rvu_get_pf(pcifunc) & RVU_PFVF_PF_MASK; val = BIT_ULL(63) | BIT_ULL(14) | BIT_ULL(13) | pf << 8 | ((pcifunc & RVU_PFVF_FUNC_MASK) & 0xFF); rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_TXN_REQ, val); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c index a3c1d82032f55..aa2ab987eb752 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c @@ -580,6 +580,7 @@ static ssize_t rvu_dbg_lmtst_map_table_display(struct file *filp, u64 lmt_addr, val, tbl_base; int pf, vf, num_vfs, hw_vfs; void __iomem *lmt_map_base; + int apr_pfs, apr_vfs; int buf_size = 10240; size_t off = 0; int index = 0; @@ -595,8 +596,12 @@ static ssize_t rvu_dbg_lmtst_map_table_display(struct file *filp, return -ENOMEM;
tbl_base = rvu_read64(rvu, BLKADDR_APR, APR_AF_LMT_MAP_BASE); + val = rvu_read64(rvu, BLKADDR_APR, APR_AF_LMT_CFG); + apr_vfs = 1 << (val & 0xF); + apr_pfs = 1 << ((val >> 4) & 0x7);
- lmt_map_base = ioremap_wc(tbl_base, 128 * 1024); + lmt_map_base = ioremap_wc(tbl_base, apr_pfs * apr_vfs * + LMT_MAPTBL_ENTRY_SIZE); if (!lmt_map_base) { dev_err(rvu->dev, "Failed to setup lmt map table mapping!!\n"); kfree(buf); @@ -618,7 +623,7 @@ static ssize_t rvu_dbg_lmtst_map_table_display(struct file *filp, off += scnprintf(&buf[off], buf_size - 1 - off, "PF%d \t\t\t", pf);
- index = pf * rvu->hw->total_vfs * LMT_MAPTBL_ENTRY_SIZE; + index = pf * apr_vfs * LMT_MAPTBL_ENTRY_SIZE; off += scnprintf(&buf[off], buf_size - 1 - off, " 0x%llx\t\t", (tbl_base + index)); lmt_addr = readq(lmt_map_base + index); @@ -631,7 +636,7 @@ static ssize_t rvu_dbg_lmtst_map_table_display(struct file *filp, /* Reading num of VFs per PF */ rvu_get_pf_numvfs(rvu, pf, &num_vfs, &hw_vfs); for (vf = 0; vf < num_vfs; vf++) { - index = (pf * rvu->hw->total_vfs * 16) + + index = (pf * apr_vfs * LMT_MAPTBL_ENTRY_SIZE) + ((vf + 1) * LMT_MAPTBL_ENTRY_SIZE); off += scnprintf(&buf[off], buf_size - 1 - off, "PF%d:VF%d \t\t", pf, vf);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ivan Pravdin ipravdin.official@gmail.com
commit b2df03ed4052e97126267e8c13ad4204ea6ba9b6 upstream.
If accept(2) is called on a socket of type algif_hash with the MSG_MORE flag set and crypto_ahash_import() fails, sk2 is freed. However, it is also freed in af_alg_release(), leading to a slab-use-after-free error.
Fixes: fe869cdb89c9 ("crypto: algif_hash - User-space interface for hash operations") Cc: stable@vger.kernel.org Signed-off-by: Ivan Pravdin ipravdin.official@gmail.com Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- crypto/algif_hash.c | 4 ---- 1 file changed, 4 deletions(-)
--- a/crypto/algif_hash.c +++ b/crypto/algif_hash.c @@ -263,10 +263,6 @@ static int hash_accept(struct socket *so return err;
err = crypto_ahash_import(&ctx2->req, state); - if (err) { - sock_orphan(sk2); - sock_put(sk2); - }
return err; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dominik Grzegorzek dominik.grzegorzek@oracle.com
commit d6ebcde6d4ecf34f8495fb30516645db3aea8993 upstream.
A recent patch that addressed a UAF introduced a reference count leak: the parallel_data refcount is incremented unconditionally, regardless of the return value of queue_work(). If the work item is already queued, the incremented refcount is never decremented.
Fix this by checking the return value of queue_work() and decrementing the refcount when necessary.
Resolves:
Unreferenced object 0xffff9d9f421e3d80 (size 192):
  comm "cryptomgr_probe", pid 157, jiffies 4294694003
  hex dump (first 32 bytes):
    80 8b cf 41 9f 9d ff ff b8 97 e0 89 ff ff ff ff  ...A............
    d0 97 e0 89 ff ff ff ff 19 00 00 00 1f 88 23 00  ..............#.
  backtrace (crc 838fb36):
    __kmalloc_cache_noprof+0x284/0x320
    padata_alloc_pd+0x20/0x1e0
    padata_alloc_shell+0x3b/0xa0
    0xffffffffc040a54d
    cryptomgr_probe+0x43/0xc0
    kthread+0xf6/0x1f0
    ret_from_fork+0x2f/0x50
    ret_from_fork_asm+0x1a/0x30
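For illustration only, a minimal sketch of the get/queue/put pattern applied here: take a reference for the work item, but give it back immediately if the work was already queued, since queue_work() returning false means no new reference holder exists. padata_get_pd()/padata_put_pd() are internal helpers in kernel/padata.c added by the earlier UAF fix; the wrapper function itself is made up:

#include <linux/padata.h>
#include <linux/workqueue.h>

static void example_kick_reorder(struct parallel_data *pd,
				 struct padata_instance *pinst)
{
	padata_get_pd(pd);			/* reference owned by the queued work */
	if (!queue_work(pinst->serial_wq, &pd->reorder_work))
		padata_put_pd(pd);		/* already pending: no new owner, drop it */
}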
Fixes: dd7d37ccf6b1 ("padata: avoid UAF for reorder_work") Cc: stable@vger.kernel.org Signed-off-by: Dominik Grzegorzek dominik.grzegorzek@oracle.com Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/padata.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/kernel/padata.c +++ b/kernel/padata.c @@ -350,7 +350,8 @@ static void padata_reorder(struct parall * To avoid UAF issue, add pd ref here, and put pd ref after reorder_work finish. */ padata_get_pd(pd); - queue_work(pinst->serial_wq, &pd->reorder_work); + if (!queue_work(pinst->serial_wq, &pd->reorder_work)) + padata_put_pd(pd); } }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Carlos Sanchez carlossanchez@geotab.com
commit ef0841e4cb08754be6cb42bf97739fce5d086e5f upstream.
Allows slcan to receive short messages (typically errors) from the serial interface.
When error support was added to the slcan protocol in b32ff4668544e1333b694fcc7812b2d7397b4d6a ("can: slcan: extend the protocol with error info"), the minimum valid message size changed from 5 (the minimum standard CAN frame "tIII0") to 3 ("e1a" is a valid protocol message; it is one of the examples given in the comments for slcan_bump_err()), but the minimum message length check that predicates all decoding was not adjusted. As a result, short error messages were discarded and no error frames were generated.

This patch changes the minimum length check to the new minimum: 3 characters (excluding the terminator) is now a valid message.
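As a rough worked example (plain C, not part of the patch) of the new minimums, using the macro values from the diff below:

#include <stdio.h>
#include <string.h>

#define SLCAN_CMD_LEN		1
#define SLCAN_SFF_ID_LEN	3
#define SLCAN_DATA_LENGTH_LEN	1
#define SLCAN_ERROR_LEN		1

#define SLCAN_FRAME_MSG_LEN_MIN	\
	(SLCAN_CMD_LEN + SLCAN_SFF_ID_LEN + SLCAN_DATA_LENGTH_LEN)	/* "tIII0" -> 5 */
#define SLCAN_ERROR_MSG_LEN_MIN	\
	(SLCAN_CMD_LEN + SLCAN_ERROR_LEN + SLCAN_DATA_LENGTH_LEN)	/* "e1a"   -> 3 */

int main(void)
{
	const char *err_msg = "e1a";	/* shortest valid error message */

	/* the old "rcount > 4" guard in slcan_unesc() dropped this message */
	printf("frame min %d, error min %d, \"%s\" has length %zu\n",
	       SLCAN_FRAME_MSG_LEN_MIN, SLCAN_ERROR_MSG_LEN_MIN,
	       err_msg, strlen(err_msg));
	return 0;
}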
Signed-off-by: Carlos Sanchez carlossanchez@geotab.com Fixes: b32ff4668544 ("can: slcan: extend the protocol with error info") Reviewed-by: Vincent Mailhol mailhol.vincent@wanadoo.fr Link: https://patch.msgid.link/20250520102305.1097494-1-carlossanchez@geotab.com Cc: stable@vger.kernel.org Signed-off-by: Marc Kleine-Budde mkl@pengutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/can/slcan/slcan-core.c | 26 ++++++++++++++++++++------ 1 file changed, 20 insertions(+), 6 deletions(-)
--- a/drivers/net/can/slcan/slcan-core.c +++ b/drivers/net/can/slcan/slcan-core.c @@ -71,12 +71,21 @@ MODULE_AUTHOR("Dario Binacchi <dario.bin #define SLCAN_CMD_LEN 1 #define SLCAN_SFF_ID_LEN 3 #define SLCAN_EFF_ID_LEN 8 +#define SLCAN_DATA_LENGTH_LEN 1 +#define SLCAN_ERROR_LEN 1 #define SLCAN_STATE_LEN 1 #define SLCAN_STATE_BE_RXCNT_LEN 3 #define SLCAN_STATE_BE_TXCNT_LEN 3 -#define SLCAN_STATE_FRAME_LEN (1 + SLCAN_CMD_LEN + \ - SLCAN_STATE_BE_RXCNT_LEN + \ - SLCAN_STATE_BE_TXCNT_LEN) +#define SLCAN_STATE_MSG_LEN (SLCAN_CMD_LEN + \ + SLCAN_STATE_LEN + \ + SLCAN_STATE_BE_RXCNT_LEN + \ + SLCAN_STATE_BE_TXCNT_LEN) +#define SLCAN_ERROR_MSG_LEN_MIN (SLCAN_CMD_LEN + \ + SLCAN_ERROR_LEN + \ + SLCAN_DATA_LENGTH_LEN) +#define SLCAN_FRAME_MSG_LEN_MIN (SLCAN_CMD_LEN + \ + SLCAN_SFF_ID_LEN + \ + SLCAN_DATA_LENGTH_LEN) struct slcan { struct can_priv can;
@@ -176,6 +185,9 @@ static void slcan_bump_frame(struct slca u32 tmpid; char *cmd = sl->rbuff;
+ if (sl->rcount < SLCAN_FRAME_MSG_LEN_MIN) + return; + skb = alloc_can_skb(sl->dev, &cf); if (unlikely(!skb)) { sl->dev->stats.rx_dropped++; @@ -281,7 +293,7 @@ static void slcan_bump_state(struct slca return; }
- if (state == sl->can.state || sl->rcount < SLCAN_STATE_FRAME_LEN) + if (state == sl->can.state || sl->rcount != SLCAN_STATE_MSG_LEN) return;
cmd += SLCAN_STATE_BE_RXCNT_LEN + SLCAN_CMD_LEN + 1; @@ -328,6 +340,9 @@ static void slcan_bump_err(struct slcan bool rx_errors = false, tx_errors = false, rx_over_errors = false; int i, len;
+ if (sl->rcount < SLCAN_ERROR_MSG_LEN_MIN) + return; + /* get len from sanitized ASCII value */ len = cmd[1]; if (len >= '0' && len < '9') @@ -456,8 +471,7 @@ static void slcan_bump(struct slcan *sl) static void slcan_unesc(struct slcan *sl, unsigned char s) { if ((s == '\r') || (s == '\a')) { /* CR or BEL ends the pdu */ - if (!test_and_clear_bit(SLF_ERROR, &sl->flags) && - sl->rcount > 4) + if (!test_and_clear_bit(SLF_ERROR, &sl->flags)) slcan_bump(sl);
sl->rcount = 0;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Oliver Hartkopp socketcan@hartkopp.net
commit c2aba69d0c36a496ab4f2e81e9c2b271f2693fd7 upstream.
The CAN broadcast manager (CAN BCM) can send a sequence of CAN frames via hrtimer. The content and also the length of the sequence can be changed or reduced at runtime, in which case the 'currframe' counter is reset to zero.

Although this appeared to be a safe operation, the updates of 'currframe' can be triggered from both user space and hrtimer context in bcm_can_tx(). Anderson Nascimento created a proof of concept that triggered a KASAN slab-out-of-bounds read access, which can be prevented with a spin_lock_bh.

While reworking bcm_can_tx(), the 'count' variable has been moved into the protected section, as this variable can also be modified from both contexts.
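For illustration only (not the bcm code itself), a minimal sketch of the locking rule this patch establishes: every read or update of the currframe/count pair happens under the same lock, whether it comes from the hrtimer transmit path or from a user-space TX_SETUP update. The struct and functions are made up:

#include <linux/spinlock.h>

struct demo_op {
	spinlock_t lock;		/* protects currframe and count */
	unsigned int currframe;
	unsigned int nframes;
	unsigned int count;
};

static void demo_op_init(struct demo_op *op, unsigned int nframes)
{
	spin_lock_init(&op->lock);
	op->nframes = nframes;
	op->currframe = 0;
	op->count = 0;
}

static unsigned int demo_next_frame(struct demo_op *op)
{
	unsigned int idx;

	spin_lock_bh(&op->lock);	/* matches the spin_lock_bh() usage in the patch */
	idx = op->currframe;
	if (++op->currframe >= op->nframes)
		op->currframe = 0;
	if (op->count > 0)
		op->count--;
	spin_unlock_bh(&op->lock);

	return idx;
}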
Fixes: ffd980f976e7 ("[CAN]: Add broadcast manager (bcm) protocol") Reported-by: Anderson Nascimento anderson@allelesecurity.com Tested-by: Anderson Nascimento anderson@allelesecurity.com Reviewed-by: Marc Kleine-Budde mkl@pengutronix.de Signed-off-by: Oliver Hartkopp socketcan@hartkopp.net Link: https://patch.msgid.link/20250519125027.11900-1-socketcan@hartkopp.net Cc: stable@vger.kernel.org Signed-off-by: Marc Kleine-Budde mkl@pengutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/can/bcm.c | 66 +++++++++++++++++++++++++++++++++++++++------------------- 1 file changed, 45 insertions(+), 21 deletions(-)
--- a/net/can/bcm.c +++ b/net/can/bcm.c @@ -58,6 +58,7 @@ #include <linux/can/skb.h> #include <linux/can/bcm.h> #include <linux/slab.h> +#include <linux/spinlock.h> #include <net/sock.h> #include <net/net_namespace.h>
@@ -120,6 +121,7 @@ struct bcm_op { struct canfd_frame last_sframe; struct sock *sk; struct net_device *rx_reg_dev; + spinlock_t bcm_tx_lock; /* protect currframe/count in runtime updates */ };
struct bcm_sock { @@ -273,13 +275,18 @@ static void bcm_can_tx(struct bcm_op *op { struct sk_buff *skb; struct net_device *dev; - struct canfd_frame *cf = op->frames + op->cfsiz * op->currframe; + struct canfd_frame *cf; int err;
/* no target device? => exit */ if (!op->ifindex) return;
+ /* read currframe under lock protection */ + spin_lock_bh(&op->bcm_tx_lock); + cf = op->frames + op->cfsiz * op->currframe; + spin_unlock_bh(&op->bcm_tx_lock); + dev = dev_get_by_index(sock_net(op->sk), op->ifindex); if (!dev) { /* RFC: should this bcm_op remove itself here? */ @@ -300,6 +307,10 @@ static void bcm_can_tx(struct bcm_op *op skb->dev = dev; can_skb_set_owner(skb, op->sk); err = can_send(skb, 1); + + /* update currframe and count under lock protection */ + spin_lock_bh(&op->bcm_tx_lock); + if (!err) op->frames_abs++;
@@ -308,6 +319,11 @@ static void bcm_can_tx(struct bcm_op *op /* reached last frame? */ if (op->currframe >= op->nframes) op->currframe = 0; + + if (op->count > 0) + op->count--; + + spin_unlock_bh(&op->bcm_tx_lock); out: dev_put(dev); } @@ -404,7 +420,7 @@ static enum hrtimer_restart bcm_tx_timeo struct bcm_msg_head msg_head;
if (op->kt_ival1 && (op->count > 0)) { - op->count--; + bcm_can_tx(op); if (!op->count && (op->flags & TX_COUNTEVT)) {
/* create notification to user */ @@ -419,7 +435,6 @@ static enum hrtimer_restart bcm_tx_timeo
bcm_send_to_user(op, &msg_head, NULL, 0); } - bcm_can_tx(op);
} else if (op->kt_ival2) { bcm_can_tx(op); @@ -914,6 +929,27 @@ static int bcm_tx_setup(struct bcm_msg_h } op->flags = msg_head->flags;
+ /* only lock for unlikely count/nframes/currframe changes */ + if (op->nframes != msg_head->nframes || + op->flags & TX_RESET_MULTI_IDX || + op->flags & SETTIMER) { + + spin_lock_bh(&op->bcm_tx_lock); + + if (op->nframes != msg_head->nframes || + op->flags & TX_RESET_MULTI_IDX) { + /* potentially update changed nframes */ + op->nframes = msg_head->nframes; + /* restart multiple frame transmission */ + op->currframe = 0; + } + + if (op->flags & SETTIMER) + op->count = msg_head->count; + + spin_unlock_bh(&op->bcm_tx_lock); + } + } else { /* insert new BCM operation for the given can_id */
@@ -921,9 +957,14 @@ static int bcm_tx_setup(struct bcm_msg_h if (!op) return -ENOMEM;
+ spin_lock_init(&op->bcm_tx_lock); op->can_id = msg_head->can_id; op->cfsiz = CFSIZ(msg_head->flags); op->flags = msg_head->flags; + op->nframes = msg_head->nframes; + + if (op->flags & SETTIMER) + op->count = msg_head->count;
/* create array for CAN frames and copy the data */ if (msg_head->nframes > 1) { @@ -982,22 +1023,8 @@ static int bcm_tx_setup(struct bcm_msg_h
} /* if ((op = bcm_find_op(&bo->tx_ops, msg_head->can_id, ifindex))) */
- if (op->nframes != msg_head->nframes) { - op->nframes = msg_head->nframes; - /* start multiple frame transmission with index 0 */ - op->currframe = 0; - } - - /* check flags */ - - if (op->flags & TX_RESET_MULTI_IDX) { - /* start multiple frame transmission with index 0 */ - op->currframe = 0; - } - if (op->flags & SETTIMER) { /* set timer values */ - op->count = msg_head->count; op->ival1 = msg_head->ival1; op->ival2 = msg_head->ival2; op->kt_ival1 = bcm_timeval_to_ktime(msg_head->ival1); @@ -1014,11 +1041,8 @@ static int bcm_tx_setup(struct bcm_msg_h op->flags |= TX_ANNOUNCE; }
- if (op->flags & TX_ANNOUNCE) { + if (op->flags & TX_ANNOUNCE) bcm_can_tx(op); - if (op->count) - op->count--; - }
if (op->flags & STARTTIMER) bcm_tx_start_timer(op);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Oliver Hartkopp socketcan@hartkopp.net
commit dac5e6249159ac255dad9781793dbe5908ac9ddb upstream.
When the procfs content is generated for a bcm_op that is in the process of being removed, the procfs output might show unreliable data (UAF).
As the removal of bcm_op's is already implemented with RCU handling, this patch adds the missing rcu_read_lock() and makes sure the list entries are properly removed under RCU protection.
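For illustration only (not the bcm code itself), a minimal sketch of the RCU reader/writer pattern relied on here: readers walk the list under rcu_read_lock() with list_for_each_entry_rcu(), while the writer unlinks entries with list_del_rcu() and only frees them after a grace period. The struct and functions are made up; bcm itself defers the free via call_rcu() rather than kfree_rcu():

#include <linux/rculist.h>
#include <linux/slab.h>
#include <linux/printk.h>

struct demo_op {
	struct list_head list;
	struct rcu_head rcu;
	int can_id;
};

static void demo_show(struct list_head *ops)
{
	struct demo_op *op;

	rcu_read_lock();
	list_for_each_entry_rcu(op, ops, list)
		pr_info("op can_id %d\n", op->can_id);
	rcu_read_unlock();
}

static void demo_delete(struct demo_op *op)
{
	list_del_rcu(&op->list);	/* readers may still observe it briefly */
	kfree_rcu(op, rcu);		/* freed only after a grace period */
}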
Fixes: f1b4e32aca08 ("can: bcm: use call_rcu() instead of costly synchronize_rcu()") Reported-by: Anderson Nascimento anderson@allelesecurity.com Suggested-by: Anderson Nascimento anderson@allelesecurity.com Tested-by: Anderson Nascimento anderson@allelesecurity.com Signed-off-by: Oliver Hartkopp socketcan@hartkopp.net Link: https://patch.msgid.link/20250519125027.11900-2-socketcan@hartkopp.net Cc: stable@vger.kernel.org # >= 5.4 Signed-off-by: Marc Kleine-Budde mkl@pengutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/can/bcm.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-)
--- a/net/can/bcm.c +++ b/net/can/bcm.c @@ -207,7 +207,9 @@ static int bcm_proc_show(struct seq_file seq_printf(m, " / bound %s", bcm_proc_getifname(net, ifname, bo->ifindex)); seq_printf(m, " <<<\n");
- list_for_each_entry(op, &bo->rx_ops, list) { + rcu_read_lock(); + + list_for_each_entry_rcu(op, &bo->rx_ops, list) {
unsigned long reduction;
@@ -263,6 +265,9 @@ static int bcm_proc_show(struct seq_file seq_printf(m, "# sent %ld\n", op->frames_abs); } seq_putc(m, '\n'); + + rcu_read_unlock(); + return 0; } #endif /* CONFIG_PROC_FS */ @@ -816,7 +821,7 @@ static int bcm_delete_rx_op(struct list_ REGMASK(op->can_id), bcm_rx_handler, op);
- list_del(&op->list); + list_del_rcu(&op->list); bcm_remove_op(op); return 1; /* done */ } @@ -836,7 +841,7 @@ static int bcm_delete_tx_op(struct list_ list_for_each_entry_safe(op, n, ops, list) { if ((op->can_id == mh->can_id) && (op->ifindex == ifindex) && (op->flags & CAN_FD_FRAME) == (mh->flags & CAN_FD_FRAME)) { - list_del(&op->list); + list_del_rcu(&op->list); bcm_remove_op(op); return 1; /* done */ } @@ -1258,7 +1263,7 @@ static int bcm_rx_setup(struct bcm_msg_h bcm_rx_handler, op, "bcm", sk); if (err) { /* this bcm rx op is broken -> remove it */ - list_del(&op->list); + list_del_rcu(&op->list); bcm_remove_op(op); return err; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai tiwai@suse.de
commit 93a81ca0657758b607c3f4ba889ae806be9beb73 upstream.
The PCM OSS layer tries to clear the buffer with the silence data at initialization (or reconfiguration) of a stream with the explicit call of snd_pcm_format_set_silence() with runtime->dma_area. But this may lead to a UAF because the accessed runtime->dma_area might be freed concurrently, as it's performed outside the PCM ops.
To avoid this, move the code into the PCM core and perform it inside the buffer access lock, so that the buffer can't be changed during the operation.
Reported-by: syzbot+32d4647f551007595173@syzkaller.appspotmail.com Closes: https://lore.kernel.org/68164d8e.050a0220.11da1b.0019.GAE@google.com Cc: stable@vger.kernel.org Link: https://patch.msgid.link/20250516080817.20068-1-tiwai@suse.de Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/sound/pcm.h | 2 ++ sound/core/oss/pcm_oss.c | 3 +-- sound/core/pcm_native.c | 11 +++++++++++ 3 files changed, 14 insertions(+), 2 deletions(-)
--- a/include/sound/pcm.h +++ b/include/sound/pcm.h @@ -1429,6 +1429,8 @@ int snd_pcm_lib_mmap_iomem(struct snd_pc #define snd_pcm_lib_mmap_iomem NULL #endif
+void snd_pcm_runtime_buffer_set_silence(struct snd_pcm_runtime *runtime); + /** * snd_pcm_limit_isa_dma_size - Get the max size fitting with ISA DMA transfer * @dma: DMA number --- a/sound/core/oss/pcm_oss.c +++ b/sound/core/oss/pcm_oss.c @@ -1085,8 +1085,7 @@ static int snd_pcm_oss_change_params_loc runtime->oss.params = 0; runtime->oss.prepare = 1; runtime->oss.buffer_used = 0; - if (runtime->dma_area) - snd_pcm_format_set_silence(runtime->format, runtime->dma_area, bytes_to_samples(runtime, runtime->dma_bytes)); + snd_pcm_runtime_buffer_set_silence(runtime);
runtime->oss.period_frames = snd_pcm_alsa_frames(substream, oss_period_size);
--- a/sound/core/pcm_native.c +++ b/sound/core/pcm_native.c @@ -703,6 +703,17 @@ static void snd_pcm_buffer_access_unlock atomic_inc(&runtime->buffer_accessing); }
+/* fill the PCM buffer with the current silence format; called from pcm_oss.c */ +void snd_pcm_runtime_buffer_set_silence(struct snd_pcm_runtime *runtime) +{ + snd_pcm_buffer_access_lock(runtime); + if (runtime->dma_area) + snd_pcm_format_set_silence(runtime->format, runtime->dma_area, + bytes_to_samples(runtime, runtime->dma_bytes)); + snd_pcm_buffer_access_unlock(runtime); +} +EXPORT_SYMBOL_GPL(snd_pcm_runtime_buffer_set_silence); + #if IS_ENABLED(CONFIG_SND_PCM_OSS) #define is_oss_stream(substream) ((substream)->oss.oss) #else
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ed Burcher git@edburcher.com
commit 8d70503068510e6080c2c649cccb154f16de26c9 upstream.
The Lenovo Yoga Pro 7 (gen 10) with Realtek ALC3306 and combined CS35L56 amplifiers needs the ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN quirk to enable bass.
Signed-off-by: Ed Burcher git@edburcher.com Cc: stable@vger.kernel.org Link: https://patch.msgid.link/20250519224907.31265-2-git@edburcher.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+)
--- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -10379,6 +10379,7 @@ static const struct snd_pci_quirk alc269 SND_PCI_QUIRK(0x17aa, 0x3866, "Lenovo 13X", ALC287_FIXUP_CS35L41_I2C_2), SND_PCI_QUIRK(0x17aa, 0x3869, "Lenovo Yoga7 14IAL7", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN), SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI), + SND_PCI_QUIRK(0x17aa, 0x390d, "Lenovo Yoga Pro 7 14ASP10", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN), SND_PCI_QUIRK(0x17aa, 0x3913, "Lenovo 145", ALC236_FIXUP_LENOVO_INV_DMIC), SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC), SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ilia Gavrilov Ilia.Gavrilov@infotecs.ru
commit 239af1970bcb039a1551d2c438d113df0010c149 upstream.
For SOCK_STREAM sockets, if user buffer size (len) is less than skb size (skb->len), the remaining data from skb will be lost after calling kfree_skb().
To fix this, move the statement for partial reading above skb deletion.
Found by InfoTeCS on behalf of Linux Verification Center (linuxtesting.org)
Fixes: 30a584d944fb ("[LLX]: SOCK_DGRAM interface fixes") Cc: stable@vger.kernel.org Signed-off-by: Ilia Gavrilov Ilia.Gavrilov@infotecs.ru Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/llc/af_llc.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/net/llc/af_llc.c +++ b/net/llc/af_llc.c @@ -888,15 +888,15 @@ static int llc_ui_recvmsg(struct socket if (sk->sk_type != SOCK_STREAM) goto copy_uaddr;
+ /* Partial read */ + if (used + offset < skb_len) + continue; + if (!(flags & MSG_PEEK)) { skb_unlink(skb, &sk->sk_receive_queue); kfree_skb(skb); *seq = 0; } - - /* Partial read */ - if (used + offset < skb_len) - continue; } while (len > 0);
out:
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vladimir Moskovkin Vladimir.Moskovkin@kaspersky.com
commit 4e89a4077490f52cde652d17e32519b666abf3a6 upstream.
If the 'buf' array received from the user contains an empty string, the 'length' variable will be zero. Accessing the 'buf' array element with index 'length - 1' will result in a buffer overflow.
Add a check for an empty string.
Found by Linux Verification Center (linuxtesting.org) with SVACE.
Fixes: e8a60aa7404b ("platform/x86: Introduce support for Systems Management Driver over WMI for Dell Systems") Cc: stable@vger.kernel.org Signed-off-by: Vladimir Moskovkin Vladimir.Moskovkin@kaspersky.com Link: https://lore.kernel.org/r/39973642a4f24295b4a8fad9109c5b08@kaspersky.com Reviewed-by: Ilpo Järvinen ilpo.jarvinen@linux.intel.com Signed-off-by: Ilpo Järvinen ilpo.jarvinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c +++ b/drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c @@ -45,7 +45,7 @@ static ssize_t current_password_store(st int length;
length = strlen(buf); - if (buf[length-1] == '\n') + if (length && buf[length - 1] == '\n') length--;
/* firmware does verifiation of min/max password length,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: feijuan.li feijuan.li@samsung.com
commit 6692dbc15e5ed40a3aa037aced65d7b8826c58cd upstream.
When DP is connected to a device with HDR capability, the hdr structure is filled. When it is then connected to another sink device without HDR capability, the stale HDR info still exists.
Fixes: e85959d6cbe0 ("drm: Parse HDR metadata info from EDID") Cc: stable@vger.kernel.org # v5.3+ Signed-off-by: "feijuan.li" feijuan.li@samsung.com Reviewed-by: Jani Nikula jani.nikula@intel.com Link: https://lore.kernel.org/r/20250514063511.4151780-1-feijuan.li@samsung.com Signed-off-by: Jani Nikula jani.nikula@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/drm_edid.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/gpu/drm/drm_edid.c +++ b/drivers/gpu/drm/drm_edid.c @@ -6164,6 +6164,7 @@ static void drm_reset_display_info(struc info->has_hdmi_infoframe = false; info->rgb_quant_range_selectable = false; memset(&info->hdmi, 0, sizeof(info->hdmi)); + memset(&connector->hdr_sink_metadata, 0, sizeof(connector->hdr_sink_metadata));
info->edid_hdmi_rgb444_dc_modes = 0; info->edid_hdmi_ycbcr444_dc_modes = 0;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wang Zhaolong wangzhaolong1@huawei.com
commit a7a8fe56e932a36f43e031b398aef92341bf5ea0 upstream.
There is a race condition in concurrent readdir processing that may access the rsp buffer after it has been released, triggering the following KASAN warning.
==================================================================
BUG: KASAN: slab-use-after-free in cifs_fill_dirent+0xb03/0xb60 [cifs]
Read of size 4 at addr ffff8880099b819c by task a.out/342975

CPU: 2 UID: 0 PID: 342975 Comm: a.out Not tainted 6.15.0-rc6+ #240 PREEMPT(full)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.1-2.fc37 04/01/2014
Call Trace:
 <TASK>
 dump_stack_lvl+0x53/0x70
 print_report+0xce/0x640
 kasan_report+0xb8/0xf0
 cifs_fill_dirent+0xb03/0xb60 [cifs]
 cifs_readdir+0x12cb/0x3190 [cifs]
 iterate_dir+0x1a1/0x520
 __x64_sys_getdents+0x134/0x220
 do_syscall_64+0x4b/0x110
 entry_SYSCALL_64_after_hwframe+0x76/0x7e
RIP: 0033:0x7f996f64b9f9
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0d f7 c3 0c 00 f7 d8 64 89 8
RSP: 002b:00007f996f53de78 EFLAGS: 00000207 ORIG_RAX: 000000000000004e
RAX: ffffffffffffffda RBX: 00007f996f53ecdc RCX: 00007f996f64b9f9
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000003
RBP: 00007f996f53dea0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000207 R12: ffffffffffffff88
R13: 0000000000000000 R14: 00007ffc8cd9a500 R15: 00007f996f51e000
 </TASK>

Allocated by task 408:
 kasan_save_stack+0x20/0x40
 kasan_save_track+0x14/0x30
 __kasan_slab_alloc+0x6e/0x70
 kmem_cache_alloc_noprof+0x117/0x3d0
 mempool_alloc_noprof+0xf2/0x2c0
 cifs_buf_get+0x36/0x80 [cifs]
 allocate_buffers+0x1d2/0x330 [cifs]
 cifs_demultiplex_thread+0x22b/0x2690 [cifs]
 kthread+0x394/0x720
 ret_from_fork+0x34/0x70
 ret_from_fork_asm+0x1a/0x30

Freed by task 342979:
 kasan_save_stack+0x20/0x40
 kasan_save_track+0x14/0x30
 kasan_save_free_info+0x3b/0x60
 __kasan_slab_free+0x37/0x50
 kmem_cache_free+0x2b8/0x500
 cifs_buf_release+0x3c/0x70 [cifs]
 cifs_readdir+0x1c97/0x3190 [cifs]
 iterate_dir+0x1a1/0x520
 __x64_sys_getdents64+0x134/0x220
 do_syscall_64+0x4b/0x110
 entry_SYSCALL_64_after_hwframe+0x76/0x7e

The buggy address belongs to the object at ffff8880099b8000
 which belongs to the cache cifs_request of size 16588
The buggy address is located 412 bytes inside of
 freed 16588-byte region [ffff8880099b8000, ffff8880099bc0cc)
The buggy address belongs to the physical page: page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x99b8 head: order:3 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0 anon flags: 0x80000000000040(head|node=0|zone=1) page_type: f5(slab) raw: 0080000000000040 ffff888001e03400 0000000000000000 dead000000000001 raw: 0000000000000000 0000000000010001 00000000f5000000 0000000000000000 head: 0080000000000040 ffff888001e03400 0000000000000000 dead000000000001 head: 0000000000000000 0000000000010001 00000000f5000000 0000000000000000 head: 0080000000000003 ffffea0000266e01 00000000ffffffff 00000000ffffffff head: ffffffffffffffff 0000000000000000 00000000ffffffff 0000000000000008 page dumped because: kasan: bad access detected
Memory state around the buggy address: ffff8880099b8080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff8880099b8100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff8880099b8180: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^ ffff8880099b8200: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff8880099b8280: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ==================================================================
POC is available in the link [1].
The sequence that triggers the problem is as follows:
Process 1                                Process 2
-----------------------------------------------------------------
cifs_readdir
/* file->private_data == NULL */
 initiate_cifs_search
  cifsFile = kzalloc(sizeof(struct cifsFileInfo), GFP_KERNEL);
  smb2_query_dir_first ->query_dir_first()
   SMB2_query_directory
    SMB2_query_directory_init
    cifs_send_recv
    smb2_parse_query_directory
     srch_inf->ntwrk_buf_start = (char *)rsp;
     srch_inf->srch_entries_start = (char *)rsp + ...
     srch_inf->last_entry = (char *)rsp + ...
     srch_inf->smallBuf = true;
 find_cifs_entry
  /* if (cfile->srch_inf.ntwrk_buf_start) */
  cifs_small_buf_release(cfile->srch_inf // free

                                         cifs_readdir ->iterate_shared()
                                         /* file->private_data != NULL */
                                          find_cifs_entry
                                           /* in while (...) loop */
                                           smb2_query_dir_next ->query_dir_next()
                                            SMB2_query_directory
                                             SMB2_query_directory_init
                                             cifs_send_recv
                                              compound_send_recv
                                               smb_send_rqst
                                                __smb_send_rqst
                                                 rc = -ERESTARTSYS;
                                                 /* if (fatal_signal_pending()) */
                                                 goto out;
                                             return rc
                                            /* if (cfile->srch_inf.last_entry) */
                                            cifs_save_resume_key()
                                             cifs_fill_dirent // UAF
                                           /* if (rc) */
                                           return -ENOENT;
Fix this by ensuring the return code is checked before using pointers from the srch_inf.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=220131 [1] Fixes: a364bc0b37f1 ("[CIFS] fix saving of resume key before CIFSFindNext") Cc: stable@vger.kernel.org Reviewed-by: Paulo Alcantara (Red Hat) pc@manguebit.com Signed-off-by: Wang Zhaolong wangzhaolong1@huawei.com Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/smb/client/readdir.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/fs/smb/client/readdir.c +++ b/fs/smb/client/readdir.c @@ -788,11 +788,11 @@ find_cifs_entry(const unsigned int xid, rc = server->ops->query_dir_next(xid, tcon, &cfile->fid, search_flags, &cfile->srch_inf); + if (rc) + return -ENOENT; /* FindFirst/Next set last_entry to NULL on malformed reply */ if (cfile->srch_inf.last_entry) cifs_save_resume_key(cfile->srch_inf.last_entry, cfile); - if (rc) - return -ENOENT; } if (index_to_find < cfile->srch_inf.index_of_last_entry) { /* we found the buffer that contains the entry */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wang Zhaolong wangzhaolong1@huawei.com
commit e48f9d849bfdec276eebf782a84fd4dfbe1c14c0 upstream.
Multiple pointers in struct cifs_search_info (ntwrk_buf_start, srch_entries_start, and last_entry) point to the same allocated buffer. However, when freeing this buffer, only ntwrk_buf_start was set to NULL, while the other pointers remained pointing to freed memory.
This is defensive programming to prevent potential issues with stale pointers. While the active UAF vulnerability is fixed by the previous patch, this change ensures consistent pointer state and more robust error handling.
Signed-off-by: Wang Zhaolong wangzhaolong1@huawei.com Cc: stable@vger.kernel.org Reviewed-by: Paulo Alcantara (Red Hat) pc@manguebit.com Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/smb/client/readdir.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/fs/smb/client/readdir.c +++ b/fs/smb/client/readdir.c @@ -765,7 +765,10 @@ find_cifs_entry(const unsigned int xid, else cifs_buf_release(cfile->srch_inf. ntwrk_buf_start); + /* Reset all pointers to the network buffer to prevent stale references */ cfile->srch_inf.ntwrk_buf_start = NULL; + cfile->srch_inf.srch_entries_start = NULL; + cfile->srch_inf.last_entry = NULL; } rc = initiate_cifs_search(xid, file, full_path); if (rc) {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mario Limonciello mario.limonciello@amd.com
commit 7e7cb7a13c81073d38a10fa7b450d23712281ec4 upstream.
commit 68bfdc8dc0a1a ("drm/amd: Keep display off while going into S4") attempted to keep displays off during the S4 sequence by not resuming display IP. This however leads to hangs because DRM clients such as the console can still try to access registers while display IP is not resumed.
Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/4155 Fixes: 68bfdc8dc0a1a ("drm/amd: Keep display off while going into S4") Reviewed-by: Alex Deucher alexander.deucher@amd.com Link: https://lore.kernel.org/r/20250522141328.115095-1-mario.limonciello@amd.com Signed-off-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com (cherry picked from commit e485502c37b097b0bd773baa7e2741bf7bd2909a) Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 5 ----- 1 file changed, 5 deletions(-)
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -2897,11 +2897,6 @@ static int dm_resume(void *handle)
return 0; } - - /* leave display off for S4 sequence */ - if (adev->in_s4) - return 0; - /* Recreate dc_state - DC invalidates it when setting power state to S3. */ dc_release_state(dm_state->context); dm_state->context = dc_create_state(dm->dc);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Breno Leitao leitao@debian.org
commit 06717a7b6c86514dbd6ab322e8083ffaa4db5712 upstream.
I am seeing a soft lockup on certain machine types when a cgroup OOMs. This is happening because killing the processes on certain machines might be very slow, which causes the soft lockup and RCU stalls. This usually happens when the cgroup has MANY processes and memory.oom.group is set.
Example I am seeing in real production:
[462012.244552] Memory cgroup out of memory: Killed process 3370438 (crosvm) ....
....
[462037.318059] Memory cgroup out of memory: Killed process 4171372 (adb) ....
[462037.348314] watchdog: BUG: soft lockup - CPU#64 stuck for 26s! [stat_manager-ag:1618982]
....
A quick look at why this is so slow suggests it is related to the serial console flush on certain machine types. For all the crashes I saw, the target CPU was at console_flush_all().

In the case above, there are thousands of processes in the cgroup, and it soft locks up before reaching the 1024-iteration limit in the code (which would call cond_resched()). So, calling cond_resched() only every 1024 iterations is not sufficient.
Remove the counter-based conditional rescheduling logic and call cond_resched() unconditionally after each task iteration, after fn() is called. This avoids the lockup independently of how slow fn() is.
Link: https://lkml.kernel.org/r/20250523-memcg_fix-v1-1-ad3eafb60477@debian.org Fixes: ade81479c7dd ("memcg: fix soft lockup in the OOM process") Signed-off-by: Breno Leitao leitao@debian.org Suggested-by: Rik van Riel riel@surriel.com Acked-by: Shakeel Butt shakeel.butt@linux.dev Cc: Michael van der Westhuizen rmikey@meta.com Cc: Usama Arif usamaarif642@gmail.com Cc: Pavel Begunkov asml.silence@gmail.com Cc: Chen Ridong chenridong@huawei.com Cc: Greg Kroah-Hartman gregkh@linuxfoundation.org Cc: Johannes Weiner hannes@cmpxchg.org Cc: Michal Hocko mhocko@kernel.org Cc: Michal Hocko mhocko@suse.com Cc: Muchun Song muchun.song@linux.dev Cc: Roman Gushchin roman.gushchin@linux.dev Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/memcontrol.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-)
--- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -1242,7 +1242,6 @@ int mem_cgroup_scan_tasks(struct mem_cgr { struct mem_cgroup *iter; int ret = 0; - int i = 0;
BUG_ON(memcg == root_mem_cgroup);
@@ -1252,10 +1251,9 @@ int mem_cgroup_scan_tasks(struct mem_cgr
css_task_iter_start(&iter->css, CSS_TASK_ITER_PROCS, &it); while (!ret && (task = css_task_iter_next(&it))) { - /* Avoid potential softlockup warning */ - if ((++i & 1023) == 0) - cond_resched(); ret = fn(task, arg); + /* Avoid potential softlockup warning */ + cond_resched(); } css_task_iter_end(&it); if (ret) {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tianyang Zhang zhangtianyang@loongson.cn
commit e05741fb10c38d70bbd7ec12b23c197b6355d519 upstream.
__alloc_pages_slowpath has no change detection for ac->nodemask in the retry path, while cpuset can modify it in parallel. For processes that set their mempolicy to MPOL_BIND, this results in ac->nodemask changing; should_reclaim_retry() then makes its decision based on the latest nodemask and jumps to retry, while get_page_from_freelist() only traverses the zonelist from ac->preferred_zoneref, which was selected using an expired nodemask. This may cause infinite retries in some cases.
cpu 64:
__alloc_pages_slowpath {
        /* ..... */
retry:
        /* ac->nodemask = 0x1, ac->preferred->zone->nid = 1 */
        if (alloc_flags & ALLOC_KSWAPD)
                wake_all_kswapds(order, gfp_mask, ac);

        /* cpu 1:
        cpuset_write_resmask
            update_nodemask
                update_nodemasks_hier
                    update_tasks_nodemask
                        mpol_rebind_task
                            mpol_rebind_policy
                                mpol_rebind_nodemask
                // mempolicy->nodes has been modified,
                // which ac->nodemask point to
        */

        /* ac->nodemask = 0x3, ac->preferred->zone->nid = 1 */
        if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
                                 did_some_progress > 0, &no_progress_loops))
                goto retry;
}
Simultaneously starting multiple cpuset01 tests from LTP can quickly reproduce this issue on a multi-node server when maximum memory pressure is reached and swap is enabled.
Link: https://lkml.kernel.org/r/20250416082405.20988-1-zhangtianyang@loongson.cn Fixes: c33d6c06f60f ("mm, page_alloc: avoid looking up the first zone in a zonelist twice") Signed-off-by: Tianyang Zhang zhangtianyang@loongson.cn Reviewed-by: Suren Baghdasaryan surenb@google.com Reviewed-by: Vlastimil Babka vbabka@suse.cz Cc: Michal Hocko mhocko@suse.com Cc: Brendan Jackman jackmanb@google.com Cc: Johannes Weiner hannes@cmpxchg.org Cc: Zi Yan ziy@nvidia.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/page_alloc.c | 8 ++++++++ 1 file changed, 8 insertions(+)
--- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -5195,6 +5195,14 @@ restart: }
retry: + /* + * Deal with possible cpuset update races or zonelist updates to avoid + * infinite retries. + */ + if (check_retry_cpuset(cpuset_mems_cookie, ac) || + check_retry_zonelist(zonelist_iter_cookie)) + goto restart; + /* Ensure kswapd doesn't accidentally go to sleep as long as we loop */ if (alloc_flags & ALLOC_KSWAPD) wake_all_kswapds(order, gfp_mask, ac);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jernej Skrabec jernej.skrabec@gmail.com
[ Upstream commit 573f99c7585f597630f14596550c79e73ffaeef4 ]
This reverts commit 531fdbeedeb89bd32018a35c6e137765c9cc9e97.
Hardware that uses I2C wasn't designed with high speeds in mind, so communication with PMIC via RSB can intermittently fail. Go back to I2C as higher speed and efficiency isn't worth the trouble.
Fixes: 531fdbeedeb8 ("arm64: dts: allwinner: h6: Use RSB for AXP805 PMIC connection") Link: https://github.com/LibreELEC/LibreELEC.tv/issues/7731 Signed-off-by: Jernej Skrabec jernej.skrabec@gmail.com Link: https://patch.msgid.link/20250413135848.67283-1-jernej.skrabec@gmail.com Signed-off-by: Chen-Yu Tsai wens@csie.org Signed-off-by: Sasha Levin sashal@kernel.org --- .../dts/allwinner/sun50i-h6-beelink-gs1.dts | 38 +++++++++---------- .../dts/allwinner/sun50i-h6-orangepi-3.dts | 14 +++---- .../dts/allwinner/sun50i-h6-orangepi.dtsi | 22 +++++------ 3 files changed, 37 insertions(+), 37 deletions(-)
diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts b/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts index 381d58cea092d..c854c7e310519 100644 --- a/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts +++ b/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts @@ -151,28 +151,12 @@ vcc-pg-supply = <®_aldo1>; };
-&r_ir { - linux,rc-map-name = "rc-beelink-gs1"; - status = "okay"; -}; - -&r_pio { - /* - * FIXME: We can't add that supply for now since it would - * create a circular dependency between pinctrl, the regulator - * and the RSB Bus. - * - * vcc-pl-supply = <®_aldo1>; - */ - vcc-pm-supply = <®_aldo1>; -}; - -&r_rsb { +&r_i2c { status = "okay";
- axp805: pmic@745 { + axp805: pmic@36 { compatible = "x-powers,axp805", "x-powers,axp806"; - reg = <0x745>; + reg = <0x36>; interrupt-parent = <&r_intc>; interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_LOW>; interrupt-controller; @@ -290,6 +274,22 @@ }; };
+&r_ir { + linux,rc-map-name = "rc-beelink-gs1"; + status = "okay"; +}; + +&r_pio { + /* + * PL0 and PL1 are used for PMIC I2C + * don't enable the pl-supply else + * it will fail at boot + * + * vcc-pl-supply = <®_aldo1>; + */ + vcc-pm-supply = <®_aldo1>; +}; + &spdif { pinctrl-names = "default"; pinctrl-0 = <&spdif_tx_pin>; diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6-orangepi-3.dts b/arch/arm64/boot/dts/allwinner/sun50i-h6-orangepi-3.dts index 6fc65e8db2206..8c476e089185b 100644 --- a/arch/arm64/boot/dts/allwinner/sun50i-h6-orangepi-3.dts +++ b/arch/arm64/boot/dts/allwinner/sun50i-h6-orangepi-3.dts @@ -175,16 +175,12 @@ vcc-pg-supply = <®_vcc_wifi_io>; };
-&r_ir { - status = "okay"; -}; - -&r_rsb { +&r_i2c { status = "okay";
- axp805: pmic@745 { + axp805: pmic@36 { compatible = "x-powers,axp805", "x-powers,axp806"; - reg = <0x745>; + reg = <0x36>; interrupt-parent = <&r_intc>; interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_LOW>; interrupt-controller; @@ -295,6 +291,10 @@ }; };
+&r_ir { + status = "okay"; +}; + &rtc { clocks = <&ext_osc32k>; }; diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6-orangepi.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-h6-orangepi.dtsi index 92745128fcfeb..4ec4996592bef 100644 --- a/arch/arm64/boot/dts/allwinner/sun50i-h6-orangepi.dtsi +++ b/arch/arm64/boot/dts/allwinner/sun50i-h6-orangepi.dtsi @@ -112,20 +112,12 @@ vcc-pg-supply = <®_aldo1>; };
-&r_ir { - status = "okay"; -}; - -&r_pio { - vcc-pm-supply = <®_bldo3>; -}; - -&r_rsb { +&r_i2c { status = "okay";
- axp805: pmic@745 { + axp805: pmic@36 { compatible = "x-powers,axp805", "x-powers,axp806"; - reg = <0x745>; + reg = <0x36>; interrupt-parent = <&r_intc>; interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_LOW>; interrupt-controller; @@ -240,6 +232,14 @@ }; };
+&r_ir { + status = "okay"; +}; + +&r_pio { + vcc-pm-supply = <®_bldo3>; +}; + &rtc { clocks = <&ext_osc32k>; };
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Namjae Jeon linkinjeon@kernel.org
[ Upstream commit 1f4bbedd4e5a69b01cde2cc21d01151ab2d0884f ]
If there is no stream data in the file, v_len is zero, so if the position (*pos) is also zero, the stream write will fail the stream write position validation check. This patch reorganizes the stream write position validation.
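For reference, here is a minimal userspace sketch of the corrected clamping arithmetic described above; XATTR_SIZE_MAX and the roles of *pos and count follow the patch, while the helper itself is hypothetical:

#include <errno.h>
#include <stddef.h>

#define XATTR_SIZE_MAX 65536	/* upper bound on the backing xattr */

/* Clamp a stream write so it never runs past XATTR_SIZE_MAX. */
static int clamp_stream_write(long long pos, size_t *count)
{
	if (pos >= XATTR_SIZE_MAX)	/* the position itself is out of bounds */
		return -EINVAL;

	if (pos + (long long)*count > XATTR_SIZE_MAX)
		*count = XATTR_SIZE_MAX - pos;	/* remaining space, not the overflow amount */

	return 0;
}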
Fixes: 0ca6df4f40cf ("ksmbd: prevent out-of-bounds stream writes by validating *pos") Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/smb/server/vfs.c | 14 ++++++-------- 1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c index cf1b241a15789..fa647b75fba8a 100644 --- a/fs/smb/server/vfs.c +++ b/fs/smb/server/vfs.c @@ -423,10 +423,15 @@ static int ksmbd_vfs_stream_write(struct ksmbd_file *fp, char *buf, loff_t *pos, ksmbd_debug(VFS, "write stream data pos : %llu, count : %zd\n", *pos, count);
+ if (*pos >= XATTR_SIZE_MAX) { + pr_err("stream write position %lld is out of bounds\n", *pos); + return -EINVAL; + } + size = *pos + count; if (size > XATTR_SIZE_MAX) { size = XATTR_SIZE_MAX; - count = (*pos + count) - XATTR_SIZE_MAX; + count = XATTR_SIZE_MAX - *pos; }
v_len = ksmbd_vfs_getcasexattr(user_ns, @@ -440,13 +445,6 @@ static int ksmbd_vfs_stream_write(struct ksmbd_file *fp, char *buf, loff_t *pos, goto out; }
- if (v_len <= *pos) { - pr_err("stream write position %lld is out of bounds (stream length: %zd)\n", - *pos, v_len); - err = -EINVAL; - goto out; - } - if (v_len < size) { wbuf = kvzalloc(size, GFP_KERNEL); if (!wbuf) {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Larisa Grigore larisa.grigore@nxp.com
[ Upstream commit 283ae0c65e9c592f4a1ba4f31917f5e766da7f31 ]
DSPI registers are NOT contiguous; some registers are reserved, and accessing them from userspace will trigger an external abort. Add a regmap register access table to avoid the abort shown below.
For example on S32G:
# cat /sys/kernel/debug/regmap/401d8000.spi/registers
Internal error: synchronous external abort: 96000210 [#1] PREEMPT SMP
...
Call trace:
 regmap_mmio_read32le+0x24/0x48
 regmap_mmio_read+0x48/0x70
 _regmap_bus_reg_read+0x38/0x48
 _regmap_read+0x68/0x1b0
 regmap_read+0x50/0x78
 regmap_read_debugfs+0x120/0x338
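As a rough illustration of the fix (with made-up offsets, not the driver's actual register map), an access table is essentially an allow-list of ranges that a register offset can be checked against before the MMIO read is issued:

#include <stdbool.h>
#include <stddef.h>

struct reg_range { unsigned int start, end; };

/* Hypothetical allow-list: anything outside these ranges is reserved. */
static const struct reg_range readable[] = {
	{ 0x00, 0x00 },
	{ 0x08, 0x3c },
	{ 0x7c, 0x88 },
};

static bool reg_is_readable(unsigned int offset)
{
	for (size_t i = 0; i < sizeof(readable) / sizeof(readable[0]); i++)
		if (offset >= readable[i].start && offset <= readable[i].end)
			return true;

	return false;	/* reserved register: skip it instead of faulting */
}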
Fixes: 1acbdeb92c87 ("spi/fsl-dspi: Convert to use regmap and add big-endian support") Co-developed-by: Xulin Sun xulin.sun@windriver.com Signed-off-by: Xulin Sun xulin.sun@windriver.com Signed-off-by: Larisa Grigore larisa.grigore@nxp.com Signed-off-by: James Clark james.clark@linaro.org Link: https://patch.msgid.link/20250522-james-nxp-spi-v2-1-bea884630cfb@linaro.org Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/spi/spi-fsl-dspi.c | 20 +++++++++++++++++++- 1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c index 01930b52c4fb8..3f9609edac944 100644 --- a/drivers/spi/spi-fsl-dspi.c +++ b/drivers/spi/spi-fsl-dspi.c @@ -1,7 +1,7 @@ // SPDX-License-Identifier: GPL-2.0+ // // Copyright 2013 Freescale Semiconductor, Inc. -// Copyright 2020 NXP +// Copyright 2020-2025 NXP // // Freescale DSPI driver // This file contains a driver for the Freescale DSPI @@ -1128,6 +1128,20 @@ static int dspi_resume(struct device *dev)
static SIMPLE_DEV_PM_OPS(dspi_pm, dspi_suspend, dspi_resume);
+static const struct regmap_range dspi_yes_ranges[] = { + regmap_reg_range(SPI_MCR, SPI_MCR), + regmap_reg_range(SPI_TCR, SPI_CTAR(3)), + regmap_reg_range(SPI_SR, SPI_TXFR3), + regmap_reg_range(SPI_RXFR0, SPI_RXFR3), + regmap_reg_range(SPI_CTARE(0), SPI_CTARE(3)), + regmap_reg_range(SPI_SREX, SPI_SREX), +}; + +static const struct regmap_access_table dspi_access_table = { + .yes_ranges = dspi_yes_ranges, + .n_yes_ranges = ARRAY_SIZE(dspi_yes_ranges), +}; + static const struct regmap_range dspi_volatile_ranges[] = { regmap_reg_range(SPI_MCR, SPI_TCR), regmap_reg_range(SPI_SR, SPI_SR), @@ -1145,6 +1159,8 @@ static const struct regmap_config dspi_regmap_config = { .reg_stride = 4, .max_register = 0x88, .volatile_table = &dspi_volatile_table, + .rd_table = &dspi_access_table, + .wr_table = &dspi_access_table, };
static const struct regmap_range dspi_xspi_volatile_ranges[] = { @@ -1166,6 +1182,8 @@ static const struct regmap_config dspi_xspi_regmap_config[] = { .reg_stride = 4, .max_register = 0x13c, .volatile_table = &dspi_xspi_volatile_table, + .rd_table = &dspi_access_table, + .wr_table = &dspi_access_table, }, { .name = "pushr",
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bogdan-Gabriel Roman bogdan-gabriel.roman@nxp.com
[ Upstream commit 8a30a6d35a11ff5ccdede7d6740765685385a917 ]
The XSPI mode implementation in this driver still uses the EOQ flag to signal the last word in a transmission and deassert the PCS signal. However, at speeds lower than ~200 kHz, the PCS signal seems to remain asserted even when SR[EOQF] = 1 indicates the end of a transmission. This is a problem for target devices that require deassertion of the PCS signal between transfers.
Hence, this commit 'forces' the deassertion of the PCS by stopping the module through MCR[HALT] after completing a new transfer. According to the reference manual, the module stops or transitions from the Running state to the Stopped state after the current frame when any one of the following conditions exists:
- The value of SR[EOQF] = 1.
- The chip is in Debug mode and the value of MCR[FRZ] = 1.
- The value of MCR[HALT] = 1.
This shouldn't be done if the last transfer in the message has cs_change set.
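The resulting decision can be summarised in a tiny sketch (hypothetical helper name): HALT is skipped only when the message completed successfully and its last transfer asked to keep the chip select asserted.

#include <stdbool.h>

/* Should the controller be halted (deasserting PCS) after this message? */
static bool halt_after_message(int status, bool last_xfer_cs_change)
{
	return status != 0 || !last_xfer_cs_change;
}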
Fixes: ea93ed4c181b ("spi: spi-fsl-dspi: Use EOQ for last word in buffer even for XSPI mode") Signed-off-by: Bogdan-Gabriel Roman bogdan-gabriel.roman@nxp.com Signed-off-by: Larisa Grigore larisa.grigore@nxp.com Signed-off-by: James Clark james.clark@linaro.org Link: https://patch.msgid.link/20250522-james-nxp-spi-v2-2-bea884630cfb@linaro.org Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/spi/spi-fsl-dspi.c | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+)
diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c index 3f9609edac944..93cdc52f0fb06 100644 --- a/drivers/spi/spi-fsl-dspi.c +++ b/drivers/spi/spi-fsl-dspi.c @@ -61,6 +61,7 @@ #define SPI_SR_TFIWF BIT(18) #define SPI_SR_RFDF BIT(17) #define SPI_SR_CMDFFF BIT(16) +#define SPI_SR_TXRXS BIT(30) #define SPI_SR_CLEAR (SPI_SR_TCFQF | \ SPI_SR_TFUF | SPI_SR_TFFF | \ SPI_SR_CMDTCF | SPI_SR_SPEF | \ @@ -907,9 +908,20 @@ static int dspi_transfer_one_message(struct spi_controller *ctlr, struct spi_device *spi = message->spi; struct spi_transfer *transfer; int status = 0; + u32 val = 0; + bool cs_change = false;
message->actual_length = 0;
+ /* Put DSPI in running mode if halted. */ + regmap_read(dspi->regmap, SPI_MCR, &val); + if (val & SPI_MCR_HALT) { + regmap_update_bits(dspi->regmap, SPI_MCR, SPI_MCR_HALT, 0); + while (regmap_read(dspi->regmap, SPI_SR, &val) >= 0 && + !(val & SPI_SR_TXRXS)) + ; + } + list_for_each_entry(transfer, &message->transfers, transfer_list) { dspi->cur_transfer = transfer; dspi->cur_msg = message; @@ -934,6 +946,7 @@ static int dspi_transfer_one_message(struct spi_controller *ctlr, dspi->tx_cmd |= SPI_PUSHR_CMD_CONT; }
+ cs_change = transfer->cs_change; dspi->tx = transfer->tx_buf; dspi->rx = transfer->rx_buf; dspi->len = transfer->len; @@ -966,6 +979,15 @@ static int dspi_transfer_one_message(struct spi_controller *ctlr, spi_transfer_delay_exec(transfer); }
+ if (status || !cs_change) { + /* Put DSPI in stop mode */ + regmap_update_bits(dspi->regmap, SPI_MCR, + SPI_MCR_HALT, SPI_MCR_HALT); + while (regmap_read(dspi->regmap, SPI_SR, &val) >= 0 && + val & SPI_SR_TXRXS) + ; + } + message->status = status; spi_finalize_current_message(ctlr);
@@ -1206,6 +1228,8 @@ static int dspi_init(struct fsl_dspi *dspi) if (!spi_controller_is_slave(dspi->ctlr)) mcr |= SPI_MCR_MASTER;
+ mcr |= SPI_MCR_HALT; + regmap_write(dspi->regmap, SPI_MCR, mcr); regmap_write(dspi->regmap, SPI_SR, SPI_SR_CLEAR);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Larisa Grigore larisa.grigore@nxp.com
[ Upstream commit 7aba292eb15389073c7f3bd7847e3862dfdf604d ]
If, in a previous transfer, the controller sends more data than expected by the DSPI target, SR.RFDF (RX FIFO is not empty) will remain asserted. When flushing the FIFOs at the beginning of a new transfer (writing 1 into MCR.CLR_TXF and MCR.CLR_RXF), SR.RFDF should also be cleared. Otherwise, when running in target mode with DMA, if SR.RFDF remains asserted, the DMA callback will be fired before the controller sends any data.
Take this opportunity to reset all Status Register fields.
Fixes: 5ce3cc567471 ("spi: spi-fsl-dspi: Provide support for DSPI slave mode operation (Vybryd vf610)") Signed-off-by: Larisa Grigore larisa.grigore@nxp.com Signed-off-by: James Clark james.clark@linaro.org Link: https://patch.msgid.link/20250522-james-nxp-spi-v2-3-bea884630cfb@linaro.org Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/spi/spi-fsl-dspi.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c index 93cdc52f0fb06..5374c6d44519a 100644 --- a/drivers/spi/spi-fsl-dspi.c +++ b/drivers/spi/spi-fsl-dspi.c @@ -956,6 +956,8 @@ static int dspi_transfer_one_message(struct spi_controller *ctlr, SPI_MCR_CLR_TXF | SPI_MCR_CLR_RXF, SPI_MCR_CLR_TXF | SPI_MCR_CLR_RXF);
+ regmap_write(dspi->regmap, SPI_SR, SPI_SR_CLEAR); + spi_take_timestamp_pre(dspi->ctlr, dspi->cur_transfer, dspi->progress, !dspi->irq);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nathan Chancellor nathan@kernel.org
commit d0afcfeb9e3810ec89d1ffde1a0e36621bb75dca upstream.
A new on-by-default warning in clang [1] aims to flag instances where const variables without static or thread local storage, or const members in aggregate types, are not initialized, because this can lead to an indeterminate value. This is quite noisy for the kernel due to instances originating from header files such as:
drivers/gpu/drm/i915/gt/intel_ring.h:62:2: error: default initialization of an object of type 'typeof (ring->size)' (aka 'const unsigned int') leaves the object uninitialized [-Werror,-Wdefault-const-init-var-unsafe]
   62 |         typecheck(typeof(ring->size), next);
      |         ^
include/linux/typecheck.h:10:9: note: expanded from macro 'typecheck'
   10 | ({ type __dummy; \
      |         ^

include/net/ip.h:478:14: error: default initialization of an object of type 'typeof (rt->dst.expires)' (aka 'const unsigned long') leaves the object uninitialized [-Werror,-Wdefault-const-init-var-unsafe]
  478 |         if (mtu && time_before(jiffies, rt->dst.expires))
      |                    ^
include/linux/jiffies.h:138:26: note: expanded from macro 'time_before'
  138 | #define time_before(a,b)        time_after(b,a)
      |                                 ^
include/linux/jiffies.h:128:3: note: expanded from macro 'time_after'
  128 |          (typecheck(unsigned long, a) && \
      |           ^
include/linux/typecheck.h:11:12: note: expanded from macro 'typecheck'
   11 |         typeof(x) __dummy2; \
      |                   ^

include/linux/list.h:409:27: warning: default initialization of an object of type 'union (unnamed union at include/linux/list.h:409:27)' with const member leaves the object uninitialized [-Wdefault-const-init-field-unsafe]
  409 |         struct list_head *next = smp_load_acquire(&head->next);
      |                                  ^
include/asm-generic/barrier.h:176:29: note: expanded from macro 'smp_load_acquire'
  176 | #define smp_load_acquire(p) __smp_load_acquire(p)
      |                             ^
arch/arm64/include/asm/barrier.h:164:59: note: expanded from macro '__smp_load_acquire'
  164 |         union { __unqual_scalar_typeof(*p) __val; char __c[1]; } __u;  \
      |                                                                  ^
include/linux/list.h:409:27: note: member '__val' declared 'const' here

crypto/scatterwalk.c:66:22: error: default initialization of an object of type 'struct scatter_walk' with const member leaves the object uninitialized [-Werror,-Wdefault-const-init-field-unsafe]
   66 |         struct scatter_walk walk;
      |                             ^
include/crypto/algapi.h:112:15: note: member 'addr' declared 'const' here
  112 |                 void *const addr;
      |                             ^

fs/hugetlbfs/inode.c:733:24: error: default initialization of an object of type 'struct vm_area_struct' with const member leaves the object uninitialized [-Werror,-Wdefault-const-init-field-unsafe]
  733 |         struct vm_area_struct pseudo_vma;
      |                               ^
include/linux/mm_types.h:803:20: note: member 'vm_flags' declared 'const' here
  803 |         const vm_flags_t vm_flags;
      |                          ^
Silencing the instances from typecheck.h is difficult because '= {}' is not available in older but supported compilers and '= {0}' would cause warnings about a literal 0 being treated as NULL. While it might be possible to come up with a local hack to silence the warning for clang-21+, it may not be worth it since -Wuninitialized will still trigger if an uninitialized const variable is actually used.
In all audited cases of the "field" variant of the warning, the members are either not used in the particular call path, modified through other means such as memset() / memcpy() because the containing object is not const, or are within a union with other non-const members.
Since this warning does not appear to have a high signal to noise ratio, just disable it.
Cc: stable@vger.kernel.org Link: https://github.com/llvm/llvm-project/commit/576161cb6069e2c7656a8ef530727a0f... [1] Reported-by: Linux Kernel Functional Testing lkft@linaro.org Closes: https://lore.kernel.org/CA+G9fYuNjKcxFKS_MKPRuga32XbndkLGcY-PVuoSwzv6VWbY=w@... Reported-by: Marcus Seyfarth m.seyfarth@gmail.com Closes: https://github.com/ClangBuiltLinux/linux/issues/2088 Signed-off-by: Nathan Chancellor nathan@kernel.org Signed-off-by: Masahiro Yamada masahiroy@kernel.org [nathan: Apply change to Makefile instead of scripts/Makefile.extrawarn due to lack of e88ca24319e4 in older stable branches] Signed-off-by: Nathan Chancellor nathan@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- Makefile | 12 ++++++++++++ 1 file changed, 12 insertions(+)
--- a/Makefile +++ b/Makefile @@ -875,6 +875,18 @@ ifdef CONFIG_CC_IS_CLANG KBUILD_CPPFLAGS += -Qunused-arguments # The kernel builds with '-std=gnu11' so use of GNU extensions is acceptable. KBUILD_CFLAGS += -Wno-gnu + +# Clang may emit a warning when a const variable, such as the dummy variables +# in typecheck(), or const member of an aggregate type are not initialized, +# which can result in unexpected behavior. However, in many audited cases of +# the "field" variant of the warning, this is intentional because the field is +# never used within a particular call path, the field is within a union with +# other non-const members, or the containing object is not const so the field +# can be modified via memcpy() / memset(). While the variable warning also gets +# disabled with this same switch, there should not be too much coverage lost +# because -Wuninitialized will still flag when an uninitialized const variable +# is used. +KBUILD_CFLAGS += $(call cc-disable-warning, default-const-init-unsafe) else
# gcc inanely warns about local variables called 'main'
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Geert Uytterhoeven geert+renesas@glider.be
commit 81100b9a7b0515132996d62a7a676a77676cb6e3 upstream.
On (H)SCIF with a Baud Rate Generator for External Clock (BRG), there are multiple ways to configure the requested serial speed. If firmware uses a different method than Linux, and if any debug info is printed after the Bit Rate Register (SCBRR) is restored, but before termios is reconfigured (which configures the alternative method), the system may lock up during resume.
Fix this by saving and restoring the contents of the BRG Frequency Division (SCDL) and Clock Select (SCCKS) registers as well.
Also save and restore the HSCIF's Sampling Rate Register (HSSRR), which configures the sampling point, and the SCIFA/SCIFB's Serial Port Control and Data Registers (SCPCR/SCPDR), which configure the optional control flow signals.
After this, all registers that are not saved/restored are either:
  - read-only,
  - write-only,
  - status registers containing flags with clear-after-set semantics,
  - FIFO Data Count Trigger registers, which do not matter much for the
    serial console.
Fixes: 22a6984c5b5df8ea ("serial: sh-sci: Update the suspend/resume support") Signed-off-by: Geert Uytterhoeven geert+renesas@glider.be Tested-by: Claudiu Beznea claudiu.beznea.uj@bp.renesas.com Reviewed-by: Claudiu Beznea claudiu.beznea.uj@bp.renesas.com Link: https://lore.kernel.org/r/11c2eab45d48211e75d8b8202cce60400880fe55.174111498... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/serial/sh-sci.c | 25 +++++++++++++++++++++++++ 1 file changed, 25 insertions(+)
--- a/drivers/tty/serial/sh-sci.c +++ b/drivers/tty/serial/sh-sci.c @@ -106,10 +106,15 @@ struct plat_sci_reg { };
struct sci_suspend_regs { + u16 scdl; + u16 sccks; u16 scsmr; u16 scscr; u16 scfcr; u16 scsptr; + u16 hssrr; + u16 scpcr; + u16 scpdr; u8 scbrr; u8 semr; }; @@ -3418,6 +3423,10 @@ static void sci_console_save(struct sci_ struct sci_suspend_regs *regs = &s->suspend_regs; struct uart_port *port = &s->port;
+ if (sci_getreg(port, SCDL)->size) + regs->scdl = sci_serial_in(port, SCDL); + if (sci_getreg(port, SCCKS)->size) + regs->sccks = sci_serial_in(port, SCCKS); if (sci_getreg(port, SCSMR)->size) regs->scsmr = sci_serial_in(port, SCSMR); if (sci_getreg(port, SCSCR)->size) @@ -3428,6 +3437,12 @@ static void sci_console_save(struct sci_ regs->scsptr = sci_serial_in(port, SCSPTR); if (sci_getreg(port, SCBRR)->size) regs->scbrr = sci_serial_in(port, SCBRR); + if (sci_getreg(port, HSSRR)->size) + regs->hssrr = sci_serial_in(port, HSSRR); + if (sci_getreg(port, SCPCR)->size) + regs->scpcr = sci_serial_in(port, SCPCR); + if (sci_getreg(port, SCPDR)->size) + regs->scpdr = sci_serial_in(port, SCPDR); if (sci_getreg(port, SEMR)->size) regs->semr = sci_serial_in(port, SEMR); } @@ -3437,6 +3452,10 @@ static void sci_console_restore(struct s struct sci_suspend_regs *regs = &s->suspend_regs; struct uart_port *port = &s->port;
+ if (sci_getreg(port, SCDL)->size) + sci_serial_out(port, SCDL, regs->scdl); + if (sci_getreg(port, SCCKS)->size) + sci_serial_out(port, SCCKS, regs->sccks); if (sci_getreg(port, SCSMR)->size) sci_serial_out(port, SCSMR, regs->scsmr); if (sci_getreg(port, SCSCR)->size) @@ -3447,6 +3466,12 @@ static void sci_console_restore(struct s sci_serial_out(port, SCSPTR, regs->scsptr); if (sci_getreg(port, SCBRR)->size) sci_serial_out(port, SCBRR, regs->scbrr); + if (sci_getreg(port, HSSRR)->size) + sci_serial_out(port, HSSRR, regs->hssrr); + if (sci_getreg(port, SCPCR)->size) + sci_serial_out(port, SCPCR, regs->scpcr); + if (sci_getreg(port, SCPDR)->size) + sci_serial_out(port, SCPDR, regs->scpdr); if (sci_getreg(port, SEMR)->size) sci_serial_out(port, SEMR, regs->semr); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Carpenter dan.carpenter@linaro.org
commit 5a062c3c3b82004766bc3ece82b594d337076152 upstream.
This should be >= pmx->soc->ngroups instead of > to avoid an out of bounds access. The pmx->soc->groups[] array is allocated in tegra_pinctrl_probe().
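A minimal stand-alone illustration of the off-by-one, using generic names rather than the driver's types:

struct group { const char *name; };

static const struct group *lookup_group(const struct group *groups,
					unsigned int ngroups, int index)
{
	/* Valid indices are 0 .. ngroups - 1; a '>' check would let
	 * index == ngroups through and read one element past the end. */
	if (index < 0 || (unsigned int)index >= ngroups)
		return NULL;

	return &groups[index];
}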
Fixes: c12bfa0fee65 ("pinctrl-tegra: Restore SFSEL bit when freeing pins") Signed-off-by: Dan Carpenter dan.carpenter@linaro.org Reviewed-by: Kunwu Chan kunwu.chan@linux.dev Link: https://lore.kernel.org/82b40d9d-b437-42a9-9eb3-2328aa6877ac@stanley.mountai... Signed-off-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/pinctrl/tegra/pinctrl-tegra.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/pinctrl/tegra/pinctrl-tegra.c +++ b/drivers/pinctrl/tegra/pinctrl-tegra.c @@ -305,7 +305,7 @@ static const struct tegra_pingroup *tegr { struct tegra_pmx *pmx = pinctrl_dev_get_drvdata(pctldev);
- if (group_index < 0 || group_index > pmx->soc->ngroups) + if (group_index < 0 || group_index >= pmx->soc->ngroups) return NULL;
return &pmx->soc->groups[group_index];
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nathan Chancellor nathan@kernel.org
commit e8d2d287e26d9bd9114cf258a123a6b70812442e upstream.
Clang warns (or errors with CONFIG_WERROR=y):
drivers/i3c/master/svc-i3c-master.c:596:2: error: unannotated fall-through between switch labels [-Werror,-Wimplicit-fallthrough]
  596 |         default:
      |         ^
drivers/i3c/master/svc-i3c-master.c:596:2: note: insert 'break;' to avoid fall-through
  596 |         default:
      |         ^
      |         break;
1 error generated.
Clang is a little more pedantic than GCC, which does not warn when falling through to a case that is just break or return. Clang's version is more in line with the kernel's own stance in deprecated.rst, which states that all switch/case blocks must end in either break, fallthrough, continue, goto, or return. Add the missing break to silence the warning.
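A reduced example of the construct clang complains about, with a hypothetical enum and handler standing in for the IBI work function:

enum ibi_type { IBI_HOT_JOIN, IBI_MASTER_REQUEST, IBI_OTHER };

static void handle_ibi(enum ibi_type type)
{
	switch (type) {
	case IBI_MASTER_REQUEST:
		/* ... emit a STOP on the bus ... */
		break;	/* without this break, clang flags the fall-through into default */
	default:
		break;
	}
}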
Fixes: 0430bf9bc1ac ("i3c: master: svc: Fix missing STOP for master request") Signed-off-by: Nathan Chancellor nathan@kernel.org Link: https://lore.kernel.org/r/20250319-i3c-fix-clang-fallthrough-v1-1-d8e02be1ef... Signed-off-by: Alexandre Belloni alexandre.belloni@bootlin.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/i3c/master/svc-i3c-master.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/i3c/master/svc-i3c-master.c +++ b/drivers/i3c/master/svc-i3c-master.c @@ -496,6 +496,7 @@ static void svc_i3c_master_ibi_work(stru break; case SVC_I3C_MSTATUS_IBITYPE_MASTER_REQUEST: svc_i3c_master_emit_stop(master); + break; default: break; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Balbir Singh balbirs@nvidia.com
commit 7170130e4c72ce0caa0cb42a1627c635cc262821 upstream.
As Bert Karwatzki reported, the following recent commit causes a performance regression on AMD iGPU and dGPU systems:
7ffb791423c7 ("x86/kaslr: Reduce KASLR entropy on most x86 systems")
It exposed a bug with nokaslr and zone device interaction.
The root cause of the bug is that the GPU driver registers a zone device private memory region. When KASLR is disabled or the above commit is applied, direct_map_physmem_end is set much higher than 10 TiB, typically to the 64 TiB address. When zone device private memory is added to the system via add_pages(), it bumps up max_pfn to the same value. This causes dma_addressing_limited() to return true, since the device cannot address memory all the way up to max_pfn.
This caused a regression for games played on the iGPU, as it resulted in the DMA32 zone being used for GPU allocations.
Fix this by not bumping up max_pfn on x86 systems, when pgmap is passed into add_pages(). The presence of pgmap is used to determine if device private memory is being added via add_pages().
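To see why the max_pfn bump matters, here is a deliberately simplified, self-contained sketch of the addressing check (a hypothetical stand-in for dma_addressing_limited(), not the kernel's implementation):

#include <stdbool.h>
#include <stdio.h>

/* Simplified: a device is "addressing limited" if its DMA mask cannot
 * cover the highest physical address in the system. */
static bool addressing_limited(unsigned long long dma_mask,
			       unsigned long long max_phys_addr)
{
	return dma_mask < max_phys_addr - 1;
}

int main(void)
{
	unsigned long long mask_44bit = (1ULL << 44) - 1;	/* 16 TiB - 1 */

	/* Memory ends at 16 TiB: the device can reach all of it. */
	printf("%d\n", addressing_limited(mask_44bit, 1ULL << 44));	/* 0 */

	/* Device private pages bump the end of memory to 64 TiB: now "limited". */
	printf("%d\n", addressing_limited(mask_44bit, 1ULL << 46));	/* 1 */

	return 0;
}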
More details:
devm_request_mem_region() and request_free_mem_region() request device private memory. iomem_resource is passed as the base resource with start and end parameters. iomem_resource's end depends on several factors, including the platform and virtualization. On x86, for example, on bare metal this value is set to boot_cpu_data.x86_phys_bits. boot_cpu_data.x86_phys_bits can change depending on support for MKTME. By default it is set to the same as log2(direct_map_physmem_end), which is 46 to 52 bits depending on the number of levels in the page table. The allocation routines use iomem_resource's end and direct_map_physmem_end to figure out where to allocate the region.
[ arch/powerpc is also impacted by this problem, but this patch does not fix the issue for PowerPC. ]
Testing:
1. Tested on a virtual machine with test_hmm for zone device insertion
2. A previous version of this patch was tested by Bert, please see: https://lore.kernel.org/lkml/d87680bab997fdc9fb4e638983132af235d9a03a.camel@...
[ mingo: Clarified the comments and the changelog. ]
Reported-by: Bert Karwatzki spasswolf@web.de Tested-by: Bert Karwatzki spasswolf@web.de Fixes: 7ffb791423c7 ("x86/kaslr: Reduce KASLR entropy on most x86 systems") Signed-off-by: Balbir Singh balbirs@nvidia.com Signed-off-by: Ingo Molnar mingo@kernel.org Cc: Brian Gerst brgerst@gmail.com Cc: Juergen Gross jgross@suse.com Cc: H. Peter Anvin hpa@zytor.com Cc: Linus Torvalds torvalds@linux-foundation.org Cc: Andrew Morton akpm@linux-foundation.org Cc: Christoph Hellwig hch@lst.de Cc: Pierre-Eric Pelloux-Prayer pierre-eric.pelloux-prayer@amd.com Cc: Alex Deucher alexander.deucher@amd.com Cc: Christian König christian.koenig@amd.com Cc: David Airlie airlied@gmail.com Cc: Simona Vetter simona@ffwll.ch Link: https://lore.kernel.org/r/20250401000752.249348-1-balbirs@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/mm/init_64.c | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-)
--- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -959,9 +959,18 @@ int add_pages(int nid, unsigned long sta ret = __add_pages(nid, start_pfn, nr_pages, params); WARN_ON_ONCE(ret);
- /* update max_pfn, max_low_pfn and high_memory */ - update_end_of_memory_vars(start_pfn << PAGE_SHIFT, - nr_pages << PAGE_SHIFT); + /* + * Special case: add_pages() is called by memremap_pages() for adding device + * private pages. Do not bump up max_pfn in the device private path, + * because max_pfn changes affect dma_addressing_limited(). + * + * dma_addressing_limited() returning true when max_pfn is the device's + * addressable memory can force device drivers to use bounce buffers + * and impact their performance negatively: + */ + if (!params->pgmap) + /* update max_pfn, max_low_pfn and high_memory */ + update_end_of_memory_vars(start_pfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
return ret; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Harshit Mogalapalli harshit.m.mogalapalli@oracle.com
commit 0642287e3ecdd0d1f88e6a2e63768e16153a990c upstream.
Smatch warns: drivers/dma/idxd/cdev.c:327: idxd_cdev_open() warn: 'sva' was already freed.
When idxd_wq_set_pasid() fails, the current code unbinds sva and then goes to 'failed_set_pasid', where iommu_sva_unbind_device() is called again, causing the above warning. [ device_user_pasid_enabled(idxd) is still true when calling failed_set_pasid ]
Fix this by removing the additional unbind when idxd_wq_set_pasid() fails.
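The underlying rule is that the error label owns the cleanup, so the failing call site must not release the resource as well. A self-contained sketch with hypothetical stand-ins for the bind/unbind and PASID helpers:

#include <errno.h>
#include <stddef.h>

/* Hypothetical stand-ins for the real bind/unbind and PASID helpers. */
static void *bind_device(void)        { static int token; return &token; }
static void  unbind_device(void *sva) { (void)sva; }
static int   set_pasid(void)          { return -EBUSY; }

static int open_context(void)
{
	void *sva = bind_device();
	int rc;

	if (!sva)
		return -ENODEV;

	rc = set_pasid();
	if (rc < 0)
		goto failed_set_pasid;	/* no unbind_device(sva) here ... */

	return 0;

failed_set_pasid:
	unbind_device(sva);		/* ... the error label releases it exactly once */
	return rc;
}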
Fixes: b022f59725f0 ("dmaengine: idxd: add idxd_copy_cr() to copy user completion record during page fault handling") Signed-off-by: Harshit Mogalapalli harshit.m.mogalapalli@oracle.com Acked-by: Fenghua Yu fenghua.yu@intel.com Acked-by: Dave Jiang dave.jiang@intel.com Link: https://lore.kernel.org/r/20230509060716.2830630-1-harshit.m.mogalapalli@ora... Signed-off-by: Vinod Koul vkoul@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/dma/idxd/cdev.c | 1 - 1 file changed, 1 deletion(-)
--- a/drivers/dma/idxd/cdev.c +++ b/drivers/dma/idxd/cdev.c @@ -142,7 +142,6 @@ static int idxd_cdev_open(struct inode * if (wq_dedicated(wq)) { rc = idxd_wq_set_pasid(wq, pasid); if (rc < 0) { - iommu_sva_unbind_device(sva); dev_err(dev, "wq set pasid failed: %d\n", rc); goto failed_set_pasid; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ratheesh Kannoth rkannoth@marvell.com
commit 49fa4b0d06705a24a81bb8be6eb175059b77f0a7 upstream.
The octeontx2 driver calls page_pool_create() during driver probe() and fails if the queue size is > 32k. The page pool infrastructure uses these buffers as shock absorbers for burst traffic. Because of the recycling nature of the page pool, and given that the page pool (currently) has no shrinker mechanism, these pages get pinned down over time as the working set varies and remain pinned in the ptr_ring. Instead of clamping the page_pool size to 32k at most, limit it even more, to 2k, to avoid wasting memory.
This has been tested on octeontx2 CN10KA hardware. TCP and UDP tests using iperf show no performance regressions.
Fixes: b2e3406a38f0 ("octeontx2-pf: Add support for page pool") Suggested-by: Alexander Lobakin aleksander.lobakin@intel.com Reviewed-by: Sunil Goutham sgoutham@marvell.com Signed-off-by: Ratheesh Kannoth rkannoth@marvell.com Acked-by: Jesper Dangaard Brouer hawk@kernel.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c | 2 +- drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h | 2 ++ 2 files changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -1434,7 +1434,7 @@ int otx2_pool_init(struct otx2_nic *pfvf }
pp_params.flags = PP_FLAG_PAGE_FRAG | PP_FLAG_DMA_MAP; - pp_params.pool_size = numptrs; + pp_params.pool_size = min(OTX2_PAGE_POOL_SZ, numptrs); pp_params.nid = NUMA_NO_NODE; pp_params.dev = pfvf->dev; pp_params.dma_dir = DMA_FROM_DEVICE; --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h @@ -23,6 +23,8 @@ #define OTX2_ETH_HLEN (VLAN_ETH_HLEN + VLAN_HLEN) #define OTX2_MIN_MTU 60
+#define OTX2_PAGE_POOL_SZ 2048 + #define OTX2_MAX_GSO_SEGS 255 #define OTX2_MAX_FRAGS_IN_SQE 9
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ratheesh Kannoth rkannoth@marvell.com
commit 88e69af061f2e061a68751ef9cad47a674527a1b upstream.
The access to page pool `cache' array and the `count' variable is not locked. Page pool cache access is fine as long as there is only one consumer per pool.
The octeontx2 driver fills rx buffers from the page pool in NAPI context. If the system is stressed and buffers cannot be allocated, the refilling work is delegated to a delayed workqueue. This means that there are two consumers of the page pool cache.
Either the workqueue or IRQ/NAPI can run on another CPU. This leads to lockless access and hence corruption of the pool cache indexes.
To fix this issue, NAPI is rescheduled from workqueue context to refill rx buffers.
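In outline, the deferred work now only re-arms NAPI instead of touching the pool itself; a simplified sketch (the field layout is reduced from the real driver):

struct refill_work {
	struct delayed_work pool_refill_work;
	struct napi_struct *napi;	/* set by the NAPI handler when it gives up */
	bool *refill_task_sched;
};

static void pool_refill_task(struct work_struct *work)
{
	struct refill_work *wrk = container_of(work, struct refill_work,
					       pool_refill_work.work);

	*wrk->refill_task_sched = false;

	/* Refill only from softirq context, so the page_pool keeps one consumer. */
	local_bh_disable();
	napi_schedule(wrk->napi);
	local_bh_enable();
}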
Fixes: b2e3406a38f0 ("octeontx2-pf: Add support for page pool") Signed-off-by: Ratheesh Kannoth rkannoth@marvell.com Reported-by: Sebastian Andrzej Siewior bigeasy@linutronix.de Reviewed-by: Sebastian Andrzej Siewior bigeasy@linutronix.de Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c | 6 +- drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h | 2 drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c | 43 ++------------- drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h | 3 - drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c | 7 +- drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c | 30 ++++++++-- drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h | 4 - 7 files changed, 44 insertions(+), 51 deletions(-)
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c @@ -107,12 +107,13 @@ int cn10k_sq_aq_init(void *dev, u16 qidx }
#define NPA_MAX_BURST 16 -void cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq) +int cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq) { struct otx2_nic *pfvf = dev; + int cnt = cq->pool_ptrs; u64 ptrs[NPA_MAX_BURST]; - int num_ptrs = 1; dma_addr_t bufptr; + int num_ptrs = 1;
/* Refill pool with new buffers */ while (cq->pool_ptrs) { @@ -131,6 +132,7 @@ void cn10k_refill_pool_ptrs(void *dev, s num_ptrs = 1; } } + return cnt - cq->pool_ptrs; }
void cn10k_sqe_flush(void *dev, struct otx2_snd_queue *sq, int size, int qidx) --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h @@ -24,7 +24,7 @@ static inline int mtu_to_dwrr_weight(str return weight; }
-void cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq); +int cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq); void cn10k_sqe_flush(void *dev, struct otx2_snd_queue *sq, int size, int qidx); int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura); int cn10k_lmtst_init(struct otx2_nic *pfvf); --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -567,20 +567,8 @@ int otx2_alloc_rbuf(struct otx2_nic *pfv int otx2_alloc_buffer(struct otx2_nic *pfvf, struct otx2_cq_queue *cq, dma_addr_t *dma) { - if (unlikely(__otx2_alloc_rbuf(pfvf, cq->rbpool, dma))) { - struct refill_work *work; - struct delayed_work *dwork; - - work = &pfvf->refill_wrk[cq->cq_idx]; - dwork = &work->pool_refill_work; - /* Schedule a task if no other task is running */ - if (!cq->refill_task_sched) { - cq->refill_task_sched = true; - schedule_delayed_work(dwork, - msecs_to_jiffies(100)); - } + if (unlikely(__otx2_alloc_rbuf(pfvf, cq->rbpool, dma))) return -ENOMEM; - } return 0; }
@@ -1081,39 +1069,20 @@ static int otx2_cq_init(struct otx2_nic static void otx2_pool_refill_task(struct work_struct *work) { struct otx2_cq_queue *cq; - struct otx2_pool *rbpool; struct refill_work *wrk; - int qidx, free_ptrs = 0; struct otx2_nic *pfvf; - dma_addr_t bufptr; + int qidx;
wrk = container_of(work, struct refill_work, pool_refill_work.work); pfvf = wrk->pf; qidx = wrk - pfvf->refill_wrk; cq = &pfvf->qset.cq[qidx]; - rbpool = cq->rbpool; - free_ptrs = cq->pool_ptrs; - - while (cq->pool_ptrs) { - if (otx2_alloc_rbuf(pfvf, rbpool, &bufptr)) { - /* Schedule a WQ if we fails to free atleast half of the - * pointers else enable napi for this RQ. - */ - if (!((free_ptrs - cq->pool_ptrs) > free_ptrs / 2)) { - struct delayed_work *dwork;
- dwork = &wrk->pool_refill_work; - schedule_delayed_work(dwork, - msecs_to_jiffies(100)); - } else { - cq->refill_task_sched = false; - } - return; - } - pfvf->hw_ops->aura_freeptr(pfvf, qidx, bufptr + OTX2_HEAD_ROOM); - cq->pool_ptrs--; - } cq->refill_task_sched = false; + + local_bh_disable(); + napi_schedule(wrk->napi); + local_bh_enable(); }
int otx2_config_nix_queues(struct otx2_nic *pfvf) --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h @@ -280,6 +280,7 @@ struct flr_work { struct refill_work { struct delayed_work pool_refill_work; struct otx2_nic *pf; + struct napi_struct *napi; };
/* PTPv2 originTimestamp structure */ @@ -347,7 +348,7 @@ struct dev_hw_ops { int (*sq_aq_init)(void *dev, u16 qidx, u16 sqb_aura); void (*sqe_flush)(void *dev, struct otx2_snd_queue *sq, int size, int qidx); - void (*refill_pool_ptrs)(void *dev, struct otx2_cq_queue *cq); + int (*refill_pool_ptrs)(void *dev, struct otx2_cq_queue *cq); void (*aura_freeptr)(void *dev, int aura, u64 buf); };
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -2005,6 +2005,10 @@ int otx2_stop(struct net_device *netdev)
netif_tx_disable(netdev);
+ for (wrk = 0; wrk < pf->qset.cq_cnt; wrk++) + cancel_delayed_work_sync(&pf->refill_wrk[wrk].pool_refill_work); + devm_kfree(pf->dev, pf->refill_wrk); + otx2_free_hw_resources(pf); otx2_free_cints(pf, pf->hw.cint_cnt); otx2_disable_napi(pf); @@ -2012,9 +2016,6 @@ int otx2_stop(struct net_device *netdev) for (qidx = 0; qidx < netdev->num_tx_queues; qidx++) netdev_tx_reset_queue(netdev_get_tx_queue(netdev, qidx));
- for (wrk = 0; wrk < pf->qset.cq_cnt; wrk++) - cancel_delayed_work_sync(&pf->refill_wrk[wrk].pool_refill_work); - devm_kfree(pf->dev, pf->refill_wrk);
kfree(qset->sq); kfree(qset->cq); --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c @@ -428,9 +428,10 @@ process_cqe: return processed_cqe; }
-void otx2_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq) +int otx2_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq) { struct otx2_nic *pfvf = dev; + int cnt = cq->pool_ptrs; dma_addr_t bufptr;
while (cq->pool_ptrs) { @@ -439,6 +440,8 @@ void otx2_refill_pool_ptrs(void *dev, st otx2_aura_freeptr(pfvf, cq->cq_idx, bufptr + OTX2_HEAD_ROOM); cq->pool_ptrs--; } + + return cnt - cq->pool_ptrs; }
static int otx2_tx_napi_handler(struct otx2_nic *pfvf, @@ -532,6 +535,7 @@ int otx2_napi_handler(struct napi_struct struct otx2_cq_queue *cq; struct otx2_qset *qset; struct otx2_nic *pfvf; + int filled_cnt = -1;
cq_poll = container_of(napi, struct otx2_cq_poll, napi); pfvf = (struct otx2_nic *)cq_poll->dev; @@ -552,7 +556,7 @@ int otx2_napi_handler(struct napi_struct }
if (rx_cq && rx_cq->pool_ptrs) - pfvf->hw_ops->refill_pool_ptrs(pfvf, rx_cq); + filled_cnt = pfvf->hw_ops->refill_pool_ptrs(pfvf, rx_cq); /* Clear the IRQ */ otx2_write64(pfvf, NIX_LF_CINTX_INT(cq_poll->cint_idx), BIT_ULL(0));
@@ -565,9 +569,25 @@ int otx2_napi_handler(struct napi_struct if (pfvf->flags & OTX2_FLAG_ADPTV_INT_COAL_ENABLED) otx2_adjust_adaptive_coalese(pfvf, cq_poll);
- /* Re-enable interrupts */ - otx2_write64(pfvf, NIX_LF_CINTX_ENA_W1S(cq_poll->cint_idx), - BIT_ULL(0)); + if (unlikely(!filled_cnt)) { + struct refill_work *work; + struct delayed_work *dwork; + + work = &pfvf->refill_wrk[cq->cq_idx]; + dwork = &work->pool_refill_work; + /* Schedule a task if no other task is running */ + if (!cq->refill_task_sched) { + work->napi = napi; + cq->refill_task_sched = true; + schedule_delayed_work(dwork, + msecs_to_jiffies(100)); + } + } else { + /* Re-enable interrupts */ + otx2_write64(pfvf, + NIX_LF_CINTX_ENA_W1S(cq_poll->cint_idx), + BIT_ULL(0)); + } } return workdone; } --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h @@ -170,6 +170,6 @@ void cn10k_sqe_flush(void *dev, struct o int size, int qidx); void otx2_sqe_flush(void *dev, struct otx2_snd_queue *sq, int size, int qidx); -void otx2_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq); -void cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq); +int otx2_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq); +int cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq); #endif /* OTX2_TXRX_H */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ratheesh Kannoth rkannoth@marvell.com
commit 50e492143374c17ad89c865a1a44837b3f5c8226 upstream.
Since the page pool parameter "order" is set to 0, the warning below is triggered if the interface is configured with a larger rx buffer size.
Steps to reproduce the issue:

1. devlink dev param set pci/0002:04:00.0 name receive_buffer_size \
   value 8196 cmode runtime
2. ifconfig eth0 up
[ 19.901356] ------------[ cut here ]------------ [ 19.901361] WARNING: CPU: 11 PID: 12331 at net/core/page_pool.c:567 page_pool_alloc_frag+0x3c/0x230 [ 19.901449] pstate: 82401009 (Nzcv daif +PAN -UAO +TCO -DIT +SSBS BTYPE=--) [ 19.901451] pc : page_pool_alloc_frag+0x3c/0x230 [ 19.901453] lr : __otx2_alloc_rbuf+0x60/0xbc [rvu_nicpf] [ 19.901460] sp : ffff80000f66b970 [ 19.901461] x29: ffff80000f66b970 x28: 0000000000000000 x27: 0000000000000000 [ 19.901464] x26: ffff800000d15b68 x25: ffff000195b5c080 x24: ffff0002a5a32dc0 [ 19.901467] x23: ffff0001063c0878 x22: 0000000000000100 x21: 0000000000000000 [ 19.901469] x20: 0000000000000000 x19: ffff00016f781000 x18: 0000000000000000 [ 19.901472] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000 [ 19.901474] x14: 0000000000000000 x13: ffff0005ffdc9c80 x12: 0000000000000000 [ 19.901477] x11: ffff800009119a38 x10: 4c6ef2e3ba300519 x9 : ffff800000d13844 [ 19.901479] x8 : ffff0002a5a33cc8 x7 : 0000000000000030 x6 : 0000000000000030 [ 19.901482] x5 : 0000000000000005 x4 : 0000000000000000 x3 : 0000000000000a20 [ 19.901484] x2 : 0000000000001080 x1 : ffff80000f66b9d4 x0 : 0000000000001000 [ 19.901487] Call trace: [ 19.901488] page_pool_alloc_frag+0x3c/0x230 [ 19.901490] __otx2_alloc_rbuf+0x60/0xbc [rvu_nicpf] [ 19.901494] otx2_rq_aura_pool_init+0x1c4/0x240 [rvu_nicpf] [ 19.901498] otx2_open+0x228/0xa70 [rvu_nicpf] [ 19.901501] otx2vf_open+0x20/0xd0 [rvu_nicvf] [ 19.901504] __dev_open+0x114/0x1d0 [ 19.901507] __dev_change_flags+0x194/0x210 [ 19.901510] dev_change_flags+0x2c/0x70 [ 19.901512] devinet_ioctl+0x3a4/0x6c4 [ 19.901515] inet_ioctl+0x228/0x240 [ 19.901518] sock_ioctl+0x2ac/0x480 [ 19.901522] __arm64_sys_ioctl+0x564/0xe50 [ 19.901525] invoke_syscall.constprop.0+0x58/0xf0 [ 19.901529] do_el0_svc+0x58/0x150 [ 19.901531] el0_svc+0x30/0x140 [ 19.901533] el0t_64_sync_handler+0xe8/0x114 [ 19.901535] el0t_64_sync+0x1a0/0x1a4 [ 19.901537] ---[ end trace 678c0bf660ad8116 ]---
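The warning above is simply the pool being asked for a fragment larger than its page order allows. A small self-contained illustration of the order calculation (same result as the kernel's get_order(), which the fix below uses):

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Smallest page order whose allocation covers 'size' bytes. */
static unsigned int order_for(unsigned long size)
{
	unsigned int order = 0;

	while ((PAGE_SIZE << order) < size)
		order++;
	return order;
}

int main(void)
{
	/* An order-0 (4 KiB) page cannot back an 8196-byte rx buffer. */
	printf("order for 8196 bytes: %u\n", order_for(8196));	/* prints 2 (16 KiB) */
	return 0;
}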
Fixes: b2e3406a38f0 ("octeontx2-pf: Add support for page pool") Signed-off-by: Ratheesh Kannoth rkannoth@marvell.com Reviewed-by: Yunsheng Lin linyunsheng@huawei.com Link: https://lore.kernel.org/r/20231010034842.3807816-1-rkannoth@marvell.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -1402,6 +1402,7 @@ int otx2_pool_init(struct otx2_nic *pfvf return 0; }
+ pp_params.order = get_order(buf_size); pp_params.flags = PP_FLAG_PAGE_FRAG | PP_FLAG_DMA_MAP; pp_params.pool_size = min(OTX2_PAGE_POOL_SZ, numptrs); pp_params.nid = NUMA_NO_NODE;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Frederic Weisbecker frederic@kernel.org
commit 53dac345395c0d2493cbc2f4c85fe38aef5b63f5 upstream.
hrtimers are migrated away from the dying CPU to any online target at the CPUHP_AP_HRTIMERS_DYING stage in order not to delay bandwidth timers handling tasks involved in the CPU hotplug forward progress.
However wakeups can still be performed by the outgoing CPU after CPUHP_AP_HRTIMERS_DYING. Those can result again in bandwidth timers being armed. Depending on several considerations (crystal ball power management based election, earliest timer already enqueued, timer migration enabled or not), the target may eventually be the current CPU even if offline. If that happens, the timer is eventually ignored.
The most notable example is RCU which had to deal with each and every of those wake-ups by deferring them to an online CPU, along with related workarounds:
_ e787644caf76 (rcu: Defer RCU kthreads wakeup when CPU is dying) _ 9139f93209d1 (rcu/nocb: Fix RT throttling hrtimer armed from offline CPU) _ f7345ccc62a4 (rcu/nocb: Fix rcuog wake-up from offline softirq)
The problem isn't confined to RCU though as the stop machine kthread (which runs CPUHP_AP_HRTIMERS_DYING) reports its completion at the end of its work through cpu_stop_signal_done() and performs a wake up that eventually arms the deadline server timer:
WARNING: CPU: 94 PID: 588 at kernel/time/hrtimer.c:1086 hrtimer_start_range_ns+0x289/0x2d0
CPU: 94 UID: 0 PID: 588 Comm: migration/94 Not tainted
Stopper: multi_cpu_stop+0x0/0x120 <- stop_machine_cpuslocked+0x66/0xc0
RIP: 0010:hrtimer_start_range_ns+0x289/0x2d0
Call Trace:
<TASK>
 start_dl_timer
 enqueue_dl_entity
 dl_server_start
 enqueue_task_fair
 enqueue_task
 ttwu_do_activate
 try_to_wake_up
 complete
 cpu_stopper_thread
Instead of providing yet another bandaid to work around the situation, fix it in the hrtimers infrastructure instead: always migrate away a timer to an online target whenever it is enqueued from an offline CPU.
This will also allow to revert all the above RCU disgraceful hacks.
Fixes: 5c0930ccaad5 ("hrtimers: Push pending hrtimers away from outgoing CPU earlier") Reported-by: Vlad Poenaru vlad.wing@gmail.com Reported-by: Usama Arif usamaarif642@gmail.com Signed-off-by: Frederic Weisbecker frederic@kernel.org Signed-off-by: Paul E. McKenney paulmck@kernel.org Signed-off-by: Thomas Gleixner tglx@linutronix.de Cc: stable@vger.kernel.org Tested-by: Paul E. McKenney paulmck@kernel.org Link: https://lore.kernel.org/all/20250117232433.24027-1-frederic@kernel.org Closes: 20241213203739.1519801-1-usamaarif642@gmail.com Signed-off-by: Zhaoyang Li lizy04@hust.edu.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/linux/hrtimer.h | 1 kernel/time/hrtimer.c | 103 ++++++++++++++++++++++++++++++++++++++---------- 2 files changed, 83 insertions(+), 21 deletions(-)
--- a/include/linux/hrtimer.h +++ b/include/linux/hrtimer.h @@ -237,6 +237,7 @@ struct hrtimer_cpu_base { ktime_t softirq_expires_next; struct hrtimer *softirq_next_timer; struct hrtimer_clock_base clock_base[HRTIMER_MAX_CLOCK_BASES]; + call_single_data_t csd; } ____cacheline_aligned;
static inline void hrtimer_set_expires(struct hrtimer *timer, ktime_t time) --- a/kernel/time/hrtimer.c +++ b/kernel/time/hrtimer.c @@ -58,6 +58,8 @@ #define HRTIMER_ACTIVE_SOFT (HRTIMER_ACTIVE_HARD << MASK_SHIFT) #define HRTIMER_ACTIVE_ALL (HRTIMER_ACTIVE_SOFT | HRTIMER_ACTIVE_HARD)
+static void retrigger_next_event(void *arg); + /* * The timer bases: * @@ -111,7 +113,8 @@ DEFINE_PER_CPU(struct hrtimer_cpu_base, .clockid = CLOCK_TAI, .get_time = &ktime_get_clocktai, }, - } + }, + .csd = CSD_INIT(retrigger_next_event, NULL) };
static const int hrtimer_clock_to_base_table[MAX_CLOCKS] = { @@ -124,6 +127,14 @@ static const int hrtimer_clock_to_base_t [CLOCK_TAI] = HRTIMER_BASE_TAI, };
+static inline bool hrtimer_base_is_online(struct hrtimer_cpu_base *base) +{ + if (!IS_ENABLED(CONFIG_HOTPLUG_CPU)) + return true; + else + return likely(base->online); +} + /* * Functions and macros which are different for UP/SMP systems are kept in a * single place @@ -177,27 +188,54 @@ struct hrtimer_clock_base *lock_hrtimer_ }
/* - * We do not migrate the timer when it is expiring before the next - * event on the target cpu. When high resolution is enabled, we cannot - * reprogram the target cpu hardware and we would cause it to fire - * late. To keep it simple, we handle the high resolution enabled and - * disabled case similar. + * Check if the elected target is suitable considering its next + * event and the hotplug state of the current CPU. + * + * If the elected target is remote and its next event is after the timer + * to queue, then a remote reprogram is necessary. However there is no + * guarantee the IPI handling the operation would arrive in time to meet + * the high resolution deadline. In this case the local CPU becomes a + * preferred target, unless it is offline. + * + * High and low resolution modes are handled the same way for simplicity. * * Called with cpu_base->lock of target cpu held. */ -static int -hrtimer_check_target(struct hrtimer *timer, struct hrtimer_clock_base *new_base) +static bool hrtimer_suitable_target(struct hrtimer *timer, struct hrtimer_clock_base *new_base, + struct hrtimer_cpu_base *new_cpu_base, + struct hrtimer_cpu_base *this_cpu_base) { ktime_t expires;
+ /* + * The local CPU clockevent can be reprogrammed. Also get_target_base() + * guarantees it is online. + */ + if (new_cpu_base == this_cpu_base) + return true; + + /* + * The offline local CPU can't be the default target if the + * next remote target event is after this timer. Keep the + * elected new base. An IPI will we issued to reprogram + * it as a last resort. + */ + if (!hrtimer_base_is_online(this_cpu_base)) + return true; + expires = ktime_sub(hrtimer_get_expires(timer), new_base->offset); - return expires < new_base->cpu_base->expires_next; + + return expires >= new_base->cpu_base->expires_next; }
-static inline -struct hrtimer_cpu_base *get_target_base(struct hrtimer_cpu_base *base, - int pinned) +static inline struct hrtimer_cpu_base *get_target_base(struct hrtimer_cpu_base *base, int pinned) { + if (!hrtimer_base_is_online(base)) { + int cpu = cpumask_any_and(cpu_online_mask, housekeeping_cpumask(HK_TYPE_TIMER)); + + return &per_cpu(hrtimer_bases, cpu); + } + #if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON) if (static_branch_likely(&timers_migration_enabled) && !pinned) return &per_cpu(hrtimer_bases, get_nohz_timer_target()); @@ -248,8 +286,8 @@ again: raw_spin_unlock(&base->cpu_base->lock); raw_spin_lock(&new_base->cpu_base->lock);
- if (new_cpu_base != this_cpu_base && - hrtimer_check_target(timer, new_base)) { + if (!hrtimer_suitable_target(timer, new_base, new_cpu_base, + this_cpu_base)) { raw_spin_unlock(&new_base->cpu_base->lock); raw_spin_lock(&base->cpu_base->lock); new_cpu_base = this_cpu_base; @@ -258,8 +296,7 @@ again: } WRITE_ONCE(timer->base, new_base); } else { - if (new_cpu_base != this_cpu_base && - hrtimer_check_target(timer, new_base)) { + if (!hrtimer_suitable_target(timer, new_base, new_cpu_base, this_cpu_base)) { new_cpu_base = this_cpu_base; goto again; } @@ -718,8 +755,6 @@ static inline int hrtimer_is_hres_enable return hrtimer_hres_enabled; }
-static void retrigger_next_event(void *arg); - /* * Switch to high resolution mode */ @@ -1205,6 +1240,7 @@ static int __hrtimer_start_range_ns(stru u64 delta_ns, const enum hrtimer_mode mode, struct hrtimer_clock_base *base) { + struct hrtimer_cpu_base *this_cpu_base = this_cpu_ptr(&hrtimer_bases); struct hrtimer_clock_base *new_base; bool force_local, first;
@@ -1216,10 +1252,16 @@ static int __hrtimer_start_range_ns(stru * and enforce reprogramming after it is queued no matter whether * it is the new first expiring timer again or not. */ - force_local = base->cpu_base == this_cpu_ptr(&hrtimer_bases); + force_local = base->cpu_base == this_cpu_base; force_local &= base->cpu_base->next_timer == timer;
/* + * Don't force local queuing if this enqueue happens on a unplugged + * CPU after hrtimer_cpu_dying() has been invoked. + */ + force_local &= this_cpu_base->online; + + /* * Remove an active timer from the queue. In case it is not queued * on the current CPU, make sure that remove_hrtimer() updates the * remote data correctly. @@ -1248,8 +1290,27 @@ static int __hrtimer_start_range_ns(stru }
first = enqueue_hrtimer(timer, new_base, mode); - if (!force_local) - return first; + if (!force_local) { + /* + * If the current CPU base is online, then the timer is + * never queued on a remote CPU if it would be the first + * expiring timer there. + */ + if (hrtimer_base_is_online(this_cpu_base)) + return first; + + /* + * Timer was enqueued remote because the current base is + * already offline. If the timer is the first to expire, + * kick the remote CPU to reprogram the clock event. + */ + if (first) { + struct hrtimer_cpu_base *new_cpu_base = new_base->cpu_base; + + smp_call_function_single_async(new_cpu_base->cpu, &new_cpu_base->csd); + } + return 0; + }
/* * Timer was forced to stay on the current CPU to avoid
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Boris Burkov boris@bur.io
commit 3e74859ee35edc33a022c3f3971df066ea0ca6b9 upstream.
When we call btrfs_read_folio() to bring a folio uptodate, we unlock the folio. The result of that is that a different thread can modify the mapping (like remove it with invalidate) before we call folio_lock(). This results in an invalid page and we need to try again.
In particular, if we are relocating concurrently with aborting a transaction, this can result in a crash like the following:
BUG: kernel NULL pointer dereference, address: 0000000000000000 PGD 0 P4D 0 Oops: 0000 [#1] SMP CPU: 76 PID: 1411631 Comm: kworker/u322:5 Workqueue: events_unbound btrfs_reclaim_bgs_work RIP: 0010:set_page_extent_mapped+0x20/0xb0 RSP: 0018:ffffc900516a7be8 EFLAGS: 00010246 RAX: ffffea009e851d08 RBX: ffffea009e0b1880 RCX: 0000000000000000 RDX: 0000000000000000 RSI: ffffc900516a7b90 RDI: ffffea009e0b1880 RBP: 0000000003573000 R08: 0000000000000001 R09: ffff88c07fd2f3f0 R10: 0000000000000000 R11: 0000194754b575be R12: 0000000003572000 R13: 0000000003572fff R14: 0000000000100cca R15: 0000000005582fff FS: 0000000000000000(0000) GS:ffff88c07fd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000000 CR3: 000000407d00f002 CR4: 00000000007706f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 PKRU: 55555554 Call Trace: <TASK> ? __die+0x78/0xc0 ? page_fault_oops+0x2a8/0x3a0 ? __switch_to+0x133/0x530 ? wq_worker_running+0xa/0x40 ? exc_page_fault+0x63/0x130 ? asm_exc_page_fault+0x22/0x30 ? set_page_extent_mapped+0x20/0xb0 relocate_file_extent_cluster+0x1a7/0x940 relocate_data_extent+0xaf/0x120 relocate_block_group+0x20f/0x480 btrfs_relocate_block_group+0x152/0x320 btrfs_relocate_chunk+0x3d/0x120 btrfs_reclaim_bgs_work+0x2ae/0x4e0 process_scheduled_works+0x184/0x370 worker_thread+0xc6/0x3e0 ? blk_add_timer+0xb0/0xb0 kthread+0xae/0xe0 ? flush_tlb_kernel_range+0x90/0x90 ret_from_fork+0x2f/0x40 ? flush_tlb_kernel_range+0x90/0x90 ret_from_fork_asm+0x11/0x20 </TASK>
This occurs because cleanup_one_transaction() calls destroy_delalloc_inodes() which calls invalidate_inode_pages2() which takes the folio_lock before setting mapping to NULL. We fail to check this, and subsequently call set_extent_mapping(), which assumes that mapping != NULL (in fact it asserts that in debug mode)
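Schematically, the fix follows the usual lock-and-revalidate pattern for a page that was unlocked in between (details of the read-in path are omitted here):

again:
	page = find_lock_page(inode->i_mapping, page_index);
	if (!page) {
		/* ... read the page in and retry the lookup ... */
	}

	/* The page may have been invalidated while it was unlocked for read-in,
	 * so its mapping must be re-checked under the page lock. */
	if (page->mapping != inode->i_mapping) {
		unlock_page(page);
		put_page(page);
		goto again;
	}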
Note that the "fixes" patch here is not the one that introduced the race (the very first iteration of this code from 2009) but a more recent change that made this particular crash happen in practice.
Fixes: e7f1326cc24e ("btrfs: set page extent mapped after read_folio in relocate_one_page") CC: stable@vger.kernel.org # 6.1+ Reviewed-by: Qu Wenruo wqu@suse.com Signed-off-by: Boris Burkov boris@bur.io Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Zhaoyang Li lizy04@hust.edu.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/relocation.c | 6 ++++++ 1 file changed, 6 insertions(+)
--- a/fs/btrfs/relocation.c +++ b/fs/btrfs/relocation.c @@ -2977,6 +2977,7 @@ static int relocate_one_page(struct inod int ret;
ASSERT(page_index <= last_index); +again: page = find_lock_page(inode->i_mapping, page_index); if (!page) { page_cache_sync_readahead(inode->i_mapping, ra, NULL, @@ -2998,6 +2999,11 @@ static int relocate_one_page(struct inod ret = -EIO; goto release_page; } + if (page->mapping != inode->i_mapping) { + unlock_page(page); + put_page(page); + goto again; + } }
/*
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexander Mikhalitsyn aleksandr.mikhalitsyn@canonical.com
commit 97154bcf4d1b7cabefec8a72cff5fbb91d5afb7b upstream.
Let's make CONFIG_UNIX a bool instead of a tristate. We decided to do that during the discussion about the SCM_PIDFD patchset [1].
[1] https://lore.kernel.org/lkml/20230524081933.44dc8bea@kernel.org/
Cc: "David S. Miller" davem@davemloft.net Cc: Eric Dumazet edumazet@google.com Cc: Jakub Kicinski kuba@kernel.org Cc: Paolo Abeni pabeni@redhat.com Cc: Leon Romanovsky leon@kernel.org Cc: David Ahern dsahern@kernel.org Cc: Arnd Bergmann arnd@arndb.de Cc: Kees Cook keescook@chromium.org Cc: Christian Brauner brauner@kernel.org Cc: Kuniyuki Iwashima kuniyu@amazon.com Cc: Lennart Poettering mzxreary@0pointer.de Cc: Luca Boccassi bluca@debian.org Cc: linux-kernel@vger.kernel.org Cc: netdev@vger.kernel.org Cc: linux-arch@vger.kernel.org Suggested-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Alexander Mikhalitsyn aleksandr.mikhalitsyn@canonical.com Acked-by: Christian Brauner brauner@kernel.org Reviewed-by: Eric Dumazet edumazet@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/unix/Kconfig | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-)
--- a/net/unix/Kconfig +++ b/net/unix/Kconfig @@ -4,7 +4,7 @@ #
config UNIX - tristate "Unix domain sockets" + bool "Unix domain sockets" help If you say Y here, you will include support for Unix domain sockets; sockets are the standard Unix mechanism for establishing and @@ -14,10 +14,6 @@ config UNIX an embedded system or something similar, you therefore definitely want to say Y here.
- To compile this driver as a module, choose M here: the module will be - called unix. Note that several important services won't work - correctly if you say M here and then neglect to load the module. - Say Y unless you know what you are doing.
config UNIX_SCM
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit 5b17307bd0789edea0675d524a2b277b93bbde62 upstream.
Currently, unix_get_socket() returns struct sock, but after calling it, we always cast it to unix_sk().
Let's return struct unix_sock from unix_get_socket().
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Pavel Begunkov asml.silence@gmail.com Reviewed-by: Simon Horman horms@kernel.org Link: https://lore.kernel.org/r/20240123170856.41348-4-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/af_unix.h | 2 +- net/unix/garbage.c | 19 +++++++------------ net/unix/scm.c | 19 +++++++------------ 3 files changed, 15 insertions(+), 25 deletions(-)
--- a/include/net/af_unix.h +++ b/include/net/af_unix.h @@ -13,7 +13,7 @@ void unix_notinflight(struct user_struct void unix_destruct_scm(struct sk_buff *skb); void unix_gc(void); void wait_for_unix_gc(void); -struct sock *unix_get_socket(struct file *filp); +struct unix_sock *unix_get_socket(struct file *filp); struct sock *unix_peer_get(struct sock *sk);
#define UNIX_HASH_MOD (256 - 1) --- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -105,20 +105,15 @@ static void scan_inflight(struct sock *x
while (nfd--) { /* Get the socket the fd matches if it indeed does so */ - struct sock *sk = unix_get_socket(*fp++); + struct unix_sock *u = unix_get_socket(*fp++);
- if (sk) { - struct unix_sock *u = unix_sk(sk); + /* Ignore non-candidates, they could have been added + * to the queues after starting the garbage collection + */ + if (u && test_bit(UNIX_GC_CANDIDATE, &u->gc_flags)) { + hit = true;
- /* Ignore non-candidates, they could - * have been added to the queues after - * starting the garbage collection - */ - if (test_bit(UNIX_GC_CANDIDATE, &u->gc_flags)) { - hit = true; - - func(u); - } + func(u); } } if (hit && hitlist != NULL) { --- a/net/unix/scm.c +++ b/net/unix/scm.c @@ -21,9 +21,8 @@ EXPORT_SYMBOL(gc_inflight_list); DEFINE_SPINLOCK(unix_gc_lock); EXPORT_SYMBOL(unix_gc_lock);
-struct sock *unix_get_socket(struct file *filp) +struct unix_sock *unix_get_socket(struct file *filp) { - struct sock *u_sock = NULL; struct inode *inode = file_inode(filp);
/* Socket ? */ @@ -33,10 +32,10 @@ struct sock *unix_get_socket(struct file
/* PF_UNIX ? */ if (s && sock->ops && sock->ops->family == PF_UNIX) - u_sock = s; + return unix_sk(s); }
- return u_sock; + return NULL; } EXPORT_SYMBOL(unix_get_socket);
@@ -45,13 +44,11 @@ EXPORT_SYMBOL(unix_get_socket); */ void unix_inflight(struct user_struct *user, struct file *fp) { - struct sock *s = unix_get_socket(fp); + struct unix_sock *u = unix_get_socket(fp);
spin_lock(&unix_gc_lock);
- if (s) { - struct unix_sock *u = unix_sk(s); - + if (u) { if (!u->inflight) { BUG_ON(!list_empty(&u->link)); list_add_tail(&u->link, &gc_inflight_list); @@ -68,13 +65,11 @@ void unix_inflight(struct user_struct *u
void unix_notinflight(struct user_struct *user, struct file *fp) { - struct sock *s = unix_get_socket(fp); + struct unix_sock *u = unix_get_socket(fp);
spin_lock(&unix_gc_lock);
- if (s) { - struct unix_sock *u = unix_sk(s); - + if (u) { BUG_ON(!u->inflight); BUG_ON(list_empty(&u->link));
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit 8b90a9f819dc2a06baae4ec1a64d875e53b824ec upstream.
If more than 16000 inflight AF_UNIX sockets exist and the garbage collector is not running, unix_(dgram|stream)_sendmsg() call unix_gc(). Also, they wait for unix_gc() to complete.
In unix_gc(), all inflight AF_UNIX sockets are traversed at least once, and more if they are the GC candidate. Thus, sendmsg() significantly slows down with too many inflight AF_UNIX sockets.
There is a small window in which multiple unix_gc() instances can be invoked; all but one will then be blocked on the same spinlock.
Let's convert unix_gc() to use struct work so that it will not consume CPUs unnecessarily.
Note that WRITE_ONCE(gc_in_progress, true) is moved before running GC. If we left the WRITE_ONCE() as is and used the following test to call flush_work(), a process might miss it:
  CPU 0                                   CPU 1
  ---                                     ---
                                          start work and call __unix_gc()
  if (work_pending(&unix_gc_work) ||      <-- false
      READ_ONCE(gc_in_progress))          <-- false
          flush_work();                   <-- missed!
                                          WRITE_ONCE(gc_in_progress, true)
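The required ordering boils down to the sketch below (simplified from the hunk in this patch, not new code): the gc_in_progress flag must be published before the work is queued, so that any waiter that observes the flag set also finds a queued work item to flush.

static bool gc_in_progress;

static void __unix_gc(struct work_struct *work)
{
        /* ... do the actual collection ... */

        /* Paired with READ_ONCE() in wait_for_unix_gc(). */
        WRITE_ONCE(gc_in_progress, false);
}

static DECLARE_WORK(unix_gc_work, __unix_gc);

void unix_gc(void)
{
        /* Publish the flag first, then queue the work that clears it. */
        WRITE_ONCE(gc_in_progress, true);
        queue_work(system_unbound_wq, &unix_gc_work);
}

void wait_for_unix_gc(void)
{
        /* A waiter that sees the flag is guaranteed to find queued work. */
        if (READ_ONCE(gc_in_progress))
                flush_work(&unix_gc_work);
}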
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Link: https://lore.kernel.org/r/20240123170856.41348-5-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/unix/garbage.c | 54 ++++++++++++++++++++++++++--------------------------- 1 file changed, 27 insertions(+), 27 deletions(-)
--- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -86,7 +86,6 @@ /* Internal data structures and random procedures: */
static LIST_HEAD(gc_candidates); -static DECLARE_WAIT_QUEUE_HEAD(unix_gc_wait);
static void scan_inflight(struct sock *x, void (*func)(struct unix_sock *), struct sk_buff_head *hitlist) @@ -182,23 +181,8 @@ static void inc_inflight_move_tail(struc }
static bool gc_in_progress; -#define UNIX_INFLIGHT_TRIGGER_GC 16000 - -void wait_for_unix_gc(void) -{ - /* If number of inflight sockets is insane, - * force a garbage collect right now. - * Paired with the WRITE_ONCE() in unix_inflight(), - * unix_notinflight() and gc_in_progress(). - */ - if (READ_ONCE(unix_tot_inflight) > UNIX_INFLIGHT_TRIGGER_GC && - !READ_ONCE(gc_in_progress)) - unix_gc(); - wait_event(unix_gc_wait, !READ_ONCE(gc_in_progress)); -}
-/* The external entry point: unix_gc() */ -void unix_gc(void) +static void __unix_gc(struct work_struct *work) { struct sk_buff *next_skb, *skb; struct unix_sock *u; @@ -209,13 +193,6 @@ void unix_gc(void)
spin_lock(&unix_gc_lock);
- /* Avoid a recursive GC. */ - if (gc_in_progress) - goto out; - - /* Paired with READ_ONCE() in wait_for_unix_gc(). */ - WRITE_ONCE(gc_in_progress, true); - /* First, select candidates for garbage collection. Only * in-flight sockets are considered, and from those only ones * which don't have any external reference. @@ -346,8 +323,31 @@ void unix_gc(void) /* Paired with READ_ONCE() in wait_for_unix_gc(). */ WRITE_ONCE(gc_in_progress, false);
- wake_up(&unix_gc_wait); - - out: spin_unlock(&unix_gc_lock); } + +static DECLARE_WORK(unix_gc_work, __unix_gc); + +void unix_gc(void) +{ + WRITE_ONCE(gc_in_progress, true); + queue_work(system_unbound_wq, &unix_gc_work); +} + +#define UNIX_INFLIGHT_TRIGGER_GC 16000 + +void wait_for_unix_gc(void) +{ + /* If number of inflight sockets is insane, + * force a garbage collect right now. + * + * Paired with the WRITE_ONCE() in unix_inflight(), + * unix_notinflight(), and __unix_gc(). + */ + if (READ_ONCE(unix_tot_inflight) > UNIX_INFLIGHT_TRIGGER_GC && + !READ_ONCE(gc_in_progress)) + unix_gc(); + + if (READ_ONCE(gc_in_progress)) + flush_work(&unix_gc_work); +}
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit d9f21b3613337b55cc9d4a6ead484dca68475143 upstream.
If more than 16000 inflight AF_UNIX sockets exist and the garbage collector is not running, unix_(dgram|stream)_sendmsg() call unix_gc(). Also, they wait for unix_gc() to complete.
In unix_gc(), all inflight AF_UNIX sockets are traversed at least once, and more if they are the GC candidate. Thus, sendmsg() significantly slows down with too many inflight AF_UNIX sockets.
However, if a process sends data with no AF_UNIX FD, the sendmsg() call does not need to wait for GC. After this change, only a process that meets both conditions below will be blocked in such a situation.
1) cmsg contains an AF_UNIX socket
2) more than 32 AF_UNIX sockets sent by the same user are still inflight
Note that even a sendmsg() call that does not meet the condition but has AF_UNIX FD will be blocked later in unix_scm_to_skb() by the spinlock, but we allow that as a bonus for sane users.
The results below are the time spent in unix_dgram_sendmsg() sending 1 byte of data with no FD 4096 times on a host where 32K inflight AF_UNIX sockets exist.
Without the series: the sane sendmsg() needs to wait for GC unreasonably.
$ sudo /usr/share/bcc/tools/funclatency -p 11165 unix_dgram_sendmsg
Tracing 1 functions for "unix_dgram_sendmsg"... Hit Ctrl-C to end.
^C
     nsecs               : count     distribution
      [...]
    524288 -> 1048575    : 0        |                                        |
   1048576 -> 2097151    : 3881     |****************************************|
   2097152 -> 4194303    : 214      |**                                      |
   4194304 -> 8388607    : 1        |                                        |
avg = 1825567 nsecs, total: 7477526027 nsecs, count: 4096
With series: the sane sendmsg() can finish much faster.
$ sudo /usr/share/bcc/tools/funclatency -p 8702 unix_dgram_sendmsg
Tracing 1 functions for "unix_dgram_sendmsg"... Hit Ctrl-C to end.
^C
     nsecs               : count     distribution
      [...]
       128 -> 255        : 0        |                                        |
       256 -> 511        : 4092     |****************************************|
       512 -> 1023       : 2        |                                        |
      1024 -> 2047       : 0        |                                        |
      2048 -> 4095       : 0        |                                        |
      4096 -> 8191       : 1        |                                        |
      8192 -> 16383      : 1        |                                        |
avg = 410 nsecs, total: 1680510 nsecs, count: 4096
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Link: https://lore.kernel.org/r/20240123170856.41348-6-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/af_unix.h | 12 ++++++++++-- include/net/scm.h | 1 + net/core/scm.c | 5 +++++ net/unix/af_unix.c | 6 ++++-- net/unix/garbage.c | 10 +++++++++- 5 files changed, 29 insertions(+), 5 deletions(-)
--- a/include/net/af_unix.h +++ b/include/net/af_unix.h @@ -8,12 +8,20 @@ #include <linux/refcount.h> #include <net/sock.h>
+#if IS_ENABLED(CONFIG_UNIX) +struct unix_sock *unix_get_socket(struct file *filp); +#else +static inline struct unix_sock *unix_get_socket(struct file *filp) +{ + return NULL; +} +#endif + void unix_inflight(struct user_struct *user, struct file *fp); void unix_notinflight(struct user_struct *user, struct file *fp); void unix_destruct_scm(struct sk_buff *skb); void unix_gc(void); -void wait_for_unix_gc(void); -struct unix_sock *unix_get_socket(struct file *filp); +void wait_for_unix_gc(struct scm_fp_list *fpl); struct sock *unix_peer_get(struct sock *sk);
#define UNIX_HASH_MOD (256 - 1) --- a/include/net/scm.h +++ b/include/net/scm.h @@ -23,6 +23,7 @@ struct scm_creds {
struct scm_fp_list { short count; + short count_unix; short max; struct user_struct *user; struct file *fp[SCM_MAX_FD]; --- a/net/core/scm.c +++ b/net/core/scm.c @@ -36,6 +36,7 @@ #include <net/compat.h> #include <net/scm.h> #include <net/cls_cgroup.h> +#include <net/af_unix.h>
/* @@ -85,6 +86,7 @@ static int scm_fp_copy(struct cmsghdr *c return -ENOMEM; *fplp = fpl; fpl->count = 0; + fpl->count_unix = 0; fpl->max = SCM_MAX_FD; fpl->user = NULL; } @@ -109,6 +111,9 @@ static int scm_fp_copy(struct cmsghdr *c fput(file); return -EINVAL; } + if (unix_get_socket(file)) + fpl->count_unix++; + *fpp++ = file; fpl->count++; } --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -1875,11 +1875,12 @@ static int unix_dgram_sendmsg(struct soc long timeo; int err;
- wait_for_unix_gc(); err = scm_send(sock, msg, &scm, false); if (err < 0) return err;
+ wait_for_unix_gc(scm.fp); + err = -EOPNOTSUPP; if (msg->msg_flags&MSG_OOB) goto out; @@ -2145,11 +2146,12 @@ static int unix_stream_sendmsg(struct so bool fds_sent = false; int data_len;
- wait_for_unix_gc(); err = scm_send(sock, msg, &scm, false); if (err < 0) return err;
+ wait_for_unix_gc(scm.fp); + err = -EOPNOTSUPP; if (msg->msg_flags & MSG_OOB) { #if IS_ENABLED(CONFIG_AF_UNIX_OOB) --- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -335,8 +335,9 @@ void unix_gc(void) }
#define UNIX_INFLIGHT_TRIGGER_GC 16000 +#define UNIX_INFLIGHT_SANE_USER (SCM_MAX_FD * 8)
-void wait_for_unix_gc(void) +void wait_for_unix_gc(struct scm_fp_list *fpl) { /* If number of inflight sockets is insane, * force a garbage collect right now. @@ -348,6 +349,13 @@ void wait_for_unix_gc(void) !READ_ONCE(gc_in_progress)) unix_gc();
+ /* Penalise users who want to send AF_UNIX sockets + * but whose sockets have not been received yet. + */ + if (!fpl || !fpl->count_unix || + READ_ONCE(fpl->user->unix_inflight) < UNIX_INFLIGHT_SANE_USER) + return; + if (READ_ONCE(gc_in_progress)) flush_work(&unix_gc_work); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit d0f6dc26346863e1f4a23117f5468614e54df064 upstream.
This is a prep patch for the last patch in this series so that checkpatch will not warn about BUG_ON().
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Jens Axboe axboe@kernel.dk Link: https://lore.kernel.org/r/20240129190435.57228-2-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/unix/garbage.c | 8 ++++---- net/unix/scm.c | 8 ++++---- 2 files changed, 8 insertions(+), 8 deletions(-)
--- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -145,7 +145,7 @@ static void scan_children(struct sock *x /* An embryo cannot be in-flight, so it's safe * to use the list link. */ - BUG_ON(!list_empty(&u->link)); + WARN_ON_ONCE(!list_empty(&u->link)); list_add_tail(&u->link, &embryos); } spin_unlock(&x->sk_receive_queue.lock); @@ -224,8 +224,8 @@ static void __unix_gc(struct work_struct
total_refs = file_count(sk->sk_socket->file);
- BUG_ON(!u->inflight); - BUG_ON(total_refs < u->inflight); + WARN_ON_ONCE(!u->inflight); + WARN_ON_ONCE(total_refs < u->inflight); if (total_refs == u->inflight) { list_move_tail(&u->link, &gc_candidates); __set_bit(UNIX_GC_CANDIDATE, &u->gc_flags); @@ -318,7 +318,7 @@ static void __unix_gc(struct work_struct list_move_tail(&u->link, &gc_inflight_list);
/* All candidates should have been detached by now. */ - BUG_ON(!list_empty(&gc_candidates)); + WARN_ON_ONCE(!list_empty(&gc_candidates));
/* Paired with READ_ONCE() in wait_for_unix_gc(). */ WRITE_ONCE(gc_in_progress, false); --- a/net/unix/scm.c +++ b/net/unix/scm.c @@ -50,10 +50,10 @@ void unix_inflight(struct user_struct *u
if (u) { if (!u->inflight) { - BUG_ON(!list_empty(&u->link)); + WARN_ON_ONCE(!list_empty(&u->link)); list_add_tail(&u->link, &gc_inflight_list); } else { - BUG_ON(list_empty(&u->link)); + WARN_ON_ONCE(list_empty(&u->link)); } u->inflight++; /* Paired with READ_ONCE() in wait_for_unix_gc() */ @@ -70,8 +70,8 @@ void unix_notinflight(struct user_struct spin_lock(&unix_gc_lock);
if (u) { - BUG_ON(!u->inflight); - BUG_ON(list_empty(&u->link)); + WARN_ON_ONCE(!u->inflight); + WARN_ON_ONCE(list_empty(&u->link));
u->inflight--; if (!u->inflight)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit 11498715f266a3fb4caabba9dd575636cbcaa8f1 upstream.
Since commit 705318a99a13 ("io_uring/af_unix: disable sending io_uring over sockets"), io_uring's unix socket cannot be passed via SCM_RIGHTS, so it does not contribute to cyclic references and is no longer a candidate for garbage collection.
Also, commit 6e5e6d274956 ("io_uring: drop any code related to SCM_RIGHTS") cleaned up SCM_RIGHTS code in io_uring.
Let's do it in AF_UNIX as well by reverting commit 0091bfc81741 ("io_uring/af_unix: defer registered files gc to io_uring release") and commit 10369080454d ("net: reclaim skb->scm_io_uring bit").
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Jens Axboe axboe@kernel.dk Link: https://lore.kernel.org/r/20240129190435.57228-3-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/unix/garbage.c | 25 ++----------------------- 1 file changed, 2 insertions(+), 23 deletions(-)
--- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -184,12 +184,10 @@ static bool gc_in_progress;
static void __unix_gc(struct work_struct *work) { - struct sk_buff *next_skb, *skb; - struct unix_sock *u; - struct unix_sock *next; struct sk_buff_head hitlist; - struct list_head cursor; + struct unix_sock *u, *next; LIST_HEAD(not_cycle_list); + struct list_head cursor;
spin_lock(&unix_gc_lock);
@@ -293,30 +291,11 @@ static void __unix_gc(struct work_struct
spin_unlock(&unix_gc_lock);
- /* We need io_uring to clean its registered files, ignore all io_uring - * originated skbs. It's fine as io_uring doesn't keep references to - * other io_uring instances and so killing all other files in the cycle - * will put all io_uring references forcing it to go through normal - * release.path eventually putting registered files. - */ - skb_queue_walk_safe(&hitlist, skb, next_skb) { - if (skb->scm_io_uring) { - __skb_unlink(skb, &hitlist); - skb_queue_tail(&skb->sk->sk_receive_queue, skb); - } - } - /* Here we are. Hitlist is filled. Die. */ __skb_queue_purge(&hitlist);
spin_lock(&unix_gc_lock);
- /* There could be io_uring registered files, just push them back to - * the inflight list - */ - list_for_each_entry_safe(u, next, &gc_candidates, link) - list_move_tail(&u->link, &gc_inflight_list); - /* All candidates should have been detached by now. */ WARN_ON_ONCE(!list_empty(&gc_candidates));
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit 99a7a5b9943ea2d05fb0dee38e4ae2290477ed83 upstream.
Originally, the code related to garbage collection was all in garbage.c.
Commit f4e65870e5ce ("net: split out functions related to registering inflight socket files") moved some functions to scm.c for io_uring and added CONFIG_UNIX_SCM just in case AF_UNIX was built as module.
However, since commit 97154bcf4d1b ("af_unix: Kconfig: make CONFIG_UNIX bool"), AF_UNIX is no longer built separately. Also, io_uring does not support SCM_RIGHTS now.
Let's move the functions back to garbage.c
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Jens Axboe axboe@kernel.dk Link: https://lore.kernel.org/r/20240129190435.57228-4-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/af_unix.h | 7 +- net/Makefile | 2 net/unix/Kconfig | 5 - net/unix/Makefile | 2 net/unix/af_unix.c | 63 ++++++++++++++++++++- net/unix/garbage.c | 73 +++++++++++++++++++++++- net/unix/scm.c | 149 -------------------------------------------------- net/unix/scm.h | 10 --- 8 files changed, 137 insertions(+), 174 deletions(-) delete mode 100644 net/unix/scm.c delete mode 100644 net/unix/scm.h
--- a/include/net/af_unix.h +++ b/include/net/af_unix.h @@ -17,19 +17,20 @@ static inline struct unix_sock *unix_get } #endif
+extern spinlock_t unix_gc_lock; +extern unsigned int unix_tot_inflight; + void unix_inflight(struct user_struct *user, struct file *fp); void unix_notinflight(struct user_struct *user, struct file *fp); -void unix_destruct_scm(struct sk_buff *skb); void unix_gc(void); void wait_for_unix_gc(struct scm_fp_list *fpl); + struct sock *unix_peer_get(struct sock *sk);
#define UNIX_HASH_MOD (256 - 1) #define UNIX_HASH_SIZE (256 * 2) #define UNIX_HASH_BITS 8
-extern unsigned int unix_tot_inflight; - struct unix_address { refcount_t refcnt; int len; --- a/net/Makefile +++ b/net/Makefile @@ -17,7 +17,7 @@ obj-$(CONFIG_NETFILTER) += netfilter/ obj-$(CONFIG_INET) += ipv4/ obj-$(CONFIG_TLS) += tls/ obj-$(CONFIG_XFRM) += xfrm/ -obj-$(CONFIG_UNIX_SCM) += unix/ +obj-$(CONFIG_UNIX) += unix/ obj-y += ipv6/ obj-$(CONFIG_BPFILTER) += bpfilter/ obj-$(CONFIG_PACKET) += packet/ --- a/net/unix/Kconfig +++ b/net/unix/Kconfig @@ -16,11 +16,6 @@ config UNIX
Say Y unless you know what you are doing.
-config UNIX_SCM - bool - depends on UNIX - default y - config AF_UNIX_OOB bool depends on UNIX --- a/net/unix/Makefile +++ b/net/unix/Makefile @@ -11,5 +11,3 @@ unix-$(CONFIG_BPF_SYSCALL) += unix_bpf.o
obj-$(CONFIG_UNIX_DIAG) += unix_diag.o unix_diag-y := diag.o - -obj-$(CONFIG_UNIX_SCM) += scm.o --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -116,8 +116,6 @@ #include <linux/file.h> #include <linux/btf_ids.h>
-#include "scm.h" - static atomic_long_t unix_nr_socks; static struct hlist_head bsd_socket_buckets[UNIX_HASH_SIZE / 2]; static spinlock_t bsd_socket_locks[UNIX_HASH_SIZE / 2]; @@ -1726,6 +1724,52 @@ out: return err; }
+/* The "user->unix_inflight" variable is protected by the garbage + * collection lock, and we just read it locklessly here. If you go + * over the limit, there might be a tiny race in actually noticing + * it across threads. Tough. + */ +static inline bool too_many_unix_fds(struct task_struct *p) +{ + struct user_struct *user = current_user(); + + if (unlikely(READ_ONCE(user->unix_inflight) > task_rlimit(p, RLIMIT_NOFILE))) + return !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN); + return false; +} + +static int unix_attach_fds(struct scm_cookie *scm, struct sk_buff *skb) +{ + int i; + + if (too_many_unix_fds(current)) + return -ETOOMANYREFS; + + /* Need to duplicate file references for the sake of garbage + * collection. Otherwise a socket in the fps might become a + * candidate for GC while the skb is not yet queued. + */ + UNIXCB(skb).fp = scm_fp_dup(scm->fp); + if (!UNIXCB(skb).fp) + return -ENOMEM; + + for (i = scm->fp->count - 1; i >= 0; i--) + unix_inflight(scm->fp->user, scm->fp->fp[i]); + + return 0; +} + +static void unix_detach_fds(struct scm_cookie *scm, struct sk_buff *skb) +{ + int i; + + scm->fp = UNIXCB(skb).fp; + UNIXCB(skb).fp = NULL; + + for (i = scm->fp->count - 1; i >= 0; i--) + unix_notinflight(scm->fp->user, scm->fp->fp[i]); +} + static void unix_peek_fds(struct scm_cookie *scm, struct sk_buff *skb) { scm->fp = scm_fp_dup(UNIXCB(skb).fp); @@ -1773,6 +1817,21 @@ static void unix_peek_fds(struct scm_coo spin_unlock(&unix_gc_lock); }
+static void unix_destruct_scm(struct sk_buff *skb) +{ + struct scm_cookie scm; + + memset(&scm, 0, sizeof(scm)); + scm.pid = UNIXCB(skb).pid; + if (UNIXCB(skb).fp) + unix_detach_fds(&scm, skb); + + /* Alas, it calls VFS */ + /* So fscking what? fput() had been SMP-safe since the last Summer */ + scm_destroy(&scm); + sock_wfree(skb); +} + static int unix_scm_to_skb(struct scm_cookie *scm, struct sk_buff *skb, bool send_fds) { int err = 0; --- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -81,11 +81,80 @@ #include <net/scm.h> #include <net/tcp_states.h>
-#include "scm.h" +struct unix_sock *unix_get_socket(struct file *filp) +{ + struct inode *inode = file_inode(filp);
-/* Internal data structures and random procedures: */ + /* Socket ? */ + if (S_ISSOCK(inode->i_mode) && !(filp->f_mode & FMODE_PATH)) { + struct socket *sock = SOCKET_I(inode); + const struct proto_ops *ops; + struct sock *sk = sock->sk;
+ ops = READ_ONCE(sock->ops); + + /* PF_UNIX ? */ + if (sk && ops && ops->family == PF_UNIX) + return unix_sk(sk); + } + + return NULL; +} + +DEFINE_SPINLOCK(unix_gc_lock); +unsigned int unix_tot_inflight; static LIST_HEAD(gc_candidates); +static LIST_HEAD(gc_inflight_list); + +/* Keep the number of times in flight count for the file + * descriptor if it is for an AF_UNIX socket. + */ +void unix_inflight(struct user_struct *user, struct file *filp) +{ + struct unix_sock *u = unix_get_socket(filp); + + spin_lock(&unix_gc_lock); + + if (u) { + if (!u->inflight) { + WARN_ON_ONCE(!list_empty(&u->link)); + list_add_tail(&u->link, &gc_inflight_list); + } else { + WARN_ON_ONCE(list_empty(&u->link)); + } + u->inflight++; + + /* Paired with READ_ONCE() in wait_for_unix_gc() */ + WRITE_ONCE(unix_tot_inflight, unix_tot_inflight + 1); + } + + WRITE_ONCE(user->unix_inflight, user->unix_inflight + 1); + + spin_unlock(&unix_gc_lock); +} + +void unix_notinflight(struct user_struct *user, struct file *filp) +{ + struct unix_sock *u = unix_get_socket(filp); + + spin_lock(&unix_gc_lock); + + if (u) { + WARN_ON_ONCE(!u->inflight); + WARN_ON_ONCE(list_empty(&u->link)); + + u->inflight--; + if (!u->inflight) + list_del_init(&u->link); + + /* Paired with READ_ONCE() in wait_for_unix_gc() */ + WRITE_ONCE(unix_tot_inflight, unix_tot_inflight - 1); + } + + WRITE_ONCE(user->unix_inflight, user->unix_inflight - 1); + + spin_unlock(&unix_gc_lock); +}
static void scan_inflight(struct sock *x, void (*func)(struct unix_sock *), struct sk_buff_head *hitlist) --- a/net/unix/scm.c +++ /dev/null @@ -1,149 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -#include <linux/module.h> -#include <linux/kernel.h> -#include <linux/string.h> -#include <linux/socket.h> -#include <linux/net.h> -#include <linux/fs.h> -#include <net/af_unix.h> -#include <net/scm.h> -#include <linux/init.h> -#include <linux/io_uring.h> - -#include "scm.h" - -unsigned int unix_tot_inflight; -EXPORT_SYMBOL(unix_tot_inflight); - -LIST_HEAD(gc_inflight_list); -EXPORT_SYMBOL(gc_inflight_list); - -DEFINE_SPINLOCK(unix_gc_lock); -EXPORT_SYMBOL(unix_gc_lock); - -struct unix_sock *unix_get_socket(struct file *filp) -{ - struct inode *inode = file_inode(filp); - - /* Socket ? */ - if (S_ISSOCK(inode->i_mode) && !(filp->f_mode & FMODE_PATH)) { - struct socket *sock = SOCKET_I(inode); - struct sock *s = sock->sk; - - /* PF_UNIX ? */ - if (s && sock->ops && sock->ops->family == PF_UNIX) - return unix_sk(s); - } - - return NULL; -} -EXPORT_SYMBOL(unix_get_socket); - -/* Keep the number of times in flight count for the file - * descriptor if it is for an AF_UNIX socket. - */ -void unix_inflight(struct user_struct *user, struct file *fp) -{ - struct unix_sock *u = unix_get_socket(fp); - - spin_lock(&unix_gc_lock); - - if (u) { - if (!u->inflight) { - WARN_ON_ONCE(!list_empty(&u->link)); - list_add_tail(&u->link, &gc_inflight_list); - } else { - WARN_ON_ONCE(list_empty(&u->link)); - } - u->inflight++; - /* Paired with READ_ONCE() in wait_for_unix_gc() */ - WRITE_ONCE(unix_tot_inflight, unix_tot_inflight + 1); - } - WRITE_ONCE(user->unix_inflight, user->unix_inflight + 1); - spin_unlock(&unix_gc_lock); -} - -void unix_notinflight(struct user_struct *user, struct file *fp) -{ - struct unix_sock *u = unix_get_socket(fp); - - spin_lock(&unix_gc_lock); - - if (u) { - WARN_ON_ONCE(!u->inflight); - WARN_ON_ONCE(list_empty(&u->link)); - - u->inflight--; - if (!u->inflight) - list_del_init(&u->link); - /* Paired with READ_ONCE() in wait_for_unix_gc() */ - WRITE_ONCE(unix_tot_inflight, unix_tot_inflight - 1); - } - WRITE_ONCE(user->unix_inflight, user->unix_inflight - 1); - spin_unlock(&unix_gc_lock); -} - -/* - * The "user->unix_inflight" variable is protected by the garbage - * collection lock, and we just read it locklessly here. If you go - * over the limit, there might be a tiny race in actually noticing - * it across threads. Tough. - */ -static inline bool too_many_unix_fds(struct task_struct *p) -{ - struct user_struct *user = current_user(); - - if (unlikely(READ_ONCE(user->unix_inflight) > task_rlimit(p, RLIMIT_NOFILE))) - return !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN); - return false; -} - -int unix_attach_fds(struct scm_cookie *scm, struct sk_buff *skb) -{ - int i; - - if (too_many_unix_fds(current)) - return -ETOOMANYREFS; - - /* - * Need to duplicate file references for the sake of garbage - * collection. Otherwise a socket in the fps might become a - * candidate for GC while the skb is not yet queued. 
- */ - UNIXCB(skb).fp = scm_fp_dup(scm->fp); - if (!UNIXCB(skb).fp) - return -ENOMEM; - - for (i = scm->fp->count - 1; i >= 0; i--) - unix_inflight(scm->fp->user, scm->fp->fp[i]); - return 0; -} -EXPORT_SYMBOL(unix_attach_fds); - -void unix_detach_fds(struct scm_cookie *scm, struct sk_buff *skb) -{ - int i; - - scm->fp = UNIXCB(skb).fp; - UNIXCB(skb).fp = NULL; - - for (i = scm->fp->count-1; i >= 0; i--) - unix_notinflight(scm->fp->user, scm->fp->fp[i]); -} -EXPORT_SYMBOL(unix_detach_fds); - -void unix_destruct_scm(struct sk_buff *skb) -{ - struct scm_cookie scm; - - memset(&scm, 0, sizeof(scm)); - scm.pid = UNIXCB(skb).pid; - if (UNIXCB(skb).fp) - unix_detach_fds(&scm, skb); - - /* Alas, it calls VFS */ - /* So fscking what? fput() had been SMP-safe since the last Summer */ - scm_destroy(&scm); - sock_wfree(skb); -} -EXPORT_SYMBOL(unix_destruct_scm); --- a/net/unix/scm.h +++ /dev/null @@ -1,10 +0,0 @@ -#ifndef NET_UNIX_SCM_H -#define NET_UNIX_SCM_H - -extern struct list_head gc_inflight_list; -extern spinlock_t unix_gc_lock; - -int unix_attach_fds(struct scm_cookie *scm, struct sk_buff *skb); -void unix_detach_fds(struct scm_cookie *scm, struct sk_buff *skb); - -#endif
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit 1fbfdfaa590248c1d86407f578e40e5c65136330 upstream.
We will replace the garbage collection algorithm for AF_UNIX, where we will consider each inflight AF_UNIX socket as a vertex and its file descriptor as an edge in a directed graph.
This patch introduces a new struct unix_vertex representing a vertex in the graph and adds its pointer to struct unix_sock.
When we send a fd using the SCM_RIGHTS message, we allocate struct scm_fp_list to struct scm_cookie in scm_fp_copy(). Then, we bump each refcount of the inflight fds' struct file and save them in scm_fp_list.fp.
After that, unix_attach_fds() inexplicably clones scm_fp_list of scm_cookie and sets it to skb. (We will remove this part after replacing GC.)
Here, we add a new function call in unix_attach_fds() to preallocate struct unix_vertex per inflight AF_UNIX fd and link each vertex to skb's scm_fp_list.vertices.
When sendmsg() succeeds later, if the socket of the inflight fd is still not inflight yet, we will set the preallocated vertex to struct unix_sock.vertex and link it to a global list unix_unvisited_vertices under spin_lock(&unix_gc_lock).
If the socket is already inflight, we free the preallocated vertex. This is to avoid taking the lock unnecessarily when sendmsg() could fail later.
In the following patch, we will similarly allocate another struct per edge, which will finally be linked to the inflight socket's unix_vertex.edges.
And then, we will count the number of edges as unix_vertex.out_degree.
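For reference, a condensed sketch of the data model introduced in this and the following patches of the series (struct unix_edge arrives in the next patch): each inflight AF_UNIX socket gets one unix_vertex, and each passed fd becomes one unix_edge hanging off the sender's vertex.

struct unix_vertex {
        struct list_head edges;        /* list of unix_edge.vertex_entry */
        struct list_head entry;        /* on fpl->vertices, later unix_unvisited_vertices */
        unsigned long out_degree;      /* number of outgoing edges */
};

struct unix_edge {
        struct unix_sock *predecessor; /* socket whose fd is inflight */
        struct unix_sock *successor;   /* socket that received the fd */
        struct list_head vertex_entry; /* linked on the predecessor's vertex */
};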
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Paolo Abeni pabeni@redhat.com Link: https://lore.kernel.org/r/20240325202425.60930-2-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/af_unix.h | 9 +++++++++ include/net/scm.h | 3 +++ net/core/scm.c | 7 +++++++ net/unix/af_unix.c | 6 ++++++ net/unix/garbage.c | 38 ++++++++++++++++++++++++++++++++++++++ 5 files changed, 63 insertions(+)
--- a/include/net/af_unix.h +++ b/include/net/af_unix.h @@ -22,9 +22,17 @@ extern unsigned int unix_tot_inflight;
void unix_inflight(struct user_struct *user, struct file *fp); void unix_notinflight(struct user_struct *user, struct file *fp); +int unix_prepare_fpl(struct scm_fp_list *fpl); +void unix_destroy_fpl(struct scm_fp_list *fpl); void unix_gc(void); void wait_for_unix_gc(struct scm_fp_list *fpl);
+struct unix_vertex { + struct list_head edges; + struct list_head entry; + unsigned long out_degree; +}; + struct sock *unix_peer_get(struct sock *sk);
#define UNIX_HASH_MOD (256 - 1) @@ -62,6 +70,7 @@ struct unix_sock { struct path path; struct mutex iolock, bindlock; struct sock *peer; + struct unix_vertex *vertex; struct list_head link; unsigned long inflight; spinlock_t lock; --- a/include/net/scm.h +++ b/include/net/scm.h @@ -25,6 +25,9 @@ struct scm_fp_list { short count; short count_unix; short max; +#ifdef CONFIG_UNIX + struct list_head vertices; +#endif struct user_struct *user; struct file *fp[SCM_MAX_FD]; }; --- a/net/core/scm.c +++ b/net/core/scm.c @@ -89,6 +89,9 @@ static int scm_fp_copy(struct cmsghdr *c fpl->count_unix = 0; fpl->max = SCM_MAX_FD; fpl->user = NULL; +#if IS_ENABLED(CONFIG_UNIX) + INIT_LIST_HEAD(&fpl->vertices); +#endif } fpp = &fpl->fp[fpl->count];
@@ -372,8 +375,12 @@ struct scm_fp_list *scm_fp_dup(struct sc if (new_fpl) { for (i = 0; i < fpl->count; i++) get_file(fpl->fp[i]); + new_fpl->max = new_fpl->count; new_fpl->user = get_uid(fpl->user); +#if IS_ENABLED(CONFIG_UNIX) + INIT_LIST_HEAD(&new_fpl->vertices); +#endif } return new_fpl; } --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -955,6 +955,7 @@ static struct sock *unix_create1(struct sk->sk_destruct = unix_sock_destructor; u = unix_sk(sk); u->inflight = 0; + u->vertex = NULL; u->path.dentry = NULL; u->path.mnt = NULL; spin_lock_init(&u->lock); @@ -1756,6 +1757,9 @@ static int unix_attach_fds(struct scm_co for (i = scm->fp->count - 1; i >= 0; i--) unix_inflight(scm->fp->user, scm->fp->fp[i]);
+ if (unix_prepare_fpl(UNIXCB(skb).fp)) + return -ENOMEM; + return 0; }
@@ -1766,6 +1770,8 @@ static void unix_detach_fds(struct scm_c scm->fp = UNIXCB(skb).fp; UNIXCB(skb).fp = NULL;
+ unix_destroy_fpl(scm->fp); + for (i = scm->fp->count - 1; i >= 0; i--) unix_notinflight(scm->fp->user, scm->fp->fp[i]); } --- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -101,6 +101,44 @@ struct unix_sock *unix_get_socket(struct return NULL; }
+static void unix_free_vertices(struct scm_fp_list *fpl) +{ + struct unix_vertex *vertex, *next_vertex; + + list_for_each_entry_safe(vertex, next_vertex, &fpl->vertices, entry) { + list_del(&vertex->entry); + kfree(vertex); + } +} + +int unix_prepare_fpl(struct scm_fp_list *fpl) +{ + struct unix_vertex *vertex; + int i; + + if (!fpl->count_unix) + return 0; + + for (i = 0; i < fpl->count_unix; i++) { + vertex = kmalloc(sizeof(*vertex), GFP_KERNEL); + if (!vertex) + goto err; + + list_add(&vertex->entry, &fpl->vertices); + } + + return 0; + +err: + unix_free_vertices(fpl); + return -ENOMEM; +} + +void unix_destroy_fpl(struct scm_fp_list *fpl) +{ + unix_free_vertices(fpl); +} + DEFINE_SPINLOCK(unix_gc_lock); unsigned int unix_tot_inflight; static LIST_HEAD(gc_candidates);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit 29b64e354029cfcf1eea4d91b146c7b769305930 upstream.
As with the previous patch, we preallocate an array of struct unix_edge in skb's scm_fp_list, one entry per inflight AF_UNIX fd.
There we just preallocate the memory and do not use it immediately, because sendmsg() could fail after this point. The actual use will be in the next patch.
When we queue skb with inflight edges, we will set the inflight socket's unix_sock as unix_edge->predecessor and the receiver's unix_sock as successor, and then we will link the edge to the inflight socket's unix_vertex.edges.
Note that we set NULL to cloned scm_fp_list.edges in scm_fp_dup() so that MSG_PEEK does not change the shape of the directed graph.
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Paolo Abeni pabeni@redhat.com Link: https://lore.kernel.org/r/20240325202425.60930-3-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/af_unix.h | 6 ++++++ include/net/scm.h | 5 +++++ net/core/scm.c | 2 ++ net/unix/garbage.c | 6 ++++++ 4 files changed, 19 insertions(+)
--- a/include/net/af_unix.h +++ b/include/net/af_unix.h @@ -33,6 +33,12 @@ struct unix_vertex { unsigned long out_degree; };
+struct unix_edge { + struct unix_sock *predecessor; + struct unix_sock *successor; + struct list_head vertex_entry; +}; + struct sock *unix_peer_get(struct sock *sk);
#define UNIX_HASH_MOD (256 - 1) --- a/include/net/scm.h +++ b/include/net/scm.h @@ -21,12 +21,17 @@ struct scm_creds { kgid_t gid; };
+#ifdef CONFIG_UNIX +struct unix_edge; +#endif + struct scm_fp_list { short count; short count_unix; short max; #ifdef CONFIG_UNIX struct list_head vertices; + struct unix_edge *edges; #endif struct user_struct *user; struct file *fp[SCM_MAX_FD]; --- a/net/core/scm.c +++ b/net/core/scm.c @@ -90,6 +90,7 @@ static int scm_fp_copy(struct cmsghdr *c fpl->max = SCM_MAX_FD; fpl->user = NULL; #if IS_ENABLED(CONFIG_UNIX) + fpl->edges = NULL; INIT_LIST_HEAD(&fpl->vertices); #endif } @@ -379,6 +380,7 @@ struct scm_fp_list *scm_fp_dup(struct sc new_fpl->max = new_fpl->count; new_fpl->user = get_uid(fpl->user); #if IS_ENABLED(CONFIG_UNIX) + new_fpl->edges = NULL; INIT_LIST_HEAD(&new_fpl->vertices); #endif } --- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -127,6 +127,11 @@ int unix_prepare_fpl(struct scm_fp_list list_add(&vertex->entry, &fpl->vertices); }
+ fpl->edges = kvmalloc_array(fpl->count_unix, sizeof(*fpl->edges), + GFP_KERNEL_ACCOUNT); + if (!fpl->edges) + goto err; + return 0;
err: @@ -136,6 +141,7 @@ err:
void unix_destroy_fpl(struct scm_fp_list *fpl) { + kvfree(fpl->edges); unix_free_vertices(fpl); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit 42f298c06b30bfe0a8cbee5d38644e618699e26e upstream.
Just before queuing skb with inflight fds, we call scm_stat_add(), which is a good place to set up the preallocated struct unix_vertex and struct unix_edge in UNIXCB(skb).fp.
Then, we call unix_add_edges() and construct the directed graph as follows:
1. Set the inflight socket's unix_sock to unix_edge.predecessor.
2. Set the receiver's unix_sock to unix_edge.successor.
3. Set the preallocated vertex to inflight socket's unix_sock.vertex.
4. Link inflight socket's unix_vertex.entry to unix_unvisited_vertices.
5. Link unix_edge.vertex_entry to the inflight socket's unix_vertex.edges.
Let's say we pass the fd of AF_UNIX socket A to B and the fd of B to C. The graph looks like this:
+-------------------------+ | unix_unvisited_vertices | <-------------------------. +-------------------------+ | + | | +--------------+ +--------------+ | +--------------+ | | unix_sock A | <---. .---> | unix_sock B | <-|-. .---> | unix_sock C | | +--------------+ | | +--------------+ | | | +--------------+ | .-+ | vertex | | | .-+ | vertex | | | | | vertex | | | +--------------+ | | | +--------------+ | | | +--------------+ | | | | | | | | | | +--------------+ | | | +--------------+ | | | | '-> | unix_vertex | | | '-> | unix_vertex | | | | | +--------------+ | | +--------------+ | | | `---> | entry | +---------> | entry | +-' | | |--------------| | | |--------------| | | | edges | <-. | | | edges | <-. | | +--------------+ | | | +--------------+ | | | | | | | | | .----------------------' | | .----------------------' | | | | | | | | | +--------------+ | | | +--------------+ | | | | unix_edge | | | | | unix_edge | | | | +--------------+ | | | +--------------+ | | `-> | vertex_entry | | | `-> | vertex_entry | | | |--------------| | | |--------------| | | | predecessor | +---' | | predecessor | +---' | |--------------| | |--------------| | | successor | +-----' | successor | +-----' +--------------+ +--------------+
Henceforth, we denote such a graph as A -> B (-> C).
Now, we can express all inflight fd graphs that do not contain embryo sockets. We will support the particular case later.
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Paolo Abeni pabeni@redhat.com Link: https://lore.kernel.org/r/20240325202425.60930-4-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/af_unix.h | 2 + include/net/scm.h | 1 net/core/scm.c | 2 + net/unix/af_unix.c | 8 +++- net/unix/garbage.c | 90 +++++++++++++++++++++++++++++++++++++++++++++++++- 5 files changed, 100 insertions(+), 3 deletions(-)
--- a/include/net/af_unix.h +++ b/include/net/af_unix.h @@ -22,6 +22,8 @@ extern unsigned int unix_tot_inflight;
void unix_inflight(struct user_struct *user, struct file *fp); void unix_notinflight(struct user_struct *user, struct file *fp); +void unix_add_edges(struct scm_fp_list *fpl, struct unix_sock *receiver); +void unix_del_edges(struct scm_fp_list *fpl); int unix_prepare_fpl(struct scm_fp_list *fpl); void unix_destroy_fpl(struct scm_fp_list *fpl); void unix_gc(void); --- a/include/net/scm.h +++ b/include/net/scm.h @@ -30,6 +30,7 @@ struct scm_fp_list { short count_unix; short max; #ifdef CONFIG_UNIX + bool inflight; struct list_head vertices; struct unix_edge *edges; #endif --- a/net/core/scm.c +++ b/net/core/scm.c @@ -90,6 +90,7 @@ static int scm_fp_copy(struct cmsghdr *c fpl->max = SCM_MAX_FD; fpl->user = NULL; #if IS_ENABLED(CONFIG_UNIX) + fpl->inflight = false; fpl->edges = NULL; INIT_LIST_HEAD(&fpl->vertices); #endif @@ -380,6 +381,7 @@ struct scm_fp_list *scm_fp_dup(struct sc new_fpl->max = new_fpl->count; new_fpl->user = get_uid(fpl->user); #if IS_ENABLED(CONFIG_UNIX) + new_fpl->inflight = false; new_fpl->edges = NULL; INIT_LIST_HEAD(&new_fpl->vertices); #endif --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -1910,8 +1910,10 @@ static void scm_stat_add(struct sock *sk struct scm_fp_list *fp = UNIXCB(skb).fp; struct unix_sock *u = unix_sk(sk);
- if (unlikely(fp && fp->count)) + if (unlikely(fp && fp->count)) { atomic_add(fp->count, &u->scm_stat.nr_fds); + unix_add_edges(fp, u); + } }
static void scm_stat_del(struct sock *sk, struct sk_buff *skb) @@ -1919,8 +1921,10 @@ static void scm_stat_del(struct sock *sk struct scm_fp_list *fp = UNIXCB(skb).fp; struct unix_sock *u = unix_sk(sk);
- if (unlikely(fp && fp->count)) + if (unlikely(fp && fp->count)) { atomic_sub(fp->count, &u->scm_stat.nr_fds); + unix_del_edges(fp); + } }
/* --- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -101,6 +101,38 @@ struct unix_sock *unix_get_socket(struct return NULL; }
+static LIST_HEAD(unix_unvisited_vertices); + +static void unix_add_edge(struct scm_fp_list *fpl, struct unix_edge *edge) +{ + struct unix_vertex *vertex = edge->predecessor->vertex; + + if (!vertex) { + vertex = list_first_entry(&fpl->vertices, typeof(*vertex), entry); + vertex->out_degree = 0; + INIT_LIST_HEAD(&vertex->edges); + + list_move_tail(&vertex->entry, &unix_unvisited_vertices); + edge->predecessor->vertex = vertex; + } + + vertex->out_degree++; + list_add_tail(&edge->vertex_entry, &vertex->edges); +} + +static void unix_del_edge(struct scm_fp_list *fpl, struct unix_edge *edge) +{ + struct unix_vertex *vertex = edge->predecessor->vertex; + + list_del(&edge->vertex_entry); + vertex->out_degree--; + + if (!vertex->out_degree) { + edge->predecessor->vertex = NULL; + list_move_tail(&vertex->entry, &fpl->vertices); + } +} + static void unix_free_vertices(struct scm_fp_list *fpl) { struct unix_vertex *vertex, *next_vertex; @@ -111,6 +143,60 @@ static void unix_free_vertices(struct sc } }
+DEFINE_SPINLOCK(unix_gc_lock); + +void unix_add_edges(struct scm_fp_list *fpl, struct unix_sock *receiver) +{ + int i = 0, j = 0; + + spin_lock(&unix_gc_lock); + + if (!fpl->count_unix) + goto out; + + do { + struct unix_sock *inflight = unix_get_socket(fpl->fp[j++]); + struct unix_edge *edge; + + if (!inflight) + continue; + + edge = fpl->edges + i++; + edge->predecessor = inflight; + edge->successor = receiver; + + unix_add_edge(fpl, edge); + } while (i < fpl->count_unix); + +out: + spin_unlock(&unix_gc_lock); + + fpl->inflight = true; + + unix_free_vertices(fpl); +} + +void unix_del_edges(struct scm_fp_list *fpl) +{ + int i = 0; + + spin_lock(&unix_gc_lock); + + if (!fpl->count_unix) + goto out; + + do { + struct unix_edge *edge = fpl->edges + i++; + + unix_del_edge(fpl, edge); + } while (i < fpl->count_unix); + +out: + spin_unlock(&unix_gc_lock); + + fpl->inflight = false; +} + int unix_prepare_fpl(struct scm_fp_list *fpl) { struct unix_vertex *vertex; @@ -141,11 +227,13 @@ err:
void unix_destroy_fpl(struct scm_fp_list *fpl) { + if (fpl->inflight) + unix_del_edges(fpl); + kvfree(fpl->edges); unix_free_vertices(fpl); }
-DEFINE_SPINLOCK(unix_gc_lock); unsigned int unix_tot_inflight; static LIST_HEAD(gc_candidates); static LIST_HEAD(gc_inflight_list);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit 22c3c0c52d32f41cc38cd936ea0c93f22ced3315 upstream.
Currently, we track the number of inflight sockets in two variables. unix_tot_inflight is the total number of inflight AF_UNIX sockets on the host, and user->unix_inflight is the number of inflight fds per user.
We update them one by one in unix_inflight(), which can be done once in batch. Also, sendmsg() could fail even after unix_inflight(), then we need to acquire unix_gc_lock only to decrement the counters.
Let's bulk update the counters in unix_add_edges() and unix_del_edges(), which is called only for successfully passed fds.
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Paolo Abeni pabeni@redhat.com Link: https://lore.kernel.org/r/20240325202425.60930-5-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/unix/garbage.c | 18 +++++++----------- 1 file changed, 7 insertions(+), 11 deletions(-)
--- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -144,6 +144,7 @@ static void unix_free_vertices(struct sc }
DEFINE_SPINLOCK(unix_gc_lock); +unsigned int unix_tot_inflight;
void unix_add_edges(struct scm_fp_list *fpl, struct unix_sock *receiver) { @@ -168,7 +169,10 @@ void unix_add_edges(struct scm_fp_list * unix_add_edge(fpl, edge); } while (i < fpl->count_unix);
+ WRITE_ONCE(unix_tot_inflight, unix_tot_inflight + fpl->count_unix); out: + WRITE_ONCE(fpl->user->unix_inflight, fpl->user->unix_inflight + fpl->count); + spin_unlock(&unix_gc_lock);
fpl->inflight = true; @@ -191,7 +195,10 @@ void unix_del_edges(struct scm_fp_list * unix_del_edge(fpl, edge); } while (i < fpl->count_unix);
+ WRITE_ONCE(unix_tot_inflight, unix_tot_inflight - fpl->count_unix); out: + WRITE_ONCE(fpl->user->unix_inflight, fpl->user->unix_inflight - fpl->count); + spin_unlock(&unix_gc_lock);
fpl->inflight = false; @@ -234,7 +241,6 @@ void unix_destroy_fpl(struct scm_fp_list unix_free_vertices(fpl); }
-unsigned int unix_tot_inflight; static LIST_HEAD(gc_candidates); static LIST_HEAD(gc_inflight_list);
@@ -255,13 +261,8 @@ void unix_inflight(struct user_struct *u WARN_ON_ONCE(list_empty(&u->link)); } u->inflight++; - - /* Paired with READ_ONCE() in wait_for_unix_gc() */ - WRITE_ONCE(unix_tot_inflight, unix_tot_inflight + 1); }
- WRITE_ONCE(user->unix_inflight, user->unix_inflight + 1); - spin_unlock(&unix_gc_lock); }
@@ -278,13 +279,8 @@ void unix_notinflight(struct user_struct u->inflight--; if (!u->inflight) list_del_init(&u->link); - - /* Paired with READ_ONCE() in wait_for_unix_gc() */ - WRITE_ONCE(unix_tot_inflight, unix_tot_inflight - 1); }
- WRITE_ONCE(user->unix_inflight, user->unix_inflight - 1); - spin_unlock(&unix_gc_lock); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit 6ba76fd2848e107594ea4f03b737230f74bc23ea upstream.
The new GC will use a depth first search graph algorithm to find cyclic references. The algorithm visits every vertex exactly once.
Here, we implement the DFS part without recursion so that no one can abuse it.
unix_walk_scc() marks every vertex unvisited by initialising its index as UNIX_VERTEX_INDEX_UNVISITED, then iterates over the inflight vertices in unix_unvisited_vertices and calls __unix_walk_scc() to start DFS from an arbitrary vertex.
__unix_walk_scc() iterates all edges starting from the vertex and explores the neighbour vertices with DFS using edge_stack.
After visiting all neighbours, __unix_walk_scc() moves the visited vertex to unix_visited_vertices so that unix_walk_scc() will not restart DFS from the visited vertex.
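As a rough illustration of the idea, here is a minimal, self-contained userspace sketch (not the kernel code, which keeps a stack of edges so it can also backtrack for the SCC bookkeeping added later): a depth-first traversal driven entirely by an explicit stack, so the depth of the graph never translates into recursion depth.

#include <stdbool.h>
#include <stdlib.h>

struct vertex {
        int nr_edges;
        const int *edges;       /* indices of successor vertices */
        bool pushed;            /* already scheduled for a visit */
};

/* Visit every vertex reachable from 'start' without recursion. */
void dfs(struct vertex *graph, int nr_vertices, int start)
{
        int *stack = malloc(sizeof(*stack) * nr_vertices);
        int top = 0;

        graph[start].pushed = true;
        stack[top++] = start;

        while (top) {
                int v = stack[--top];

                /* "visit" vertex v here */

                for (int i = 0; i < graph[v].nr_edges; i++) {
                        int next = graph[v].edges[i];

                        /* Each vertex is pushed at most once. */
                        if (!graph[next].pushed) {
                                graph[next].pushed = true;
                                stack[top++] = next;
                        }
                }
        }

        free(stack);
}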
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Paolo Abeni pabeni@redhat.com Link: https://lore.kernel.org/r/20240325202425.60930-6-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/af_unix.h | 2 + net/unix/garbage.c | 74 ++++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 76 insertions(+)
--- a/include/net/af_unix.h +++ b/include/net/af_unix.h @@ -33,12 +33,14 @@ struct unix_vertex { struct list_head edges; struct list_head entry; unsigned long out_degree; + unsigned long index; };
struct unix_edge { struct unix_sock *predecessor; struct unix_sock *successor; struct list_head vertex_entry; + struct list_head stack_entry; };
struct sock *unix_peer_get(struct sock *sk); --- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -103,6 +103,11 @@ struct unix_sock *unix_get_socket(struct
static LIST_HEAD(unix_unvisited_vertices);
+enum unix_vertex_index { + UNIX_VERTEX_INDEX_UNVISITED, + UNIX_VERTEX_INDEX_START, +}; + static void unix_add_edge(struct scm_fp_list *fpl, struct unix_edge *edge) { struct unix_vertex *vertex = edge->predecessor->vertex; @@ -241,6 +246,73 @@ void unix_destroy_fpl(struct scm_fp_list unix_free_vertices(fpl); }
+static LIST_HEAD(unix_visited_vertices); + +static void __unix_walk_scc(struct unix_vertex *vertex) +{ + unsigned long index = UNIX_VERTEX_INDEX_START; + struct unix_edge *edge; + LIST_HEAD(edge_stack); + +next_vertex: + vertex->index = index; + index++; + + /* Explore neighbour vertices (receivers of the current vertex's fd). */ + list_for_each_entry(edge, &vertex->edges, vertex_entry) { + struct unix_vertex *next_vertex = edge->successor->vertex; + + if (!next_vertex) + continue; + + if (next_vertex->index == UNIX_VERTEX_INDEX_UNVISITED) { + /* Iterative deepening depth first search + * + * 1. Push a forward edge to edge_stack and set + * the successor to vertex for the next iteration. + */ + list_add(&edge->stack_entry, &edge_stack); + + vertex = next_vertex; + goto next_vertex; + + /* 2. Pop the edge directed to the current vertex + * and restore the ancestor for backtracking. + */ +prev_vertex: + edge = list_first_entry(&edge_stack, typeof(*edge), stack_entry); + list_del_init(&edge->stack_entry); + + vertex = edge->predecessor->vertex; + } + } + + /* Don't restart DFS from this vertex in unix_walk_scc(). */ + list_move_tail(&vertex->entry, &unix_visited_vertices); + + /* Need backtracking ? */ + if (!list_empty(&edge_stack)) + goto prev_vertex; +} + +static void unix_walk_scc(void) +{ + struct unix_vertex *vertex; + + list_for_each_entry(vertex, &unix_unvisited_vertices, entry) + vertex->index = UNIX_VERTEX_INDEX_UNVISITED; + + /* Visit every vertex exactly once. + * __unix_walk_scc() moves visited vertices to unix_visited_vertices. + */ + while (!list_empty(&unix_unvisited_vertices)) { + vertex = list_first_entry(&unix_unvisited_vertices, typeof(*vertex), entry); + __unix_walk_scc(vertex); + } + + list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices); +} + static LIST_HEAD(gc_candidates); static LIST_HEAD(gc_inflight_list);
@@ -388,6 +460,8 @@ static void __unix_gc(struct work_struct
spin_lock(&unix_gc_lock);
+ unix_walk_scc(); + /* First, select candidates for garbage collection. Only * in-flight sockets are considered, and from those only ones * which don't have any external reference.
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit 3484f063172dd88776b062046d721d7c2ae1af7c upstream.
In the new GC, we use a simple graph algorithm, Tarjan's Strongly Connected Components (SCC) algorithm, to find cyclic references.
The algorithm visits every vertex exactly once using depth-first search (DFS).
DFS starts by pushing an input vertex to a stack and assigning it a unique number. Two fields, index and lowlink, are initialised with the number, but lowlink could be updated later during DFS.
If a vertex has an edge to an unvisited inflight vertex, we visit it and do the same processing. So, we will have vertices in the stack in the order they appear and number them consecutively in the same order.
If a vertex has a back-edge to a visited vertex in the stack, we update the predecessor's lowlink with the successor's index.
After iterating edges from the vertex, we check if its index equals its lowlink.
If the lowlink is different from the index, it shows there was a back-edge. Then, we go backtracking and propagate the lowlink to its predecessor and resume the previous edge iteration from the next edge.
If the lowlink is the same as the index, we pop vertices before and including the vertex from the stack. Then, that set of vertices is an SCC, possibly forming a cycle. At the same time, we move the vertices to unix_visited_vertices.
When we finish the algorithm, all vertices in each SCC will be linked via unix_vertex.scc_entry.
Let's take an example. We have a graph including five inflight vertices (F is not inflight):
A -> B -> C -> D -> E (-> F)
     ^         |
     `---------'
Suppose that we start DFS from C. We will visit C, D, and B first and initialise their index and lowlink. Then, the stack looks like this:
B = (3, 3) (index, lowlink)
D = (2, 2)
C = (1, 1)
When checking B's edge to C, we update B's lowlink with C's index and propagate it to D.
B = (3, 1) (index, lowlink)
D = (2, 1)
C = (1, 1)
Next, we visit E, which has no edge to an inflight vertex.
E = (4, 4) (index, lowlink)
B = (3, 1)
D = (2, 1)
C = (1, 1)
When we leave from E, its index and lowlink are the same, so we pop E from the stack as a single-vertex SCC. Next, we leave from B and D but do nothing because their lowlinks are different from their indices.
B = (3, 1) (index, lowlink)
D = (2, 1)
C = (1, 1)
Then, we leave from C, whose index and lowlink are the same, so we pop B, D and C as SCC.
Last, we do DFS for the rest of vertices, A, which is also a single-vertex SCC.
Finally, each unix_vertex.scc_entry is linked as follows:
A -.    B -> C -> D      E -.
^  |    ^         |      ^  |
`--'    `---------'      `--'
We use SCC later to decide whether we can garbage-collect the sockets.
Note that we still cannot detect SCC properly if an edge points to an embryo socket. The following two patches will sort it out.
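To make the walkthrough above concrete, here is a minimal, self-contained userspace sketch of Tarjan's algorithm (recursive for brevity; the kernel code above is the iterative equivalent), run on the example graph A -> B -> C -> D -> E with the back-edge D -> B (F is omitted since it is not inflight):

#include <stdbool.h>
#include <stdio.h>

#define NV 5                            /* A=0, B=1, C=2, D=3, E=4 */

static const int *adj[NV];              /* successor lists */
static int adj_len[NV];

static int counter = 1;                 /* 0 means "unvisited" */
static int idx[NV], low[NV];
static bool on_stack[NV];
static int stack[NV], top;

static int min(int a, int b) { return a < b ? a : b; }

static void strongconnect(int v)
{
        idx[v] = low[v] = counter++;
        stack[top++] = v;
        on_stack[v] = true;

        for (int i = 0; i < adj_len[v]; i++) {
                int w = adj[v][i];

                if (!idx[w]) {                  /* unvisited successor: recurse */
                        strongconnect(w);
                        low[v] = min(low[v], low[w]);
                } else if (on_stack[w]) {       /* back/cross edge within the stack */
                        low[v] = min(low[v], idx[w]);
                }
        }

        if (low[v] == idx[v]) {                 /* v is the root of an SCC */
                int w;

                printf("SCC:");
                do {
                        w = stack[--top];
                        on_stack[w] = false;
                        printf(" %c", 'A' + w);
                } while (w != v);
                printf("\n");
        }
}

int main(void)
{
        /* A -> B -> C -> D -> E, plus the back-edge D -> B */
        static const int a[] = {1}, b[] = {2}, c[] = {3}, d[] = {4, 1};

        adj[0] = a; adj_len[0] = 1;
        adj[1] = b; adj_len[1] = 1;
        adj[2] = c; adj_len[2] = 1;
        adj[3] = d; adj_len[3] = 2;
        /* E has no outgoing edges within the graph */

        for (int v = 0; v < NV; v++)
                if (!idx[v])
                        strongconnect(v);

        return 0;       /* prints: "SCC: E", "SCC: D C B", "SCC: A" */
}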
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Paolo Abeni pabeni@redhat.com Link: https://lore.kernel.org/r/20240325202425.60930-7-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/af_unix.h | 3 +++ net/unix/garbage.c | 46 ++++++++++++++++++++++++++++++++++++++++++++-- 2 files changed, 47 insertions(+), 2 deletions(-)
--- a/include/net/af_unix.h +++ b/include/net/af_unix.h @@ -32,8 +32,11 @@ void wait_for_unix_gc(struct scm_fp_list struct unix_vertex { struct list_head edges; struct list_head entry; + struct list_head scc_entry; unsigned long out_degree; unsigned long index; + unsigned long lowlink; + bool on_stack; };
struct unix_edge { --- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -251,11 +251,19 @@ static LIST_HEAD(unix_visited_vertices); static void __unix_walk_scc(struct unix_vertex *vertex) { unsigned long index = UNIX_VERTEX_INDEX_START; + LIST_HEAD(vertex_stack); struct unix_edge *edge; LIST_HEAD(edge_stack);
next_vertex: + /* Push vertex to vertex_stack. + * The vertex will be popped when finalising SCC later. + */ + vertex->on_stack = true; + list_add(&vertex->scc_entry, &vertex_stack); + vertex->index = index; + vertex->lowlink = index; index++;
/* Explore neighbour vertices (receivers of the current vertex's fd). */ @@ -283,12 +291,46 @@ prev_vertex: edge = list_first_entry(&edge_stack, typeof(*edge), stack_entry); list_del_init(&edge->stack_entry);
+ next_vertex = vertex; vertex = edge->predecessor->vertex; + + /* If the successor has a smaller lowlink, two vertices + * are in the same SCC, so propagate the smaller lowlink + * to skip SCC finalisation. + */ + vertex->lowlink = min(vertex->lowlink, next_vertex->lowlink); + } else if (next_vertex->on_stack) { + /* Loop detected by a back/cross edge. + * + * The successor is on vertex_stack, so two vertices are + * in the same SCC. If the successor has a smaller index, + * propagate it to skip SCC finalisation. + */ + vertex->lowlink = min(vertex->lowlink, next_vertex->index); + } else { + /* The successor was already grouped as another SCC */ } }
- /* Don't restart DFS from this vertex in unix_walk_scc(). */ - list_move_tail(&vertex->entry, &unix_visited_vertices); + if (vertex->index == vertex->lowlink) { + struct list_head scc; + + /* SCC finalised. + * + * If the lowlink was not updated, all the vertices above on + * vertex_stack are in the same SCC. Group them using scc_entry. + */ + __list_cut_position(&scc, &vertex_stack, &vertex->scc_entry); + + list_for_each_entry_reverse(vertex, &scc, scc_entry) { + /* Don't restart DFS from this vertex in unix_walk_scc(). */ + list_move_tail(&vertex->entry, &unix_visited_vertices); + + vertex->on_stack = false; + } + + list_del(&scc); + }
/* Need backtracking ? */ if (!list_empty(&edge_stack))
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit aed6ecef55d70de3762ce41c561b7f547dbaf107 upstream.
This is a prep patch for the following change, where we need to fetch the listening socket from the successor embryo socket during GC.
We add a new field to struct unix_sock to save a pointer to a listening socket.
We set it when connect() creates a new socket, and clear it when accept() is called.
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Paolo Abeni pabeni@redhat.com Link: https://lore.kernel.org/r/20240325202425.60930-8-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/af_unix.h | 1 + net/unix/af_unix.c | 5 ++++- 2 files changed, 5 insertions(+), 1 deletion(-)
--- a/include/net/af_unix.h +++ b/include/net/af_unix.h @@ -83,6 +83,7 @@ struct unix_sock { struct path path; struct mutex iolock, bindlock; struct sock *peer; + struct sock *listener; struct unix_vertex *vertex; struct list_head link; unsigned long inflight; --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -954,6 +954,7 @@ static struct sock *unix_create1(struct sk->sk_max_ack_backlog = READ_ONCE(net->unx.sysctl_max_dgram_qlen); sk->sk_destruct = unix_sock_destructor; u = unix_sk(sk); + u->listener = NULL; u->inflight = 0; u->vertex = NULL; u->path.dentry = NULL; @@ -1558,6 +1559,7 @@ restart: newsk->sk_type = sk->sk_type; init_peercred(newsk); newu = unix_sk(newsk); + newu->listener = other; RCU_INIT_POINTER(newsk->sk_wq, &newu->peer_wq); otheru = unix_sk(other);
@@ -1651,8 +1653,8 @@ static int unix_accept(struct socket *so bool kern) { struct sock *sk = sock->sk; - struct sock *tsk; struct sk_buff *skb; + struct sock *tsk; int err;
err = -EOPNOTSUPP; @@ -1677,6 +1679,7 @@ static int unix_accept(struct socket *so }
tsk = skb->sk; + unix_sk(tsk)->listener = NULL; skb_free_datagram(sk, skb); wake_up_interruptible(&unix_sk(sk)->peer_wait);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit dcf70df2048d27c5d186f013f101a4aefd63aa41 upstream.
To garbage collect inflight AF_UNIX sockets, we must define the cyclic reference appropriately. This is a bit tricky if the loop consists of embryo sockets.
Suppose that the fd of AF_UNIX socket A is passed to D and the fd of B to C, and that C and D are embryo sockets of A and B, respectively. It may appear that there are two separate graphs, A (-> D) and B (-> C), but this is not correct.
  A --.      .-- B
        X
  C <-'      `-> D
Now, D holds A's refcount, and C has B's refcount, so unix_release() will never be called for A and B when we close() them. However, no one can call close() for D and C to free skbs holding refcounts of A and B because C/D is in A/B's receive queue, which should have been purged by unix_release() for A and B.
So, here's another type of cyclic reference. When a fd of an AF_UNIX socket is passed to an embryo socket, the reference is indirectly held by its parent listening socket.
  .-> A                            .-> B
  |   `- sk_receive_queue          |   `- sk_receive_queue
  |      `- skb                    |      `- skb
  |         `- sk == C             |         `- sk == D
  |            `- sk_receive_queue |            `- sk_receive_queue
  |               `- skb +---------'               `- skb +-.
  |                                                         |
  `---------------------------------------------------------'
Technically, the graph must be denoted as A <-> B instead of A (-> D) and B (-> C) to find such a cyclic reference without touching each socket's receive queue.
  .-> A --.      .-- B <-.
  |        X             |  ==  A <-> B
  `-- C <-'      `-> D --'
We apply this fixup during GC by fetching the real successor by unix_edge_successor().
When we call accept(), we clear unix_sock.listener under unix_gc_lock so as not to confuse GC.
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Paolo Abeni pabeni@redhat.com Link: https://lore.kernel.org/r/20240325202425.60930-9-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/af_unix.h | 1 + net/unix/af_unix.c | 2 +- net/unix/garbage.c | 20 +++++++++++++++++++- 3 files changed, 21 insertions(+), 2 deletions(-)
--- a/include/net/af_unix.h +++ b/include/net/af_unix.h @@ -24,6 +24,7 @@ void unix_inflight(struct user_struct *u void unix_notinflight(struct user_struct *user, struct file *fp); void unix_add_edges(struct scm_fp_list *fpl, struct unix_sock *receiver); void unix_del_edges(struct scm_fp_list *fpl); +void unix_update_edges(struct unix_sock *receiver); int unix_prepare_fpl(struct scm_fp_list *fpl); void unix_destroy_fpl(struct scm_fp_list *fpl); void unix_gc(void); --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -1679,7 +1679,7 @@ static int unix_accept(struct socket *so }
tsk = skb->sk; - unix_sk(tsk)->listener = NULL; + unix_update_edges(unix_sk(tsk)); skb_free_datagram(sk, skb); wake_up_interruptible(&unix_sk(sk)->peer_wait);
--- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -101,6 +101,17 @@ struct unix_sock *unix_get_socket(struct return NULL; }
+static struct unix_vertex *unix_edge_successor(struct unix_edge *edge) +{ + /* If an embryo socket has a fd, + * the listener indirectly holds the fd's refcnt. + */ + if (edge->successor->listener) + return unix_sk(edge->successor->listener)->vertex; + + return edge->successor->vertex; +} + static LIST_HEAD(unix_unvisited_vertices);
enum unix_vertex_index { @@ -209,6 +220,13 @@ out: fpl->inflight = false; }
+void unix_update_edges(struct unix_sock *receiver) +{ + spin_lock(&unix_gc_lock); + receiver->listener = NULL; + spin_unlock(&unix_gc_lock); +} + int unix_prepare_fpl(struct scm_fp_list *fpl) { struct unix_vertex *vertex; @@ -268,7 +286,7 @@ next_vertex:
/* Explore neighbour vertices (receivers of the current vertex's fd). */ list_for_each_entry(edge, &vertex->edges, vertex_entry) { - struct unix_vertex *next_vertex = edge->successor->vertex; + struct unix_vertex *next_vertex = unix_edge_successor(edge);
if (!next_vertex) continue;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit ba31b4a4e1018f5844c6eb31734976e2184f2f9a upstream.
Before starting Tarjan's algorithm, we need to mark all vertices as unvisited. We can save this O(n) setup by reserving two special indices (0, 1) and using two variables.
The first time we link a vertex to unix_unvisited_vertices, we set unix_vertex_unvisited_index to index.
During DFS, we can see that the index of unvisited vertices is the same as unix_vertex_unvisited_index.
When we finalise SCC later, we set unix_vertex_grouped_index to each vertex's index.
Then, for a visited vertex, we know (i) that it is on the stack if its index is >= 2, and (ii) that it is not on the stack and belongs to a different SCC if its index is unix_vertex_grouped_index.
After the whole algorithm, all indices of vertices are set as unix_vertex_grouped_index.
Next time we start DFS, we know that all unvisited vertices have unix_vertex_grouped_index, and we can use unix_vertex_unvisited_index as the not-on-stack marker.
To use the same variable in __unix_walk_scc(), we can swap unix_vertex_(grouped|unvisited)_index at the end of Tarjan's algorithm.
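The trick is generic; below is a minimal userspace C sketch (made-up names, not the kernel code) of how two alternating marker values remove the O(n) reset before each pass:

/* marker_swap_demo.c: standalone sketch of the two-marker trick. */
#include <stdio.h>

#define MARK1		0UL
#define MARK2		1UL
#define INDEX_START	2UL

static unsigned long unvisited_marker = MARK1;
static unsigned long grouped_marker = MARK2;

struct item {
	unsigned long index;
};

static void walk(struct item *items, int n)
{
	unsigned long next_index = INDEX_START;
	unsigned long tmp;
	int i;

	for (i = 0; i < n; i++) {
		if (items[i].index != unvisited_marker)
			continue;			/* already handled */

		items[i].index = next_index++;		/* "visit": index >= 2 */
		/* ... real work would happen here ... */
		items[i].index = grouped_marker;	/* done / off-stack */
	}

	/* Every item now carries grouped_marker, so swapping the two
	 * markers makes them all "unvisited" again for the next pass,
	 * with no O(n) re-initialisation loop.
	 */
	tmp = unvisited_marker;
	unvisited_marker = grouped_marker;
	grouped_marker = tmp;
}

int main(void)
{
	struct item items[4] = { { MARK1 }, { MARK1 }, { MARK1 }, { MARK1 } };

	walk(items, 4);		/* first pass */
	walk(items, 4);		/* second pass needs no reset */
	printf("item 0 ends at marker %lu\n", items[0].index);
	return 0;
}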
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Paolo Abeni pabeni@redhat.com Link: https://lore.kernel.org/r/20240325202425.60930-10-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/af_unix.h | 1 - net/unix/garbage.c | 26 +++++++++++++++----------- 2 files changed, 15 insertions(+), 12 deletions(-)
--- a/include/net/af_unix.h +++ b/include/net/af_unix.h @@ -37,7 +37,6 @@ struct unix_vertex { unsigned long out_degree; unsigned long index; unsigned long lowlink; - bool on_stack; };
struct unix_edge { --- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -115,16 +115,20 @@ static struct unix_vertex *unix_edge_suc static LIST_HEAD(unix_unvisited_vertices);
enum unix_vertex_index { - UNIX_VERTEX_INDEX_UNVISITED, + UNIX_VERTEX_INDEX_MARK1, + UNIX_VERTEX_INDEX_MARK2, UNIX_VERTEX_INDEX_START, };
+static unsigned long unix_vertex_unvisited_index = UNIX_VERTEX_INDEX_MARK1; + static void unix_add_edge(struct scm_fp_list *fpl, struct unix_edge *edge) { struct unix_vertex *vertex = edge->predecessor->vertex;
if (!vertex) { vertex = list_first_entry(&fpl->vertices, typeof(*vertex), entry); + vertex->index = unix_vertex_unvisited_index; vertex->out_degree = 0; INIT_LIST_HEAD(&vertex->edges);
@@ -265,6 +269,7 @@ void unix_destroy_fpl(struct scm_fp_list }
static LIST_HEAD(unix_visited_vertices); +static unsigned long unix_vertex_grouped_index = UNIX_VERTEX_INDEX_MARK2;
static void __unix_walk_scc(struct unix_vertex *vertex) { @@ -274,10 +279,10 @@ static void __unix_walk_scc(struct unix_ LIST_HEAD(edge_stack);
next_vertex: - /* Push vertex to vertex_stack. + /* Push vertex to vertex_stack and mark it as on-stack + * (index >= UNIX_VERTEX_INDEX_START). * The vertex will be popped when finalising SCC later. */ - vertex->on_stack = true; list_add(&vertex->scc_entry, &vertex_stack);
vertex->index = index; @@ -291,7 +296,7 @@ next_vertex: if (!next_vertex) continue;
- if (next_vertex->index == UNIX_VERTEX_INDEX_UNVISITED) { + if (next_vertex->index == unix_vertex_unvisited_index) { /* Iterative deepening depth first search * * 1. Push a forward edge to edge_stack and set @@ -317,7 +322,7 @@ prev_vertex: * to skip SCC finalisation. */ vertex->lowlink = min(vertex->lowlink, next_vertex->lowlink); - } else if (next_vertex->on_stack) { + } else if (next_vertex->index != unix_vertex_grouped_index) { /* Loop detected by a back/cross edge. * * The successor is on vertex_stack, so two vertices are @@ -344,7 +349,8 @@ prev_vertex: /* Don't restart DFS from this vertex in unix_walk_scc(). */ list_move_tail(&vertex->entry, &unix_visited_vertices);
- vertex->on_stack = false; + /* Mark vertex as off-stack. */ + vertex->index = unix_vertex_grouped_index; }
list_del(&scc); @@ -357,20 +363,18 @@ prev_vertex:
static void unix_walk_scc(void) { - struct unix_vertex *vertex; - - list_for_each_entry(vertex, &unix_unvisited_vertices, entry) - vertex->index = UNIX_VERTEX_INDEX_UNVISITED; - /* Visit every vertex exactly once. * __unix_walk_scc() moves visited vertices to unix_visited_vertices. */ while (!list_empty(&unix_unvisited_vertices)) { + struct unix_vertex *vertex; + vertex = list_first_entry(&unix_unvisited_vertices, typeof(*vertex), entry); __unix_walk_scc(vertex); }
list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices); + swap(unix_vertex_unvisited_index, unix_vertex_grouped_index); }
static LIST_HEAD(gc_candidates);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit 77e5593aebba823bcbcf2c4b58b07efcd63933b8 upstream.
We do not need to run GC if there is no possible cyclic reference. We use unix_graph_maybe_cyclic to decide if we should run GC.
If a fd of an AF_UNIX socket is passed to an already inflight AF_UNIX socket, they could form a cyclic reference. In that case, we set unix_graph_maybe_cyclic to true and later run Tarjan's algorithm to group the sockets into SCCs.
Once we run Tarjan's algorithm, we are 100% sure whether cyclic references exist or not. If there is no cycle, we set unix_graph_maybe_cyclic to false and can skip the entire garbage collection next time.
When finalising an SCC, we set unix_graph_maybe_cyclic to true if the SCC consists of multiple vertices.
Even if the SCC is a single vertex, a cycle might still exist via self-fd passing. Since this corner case is rare, we detect it by checking all edges of the vertex and then set unix_graph_maybe_cyclic to true.
With this change, __unix_gc() is just a spin_lock() dance in the normal usage.
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Paolo Abeni pabeni@redhat.com Link: https://lore.kernel.org/r/20240325202425.60930-11-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/unix/garbage.c | 48 +++++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 47 insertions(+), 1 deletion(-)
--- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -112,6 +112,19 @@ static struct unix_vertex *unix_edge_suc return edge->successor->vertex; }
+static bool unix_graph_maybe_cyclic; + +static void unix_update_graph(struct unix_vertex *vertex) +{ + /* If the receiver socket is not inflight, no cyclic + * reference could be formed. + */ + if (!vertex) + return; + + unix_graph_maybe_cyclic = true; +} + static LIST_HEAD(unix_unvisited_vertices);
enum unix_vertex_index { @@ -138,12 +151,16 @@ static void unix_add_edge(struct scm_fp_
vertex->out_degree++; list_add_tail(&edge->vertex_entry, &vertex->edges); + + unix_update_graph(unix_edge_successor(edge)); }
static void unix_del_edge(struct scm_fp_list *fpl, struct unix_edge *edge) { struct unix_vertex *vertex = edge->predecessor->vertex;
+ unix_update_graph(unix_edge_successor(edge)); + list_del(&edge->vertex_entry); vertex->out_degree--;
@@ -227,6 +244,7 @@ out: void unix_update_edges(struct unix_sock *receiver) { spin_lock(&unix_gc_lock); + unix_update_graph(unix_sk(receiver->listener)->vertex); receiver->listener = NULL; spin_unlock(&unix_gc_lock); } @@ -268,6 +286,26 @@ void unix_destroy_fpl(struct scm_fp_list unix_free_vertices(fpl); }
+static bool unix_scc_cyclic(struct list_head *scc) +{ + struct unix_vertex *vertex; + struct unix_edge *edge; + + /* SCC containing multiple vertices ? */ + if (!list_is_singular(scc)) + return true; + + vertex = list_first_entry(scc, typeof(*vertex), scc_entry); + + /* Self-reference or a embryo-listener circle ? */ + list_for_each_entry(edge, &vertex->edges, vertex_entry) { + if (unix_edge_successor(edge) == vertex) + return true; + } + + return false; +} + static LIST_HEAD(unix_visited_vertices); static unsigned long unix_vertex_grouped_index = UNIX_VERTEX_INDEX_MARK2;
@@ -353,6 +391,9 @@ prev_vertex: vertex->index = unix_vertex_grouped_index; }
+ if (!unix_graph_maybe_cyclic) + unix_graph_maybe_cyclic = unix_scc_cyclic(&scc); + list_del(&scc); }
@@ -363,6 +404,8 @@ prev_vertex:
static void unix_walk_scc(void) { + unix_graph_maybe_cyclic = false; + /* Visit every vertex exactly once. * __unix_walk_scc() moves visited vertices to unix_visited_vertices. */ @@ -524,6 +567,9 @@ static void __unix_gc(struct work_struct
spin_lock(&unix_gc_lock);
+ if (!unix_graph_maybe_cyclic) + goto skip_gc; + unix_walk_scc();
/* First, select candidates for garbage collection. Only @@ -633,7 +679,7 @@ static void __unix_gc(struct work_struct
/* All candidates should have been detached by now. */ WARN_ON_ONCE(!list_empty(&gc_candidates)); - +skip_gc: /* Paired with READ_ONCE() in wait_for_unix_gc(). */ WRITE_ONCE(gc_in_progress, false);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit ad081928a8b0f57f269df999a28087fce6f2b6ce upstream.
Once a cyclic reference is formed, we need to run GC to check if there is a dead SCC.
However, we do not need to run Tarjan's algorithm if we know that the shape of the inflight graph has not been changed.
If an edge is added/updated/deleted and the edge's successor is inflight, we set unix_graph_grouped to false, which means we need to re-classify the SCCs.
Once we finalise the SCCs, we set unix_graph_grouped to true.
While unix_graph_grouped is true, we can iterate the grouped SCC using vertex->scc_entry in unix_walk_scc_fast().
The uses of list_add() and list_for_each_entry_reverse() may look odd, but they keep the vertex order consistent and make tests easier to write.
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Paolo Abeni pabeni@redhat.com Link: https://lore.kernel.org/r/20240325202425.60930-12-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/unix/garbage.c | 28 +++++++++++++++++++++++++++- 1 file changed, 27 insertions(+), 1 deletion(-)
--- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -113,6 +113,7 @@ static struct unix_vertex *unix_edge_suc }
static bool unix_graph_maybe_cyclic; +static bool unix_graph_grouped;
static void unix_update_graph(struct unix_vertex *vertex) { @@ -123,6 +124,7 @@ static void unix_update_graph(struct uni return;
unix_graph_maybe_cyclic = true; + unix_graph_grouped = false; }
static LIST_HEAD(unix_unvisited_vertices); @@ -144,6 +146,7 @@ static void unix_add_edge(struct scm_fp_ vertex->index = unix_vertex_unvisited_index; vertex->out_degree = 0; INIT_LIST_HEAD(&vertex->edges); + INIT_LIST_HEAD(&vertex->scc_entry);
list_move_tail(&vertex->entry, &unix_unvisited_vertices); edge->predecessor->vertex = vertex; @@ -418,6 +421,26 @@ static void unix_walk_scc(void)
list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices); swap(unix_vertex_unvisited_index, unix_vertex_grouped_index); + + unix_graph_grouped = true; +} + +static void unix_walk_scc_fast(void) +{ + while (!list_empty(&unix_unvisited_vertices)) { + struct unix_vertex *vertex; + struct list_head scc; + + vertex = list_first_entry(&unix_unvisited_vertices, typeof(*vertex), entry); + list_add(&scc, &vertex->scc_entry); + + list_for_each_entry_reverse(vertex, &scc, scc_entry) + list_move_tail(&vertex->entry, &unix_visited_vertices); + + list_del(&scc); + } + + list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices); }
static LIST_HEAD(gc_candidates); @@ -570,7 +593,10 @@ static void __unix_gc(struct work_struct if (!unix_graph_maybe_cyclic) goto skip_gc;
- unix_walk_scc(); + if (unix_graph_grouped) + unix_walk_scc_fast(); + else + unix_walk_scc();
/* First, select candidates for garbage collection. Only * in-flight sockets are considered, and from those only ones
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit bfdb01283ee8f2f3089656c3ff8f62bb072dabb2 upstream.
In Tarjan's algorithm, the lowlink is defined as the smallest index of a vertex reachable with at most one back-edge within the SCC. This is not useful for a cross-edge.
If we start traversing from A in the following graph, the final lowlink of D is 3. The cross-edge here is one between D and C.
  A -> B -> D      D = (4, 3)  (index, lowlink)
  ^    |    |      C = (3, 1)
  |    V    |      B = (2, 1)
  `--- C <--'      A = (1, 1)
This is because the lowlink of D is updated with the index of C.
In the following patch, we detect a dead SCC by checking two conditions for each vertex.
  1) vertex has no edge directed to another SCC (no bridge)
  2) vertex's out_degree is the same as the refcount of its file
If 1) is false, there is a receiver of all fds of the SCC and its ancestor SCC.
To evaluate 1), we need to assign a unique index to each SCC and assign it to all vertices in the SCC.
This patch changes the lowlink update logic for cross-edge so that in the example above, the lowlink of D is updated with the lowlink of C.
  A -> B -> D      D = (4, 1)  (index, lowlink)
  ^    |    |      C = (3, 1)
  |    V    |      B = (2, 1)
  `--- C <--'      A = (1, 1)
Then, all vertices in the same SCC have the same lowlink, and we can quickly find the bridge connecting to a different SCC, if one exists.
However, it is no longer called lowlink, so we rename it to scc_index. (It's sometimes called lowpoint.)
Also, we add a global variable to hold the last index used in DFS so that we do not reset the initial index in each DFS.
This patch could be squashed into the SCC detection patch but is split out deliberately for anyone wondering why lowlink is not used as in the original Tarjan's algorithm and many reference implementations.
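For illustration only (recursive, made-up names, not the kernel code), a standalone userspace C sketch that runs the DFS on the four-vertex graph above twice, once with the classic lowlink rule and once with the scc_index rule, and prints D's final pair:

/* scc_index_demo.c: contrast the lowlink and scc_index update rules on
 * the example graph: A -> B -> D, B -> C, C -> A, D -> C.
 */
#include <stdio.h>

#define NV 4
enum { A, B, C, D };

static const int edges[NV][2] = {	/* adjacency lists, -1 = none */
	[A] = {  B, -1 },
	[B] = {  C,  D },		/* visit C before D, as in the example */
	[C] = {  A, -1 },
	[D] = {  C, -1 },
};

static int idx[NV], val[NV], on_stack[NV];
static int next_index;

static int min(int a, int b)
{
	return a < b ? a : b;
}

static void dfs(int v, int use_scc_index)
{
	int i;

	idx[v] = val[v] = next_index++;
	on_stack[v] = 1;

	for (i = 0; i < 2; i++) {
		int w = edges[v][i];

		if (w < 0)
			continue;
		if (!idx[w]) {
			dfs(w, use_scc_index);
			val[v] = min(val[v], val[w]);
		} else if (on_stack[w]) {
			/* back/cross edge: the only line that differs */
			val[v] = min(val[v], use_scc_index ? val[w] : idx[w]);
		}
	}
	/* SCC finalisation omitted: nothing is popped before the root A
	 * in this graph, so on_stack[] stays valid throughout.
	 */
}

static void run(int use_scc_index)
{
	int v;

	for (v = 0; v < NV; v++)
		idx[v] = val[v] = on_stack[v] = 0;
	next_index = 1;

	dfs(A, use_scc_index);
	printf("%s: D = (%d, %d)\n",
	       use_scc_index ? "scc_index" : "lowlink  ", idx[D], val[D]);
}

int main(void)
{
	run(0);		/* classic Tarjan: prints D = (4, 3) */
	run(1);		/* modified rule:  prints D = (4, 1) */
	return 0;
}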
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Paolo Abeni pabeni@redhat.com Link: https://lore.kernel.org/r/20240325202425.60930-13-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/af_unix.h | 2 +- net/unix/garbage.c | 29 +++++++++++++++-------------- 2 files changed, 16 insertions(+), 15 deletions(-)
--- a/include/net/af_unix.h +++ b/include/net/af_unix.h @@ -36,7 +36,7 @@ struct unix_vertex { struct list_head scc_entry; unsigned long out_degree; unsigned long index; - unsigned long lowlink; + unsigned long scc_index; };
struct unix_edge { --- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -312,9 +312,8 @@ static bool unix_scc_cyclic(struct list_ static LIST_HEAD(unix_visited_vertices); static unsigned long unix_vertex_grouped_index = UNIX_VERTEX_INDEX_MARK2;
-static void __unix_walk_scc(struct unix_vertex *vertex) +static void __unix_walk_scc(struct unix_vertex *vertex, unsigned long *last_index) { - unsigned long index = UNIX_VERTEX_INDEX_START; LIST_HEAD(vertex_stack); struct unix_edge *edge; LIST_HEAD(edge_stack); @@ -326,9 +325,9 @@ next_vertex: */ list_add(&vertex->scc_entry, &vertex_stack);
- vertex->index = index; - vertex->lowlink = index; - index++; + vertex->index = *last_index; + vertex->scc_index = *last_index; + (*last_index)++;
/* Explore neighbour vertices (receivers of the current vertex's fd). */ list_for_each_entry(edge, &vertex->edges, vertex_entry) { @@ -358,30 +357,30 @@ prev_vertex: next_vertex = vertex; vertex = edge->predecessor->vertex;
- /* If the successor has a smaller lowlink, two vertices - * are in the same SCC, so propagate the smaller lowlink + /* If the successor has a smaller scc_index, two vertices + * are in the same SCC, so propagate the smaller scc_index * to skip SCC finalisation. */ - vertex->lowlink = min(vertex->lowlink, next_vertex->lowlink); + vertex->scc_index = min(vertex->scc_index, next_vertex->scc_index); } else if (next_vertex->index != unix_vertex_grouped_index) { /* Loop detected by a back/cross edge. * - * The successor is on vertex_stack, so two vertices are - * in the same SCC. If the successor has a smaller index, + * The successor is on vertex_stack, so two vertices are in + * the same SCC. If the successor has a smaller *scc_index*, * propagate it to skip SCC finalisation. */ - vertex->lowlink = min(vertex->lowlink, next_vertex->index); + vertex->scc_index = min(vertex->scc_index, next_vertex->scc_index); } else { /* The successor was already grouped as another SCC */ } }
- if (vertex->index == vertex->lowlink) { + if (vertex->index == vertex->scc_index) { struct list_head scc;
/* SCC finalised. * - * If the lowlink was not updated, all the vertices above on + * If the scc_index was not updated, all the vertices above on * vertex_stack are in the same SCC. Group them using scc_entry. */ __list_cut_position(&scc, &vertex_stack, &vertex->scc_entry); @@ -407,6 +406,8 @@ prev_vertex:
static void unix_walk_scc(void) { + unsigned long last_index = UNIX_VERTEX_INDEX_START; + unix_graph_maybe_cyclic = false;
/* Visit every vertex exactly once. @@ -416,7 +417,7 @@ static void unix_walk_scc(void) struct unix_vertex *vertex;
vertex = list_first_entry(&unix_unvisited_vertices, typeof(*vertex), entry); - __unix_walk_scc(vertex); + __unix_walk_scc(vertex, &last_index); }
list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit a15702d8b3aad8ce5268c565bd29f0e02fd2db83 upstream.
When iterating SCC, we call unix_vertex_dead() for each vertex to check if the vertex is close()d and has no bridge to another SCC.
If both conditions are true for every vertex in the SCC, we can execute garbage collection for all skbs in the SCC.
The actual garbage collection is done in the following patch, replacing the old implementation.
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Paolo Abeni pabeni@redhat.com Link: https://lore.kernel.org/r/20240325202425.60930-14-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/unix/garbage.c | 44 +++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 43 insertions(+), 1 deletion(-)
--- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -289,6 +289,39 @@ void unix_destroy_fpl(struct scm_fp_list unix_free_vertices(fpl); }
+static bool unix_vertex_dead(struct unix_vertex *vertex) +{ + struct unix_edge *edge; + struct unix_sock *u; + long total_ref; + + list_for_each_entry(edge, &vertex->edges, vertex_entry) { + struct unix_vertex *next_vertex = unix_edge_successor(edge); + + /* The vertex's fd can be received by a non-inflight socket. */ + if (!next_vertex) + return false; + + /* The vertex's fd can be received by an inflight socket in + * another SCC. + */ + if (next_vertex->scc_index != vertex->scc_index) + return false; + } + + /* No receiver exists out of the same SCC. */ + + edge = list_first_entry(&vertex->edges, typeof(*edge), vertex_entry); + u = edge->predecessor; + total_ref = file_count(u->sk.sk_socket->file); + + /* If not close()d, total_ref > out_degree. */ + if (total_ref != vertex->out_degree) + return false; + + return true; +} + static bool unix_scc_cyclic(struct list_head *scc) { struct unix_vertex *vertex; @@ -377,6 +410,7 @@ prev_vertex:
if (vertex->index == vertex->scc_index) { struct list_head scc; + bool scc_dead = true;
/* SCC finalised. * @@ -391,6 +425,9 @@ prev_vertex:
/* Mark vertex as off-stack. */ vertex->index = unix_vertex_grouped_index; + + if (scc_dead) + scc_dead = unix_vertex_dead(vertex); }
if (!unix_graph_maybe_cyclic) @@ -431,13 +468,18 @@ static void unix_walk_scc_fast(void) while (!list_empty(&unix_unvisited_vertices)) { struct unix_vertex *vertex; struct list_head scc; + bool scc_dead = true;
vertex = list_first_entry(&unix_unvisited_vertices, typeof(*vertex), entry); list_add(&scc, &vertex->scc_entry);
- list_for_each_entry_reverse(vertex, &scc, scc_entry) + list_for_each_entry_reverse(vertex, &scc, scc_entry) { list_move_tail(&vertex->entry, &unix_visited_vertices);
+ if (scc_dead) + scc_dead = unix_vertex_dead(vertex); + } + list_del(&scc); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit 4090fa373f0e763c43610853d2774b5979915959 upstream.
If we find a dead SCC during iteration, we call unix_collect_skb() to splice all skbs in the SCC onto the global sk_buff_head, hitlist.
After iterating all SCC, we unlock unix_gc_lock and purge the queue.
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Paolo Abeni pabeni@redhat.com Link: https://lore.kernel.org/r/20240325202425.60930-15-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/af_unix.h | 8 - net/unix/af_unix.c | 12 - net/unix/garbage.c | 318 ++++++++++---------------------------------------- 3 files changed, 64 insertions(+), 274 deletions(-)
--- a/include/net/af_unix.h +++ b/include/net/af_unix.h @@ -19,9 +19,6 @@ static inline struct unix_sock *unix_get
extern spinlock_t unix_gc_lock; extern unsigned int unix_tot_inflight; - -void unix_inflight(struct user_struct *user, struct file *fp); -void unix_notinflight(struct user_struct *user, struct file *fp); void unix_add_edges(struct scm_fp_list *fpl, struct unix_sock *receiver); void unix_del_edges(struct scm_fp_list *fpl); void unix_update_edges(struct unix_sock *receiver); @@ -85,12 +82,7 @@ struct unix_sock { struct sock *peer; struct sock *listener; struct unix_vertex *vertex; - struct list_head link; - unsigned long inflight; spinlock_t lock; - unsigned long gc_flags; -#define UNIX_GC_CANDIDATE 0 -#define UNIX_GC_MAYBE_CYCLE 1 struct socket_wq peer_wq; wait_queue_entry_t peer_wake; struct scm_stat scm_stat; --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -955,12 +955,10 @@ static struct sock *unix_create1(struct sk->sk_destruct = unix_sock_destructor; u = unix_sk(sk); u->listener = NULL; - u->inflight = 0; u->vertex = NULL; u->path.dentry = NULL; u->path.mnt = NULL; spin_lock_init(&u->lock); - INIT_LIST_HEAD(&u->link); mutex_init(&u->iolock); /* single task reading lock */ mutex_init(&u->bindlock); /* single task binding lock */ init_waitqueue_head(&u->peer_wait); @@ -1744,8 +1742,6 @@ static inline bool too_many_unix_fds(str
static int unix_attach_fds(struct scm_cookie *scm, struct sk_buff *skb) { - int i; - if (too_many_unix_fds(current)) return -ETOOMANYREFS;
@@ -1757,9 +1753,6 @@ static int unix_attach_fds(struct scm_co if (!UNIXCB(skb).fp) return -ENOMEM;
- for (i = scm->fp->count - 1; i >= 0; i--) - unix_inflight(scm->fp->user, scm->fp->fp[i]); - if (unix_prepare_fpl(UNIXCB(skb).fp)) return -ENOMEM;
@@ -1768,15 +1761,10 @@ static int unix_attach_fds(struct scm_co
static void unix_detach_fds(struct scm_cookie *scm, struct sk_buff *skb) { - int i; - scm->fp = UNIXCB(skb).fp; UNIXCB(skb).fp = NULL;
unix_destroy_fpl(scm->fp); - - for (i = scm->fp->count - 1; i >= 0; i--) - unix_notinflight(scm->fp->user, scm->fp->fp[i]); }
static void unix_peek_fds(struct scm_cookie *scm, struct sk_buff *skb) --- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -322,6 +322,52 @@ static bool unix_vertex_dead(struct unix return true; }
+enum unix_recv_queue_lock_class { + U_RECVQ_LOCK_NORMAL, + U_RECVQ_LOCK_EMBRYO, +}; + +static void unix_collect_skb(struct list_head *scc, struct sk_buff_head *hitlist) +{ + struct unix_vertex *vertex; + + list_for_each_entry_reverse(vertex, scc, scc_entry) { + struct sk_buff_head *queue; + struct unix_edge *edge; + struct unix_sock *u; + + edge = list_first_entry(&vertex->edges, typeof(*edge), vertex_entry); + u = edge->predecessor; + queue = &u->sk.sk_receive_queue; + + spin_lock(&queue->lock); + + if (u->sk.sk_state == TCP_LISTEN) { + struct sk_buff *skb; + + skb_queue_walk(queue, skb) { + struct sk_buff_head *embryo_queue = &skb->sk->sk_receive_queue; + + /* listener -> embryo order, the inversion never happens. */ + spin_lock_nested(&embryo_queue->lock, U_RECVQ_LOCK_EMBRYO); + skb_queue_splice_init(embryo_queue, hitlist); + spin_unlock(&embryo_queue->lock); + } + } else { + skb_queue_splice_init(queue, hitlist); + +#if IS_ENABLED(CONFIG_AF_UNIX_OOB) + if (u->oob_skb) { + kfree_skb(u->oob_skb); + u->oob_skb = NULL; + } +#endif + } + + spin_unlock(&queue->lock); + } +} + static bool unix_scc_cyclic(struct list_head *scc) { struct unix_vertex *vertex; @@ -345,7 +391,8 @@ static bool unix_scc_cyclic(struct list_ static LIST_HEAD(unix_visited_vertices); static unsigned long unix_vertex_grouped_index = UNIX_VERTEX_INDEX_MARK2;
-static void __unix_walk_scc(struct unix_vertex *vertex, unsigned long *last_index) +static void __unix_walk_scc(struct unix_vertex *vertex, unsigned long *last_index, + struct sk_buff_head *hitlist) { LIST_HEAD(vertex_stack); struct unix_edge *edge; @@ -430,7 +477,9 @@ prev_vertex: scc_dead = unix_vertex_dead(vertex); }
- if (!unix_graph_maybe_cyclic) + if (scc_dead) + unix_collect_skb(&scc, hitlist); + else if (!unix_graph_maybe_cyclic) unix_graph_maybe_cyclic = unix_scc_cyclic(&scc);
list_del(&scc); @@ -441,7 +490,7 @@ prev_vertex: goto prev_vertex; }
-static void unix_walk_scc(void) +static void unix_walk_scc(struct sk_buff_head *hitlist) { unsigned long last_index = UNIX_VERTEX_INDEX_START;
@@ -454,7 +503,7 @@ static void unix_walk_scc(void) struct unix_vertex *vertex;
vertex = list_first_entry(&unix_unvisited_vertices, typeof(*vertex), entry); - __unix_walk_scc(vertex, &last_index); + __unix_walk_scc(vertex, &last_index, hitlist); }
list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices); @@ -463,7 +512,7 @@ static void unix_walk_scc(void) unix_graph_grouped = true; }
-static void unix_walk_scc_fast(void) +static void unix_walk_scc_fast(struct sk_buff_head *hitlist) { while (!list_empty(&unix_unvisited_vertices)) { struct unix_vertex *vertex; @@ -480,279 +529,40 @@ static void unix_walk_scc_fast(void) scc_dead = unix_vertex_dead(vertex); }
+ if (scc_dead) + unix_collect_skb(&scc, hitlist); + list_del(&scc); }
list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices); }
-static LIST_HEAD(gc_candidates); -static LIST_HEAD(gc_inflight_list); - -/* Keep the number of times in flight count for the file - * descriptor if it is for an AF_UNIX socket. - */ -void unix_inflight(struct user_struct *user, struct file *filp) -{ - struct unix_sock *u = unix_get_socket(filp); - - spin_lock(&unix_gc_lock); - - if (u) { - if (!u->inflight) { - WARN_ON_ONCE(!list_empty(&u->link)); - list_add_tail(&u->link, &gc_inflight_list); - } else { - WARN_ON_ONCE(list_empty(&u->link)); - } - u->inflight++; - } - - spin_unlock(&unix_gc_lock); -} - -void unix_notinflight(struct user_struct *user, struct file *filp) -{ - struct unix_sock *u = unix_get_socket(filp); - - spin_lock(&unix_gc_lock); - - if (u) { - WARN_ON_ONCE(!u->inflight); - WARN_ON_ONCE(list_empty(&u->link)); - - u->inflight--; - if (!u->inflight) - list_del_init(&u->link); - } - - spin_unlock(&unix_gc_lock); -} - -static void scan_inflight(struct sock *x, void (*func)(struct unix_sock *), - struct sk_buff_head *hitlist) -{ - struct sk_buff *skb; - struct sk_buff *next; - - spin_lock(&x->sk_receive_queue.lock); - skb_queue_walk_safe(&x->sk_receive_queue, skb, next) { - /* Do we have file descriptors ? */ - if (UNIXCB(skb).fp) { - bool hit = false; - /* Process the descriptors of this socket */ - int nfd = UNIXCB(skb).fp->count; - struct file **fp = UNIXCB(skb).fp->fp; - - while (nfd--) { - /* Get the socket the fd matches if it indeed does so */ - struct unix_sock *u = unix_get_socket(*fp++); - - /* Ignore non-candidates, they could have been added - * to the queues after starting the garbage collection - */ - if (u && test_bit(UNIX_GC_CANDIDATE, &u->gc_flags)) { - hit = true; - - func(u); - } - } - if (hit && hitlist != NULL) { - __skb_unlink(skb, &x->sk_receive_queue); - __skb_queue_tail(hitlist, skb); - } - } - } - spin_unlock(&x->sk_receive_queue.lock); -} - -static void scan_children(struct sock *x, void (*func)(struct unix_sock *), - struct sk_buff_head *hitlist) -{ - if (x->sk_state != TCP_LISTEN) { - scan_inflight(x, func, hitlist); - } else { - struct sk_buff *skb; - struct sk_buff *next; - struct unix_sock *u; - LIST_HEAD(embryos); - - /* For a listening socket collect the queued embryos - * and perform a scan on them as well. - */ - spin_lock(&x->sk_receive_queue.lock); - skb_queue_walk_safe(&x->sk_receive_queue, skb, next) { - u = unix_sk(skb->sk); - - /* An embryo cannot be in-flight, so it's safe - * to use the list link. - */ - WARN_ON_ONCE(!list_empty(&u->link)); - list_add_tail(&u->link, &embryos); - } - spin_unlock(&x->sk_receive_queue.lock); - - while (!list_empty(&embryos)) { - u = list_entry(embryos.next, struct unix_sock, link); - scan_inflight(&u->sk, func, hitlist); - list_del_init(&u->link); - } - } -} - -static void dec_inflight(struct unix_sock *usk) -{ - usk->inflight--; -} - -static void inc_inflight(struct unix_sock *usk) -{ - usk->inflight++; -} - -static void inc_inflight_move_tail(struct unix_sock *u) -{ - u->inflight++; - - /* If this still might be part of a cycle, move it to the end - * of the list, so that it's checked even if it was already - * passed over - */ - if (test_bit(UNIX_GC_MAYBE_CYCLE, &u->gc_flags)) - list_move_tail(&u->link, &gc_candidates); -} - static bool gc_in_progress;
static void __unix_gc(struct work_struct *work) { struct sk_buff_head hitlist; - struct unix_sock *u, *next; - LIST_HEAD(not_cycle_list); - struct list_head cursor;
spin_lock(&unix_gc_lock);
- if (!unix_graph_maybe_cyclic) + if (!unix_graph_maybe_cyclic) { + spin_unlock(&unix_gc_lock); goto skip_gc; - - if (unix_graph_grouped) - unix_walk_scc_fast(); - else - unix_walk_scc(); - - /* First, select candidates for garbage collection. Only - * in-flight sockets are considered, and from those only ones - * which don't have any external reference. - * - * Holding unix_gc_lock will protect these candidates from - * being detached, and hence from gaining an external - * reference. Since there are no possible receivers, all - * buffers currently on the candidates' queues stay there - * during the garbage collection. - * - * We also know that no new candidate can be added onto the - * receive queues. Other, non candidate sockets _can_ be - * added to queue, so we must make sure only to touch - * candidates. - * - * Embryos, though never candidates themselves, affect which - * candidates are reachable by the garbage collector. Before - * being added to a listener's queue, an embryo may already - * receive data carrying SCM_RIGHTS, potentially making the - * passed socket a candidate that is not yet reachable by the - * collector. It becomes reachable once the embryo is - * enqueued. Therefore, we must ensure that no SCM-laden - * embryo appears in a (candidate) listener's queue between - * consecutive scan_children() calls. - */ - list_for_each_entry_safe(u, next, &gc_inflight_list, link) { - struct sock *sk = &u->sk; - long total_refs; - - total_refs = file_count(sk->sk_socket->file); - - WARN_ON_ONCE(!u->inflight); - WARN_ON_ONCE(total_refs < u->inflight); - if (total_refs == u->inflight) { - list_move_tail(&u->link, &gc_candidates); - __set_bit(UNIX_GC_CANDIDATE, &u->gc_flags); - __set_bit(UNIX_GC_MAYBE_CYCLE, &u->gc_flags); - - if (sk->sk_state == TCP_LISTEN) { - unix_state_lock_nested(sk, U_LOCK_GC_LISTENER); - unix_state_unlock(sk); - } - } - } - - /* Now remove all internal in-flight reference to children of - * the candidates. - */ - list_for_each_entry(u, &gc_candidates, link) - scan_children(&u->sk, dec_inflight, NULL); - - /* Restore the references for children of all candidates, - * which have remaining references. Do this recursively, so - * only those remain, which form cyclic references. - * - * Use a "cursor" link, to make the list traversal safe, even - * though elements might be moved about. - */ - list_add(&cursor, &gc_candidates); - while (cursor.next != &gc_candidates) { - u = list_entry(cursor.next, struct unix_sock, link); - - /* Move cursor to after the current position. */ - list_move(&cursor, &u->link); - - if (u->inflight) { - list_move_tail(&u->link, ¬_cycle_list); - __clear_bit(UNIX_GC_MAYBE_CYCLE, &u->gc_flags); - scan_children(&u->sk, inc_inflight_move_tail, NULL); - } } - list_del(&cursor);
- /* Now gc_candidates contains only garbage. Restore original - * inflight counters for these as well, and remove the skbuffs - * which are creating the cycle(s). - */ - skb_queue_head_init(&hitlist); - list_for_each_entry(u, &gc_candidates, link) { - scan_children(&u->sk, inc_inflight, &hitlist); - -#if IS_ENABLED(CONFIG_AF_UNIX_OOB) - if (u->oob_skb) { - kfree_skb(u->oob_skb); - u->oob_skb = NULL; - } -#endif - } + __skb_queue_head_init(&hitlist);
- /* not_cycle_list contains those sockets which do not make up a - * cycle. Restore these to the inflight list. - */ - while (!list_empty(¬_cycle_list)) { - u = list_entry(not_cycle_list.next, struct unix_sock, link); - __clear_bit(UNIX_GC_CANDIDATE, &u->gc_flags); - list_move_tail(&u->link, &gc_inflight_list); - } + if (unix_graph_grouped) + unix_walk_scc_fast(&hitlist); + else + unix_walk_scc(&hitlist);
spin_unlock(&unix_gc_lock);
- /* Here we are. Hitlist is filled. Die. */ __skb_queue_purge(&hitlist); - - spin_lock(&unix_gc_lock); - - /* All candidates should have been detached by now. */ - WARN_ON_ONCE(!list_empty(&gc_candidates)); skip_gc: - /* Paired with READ_ONCE() in wait_for_unix_gc(). */ WRITE_ONCE(gc_in_progress, false); - - spin_unlock(&unix_gc_lock); }
static DECLARE_WORK(unix_gc_work, __unix_gc);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit 118f457da9ed58a79e24b73c2ef0aa1987241f0e upstream.
In the previous GC implementation, the shape of the inflight socket graph was not expected to change while GC was in progress.
MSG_PEEK was tricky because it could install an inflight fd silently and transform the graph.
Let's say we peeked a fd, which was a listening socket, and accept()ed some embryo sockets from it. The garbage collection algorithm would have been confused because the set of sockets visited in scan_inflight() would change within the same GC invocation.
That's why we placed spin_lock(&unix_gc_lock) and spin_unlock() in unix_peek_fds() with a fat comment.
In the new GC implementation, we no longer garbage-collect the socket if it exists in another queue, that is, if it has a bridge to another SCC. Also, accept() will require the lock if it has edges.
Thus, we need not do the complicated lock dance.
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Link: https://lore.kernel.org/r/20240401173125.92184-3-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/af_unix.h | 1 - net/unix/af_unix.c | 42 ------------------------------------------ net/unix/garbage.c | 2 +- 3 files changed, 1 insertion(+), 44 deletions(-)
--- a/include/net/af_unix.h +++ b/include/net/af_unix.h @@ -17,7 +17,6 @@ static inline struct unix_sock *unix_get } #endif
-extern spinlock_t unix_gc_lock; extern unsigned int unix_tot_inflight; void unix_add_edges(struct scm_fp_list *fpl, struct unix_sock *receiver); void unix_del_edges(struct scm_fp_list *fpl); --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -1770,48 +1770,6 @@ static void unix_detach_fds(struct scm_c static void unix_peek_fds(struct scm_cookie *scm, struct sk_buff *skb) { scm->fp = scm_fp_dup(UNIXCB(skb).fp); - - /* - * Garbage collection of unix sockets starts by selecting a set of - * candidate sockets which have reference only from being in flight - * (total_refs == inflight_refs). This condition is checked once during - * the candidate collection phase, and candidates are marked as such, so - * that non-candidates can later be ignored. While inflight_refs is - * protected by unix_gc_lock, total_refs (file count) is not, hence this - * is an instantaneous decision. - * - * Once a candidate, however, the socket must not be reinstalled into a - * file descriptor while the garbage collection is in progress. - * - * If the above conditions are met, then the directed graph of - * candidates (*) does not change while unix_gc_lock is held. - * - * Any operations that changes the file count through file descriptors - * (dup, close, sendmsg) does not change the graph since candidates are - * not installed in fds. - * - * Dequeing a candidate via recvmsg would install it into an fd, but - * that takes unix_gc_lock to decrement the inflight count, so it's - * serialized with garbage collection. - * - * MSG_PEEK is special in that it does not change the inflight count, - * yet does install the socket into an fd. The following lock/unlock - * pair is to ensure serialization with garbage collection. It must be - * done between incrementing the file count and installing the file into - * an fd. - * - * If garbage collection starts after the barrier provided by the - * lock/unlock, then it will see the elevated refcount and not mark this - * as a candidate. If a garbage collection is already in progress - * before the file count was incremented, then the lock/unlock pair will - * ensure that garbage collection is finished before progressing to - * installing the fd. - * - * (*) A -> B where B is on the queue of A or B is on the queue of C - * which is on the queue of listening socket A. - */ - spin_lock(&unix_gc_lock); - spin_unlock(&unix_gc_lock); }
static void unix_destruct_scm(struct sk_buff *skb) --- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -183,7 +183,7 @@ static void unix_free_vertices(struct sc } }
-DEFINE_SPINLOCK(unix_gc_lock); +static DEFINE_SPINLOCK(unix_gc_lock); unsigned int unix_tot_inflight;
void unix_add_edges(struct scm_fp_list *fpl, struct unix_sock *receiver)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit fd86344823b521149bb31d91eba900ba3525efa6 upstream.
Commit dcf70df2048d ("af_unix: Fix up unix_edge.successor for embryo socket.") added spin_lock(&unix_gc_lock) to the accept() path, and it caused a regression in a stress test, as reported by the kernel test robot.
If the embryo socket is not part of the inflight graph, we need not hold the lock.
To decide that in O(1) time and avoid the regression in the normal use case,
1. add a new stat unix_sk(sk)->scm_stat.nr_unix_fds
2. count the number of inflight AF_UNIX sockets in the receive queue under unix_state_lock()
3. move unix_update_edges() call under unix_state_lock()
4. avoid locking if nr_unix_fds is 0 in unix_update_edges()
Reported-by: kernel test robot oliver.sang@intel.com Closes: https://lore.kernel.org/oe-lkp/202404101427.92a08551-oliver.sang@intel.com Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Link: https://lore.kernel.org/r/20240413021928.20946-1-kuniyu@amazon.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/af_unix.h | 1 + net/unix/af_unix.c | 2 +- net/unix/garbage.c | 20 ++++++++++++++++---- 3 files changed, 18 insertions(+), 5 deletions(-)
--- a/include/net/af_unix.h +++ b/include/net/af_unix.h @@ -67,6 +67,7 @@ struct unix_skb_parms {
struct scm_stat { atomic_t nr_fds; + unsigned long nr_unix_fds; };
#define UNIXCB(skb) (*(struct unix_skb_parms *)&((skb)->cb)) --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -1677,12 +1677,12 @@ static int unix_accept(struct socket *so }
tsk = skb->sk; - unix_update_edges(unix_sk(tsk)); skb_free_datagram(sk, skb); wake_up_interruptible(&unix_sk(sk)->peer_wait);
/* attach accepted sock to socket */ unix_state_lock(tsk); + unix_update_edges(unix_sk(tsk)); newsock->state = SS_CONNECTED; unix_sock_inherit_flags(sock, newsock); sock_graft(tsk, newsock); --- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -209,6 +209,7 @@ void unix_add_edges(struct scm_fp_list * unix_add_edge(fpl, edge); } while (i < fpl->count_unix);
+ receiver->scm_stat.nr_unix_fds += fpl->count_unix; WRITE_ONCE(unix_tot_inflight, unix_tot_inflight + fpl->count_unix); out: WRITE_ONCE(fpl->user->unix_inflight, fpl->user->unix_inflight + fpl->count); @@ -222,6 +223,7 @@ out:
void unix_del_edges(struct scm_fp_list *fpl) { + struct unix_sock *receiver; int i = 0;
spin_lock(&unix_gc_lock); @@ -235,6 +237,8 @@ void unix_del_edges(struct scm_fp_list * unix_del_edge(fpl, edge); } while (i < fpl->count_unix);
+ receiver = fpl->edges[0].successor; + receiver->scm_stat.nr_unix_fds -= fpl->count_unix; WRITE_ONCE(unix_tot_inflight, unix_tot_inflight - fpl->count_unix); out: WRITE_ONCE(fpl->user->unix_inflight, fpl->user->unix_inflight - fpl->count); @@ -246,10 +250,18 @@ out:
void unix_update_edges(struct unix_sock *receiver) { - spin_lock(&unix_gc_lock); - unix_update_graph(unix_sk(receiver->listener)->vertex); - receiver->listener = NULL; - spin_unlock(&unix_gc_lock); + /* nr_unix_fds is only updated under unix_state_lock(). + * If it's 0 here, the embryo socket is not part of the + * inflight graph, and GC will not see it, so no lock needed. + */ + if (!receiver->scm_stat.nr_unix_fds) { + receiver->listener = NULL; + } else { + spin_lock(&unix_gc_lock); + unix_update_graph(unix_sk(receiver->listener)->vertex); + receiver->listener = NULL; + spin_unlock(&unix_gc_lock); + } }
int unix_prepare_fpl(struct scm_fp_list *fpl)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit 1af2dface5d286dd1f2f3405a0d6fa9f2c8fb998 upstream.
syzbot reported use-after-free in unix_del_edges(). [0]
What the repro does is basically repeat the following quickly.
1. pass a fd of an AF_UNIX socket to itself
  socketpair(AF_UNIX, SOCK_DGRAM, 0, [3, 4]) = 0
  sendmsg(3, {..., msg_control=[{cmsg_len=20, cmsg_level=SOL_SOCKET, cmsg_type=SCM_RIGHTS, cmsg_data=[4]}], ...}, 0) = 0
2. pass other fds of AF_UNIX sockets to the socket above
  socketpair(AF_UNIX, SOCK_SEQPACKET, 0, [5, 6]) = 0
  sendmsg(3, {..., msg_control=[{cmsg_len=48, cmsg_level=SOL_SOCKET, cmsg_type=SCM_RIGHTS, cmsg_data=[5, 6]}], ...}, 0) = 0
3. close all sockets
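For reference, step 1 (and the final close() in step 3) is ordinary SCM_RIGHTS fd passing; a minimal userspace C sketch (illustration only, error handling omitted) might look like:

/* self_ref_demo.c: pass one end of an AF_UNIX socketpair back through
 * the other end, then close both; only the GC can break the resulting
 * self-reference.
 */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

static void send_fd(int sock, int fd)
{
	char data = 'x';
	struct iovec iov = { .iov_base = &data, .iov_len = 1 };
	union {
		char buf[CMSG_SPACE(sizeof(int))];
		struct cmsghdr align;		/* for alignment */
	} u = { .buf = { 0 } };
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = u.buf,
		.msg_controllen = sizeof(u.buf),
	};
	struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

	sendmsg(sock, &msg, 0);
}

int main(void)
{
	int sk[2];

	socketpair(AF_UNIX, SOCK_DGRAM, 0, sk);

	/* step 1: the skb queued on sk[1] now pins sk[1]'s own file */
	send_fd(sk[0], sk[1]);

	/* step 3: after closing both fds, only the GC can free sk[1] */
	close(sk[0]);
	close(sk[1]);
	return 0;
}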
Here, two skbs are created, and every unix_edge->successor is the first socket. Then, __unix_gc() will garbage-collect the two skbs:
(a) free skb with self-referencing fd (b) free skb holding other sockets
After (a), the self-referencing socket will be scheduled to be freed later by the delayed_fput() task.
syzbot repeated the sequences above (1. ~ 3.) quickly and triggered the task concurrently while GC was running.
So, at (b), the socket was already freed, and accessing it was illegal.
unix_del_edges() accesses the receiver socket as edge->successor to optimise GC. However, we should not do it during GC.
Garbage-collecting sockets does not change the shape of the rest of the graph, so we need not call unix_update_graph() to update unix_graph_grouped when we purge skb.
However, if we clean up all loops in the unix_walk_scc_fast() path, unix_graph_maybe_cyclic remains unchanged (true), and __unix_gc() will call unix_walk_scc_fast() continuously even though there is no socket to garbage-collect.
To keep that optimisation while fixing the UAF, let's add the same unix_graph_maybe_cyclic update logic to unix_walk_scc_fast() as is done in unix_walk_scc() and __unix_walk_scc().
Note that when unix_del_edges() is called from other places, the receiver socket is always alive:
  - sendmsg: the successor's sk_refcnt is bumped by sock_hold()
             unix_find_other() for SOCK_DGRAM, connect() for SOCK_STREAM
- recvmsg: the successor is the receiver, and its fd is alive
[0]:
BUG: KASAN: slab-use-after-free in unix_edge_successor net/unix/garbage.c:109 [inline]
BUG: KASAN: slab-use-after-free in unix_del_edge net/unix/garbage.c:165 [inline]
BUG: KASAN: slab-use-after-free in unix_del_edges+0x148/0x630 net/unix/garbage.c:237
Read of size 8 at addr ffff888079c6e640 by task kworker/u8:6/1099
CPU: 0 PID: 1099 Comm: kworker/u8:6 Not tainted 6.9.0-rc4-next-20240418-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Workqueue: events_unbound __unix_gc
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 print_address_description mm/kasan/report.c:377 [inline]
 print_report+0x169/0x550 mm/kasan/report.c:488
 kasan_report+0x143/0x180 mm/kasan/report.c:601
 unix_edge_successor net/unix/garbage.c:109 [inline]
 unix_del_edge net/unix/garbage.c:165 [inline]
 unix_del_edges+0x148/0x630 net/unix/garbage.c:237
 unix_destroy_fpl+0x59/0x210 net/unix/garbage.c:298
 unix_detach_fds net/unix/af_unix.c:1811 [inline]
 unix_destruct_scm+0x13e/0x210 net/unix/af_unix.c:1826
 skb_release_head_state+0x100/0x250 net/core/skbuff.c:1127
 skb_release_all net/core/skbuff.c:1138 [inline]
 __kfree_skb net/core/skbuff.c:1154 [inline]
 kfree_skb_reason+0x16d/0x3b0 net/core/skbuff.c:1190
 __skb_queue_purge_reason include/linux/skbuff.h:3251 [inline]
 __skb_queue_purge include/linux/skbuff.h:3256 [inline]
 __unix_gc+0x1732/0x1830 net/unix/garbage.c:575
 process_one_work kernel/workqueue.c:3218 [inline]
 process_scheduled_works+0xa2c/0x1830 kernel/workqueue.c:3299
 worker_thread+0x86d/0xd70 kernel/workqueue.c:3380
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Allocated by task 14427:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 unpoison_slab_object mm/kasan/common.c:312 [inline]
 __kasan_slab_alloc+0x66/0x80 mm/kasan/common.c:338
 kasan_slab_alloc include/linux/kasan.h:201 [inline]
 slab_post_alloc_hook mm/slub.c:3897 [inline]
 slab_alloc_node mm/slub.c:3957 [inline]
 kmem_cache_alloc_noprof+0x135/0x290 mm/slub.c:3964
 sk_prot_alloc+0x58/0x210 net/core/sock.c:2074
 sk_alloc+0x38/0x370 net/core/sock.c:2133
 unix_create1+0xb4/0x770
 unix_create+0x14e/0x200 net/unix/af_unix.c:1034
 __sock_create+0x490/0x920 net/socket.c:1571
 sock_create net/socket.c:1622 [inline]
 __sys_socketpair+0x33e/0x720 net/socket.c:1773
 __do_sys_socketpair net/socket.c:1822 [inline]
 __se_sys_socketpair net/socket.c:1819 [inline]
 __x64_sys_socketpair+0x9b/0xb0 net/socket.c:1819
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
Freed by task 1805:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 kasan_save_free_info+0x40/0x50 mm/kasan/generic.c:579
 poison_slab_object+0xe0/0x150 mm/kasan/common.c:240
 __kasan_slab_free+0x37/0x60 mm/kasan/common.c:256
 kasan_slab_free include/linux/kasan.h:184 [inline]
 slab_free_hook mm/slub.c:2190 [inline]
 slab_free mm/slub.c:4393 [inline]
 kmem_cache_free+0x145/0x340 mm/slub.c:4468
 sk_prot_free net/core/sock.c:2114 [inline]
 __sk_destruct+0x467/0x5f0 net/core/sock.c:2208
 sock_put include/net/sock.h:1948 [inline]
 unix_release_sock+0xa8b/0xd20 net/unix/af_unix.c:665
 unix_release+0x91/0xc0 net/unix/af_unix.c:1049
 __sock_release net/socket.c:659 [inline]
 sock_close+0xbc/0x240 net/socket.c:1421
 __fput+0x406/0x8b0 fs/file_table.c:422
 delayed_fput+0x59/0x80 fs/file_table.c:445
 process_one_work kernel/workqueue.c:3218 [inline]
 process_scheduled_works+0xa2c/0x1830 kernel/workqueue.c:3299
 worker_thread+0x86d/0xd70 kernel/workqueue.c:3380
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
The buggy address belongs to the object at ffff888079c6e000
 which belongs to the cache UNIX of size 1920
The buggy address is located 1600 bytes inside of
 freed 1920-byte region [ffff888079c6e000, ffff888079c6e780)
Reported-by: syzbot+f3f3eef1d2100200e593@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=f3f3eef1d2100200e593 Fixes: 77e5593aebba ("af_unix: Skip GC if no cycle exists.") Fixes: fd86344823b5 ("af_unix: Try not to hold unix_gc_lock during accept().") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Link: https://lore.kernel.org/r/20240419235102.31707-1-kuniyu@amazon.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/unix/garbage.c | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-)
--- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -158,11 +158,14 @@ static void unix_add_edge(struct scm_fp_ unix_update_graph(unix_edge_successor(edge)); }
+static bool gc_in_progress; + static void unix_del_edge(struct scm_fp_list *fpl, struct unix_edge *edge) { struct unix_vertex *vertex = edge->predecessor->vertex;
- unix_update_graph(unix_edge_successor(edge)); + if (!gc_in_progress) + unix_update_graph(unix_edge_successor(edge));
list_del(&edge->vertex_entry); vertex->out_degree--; @@ -237,8 +240,10 @@ void unix_del_edges(struct scm_fp_list * unix_del_edge(fpl, edge); } while (i < fpl->count_unix);
- receiver = fpl->edges[0].successor; - receiver->scm_stat.nr_unix_fds -= fpl->count_unix; + if (!gc_in_progress) { + receiver = fpl->edges[0].successor; + receiver->scm_stat.nr_unix_fds -= fpl->count_unix; + } WRITE_ONCE(unix_tot_inflight, unix_tot_inflight - fpl->count_unix); out: WRITE_ONCE(fpl->user->unix_inflight, fpl->user->unix_inflight - fpl->count); @@ -526,6 +531,8 @@ static void unix_walk_scc(struct sk_buff
static void unix_walk_scc_fast(struct sk_buff_head *hitlist) { + unix_graph_maybe_cyclic = false; + while (!list_empty(&unix_unvisited_vertices)) { struct unix_vertex *vertex; struct list_head scc; @@ -543,6 +550,8 @@ static void unix_walk_scc_fast(struct sk
if (scc_dead) unix_collect_skb(&scc, hitlist); + else if (!unix_graph_maybe_cyclic) + unix_graph_maybe_cyclic = unix_scc_cyclic(&scc);
list_del(&scc); } @@ -550,8 +559,6 @@ static void unix_walk_scc_fast(struct sk list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices); }
-static bool gc_in_progress; - static void __unix_gc(struct work_struct *work) { struct sk_buff_head hitlist;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
commit 7172dc93d621d5dc302d007e95ddd1311ec64283 upstream.
Commit 1af2dface5d2 ("af_unix: Don't access successor in unix_del_edges() during GC.") fixed a use-after-free by avoiding access to edge->successor while GC is in progress.
However, there could be a small race window where another process could call unix_del_edges() while gc_in_progress is true and __skb_queue_purge() is on the way.
So, we need another marker for struct scm_fp_list which indicates if the skb is garbage-collected.
This patch adds dead flag in struct scm_fp_list and set it true before calling __skb_queue_purge().
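For orientation, here are the two relevant excerpts from the diff below, shown together; the producer side (__unix_gc()) marks each fp list individually, and the consumer side (unix_del_edge()) now checks that per-list flag instead of the global gc_in_progress:

	/* __unix_gc(): mark every fp list on the hitlist as dead, then purge. */
	skb_queue_walk(&hitlist, skb) {
		if (UNIXCB(skb).fp)
			UNIXCB(skb).fp->dead = true;
	}
	__skb_queue_purge(&hitlist);

	/* unix_del_edge(): skip the graph update for skbs the GC is purging. */
	if (!fpl->dead)
		unix_update_graph(unix_edge_successor(edge));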
Fixes: 1af2dface5d2 ("af_unix: Don't access successor in unix_del_edges() during GC.") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Paolo Abeni pabeni@redhat.com Link: https://lore.kernel.org/r/20240508171150.50601-1-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/scm.h | 1 + net/core/scm.c | 1 + net/unix/garbage.c | 14 ++++++++++---- 3 files changed, 12 insertions(+), 4 deletions(-)
--- a/include/net/scm.h +++ b/include/net/scm.h @@ -31,6 +31,7 @@ struct scm_fp_list { short max; #ifdef CONFIG_UNIX bool inflight; + bool dead; struct list_head vertices; struct unix_edge *edges; #endif --- a/net/core/scm.c +++ b/net/core/scm.c @@ -91,6 +91,7 @@ static int scm_fp_copy(struct cmsghdr *c fpl->user = NULL; #if IS_ENABLED(CONFIG_UNIX) fpl->inflight = false; + fpl->dead = false; fpl->edges = NULL; INIT_LIST_HEAD(&fpl->vertices); #endif --- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -158,13 +158,11 @@ static void unix_add_edge(struct scm_fp_ unix_update_graph(unix_edge_successor(edge)); }
-static bool gc_in_progress; - static void unix_del_edge(struct scm_fp_list *fpl, struct unix_edge *edge) { struct unix_vertex *vertex = edge->predecessor->vertex;
- if (!gc_in_progress) + if (!fpl->dead) unix_update_graph(unix_edge_successor(edge));
list_del(&edge->vertex_entry); @@ -240,7 +238,7 @@ void unix_del_edges(struct scm_fp_list * unix_del_edge(fpl, edge); } while (i < fpl->count_unix);
- if (!gc_in_progress) { + if (!fpl->dead) { receiver = fpl->edges[0].successor; receiver->scm_stat.nr_unix_fds -= fpl->count_unix; } @@ -559,9 +557,12 @@ static void unix_walk_scc_fast(struct sk list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices); }
+static bool gc_in_progress; + static void __unix_gc(struct work_struct *work) { struct sk_buff_head hitlist; + struct sk_buff *skb;
spin_lock(&unix_gc_lock);
@@ -579,6 +580,11 @@ static void __unix_gc(struct work_struct
spin_unlock(&unix_gc_lock);
+ skb_queue_walk(&hitlist, skb) { + if (UNIXCB(skb).fp) + UNIXCB(skb).fp->dead = true; + } + __skb_queue_purge(&hitlist); skip_gc: WRITE_ONCE(gc_in_progress, false);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michal Luczaj mhal@rbox.co
commit 041933a1ec7b4173a8e638cae4f8e394331d7e54 upstream.
GC attempts to explicitly drop oob_skb's reference before purging the hit list.
The problem is with embryos: kfree_skb(u->oob_skb) is never called on an embryo socket.
The Python script below [0] sends a listener's fd to its embryo as OOB data. While GC does collect the embryo's queue, it fails to drop the OOB skb's refcount. The skb which was in the embryo's receive queue stays as unix_sk(sk)->oob_skb and keeps the listener's refcount [1].
Tell GC to dispose of the embryo's oob_skb.
[0]:
from array import array
from socket import *

addr = '\x00unix-oob'
lis = socket(AF_UNIX, SOCK_STREAM)
lis.bind(addr)
lis.listen(1)

s = socket(AF_UNIX, SOCK_STREAM)
s.connect(addr)
scm = (SOL_SOCKET, SCM_RIGHTS, array('i', [lis.fileno()]))
s.sendmsg([b'x'], [scm], MSG_OOB)
lis.close()
[1]
$ grep unix-oob /proc/net/unix
$ ./unix-oob.py
$ grep unix-oob /proc/net/unix
0000000000000000: 00000002 00000000 00000000 0001 02 0 @unix-oob
0000000000000000: 00000002 00000000 00010000 0001 01 6072 @unix-oob
Fixes: 4090fa373f0e ("af_unix: Replace garbage collection algorithm.") Signed-off-by: Michal Luczaj mhal@rbox.co Reviewed-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/unix/garbage.c | 23 ++++++++++++++--------- 1 file changed, 14 insertions(+), 9 deletions(-)
--- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -342,6 +342,18 @@ enum unix_recv_queue_lock_class { U_RECVQ_LOCK_EMBRYO, };
+static void unix_collect_queue(struct unix_sock *u, struct sk_buff_head *hitlist) +{ + skb_queue_splice_init(&u->sk.sk_receive_queue, hitlist); + +#if IS_ENABLED(CONFIG_AF_UNIX_OOB) + if (u->oob_skb) { + WARN_ON_ONCE(skb_unref(u->oob_skb)); + u->oob_skb = NULL; + } +#endif +} + static void unix_collect_skb(struct list_head *scc, struct sk_buff_head *hitlist) { struct unix_vertex *vertex; @@ -365,18 +377,11 @@ static void unix_collect_skb(struct list
/* listener -> embryo order, the inversion never happens. */ spin_lock_nested(&embryo_queue->lock, U_RECVQ_LOCK_EMBRYO); - skb_queue_splice_init(embryo_queue, hitlist); + unix_collect_queue(unix_sk(skb->sk), hitlist); spin_unlock(&embryo_queue->lock); } } else { - skb_queue_splice_init(queue, hitlist); - -#if IS_ENABLED(CONFIG_AF_UNIX_OOB) - if (u->oob_skb) { - kfree_skb(u->oob_skb); - u->oob_skb = NULL; - } -#endif + unix_collect_queue(u, hitlist); }
spin_unlock(&queue->lock);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Shigeru Yoshida syoshida@redhat.com
commit 927fa5b3e4f52e0967bfc859afc98ad1c523d2d5 upstream.
KMSAN reported uninit-value access in __unix_walk_scc() [1].
In the list_for_each_entry_reverse() loop, when the vertex's index equals its scc_index, the loop uses the variable vertex as a temporary cursor that points to a vertex in scc. When the loop is finished, the variable vertex points to the list head, in this case scc, which is a local variable on the stack (more precisely, it is not even scc itself and might underflow the call stack of __unix_walk_scc(): container_of(&scc, struct unix_vertex, scc_entry)).
However, the variable vertex is used under the label prev_vertex. So if the edge_stack is not empty and the function jumps to the prev_vertex label, the function will access invalid data on the stack. This causes the uninit-value access issue.
Fix this by introducing a new temporary variable for the loop.
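The underlying pitfall is the generic list_for_each_entry() cursor behaviour; the toy example below (struct item and broken_reuse_of_cursor() are invented names, unrelated to af_unix) shows the same pattern in isolation:

	#include <linux/list.h>

	struct item {
		int value;
		struct list_head entry;
	};

	static int broken_reuse_of_cursor(struct list_head *head)
	{
		struct item *it;

		list_for_each_entry_reverse(it, head, entry)
			;	/* walk all elements */

		/* BUG: after the loop, 'it' is container_of(head, struct item,
		 * entry), i.e. it points into whatever memory surrounds 'head'
		 * (typically the caller's stack), not at a real element.
		 */
		return it->value;
	}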
[1]
BUG: KMSAN: uninit-value in __unix_walk_scc net/unix/garbage.c:478 [inline]
BUG: KMSAN: uninit-value in unix_walk_scc net/unix/garbage.c:526 [inline]
BUG: KMSAN: uninit-value in __unix_gc+0x2589/0x3c20 net/unix/garbage.c:584
 __unix_walk_scc net/unix/garbage.c:478 [inline]
 unix_walk_scc net/unix/garbage.c:526 [inline]
 __unix_gc+0x2589/0x3c20 net/unix/garbage.c:584
 process_one_work kernel/workqueue.c:3231 [inline]
 process_scheduled_works+0xade/0x1bf0 kernel/workqueue.c:3312
 worker_thread+0xeb6/0x15b0 kernel/workqueue.c:3393
 kthread+0x3c4/0x530 kernel/kthread.c:389
 ret_from_fork+0x6e/0x90 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Uninit was stored to memory at:
 unix_walk_scc net/unix/garbage.c:526 [inline]
 __unix_gc+0x2adf/0x3c20 net/unix/garbage.c:584
 process_one_work kernel/workqueue.c:3231 [inline]
 process_scheduled_works+0xade/0x1bf0 kernel/workqueue.c:3312
 worker_thread+0xeb6/0x15b0 kernel/workqueue.c:3393
 kthread+0x3c4/0x530 kernel/kthread.c:389
 ret_from_fork+0x6e/0x90 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Local variable entries created at:
 ref_tracker_free+0x48/0xf30 lib/ref_tracker.c:222
 netdev_tracker_free include/linux/netdevice.h:4058 [inline]
 netdev_put include/linux/netdevice.h:4075 [inline]
 dev_put include/linux/netdevice.h:4101 [inline]
 update_gid_event_work_handler+0xaa/0x1b0 drivers/infiniband/core/roce_gid_mgmt.c:813

CPU: 1 PID: 12763 Comm: kworker/u8:31 Not tainted 6.10.0-rc4-00217-g35bb670d65fc #32
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-2.fc40 04/01/2014
Workqueue: events_unbound __unix_gc
Fixes: 3484f063172d ("af_unix: Detect Strongly Connected Components.") Reported-by: syzkaller syzkaller@googlegroups.com Signed-off-by: Shigeru Yoshida syoshida@redhat.com Reviewed-by: Kuniyuki Iwashima kuniyu@amazon.com Link: https://patch.msgid.link/20240702160428.10153-1-syoshida@redhat.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/unix/garbage.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-)
--- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -476,6 +476,7 @@ prev_vertex: }
if (vertex->index == vertex->scc_index) { + struct unix_vertex *v; struct list_head scc; bool scc_dead = true;
@@ -486,15 +487,15 @@ prev_vertex: */ __list_cut_position(&scc, &vertex_stack, &vertex->scc_entry);
- list_for_each_entry_reverse(vertex, &scc, scc_entry) { + list_for_each_entry_reverse(v, &scc, scc_entry) { /* Don't restart DFS from this vertex in unix_walk_scc(). */ - list_move_tail(&vertex->entry, &unix_visited_vertices); + list_move_tail(&v->entry, &unix_visited_vertices);
/* Mark vertex as off-stack. */ - vertex->index = unix_vertex_grouped_index; + v->index = unix_vertex_grouped_index;
if (scc_dead) - scc_dead = unix_vertex_dead(vertex); + scc_dead = unix_vertex_dead(v); }
if (scc_dead)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alok Tiwari alok.a.tiwari@oracle.com
commit 295217420a44403a33c30f99d8337fe7b07eb02b upstream.
There is a typo in sm8350.dtsi where the node name mmeory@85200000 should be memory@85200000. This patch corrects the typo for clarity and consistency.
Fixes: b7e8f433a673 ("arm64: dts: qcom: Add basic devicetree support for SM8350 SoC") Cc: stable@vger.kernel.org Signed-off-by: Alok Tiwari alok.a.tiwari@oracle.com Link: https://lore.kernel.org/r/20250514114656.2307828-1-alok.a.tiwari@oracle.com Signed-off-by: Bjorn Andersson andersson@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm64/boot/dts/qcom/sm8350.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi +++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi @@ -421,7 +421,7 @@ no-map; };
- pil_camera_mem: mmeory@85200000 { + pil_camera_mem: memory@85200000 { reg = <0x0 0x85200000 0x0 0x500000>; no-map; };
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pedro Tammela pctammela@mojatatu.com
commit ac9fe7dd8e730a103ae4481147395cc73492d786 upstream.
Savino says: "We are writing to report that this recent patch (141d34391abbb315d68556b7c67ad97885407547) [1] can be bypassed, and a UAF can still occur when HFSC is utilized with NETEM.
The patch only checks the cl->cl_nactive field to determine whether it is the first insertion or not [2], but this field is only incremented by init_vf [3].
By using HFSC_RSC (which uses init_ed) [4], it is possible to bypass the check and insert the class twice in the eltree. Under normal conditions, this would lead to an infinite loop in hfsc_dequeue for the reasons we already explained in this report [5].
However, if TBF is added as root qdisc and it is configured with a very low rate, it can be utilized to prevent packets from being dequeued. This behavior can be exploited to perform subsequent insertions in the HFSC eltree and cause a UAF."
To fix both the UAF and the infinite loop with netem as an HFSC child, explicitly check in hfsc_enqueue whether the class is already in the eltree whenever the HFSC_RSC flag is set.
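For background, the check added below relies on the usual rbtree membership idiom; the sketch that follows is a generic illustration with invented names (struct tree_item, tree_item_init(), ...), not the sch_hfsc code itself:

	#include <linux/rbtree.h>
	#include <linux/types.h>

	struct tree_item {
		struct rb_node node;
	};

	static void tree_item_init(struct tree_item *ti)
	{
		RB_CLEAR_NODE(&ti->node);		/* mark "not on any tree" */
	}

	static bool tree_item_queued(const struct tree_item *ti)
	{
		return !RB_EMPTY_NODE(&ti->node);	/* true once rb_insert_color()'d */
	}

	static void tree_item_dequeue(struct tree_item *ti, struct rb_root *root)
	{
		rb_erase(&ti->node, root);
		RB_CLEAR_NODE(&ti->node);	/* rb_erase() does not reset the node */
	}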
[1] https://web.git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commi... [2] https://elixir.bootlin.com/linux/v6.15-rc5/source/net/sched/sch_hfsc.c#L1572 [3] https://elixir.bootlin.com/linux/v6.15-rc5/source/net/sched/sch_hfsc.c#L677 [4] https://elixir.bootlin.com/linux/v6.15-rc5/source/net/sched/sch_hfsc.c#L1574 [5] https://lore.kernel.org/netdev/8DuRWwfqjoRDLDmBMlIfbrsZg9Gx50DHJc1ilxsEBNe2D...
Fixes: 37d9cf1a3ce3 ("sched: Fix detection of empty queues in child qdiscs") Reported-by: Savino Dicanosa savy@syst3mfailure.io Reported-by: William Liu will@willsroot.io Acked-by: Jamal Hadi Salim jhs@mojatatu.com Tested-by: Victor Nogueira victor@mojatatu.com Signed-off-by: Pedro Tammela pctammela@mojatatu.com Link: https://patch.msgid.link/20250522181448.1439717-2-pctammela@mojatatu.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/sched/sch_hfsc.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
--- a/net/sched/sch_hfsc.c +++ b/net/sched/sch_hfsc.c @@ -176,6 +176,11 @@ struct hfsc_sched {
#define HT_INFINITY 0xffffffffffffffffULL /* infinite time value */
+static bool cl_in_el_or_vttree(struct hfsc_class *cl) +{ + return ((cl->cl_flags & HFSC_FSC) && cl->cl_nactive) || + ((cl->cl_flags & HFSC_RSC) && !RB_EMPTY_NODE(&cl->el_node)); +}
/* * eligible tree holds backlogged classes being sorted by their eligible times. @@ -1041,6 +1046,8 @@ hfsc_change_class(struct Qdisc *sch, u32 if (cl == NULL) return -ENOBUFS;
+ RB_CLEAR_NODE(&cl->el_node); + err = tcf_block_get(&cl->block, &cl->filter_list, sch, extack); if (err) { kfree(cl); @@ -1571,7 +1578,7 @@ hfsc_enqueue(struct sk_buff *skb, struct sch->qstats.backlog += len; sch->q.qlen++;
- if (first && !cl->cl_nactive) { + if (first && !cl_in_el_or_vttree(cl)) { if (cl->cl_flags & HFSC_RSC) init_ed(cl, len); if (cl->cl_flags & HFSC_FSC)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Robin Murphy robin.murphy@arm.com
commit 11b0f576e0cbde6a12258f2af6753b17b8df342b upstream.
Somehow the encodings for REQ2/SNP2 channels in XP events got mixed up... Unmix them.
CC: stable@vger.kernel.org Fixes: 23760a014417 ("perf/arm-cmn: Add CMN-700 support") Signed-off-by: Robin Murphy robin.murphy@arm.com Link: https://lore.kernel.org/r/087023e9737ac93d7ec7a841da904758c254cb01.174671740... Signed-off-by: Will Deacon will@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/perf/arm-cmn.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/drivers/perf/arm-cmn.c +++ b/drivers/perf/arm-cmn.c @@ -675,8 +675,8 @@ static umode_t arm_cmn_event_attr_is_vis
if ((chan == 5 && cmn->rsp_vc_num < 2) || (chan == 6 && cmn->dat_vc_num < 2) || - (chan == 7 && cmn->snp_vc_num < 2) || - (chan == 8 && cmn->req_vc_num < 2)) + (chan == 7 && cmn->req_vc_num < 2) || + (chan == 8 && cmn->snp_vc_num < 2)) return 0; }
@@ -794,8 +794,8 @@ static umode_t arm_cmn_event_attr_is_vis _CMN_EVENT_XP(pub_##_name, (_event) | (4 << 5)), \ _CMN_EVENT_XP(rsp2_##_name, (_event) | (5 << 5)), \ _CMN_EVENT_XP(dat2_##_name, (_event) | (6 << 5)), \ - _CMN_EVENT_XP(snp2_##_name, (_event) | (7 << 5)), \ - _CMN_EVENT_XP(req2_##_name, (_event) | (8 << 5)) + _CMN_EVENT_XP(req2_##_name, (_event) | (7 << 5)), \ + _CMN_EVENT_XP(snp2_##_name, (_event) | (8 << 5))
static struct attribute *arm_cmn_event_attrs[] = {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Robin Murphy robin.murphy@arm.com
commit 597704e201068db3d104de3c7a4d447ff8209127 upstream.
For all the complexity of handling affinity for CPU hotplug, what we've apparently managed to overlook is that arm_cmn_init_irqs() has in fact always been setting the *initial* affinity of all IRQs to CPU 0, not the CPU we subsequently choose for event scheduling. Oh dear.
Cc: stable@vger.kernel.org Fixes: 0ba64770a2f2 ("perf: Add Arm CMN-600 PMU driver") Signed-off-by: Robin Murphy robin.murphy@arm.com Reviewed-by: Ilkka Koskinen ilkka@os.amperecomputing.com Link: https://lore.kernel.org/r/b12fccba6b5b4d2674944f59e4daad91cd63420b.174706991... Signed-off-by: Will Deacon will@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/perf/arm-cmn.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/perf/arm-cmn.c +++ b/drivers/perf/arm-cmn.c @@ -2313,6 +2313,7 @@ static int arm_cmn_probe(struct platform
cmn->dev = &pdev->dev; cmn->part = (unsigned long)device_get_match_data(cmn->dev); + cmn->cpu = cpumask_local_spread(0, dev_to_node(cmn->dev)); platform_set_drvdata(pdev, cmn);
if (cmn->part == PART_CMN600 && has_acpi_companion(cmn->dev)) { @@ -2340,7 +2341,6 @@ static int arm_cmn_probe(struct platform if (err) return err;
- cmn->cpu = cpumask_local_spread(0, dev_to_node(cmn->dev)); cmn->pmu = (struct pmu) { .module = THIS_MODULE, .attr_groups = arm_cmn_attr_groups,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christian Brauner brauner@kernel.org
commit 95c5f43181fe9c1b5e5a4bd3281c857a5259991f upstream.
The replace_fd() helper returns the file descriptor number on success and a negative error code on failure. The current error handling in umh_pipe_setup() only works because the file descriptor that is replaced is zero but that's pretty volatile. Explicitly check for a negative error code.
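To make the calling convention explicit, here is a hedged stand-alone sketch of the same pattern; example_redirect_stdin() is a hypothetical name:

	#include <linux/file.h>
	#include <linux/fs.h>

	/* replace_fd() returns the descriptor number it installed (>= 0) on
	 * success, so only negative values are errors.  It takes its own
	 * reference on @file, hence the unconditional fput() of ours.
	 */
	static int example_redirect_stdin(struct file *file)
	{
		int err;

		err = replace_fd(0, file, 0);
		fput(file);
		if (err < 0)
			return err;

		return 0;	/* err == 0 here only because the target fd was 0 */
	}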
Link: https://lore.kernel.org/20250414-work-coredump-v2-2-685bf231f828@kernel.org Tested-by: Luca Boccassi luca.boccassi@gmail.com Reviewed-by: Oleg Nesterov oleg@redhat.com Signed-off-by: Christian Brauner brauner@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/coredump.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
--- a/fs/coredump.c +++ b/fs/coredump.c @@ -493,7 +493,9 @@ static int umh_pipe_setup(struct subproc { struct file *files[2]; struct coredump_params *cp = (struct coredump_params *)info->data; - int err = create_pipe_files(files, 0); + int err; + + err = create_pipe_files(files, 0); if (err) return err;
@@ -501,10 +503,13 @@ static int umh_pipe_setup(struct subproc
err = replace_fd(0, files[0], 0); fput(files[0]); + if (err < 0) + return err; + /* and disallow core files too */ current->signal->rlim[RLIMIT_CORE] = (struct rlimit){1, 1};
- return err; + return 0; }
void do_coredump(const kernel_siginfo_t *siginfo)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christian Brauner brauner@kernel.org
commit 6ae930d9dbf2d093157be33428538c91966d8a9f upstream.
Add a new helper that reserves a pidfd and allocates a new pidfd file that stashes the provided struct pid. This will allow us to remove places that either open-code this functionality or that call pidfd_create() but then have to call close_fd() because there are still failure points after pidfd_create() has been called.
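Based on the semantics described above, a caller of the new helper is expected to look roughly like the following sketch; example_get_pidfd() and do_other_fallible_setup() are hypothetical placeholders:

	#include <linux/file.h>
	#include <linux/pid.h>

	/* Hypothetical placeholder for whatever fallible work follows. */
	static int do_other_fallible_setup(void)
	{
		return 0;
	}

	static int example_get_pidfd(struct pid *pid)
	{
		struct file *pidfd_file;
		int pidfd, err;

		pidfd = pidfd_prepare(pid, 0, &pidfd_file);
		if (pidfd < 0)
			return pidfd;

		err = do_other_fallible_setup();
		if (err) {
			/* Nothing is visible to userspace yet, so undo both halves. */
			put_unused_fd(pidfd);
			fput(pidfd_file);
			return err;
		}

		/* Publish the reserved descriptor only once nothing can fail. */
		fd_install(pidfd, pidfd_file);
		return pidfd;
	}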
Reviewed-by: Jan Kara jack@suse.cz Message-Id: 20230327-pidfd-file-api-v1-1-5c0e9a3158e4@kernel.org Signed-off-by: Christian Brauner brauner@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/linux/pid.h | 1 kernel/fork.c | 85 ++++++++++++++++++++++++++++++++++++++++++++++++++++ kernel/pid.c | 19 ++++------- 3 files changed, 93 insertions(+), 12 deletions(-)
--- a/include/linux/pid.h +++ b/include/linux/pid.h @@ -80,6 +80,7 @@ extern struct pid *pidfd_pid(const struc struct pid *pidfd_get_pid(unsigned int fd, unsigned int *flags); struct task_struct *pidfd_get_task(int pidfd, unsigned int *flags); int pidfd_create(struct pid *pid, unsigned int flags); +int pidfd_prepare(struct pid *pid, unsigned int flags, struct file **ret);
static inline struct pid *get_pid(struct pid *pid) { --- a/kernel/fork.c +++ b/kernel/fork.c @@ -1943,6 +1943,91 @@ const struct file_operations pidfd_fops #endif };
+/** + * __pidfd_prepare - allocate a new pidfd_file and reserve a pidfd + * @pid: the struct pid for which to create a pidfd + * @flags: flags of the new @pidfd + * @pidfd: the pidfd to return + * + * Allocate a new file that stashes @pid and reserve a new pidfd number in the + * caller's file descriptor table. The pidfd is reserved but not installed yet. + + * The helper doesn't perform checks on @pid which makes it useful for pidfds + * created via CLONE_PIDFD where @pid has no task attached when the pidfd and + * pidfd file are prepared. + * + * If this function returns successfully the caller is responsible to either + * call fd_install() passing the returned pidfd and pidfd file as arguments in + * order to install the pidfd into its file descriptor table or they must use + * put_unused_fd() and fput() on the returned pidfd and pidfd file + * respectively. + * + * This function is useful when a pidfd must already be reserved but there + * might still be points of failure afterwards and the caller wants to ensure + * that no pidfd is leaked into its file descriptor table. + * + * Return: On success, a reserved pidfd is returned from the function and a new + * pidfd file is returned in the last argument to the function. On + * error, a negative error code is returned from the function and the + * last argument remains unchanged. + */ +static int __pidfd_prepare(struct pid *pid, unsigned int flags, struct file **ret) +{ + int pidfd; + struct file *pidfd_file; + + if (flags & ~(O_NONBLOCK | O_RDWR | O_CLOEXEC)) + return -EINVAL; + + pidfd = get_unused_fd_flags(O_RDWR | O_CLOEXEC); + if (pidfd < 0) + return pidfd; + + pidfd_file = anon_inode_getfile("[pidfd]", &pidfd_fops, pid, + flags | O_RDWR | O_CLOEXEC); + if (IS_ERR(pidfd_file)) { + put_unused_fd(pidfd); + return PTR_ERR(pidfd_file); + } + get_pid(pid); /* held by pidfd_file now */ + *ret = pidfd_file; + return pidfd; +} + +/** + * pidfd_prepare - allocate a new pidfd_file and reserve a pidfd + * @pid: the struct pid for which to create a pidfd + * @flags: flags of the new @pidfd + * @pidfd: the pidfd to return + * + * Allocate a new file that stashes @pid and reserve a new pidfd number in the + * caller's file descriptor table. The pidfd is reserved but not installed yet. + * + * The helper verifies that @pid is used as a thread group leader. + * + * If this function returns successfully the caller is responsible to either + * call fd_install() passing the returned pidfd and pidfd file as arguments in + * order to install the pidfd into its file descriptor table or they must use + * put_unused_fd() and fput() on the returned pidfd and pidfd file + * respectively. + * + * This function is useful when a pidfd must already be reserved but there + * might still be points of failure afterwards and the caller wants to ensure + * that no pidfd is leaked into its file descriptor table. + * + * Return: On success, a reserved pidfd is returned from the function and a new + * pidfd file is returned in the last argument to the function. On + * error, a negative error code is returned from the function and the + * last argument remains unchanged. 
+ */ +int pidfd_prepare(struct pid *pid, unsigned int flags, struct file **ret) +{ + if (!pid || !pid_has_task(pid, PIDTYPE_TGID)) + return -EINVAL; + + return __pidfd_prepare(pid, flags, ret); +} + static void __delayed_free_task(struct rcu_head *rhp) { struct task_struct *tsk = container_of(rhp, struct task_struct, rcu); --- a/kernel/pid.c +++ b/kernel/pid.c @@ -594,20 +594,15 @@ struct task_struct *pidfd_get_task(int p */ int pidfd_create(struct pid *pid, unsigned int flags) { - int fd; + int pidfd; + struct file *pidfd_file;
- if (!pid || !pid_has_task(pid, PIDTYPE_TGID)) - return -EINVAL; + pidfd = pidfd_prepare(pid, flags, &pidfd_file); + if (pidfd < 0) + return pidfd;
- if (flags & ~(O_NONBLOCK | O_RDWR | O_CLOEXEC)) - return -EINVAL; - - fd = anon_inode_getfd("[pidfd]", &pidfd_fops, get_pid(pid), - flags | O_RDWR | O_CLOEXEC); - if (fd < 0) - put_pid(pid); - - return fd; + fd_install(pidfd, pidfd_file); + return pidfd; }
/**
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christian Brauner brauner@kernel.org
commit ca7707f5430ad6b1c9cb7cee0a7f67d69328bb2d upstream.
Stop open-coding get_unused_fd_flags() and anon_inode_getfile(). That's brittle just for keeping the flags between both calls in sync. Use the dedicated helper.
Message-Id: 20230327-pidfd-file-api-v1-2-5c0e9a3158e4@kernel.org Signed-off-by: Christian Brauner brauner@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/fork.c | 13 ++----------- 1 file changed, 2 insertions(+), 11 deletions(-)
--- a/kernel/fork.c +++ b/kernel/fork.c @@ -2378,21 +2378,12 @@ static __latent_entropy struct task_stru * if the fd table isn't shared). */ if (clone_flags & CLONE_PIDFD) { - retval = get_unused_fd_flags(O_RDWR | O_CLOEXEC); + /* Note that no task has been attached to @pid yet. */ + retval = __pidfd_prepare(pid, O_RDWR | O_CLOEXEC, &pidfile); if (retval < 0) goto bad_fork_free_pid; - pidfd = retval;
- pidfile = anon_inode_getfile("[pidfd]", &pidfd_fops, pid, - O_RDWR | O_CLOEXEC); - if (IS_ERR(pidfile)) { - put_unused_fd(pidfd); - retval = PTR_ERR(pidfile); - goto bad_fork_free_pid; - } - get_pid(pid); /* held by pidfile now */ - retval = put_user(pidfd, args->pidfd); if (retval) goto bad_fork_put_pidfd;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christian Brauner brauner@kernel.org
commit b5325b2a270fcaf7b2a9a0f23d422ca8a5a8bdea upstream.
Give userspace a way to instruct the kernel to install a pidfd into the usermode helper process. This makes coredump handling a lot more reliable for userspace. In parallel with this commit we already have systemd adding support for this in [1].
We create a pidfs file for the coredumping process when we process the corename pattern. When the usermode helper process is forked we then install the pidfs file as file descriptor three into the usermode helpers file descriptor table so it's available to the exec'd program.
Since usermode helpers are either children of the system_unbound_wq workqueue or kthreadd we know that the file descriptor table is empty and can thus always use three as the file descriptor number.
Note, that we'll install a pidfd for the thread-group leader even if a subthread is calling do_coredump(). We know that task linkage hasn't been removed due to delay_group_leader() and even if this @current isn't the actual thread-group leader we know that the thread-group leader cannot be reaped until @current has exited.
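To illustrate the consumer side, here is a hedged sketch of a minimal userspace helper; the helper path in the example core_pattern is hypothetical, and the sketch only assumes what the patch states, namely that the core arrives on stdin and the pidfd is installed as fd 3 (COREDUMP_PIDFD_NUMBER) when the pattern contains %F, e.g. core_pattern = "|/usr/local/bin/core-helper %F":

	#include <poll.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[4096];
		ssize_t n;
		struct pollfd pfd = {
			.fd = 3,		/* COREDUMP_PIDFD_NUMBER, installed by the kernel */
			.events = POLLIN,	/* a pidfd becomes readable once the task exits */
		};

		/* Drain the core dump from stdin; a real helper would store it. */
		while ((n = read(STDIN_FILENO, buf, sizeof(buf))) > 0)
			;

		/* The pidfd refers to the thread-group leader and cannot be
		 * confused with a recycled PID; poll it once as a demonstration.
		 */
		poll(&pfd, 1, 0);
		return 0;
	}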
[brauner: This is a backport for the v6.1 series. Upstream has significantly changed and backporting all that infra is a non-starter. So simply backport the pidfd_prepare() helper and waste the file descriptor we allocated. Then we minimally massage the umh coredump setup code.]
Link: https://github.com/systemd/systemd/pull/37125 [1] Link: https://lore.kernel.org/20250414-work-coredump-v2-3-685bf231f828@kernel.org Tested-by: Luca Boccassi luca.boccassi@gmail.com Reviewed-by: Oleg Nesterov oleg@redhat.com Signed-off-by: Christian Brauner brauner@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/coredump.c | 78 ++++++++++++++++++++++++++++++++++++++++++----- include/linux/coredump.h | 1 2 files changed, 72 insertions(+), 7 deletions(-)
--- a/fs/coredump.c +++ b/fs/coredump.c @@ -42,6 +42,7 @@ #include <linux/timekeeping.h> #include <linux/sysctl.h> #include <linux/elf.h> +#include <uapi/linux/pidfd.h>
#include <linux/uaccess.h> #include <asm/mmu_context.h> @@ -56,6 +57,13 @@ static bool dump_vma_snapshot(struct coredump_params *cprm); static void free_vma_snapshot(struct coredump_params *cprm);
+/* + * File descriptor number for the pidfd for the thread-group leader of + * the coredumping task installed into the usermode helper's file + * descriptor table. + */ +#define COREDUMP_PIDFD_NUMBER 3 + static int core_uses_pid; static unsigned int core_pipe_limit; static char core_pattern[CORENAME_MAX_SIZE] = "core"; @@ -325,6 +333,27 @@ static int format_corename(struct core_n err = cn_printf(cn, "%lu", rlimit(RLIMIT_CORE)); break; + /* pidfd number */ + case 'F': { + /* + * Installing a pidfd only makes sense if + * we actually spawn a usermode helper. + */ + if (!ispipe) + break; + + /* + * Note that we'll install a pidfd for the + * thread-group leader. We know that task + * linkage hasn't been removed yet and even if + * this @current isn't the actual thread-group + * leader we know that the thread-group leader + * cannot be reaped until @current has exited. + */ + cprm->pid = task_tgid(current); + err = cn_printf(cn, "%d", COREDUMP_PIDFD_NUMBER); + break; + } default: break; } @@ -479,7 +508,7 @@ static void wait_for_dump_helpers(struct }
/* - * umh_pipe_setup + * umh_coredump_setup * helper function to customize the process used * to collect the core in userspace. Specifically * it sets up a pipe and installs it as fd 0 (stdin) @@ -489,27 +518,62 @@ static void wait_for_dump_helpers(struct * is a special value that we use to trap recursive * core dumps */ -static int umh_pipe_setup(struct subprocess_info *info, struct cred *new) +static int umh_coredump_setup(struct subprocess_info *info, struct cred *new) { struct file *files[2]; + struct file *pidfs_file = NULL; struct coredump_params *cp = (struct coredump_params *)info->data; int err;
+ if (cp->pid) { + int fd; + + fd = pidfd_prepare(cp->pid, 0, &pidfs_file); + if (fd < 0) + return fd; + + /* + * We don't care about the fd. We also cannot simply + * replace it below because dup2() will refuse to close + * this file descriptor if its in a larval state. So + * close it! + */ + put_unused_fd(fd); + + /* + * Usermode helpers are childen of either + * system_unbound_wq or of kthreadd. So we know that + * we're starting off with a clean file descriptor + * table. So we should always be able to use + * COREDUMP_PIDFD_NUMBER as our file descriptor value. + */ + err = replace_fd(COREDUMP_PIDFD_NUMBER, pidfs_file, 0); + if (err < 0) + goto out_fail; + + pidfs_file = NULL; + } + err = create_pipe_files(files, 0); if (err) - return err; + goto out_fail;
cp->file = files[1];
err = replace_fd(0, files[0], 0); fput(files[0]); if (err < 0) - return err; + goto out_fail;
/* and disallow core files too */ current->signal->rlim[RLIMIT_CORE] = (struct rlimit){1, 1};
- return 0; + err = 0; + +out_fail: + if (pidfs_file) + fput(pidfs_file); + return err; }
void do_coredump(const kernel_siginfo_t *siginfo) @@ -585,7 +649,7 @@ void do_coredump(const kernel_siginfo_t }
if (cprm.limit == 1) { - /* See umh_pipe_setup() which sets RLIMIT_CORE = 1. + /* See umh_coredump_setup() which sets RLIMIT_CORE = 1. * * Normally core limits are irrelevant to pipes, since * we're not writing to the file system, but we use @@ -630,7 +694,7 @@ void do_coredump(const kernel_siginfo_t retval = -ENOMEM; sub_info = call_usermodehelper_setup(helper_argv[0], helper_argv, NULL, GFP_KERNEL, - umh_pipe_setup, NULL, &cprm); + umh_coredump_setup, NULL, &cprm); if (sub_info) retval = call_usermodehelper_exec(sub_info, UMH_WAIT_EXEC); --- a/include/linux/coredump.h +++ b/include/linux/coredump.h @@ -28,6 +28,7 @@ struct coredump_params { int vma_count; size_t vma_data_size; struct core_vma_metadata *vma_meta; + struct pid *pid; };
/*
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Milton Barrera miltonjosue2001@gmail.com
[ Upstream commit fa9fdeea1b7d6440c22efa6d59a769eae8bc89f1 ]
This patch adds HID_QUIRK_ALWAYS_POLL for the ADATA XPG wireless gaming mouse (USB ID 125f:7505) and its USB dongle (USB ID 125f:7506). Without this quirk, the device does not generate input events properly.
Signed-off-by: Milton Barrera miltonjosue2001@gmail.com Signed-off-by: Jiri Kosina jkosina@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hid/hid-ids.h | 4 ++++ drivers/hid/hid-quirks.c | 2 ++ 2 files changed, 6 insertions(+)
diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h index 4187d890bcc1a..e078d2ac92c87 100644 --- a/drivers/hid/hid-ids.h +++ b/drivers/hid/hid-ids.h @@ -41,6 +41,10 @@ #define USB_VENDOR_ID_ACTIONSTAR 0x2101 #define USB_DEVICE_ID_ACTIONSTAR_1011 0x1011
+#define USB_VENDOR_ID_ADATA_XPG 0x125f +#define USB_VENDOR_ID_ADATA_XPG_WL_GAMING_MOUSE 0x7505 +#define USB_VENDOR_ID_ADATA_XPG_WL_GAMING_MOUSE_DONGLE 0x7506 + #define USB_VENDOR_ID_ADS_TECH 0x06e1 #define USB_DEVICE_ID_ADS_TECH_RADIO_SI470X 0xa155
diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c index 875c44e5cf6c2..d8c5c7d451efd 100644 --- a/drivers/hid/hid-quirks.c +++ b/drivers/hid/hid-quirks.c @@ -27,6 +27,8 @@ static const struct hid_device_id hid_quirks[] = { { HID_USB_DEVICE(USB_VENDOR_ID_AASHIMA, USB_DEVICE_ID_AASHIMA_GAMEPAD), HID_QUIRK_BADPAD }, { HID_USB_DEVICE(USB_VENDOR_ID_AASHIMA, USB_DEVICE_ID_AASHIMA_PREDATOR), HID_QUIRK_BADPAD }, + { HID_USB_DEVICE(USB_VENDOR_ID_ADATA_XPG, USB_VENDOR_ID_ADATA_XPG_WL_GAMING_MOUSE), HID_QUIRK_ALWAYS_POLL }, + { HID_USB_DEVICE(USB_VENDOR_ID_ADATA_XPG, USB_VENDOR_ID_ADATA_XPG_WL_GAMING_MOUSE_DONGLE), HID_QUIRK_ALWAYS_POLL }, { HID_USB_DEVICE(USB_VENDOR_ID_AFATECH, USB_DEVICE_ID_AFATECH_AF9016), HID_QUIRK_FULLSPEED_INTERVAL }, { HID_USB_DEVICE(USB_VENDOR_ID_AIREN, USB_DEVICE_ID_AIREN_SLIMPLUS), HID_QUIRK_NOGET }, { HID_USB_DEVICE(USB_VENDOR_ID_AKAI_09E8, USB_DEVICE_ID_AKAI_09E8_MIDIMIX), HID_QUIRK_NO_INIT_REPORTS },
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jeff Layton jlayton@kernel.org
[ Upstream commit 6b9785dc8b13d9fb75ceec8cf4ea7ec3f3b1edbc ]
Currently, different NFS clients can share the same DS connections, even when they are in different net namespaces. If a containerized client creates a DS connection, another container can find and use it. When the first client exits, the connection will close, which can lead to stalls in other clients.
Add a net namespace pointer to struct nfs4_pnfs_ds, and compare that value to the caller's netns in _data_server_lookup_locked() when searching for an nfs4_pnfs_ds to match.
Reported-by: Omar Sandoval osandov@osandov.com Reported-by: Sargun Dillon sargun@sargun.me Closes: https://lore.kernel.org/linux-nfs/Z_ArpQC_vREh_hEA@telecaster/ Tested-by: Sargun Dillon sargun@sargun.me Signed-off-by: Jeff Layton jlayton@kernel.org Reviewed-by: Benjamin Coddington bcodding@redhat.com Link: https://lore.kernel.org/r/20250410-nfs-ds-netns-v2-1-f80b7979ba80@kernel.org Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/nfs/filelayout/filelayoutdev.c | 6 +++--- fs/nfs/flexfilelayout/flexfilelayoutdev.c | 6 +++--- fs/nfs/pnfs.h | 4 +++- fs/nfs/pnfs_nfs.c | 9 +++++---- 4 files changed, 14 insertions(+), 11 deletions(-)
diff --git a/fs/nfs/filelayout/filelayoutdev.c b/fs/nfs/filelayout/filelayoutdev.c index acf4b88889dc3..d5f1fbfd9a0c7 100644 --- a/fs/nfs/filelayout/filelayoutdev.c +++ b/fs/nfs/filelayout/filelayoutdev.c @@ -75,6 +75,7 @@ nfs4_fl_alloc_deviceid_node(struct nfs_server *server, struct pnfs_device *pdev, struct page *scratch; struct list_head dsaddrs; struct nfs4_pnfs_ds_addr *da; + struct net *net = server->nfs_client->cl_net;
/* set up xdr stream */ scratch = alloc_page(gfp_flags); @@ -158,8 +159,7 @@ nfs4_fl_alloc_deviceid_node(struct nfs_server *server, struct pnfs_device *pdev,
mp_count = be32_to_cpup(p); /* multipath count */ for (j = 0; j < mp_count; j++) { - da = nfs4_decode_mp_ds_addr(server->nfs_client->cl_net, - &stream, gfp_flags); + da = nfs4_decode_mp_ds_addr(net, &stream, gfp_flags); if (da) list_add_tail(&da->da_node, &dsaddrs); } @@ -169,7 +169,7 @@ nfs4_fl_alloc_deviceid_node(struct nfs_server *server, struct pnfs_device *pdev, goto out_err_free_deviceid; }
- dsaddr->ds_list[i] = nfs4_pnfs_ds_add(&dsaddrs, gfp_flags); + dsaddr->ds_list[i] = nfs4_pnfs_ds_add(net, &dsaddrs, gfp_flags); if (!dsaddr->ds_list[i]) goto out_err_drain_dsaddrs;
diff --git a/fs/nfs/flexfilelayout/flexfilelayoutdev.c b/fs/nfs/flexfilelayout/flexfilelayoutdev.c index e028f5a0ef5f6..d21c5ecfbf1cc 100644 --- a/fs/nfs/flexfilelayout/flexfilelayoutdev.c +++ b/fs/nfs/flexfilelayout/flexfilelayoutdev.c @@ -49,6 +49,7 @@ nfs4_ff_alloc_deviceid_node(struct nfs_server *server, struct pnfs_device *pdev, struct nfs4_pnfs_ds_addr *da; struct nfs4_ff_layout_ds *new_ds = NULL; struct nfs4_ff_ds_version *ds_versions = NULL; + struct net *net = server->nfs_client->cl_net; u32 mp_count; u32 version_count; __be32 *p; @@ -80,8 +81,7 @@ nfs4_ff_alloc_deviceid_node(struct nfs_server *server, struct pnfs_device *pdev,
for (i = 0; i < mp_count; i++) { /* multipath ds */ - da = nfs4_decode_mp_ds_addr(server->nfs_client->cl_net, - &stream, gfp_flags); + da = nfs4_decode_mp_ds_addr(net, &stream, gfp_flags); if (da) list_add_tail(&da->da_node, &dsaddrs); } @@ -149,7 +149,7 @@ nfs4_ff_alloc_deviceid_node(struct nfs_server *server, struct pnfs_device *pdev, new_ds->ds_versions = ds_versions; new_ds->ds_versions_cnt = version_count;
- new_ds->ds = nfs4_pnfs_ds_add(&dsaddrs, gfp_flags); + new_ds->ds = nfs4_pnfs_ds_add(net, &dsaddrs, gfp_flags); if (!new_ds->ds) goto out_err_drain_dsaddrs;
diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h index e3e6a41f19de6..f5173c1881845 100644 --- a/fs/nfs/pnfs.h +++ b/fs/nfs/pnfs.h @@ -59,6 +59,7 @@ struct nfs4_pnfs_ds { struct list_head ds_node; /* nfs4_pnfs_dev_hlist dev_dslist */ char *ds_remotestr; /* comma sep list of addrs */ struct list_head ds_addrs; + const struct net *ds_net; struct nfs_client *ds_clp; refcount_t ds_count; unsigned long ds_state; @@ -405,7 +406,8 @@ int pnfs_generic_commit_pagelist(struct inode *inode, int pnfs_generic_scan_commit_lists(struct nfs_commit_info *cinfo, int max); void pnfs_generic_write_commit_done(struct rpc_task *task, void *data); void nfs4_pnfs_ds_put(struct nfs4_pnfs_ds *ds); -struct nfs4_pnfs_ds *nfs4_pnfs_ds_add(struct list_head *dsaddrs, +struct nfs4_pnfs_ds *nfs4_pnfs_ds_add(const struct net *net, + struct list_head *dsaddrs, gfp_t gfp_flags); void nfs4_pnfs_v3_ds_connect_unload(void); int nfs4_pnfs_ds_connect(struct nfs_server *mds_srv, struct nfs4_pnfs_ds *ds, diff --git a/fs/nfs/pnfs_nfs.c b/fs/nfs/pnfs_nfs.c index 47a8da3f5c9ff..31afa88742f62 100644 --- a/fs/nfs/pnfs_nfs.c +++ b/fs/nfs/pnfs_nfs.c @@ -651,12 +651,12 @@ _same_data_server_addrs_locked(const struct list_head *dsaddrs1, * Lookup DS by addresses. nfs4_ds_cache_lock is held */ static struct nfs4_pnfs_ds * -_data_server_lookup_locked(const struct list_head *dsaddrs) +_data_server_lookup_locked(const struct net *net, const struct list_head *dsaddrs) { struct nfs4_pnfs_ds *ds;
list_for_each_entry(ds, &nfs4_data_server_cache, ds_node) - if (_same_data_server_addrs_locked(&ds->ds_addrs, dsaddrs)) + if (ds->ds_net == net && _same_data_server_addrs_locked(&ds->ds_addrs, dsaddrs)) return ds; return NULL; } @@ -763,7 +763,7 @@ nfs4_pnfs_remotestr(struct list_head *dsaddrs, gfp_t gfp_flags) * uncached and return cached struct nfs4_pnfs_ds. */ struct nfs4_pnfs_ds * -nfs4_pnfs_ds_add(struct list_head *dsaddrs, gfp_t gfp_flags) +nfs4_pnfs_ds_add(const struct net *net, struct list_head *dsaddrs, gfp_t gfp_flags) { struct nfs4_pnfs_ds *tmp_ds, *ds = NULL; char *remotestr; @@ -781,13 +781,14 @@ nfs4_pnfs_ds_add(struct list_head *dsaddrs, gfp_t gfp_flags) remotestr = nfs4_pnfs_remotestr(dsaddrs, gfp_flags);
spin_lock(&nfs4_ds_cache_lock); - tmp_ds = _data_server_lookup_locked(dsaddrs); + tmp_ds = _data_server_lookup_locked(net, dsaddrs); if (tmp_ds == NULL) { INIT_LIST_HEAD(&ds->ds_addrs); list_splice_init(dsaddrs, &ds->ds_addrs); ds->ds_remotestr = remotestr; refcount_set(&ds->ds_count, 1); INIT_LIST_HEAD(&ds->ds_node); + ds->ds_net = net; ds->ds_clp = NULL; list_add(&ds->ds_node, &nfs4_data_server_cache); dprintk("%s add new data server %s\n", __func__,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: John Chau johnchau@0atlas.com
[ Upstream commit a032f29a15412fab9f4352e0032836d51420a338 ]
Change get_thinkpad_model_data() to check for the additional vendor name "NEC" in order to support the NEC Lavie X1475JAS notebook (and perhaps more).
The reason this works with minimal changes is that the NEC Lavie X1475JAS is a ThinkPad inside. ACPI dumps reveal its OEM ID to be "LENOVO", the BIOS version "R2PET30W" matches a typical Lenovo BIOS version, an HKEY device LEN0268 is present, and the DMI firmware string is "R2PHT24W".
I compiled and tested this on my own machine; the dmesg output below is attached as proof of work:
[ 6.288932] thinkpad_acpi: ThinkPad ACPI Extras v0.26
[ 6.288937] thinkpad_acpi: http://ibm-acpi.sf.net/
[ 6.288938] thinkpad_acpi: ThinkPad BIOS R2PET30W (1.11 ), EC R2PHT24W
[ 6.307000] thinkpad_acpi: radio switch found; radios are enabled
[ 6.307030] thinkpad_acpi: This ThinkPad has standard ACPI backlight brightness control, supported by the ACPI video driver
[ 6.307033] thinkpad_acpi: Disabling thinkpad-acpi brightness events by default...
[ 6.320322] thinkpad_acpi: rfkill switch tpacpi_bluetooth_sw: radio is unblocked
[ 6.371963] thinkpad_acpi: secondary fan control detected & enabled
[ 6.391922] thinkpad_acpi: battery 1 registered (start 0, stop 85, behaviours: 0x7)
[ 6.398375] input: ThinkPad Extra Buttons as /devices/platform/thinkpad_acpi/input/input13
Signed-off-by: John Chau johnchau@0atlas.com Link: https://lore.kernel.org/r/20250504165513.295135-1-johnchau@0atlas.com Reviewed-by: Ilpo Järvinen ilpo.jarvinen@linux.intel.com Signed-off-by: Ilpo Järvinen ilpo.jarvinen@linux.intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/platform/x86/thinkpad_acpi.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c index 26ca9c453a59c..295939e9ac69d 100644 --- a/drivers/platform/x86/thinkpad_acpi.c +++ b/drivers/platform/x86/thinkpad_acpi.c @@ -11517,6 +11517,8 @@ static int __must_check __init get_thinkpad_model_data( tp->vendor = PCI_VENDOR_ID_IBM; else if (dmi_name_in_vendors("LENOVO")) tp->vendor = PCI_VENDOR_ID_LENOVO; + else if (dmi_name_in_vendors("NEC")) + tp->vendor = PCI_VENDOR_ID_LENOVO; else return 0;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Masahiro Yamada masahiroy@kernel.org
[ Upstream commit ab09da75700e9d25c7dfbc7f7934920beb5e39b9 ]
Building the kernel with O= is affected by stale in-tree build artifacts.
So, if the source tree is not clean, Kbuild displays the following:
$ make ARCH=um O=build defconfig
make[1]: Entering directory '/.../linux/build'
***
*** The source tree is not clean, please run 'make ARCH=um mrproper'
*** in /.../linux
***
make[2]: *** [/.../linux/Makefile:673: outputmakefile] Error 1
make[1]: *** [/.../linux/Makefile:248: __sub-make] Error 2
make[1]: Leaving directory '/.../linux/build'
make: *** [Makefile:248: __sub-make] Error 2
Usually, running 'make mrproper' is sufficient for cleaning the source tree for out-of-tree builds.
However, building UML generates build artifacts not only in arch/um/, but also in the SUBARCH directory (i.e., arch/x86/). If in-tree stale files remain under arch/x86/, Kbuild will reuse them instead of creating new ones under the specified build directory.
This commit makes 'make ARCH=um clean' recurse into the SUBARCH directory.
Reported-by: Shuah Khan skhan@linuxfoundation.org Closes: https://lore.kernel.org/lkml/20250502172459.14175-1-skhan@linuxfoundation.or... Signed-off-by: Masahiro Yamada masahiroy@kernel.org Acked-by: Johannes Berg johannes@sipsolutions.net Reviewed-by: David Gow davidgow@google.com Reviewed-by: Shuah Khan skhan@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/um/Makefile | 1 + 1 file changed, 1 insertion(+)
diff --git a/arch/um/Makefile b/arch/um/Makefile index 778c50f273992..25d0501549f54 100644 --- a/arch/um/Makefile +++ b/arch/um/Makefile @@ -155,5 +155,6 @@ MRPROPER_FILES += $(HOST_DIR)/include/generated archclean: @find . ( -name '*.bb' -o -name '*.bbg' -o -name '*.da' \ -o -name '*.gcov' ) -type f -print | xargs rm -f + $(Q)$(MAKE) -f $(srctree)/Makefile ARCH=$(HEADER_ARCH) clean
export HEADER_ARCH SUBARCH USER_CFLAGS CFLAGS_NO_HARDENING OS DEV_NULL_PATH
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alessandro Grassi alessandro.grassi@mailbox.org
[ Upstream commit fb98bd0a13de2c9d96cb5c00c81b5ca118ac9d71 ]
The SPI interface is activated before the CPOL setting is applied. At that moment, the clock idles high and CS goes low. After a short delay, CPOL and other settings are applied, which may cause the clock to change state and idle low. This transition is not part of a clock cycle, and it can confuse the receiving device.
To prevent this unexpected transition, activate the interface while CPOL and the other settings are being applied.
Signed-off-by: Alessandro Grassi alessandro.grassi@mailbox.org Link: https://patch.msgid.link/20250502095520.13825-1-alessandro.grassi@mailbox.or... Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/spi/spi-sun4i.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/spi/spi-sun4i.c b/drivers/spi/spi-sun4i.c index 6000d0761206c..6937f5c4d868f 100644 --- a/drivers/spi/spi-sun4i.c +++ b/drivers/spi/spi-sun4i.c @@ -263,6 +263,9 @@ static int sun4i_spi_transfer_one(struct spi_master *master, else reg |= SUN4I_CTL_DHB;
+ /* Now that the settings are correct, enable the interface */ + reg |= SUN4I_CTL_ENABLE; + sun4i_spi_write(sspi, SUN4I_CTL_REG, reg);
/* Ensure that we have a parent clock fast enough */ @@ -403,7 +406,7 @@ static int sun4i_spi_runtime_resume(struct device *dev) }
sun4i_spi_write(sspi, SUN4I_CTL_REG, - SUN4I_CTL_ENABLE | SUN4I_CTL_MASTER | SUN4I_CTL_TP); + SUN4I_CTL_MASTER | SUN4I_CTL_TP);
return 0;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ilya Guterman amfernusus@gmail.com
[ Upstream commit e765bf89f42b5c82132a556b630affeb82b2a21f ]
This commit adds the NVME_QUIRK_NO_DEEPEST_PS quirk for device [025e:f1ac], the SOLIDIGM P44 Pro SSDPFKKW020X7.
The device frequently has trouble exiting the deepest power state (5), leaving the entire disk unresponsive.
Verified by setting nvme_core.default_ps_max_latency_us=10000 and observing the expected behavior.
Signed-off-by: Ilya Guterman amfernusus@gmail.com Signed-off-by: Christoph Hellwig hch@lst.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/pci.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index 49a3cb8f1f105..218c1d69090ec 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -3592,6 +3592,8 @@ static const struct pci_device_id nvme_id_table[] = { .driver_data = NVME_QUIRK_NO_DEEPEST_PS, }, { PCI_DEVICE(0x1e49, 0x0041), /* ZHITAI TiPro7000 NVMe SSD */ .driver_data = NVME_QUIRK_NO_DEEPEST_PS, }, + { PCI_DEVICE(0x025e, 0xf1ac), /* SOLIDIGM P44 pro SSDPFKKW020X7 */ + .driver_data = NVME_QUIRK_NO_DEEPEST_PS, }, { PCI_DEVICE(0xc0a9, 0x540a), /* Crucial P2 */ .driver_data = NVME_QUIRK_BOGUS_NID, }, { PCI_DEVICE(0x1d97, 0x2263), /* Lexar NM610 */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust trond.myklebust@hammerspace.com
[ Upstream commit dcd21b609d4abc7303f8683bce4f35d78d7d6830 ]
The Linux client assumes that all filehandles are non-volatile for renames within the same directory (otherwise sillyrename cannot work). However, the existence of the Linux 'subtree_check' export option has meant that nfs_rename() has always assumed it needs to flush writes before attempting to rename.
Since NFSv4 does allow the client to query whether or not the server exhibits this behaviour, and since knfsd does actually set the appropriate flag when 'subtree_check' is enabled on an export, it should be OK to optimise away the write flushing behaviour in the cases where it is clearly not needed.
Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com Reviewed-by: Jeff Layton jlayton@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/nfs/client.c | 2 ++ fs/nfs/dir.c | 15 ++++++++++++++- include/linux/nfs_fs_sb.h | 12 +++++++++--- 3 files changed, 25 insertions(+), 4 deletions(-)
diff --git a/fs/nfs/client.c b/fs/nfs/client.c index a8930e6c417fc..de4ad41b14e2a 100644 --- a/fs/nfs/client.c +++ b/fs/nfs/client.c @@ -1052,6 +1052,8 @@ struct nfs_server *nfs_create_server(struct fs_context *fc) if (server->namelen == 0 || server->namelen > NFS2_MAXNAMLEN) server->namelen = NFS2_MAXNAMLEN; } + /* Linux 'subtree_check' borkenness mandates this setting */ + server->fh_expire_type = NFS_FH_VOL_RENAME;
if (!(fattr->valid & NFS_ATTR_FATTR)) { error = ctx->nfs_mod->rpc_ops->getattr(server, ctx->mntfh, diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c index 70660ff248b79..1876978107ca1 100644 --- a/fs/nfs/dir.c +++ b/fs/nfs/dir.c @@ -2632,6 +2632,18 @@ nfs_unblock_rename(struct rpc_task *task, struct nfs_renamedata *data) unblock_revalidate(new_dentry); }
+static bool nfs_rename_is_unsafe_cross_dir(struct dentry *old_dentry, + struct dentry *new_dentry) +{ + struct nfs_server *server = NFS_SB(old_dentry->d_sb); + + if (old_dentry->d_parent != new_dentry->d_parent) + return false; + if (server->fh_expire_type & NFS_FH_RENAME_UNSAFE) + return !(server->fh_expire_type & NFS_FH_NOEXPIRE_WITH_OPEN); + return true; +} + /* * RENAME * FIXME: Some nfsds, like the Linux user space nfsd, may generate a @@ -2719,7 +2731,8 @@ int nfs_rename(struct user_namespace *mnt_userns, struct inode *old_dir,
}
- if (S_ISREG(old_inode->i_mode)) + if (S_ISREG(old_inode->i_mode) && + nfs_rename_is_unsafe_cross_dir(old_dentry, new_dentry)) nfs_sync_inode(old_inode); task = nfs_async_rename(old_dir, new_dir, old_dentry, new_dentry, must_unblock ? nfs_unblock_rename : NULL); diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h index 9ea9f9087a712..a9671f9300848 100644 --- a/include/linux/nfs_fs_sb.h +++ b/include/linux/nfs_fs_sb.h @@ -196,6 +196,15 @@ struct nfs_server { char *fscache_uniq; /* Uniquifier (or NULL) */ #endif
+ /* The following #defines numerically match the NFSv4 equivalents */ +#define NFS_FH_NOEXPIRE_WITH_OPEN (0x1) +#define NFS_FH_VOLATILE_ANY (0x2) +#define NFS_FH_VOL_MIGRATION (0x4) +#define NFS_FH_VOL_RENAME (0x8) +#define NFS_FH_RENAME_UNSAFE (NFS_FH_VOLATILE_ANY | NFS_FH_VOL_RENAME) + u32 fh_expire_type; /* V4 bitmask representing file + handle volatility type for + this filesystem */ u32 pnfs_blksize; /* layout_blksize attr */ #if IS_ENABLED(CONFIG_NFS_V4) u32 attr_bitmask[3];/* V4 bitmask representing the set @@ -219,9 +228,6 @@ struct nfs_server { u32 acl_bitmask; /* V4 bitmask representing the ACEs that are supported on this filesystem */ - u32 fh_expire_type; /* V4 bitmask representing file - handle volatility type for - this filesystem */ struct pnfs_layoutdriver_type *pnfs_curr_ld; /* Active layout driver */ struct rpc_wait_queue roc_rpcwaitq; void *pnfs_ld_data; /* per mount point data */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Valtteri Koskivuori vkoskiv@gmail.com
[ Upstream commit a7e255ff9fe4d9b8b902023aaf5b7a673786bb50 ]
The S2110 has an additional set of media playback control keys enabled by a hardware toggle button that switches the keys between "Application" and "Player" modes. Toggling "Player" mode just shifts the scancode of each hotkey up by 4.
Add defines for new scancodes, and a keymap and dmi id for the S2110.
Tested on a Fujitsu Lifebook S2110.
Signed-off-by: Valtteri Koskivuori vkoskiv@gmail.com Acked-by: Jonathan Woithe jwoithe@just42.net Link: https://lore.kernel.org/r/20250509184251.713003-1-vkoskiv@gmail.com Reviewed-by: Ilpo Järvinen ilpo.jarvinen@linux.intel.com Signed-off-by: Ilpo Järvinen ilpo.jarvinen@linux.intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/platform/x86/fujitsu-laptop.c | 33 +++++++++++++++++++++++---- 1 file changed, 29 insertions(+), 4 deletions(-)
diff --git a/drivers/platform/x86/fujitsu-laptop.c b/drivers/platform/x86/fujitsu-laptop.c index b543d117b12c7..259eb2a2858f8 100644 --- a/drivers/platform/x86/fujitsu-laptop.c +++ b/drivers/platform/x86/fujitsu-laptop.c @@ -17,13 +17,13 @@ /* * fujitsu-laptop.c - Fujitsu laptop support, providing access to additional * features made available on a range of Fujitsu laptops including the - * P2xxx/P5xxx/S6xxx/S7xxx series. + * P2xxx/P5xxx/S2xxx/S6xxx/S7xxx series. * * This driver implements a vendor-specific backlight control interface for * Fujitsu laptops and provides support for hotkeys present on certain Fujitsu * laptops. * - * This driver has been tested on a Fujitsu Lifebook S6410, S7020 and + * This driver has been tested on a Fujitsu Lifebook S2110, S6410, S7020 and * P8010. It should work on most P-series and S-series Lifebooks, but * YMMV. * @@ -102,7 +102,11 @@ #define KEY2_CODE 0x411 #define KEY3_CODE 0x412 #define KEY4_CODE 0x413 -#define KEY5_CODE 0x420 +#define KEY5_CODE 0x414 +#define KEY6_CODE 0x415 +#define KEY7_CODE 0x416 +#define KEY8_CODE 0x417 +#define KEY9_CODE 0x420
/* Hotkey ringbuffer limits */ #define MAX_HOTKEY_RINGBUFFER_SIZE 100 @@ -450,7 +454,7 @@ static const struct key_entry keymap_default[] = { { KE_KEY, KEY2_CODE, { KEY_PROG2 } }, { KE_KEY, KEY3_CODE, { KEY_PROG3 } }, { KE_KEY, KEY4_CODE, { KEY_PROG4 } }, - { KE_KEY, KEY5_CODE, { KEY_RFKILL } }, + { KE_KEY, KEY9_CODE, { KEY_RFKILL } }, /* Soft keys read from status flags */ { KE_KEY, FLAG_RFKILL, { KEY_RFKILL } }, { KE_KEY, FLAG_TOUCHPAD_TOGGLE, { KEY_TOUCHPAD_TOGGLE } }, @@ -474,6 +478,18 @@ static const struct key_entry keymap_p8010[] = { { KE_END, 0 } };
+static const struct key_entry keymap_s2110[] = { + { KE_KEY, KEY1_CODE, { KEY_PROG1 } }, /* "A" */ + { KE_KEY, KEY2_CODE, { KEY_PROG2 } }, /* "B" */ + { KE_KEY, KEY3_CODE, { KEY_WWW } }, /* "Internet" */ + { KE_KEY, KEY4_CODE, { KEY_EMAIL } }, /* "E-mail" */ + { KE_KEY, KEY5_CODE, { KEY_STOPCD } }, + { KE_KEY, KEY6_CODE, { KEY_PLAYPAUSE } }, + { KE_KEY, KEY7_CODE, { KEY_PREVIOUSSONG } }, + { KE_KEY, KEY8_CODE, { KEY_NEXTSONG } }, + { KE_END, 0 } +}; + static const struct key_entry *keymap = keymap_default;
static int fujitsu_laptop_dmi_keymap_override(const struct dmi_system_id *id) @@ -511,6 +527,15 @@ static const struct dmi_system_id fujitsu_laptop_dmi_table[] = { }, .driver_data = (void *)keymap_p8010 }, + { + .callback = fujitsu_laptop_dmi_keymap_override, + .ident = "Fujitsu LifeBook S2110", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"), + DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK S2110"), + }, + .driver_data = (void *)keymap_s2110 + }, {} };
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mark Pearson mpearson-lenovo@squebb.ca
[ Upstream commit 29e4e6b4235fefa5930affb531fe449cac330a72 ]
If the user modifies the battery charge threshold, an ACPI event is generated. Confirmed with the Lenovo FW team that this is only generated on a user event. As no action is needed, ignore the event and prevent spurious kernel logs.
Reported-by: Derek Barbosa debarbos@redhat.com Closes: https://lore.kernel.org/platform-driver-x86/7e9a1c47-5d9c-4978-af20-3949d53f... Signed-off-by: Mark Pearson mpearson-lenovo@squebb.ca Reviewed-by: Hans de Goede hdegoede@redhat.com Reviewed-by: Armin Wolf W_Armin@gmx.de Link: https://lore.kernel.org/r/20250517023348.2962591-1-mpearson-lenovo@squebb.ca Reviewed-by: Ilpo Järvinen ilpo.jarvinen@linux.intel.com Signed-off-by: Ilpo Järvinen ilpo.jarvinen@linux.intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/platform/x86/thinkpad_acpi.c | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c index 295939e9ac69d..17d74434e6046 100644 --- a/drivers/platform/x86/thinkpad_acpi.c +++ b/drivers/platform/x86/thinkpad_acpi.c @@ -211,6 +211,7 @@ enum tpacpi_hkey_event_t { /* Thermal events */ TP_HKEY_EV_ALARM_BAT_HOT = 0x6011, /* battery too hot */ TP_HKEY_EV_ALARM_BAT_XHOT = 0x6012, /* battery critically hot */ + TP_HKEY_EV_ALARM_BAT_LIM_CHANGE = 0x6013, /* battery charge limit changed*/ TP_HKEY_EV_ALARM_SENSOR_HOT = 0x6021, /* sensor too hot */ TP_HKEY_EV_ALARM_SENSOR_XHOT = 0x6022, /* sensor critically hot */ TP_HKEY_EV_THM_TABLE_CHANGED = 0x6030, /* windows; thermal table changed */ @@ -3948,6 +3949,10 @@ static bool hotkey_notify_6xxx(const u32 hkey, pr_alert("THERMAL EMERGENCY: battery is extremely hot!\n"); /* recommended action: immediate sleep/hibernate */ break; + case TP_HKEY_EV_ALARM_BAT_LIM_CHANGE: + pr_debug("Battery Info: battery charge threshold changed\n"); + /* User changed charging threshold. No action needed */ + return true; case TP_HKEY_EV_ALARM_SENSOR_HOT: pr_crit("THERMAL ALARM: a sensor reports something is too hot!\n"); /* recommended action: warn user through gui, that */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nishanth Menon nm@ti.com
[ Upstream commit 50980d8da71a0c2e045e85bba93c0099ab73a209 ]
Using a random MAC address is not an error since the driver continues to function; the message should merely inform that the system has not assigned a MAC address. This is in line with other drivers such as ax88796c and dm9051. Drop the error level to info level.
Signed-off-by: Nishanth Menon nm@ti.com
Reviewed-by: Simon Horman horms@kernel.org
Reviewed-by: Roger Quadros rogerq@kernel.org
Link: https://patch.msgid.link/20250516122655.442808-1-nm@ti.com
Signed-off-by: Jakub Kicinski kuba@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index 32828d4ac64ce..a0a9e4e13e77b 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -1918,7 +1918,7 @@ static int am65_cpsw_nuss_init_slave_ports(struct am65_cpsw_common *common)
 			       port->slave.mac_addr);
 		if (!is_valid_ether_addr(port->slave.mac_addr)) {
 			eth_random_addr(port->slave.mac_addr);
-			dev_err(dev, "Use random MAC address\n");
+			dev_info(dev, "Use random MAC address\n");
 		}
 	}
 }
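For context, the two changed lines sit inside a common fallback pattern: eth_random_addr() always yields a unicast, locally administered address, so the port still comes up with a usable MAC and the message is informational rather than an error. A self-contained sketch of that pattern (illustrative only; the function name is hypothetical and not part of the driver):

#include <linux/device.h>
#include <linux/etherdevice.h>

/* Illustrative sketch of the fallback, not the driver's full init path. */
static void assign_fallback_mac(struct device *dev, u8 *addr)
{
	if (!is_valid_ether_addr(addr)) {
		eth_random_addr(addr);	/* unicast, locally administered */
		dev_info(dev, "Use random MAC address\n");
	}
}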
On 02.06.2025 at 15:44, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.141 release. There are 325 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Builds, boots and works on my 2-socket Ivy Bridge Xeon E5-2697 v2 server. No dmesg oddities or regressions found.
Tested-by: Peter Schneider pschneider1968@googlemail.com
Best regards, Peter Schneider
On 6/2/25 06:44, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.141 release. There are 325 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
On ARCH_BRCMSTB using 32-bit and 64-bit ARM kernels, build tested on BMIPS_GENERIC:
Tested-by: Florian Fainelli florian.fainelli@broadcom.com
On 6/2/25 06:44, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.141 release. There are 325 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Built and booted successfully on RISC-V RV64 (HiFive Unmatched).
Tested-by: Ron Economos re@w6rz.net
On Mon, 2 Jun 2025 at 20:33, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 6.1.141 release. There are 325 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Results from Linaro’s test farm. No regressions on arm64, arm, x86_64, and i386.
Tested-by: Linux Kernel Functional Testing lkft@linaro.org
## Build
* kernel: 6.1.141-rc1
* git: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
* git commit: 1c3f3a4d0cca7742a95419a6c91fff4326a2de1c
* git describe: v6.1.140-326-g1c3f3a4d0cca
* test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.1.y/build/v6.1.14...
## Test Regressions (compared to v6.1.139-98-g1fb2f21fca77)
## Metric Regressions (compared to v6.1.139-98-g1fb2f21fca77)
## Test Fixes (compared to v6.1.139-98-g1fb2f21fca77)
## Metric Fixes (compared to v6.1.139-98-g1fb2f21fca77)
## Test result summary
total: 219869, pass: 201359, fail: 3409, skip: 14877, xfail: 224

## Build Summary
* arc: 5 total, 5 passed, 0 failed
* arm: 133 total, 133 passed, 0 failed
* arm64: 41 total, 41 passed, 0 failed
* i386: 21 total, 21 passed, 0 failed
* mips: 26 total, 25 passed, 1 failed
* parisc: 4 total, 4 passed, 0 failed
* powerpc: 32 total, 31 passed, 1 failed
* riscv: 11 total, 11 passed, 0 failed
* s390: 14 total, 14 passed, 0 failed
* sh: 10 total, 10 passed, 0 failed
* sparc: 7 total, 7 passed, 0 failed
* x86_64: 33 total, 33 passed, 0 failed

## Test suites summary
* boot
* commands
* kselftest-arm64
* kselftest-breakpoints
* kselftest-capabilities
* kselftest-clone3
* kselftest-core
* kselftest-cpu-hotplug
* kselftest-exec
* kselftest-fpu
* kselftest-futex
* kselftest-intel_pstate
* kselftest-kcmp
* kselftest-kvm
* kselftest-livepatch
* kselftest-membarrier
* kselftest-mincore
* kselftest-mqueue
* kselftest-openat2
* kselftest-ptrace
* kselftest-rseq
* kselftest-rtc
* kselftest-sigaltstack
* kselftest-size
* kselftest-timers
* kselftest-tmpfs
* kselftest-tpm2
* kselftest-user_events
* kselftest-vDSO
* kselftest-x86
* kunit
* kvm-unit-tests
* lava
* libgpiod
* libhugetlbfs
* log-parser-boot
* log-parser-build-clang
* log-parser-build-gcc
* log-parser-test
* ltp-capability
* ltp-commands
* ltp-containers
* ltp-controllers
* ltp-cpuhotplug
* ltp-crypto
* ltp-cve
* ltp-dio
* ltp-fcntl-locktests
* ltp-fs
* ltp-fs_bind
* ltp-fs_perms_simple
* ltp-hugetlb
* ltp-ipc
* ltp-math
* ltp-mm
* ltp-nptl
* ltp-pty
* ltp-sched
* ltp-smoke
* ltp-syscalls
* ltp-tracing
* modules
* perf
* rcutorture
-- Linaro LKFT https://lkft.linaro.org
Hi!
This is the start of the stable review cycle for the 6.1.141 release. There are 325 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
CIP testing did not find any problems here:
https://gitlab.com/cip-project/cip-testing/linux-stable-rc-ci/-/tree/linux-6...
Tested-by: Pavel Machek (CIP) pavel@denx.de
Best regards, Pavel
On 6/2/25 07:44, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.141 release. There are 325 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Compiled and booted on my test system. No dmesg regressions.
Tested-by: Shuah Khan skhan@linuxfoundation.org
thanks, -- Shuah
On Mon, Jun 02, 2025 at 03:44:36PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.141 release. There are 325 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Tested-by: Mark Brown broonie@kernel.org
On Mon, 02 Jun 2025 15:44:36 +0200 Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 6.1.141 release. There are 325 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Boot-tested under QEMU for Rust x86_64:
Tested-by: Miguel Ojeda ojeda@kernel.org
Thanks!
Cheers, Miguel
The kernel, bpf tool, and perf tool build fine for v6.1.141-rc1 on x86 and arm64 Azure VMs.
Kernel binary size for x86 build:
    text      data       bss       dec      hex  filename
25847143  11304250  16613376  53764769  33462a1  vmlinux

Kernel binary size for arm64 build:
    text      data      bss       dec      hex  filename
31273699  12549552   831088  44654339  2a95f03  vmlinux
Tested-by: Hardik Garg hargar@linux.microsoft.com
Thanks, Hardik
On Mon, 02 Jun 2025 15:44:36 +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.141 release. There are 325 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
All tests passing for Tegra ...
Test results for stable-v6.1:
  10 builds:  10 pass, 0 fail
  28 boots:   28 pass, 0 fail
  115 tests:  115 pass, 0 fail

Linux version: 6.1.141-rc1-g1c3f3a4d0cca
Boards tested: tegra124-jetson-tk1, tegra186-p2771-0000, tegra186-p3509-0000+p3636-0001, tegra194-p2972-0000, tegra194-p3509-0000+p3668-0000, tegra20-ventana, tegra210-p2371-2180, tegra210-p3450-0000, tegra30-cardhu-a04
Tested-by: Jon Hunter jonathanh@nvidia.com
Jon