This is the start of the stable review cycle for the 6.1.130 release. There are 176 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 07 Mar 2025 17:44:26 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.1.130-rc1...
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.1.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 6.1.130-rc1
Fullway Wang fullwaywang@outlook.com media: mtk-vcodec: potential null pointer deference in SCP
Quang Le quanglex97@gmail.com pfifo_tail_enqueue: Drop new packet when sch->limit == 0
Phillip Lougher phillip@squashfs.org.uk Squashfs: check the inode number is not the invalid value of zero
Jiaxun Yang jiaxun.yang@flygoat.com mm/memory: Use exception ip to search exception tables
Jiaxun Yang jiaxun.yang@flygoat.com ptrace: Introduce exception_ip arch hook
Thomas Gleixner tglx@linutronix.de intel_idle: Handle older CPUs, which stop the TSC in deeper C states, correctly
chr[] chris@rudorff.com amdgpu/pm/legacy: fix suspend/resume issues
Sohaib Nadeem sohaib.nadeem@amd.com drm/amd/display: fixed integer types and null check locations
Andreas Schwab schwab@suse.de riscv/futex: sign extend compare value in atomic cmpxchg
Thomas Gleixner tglx@linutronix.de sched/core: Prevent rescheduling when interrupts are disabled
Ard Biesheuvel ardb@kernel.org vmlinux.lds: Ensure that const vars with relocations are mapped R/O
Matthieu Baerts (NGI0) matttbe@kernel.org mptcp: reset when MPTCP opts are dropped after join
Paolo Abeni pabeni@redhat.com mptcp: always handle address removal under msk socket lock
Kaustabh Chakraborty kauschluss@disroot.org phy: exynos5-usbdrd: fix MPLL_MULTIPLIER and SSC_REFCLKSEL masks in refclk
BH Hsieh bhsieh@nvidia.com phy: tegra: xusb: reset VBUS & ID OVERRIDE
Wei Fang wei.fang@nxp.com net: enetc: fix the off-by-one issue in enetc_map_tx_tso_buffs()
Wei Fang wei.fang@nxp.com net: enetc: correct the xdp_tx statistics
Wei Fang wei.fang@nxp.com net: enetc: update UDP checksum when updating originTimestamp field
Wei Fang wei.fang@nxp.com net: enetc: keep track of correct Tx BD count in enetc_map_tx_tso_buffs()
Wei Fang wei.fang@nxp.com net: enetc: fix the off-by-one issue in enetc_map_tx_buffs()
Nikita Zhandarovich n.zhandarovich@fintech.ru usbnet: gl620a: fix endpoint checking in genelink_bind()
Tyrone Ting kfting@nuvoton.com i2c: npcm: disable interrupt enable bit before devm_request_irq
Roman Li Roman.Li@amd.com drm/amd/display: Fix HPD after gpu reset
Tom Chung chiahsuan.chung@amd.com drm/amd/display: Disable PSR-SU on eDP panels
Kan Liang kan.liang@linux.intel.com perf/core: Fix low freq setting via IOC_PERIOD
Kan Liang kan.liang@linux.intel.com perf/x86: Fix low freqency setting issue
Dmitry Panchenko dmitry@d-systems.ee ALSA: usb-audio: Re-add sample rate quirk for Pioneer DJM-900NXS2
Nikolay Kuratov kniv@yandex-team.ru ftrace: Avoid potential division by zero in function_stat_show()
Steven Rostedt rostedt@goodmis.org tracing: Fix bad hist from corrupting named_triggers list
Chukun Pan amadeus@jmu.edu.cn phy: rockchip: naneng-combphy: compatible reset with old DT
Russell Senior russell@personaltelco.net x86/CPU: Fix warm boot hang regression on AMD SC1100 SoC systems
Pavel Begunkov asml.silence@gmail.com io_uring/net: save msg_control for compat
Tong Tiangen tongtiangen@huawei.com uprobes: Reject the shared zeropage in uprobe_write_opcode()
David Howells dhowells@redhat.com mm: Don't pin ZERO_PAGE in pin_user_pages()
Justin Iurman justin.iurman@uliege.be net: ipv6: fix dst ref loop on input in rpl lwt
Justin Iurman justin.iurman@uliege.be net: ipv6: rpl_iptunnel: mitigate 2-realloc issue
Justin Iurman justin.iurman@uliege.be net: ipv6: fix dst ref loop on input in seg6 lwt
Justin Iurman justin.iurman@uliege.be net: ipv6: seg6_iptunnel: mitigate 2-realloc issue
Justin Iurman justin.iurman@uliege.be include: net: add static inline dst_dev_overhead() to dst.h
Shay Drory shayd@nvidia.com net/mlx5: IRQ, Fix null string in debug print
Harshal Chaudhari hchaudhari@marvell.com net: mvpp2: cls: Fixed Non IP flow, with vlan tag flow defination.
Mohammad Heib mheib@redhat.com net: Clear old fragment checksum value in napi_reuse_skb
Wang Hai wanghai38@huawei.com tcp: Defer ts_recent changes until req is owned
Philo Lu lulie@linux.alibaba.com ipvs: Always clear ipvs_property flag in skb_scrub_packet()
Nicolas Frattaroli nicolas.frattaroli@collabora.com ASoC: es8328: fix route from DAC to output
Sean Anderson sean.anderson@linux.dev net: cadence: macb: Synchronize stats calculations
Eric Dumazet edumazet@google.com ipvlan: ensure network headers are in skb linear part
Guillaume Nault gnault@redhat.com ipvlan: Prepare ipvlan_process_v4_outbound() to future .flowi4_tos conversion.
Guillaume Nault gnault@redhat.com ipv4: Convert ip_route_input() to dscp_t.
Guillaume Nault gnault@redhat.com ipv4: Convert icmp_route_lookup() to dscp_t.
Ido Schimmel idosch@nvidia.com ipvlan: Unmask upper DSCP bits in ipvlan_process_v4_outbound()
Ido Schimmel idosch@nvidia.com ipv4: icmp: Unmask upper DSCP bits in icmp_route_lookup()
Ido Schimmel idosch@nvidia.com ipv4: icmp: Pass full DS field to ip_route_input()
Peilin He he.peilin@zte.com.cn net/ipv4: add tracepoint for icmp_send
Jiri Slaby (SUSE) jirislaby@kernel.org net: set the minimum for net_hotdata.netdev_budget_usecs
Ido Schimmel idosch@nvidia.com net: loopback: Avoid sending IP packets without an Ethernet header
David Howells dhowells@redhat.com afs: Fix the server_list to unuse a displaced server rather than putting it
David Howells dhowells@redhat.com afs: Make it possible to find the volumes that are using a server
Colin Ian King colin.i.king@gmail.com afs: remove variable nr_servers
Luiz Augusto von Dentz luiz.von.dentz@intel.com Bluetooth: L2CAP: Fix L2CAP_ECRED_CONN_RSP response
Takashi Iwai tiwai@suse.de ALSA: usb-audio: Avoid dropping MIDI events at closing multiple ports
Arnd Bergmann arnd@arndb.de sunrpc: suppress warnings for unused procfs functions
Patrisious Haddad phaddad@nvidia.com RDMA/mlx5: Fix bind QP error cleanup flow
Ye Bin yebin10@huawei.com scsi: core: Clear driver private data when retrying request
Patrisious Haddad phaddad@nvidia.com RDMA/mlx5: Fix AH static rate parsing
Or Har-Toov ohartoov@nvidia.com IB/core: Add support for XDR link speed
Leon Romanovsky leon@kernel.org RDMA/mlx5: Reduce QP table exposure
Mark Zhang markzhang@nvidia.com RDMA/mlx: Calling qp event handler in workqueue context
Trond Myklebust trond.myklebust@hammerspace.com SUNRPC: Prevent looping due to rpc_signal_task() races
Stephen Brennan stephen.s.brennan@oracle.com SUNRPC: convert RPC_TASK_* constants to enum
Vasiliy Kovalev kovalev@altlinux.org ovl: fix UAF in ovl_dentry_update_reval by moving dput() in ovl_link_up
Mark Zhang markzhang@nvidia.com IB/mlx5: Set and get correct qp_num for a DCT QP
Yishai Hadas yishaih@nvidia.com RDMA/mlx5: Fix the recovery flow of the UMR QP
Shay Drory shayd@nvidia.com RDMA/mlx5: Implement mkeys management via LIFO queue
Michael Guralnik michaelgur@nvidia.com RDMA/mlx5: Add work to remove temporary entries from the cache
Michael Guralnik michaelgur@nvidia.com RDMA/mlx5: Cache all user cacheable mkeys on dereg MR flow
Michael Guralnik michaelgur@nvidia.com RDMA/mlx5: Introduce mlx5r_cache_rb_key
Michael Guralnik michaelgur@nvidia.com RDMA/mlx5: Change the cache structure to an RB-tree
Aharon Landau aharonl@nvidia.com RDMA/mlx5: Remove implicit ODP cache entry
Aharon Landau aharonl@nvidia.com RDMA/mlx5: Don't keep umrable 'page_shift' in cache entries
Xin Long lucien.xin@gmail.com netfilter: allow exp not to be removed in nf_ct_find_expectation
Alexander Dahl ada@thorsis.com spi: atmel-quadspi: Fix wrong register value written to MR
Alexander Dahl ada@thorsis.com spi: atmel-quadspi: Avoid overwriting delay register settings
Yunfei Dong yunfei.dong@mediatek.com media: mediatek: vcodec: Fix H264 multi stateless decoder smatch warning
Yu Kuai yukuai3@huawei.com block, bfq: fix bfqq uaf in bfq_limit_depth()
Paolo Valente paolo.valente@linaro.org block, bfq: split sync bfq_queues on a per-actuator basis
Patrick Bellasi derkling@google.com x86/cpu/kvm: SRSO: Fix possible missing IBPB on VM-Exit
Steven Rostedt rostedt@goodmis.org ftrace: Do not add duplicate entries in subops manager ops
Sebastian Andrzej Siewior bigeasy@linutronix.de ftrace: Correct preemption accounting for function tracing.
Komal Bajaj quic_kbajaj@quicinc.com EDAC/qcom: Correct interrupt enable register configuration
Haoxiang Li haoxiang_li2024@163.com smb: client: Add check for next_buffer in receive_encrypted_standard()
Niravkumar L Rabara niravkumar.l.rabara@intel.com mtd: rawnand: cadence: fix incorrect device in dma_unmap_single
Niravkumar L Rabara niravkumar.l.rabara@intel.com mtd: rawnand: cadence: use dma_map_resource for sdma address
Niravkumar L Rabara niravkumar.l.rabara@intel.com mtd: rawnand: cadence: fix error code in cadence_nand_init()
Ricardo Cañuelo Navarro rcn@igalia.com mm,madvise,hugetlb: check for 0-length range after end address adjustment
Christian Brauner brauner@kernel.org acct: block access to kernel internal filesystems
Christian Brauner brauner@kernel.org acct: perform last write from workqueue
John Veness john-linux@pelago.org.uk ALSA: hda/conexant: Add quirk for HP ProBook 450 G4 mute LED
Wentao Liang vulab@iscas.ac.cn ALSA: hda: Add error check for snd_ctl_rename_id() in snd_hda_create_dig_out_ctls()
Nikita Zhandarovich n.zhandarovich@fintech.ru ASoC: fsl_micfil: Enable default case in micfil_set_quality()
Haoxiang Li haoxiang_li2024@163.com nfp: bpf: Add check for nfp_app_ctrl_msg_alloc()
Gavrilov Ilia Ilia.Gavrilov@infotecs.ru drop_monitor: fix incorrect initialization order
Sumit Garg sumit.garg@linaro.org tee: optee: Fix supplicant wait loop
Ville Syrjälä ville.syrjala@linux.intel.com drm/i915: Make sure all planes in use by the joiner have their crtc included
Jessica Zhang quic_jesszhan@quicinc.com drm/msm/dpu: Disable dither in phys encoder cleanup
Yan Zhai yan@cloudflare.com bpf: skip non exist keys in generic_map_lookup_batch
Caleb Sander Mateos csander@purestorage.com nvme/ioctl: add missing space in err message
Marijn Suijten marijn.suijten@somainline.org drm/msm/dpu: Don't leak bits_per_component into random DSC_ENC fields
David Hildenbrand david@redhat.com nouveau/svm: fix missing folio unlock + put after make_device_exclusive_range()
Andrey Vatoropin a.vatoropin@crpt.ru power: supply: da9150-fg: fix potential overflow
Jiayuan Chen mrpre@163.com bpf: Fix wrong copied_seq calculation
Jiayuan Chen mrpre@163.com strparser: Add read_sock callback
Shigeru Yoshida syoshida@redhat.com bpf, test_run: Fix use-after-free issue in eth_skb_pkt_type()
Tomi Valkeinen tomi.valkeinen+renesas@ideasonboard.com drm/rcar-du: dsi: Fix PHY lock bit check
Devarsh Thakkar devarsht@ti.com drm/tidss: Fix race condition while handling interrupt registers
Tomi Valkeinen tomi.valkeinen@ideasonboard.com drm/tidss: Add simple K2G manual reset
Sabrina Dubroca sd@queasysnail.net tcp: drop secpath at the same time as we currently drop dst
Nick Hu nick.hu@sifive.com net: axienet: Set mac_managed_pm
Breno Leitao leitao@debian.org arp: switch to dev_getbyhwaddr() in arp_req_set_public()
Breno Leitao leitao@debian.org net: Add non-RCU dev_getbyhwaddr() helper
Cong Wang xiyou.wangcong@gmail.com flow_dissector: Fix port range key handling in BPF conversion
Cong Wang xiyou.wangcong@gmail.com flow_dissector: Fix handling of mixed port and port-range keys
Kuniyuki Iwashima kuniyu@amazon.com geneve: Suppress list corruption splat in geneve_destroy_tunnels().
Kuniyuki Iwashima kuniyu@amazon.com gtp: Suppress list corruption splat in gtp_net_exit_batch_rtnl().
Nick Child nnac123@linux.ibm.com ibmvnic: Don't reference skb after sending to VIOS
Nick Child nnac123@linux.ibm.com ibmvnic: Add stat for tx direct vs tx batched
Nick Child nnac123@linux.ibm.com ibmvnic: Introduce send sub-crq direct
Nick Child nnac123@linux.ibm.com ibmvnic: Return error code on TX scrq flush fail
Vitaly Rodionov vitalyr@opensource.cirrus.com ALSA: hda/cirrus: Correct the full scale volume set logic
Kuniyuki Iwashima kuniyu@amazon.com geneve: Fix use-after-free in geneve_find_dev().
Christophe Leroy christophe.leroy@csgroup.eu powerpc/code-patching: Fix KASAN hit by not flagging text patching area as VM_ALLOC
Kailang Yang kailang@realtek.com ALSA: hda/realtek: Fixup ALC225 depop procedure
Christophe Leroy christophe.leroy@csgroup.eu powerpc/64s: Rewrite __real_pte() and __rpte_to_hidx() as static inline
Michael Ellerman mpe@ellerman.id.au powerpc/64s/mm: Move __real_pte stubs into hash-4k.h
John Keeping jkeeping@inmusicbrands.com ASoC: rockchip: i2s-tdm: fix shift config for SND_SOC_DAIFMT_DSP_[AB]
Jill Donahue jilliandonahue58@gmail.com USB: gadget: f_midi: f_midi_complete to call queue_work
Roy Luo royluo@google.com usb: gadget: core: flush gadget workqueue after device removal
Roy Luo royluo@google.com USB: gadget: core: create sysfs link between udc and gadget
Ricardo Ribalda ribalda@chromium.org media: uvcvideo: Remove dangling pointers
Ricardo Ribalda ribalda@chromium.org media: uvcvideo: Only save async fh if success
Ricardo Ribalda ribalda@chromium.org media: uvcvideo: Refactor iterators
Ricardo Ribalda ribalda@chromium.org media: uvcvideo: Fix crash during unbind if gpio unit is in use
Yang Yingliang yangyingliang@huawei.com media: Switch to use dev_err_probe() helper
Krzysztof Kozlowski krzysztof.kozlowski@linaro.org soc: mediatek: mtk-devapc: Fix leaking IO map on driver remove
Uwe Kleine-König u.kleine-koenig@pengutronix.de soc/mediatek: mtk-devapc: Convert to platform remove callback returning void
Krzysztof Kozlowski krzysztof.kozlowski@linaro.org soc: mediatek: mtk-devapc: Fix leaking IO map on error paths
AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com soc: mediatek: mtk-devapc: Switch to devm_clk_get_enabled()
Jarkko Sakkinen jarkko@kernel.org tpm: Change to kvalloc() in eventlog/acpi.c
Eddie James eajames@linux.ibm.com tpm: Use managed allocation for bios event log
Krzysztof Kozlowski krzysztof.kozlowski@linaro.org arm64: dts: qcom: sm8450: Fix CDSP memory length
Krzysztof Kozlowski krzysztof.kozlowski@linaro.org arm64: dts: qcom: trim addresses to 8 digits
Chen-Yu Tsai wenst@chromium.org arm64: dts: mediatek: mt8183: Disable DSI display output by default
Igor Pylypiv ipylypiv@google.com scsi: core: Do not retry I/Os during depopulation
Douglas Gilbert dgilbert@interlog.com scsi: core: Handle depopulation and restoration in progress
Dan Carpenter dan.carpenter@linaro.org ASoC: renesas: rz-ssi: Add a check for negative sample_space
Daniel Golle daniel@makrotopia.org clk: mediatek: mt2701-img: add missing dummy clk
Daniel Golle daniel@makrotopia.org clk: mediatek: mt2701-bdp: add missing dummy clk
Daniel Golle daniel@makrotopia.org clk: mediatek: mt2701-vdec: fix conversion to mtk_clk_simple_probe
AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com clk: mediatek: clk-mtk: Add dummy clock ops
Zijun Hu quic_zijuhu@quicinc.com Bluetooth: qca: Fix poor RF performance for WCN6855
Cheng Jiang quic_chejiang@quicinc.com Bluetooth: qca: Update firmware-name to support board specific nvm
Zijun Hu quic_zijuhu@quicinc.com Bluetooth: qca: Support downloading board id specific NVM for WCN7850
Bence Csókás csokas.bence@prolan.hu spi: atmel-qspi: Memory barriers after memory-mapped I/O
Csókás, Bence csokas.bence@prolan.hu spi: atmel-quadspi: Create `atmel_qspi_ops` to support newer SoC families
Yang Yingliang yangyingliang@huawei.com spi: atmel-quadspi: switch to use modern name
Tudor Ambarus tudor.ambarus@microchip.com spi: atmel-quadspi: Add support for configuring CS timing
Chen Ridong chenridong@huawei.com memcg: fix soft lockup in the OOM process
Carlos Galo carlosgalo@google.com mm: update mark_victim tracepoints fields
Yu Kuai yukuai3@huawei.com md/md-bitmap: Synchronize bitmap_get_stats() with bitmap lifetime
Yu Kuai yukuai3@huawei.com md/md-bitmap: add 'sync_size' into struct md_bitmap_stats
Yu Kuai yukuai3@huawei.com md/md-cluster: fix spares warnings for __le64
Yu Kuai yukuai3@huawei.com md/md-bitmap: replace md_bitmap_status() with a new helper md_bitmap_get_stats()
Yu Kuai yukuai3@huawei.com md: simplify md_seq_ops
Yu Kuai yukuai3@huawei.com md: factor out a helper from mddev_put()
Yu Kuai yukuai3@huawei.com md: use separate work_struct for md_start_sync()
Catalin Marinas catalin.marinas@arm.com arm64: mte: Do not allow PROT_MTE on MAP_HUGETLB user mappings
-------------
Diffstat:
 Documentation/core-api/pin_user_pages.rst          |   6 +
 Documentation/networking/strparser.rst             |   9 +-
 Makefile                                           |   4 +-
 arch/arm64/boot/dts/mediatek/mt8183.dtsi           |   1 +
 arch/arm64/boot/dts/qcom/sm8350.dtsi               |   2 +-
 arch/arm64/boot/dts/qcom/sm8450.dtsi               |   4 +-
 arch/arm64/include/asm/mman.h                      |   9 +-
 arch/mips/include/asm/ptrace.h                     |   2 +
 arch/mips/kernel/ptrace.c                          |   7 +
 arch/powerpc/include/asm/book3s/64/hash-4k.h       |  28 +
 arch/powerpc/include/asm/book3s/64/pgtable.h       |  26 -
 arch/powerpc/lib/code-patching.c                   |   2 +-
 arch/riscv/include/asm/futex.h                     |   2 +-
 arch/x86/Kconfig                                   |   3 +-
 arch/x86/events/core.c                             |   2 +-
 arch/x86/kernel/cpu/bugs.c                         |  20 +-
 arch/x86/kernel/cpu/cyrix.c                        |   4 +-
 block/bfq-cgroup.c                                 |  97 +--
 block/bfq-iosched.c                                | 195 ++++--
 block/bfq-iosched.h                                |  51 +-
 drivers/bluetooth/btqca.c                          | 110 ++-
 drivers/char/tpm/eventlog/acpi.c                   |  16 +-
 drivers/char/tpm/eventlog/efi.c                    |  13 +-
 drivers/char/tpm/eventlog/of.c                     |   3 +-
 drivers/char/tpm/tpm-chip.c                        |   1 -
 drivers/clk/mediatek/clk-mt2701-bdp.c              |   1 +
 drivers/clk/mediatek/clk-mt2701-img.c              |   1 +
 drivers/clk/mediatek/clk-mt2701-vdec.c             |   1 +
 drivers/clk/mediatek/clk-mtk.c                     |  16 +
 drivers/clk/mediatek/clk-mtk.h                     |  19 +
 drivers/edac/qcom_edac.c                           |   4 +-
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c  |  14 +
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c  |   3 +-
 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c |  16 +-
 drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c         |  25 +-
 drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c     |   8 +-
 drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c         |  26 +-
 drivers/gpu/drm/i915/display/intel_display.c       |  18 +
 drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c        |   3 +
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c         |   3 +-
 drivers/gpu/drm/nouveau/nouveau_svm.c              |   9 +-
 drivers/gpu/drm/rcar-du/rcar_mipi_dsi.c            |   2 +-
 drivers/gpu/drm/rcar-du/rcar_mipi_dsi_regs.h       |   1 -
 drivers/gpu/drm/tidss/tidss_dispc.c                |  22 +-
 drivers/gpu/drm/tidss/tidss_irq.c                  |   2 +
 drivers/i2c/busses/i2c-npcm7xx.c                   |   7 +
 drivers/idle/intel_idle.c                          |   4 +
 drivers/infiniband/core/sysfs.c                    |   4 +
 drivers/infiniband/core/uverbs_std_types_device.c  |   3 +-
 drivers/infiniband/core/verbs.c                    |   3 +
 drivers/infiniband/hw/mlx4/main.c                  |   8 +
 drivers/infiniband/hw/mlx4/mlx4_ib.h               |   3 +
 drivers/infiniband/hw/mlx4/qp.c                    | 121 +++-
 drivers/infiniband/hw/mlx5/ah.c                    |   3 +-
 drivers/infiniband/hw/mlx5/counters.c              |   8 +-
 drivers/infiniband/hw/mlx5/main.c                  |   7 +
 drivers/infiniband/hw/mlx5/mlx5_ib.h               |  60 +-
 drivers/infiniband/hw/mlx5/mr.c                    | 742 ++++++++++++++-------
 drivers/infiniband/hw/mlx5/odp.c                   |  40 +-
 drivers/infiniband/hw/mlx5/qp.c                    | 129 ++--
 drivers/infiniband/hw/mlx5/qp.h                    |  14 +-
 drivers/infiniband/hw/mlx5/qpc.c                   |   3 +-
 drivers/infiniband/hw/mlx5/umr.c                   |  87 ++-
 drivers/md/md-bitmap.c                             |  34 +-
 drivers/md/md-bitmap.h                             |   9 +-
 drivers/md/md-cluster.c                            |  34 +-
 drivers/md/md.c                                    | 171 +++--
 drivers/md/md.h                                    |   5 +-
 drivers/media/cec/platform/stm32/stm32-cec.c       |   9 +-
 drivers/media/i2c/ad5820.c                         |  18 +-
 drivers/media/i2c/imx274.c                         |   5 +-
 drivers/media/i2c/tc358743.c                       |   9 +-
 drivers/media/platform/mediatek/mdp/mtk_mdp_comp.c |   5 +-
 .../platform/mediatek/vcodec/mtk_vcodec_fw_scp.c   |   2 +
 .../mediatek/vcodec/vdec/vdec_h264_req_multi_if.c  |   9 +-
 .../media/platform/samsung/exynos4-is/media-dev.c  |   4 +-
 drivers/media/platform/st/stm32/stm32-dcmi.c       |  27 +-
 drivers/media/platform/ti/omap3isp/isp.c           |   3 +-
 drivers/media/platform/xilinx/xilinx-csi2rxss.c    |   8 +-
 drivers/media/rc/gpio-ir-recv.c                    |  10 +-
 drivers/media/rc/gpio-ir-tx.c                      |   9 +-
 drivers/media/rc/ir-rx51.c                         |   9 +-
 drivers/media/usb/uvc/uvc_ctrl.c                   |  99 ++-
 drivers/media/usb/uvc/uvc_driver.c                 |  35 +-
 drivers/media/usb/uvc/uvc_v4l2.c                   |   2 +
 drivers/media/usb/uvc/uvcvideo.h                   |  10 +-
 drivers/mtd/nand/raw/cadence-nand-controller.c     |  42 +-
 drivers/net/ethernet/cadence/macb.h                |   2 +
 drivers/net/ethernet/cadence/macb_main.c           |  12 +-
 drivers/net/ethernet/freescale/enetc/enetc.c       | 100 ++-
 drivers/net/ethernet/ibm/ibmvnic.c                 |  85 ++-
 drivers/net/ethernet/ibm/ibmvnic.h                 |   3 +-
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c     |   2 +-
 drivers/net/ethernet/mellanox/mlx4/qp.c            |  14 +-
 drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c  |   2 +-
 drivers/net/ethernet/netronome/nfp/bpf/cmsg.c      |   2 +
 drivers/net/ethernet/xilinx/xilinx_axienet_main.c  |   1 +
 drivers/net/geneve.c                               |  16 +-
 drivers/net/gtp.c                                  |   5 -
 drivers/net/ipvlan/ipvlan_core.c                   |  24 +-
 drivers/net/loopback.c                             |  14 +
 drivers/net/usb/gl620a.c                           |   4 +-
 drivers/nvme/host/ioctl.c                          |   3 +-
 drivers/phy/rockchip/phy-rockchip-naneng-combphy.c |   5 +-
 drivers/phy/samsung/phy-exynos5-usbdrd.c           |  12 +-
 drivers/phy/tegra/xusb-tegra186.c                  |  11 +
 drivers/power/supply/da9150-fg.c                   |   4 +-
 drivers/scsi/scsi_lib.c                            |  22 +-
 drivers/scsi/sd.c                                  |   4 +
 drivers/soc/mediatek/mtk-devapc.c                  |  36 +-
 drivers/spi/atmel-quadspi.c                        | 172 +++--
 drivers/tee/optee/supp.c                           |  35 +-
 drivers/usb/gadget/function/f_midi.c               |   2 +-
 drivers/usb/gadget/udc/core.c                      |  11 +-
 fs/afs/cell.c                                      |   1 +
 fs/afs/internal.h                                  |  23 +-
 fs/afs/server.c                                    |   1 +
 fs/afs/server_list.c                               | 114 +++-
 fs/afs/vl_alias.c                                  |   2 +-
 fs/afs/volume.c                                    |  40 +-
 fs/overlayfs/copy_up.c                             |   2 +-
 fs/smb/client/smb2ops.c                            |   4 +
 fs/squashfs/inode.c                                |   5 +-
 include/asm-generic/vmlinux.lds.h                  |   2 +-
 include/linux/mlx4/qp.h                            |   1 +
 include/linux/mlx5/driver.h                        |  10 -
 include/linux/mm.h                                 |  26 +-
 include/linux/netdevice.h                          |   2 +
 include/linux/ptrace.h                             |   4 +
 include/linux/skmsg.h                              |   2 +
 include/linux/sunrpc/sched.h                       |  17 +-
 include/net/dst.h                                  |   9 +
 include/net/ip.h                                   |   5 +
 include/net/netfilter/nf_conntrack_expect.h        |   2 +-
 include/net/route.h                                |   5 +-
 include/net/strparser.h                            |   2 +
 include/net/tcp.h                                  |  22 +
 include/rdma/ib_verbs.h                            |   4 +-
 include/trace/events/icmp.h                        |  67 ++
 include/trace/events/oom.h                         |  36 +-
 include/trace/events/sunrpc.h                      |   3 +-
 include/uapi/rdma/ib_user_ioctl_verbs.h            |   3 +-
 io_uring/net.c                                     |   4 +-
 kernel/acct.c                                      | 134 ++--
 kernel/bpf/syscall.c                               |  18 +-
 kernel/events/core.c                               |  17 +-
 kernel/events/uprobes.c                            |   5 +
 kernel/sched/core.c                                |   2 +-
 kernel/trace/ftrace.c                              |  30 +-
 kernel/trace/trace_events_hist.c                   |  34 +-
 kernel/trace/trace_functions.c                     |   6 +-
 mm/gup.c                                           |  31 +-
 mm/madvise.c                                       |  11 +-
 mm/memcontrol.c                                    |   7 +-
 mm/memory.c                                        |   4 +-
 mm/oom_kill.c                                      |  14 +-
 net/bluetooth/l2cap_core.c                         |   9 +-
 net/bpf/test_run.c                                 |   5 +-
 net/bridge/br_netfilter_hooks.c                    |   8 +-
 net/core/dev.c                                     |  37 +-
 net/core/drop_monitor.c                            |  39 +-
 net/core/flow_dissector.c                          |  49 +-
 net/core/gro.c                                     |   1 +
 net/core/skbuff.c                                  |   2 +-
 net/core/skmsg.c                                   |   7 +
 net/core/sysctl_net_core.c                         |   3 +-
 net/ipv4/arp.c                                     |   2 +-
 net/ipv4/icmp.c                                    |  24 +-
 net/ipv4/ip_options.c                              |   3 +-
 net/ipv4/tcp.c                                     |  29 +-
 net/ipv4/tcp_bpf.c                                 |  36 +
 net/ipv4/tcp_fastopen.c                            |   4 +-
 net/ipv4/tcp_input.c                               |   8 +-
 net/ipv4/tcp_ipv4.c                                |   2 +-
 net/ipv4/tcp_minisocks.c                           |  10 +-
 net/ipv6/ip6_tunnel.c                              |   4 +-
 net/ipv6/rpl_iptunnel.c                            |  58 +-
 net/ipv6/seg6_iptunnel.c                           |  97 ++-
 net/mptcp/pm_netlink.c                             |   5 -
 net/mptcp/subflow.c                                |  15 +-
 net/netfilter/nf_conntrack_core.c                  |   2 +-
 net/netfilter/nf_conntrack_expect.c                |   4 +-
 net/netfilter/nft_ct.c                             |   2 +
 net/sched/sch_fifo.c                               |   3 +
 net/strparser/strparser.c                          |  11 +-
 net/sunrpc/cache.c                                 |  10 +-
 net/sunrpc/sched.c                                 |   2 -
 sound/pci/hda/hda_codec.c                          |   4 +-
 sound/pci/hda/patch_conexant.c                     |   1 +
 sound/pci/hda/patch_cs8409-tables.c                |   6 +-
 sound/pci/hda/patch_cs8409.c                       |  20 +-
 sound/pci/hda/patch_cs8409.h                       |   5 +-
 sound/pci/hda/patch_realtek.c                      |   1 +
 sound/soc/codecs/es8328.c                          |  15 +-
 sound/soc/fsl/fsl_micfil.c                         |   2 +
 sound/soc/rockchip/rockchip_i2s_tdm.c              |   4 +-
 sound/soc/sh/rz-ssi.c                              |   2 +
 sound/usb/midi.c                                   |   2 +-
 sound/usb/quirks.c                                 |   1 +
 199 files changed, 3053 insertions(+), 1460 deletions(-)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Catalin Marinas catalin.marinas@arm.com
PROT_MTE (memory tagging extensions) is not supported on all user mmap() types for various reasons (memory attributes, backing storage, CoW handling). The arm64 arch_validate_flags() function checks whether the VM_MTE_ALLOWED flag has been set for a vma during mmap(), usually by arch_calc_vm_flag_bits().
Linux prior to 6.13 does not support PROT_MTE hugetlb mappings. This was added by commit 25c17c4b55de ("hugetlb: arm64: add mte support"). However, earlier kernels inadvertently set VM_MTE_ALLOWED on (MAP_ANONYMOUS | MAP_HUGETLB) mappings by only checking for MAP_ANONYMOUS.
Explicitly check MAP_HUGETLB in arch_calc_vm_flag_bits() and avoid setting VM_MTE_ALLOWED for such mappings.
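For illustration, a minimal userspace sketch of the behaviour this change enforces on an MTE-capable arm64 system. The PROT_MTE fallback definition below is an assumption for toolchains whose headers do not yet provide it:

#include <stdio.h>
#include <sys/mman.h>

#ifndef PROT_MTE
#define PROT_MTE 0x20	/* arm64 UAPI value, assumed fallback */
#endif

int main(void)
{
	/* With this fix, PROT_MTE on a MAP_HUGETLB mapping is rejected. */
	void *p = mmap(NULL, 2 * 1024 * 1024,
		       PROT_READ | PROT_WRITE | PROT_MTE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (p == MAP_FAILED)
		perror("mmap(PROT_MTE | MAP_HUGETLB)");	/* expect EINVAL */
	else
		munmap(p, 2 * 1024 * 1024);
	return 0;
}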
Fixes: 9f3419315f3c ("arm64: mte: Add PROT_MTE support to mmap() and mprotect()")
Cc: stable@vger.kernel.org # 5.10.x-6.12.x
Reported-by: Naresh Kamboju naresh.kamboju@linaro.org
Signed-off-by: Catalin Marinas catalin.marinas@arm.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 arch/arm64/include/asm/mman.h | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)
--- a/arch/arm64/include/asm/mman.h
+++ b/arch/arm64/include/asm/mman.h
@@ -31,9 +31,12 @@ static inline unsigned long arch_calc_vm
 	 * backed by tags-capable memory. The vm_flags may be overridden by a
 	 * filesystem supporting MTE (RAM-based).
 	 */
-	if (system_supports_mte() &&
-	    ((flags & MAP_ANONYMOUS) || shmem_file(file)))
-		return VM_MTE_ALLOWED;
+	if (system_supports_mte()) {
+		if ((flags & MAP_ANONYMOUS) && !(flags & MAP_HUGETLB))
+			return VM_MTE_ALLOWED;
+		if (shmem_file(file))
+			return VM_MTE_ALLOWED;
+	}
 
 	return 0;
 }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yu Kuai yukuai3@huawei.com
[ Upstream commit ac619781967bd5663c29606246b50dbebd8b3473 ]
It's a little weird to borrow 'del_work' for md_start_sync(), so declare a new work_struct 'sync_work' for md_start_sync().
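As a reference for the pattern being applied (one dedicated work_struct per purpose, initialized once at init time), a minimal sketch with illustrative names, not md's actual code:

#include <linux/workqueue.h>

struct demo_dev {
	struct work_struct del_work;	/* delayed teardown */
	struct work_struct sync_work;	/* kick off a sync */
};

static void demo_start_sync(struct work_struct *ws)
{
	struct demo_dev *dev = container_of(ws, struct demo_dev, sync_work);

	/* ... start the sync using dev ... */
}

/*
 * At init time, once:	INIT_WORK(&dev->sync_work, demo_start_sync);
 * Whenever needed:	queue_work(system_wq, &dev->sync_work);
 */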
Signed-off-by: Yu Kuai yukuai3@huawei.com
Reviewed-by: Xiao Ni xni@redhat.com
Signed-off-by: Song Liu song@kernel.org
Link: https://lore.kernel.org/r/20230825031622.1530464-2-yukuai1@huaweicloud.com
Stable-dep-of: 8d28d0ddb986 ("md/md-bitmap: Synchronize bitmap_get_stats() with bitmap lifetime")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/md/md.c | 10 ++++++----
 drivers/md/md.h |  5 ++++-
 2 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 297c86f5c70b5..4b629b7a540f7 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -682,13 +682,13 @@ void mddev_put(struct mddev *mddev)
 		 * flush_workqueue() after mddev_find will succeed in waiting
 		 * for the work to be done.
 		 */
-		INIT_WORK(&mddev->del_work, mddev_delayed_delete);
 		queue_work(md_misc_wq, &mddev->del_work);
 	}
 	spin_unlock(&all_mddevs_lock);
 }
 
 static void md_safemode_timeout(struct timer_list *t);
+static void md_start_sync(struct work_struct *ws);
 
 void mddev_init(struct mddev *mddev)
 {
@@ -710,6 +710,9 @@ void mddev_init(struct mddev *mddev)
 	mddev->resync_min = 0;
 	mddev->resync_max = MaxSector;
 	mddev->level = LEVEL_NONE;
+
+	INIT_WORK(&mddev->sync_work, md_start_sync);
+	INIT_WORK(&mddev->del_work, mddev_delayed_delete);
 }
 EXPORT_SYMBOL_GPL(mddev_init);
 
@@ -9308,7 +9311,7 @@ static int remove_and_add_spares(struct mddev *mddev,
 
 static void md_start_sync(struct work_struct *ws)
 {
-	struct mddev *mddev = container_of(ws, struct mddev, del_work);
+	struct mddev *mddev = container_of(ws, struct mddev, sync_work);
 
 	mddev->sync_thread = md_register_thread(md_do_sync,
 						mddev,
@@ -9516,8 +9519,7 @@ void md_check_recovery(struct mddev *mddev)
 			 */
 			md_bitmap_write_all(mddev->bitmap);
 		}
-		INIT_WORK(&mddev->del_work, md_start_sync);
-		queue_work(md_misc_wq, &mddev->del_work);
+		queue_work(md_misc_wq, &mddev->sync_work);
 		goto unlock;
 	}
 not_running:
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 4f0b480974552..c1258c94216ac 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -452,7 +452,10 @@ struct mddev {
 	struct kernfs_node *sysfs_degraded;	/*handle for 'degraded' */
 	struct kernfs_node *sysfs_level;	/*handle for 'level' */
 
-	struct work_struct del_work;	/* used for delayed sysfs removal */
+	/* used for delayed sysfs removal */
+	struct work_struct del_work;
+	/* used for register new sync thread */
+	struct work_struct sync_work;
 
 	/* "lock" protects:
 	 *   flush_bio transition from NULL to !NULL
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yu Kuai yukuai3@huawei.com
[ Upstream commit 3d8d32873c7b6d9cec5b40c2ddb8c7c55961694f ]
There are no functional changes; this prepares for simplifying md_seq_ops in the next patch.
Signed-off-by: Yu Kuai yukuai3@huawei.com
Signed-off-by: Song Liu song@kernel.org
Link: https://lore.kernel.org/r/20230927061241.1552837-2-yukuai1@huaweicloud.com
Stable-dep-of: 8d28d0ddb986 ("md/md-bitmap: Synchronize bitmap_get_stats() with bitmap lifetime")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/md/md.c | 29 +++++++++++++++++------------
 1 file changed, 17 insertions(+), 12 deletions(-)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 4b629b7a540f7..44bac1e7d47e2 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -667,23 +667,28 @@ static inline struct mddev *mddev_get(struct mddev *mddev)
 
 static void mddev_delayed_delete(struct work_struct *ws);
 
+static void __mddev_put(struct mddev *mddev)
+{
+	if (mddev->raid_disks || !list_empty(&mddev->disks) ||
+	    mddev->ctime || mddev->hold_active)
+		return;
+
+	/* Array is not configured at all, and not held active, so destroy it */
+	set_bit(MD_DELETED, &mddev->flags);
+
+	/*
+	 * Call queue_work inside the spinlock so that flush_workqueue() after
+	 * mddev_find will succeed in waiting for the work to be done.
+	 */
+	queue_work(md_misc_wq, &mddev->del_work);
+}
+
 void mddev_put(struct mddev *mddev)
 {
 	if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
 		return;
-	if (!mddev->raid_disks && list_empty(&mddev->disks) &&
-	    mddev->ctime == 0 && !mddev->hold_active) {
-		/* Array is not configured at all, and not held active,
-		 * so destroy it */
-		set_bit(MD_DELETED, &mddev->flags);
-
-		/*
-		 * Call queue_work inside the spinlock so that
-		 * flush_workqueue() after mddev_find will succeed in waiting
-		 * for the work to be done.
-		 */
-		queue_work(md_misc_wq, &mddev->del_work);
-	}
+	__mddev_put(mddev);
 	spin_unlock(&all_mddevs_lock);
 }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yu Kuai yukuai3@huawei.com
[ Upstream commit cf1b6d4441fffd0ba8ae4ced6a12f578c95ca049 ]
Before this patch, the implementation is hacky and hard to understand:
1) md_seq_start sets pos to 1;
2) md_seq_show finds pos is 1, then prints Personalities;
3) md_seq_next finds pos is 1, then updates pos to the first mddev;
4) md_seq_show finds pos is not 1 or 2, shows the mddev;
5) md_seq_next finds pos is not 1 or 2, updates pos to the next mddev;
6) loop 4-5 until the last mddev, then md_seq_next updates pos to 2;
7) md_seq_show finds pos is 2, then prints unused devices;
8) md_seq_next finds pos is 2, stops;
This patch removes the magic values and uses seq_list_start/next/stop() directly, moving the printing of "Personalities" to md_seq_start() and of "unused devices" to md_seq_stop():

1) md_seq_start prints Personalities, then sets pos to the first mddev;
2) md_seq_show shows the mddev;
3) md_seq_next updates pos to the next mddev;
4) loop 2-3 until the last mddev;
5) md_seq_stop prints unused devices;
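For reference, a minimal sketch of the same seq_file pattern for a generic spinlock-protected list (illustrative names; the header is printed only on the first buffer fill, since ->start() can be called repeatedly):

#include <linux/list.h>
#include <linux/seq_file.h>
#include <linux/spinlock.h>

static LIST_HEAD(demo_list);
static DEFINE_SPINLOCK(demo_lock);

struct demo_item {
	struct list_head node;
	int value;
};

static void *demo_seq_start(struct seq_file *seq, loff_t *pos)
	__acquires(&demo_lock)
{
	if (*pos == 0)
		seq_puts(seq, "Header\n");
	spin_lock(&demo_lock);
	return seq_list_start(&demo_list, *pos);
}

static void *demo_seq_next(struct seq_file *seq, void *v, loff_t *pos)
{
	return seq_list_next(v, &demo_list, pos);
}

static void demo_seq_stop(struct seq_file *seq, void *v)
	__releases(&demo_lock)
{
	spin_unlock(&demo_lock);
}

static int demo_seq_show(struct seq_file *seq, void *v)
{
	struct demo_item *item = list_entry(v, struct demo_item, node);

	seq_printf(seq, "%d\n", item->value);
	return 0;
}

static const struct seq_operations demo_seq_ops = {
	.start	= demo_seq_start,
	.next	= demo_seq_next,
	.stop	= demo_seq_stop,
	.show	= demo_seq_show,
};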
Signed-off-by: Yu Kuai yukuai3@huawei.com
Signed-off-by: Song Liu song@kernel.org
Link: https://lore.kernel.org/r/20230927061241.1552837-3-yukuai1@huaweicloud.com
Stable-dep-of: 8d28d0ddb986 ("md/md-bitmap: Synchronize bitmap_get_stats() with bitmap lifetime")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/md/md.c | 100 +++++++++++-------------------------------------
 1 file changed, 22 insertions(+), 78 deletions(-)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 44bac1e7d47e2..743244b06f679 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -8250,105 +8250,46 @@ static int status_resync(struct seq_file *seq, struct mddev *mddev)
 }
 
 static void *md_seq_start(struct seq_file *seq, loff_t *pos)
+	__acquires(&all_mddevs_lock)
 {
-	struct list_head *tmp;
-	loff_t l = *pos;
-	struct mddev *mddev;
+	struct md_personality *pers;
 
-	if (l == 0x10000) {
-		++*pos;
-		return (void *)2;
-	}
-	if (l > 0x10000)
-		return NULL;
-	if (!l--)
-		/* header */
-		return (void*)1;
+	seq_puts(seq, "Personalities : ");
+	spin_lock(&pers_lock);
+	list_for_each_entry(pers, &pers_list, list)
+		seq_printf(seq, "[%s] ", pers->name);
+
+	spin_unlock(&pers_lock);
+	seq_puts(seq, "\n");
+	seq->poll_event = atomic_read(&md_event_count);
 
 	spin_lock(&all_mddevs_lock);
-	list_for_each(tmp,&all_mddevs)
-		if (!l--) {
-			mddev = list_entry(tmp, struct mddev, all_mddevs);
-			if (!mddev_get(mddev))
-				continue;
-			spin_unlock(&all_mddevs_lock);
-			return mddev;
-		}
-	spin_unlock(&all_mddevs_lock);
-	if (!l--)
-		return (void*)2;/* tail */
-	return NULL;
+
+	return seq_list_start(&all_mddevs, *pos);
 }
 
 static void *md_seq_next(struct seq_file *seq, void *v, loff_t *pos)
 {
-	struct list_head *tmp;
-	struct mddev *next_mddev, *mddev = v;
-	struct mddev *to_put = NULL;
-
-	++*pos;
-	if (v == (void*)2)
-		return NULL;
-
-	spin_lock(&all_mddevs_lock);
-	if (v == (void*)1) {
-		tmp = all_mddevs.next;
-	} else {
-		to_put = mddev;
-		tmp = mddev->all_mddevs.next;
-	}
-
-	for (;;) {
-		if (tmp == &all_mddevs) {
-			next_mddev = (void*)2;
-			*pos = 0x10000;
-			break;
-		}
-		next_mddev = list_entry(tmp, struct mddev, all_mddevs);
-		if (mddev_get(next_mddev))
-			break;
-		mddev = next_mddev;
-		tmp = mddev->all_mddevs.next;
-	}
-	spin_unlock(&all_mddevs_lock);
-
-	if (to_put)
-		mddev_put(to_put);
-	return next_mddev;
-
+	return seq_list_next(v, &all_mddevs, pos);
 }
 
 static void md_seq_stop(struct seq_file *seq, void *v)
+	__releases(&all_mddevs_lock)
 {
-	struct mddev *mddev = v;
-
-	if (mddev && v != (void*)1 && v != (void*)2)
-		mddev_put(mddev);
+	status_unused(seq);
+	spin_unlock(&all_mddevs_lock);
 }
 
 static int md_seq_show(struct seq_file *seq, void *v)
 {
-	struct mddev *mddev = v;
+	struct mddev *mddev = list_entry(v, struct mddev, all_mddevs);
 	sector_t sectors;
 	struct md_rdev *rdev;
 
-	if (v == (void*)1) {
-		struct md_personality *pers;
-		seq_printf(seq, "Personalities : ");
-		spin_lock(&pers_lock);
-		list_for_each_entry(pers, &pers_list, list)
-			seq_printf(seq, "[%s] ", pers->name);
-
-		spin_unlock(&pers_lock);
-		seq_printf(seq, "\n");
-		seq->poll_event = atomic_read(&md_event_count);
+	if (!mddev_get(mddev))
 		return 0;
-	}
-	if (v == (void*)2) {
-		status_unused(seq);
-		return 0;
-	}
 
+	spin_unlock(&all_mddevs_lock);
 	spin_lock(&mddev->lock);
 	if (mddev->pers || mddev->raid_disks || !list_empty(&mddev->disks)) {
 		seq_printf(seq, "%s : %sactive", mdname(mddev),
@@ -8419,6 +8360,9 @@ static int md_seq_show(struct seq_file *seq, void *v)
 		seq_printf(seq, "\n");
 	}
 	spin_unlock(&mddev->lock);
+	spin_lock(&all_mddevs_lock);
+	if (atomic_dec_and_test(&mddev->active))
+		__mddev_put(mddev);
 
 	return 0;
 }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yu Kuai yukuai3@huawei.com
[ Upstream commit 38f287d7e495ae00d4481702f44ff7ca79f5c9bc ]
There are no functional changes, and the new helper will be used in multiple places in following patches to avoid dereferencing bitmap directly.
Signed-off-by: Yu Kuai yukuai3@huawei.com
Link: https://lore.kernel.org/r/20240826074452.1490072-3-yukuai1@huaweicloud.com
Signed-off-by: Song Liu song@kernel.org
Stable-dep-of: 8d28d0ddb986 ("md/md-bitmap: Synchronize bitmap_get_stats() with bitmap lifetime")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/md/md-bitmap.c | 25 ++++++-------------------
 drivers/md/md-bitmap.h |  8 +++++++-
 drivers/md/md.c        | 29 ++++++++++++++++++++++++++++-
 3 files changed, 41 insertions(+), 21 deletions(-)
diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index 9d8ac04c23462..736268447d3e1 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -2022,32 +2022,19 @@ int md_bitmap_copy_from_slot(struct mddev *mddev, int slot,
 }
 EXPORT_SYMBOL_GPL(md_bitmap_copy_from_slot);
 
-
-void md_bitmap_status(struct seq_file *seq, struct bitmap *bitmap)
+int md_bitmap_get_stats(struct bitmap *bitmap, struct md_bitmap_stats *stats)
 {
-	unsigned long chunk_kb;
 	struct bitmap_counts *counts;
 
 	if (!bitmap)
-		return;
+		return -ENOENT;
 
 	counts = &bitmap->counts;
+	stats->missing_pages = counts->missing_pages;
+	stats->pages = counts->pages;
+	stats->file = bitmap->storage.file;
 
-	chunk_kb = bitmap->mddev->bitmap_info.chunksize >> 10;
-	seq_printf(seq, "bitmap: %lu/%lu pages [%luKB], "
-		   "%lu%s chunk",
-		   counts->pages - counts->missing_pages,
-		   counts->pages,
-		   (counts->pages - counts->missing_pages)
-		   << (PAGE_SHIFT - 10),
-		   chunk_kb ? chunk_kb : bitmap->mddev->bitmap_info.chunksize,
-		   chunk_kb ? "KB" : "B");
-	if (bitmap->storage.file) {
-		seq_printf(seq, ", file: ");
-		seq_file_path(seq, bitmap->storage.file, " \t\n");
-	}
-
-	seq_printf(seq, "\n");
+	return 0;
 }
 
 int md_bitmap_resize(struct bitmap *bitmap, sector_t blocks,
diff --git a/drivers/md/md-bitmap.h b/drivers/md/md-bitmap.h
index 3a4750952b3a7..00ac4c3ecf4d9 100644
--- a/drivers/md/md-bitmap.h
+++ b/drivers/md/md-bitmap.h
@@ -233,6 +233,12 @@ struct bitmap {
 	int cluster_slot;		/* Slot offset for clustered env */
 };
 
+struct md_bitmap_stats {
+	unsigned long	missing_pages;
+	unsigned long	pages;
+	struct file	*file;
+};
+
 /* the bitmap API */
 
 /* these are used only by md/bitmap */
@@ -243,7 +249,7 @@ void md_bitmap_destroy(struct mddev *mddev);
 
 void md_bitmap_print_sb(struct bitmap *bitmap);
 void md_bitmap_update_sb(struct bitmap *bitmap);
-void md_bitmap_status(struct seq_file *seq, struct bitmap *bitmap);
+int md_bitmap_get_stats(struct bitmap *bitmap, struct md_bitmap_stats *stats);
 
 int md_bitmap_setallbits(struct bitmap *bitmap);
 void md_bitmap_write_all(struct bitmap *bitmap);
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 743244b06f679..887479e0d3afe 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -8280,6 +8280,33 @@ static void md_seq_stop(struct seq_file *seq, void *v)
 	spin_unlock(&all_mddevs_lock);
 }
 
+static void md_bitmap_status(struct seq_file *seq, struct mddev *mddev)
+{
+	struct md_bitmap_stats stats;
+	unsigned long used_pages;
+	unsigned long chunk_kb;
+	int err;
+
+	err = md_bitmap_get_stats(mddev->bitmap, &stats);
+	if (err)
+		return;
+
+	chunk_kb = mddev->bitmap_info.chunksize >> 10;
+	used_pages = stats.pages - stats.missing_pages;
+
+	seq_printf(seq, "bitmap: %lu/%lu pages [%luKB], %lu%s chunk",
+		   used_pages, stats.pages, used_pages << (PAGE_SHIFT - 10),
+		   chunk_kb ? chunk_kb : mddev->bitmap_info.chunksize,
+		   chunk_kb ? "KB" : "B");
+
+	if (stats.file) {
+		seq_puts(seq, ", file: ");
+		seq_file_path(seq, stats.file, " \t\n");
+	}
+
+	seq_putc(seq, '\n');
+}
+
 static int md_seq_show(struct seq_file *seq, void *v)
 {
 	struct mddev *mddev = list_entry(v, struct mddev, all_mddevs);
@@ -8355,7 +8382,7 @@ static int md_seq_show(struct seq_file *seq, void *v)
 		} else
 			seq_printf(seq, "\n       ");
 
-		md_bitmap_status(seq, mddev->bitmap);
+		md_bitmap_status(seq, mddev);
 
 		seq_printf(seq, "\n");
 	}
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yu Kuai yukuai3@huawei.com
[ Upstream commit 82697ccf7e495c1ba81e315c2886d6220ff84c2c ]
drivers/md/md-cluster.c:1220:22: warning: incorrect type in assignment (different base types)
drivers/md/md-cluster.c:1220:22:    expected unsigned long my_sync_size
drivers/md/md-cluster.c:1220:22:    got restricted __le64 [usertype] sync_size
drivers/md/md-cluster.c:1252:35: warning: incorrect type in assignment (different base types)
drivers/md/md-cluster.c:1252:35:    expected unsigned long sync_size
drivers/md/md-cluster.c:1252:35:    got restricted __le64 [usertype] sync_size
drivers/md/md-cluster.c:1253:41: warning: restricted __le64 degrades to integer
Fix the warnings by using le64_to_cpu() to convert __le64 to an integer.
Signed-off-by: Yu Kuai yukuai3@huawei.com
Link: https://lore.kernel.org/r/20240826074452.1490072-6-yukuai1@huaweicloud.com
Signed-off-by: Song Liu song@kernel.org
Stable-dep-of: 8d28d0ddb986 ("md/md-bitmap: Synchronize bitmap_get_stats() with bitmap lifetime")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/md/md-cluster.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
index 10e0c5381d01b..a0d3f6c397707 100644
--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -1195,7 +1195,7 @@ static int cluster_check_sync_size(struct mddev *mddev)
 	struct dlm_lock_resource *bm_lockres;
 
 	sb = kmap_atomic(bitmap->storage.sb_page);
-	my_sync_size = sb->sync_size;
+	my_sync_size = le64_to_cpu(sb->sync_size);
 	kunmap_atomic(sb);
 
 	for (i = 0; i < node_num; i++) {
@@ -1227,8 +1227,8 @@ static int cluster_check_sync_size(struct mddev *mddev)
 
 		sb = kmap_atomic(bitmap->storage.sb_page);
 		if (sync_size == 0)
-			sync_size = sb->sync_size;
-		else if (sync_size != sb->sync_size) {
+			sync_size = le64_to_cpu(sb->sync_size);
+		else if (sync_size != le64_to_cpu(sb->sync_size)) {
 			kunmap_atomic(sb);
 			md_bitmap_free(bitmap);
 			return -1;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yu Kuai yukuai3@huawei.com
[ Upstream commit ec6bb299c7c3dd4ca1724d13d5f5fae3ee54fc65 ]
Avoid dereferencing bitmap directly in md-cluster, to prepare for introducing a new bitmap.

BTW, also fix the following checkpatch warnings:
WARNING: Deprecated use of 'kmap_atomic', prefer 'kmap_local_page' instead
WARNING: Deprecated use of 'kunmap_atomic', prefer 'kunmap_local' instead
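For reference, a minimal sketch of the preferred mapping pattern, combining the kmap_local_page() conversion with the earlier le64_to_cpu() fix (the function name is illustrative):

#include <linux/highmem.h>

static u64 demo_read_sync_size(struct page *sb_page)
{
	bitmap_super_t *sb;
	u64 sync_size;

	/* Unlike kmap_atomic(), kmap_local_page() does not disable preemption. */
	sb = kmap_local_page(sb_page);
	sync_size = le64_to_cpu(sb->sync_size);	/* on-disk __le64 to CPU order */
	kunmap_local(sb);

	return sync_size;
}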
Signed-off-by: Yu Kuai yukuai3@huawei.com
Link: https://lore.kernel.org/r/20240826074452.1490072-7-yukuai1@huaweicloud.com
Signed-off-by: Song Liu song@kernel.org
Stable-dep-of: 8d28d0ddb986 ("md/md-bitmap: Synchronize bitmap_get_stats() with bitmap lifetime")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/md/md-bitmap.c  |  6 ++++++
 drivers/md/md-bitmap.h  |  1 +
 drivers/md/md-cluster.c | 34 ++++++++++++++++++++--------------
 3 files changed, 27 insertions(+), 14 deletions(-)
diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index 736268447d3e1..bddf4f3d27a77 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -2025,10 +2025,15 @@ EXPORT_SYMBOL_GPL(md_bitmap_copy_from_slot);
 int md_bitmap_get_stats(struct bitmap *bitmap, struct md_bitmap_stats *stats)
 {
 	struct bitmap_counts *counts;
+	bitmap_super_t *sb;
 
 	if (!bitmap)
 		return -ENOENT;
 
+	sb = kmap_local_page(bitmap->storage.sb_page);
+	stats->sync_size = le64_to_cpu(sb->sync_size);
+	kunmap_local(sb);
+
 	counts = &bitmap->counts;
 	stats->missing_pages = counts->missing_pages;
 	stats->pages = counts->pages;
@@ -2036,6 +2041,7 @@ int md_bitmap_get_stats(struct bitmap *bitmap, struct md_bitmap_stats *stats)
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(md_bitmap_get_stats);
 
 int md_bitmap_resize(struct bitmap *bitmap, sector_t blocks,
 		     int chunksize, int init)
diff --git a/drivers/md/md-bitmap.h b/drivers/md/md-bitmap.h
index 00ac4c3ecf4d9..7b7a701f74be7 100644
--- a/drivers/md/md-bitmap.h
+++ b/drivers/md/md-bitmap.h
@@ -235,6 +235,7 @@ struct bitmap {
 
 struct md_bitmap_stats {
 	unsigned long	missing_pages;
+	unsigned long	sync_size;
 	unsigned long	pages;
 	struct file	*file;
 };
diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
index a0d3f6c397707..7484bb83171a9 100644
--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -1185,18 +1185,21 @@ static int resize_bitmaps(struct mddev *mddev, sector_t newsize, sector_t oldsiz
  */
 static int cluster_check_sync_size(struct mddev *mddev)
 {
-	int i, rv;
-	bitmap_super_t *sb;
-	unsigned long my_sync_size, sync_size = 0;
-	int node_num = mddev->bitmap_info.nodes;
 	int current_slot = md_cluster_ops->slot_number(mddev);
+	int node_num = mddev->bitmap_info.nodes;
 	struct bitmap *bitmap = mddev->bitmap;
-	char str[64];
 	struct dlm_lock_resource *bm_lockres;
+	struct md_bitmap_stats stats;
+	unsigned long sync_size = 0;
+	unsigned long my_sync_size;
+	char str[64];
+	int i, rv;
 
-	sb = kmap_atomic(bitmap->storage.sb_page);
-	my_sync_size = le64_to_cpu(sb->sync_size);
-	kunmap_atomic(sb);
+	rv = md_bitmap_get_stats(bitmap, &stats);
+	if (rv)
+		return rv;
+
+	my_sync_size = stats.sync_size;
 
 	for (i = 0; i < node_num; i++) {
 		if (i == current_slot)
@@ -1225,15 +1228,18 @@ static int cluster_check_sync_size(struct mddev *mddev)
 		md_bitmap_update_sb(bitmap);
 		lockres_free(bm_lockres);
 
-		sb = kmap_atomic(bitmap->storage.sb_page);
-		if (sync_size == 0)
-			sync_size = le64_to_cpu(sb->sync_size);
-		else if (sync_size != le64_to_cpu(sb->sync_size)) {
-			kunmap_atomic(sb);
+		rv = md_bitmap_get_stats(bitmap, &stats);
+		if (rv) {
+			md_bitmap_free(bitmap);
+			return rv;
+		}
+
+		if (sync_size == 0) {
+			sync_size = stats.sync_size;
+		} else if (sync_size != stats.sync_size) {
 			md_bitmap_free(bitmap);
 			return -1;
 		}
-		kunmap_atomic(sb);
 		md_bitmap_free(bitmap);
 	}
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yu Kuai yukuai3@huawei.com
[ Upstream commit 8d28d0ddb986f56920ac97ae704cc3340a699a30 ]
After commit ec6bb299c7c3 ("md/md-bitmap: add 'sync_size' into struct md_bitmap_stats"), following panic is reported:
Oops: general protection fault, probably for non-canonical address
RIP: 0010:bitmap_get_stats+0x2b/0xa0
Call Trace:
 <TASK>
 md_seq_show+0x2d2/0x5b0
 seq_read_iter+0x2b9/0x470
 seq_read+0x12f/0x180
 proc_reg_read+0x57/0xb0
 vfs_read+0xf6/0x380
 ksys_read+0x6c/0xf0
 do_syscall_64+0x82/0x170
 entry_SYSCALL_64_after_hwframe+0x76/0x7e
The root cause is that bitmap_get_stats() can be called at any time while the mddev still exists, even if the bitmap has been destroyed or is not fully initialized. Dereferencing the bitmap in this case can crash the kernel. Meanwhile, the above commit started dereferencing bitmap->storage, making the problem easier to trigger.
Fix the problem by protecting bitmap_get_stats() with bitmap_info.mutex.
Cc: stable@vger.kernel.org # v6.12+
Fixes: 32a7627cf3a3 ("[PATCH] md: optimised resync using Bitmap based intent logging")
Reported-and-tested-by: Harshit Mogalapalli harshit.m.mogalapalli@oracle.com
Closes: https://lore.kernel.org/linux-raid/ca3a91a2-50ae-4f68-b317-abd9889f3907@orac...
Signed-off-by: Yu Kuai yukuai3@huawei.com
Link: https://lore.kernel.org/r/20250124092055.4050195-1-yukuai1@huaweicloud.com
Signed-off-by: Song Liu song@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/md/md-bitmap.c | 5 ++++-
 drivers/md/md.c        | 5 +++++
 2 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index bddf4f3d27a77..e18e21b24210d 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -2029,7 +2029,10 @@ int md_bitmap_get_stats(struct bitmap *bitmap, struct md_bitmap_stats *stats)
 
 	if (!bitmap)
 		return -ENOENT;
-
+	if (bitmap->mddev->bitmap_info.external)
+		return -ENOENT;
+	if (!bitmap->storage.sb_page)	/* no superblock */
+		return -EINVAL;
 	sb = kmap_local_page(bitmap->storage.sb_page);
 	stats->sync_size = le64_to_cpu(sb->sync_size);
 	kunmap_local(sb);
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 887479e0d3afe..e2a3a1e1afca0 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -8317,6 +8317,10 @@ static int md_seq_show(struct seq_file *seq, void *v)
 		return 0;
 
 	spin_unlock(&all_mddevs_lock);
+
+	/* prevent bitmap to be freed after checking */
+	mutex_lock(&mddev->bitmap_info.mutex);
+
 	spin_lock(&mddev->lock);
 	if (mddev->pers || mddev->raid_disks || !list_empty(&mddev->disks)) {
 		seq_printf(seq, "%s : %sactive", mdname(mddev),
@@ -8387,6 +8391,7 @@ static int md_seq_show(struct seq_file *seq, void *v)
 		seq_printf(seq, "\n");
 	}
 	spin_unlock(&mddev->lock);
+	mutex_unlock(&mddev->bitmap_info.mutex);
 	spin_lock(&all_mddevs_lock);
 	if (atomic_dec_and_test(&mddev->active))
 		__mddev_put(mddev);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Carlos Galo carlosgalo@google.com
[ Upstream commit 72ba14deb40a9e9668ec5e66a341ed657e5215c2 ]
The current implementation of the mark_victim tracepoint provides only the process ID (pid) of the victim process. This limitation poses challenges for userspace tools requiring real-time OOM analysis and intervention. Although this information is available from the kernel logs, it’s not the appropriate format to provide OOM notifications. In Android, BPF programs are used with the mark_victim trace events to notify userspace of an OOM kill. For consistency, update the trace event to include the same information about the OOMed victim as the kernel logs.
- UID: In Android, each installed application has a unique UID. Including the `uid` assists in correlating OOM events with specific apps.

- Process Name (comm): Enables identification of the affected process.

- OOM Score: Will allow userspace to get additional insight into the relative kill priority of the OOM victim. In Android, the oom_score_adj is used to categorize app state (foreground, background, etc.), which aids in analyzing user-perceptible impacts of OOM events [1].

- Total VM, RSS Stats, and pgtables: Amount of memory used by the victim that will, potentially, be freed up by killing it.
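As an illustrative consumer sketch (not from the patch), the extended event can be read from tracefs once enabled; the paths below assume tracefs is mounted at /sys/kernel/tracing:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Enable the oom:mark_victim event (requires root). */
	int fd = open("/sys/kernel/tracing/events/oom/mark_victim/enable", O_WRONLY);

	if (fd < 0) {
		perror("enable mark_victim");
		return 1;
	}
	write(fd, "1", 1);
	close(fd);

	/* Records now carry comm, uid, total-vm, rss and oom_score_adj. */
	FILE *tp = fopen("/sys/kernel/tracing/trace_pipe", "r");
	char line[1024];

	if (tp && fgets(line, sizeof(line), tp))
		fputs(line, stdout);
	if (tp)
		fclose(tp);
	return 0;
}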
[1] https://cs.android.com/android/platform/superproject/main/+/246dc8fc95b6d93a...

Signed-off-by: Carlos Galo carlosgalo@google.com
Reviewed-by: Steven Rostedt rostedt@goodmis.org
Cc: Suren Baghdasaryan surenb@google.com
Cc: Michal Hocko mhocko@suse.com
Cc: "Masami Hiramatsu (Google)" mhiramat@kernel.org
Cc: Mathieu Desnoyers mathieu.desnoyers@efficios.com
Signed-off-by: Andrew Morton akpm@linux-foundation.org
Stable-dep-of: ade81479c7dd ("memcg: fix soft lockup in the OOM process")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 include/trace/events/oom.h | 36 ++++++++++++++++++++++++++++++----
 mm/oom_kill.c              |  6 +++++-
 2 files changed, 37 insertions(+), 5 deletions(-)
diff --git a/include/trace/events/oom.h b/include/trace/events/oom.h
index 26a11e4a2c361..b799f3bcba823 100644
--- a/include/trace/events/oom.h
+++ b/include/trace/events/oom.h
@@ -7,6 +7,8 @@
 #include <linux/tracepoint.h>
 #include <trace/events/mmflags.h>
 
+#define PG_COUNT_TO_KB(x) ((x) << (PAGE_SHIFT - 10))
+
 TRACE_EVENT(oom_score_adj_update,
 
 	TP_PROTO(struct task_struct *task),
@@ -72,19 +74,45 @@ TRACE_EVENT(reclaim_retry_zone,
 );
 
 TRACE_EVENT(mark_victim,
-	TP_PROTO(int pid),
+	TP_PROTO(struct task_struct *task, uid_t uid),
 
-	TP_ARGS(pid),
+	TP_ARGS(task, uid),
 
 	TP_STRUCT__entry(
 		__field(int, pid)
+		__string(comm, task->comm)
+		__field(unsigned long, total_vm)
+		__field(unsigned long, anon_rss)
+		__field(unsigned long, file_rss)
+		__field(unsigned long, shmem_rss)
+		__field(uid_t, uid)
+		__field(unsigned long, pgtables)
+		__field(short, oom_score_adj)
 	),
 
 	TP_fast_assign(
-		__entry->pid = pid;
+		__entry->pid = task->pid;
+		__assign_str(comm, task->comm);
+		__entry->total_vm = PG_COUNT_TO_KB(task->mm->total_vm);
+		__entry->anon_rss = PG_COUNT_TO_KB(get_mm_counter(task->mm, MM_ANONPAGES));
+		__entry->file_rss = PG_COUNT_TO_KB(get_mm_counter(task->mm, MM_FILEPAGES));
+		__entry->shmem_rss = PG_COUNT_TO_KB(get_mm_counter(task->mm, MM_SHMEMPAGES));
+		__entry->uid = uid;
+		__entry->pgtables = mm_pgtables_bytes(task->mm) >> 10;
+		__entry->oom_score_adj = task->signal->oom_score_adj;
 	),
 
-	TP_printk("pid=%d", __entry->pid)
+	TP_printk("pid=%d comm=%s total-vm=%lukB anon-rss=%lukB file-rss:%lukB shmem-rss:%lukB uid=%u pgtables=%lukB oom_score_adj=%hd",
+		__entry->pid,
+		__get_str(comm),
+		__entry->total_vm,
+		__entry->anon_rss,
+		__entry->file_rss,
+		__entry->shmem_rss,
+		__entry->uid,
+		__entry->pgtables,
+		__entry->oom_score_adj
+	)
 );
 
 TRACE_EVENT(wake_reaper,
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 1276e49b31b0a..4de30c6c5183f 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -44,6 +44,7 @@
 #include <linux/kthread.h>
 #include <linux/init.h>
 #include <linux/mmu_notifier.h>
+#include <linux/cred.h>
 
 #include <asm/tlb.h>
 #include "internal.h"
@@ -757,6 +758,7 @@ static inline void queue_oom_reaper(struct task_struct *tsk)
  */
 static void mark_oom_victim(struct task_struct *tsk)
 {
+	const struct cred *cred;
 	struct mm_struct *mm = tsk->mm;
 
 	WARN_ON(oom_killer_disabled);
@@ -776,7 +778,9 @@ static void mark_oom_victim(struct task_struct *tsk)
 	 */
 	__thaw_task(tsk);
 	atomic_inc(&oom_victims);
-	trace_mark_victim(tsk->pid);
+	cred = get_task_cred(tsk);
+	trace_mark_victim(tsk, cred->uid.val);
+	put_cred(cred);
 }
 
 /**
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Chen Ridong chenridong@huawei.com
[ Upstream commit ade81479c7dda1ce3eedb215c78bc615bbd04f06 ]
A soft lockup issue was found in a product where about 56,000 tasks were in the OOM cgroup; the kernel was traversing them when the soft lockup was triggered.
watchdog: BUG: soft lockup - CPU#2 stuck for 23s! [VM Thread:1503066]
CPU: 2 PID: 1503066 Comm: VM Thread Kdump: loaded Tainted: G
Hardware name: Huawei Cloud OpenStack Nova, BIOS
RIP: 0010:console_unlock+0x343/0x540
RSP: 0000:ffffb751447db9a0 EFLAGS: 00000247 ORIG_RAX: ffffffffffffff13
RAX: 0000000000000001 RBX: 0000000000000000 RCX: 00000000ffffffff
RDX: 0000000000000000 RSI: 0000000000000004 RDI: 0000000000000247
RBP: ffffffffafc71f90 R08: 0000000000000000 R09: 0000000000000040
R10: 0000000000000080 R11: 0000000000000000 R12: ffffffffafc74bd0
R13: ffffffffaf60a220 R14: 0000000000000247 R15: 0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f2fe6ad91f0 CR3: 00000004b2076003 CR4: 0000000000360ee0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 vprintk_emit+0x193/0x280
 printk+0x52/0x6e
 dump_task+0x114/0x130
 mem_cgroup_scan_tasks+0x76/0x100
 dump_header+0x1fe/0x210
 oom_kill_process+0xd1/0x100
 out_of_memory+0x125/0x570
 mem_cgroup_out_of_memory+0xb5/0xd0
 try_charge+0x720/0x770
 mem_cgroup_try_charge+0x86/0x180
 mem_cgroup_try_charge_delay+0x1c/0x40
 do_anonymous_page+0xb5/0x390
 handle_mm_fault+0xc4/0x1f0
This is because thousands of processes are in the OOM cgroup and it takes a long time to traverse all of them. As a result, this leads to a soft lockup in the OOM process.

To fix this issue, call cond_resched() in the mem_cgroup_scan_tasks() function once every 1000 iterations. For the global OOM case, call touch_softlockup_watchdog() once every 1000 iterations instead to avoid this issue.
Link: https://lkml.kernel.org/r/20241224025238.3768787-1-chenridong@huaweicloud.co...
Fixes: 9cbb78bb3143 ("mm, memcg: introduce own oom handler to iterate only over its own threads")
Signed-off-by: Chen Ridong chenridong@huawei.com
Acked-by: Michal Hocko mhocko@suse.com
Cc: Roman Gushchin roman.gushchin@linux.dev
Cc: Johannes Weiner hannes@cmpxchg.org
Cc: Shakeel Butt shakeelb@google.com
Cc: Muchun Song songmuchun@bytedance.com
Cc: Michal Koutný mkoutny@suse.com
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton akpm@linux-foundation.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 mm/memcontrol.c | 7 ++++++-
 mm/oom_kill.c   | 8 +++++++-
 2 files changed, 13 insertions(+), 2 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 280bb6969c0bf..3f7cab196eb62 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1242,6 +1242,7 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
 {
 	struct mem_cgroup *iter;
 	int ret = 0;
+	int i = 0;
 
 	BUG_ON(memcg == root_mem_cgroup);
 
@@ -1250,8 +1251,12 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
 		struct task_struct *task;
 
 		css_task_iter_start(&iter->css, CSS_TASK_ITER_PROCS, &it);
-		while (!ret && (task = css_task_iter_next(&it)))
+		while (!ret && (task = css_task_iter_next(&it))) {
+			/* Avoid potential softlockup warning */
+			if ((++i & 1023) == 0)
+				cond_resched();
 			ret = fn(task, arg);
+		}
 		css_task_iter_end(&it);
 		if (ret) {
 			mem_cgroup_iter_break(memcg, iter);
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 4de30c6c5183f..f4c8ef863ea79 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -45,6 +45,7 @@
 #include <linux/init.h>
 #include <linux/mmu_notifier.h>
 #include <linux/cred.h>
+#include <linux/nmi.h>
 
 #include <asm/tlb.h>
 #include "internal.h"
@@ -430,10 +431,15 @@ static void dump_tasks(struct oom_control *oc)
 		mem_cgroup_scan_tasks(oc->memcg, dump_task, oc);
 	else {
 		struct task_struct *p;
+		int i = 0;
 
 		rcu_read_lock();
-		for_each_process(p)
+		for_each_process(p) {
+			/* Avoid potential softlockup warning */
+			if ((++i & 1023) == 0)
+				touch_softlockup_watchdog();
 			dump_task(p, oc);
+		}
 		rcu_read_unlock();
 	}
 }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tudor Ambarus tudor.ambarus@microchip.com
[ Upstream commit f732646d0ccd22f42ed7de5e59c0abb7a848e034 ]
The at91 QSPI IP defaults to a cs-setup time of half a QSPI clock period, which is not always enough: the sst26vf064b SPI NOR flash, for example, requires a minimum cs-setup time of 5 ns. None of the at91 SoCs can meet that minimum for this flash, as they operate at high frequencies and half a clock period does not suffice. Add support for configuring the CS setup time in the controller.
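The conversion from a requested delay in nanoseconds to peripheral-clock cycles is plain integer math; a standalone sketch of the formula the patch uses (the clock rate and delay below are example values, not hardware data):

    #include <stdio.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
        unsigned long clk_rate = 133000000; /* example pclk: 133 MHz */
        int delay = 5;                      /* e.g. sst26vf064b: >= 5 ns cs-setup */

        /* cycles = ceil(delay_ns * ceil(rate_Hz / 1e6) / 1000) */
        unsigned int cs_setup = DIV_ROUND_UP(delay * DIV_ROUND_UP(clk_rate, 1000000),
                                             1000);

        printf("cs_setup = %u cycle(s)\n", cs_setup); /* prints 1 */
        return 0;
    }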
Signed-off-by: Tudor Ambarus tudor.ambarus@microchip.com Link: https://lore.kernel.org/r/20221117105249.115649-5-tudor.ambarus@microchip.co... Signed-off-by: Mark Brown broonie@kernel.org Stable-dep-of: be92ab2de0ee ("spi: atmel-qspi: Memory barriers after memory-mapped I/O") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/spi/atmel-quadspi.c | 34 ++++++++++++++++++++++++++++++++++ 1 file changed, 34 insertions(+)
diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
index b5afe5790b1d2..58d5336b954d9 100644
--- a/drivers/spi/atmel-quadspi.c
+++ b/drivers/spi/atmel-quadspi.c
@@ -510,6 +510,39 @@ static int atmel_qspi_setup(struct spi_device *spi)
 	return 0;
 }
 
+static int atmel_qspi_set_cs_timing(struct spi_device *spi)
+{
+	struct spi_controller *ctrl = spi->master;
+	struct atmel_qspi *aq = spi_controller_get_devdata(ctrl);
+	unsigned long clk_rate;
+	u32 cs_setup;
+	int delay;
+	int ret;
+
+	delay = spi_delay_to_ns(&spi->cs_setup, NULL);
+	if (delay <= 0)
+		return delay;
+
+	clk_rate = clk_get_rate(aq->pclk);
+	if (!clk_rate)
+		return -EINVAL;
+
+	cs_setup = DIV_ROUND_UP((delay * DIV_ROUND_UP(clk_rate, 1000000)),
+				1000);
+
+	ret = pm_runtime_resume_and_get(ctrl->dev.parent);
+	if (ret < 0)
+		return ret;
+
+	aq->scr |= QSPI_SCR_DLYBS(cs_setup);
+	atmel_qspi_write(aq->scr, aq, QSPI_SCR);
+
+	pm_runtime_mark_last_busy(ctrl->dev.parent);
+	pm_runtime_put_autosuspend(ctrl->dev.parent);
+
+	return 0;
+}
+
 static void atmel_qspi_init(struct atmel_qspi *aq)
 {
 	/* Reset the QSPI controller */
@@ -555,6 +588,7 @@ static int atmel_qspi_probe(struct platform_device *pdev)
 
 	ctrl->mode_bits = SPI_RX_DUAL | SPI_RX_QUAD | SPI_TX_DUAL | SPI_TX_QUAD;
 	ctrl->setup = atmel_qspi_setup;
+	ctrl->set_cs_timing = atmel_qspi_set_cs_timing;
 	ctrl->bus_num = -1;
 	ctrl->mem_ops = &atmel_qspi_mem_ops;
 	ctrl->num_chipselect = 1;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yang Yingliang yangyingliang@huawei.com
[ Upstream commit ccbc6554ed66dc37778b8ed823bcaaabfb1731cf ]
Change the legacy name "master" to the modern name "host" or "controller".
No functional change.
Signed-off-by: Yang Yingliang yangyingliang@huawei.com Link: https://lore.kernel.org/r/20230110131805.2827248-4-yangyingliang@huawei.com Signed-off-by: Mark Brown broonie@kernel.org Stable-dep-of: be92ab2de0ee ("spi: atmel-qspi: Memory barriers after memory-mapped I/O") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/spi/atmel-quadspi.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
index 58d5336b954d9..bc6dfb6d86546 100644
--- a/drivers/spi/atmel-quadspi.c
+++ b/drivers/spi/atmel-quadspi.c
@@ -406,7 +406,7 @@ static int atmel_qspi_set_cfg(struct atmel_qspi *aq,
 
 static int atmel_qspi_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
 {
-	struct atmel_qspi *aq = spi_controller_get_devdata(mem->spi->master);
+	struct atmel_qspi *aq = spi_controller_get_devdata(mem->spi->controller);
 	u32 sr, offset;
 	int err;
 
@@ -476,7 +476,7 @@ static const struct spi_controller_mem_ops atmel_qspi_mem_ops = {
 
 static int atmel_qspi_setup(struct spi_device *spi)
 {
-	struct spi_controller *ctrl = spi->master;
+	struct spi_controller *ctrl = spi->controller;
 	struct atmel_qspi *aq = spi_controller_get_devdata(ctrl);
 	unsigned long src_rate;
 	u32 scbr;
@@ -512,7 +512,7 @@ static int atmel_qspi_setup(struct spi_device *spi)
 
 static int atmel_qspi_set_cs_timing(struct spi_device *spi)
 {
-	struct spi_controller *ctrl = spi->master;
+	struct spi_controller *ctrl = spi->controller;
 	struct atmel_qspi *aq = spi_controller_get_devdata(ctrl);
 	unsigned long clk_rate;
 	u32 cs_setup;
@@ -582,7 +582,7 @@ static int atmel_qspi_probe(struct platform_device *pdev)
 	struct resource *res;
 	int irq, err = 0;
 
-	ctrl = devm_spi_alloc_master(&pdev->dev, sizeof(*aq));
+	ctrl = devm_spi_alloc_host(&pdev->dev, sizeof(*aq));
 	if (!ctrl)
 		return -ENOMEM;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Csókás, Bence csokas.bence@prolan.hu
[ Upstream commit c0a0203cf57963792d59b3e4317a1d07b73df42a ]
Refactor the code to introduce an ops struct, to prepare for merging support for later SoCs, such as SAMA7G5. This code was based on the vendor's kernel (linux4microchip). Cc'ing original contributors.
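For context, an ops struct is the standard kernel idiom for per-variant dispatch; a hedged, self-contained C sketch of the shape (the names and the exec path are illustrative, not the driver's actual code):

    #include <stdio.h>

    struct qspi_ops {
        int (*set_cfg)(int op);
        int (*transfer)(int op);
    };

    static int v1_set_cfg(int op)  { printf("v1 cfg %d\n", op); return 0; }
    static int v1_transfer(int op) { printf("v1 xfer %d\n", op); return 0; }

    /* A later SoC (e.g. SAMA7G5) would only supply a different table. */
    static const struct qspi_ops v1_ops = {
        .set_cfg  = v1_set_cfg,
        .transfer = v1_transfer,
    };

    /* The common exec path only ever calls through the ops pointer. */
    static int exec_op(const struct qspi_ops *ops, int op)
    {
        int err = ops->set_cfg(op);

        return err ? err : ops->transfer(op);
    }

    int main(void) { return exec_op(&v1_ops, 42); }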
Signed-off-by: Csókás, Bence csokas.bence@prolan.hu Link: https://patch.msgid.link/20241128174316.3209354-2-csokas.bence@prolan.hu Signed-off-by: Mark Brown broonie@kernel.org Stable-dep-of: be92ab2de0ee ("spi: atmel-qspi: Memory barriers after memory-mapped I/O") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/spi/atmel-quadspi.c | 111 +++++++++++++++++++++++++----------- 1 file changed, 77 insertions(+), 34 deletions(-)
diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
index bc6dfb6d86546..32fcab2a11885 100644
--- a/drivers/spi/atmel-quadspi.c
+++ b/drivers/spi/atmel-quadspi.c
@@ -138,11 +138,15 @@
 #define QSPI_WPSR_WPVSRC_MASK GENMASK(15, 8)
 #define QSPI_WPSR_WPVSRC(src) (((src) << 8) & QSPI_WPSR_WPVSRC)
 
+#define ATMEL_QSPI_TIMEOUT	1000	/* ms */
+
 struct atmel_qspi_caps {
 	bool has_qspick;
 	bool has_ricr;
 };
 
+struct atmel_qspi_ops;
+
 struct atmel_qspi {
 	void __iomem *regs;
 	void __iomem *mem;
@@ -150,13 +154,22 @@ struct atmel_qspi {
 	struct clk *qspick;
 	struct platform_device *pdev;
 	const struct atmel_qspi_caps *caps;
+	const struct atmel_qspi_ops *ops;
 	resource_size_t mmap_size;
 	u32 pending;
+	u32 irq_mask;
 	u32 mr;
 	u32 scr;
 	struct completion cmd_completion;
 };
 
+struct atmel_qspi_ops {
+	int (*set_cfg)(struct atmel_qspi *aq, const struct spi_mem_op *op,
+		       u32 *offset);
+	int (*transfer)(struct spi_mem *mem, const struct spi_mem_op *op,
+			u32 offset);
+};
+
 struct atmel_qspi_mode {
 	u8 cmd_buswidth;
 	u8 addr_buswidth;
@@ -404,10 +417,60 @@ static int atmel_qspi_set_cfg(struct atmel_qspi *aq,
 	return 0;
 }
 
+static int atmel_qspi_wait_for_completion(struct atmel_qspi *aq, u32 irq_mask)
+{
+	int err = 0;
+	u32 sr;
+
+	/* Poll INSTRuction End status */
+	sr = atmel_qspi_read(aq, QSPI_SR);
+	if ((sr & irq_mask) == irq_mask)
+		return 0;
+
+	/* Wait for INSTRuction End interrupt */
+	reinit_completion(&aq->cmd_completion);
+	aq->pending = sr & irq_mask;
+	aq->irq_mask = irq_mask;
+	atmel_qspi_write(irq_mask, aq, QSPI_IER);
+	if (!wait_for_completion_timeout(&aq->cmd_completion,
+					 msecs_to_jiffies(ATMEL_QSPI_TIMEOUT)))
+		err = -ETIMEDOUT;
+	atmel_qspi_write(irq_mask, aq, QSPI_IDR);
+
+	return err;
+}
+
+static int atmel_qspi_transfer(struct spi_mem *mem,
+			       const struct spi_mem_op *op, u32 offset)
+{
+	struct atmel_qspi *aq = spi_controller_get_devdata(mem->spi->controller);
+
+	/* Skip to the final steps if there is no data */
+	if (!op->data.nbytes)
+		return atmel_qspi_wait_for_completion(aq,
+						      QSPI_SR_CMD_COMPLETED);
+
+	/* Dummy read of QSPI_IFR to synchronize APB and AHB accesses */
+	(void)atmel_qspi_read(aq, QSPI_IFR);
+
+	/* Send/Receive data */
+	if (op->data.dir == SPI_MEM_DATA_IN)
+		memcpy_fromio(op->data.buf.in, aq->mem + offset,
+			      op->data.nbytes);
+	else
+		memcpy_toio(aq->mem + offset, op->data.buf.out,
+			    op->data.nbytes);
+
+	/* Release the chip-select */
+	atmel_qspi_write(QSPI_CR_LASTXFER, aq, QSPI_CR);
+
+	return atmel_qspi_wait_for_completion(aq, QSPI_SR_CMD_COMPLETED);
+}
+
 static int atmel_qspi_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
 {
 	struct atmel_qspi *aq = spi_controller_get_devdata(mem->spi->controller);
-	u32 sr, offset;
+	u32 offset;
 	int err;
 
 	/*
@@ -416,46 +479,20 @@ static int atmel_qspi_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
 	 * when the flash memories overrun the controller's memory space.
 	 */
 	if (op->addr.val + op->data.nbytes > aq->mmap_size)
-		return -ENOTSUPP;
+		return -EOPNOTSUPP;
+
+	if (op->addr.nbytes > 4)
+		return -EOPNOTSUPP;
 
 	err = pm_runtime_resume_and_get(&aq->pdev->dev);
 	if (err < 0)
 		return err;
 
-	err = atmel_qspi_set_cfg(aq, op, &offset);
+	err = aq->ops->set_cfg(aq, op, &offset);
 	if (err)
 		goto pm_runtime_put;
 
-	/* Skip to the final steps if there is no data */
-	if (op->data.nbytes) {
-		/* Dummy read of QSPI_IFR to synchronize APB and AHB accesses */
-		(void)atmel_qspi_read(aq, QSPI_IFR);
-
-		/* Send/Receive data */
-		if (op->data.dir == SPI_MEM_DATA_IN)
-			memcpy_fromio(op->data.buf.in, aq->mem + offset,
-				      op->data.nbytes);
-		else
-			memcpy_toio(aq->mem + offset, op->data.buf.out,
-				    op->data.nbytes);
-
-		/* Release the chip-select */
-		atmel_qspi_write(QSPI_CR_LASTXFER, aq, QSPI_CR);
-	}
-
-	/* Poll INSTRuction End status */
-	sr = atmel_qspi_read(aq, QSPI_SR);
-	if ((sr & QSPI_SR_CMD_COMPLETED) == QSPI_SR_CMD_COMPLETED)
-		goto pm_runtime_put;
-
-	/* Wait for INSTRuction End interrupt */
-	reinit_completion(&aq->cmd_completion);
-	aq->pending = sr & QSPI_SR_CMD_COMPLETED;
-	atmel_qspi_write(QSPI_SR_CMD_COMPLETED, aq, QSPI_IER);
-	if (!wait_for_completion_timeout(&aq->cmd_completion,
-					 msecs_to_jiffies(1000)))
-		err = -ETIMEDOUT;
-	atmel_qspi_write(QSPI_SR_CMD_COMPLETED, aq, QSPI_IDR);
+	err = aq->ops->transfer(mem, op, offset);
 
 pm_runtime_put:
 	pm_runtime_mark_last_busy(&aq->pdev->dev);
@@ -569,12 +606,17 @@ static irqreturn_t atmel_qspi_interrupt(int irq, void *dev_id)
 		return IRQ_NONE;
 
 	aq->pending |= pending;
-	if ((aq->pending & QSPI_SR_CMD_COMPLETED) == QSPI_SR_CMD_COMPLETED)
+	if ((aq->pending & aq->irq_mask) == aq->irq_mask)
 		complete(&aq->cmd_completion);
 
 	return IRQ_HANDLED;
 }
 
+static const struct atmel_qspi_ops atmel_qspi_ops = {
+	.set_cfg = atmel_qspi_set_cfg,
+	.transfer = atmel_qspi_transfer,
+};
+
 static int atmel_qspi_probe(struct platform_device *pdev)
 {
 	struct spi_controller *ctrl;
@@ -599,6 +641,7 @@ static int atmel_qspi_probe(struct platform_device *pdev)
 
 	init_completion(&aq->cmd_completion);
 	aq->pdev = pdev;
+	aq->ops = &atmel_qspi_ops;
 
 	/* Map the registers */
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "qspi_base");
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bence Csókás csokas.bence@prolan.hu
[ Upstream commit be92ab2de0ee1a13291c3b47b2d7eb24d80c0a2c ]
The QSPI peripheral control and status registers are accessible via the SoC's APB bus, whereas MMIO transactions' data travels on the AHB bus.
Microchip documentation and even sample code from Atmel emphasise the need for a memory barrier before the first MMIO transaction to the AHB-connected QSPI, and before the last write to its registers via APB. This is achieved by the following lines in `atmel_qspi_transfer()`:
	/* Dummy read of QSPI_IFR to synchronize APB and AHB accesses */
	(void)atmel_qspi_read(aq, QSPI_IFR);
However, the current documentation makes no mention of synchronization requirements in the other direction, i.e. after the last data written via AHB and before the first register access on APB.
In our case, we were facing an issue where the QSPI peripheral would cease to send any new CSR (nCS Rise) interrupts, leading to a timeout in `atmel_qspi_wait_for_completion()` and ultimately this panic in higher levels:
ubi0 error: ubi_io_write: error -110 while writing 63108 bytes to PEB 491:128, written 63104 bytes
After months of extensive research into the codebase, fiddling around with the debugger and kgdb, and back-and-forth with Microchip, we came to the conclusion that the problem is probably that the peripheral is still busy receiving on AHB when the LASTXFER bit is written to its Control Register on APB. That write therefore gets lost, and the peripheral still thinks there is more data to come in the MMIO transfer. This hypothesis was first formed when we noticed that doubling the write() of QSPI_CR_LASTXFER seemed to solve the problem.
Ultimately, the solution is to introduce memory barriers after the AHB-mapped MMIO transfers, to ensure ordering.
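Condensed, the ordering rule the fix enforces on the write path looks like this (a sketch distilled from the diff below, not a drop-in snippet; aq, offset and the register helpers are the driver's own):

    	memcpy_toio(aq->mem + offset, op->data.buf.out, op->data.nbytes);
    	wmb();	/* all AHB data stores must complete... */
    	atmel_qspi_write(QSPI_CR_LASTXFER, aq, QSPI_CR);	/* ...before this APB write */

with a symmetric rmb() after memcpy_fromio() on the read path.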
Fixes: d5433def3153 ("mtd: spi-nor: atmel-quadspi: Add spi-mem support to atmel-quadspi") Cc: Hari.PrasathGE@microchip.com Cc: Mahesh.Abotula@microchip.com Cc: Marco.Cardellini@microchip.com Cc: stable@vger.kernel.org # c0a0203cf579: ("spi: atmel-quadspi: Create `atmel_qspi_ops`"...) Cc: stable@vger.kernel.org # 6.x.y Signed-off-by: Bence Csókás csokas.bence@prolan.hu Link: https://patch.msgid.link/20241219091258.395187-1-csokas.bence@prolan.hu Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/spi/atmel-quadspi.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
index 32fcab2a11885..a62baef08e6a8 100644
--- a/drivers/spi/atmel-quadspi.c
+++ b/drivers/spi/atmel-quadspi.c
@@ -454,13 +454,20 @@ static int atmel_qspi_transfer(struct spi_mem *mem,
 	(void)atmel_qspi_read(aq, QSPI_IFR);
 
 	/* Send/Receive data */
-	if (op->data.dir == SPI_MEM_DATA_IN)
+	if (op->data.dir == SPI_MEM_DATA_IN) {
 		memcpy_fromio(op->data.buf.in, aq->mem + offset,
 			      op->data.nbytes);
-	else
+
+		/* Synchronize AHB and APB accesses again */
+		rmb();
+	} else {
 		memcpy_toio(aq->mem + offset, op->data.buf.out,
 			    op->data.nbytes);
 
+		/* Synchronize AHB and APB accesses again */
+		wmb();
+	}
+
 	/* Release the chip-select */
 	atmel_qspi_write(QSPI_CR_LASTXFER, aq, QSPI_CR);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zijun Hu quic_zijuhu@quicinc.com
[ Upstream commit e41137d8bd1a8e8bab8dcbfe3ec056418db3df18 ]
Download the board-ID-specific NVM instead of the default one for WCN7850 when a board ID is available.
Signed-off-by: Zijun Hu quic_zijuhu@quicinc.com Signed-off-by: Luiz Augusto von Dentz luiz.von.dentz@intel.com Stable-dep-of: a2fad248947d ("Bluetooth: qca: Fix poor RF performance for WCN6855") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/bluetooth/btqca.c | 18 +++++++++++++++--- 1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
index 35fb26cbf2294..513ff87a7a049 100644
--- a/drivers/bluetooth/btqca.c
+++ b/drivers/bluetooth/btqca.c
@@ -739,6 +739,19 @@ static void qca_generate_hsp_nvm_name(char *fwname, size_t max_size,
 		snprintf(fwname, max_size, "qca/hpnv%02x%s.%x", rom_ver, variant, bid);
 }
 
+static inline void qca_get_nvm_name_generic(struct qca_fw_config *cfg,
+					    const char *stem, u8 rom_ver, u16 bid)
+{
+	if (bid == 0x0)
+		snprintf(cfg->fwname, sizeof(cfg->fwname), "qca/%snv%02x.bin", stem, rom_ver);
+	else if (bid & 0xff00)
+		snprintf(cfg->fwname, sizeof(cfg->fwname),
+			 "qca/%snv%02x.b%x", stem, rom_ver, bid);
+	else
+		snprintf(cfg->fwname, sizeof(cfg->fwname),
+			 "qca/%snv%02x.b%02x", stem, rom_ver, bid);
+}
+
 int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
 		   enum qca_btsoc_type soc_type, struct qca_btsoc_version ver,
 		   const char *firmware_name)
@@ -819,7 +832,7 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
 	/* Give the controller some time to get ready to receive the NVM */
 	msleep(10);
 
-	if (soc_type == QCA_QCA2066)
+	if (soc_type == QCA_QCA2066 || soc_type == QCA_WCN7850)
 		qca_read_fw_board_id(hdev, &boardid);
 
 	/* Download NVM configuration */
@@ -861,8 +874,7 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
 			 "qca/hpnv%02x.bin", rom_ver);
 		break;
 	case QCA_WCN7850:
-		snprintf(config.fwname, sizeof(config.fwname),
-			 "qca/hmtnv%02x.bin", rom_ver);
+		qca_get_nvm_name_generic(&config, "hmt", rom_ver, boardid);
 		break;
 
 	default:
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Cheng Jiang quic_chejiang@quicinc.com
[ Upstream commit a4c5a468c6329bde7dfd46bacff2cbf5f8a8152e ]
Different connectivity boards may be attached to the same platform. For example, QCA6698-based boards can support either a two-antenna or three-antenna solution, both of which work on the sa8775p-ride platform. Due to differences in connectivity boards and variations in RF performance from different foundries, different NVM configurations are used based on the board ID.
Therefore, if the NVM file named in the firmware-name property has an extension, that file is used directly. Otherwise, the driver first tries the .bNN (board ID) file and, if that fails, falls back to the .bin file.
Possible configurations:
firmware-name = "QCA6698/hpnv21";
firmware-name = "QCA6698/hpnv21.bin";
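A hedged userspace sketch of that lookup order (the board ID and printouts are invented for illustration; the kernel code in the diff below implements the real logic):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    static bool has_extension(const char *name)
    {
        const char *dot = strrchr(name, '.');

        /* a dot is required, but not as the first or last character */
        if (!dot || dot == name || dot[1] == '\0')
            return false;
        return strchr(dot, '/') == NULL; /* "foo.d/bar" is not an extension */
    }

    int main(void)
    {
        const char *fw = "QCA6698/hpnv21";

        if (has_extension(fw)) {
            printf("use %s as-is\n", fw);
        } else {
            printf("try %s.b8e first (example board ID 0x8e)\n", fw);
            printf("fall back to %s.bin if that is missing\n", fw);
        }
        return 0;
    }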
Signed-off-by: Cheng Jiang quic_chejiang@quicinc.com Signed-off-by: Luiz Augusto von Dentz luiz.von.dentz@intel.com Stable-dep-of: a2fad248947d ("Bluetooth: qca: Fix poor RF performance for WCN6855") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/bluetooth/btqca.c | 113 ++++++++++++++++++++++++++++---------- 1 file changed, 85 insertions(+), 28 deletions(-)
diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
index 513ff87a7a049..484a860785fde 100644
--- a/drivers/bluetooth/btqca.c
+++ b/drivers/bluetooth/btqca.c
@@ -289,6 +289,39 @@ int qca_send_pre_shutdown_cmd(struct hci_dev *hdev)
 }
 EXPORT_SYMBOL_GPL(qca_send_pre_shutdown_cmd);
 
+static bool qca_filename_has_extension(const char *filename)
+{
+	const char *suffix = strrchr(filename, '.');
+
+	/* File extensions require a dot, but not as the first or last character */
+	if (!suffix || suffix == filename || *(suffix + 1) == '\0')
+		return 0;
+
+	/* Avoid matching directories with names that look like files with extensions */
+	return !strchr(suffix, '/');
+}
+
+static bool qca_get_alt_nvm_file(char *filename, size_t max_size)
+{
+	char fwname[64];
+	const char *suffix;
+
+	/* nvm file name has an extension, replace with .bin */
+	if (qca_filename_has_extension(filename)) {
+		suffix = strrchr(filename, '.');
+		strscpy(fwname, filename, suffix - filename + 1);
+		snprintf(fwname + (suffix - filename),
+			 sizeof(fwname) - (suffix - filename), ".bin");
+		/* If nvm file is already the default one, return false to skip the retry. */
+		if (strcmp(fwname, filename) == 0)
+			return false;
+
+		snprintf(filename, max_size, "%s", fwname);
+		return true;
+	}
+	return false;
+}
+
 static int qca_tlv_check_data(struct hci_dev *hdev,
 			      struct qca_fw_config *config,
 			      u8 *fw_data, size_t fw_size,
@@ -586,6 +619,19 @@ static int qca_download_firmware(struct hci_dev *hdev,
 				   config->fwname, ret);
 			return ret;
 		}
+	}
+	/* If the board-specific file is missing, try loading the default
+	 * one, unless that was attempted already.
+	 */
+	else if (config->type == TLV_TYPE_NVM &&
+		 qca_get_alt_nvm_file(config->fwname, sizeof(config->fwname))) {
+		bt_dev_info(hdev, "QCA Downloading %s", config->fwname);
+		ret = request_firmware(&fw, config->fwname, &hdev->dev);
+		if (ret) {
+			bt_dev_err(hdev, "QCA Failed to request file: %s (%d)",
+				   config->fwname, ret);
+			return ret;
+		}
 	} else {
 		bt_dev_err(hdev, "QCA Failed to request file: %s (%d)",
 			   config->fwname, ret);
@@ -722,34 +768,38 @@ static int qca_check_bdaddr(struct hci_dev *hdev, const struct qca_fw_config *co
 	return 0;
 }
 
-static void qca_generate_hsp_nvm_name(char *fwname, size_t max_size,
+static void qca_get_nvm_name_by_board(char *fwname, size_t max_size,
+				      const char *stem,
+				      enum qca_btsoc_type soc_type,
 				      struct qca_btsoc_version ver,
 				      u8 rom_ver, u16 bid)
 {
 	const char *variant;
+	const char *prefix;
 
-	/* hsp gf chip */
-	if ((le32_to_cpu(ver.soc_id) & QCA_HSP_GF_SOC_MASK) == QCA_HSP_GF_SOC_ID)
-		variant = "g";
-	else
-		variant = "";
+	/* Set the default value to variant and prefix */
+	variant = "";
+	prefix = "b";
 
-	if (bid == 0x0)
-		snprintf(fwname, max_size, "qca/hpnv%02x%s.bin", rom_ver, variant);
-	else
-		snprintf(fwname, max_size, "qca/hpnv%02x%s.%x", rom_ver, variant, bid);
-}
+	if (soc_type == QCA_QCA2066)
+		prefix = "";
 
-static inline void qca_get_nvm_name_generic(struct qca_fw_config *cfg,
-					    const char *stem, u8 rom_ver, u16 bid)
-{
-	if (bid == 0x0)
-		snprintf(cfg->fwname, sizeof(cfg->fwname), "qca/%snv%02x.bin", stem, rom_ver);
-	else if (bid & 0xff00)
-		snprintf(cfg->fwname, sizeof(cfg->fwname),
-			 "qca/%snv%02x.b%x", stem, rom_ver, bid);
-	else
-		snprintf(cfg->fwname, sizeof(cfg->fwname),
-			 "qca/%snv%02x.b%02x", stem, rom_ver, bid);
+	if (soc_type == QCA_WCN6855 || soc_type == QCA_QCA2066) {
+		/* If the chip is manufactured by GlobalFoundries */
+		if ((le32_to_cpu(ver.soc_id) & QCA_HSP_GF_SOC_MASK) == QCA_HSP_GF_SOC_ID)
+			variant = "g";
+	}
+
+	if (rom_ver != 0) {
+		if (bid == 0x0 || bid == 0xffff)
+			snprintf(fwname, max_size, "qca/%s%02x%s.bin", stem, rom_ver, variant);
+		else
+			snprintf(fwname, max_size, "qca/%s%02x%s.%s%02x", stem, rom_ver,
+				 variant, prefix, bid);
+	} else {
+		if (bid == 0x0 || bid == 0xffff)
+			snprintf(fwname, max_size, "qca/%s%s.bin", stem, variant);
+		else
+			snprintf(fwname, max_size, "qca/%s%s.%s%02x", stem, variant, prefix, bid);
+	}
 }
 
 int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
@@ -838,8 +888,14 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
 	/* Download NVM configuration */
 	config.type = TLV_TYPE_NVM;
 	if (firmware_name) {
-		snprintf(config.fwname, sizeof(config.fwname),
-			 "qca/%s", firmware_name);
+		/* The firmware name has an extension, use it directly */
+		if (qca_filename_has_extension(firmware_name)) {
+			snprintf(config.fwname, sizeof(config.fwname), "qca/%s", firmware_name);
+		} else {
+			qca_read_fw_board_id(hdev, &boardid);
+			qca_get_nvm_name_by_board(config.fwname, sizeof(config.fwname),
+						  firmware_name, soc_type, ver, 0, boardid);
+		}
 	} else {
 		switch (soc_type) {
 		case QCA_WCN3990:
@@ -858,8 +914,9 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
 				 "qca/apnv%02x.bin", rom_ver);
 			break;
 		case QCA_QCA2066:
-			qca_generate_hsp_nvm_name(config.fwname,
-				sizeof(config.fwname), ver, rom_ver, boardid);
+			qca_get_nvm_name_by_board(config.fwname,
+				sizeof(config.fwname), "hpnv", soc_type, ver,
+				rom_ver, boardid);
 			break;
 		case QCA_QCA6390:
 			snprintf(config.fwname, sizeof(config.fwname),
@@ -874,9 +931,9 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
 				 "qca/hpnv%02x.bin", rom_ver);
 			break;
 		case QCA_WCN7850:
-			qca_get_nvm_name_generic(&config, "hmt", rom_ver, boardid);
+			qca_get_nvm_name_by_board(config.fwname, sizeof(config.fwname),
+						  "hmtnv", soc_type, ver, rom_ver, boardid);
 			break;
-
 		default:
 			snprintf(config.fwname, sizeof(config.fwname),
 				 "qca/nvm_%08x.bin", soc_ver);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zijun Hu quic_zijuhu@quicinc.com
[ Upstream commit a2fad248947d702ed3dcb52b8377c1a3ae201e44 ]
For WCN6855, board ID specific NVM needs to be downloaded once board ID is available, but the default NVM is always downloaded currently.
The wrong NVM causes poor RF performance and affects the user experience on several types of WCN6855-based laptops on the market.
Fix by downloading board ID specific NVM if board ID is available.
Fixes: 095327fede00 ("Bluetooth: hci_qca: Add support for QTI Bluetooth chip wcn6855") Cc: stable@vger.kernel.org # 6.4 Signed-off-by: Zijun Hu quic_zijuhu@quicinc.com Tested-by: Johan Hovold johan+linaro@kernel.org Reviewed-by: Johan Hovold johan+linaro@kernel.org Tested-by: Steev Klimaszewski steev@kali.org #Thinkpad X13s Signed-off-by: Luiz Augusto von Dentz luiz.von.dentz@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/bluetooth/btqca.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
index 484a860785fde..892e2540f008a 100644
--- a/drivers/bluetooth/btqca.c
+++ b/drivers/bluetooth/btqca.c
@@ -927,8 +927,9 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
 				 "qca/msnv%02x.bin", rom_ver);
 			break;
 		case QCA_WCN6855:
-			snprintf(config.fwname, sizeof(config.fwname),
-				 "qca/hpnv%02x.bin", rom_ver);
+			qca_read_fw_board_id(hdev, &boardid);
+			qca_get_nvm_name_by_board(config.fwname, sizeof(config.fwname),
+						  "hpnv", soc_type, ver, rom_ver, boardid);
 			break;
 		case QCA_WCN7850:
 			qca_get_nvm_name_by_board(config.fwname, sizeof(config.fwname),
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com
[ Upstream commit b8eb1081d267708ba976525a1fe2162901b34f3a ]
In order to migrate some (few) old clock drivers to the common mtk_clk_simple_probe() function, add dummy clock ops to be able to insert a dummy clock with ID 0 at the beginning of the list.
Signed-off-by: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com Reviewed-by: Miles Chen miles.chen@mediatek.com Reviewed-by: Chen-Yu Tsai wenst@chromium.org Tested-by: Miles Chen miles.chen@mediatek.com Link: https://lore.kernel.org/r/20230120092053.182923-8-angelogioacchino.delregno@... Tested-by: Mingming Su mingming.su@mediatek.com Signed-off-by: Stephen Boyd sboyd@kernel.org Stable-dep-of: 7c8746126a4e ("clk: mediatek: mt2701-vdec: fix conversion to mtk_clk_simple_probe") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/clk/mediatek/clk-mtk.c | 16 ++++++++++++++++ drivers/clk/mediatek/clk-mtk.h | 19 +++++++++++++++++++ 2 files changed, 35 insertions(+)
diff --git a/drivers/clk/mediatek/clk-mtk.c b/drivers/clk/mediatek/clk-mtk.c
index 9dbfc11d5c591..7b1ad73309b1a 100644
--- a/drivers/clk/mediatek/clk-mtk.c
+++ b/drivers/clk/mediatek/clk-mtk.c
@@ -21,6 +21,22 @@
 #include "clk-gate.h"
 #include "clk-mux.h"
 
+const struct mtk_gate_regs cg_regs_dummy = { 0, 0, 0 };
+EXPORT_SYMBOL_GPL(cg_regs_dummy);
+
+static int mtk_clk_dummy_enable(struct clk_hw *hw)
+{
+	return 0;
+}
+
+static void mtk_clk_dummy_disable(struct clk_hw *hw) { }
+
+const struct clk_ops mtk_clk_dummy_ops = {
+	.enable = mtk_clk_dummy_enable,
+	.disable = mtk_clk_dummy_disable,
+};
+EXPORT_SYMBOL_GPL(mtk_clk_dummy_ops);
+
 static void mtk_init_clk_data(struct clk_hw_onecell_data *clk_data,
 			      unsigned int clk_num)
 {
diff --git a/drivers/clk/mediatek/clk-mtk.h b/drivers/clk/mediatek/clk-mtk.h
index 65c24ab6c9470..04f371730ee30 100644
--- a/drivers/clk/mediatek/clk-mtk.h
+++ b/drivers/clk/mediatek/clk-mtk.h
@@ -22,6 +22,25 @@
 
 struct platform_device;
 
+/*
+ * We need the clock IDs to start from zero but to maintain devicetree
+ * backwards compatibility we can't change bindings to start from zero.
+ * Only a few platforms are affected, so we solve issues given by the
+ * commonized MTK clocks probe function(s) by adding a dummy clock at
+ * the beginning where needed.
+ */
+#define CLK_DUMMY 0
+
+extern const struct clk_ops mtk_clk_dummy_ops;
+extern const struct mtk_gate_regs cg_regs_dummy;
+
+#define GATE_DUMMY(_id, _name) {		\
+		.id = _id,			\
+		.name = _name,			\
+		.regs = &cg_regs_dummy,		\
+		.ops = &mtk_clk_dummy_ops,	\
+	}
+
 struct mtk_fixed_clk {
 	int id;
 	const char *name;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Daniel Golle daniel@makrotopia.org
[ Upstream commit 7c8746126a4e256fcf1af9174ee7d92cc3f3bc31 ]
Commit 973d1607d936 ("clk: mediatek: mt2701: use mtk_clk_simple_probe to simplify driver") broke DT bindings as the highest index was reduced by 1 because the id count starts from 1 and not from 0.
Fix this, like for other drivers which had the same issue, by adding a dummy clk at index 0.
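The off-by-one is easiest to see as plain array indexing; a userspace sketch (clock names and sizes are illustrative, not the driver's data):

    #include <stdio.h>

    #define CLK_DUMMY      0
    #define CLK_VDEC_CKGEN 1 /* binding IDs historically start at 1 */
    #define CLK_VDEC_LARB  2

    int main(void)
    {
        /* With only the two real clocks, an array sized by their count (2)
         * cannot hold index 2; reserving slot 0 for a dummy keeps every
         * DT binding ID valid. */
        const char *clks[] = { "vdec_dummy", "vdec_cken", "vdec_larb_cken" };

        printf("ID %d -> %s\n", CLK_VDEC_LARB, clks[CLK_VDEC_LARB]);
        return 0;
    }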
Fixes: 973d1607d936 ("clk: mediatek: mt2701: use mtk_clk_simple_probe to simplify driver") Cc: stable@vger.kernel.org Signed-off-by: Daniel Golle daniel@makrotopia.org Link: https://lore.kernel.org/r/b126a5577f3667ef19b1b5feea5e70174084fb03.173430066... Reviewed-by: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com Signed-off-by: Stephen Boyd sboyd@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/clk/mediatek/clk-mt2701-vdec.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/clk/mediatek/clk-mt2701-vdec.c b/drivers/clk/mediatek/clk-mt2701-vdec.c
index 0f07c5d731df6..fdd2645c167f4 100644
--- a/drivers/clk/mediatek/clk-mt2701-vdec.c
+++ b/drivers/clk/mediatek/clk-mt2701-vdec.c
@@ -31,6 +31,7 @@ static const struct mtk_gate_regs vdec1_cg_regs = {
 	GATE_MTK(_id, _name, _parent, &vdec1_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
 
 static const struct mtk_gate vdec_clks[] = {
+	GATE_DUMMY(CLK_DUMMY, "vdec_dummy"),
 	GATE_VDEC0(CLK_VDEC_CKGEN, "vdec_cken", "vdec_sel", 0),
 	GATE_VDEC1(CLK_VDEC_LARB, "vdec_larb_cken", "mm_sel", 0),
 };
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Daniel Golle daniel@makrotopia.org
[ Upstream commit fd291adc5e9a4ee6cd91e57f148f3b427f80647b ]
Add dummy clk for index 0 which was missed during the conversion to mtk_clk_simple_probe().
Fixes: 973d1607d936 ("clk: mediatek: mt2701: use mtk_clk_simple_probe to simplify driver") Cc: stable@vger.kernel.org Signed-off-by: Daniel Golle daniel@makrotopia.org Link: https://lore.kernel.org/r/b8526c882a50f2b158df0eccb4a165956fd8fa13.173430066... Reviewed-by: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com Signed-off-by: Stephen Boyd sboyd@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/clk/mediatek/clk-mt2701-bdp.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/clk/mediatek/clk-mt2701-bdp.c b/drivers/clk/mediatek/clk-mt2701-bdp.c
index b0f0572079452..d2647ea58ae28 100644
--- a/drivers/clk/mediatek/clk-mt2701-bdp.c
+++ b/drivers/clk/mediatek/clk-mt2701-bdp.c
@@ -31,6 +31,7 @@ static const struct mtk_gate_regs bdp1_cg_regs = {
 	GATE_MTK(_id, _name, _parent, &bdp1_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
 
 static const struct mtk_gate bdp_clks[] = {
+	GATE_DUMMY(CLK_DUMMY, "bdp_dummy"),
 	GATE_BDP0(CLK_BDP_BRG_BA, "brg_baclk", "mm_sel", 0),
 	GATE_BDP0(CLK_BDP_BRG_DRAM, "brg_dram", "mm_sel", 1),
 	GATE_BDP0(CLK_BDP_LARB_DRAM, "larb_dram", "mm_sel", 2),
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Daniel Golle daniel@makrotopia.org
[ Upstream commit 366640868ccb4a7991aebe8442b01340fab218e2 ]
Add dummy clk for index 0 which was missed during the conversion to mtk_clk_simple_probe().
Fixes: 973d1607d936 ("clk: mediatek: mt2701: use mtk_clk_simple_probe to simplify driver") Cc: stable@vger.kernel.org Signed-off-by: Daniel Golle daniel@makrotopia.org Link: https://lore.kernel.org/r/d677486a5c563fe5c47aa995841adc2aaa183b8a.173430066... Reviewed-by: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com Signed-off-by: Stephen Boyd sboyd@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/clk/mediatek/clk-mt2701-img.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/clk/mediatek/clk-mt2701-img.c b/drivers/clk/mediatek/clk-mt2701-img.c
index eb172473f0755..569b6d3607dd6 100644
--- a/drivers/clk/mediatek/clk-mt2701-img.c
+++ b/drivers/clk/mediatek/clk-mt2701-img.c
@@ -22,6 +22,7 @@ static const struct mtk_gate_regs img_cg_regs = {
 	GATE_MTK(_id, _name, _parent, &img_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
 
 static const struct mtk_gate img_clks[] = {
+	GATE_DUMMY(CLK_DUMMY, "img_dummy"),
 	GATE_IMG(CLK_IMG_SMI_COMM, "img_smi_comm", "mm_sel", 0),
 	GATE_IMG(CLK_IMG_RESZ, "img_resz", "mm_sel", 1),
 	GATE_IMG(CLK_IMG_JPGDEC_SMI, "img_jpgdec_smi", "mm_sel", 5),
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Carpenter dan.carpenter@linaro.org
[ Upstream commit 82a0a3e6f8c02b3236b55e784a083fa4ee07c321 ]
My static checker rule complains about this code. The concern is that if "sample_space" is negative then the "sample_space >= runtime->channels" condition will not work as intended because it will be type promoted to a high unsigned int value.
strm->fifo_sample_size is SSI_FIFO_DEPTH (32). The SSIFSR_TDC_MASK is 0x3f. Without any further context it does seem like a reasonable warning and it can't hurt to add a check for negatives.
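The promotion pitfall is easy to reproduce in isolation; a minimal userspace demo (the values are invented, but the types match the driver: a signed count compared against an unsigned channel count):

    #include <stdio.h>

    int main(void)
    {
        int sample_space = -3;     /* e.g. the FIFO reported more than expected */
        unsigned int channels = 2; /* runtime->channels is unsigned */

        /* The usual arithmetic conversions turn -3 into a huge unsigned
         * value, so this guard passes when it should fail. */
        if (sample_space >= channels)
            printf("negative value slipped through the >= check\n");

        if (sample_space < 0)
            printf("an explicit signed check catches it first\n");
        return 0;
    }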
Cc: stable@vger.kernel.org Fixes: 03e786bd4341 ("ASoC: sh: Add RZ/G2L SSIF-2 driver") Signed-off-by: Dan Carpenter dan.carpenter@linaro.org Reviewed-by: Geert Uytterhoeven geert+renesas@glider.be Link: https://patch.msgid.link/e07c3dc5-d885-4b04-a742-71f42243f4fd@stanley.mounta... Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/sh/rz-ssi.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/sound/soc/sh/rz-ssi.c b/sound/soc/sh/rz-ssi.c
index 468050467bb39..12e063c29a2ab 100644
--- a/sound/soc/sh/rz-ssi.c
+++ b/sound/soc/sh/rz-ssi.c
@@ -483,6 +483,8 @@ static int rz_ssi_pio_send(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
 	sample_space = strm->fifo_sample_size;
 	ssifsr = rz_ssi_reg_readl(ssi, SSIFSR);
 	sample_space -= (ssifsr >> SSIFSR_TDC_SHIFT) & SSIFSR_TDC_MASK;
+	if (sample_space < 0)
+		return -EINVAL;
 
 	/* Only add full frames at a time */
 	while (frames_left && (sample_space >= runtime->channels)) {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Douglas Gilbert dgilbert@interlog.com
[ Upstream commit 2bbeb8d12404cf0603f513fc33269ef9abfbb396 ]
The default handling of the NOT READY sense key is to wait for the device to become ready. The "wait" is assumed to be relatively short. However there is a sub-class of NOT READY that have the "... in progress" phrase in their additional sense code and these can take much longer. Following on from commit 505aa4b6a883 ("scsi: sd: Defer spinning up drive while SANITIZE is in progress") we now have element depopulation and restoration that can take a long time. For example, over 24 hours for a 20 TB, 7200 rpm hard disk to depopulate 1 of its 20 elements.
Add handling of ASC/ASCQ 0x4,0x24 (depopulation in progress) and ASC/ASCQ 0x4,0x25 (depopulation restoration in progress) to sd.c. scsi_lib.c has incomplete handling of these two conditions, so complete it.
Signed-off-by: Douglas Gilbert dgilbert@interlog.com Link: https://lore.kernel.org/r/20231015050650.131145-1-dgilbert@interlog.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Stable-dep-of: 9ff7c383b8ac ("scsi: core: Do not retry I/Os during depopulation") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/scsi_lib.c | 1 + drivers/scsi/sd.c | 4 ++++ 2 files changed, 5 insertions(+)
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 5c5954b78585e..b8d58120badde 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -783,6 +783,7 @@ static void scsi_io_completion_action(struct scsi_cmnd *cmd, int result)
 			case 0x1b: /* sanitize in progress */
 			case 0x1d: /* configuration in progress */
 			case 0x24: /* depopulation in progress */
+			case 0x25: /* depopulation restore in progress */
 				action = ACTION_DELAYED_RETRY;
 				break;
 			case 0x0a: /* ALUA state transition */
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index b35ef52d9c632..c3006524eb039 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -2191,6 +2191,10 @@ sd_spinup_disk(struct scsi_disk *sdkp)
 					break;	/* unavailable */
 				if (sshdr.asc == 4 && sshdr.ascq == 0x1b)
 					break;	/* sanitize in progress */
+				if (sshdr.asc == 4 && sshdr.ascq == 0x24)
+					break;	/* depopulation in progress */
+				if (sshdr.asc == 4 && sshdr.ascq == 0x25)
+					break;	/* depopulation restoration in progress */
 				/*
 				 * Issue command to spin up drive when not ready
 				 */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Igor Pylypiv ipylypiv@google.com
[ Upstream commit 9ff7c383b8ac0c482a1da7989f703406d78445c6 ]
Fail I/Os instead of retrying them, to prevent user space processes from being blocked on I/O completion for several minutes.
Retrying I/Os during "depopulation in progress" or "depopulation restore in progress" results in a continuous retry loop until the depopulation completes or until the I/O retry loop is aborted due to a timeout by the scsi_cmd_runtime_exceeced().
Depopulation is slow and can take 24+ hours to complete on 20+ TB HDDs. Most I/Os in the depopulation retry loop end up taking several minutes before returning the failure to user space.
Cc: stable@vger.kernel.org # 4.18.x: 2bbeb8d scsi: core: Handle depopulation and restoration in progress Cc: stable@vger.kernel.org # 4.18.x Fixes: e37c7d9a0341 ("scsi: core: sanitize++ in progress") Signed-off-by: Igor Pylypiv ipylypiv@google.com Link: https://lore.kernel.org/r/20250131184408.859579-1-ipylypiv@google.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/scsi_lib.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index b8d58120badde..72d31b2267ef4 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -782,13 +782,18 @@ static void scsi_io_completion_action(struct scsi_cmnd *cmd, int result)
 			case 0x1a: /* start stop unit in progress */
 			case 0x1b: /* sanitize in progress */
 			case 0x1d: /* configuration in progress */
-			case 0x24: /* depopulation in progress */
-			case 0x25: /* depopulation restore in progress */
 				action = ACTION_DELAYED_RETRY;
 				break;
 			case 0x0a: /* ALUA state transition */
 				action = ACTION_DELAYED_REPREP;
 				break;
+			/*
+			 * Depopulation might take many hours,
+			 * thus it is not worthwhile to retry.
+			 */
+			case 0x24: /* depopulation in progress */
+			case 0x25: /* depopulation restore in progress */
+				fallthrough;
 			default:
 				action = ACTION_FAIL;
 				break;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Chen-Yu Tsai wenst@chromium.org
[ Upstream commit 26f6e91fa29a58fdc76b47f94f8f6027944a490c ]
Most SoC dtsi files have the display output interfaces disabled by default, and only enabled on boards that utilize them. The MT8183 has it backwards: the display outputs are left enabled by default, and only disabled at the board level.
Reverse the situation for the DSI output so that it follows the normal scheme. For ease of backporting the DPI output is handled in a separate patch.
Fixes: 88ec840270e6 ("arm64: dts: mt8183: Add dsi node") Fixes: 19b6403f1e2a ("arm64: dts: mt8183: add mt8183 pumpkin board") Cc: stable@vger.kernel.org Signed-off-by: Chen-Yu Tsai wenst@chromium.org Reviewed-by: Fei Shao fshao@chromium.org Link: https://lore.kernel.org/r/20241025075630.3917458-2-wenst@chromium.org Signed-off-by: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/boot/dts/mediatek/mt8183.dtsi | 1 + 1 file changed, 1 insertion(+)
diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
index 2147e152683bf..fe4632744f6e5 100644
--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
@@ -1753,6 +1753,7 @@
 			resets = <&mmsys MT8183_MMSYS_SW0_RST_B_DISP_DSI0>;
 			phys = <&mipi_tx0>;
 			phy-names = "dphy";
+			status = "disabled";
 		};
 
 		mutex: mutex@14016000 {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org
[ Upstream commit 22dbcfd6f4a9f7d4391f0aa66d3f46addea4bee9 ]
Hex numbers in addresses and sizes should rather be eight digits, not nine. Drop the leading zeros. No functional change (same DTB).
Suggested-by: Konrad Dybcio konrad.dybcio@linaro.org Signed-off-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org Reviewed-by: Konrad Dybcio konrad.dybcio@linaro.org Signed-off-by: Bjorn Andersson andersson@kernel.org Link: https://lore.kernel.org/r/20221115105046.95254-1-krzysztof.kozlowski@linaro.... Stable-dep-of: 3751fe2cba2a ("arm64: dts: qcom: sm8450: Fix CDSP memory length") Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/boot/dts/qcom/sm8350.dtsi | 2 +- arch/arm64/boot/dts/qcom/sm8450.dtsi | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
index 956237489bc46..5a4972afc9776 100644
--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
@@ -2226,7 +2226,7 @@
 
 		cdsp: remoteproc@98900000 {
 			compatible = "qcom,sm8350-cdsp-pas";
-			reg = <0 0x098900000 0 0x1400000>;
+			reg = <0 0x98900000 0 0x1400000>;
 
 			interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_LEVEL_HIGH>,
 					      <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>,
diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
index 3f79aea644460..9151ed3b0a62f 100644
--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
@@ -2093,7 +2093,7 @@
 
 		remoteproc_adsp: remoteproc@30000000 {
 			compatible = "qcom,sm8450-adsp-pas";
-			reg = <0 0x030000000 0 0x100>;
+			reg = <0 0x30000000 0 0x100>;
 
 			interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
 					      <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
@@ -2159,7 +2159,7 @@
 
 		remoteproc_cdsp: remoteproc@32300000 {
 			compatible = "qcom,sm8450-cdsp-pas";
-			reg = <0 0x032300000 0 0x1400000>;
+			reg = <0 0x32300000 0 0x1400000>;
 
 			interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
 					      <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org
[ Upstream commit 3751fe2cba2a9fba2204ef62102bc4bb027cec7b ]
The address space in the CDSP PAS (Peripheral Authentication Service) remoteproc node should point to the QDSP PUB address space (QDSP6...SS_PUB), which has a length of 0x10000. The value of 0x1400000 was copied from an older DTS, but it does not look accurate at all.
This should have no functional impact on Linux users, because the PAS loader does not use this address space at all.
Fixes: 1172729576fb ("arm64: dts: qcom: sm8450: Add remoteproc enablers and instances") Cc: stable@vger.kernel.org Reviewed-by: Neil Armstrong neil.armstrong@linaro.org Signed-off-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org Link: https://lore.kernel.org/r/20241213-dts-qcom-cdsp-mpss-base-address-v3-5-2e00... Signed-off-by: Bjorn Andersson andersson@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/boot/dts/qcom/sm8450.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
index 9151ed3b0a62f..9420857871b1e 100644
--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
@@ -2159,7 +2159,7 @@
 
 		remoteproc_cdsp: remoteproc@32300000 {
 			compatible = "qcom,sm8450-cdsp-pas";
-			reg = <0 0x32300000 0 0x1400000>;
+			reg = <0 0x32300000 0 0x10000>;
 
 			interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
 					      <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eddie James eajames@linux.ibm.com
[ Upstream commit 441b7152729f4a2bdb100135a58625fa0aeb69e4 ]
Since the bios event log is freed in the device release function, let devres handle the deallocation. This will allow other memory allocation/mapping functions to be used for the bios event log.
Signed-off-by: Eddie James eajames@linux.ibm.com Tested-by: Jarkko Sakkinen jarkko@kernel.org Reviewed-by: Jarkko Sakkinen jarkko@kernel.org Signed-off-by: Jarkko Sakkinen jarkko@kernel.org Stable-dep-of: a3a860bc0fd6 ("tpm: Change to kvalloc() in eventlog/acpi.c") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/char/tpm/eventlog/acpi.c | 5 +++-- drivers/char/tpm/eventlog/efi.c | 13 +++++++------ drivers/char/tpm/eventlog/of.c | 3 ++- drivers/char/tpm/tpm-chip.c | 1 - 4 files changed, 12 insertions(+), 10 deletions(-)
diff --git a/drivers/char/tpm/eventlog/acpi.c b/drivers/char/tpm/eventlog/acpi.c
index cd266021d0103..bd757d836c5cf 100644
--- a/drivers/char/tpm/eventlog/acpi.c
+++ b/drivers/char/tpm/eventlog/acpi.c
@@ -14,6 +14,7 @@
 * Access to the event log extended by the TCG BIOS of PC platform
 */
 
+#include <linux/device.h>
 #include <linux/seq_file.h>
 #include <linux/fs.h>
 #include <linux/security.h>
@@ -135,7 +136,7 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
 	}
 
 	/* malloc EventLog space */
-	log->bios_event_log = kmalloc(len, GFP_KERNEL);
+	log->bios_event_log = devm_kmalloc(&chip->dev, len, GFP_KERNEL);
 	if (!log->bios_event_log)
 		return -ENOMEM;
 
@@ -164,7 +165,7 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
 	return format;
 
 err:
-	kfree(log->bios_event_log);
+	devm_kfree(&chip->dev, log->bios_event_log);
 	log->bios_event_log = NULL;
 	return ret;
 }
diff --git a/drivers/char/tpm/eventlog/efi.c b/drivers/char/tpm/eventlog/efi.c
index e6cb9d525e30c..4e9d7c2bf32ee 100644
--- a/drivers/char/tpm/eventlog/efi.c
+++ b/drivers/char/tpm/eventlog/efi.c
@@ -6,6 +6,7 @@
 * Thiebaud Weksteen tweek@google.com
 */
 
+#include <linux/device.h>
 #include <linux/efi.h>
 #include <linux/tpm_eventlog.h>
 
@@ -55,7 +56,7 @@ int tpm_read_log_efi(struct tpm_chip *chip)
 	}
 
 	/* malloc EventLog space */
-	log->bios_event_log = kmemdup(log_tbl->log, log_size, GFP_KERNEL);
+	log->bios_event_log = devm_kmemdup(&chip->dev, log_tbl->log, log_size, GFP_KERNEL);
 	if (!log->bios_event_log) {
 		ret = -ENOMEM;
 		goto out;
@@ -76,7 +77,7 @@ int tpm_read_log_efi(struct tpm_chip *chip)
 			   MEMREMAP_WB);
 	if (!final_tbl) {
 		pr_err("Could not map UEFI TPM final log\n");
-		kfree(log->bios_event_log);
+		devm_kfree(&chip->dev, log->bios_event_log);
 		ret = -ENOMEM;
 		goto out;
 	}
@@ -91,11 +92,11 @@ int tpm_read_log_efi(struct tpm_chip *chip)
 	 * Allocate memory for the 'combined log' where we will append the
 	 * 'final events log' to.
 	 */
-	tmp = krealloc(log->bios_event_log,
-		       log_size + final_events_log_size,
-		       GFP_KERNEL);
+	tmp = devm_krealloc(&chip->dev, log->bios_event_log,
+			    log_size + final_events_log_size,
+			    GFP_KERNEL);
 	if (!tmp) {
-		kfree(log->bios_event_log);
+		devm_kfree(&chip->dev, log->bios_event_log);
 		ret = -ENOMEM;
 		goto out;
 	}
diff --git a/drivers/char/tpm/eventlog/of.c b/drivers/char/tpm/eventlog/of.c
index a9ce66d09a754..741ab2204b11a 100644
--- a/drivers/char/tpm/eventlog/of.c
+++ b/drivers/char/tpm/eventlog/of.c
@@ -10,6 +10,7 @@
 * Read the event log created by the firmware on PPC64
 */
 
+#include <linux/device.h>
 #include <linux/slab.h>
 #include <linux/of.h>
 #include <linux/tpm_eventlog.h>
@@ -65,7 +66,7 @@ int tpm_read_log_of(struct tpm_chip *chip)
 		return -EIO;
 	}
 
-	log->bios_event_log = kmemdup(__va(base), size, GFP_KERNEL);
+	log->bios_event_log = devm_kmemdup(&chip->dev, __va(base), size, GFP_KERNEL);
 	if (!log->bios_event_log)
 		return -ENOMEM;
 
diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
index c0759d49fd145..916ee815b1401 100644
--- a/drivers/char/tpm/tpm-chip.c
+++ b/drivers/char/tpm/tpm-chip.c
@@ -267,7 +267,6 @@ static void tpm_dev_release(struct device *dev)
 	idr_remove(&dev_nums_idr, chip->dev_num);
 	mutex_unlock(&idr_lock);
 
-	kfree(chip->log.bios_event_log);
 	kfree(chip->work_space.context_buf);
 	kfree(chip->work_space.session_buf);
 	kfree(chip->allocated_banks);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jarkko Sakkinen jarkko@kernel.org
[ Upstream commit a3a860bc0fd6c07332e4911cf9a238d20de90173 ]
The following failure was reported on an HPE ProLiant DL320:
[ 10.693310][ T1] tpm_tis STM0925:00: 2.0 TPM (device-id 0x3, rev-id 0)
[ 10.848132][ T1] ------------[ cut here ]------------
[ 10.853559][ T1] WARNING: CPU: 59 PID: 1 at mm/page_alloc.c:4727 __alloc_pages_noprof+0x2ca/0x330
[ 10.862827][ T1] Modules linked in:
[ 10.866671][ T1] CPU: 59 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.12.0-lp155.2.g52785e2-default #1 openSUSE Tumbleweed (unreleased) 588cd98293a7c9eba9013378d807364c088c9375
[ 10.882741][ T1] Hardware name: HPE ProLiant DL320 Gen12/ProLiant DL320 Gen12, BIOS 1.20 10/28/2024
[ 10.892170][ T1] RIP: 0010:__alloc_pages_noprof+0x2ca/0x330
[ 10.898103][ T1] Code: 24 08 e9 4a fe ff ff e8 34 36 fa ff e9 88 fe ff ff 83 fe 0a 0f 86 b3 fd ff ff 80 3d 01 e7 ce 01 00 75 09 c6 05 f8 e6 ce 01 01 <0f> 0b 45 31 ff e9 e5 fe ff ff f7 c2 00 00 08 00 75 42 89 d9 80 e1
[ 10.917750][ T1] RSP: 0000:ffffb7cf40077980 EFLAGS: 00010246
[ 10.923777][ T1] RAX: 0000000000000000 RBX: 0000000000040cc0 RCX: 0000000000000000
[ 10.931727][ T1] RDX: 0000000000000000 RSI: 000000000000000c RDI: 0000000000040cc0
The above transcript shows that ACPI handed over a 16 MiB buffer for the log events, because RSI maps to the 'order' parameter of __alloc_pages_noprof(). Address the bug by moving from devm_kmalloc() to kvmalloc(), paired with devm_add_action() for the cleanup.
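The resulting pattern, sketched (simplified from the diff below): kvmalloc() tolerates large, physically non-contiguous allocations, and a devres action restores the automatic device-lifetime cleanup that devm_kmalloc() used to provide:

    	log->bios_event_log = kvmalloc(len, GFP_KERNEL);
    	if (!log->bios_event_log)
    		return -ENOMEM;

    	/* kvfree() the buffer when the chip device is released */
    	ret = devm_add_action(&chip->dev, tpm_bios_log_free, log->bios_event_log);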
Suggested-by: Ard Biesheuvel ardb@kernel.org Cc: stable@vger.kernel.org # v2.6.16+ Fixes: 55a82ab3181b ("[PATCH] tpm: add bios measurement log") Reported-by: Andy Liang andy.liang@hpe.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=219495 Reviewed-by: Ard Biesheuvel ardb@kernel.org Reviewed-by: Stefan Berger stefanb@linux.ibm.com Reviewed-by: Takashi Iwai tiwai@suse.de Tested-by: Andy Liang andy.liang@hpe.com Signed-off-by: Jarkko Sakkinen jarkko@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/char/tpm/eventlog/acpi.c | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/char/tpm/eventlog/acpi.c b/drivers/char/tpm/eventlog/acpi.c
index bd757d836c5cf..1a5644051d310 100644
--- a/drivers/char/tpm/eventlog/acpi.c
+++ b/drivers/char/tpm/eventlog/acpi.c
@@ -63,6 +63,11 @@ static bool tpm_is_tpm2_log(void *bios_event_log, u64 len)
 	return n == 0;
 }
 
+static void tpm_bios_log_free(void *data)
+{
+	kvfree(data);
+}
+
 /* read binary bios log */
 int tpm_read_log_acpi(struct tpm_chip *chip)
 {
@@ -136,7 +141,7 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
 	}
 
 	/* malloc EventLog space */
-	log->bios_event_log = devm_kmalloc(&chip->dev, len, GFP_KERNEL);
+	log->bios_event_log = kvmalloc(len, GFP_KERNEL);
 	if (!log->bios_event_log)
 		return -ENOMEM;
 
@@ -162,10 +167,16 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
 		goto err;
 	}
 
+	ret = devm_add_action(&chip->dev, tpm_bios_log_free, log->bios_event_log);
+	if (ret) {
+		log->bios_event_log = NULL;
+		goto err;
+	}
+
 	return format;
 
 err:
-	devm_kfree(&chip->dev, log->bios_event_log);
+	tpm_bios_log_free(log->bios_event_log);
 	log->bios_event_log = NULL;
 	return ret;
 }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com
[ Upstream commit 916120df5aa926d65f4666c075ed8d4955ef7bab ]
This driver calls devm_clk_get() and clk_prepare_enable() right after, which is exactly what devm_clk_get_enabled() does: clean that up by switching to the latter.
This commit brings no functional changes.
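For reference, the shape of the cleanup (a hedged before/after sketch with error handling trimmed; the clock name is the driver's):

    	/* before: get + enable, and a disable in every error/remove path */
    	clk = devm_clk_get(&pdev->dev, "devapc-infra-clock");
    	ret = clk_prepare_enable(clk);
    	...
    	clk_disable_unprepare(clk);

    	/* after: acquired, prepared and enabled in one call; devres
    	 * disables and unprepares it automatically on detach */
    	clk = devm_clk_get_enabled(&pdev->dev, "devapc-infra-clock");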
Signed-off-by: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com Link: https://lore.kernel.org/r/20221006110935.59695-1-angelogioacchino.delregno@c... Signed-off-by: Matthias Brugger matthias.bgg@gmail.com Stable-dep-of: c0eb059a4575 ("soc: mediatek: mtk-devapc: Fix leaking IO map on error paths") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/soc/mediatek/mtk-devapc.c | 11 ++--------- 1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/drivers/soc/mediatek/mtk-devapc.c b/drivers/soc/mediatek/mtk-devapc.c
index fc13334db1b11..bad139cb117ea 100644
--- a/drivers/soc/mediatek/mtk-devapc.c
+++ b/drivers/soc/mediatek/mtk-devapc.c
@@ -276,19 +276,14 @@ static int mtk_devapc_probe(struct platform_device *pdev)
 	if (!devapc_irq)
 		return -EINVAL;
 
-	ctx->infra_clk = devm_clk_get(&pdev->dev, "devapc-infra-clock");
+	ctx->infra_clk = devm_clk_get_enabled(&pdev->dev, "devapc-infra-clock");
 	if (IS_ERR(ctx->infra_clk))
 		return -EINVAL;
 
-	if (clk_prepare_enable(ctx->infra_clk))
-		return -EINVAL;
-
 	ret = devm_request_irq(&pdev->dev, devapc_irq, devapc_violation_irq,
 			       IRQF_TRIGGER_NONE, "devapc", ctx);
-	if (ret) {
-		clk_disable_unprepare(ctx->infra_clk);
+	if (ret)
 		return ret;
-	}
 
 	platform_set_drvdata(pdev, ctx);
 
@@ -303,8 +298,6 @@ static int mtk_devapc_remove(struct platform_device *pdev)
 
 	stop_devapc(ctx);
 
-	clk_disable_unprepare(ctx->infra_clk);
-
 	return 0;
 }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org
[ Upstream commit c0eb059a4575ed57f265d9883a5203799c19982c ]
Error paths of mtk_devapc_probe() should unmap the memory. Reported by Smatch:
drivers/soc/mediatek/mtk-devapc.c:292 mtk_devapc_probe() warn: 'ctx->infra_base' from of_iomap() not released on lines: 277,281,286.
Fixes: 0890beb22618 ("soc: mediatek: add mt6779 devapc driver") Cc: stable@vger.kernel.org Signed-off-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org Link: https://lore.kernel.org/r/20250104142012.115974-1-krzysztof.kozlowski@linaro... Signed-off-by: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/soc/mediatek/mtk-devapc.c | 18 +++++++++++++----- 1 file changed, 13 insertions(+), 5 deletions(-)
diff --git a/drivers/soc/mediatek/mtk-devapc.c b/drivers/soc/mediatek/mtk-devapc.c
index bad139cb117ea..c72273aae7c64 100644
--- a/drivers/soc/mediatek/mtk-devapc.c
+++ b/drivers/soc/mediatek/mtk-devapc.c
@@ -273,23 +273,31 @@ static int mtk_devapc_probe(struct platform_device *pdev)
 		return -EINVAL;
 
 	devapc_irq = irq_of_parse_and_map(node, 0);
-	if (!devapc_irq)
-		return -EINVAL;
+	if (!devapc_irq) {
+		ret = -EINVAL;
+		goto err;
+	}
 
 	ctx->infra_clk = devm_clk_get_enabled(&pdev->dev, "devapc-infra-clock");
-	if (IS_ERR(ctx->infra_clk))
-		return -EINVAL;
+	if (IS_ERR(ctx->infra_clk)) {
+		ret = -EINVAL;
+		goto err;
+	}
 
 	ret = devm_request_irq(&pdev->dev, devapc_irq, devapc_violation_irq,
 			       IRQF_TRIGGER_NONE, "devapc", ctx);
 	if (ret)
-		return ret;
+		goto err;
 
 	platform_set_drvdata(pdev, ctx);
 
 	start_devapc(ctx);
 
 	return 0;
+
+err:
+	iounmap(ctx->infra_base);
+	return ret;
 }
 
 static int mtk_devapc_remove(struct platform_device *pdev)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Uwe Kleine-König u.kleine-koenig@pengutronix.de
[ Upstream commit a129ac3555c0dca6f04ae404dc0f0790656587fb ]
The .remove() callback for a platform driver returns an int which makes many driver authors wrongly assume it's possible to do error handling by returning an error code. However the value returned is ignored (apart from emitting a warning) and this typically results in resource leaks. To improve here there is a quest to make the remove callback return void. In the first step of this quest all drivers are converted to .remove_new() which already returns void. Eventually after all drivers are converted, .remove_new() will be renamed to .remove().
Trivially convert this driver from always returning zero in the remove callback to the void returning variant.
Reviewed-by: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com Link: https://lore.kernel.org/r/20230925095532.1984344-15-u.kleine-koenig@pengutro... Signed-off-by: Uwe Kleine-König u.kleine-koenig@pengutronix.de Stable-dep-of: c9c0036c1990 ("soc: mediatek: mtk-devapc: Fix leaking IO map on driver remove") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/soc/mediatek/mtk-devapc.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/soc/mediatek/mtk-devapc.c b/drivers/soc/mediatek/mtk-devapc.c
index c72273aae7c64..226a79f43492f 100644
--- a/drivers/soc/mediatek/mtk-devapc.c
+++ b/drivers/soc/mediatek/mtk-devapc.c
@@ -300,18 +300,16 @@ static int mtk_devapc_probe(struct platform_device *pdev)
 	return ret;
 }
 
-static int mtk_devapc_remove(struct platform_device *pdev)
+static void mtk_devapc_remove(struct platform_device *pdev)
 {
 	struct mtk_devapc_context *ctx = platform_get_drvdata(pdev);
 
 	stop_devapc(ctx);
-
-	return 0;
 }
 
 static struct platform_driver mtk_devapc_driver = {
 	.probe = mtk_devapc_probe,
-	.remove = mtk_devapc_remove,
+	.remove_new = mtk_devapc_remove,
 	.driver = {
 		.name = "mtk-devapc",
 		.of_match_table = mtk_devapc_dt_match,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org
[ Upstream commit c9c0036c1990da8d2dd33563e327e05a775fcf10 ]
Driver removal should fully clean up - unmap the memory.
Fixes: 0890beb22618 ("soc: mediatek: add mt6779 devapc driver") Cc: stable@vger.kernel.org Signed-off-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org Link: https://lore.kernel.org/r/20250104142012.115974-2-krzysztof.kozlowski@linaro... Signed-off-by: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/soc/mediatek/mtk-devapc.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/soc/mediatek/mtk-devapc.c b/drivers/soc/mediatek/mtk-devapc.c
index 226a79f43492f..7269ab8d29b64 100644
--- a/drivers/soc/mediatek/mtk-devapc.c
+++ b/drivers/soc/mediatek/mtk-devapc.c
@@ -305,6 +305,7 @@ static void mtk_devapc_remove(struct platform_device *pdev)
 	struct mtk_devapc_context *ctx = platform_get_drvdata(pdev);
 
 	stop_devapc(ctx);
+	iounmap(ctx->infra_base);
 }
 
 static struct platform_driver mtk_devapc_driver = {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yang Yingliang yangyingliang@huawei.com
[ Upstream commit 6cb7d1b3ff83e98e852db9739892c4643a31804b ]
In the probe path, dev_err() can be replaced with dev_err_probe(), which checks whether the error code is -EPROBE_DEFER and, in that case, demotes the message to debug level and records it as the deferral reason.
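For illustration, every hunk below follows the same transformation; a generic sketch with placeholder names ("foo" is hypothetical, not one of the drivers touched):

	/* Before: open-coded -EPROBE_DEFER filtering. */
	clk = devm_clk_get(dev, "foo");
	if (IS_ERR(clk)) {
		if (PTR_ERR(clk) != -EPROBE_DEFER)
			dev_err(dev, "Cannot get foo clock\n");
		return PTR_ERR(clk);
	}

	/* After: dev_err_probe() returns the error code, logs at error
	 * level for real failures, and demotes the message to debug
	 * level for -EPROBE_DEFER, recording it as the deferral reason. */
	clk = devm_clk_get(dev, "foo");
	if (IS_ERR(clk))
		return dev_err_probe(dev, PTR_ERR(clk), "Cannot get foo clock\n");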
Reviewed-by: Sean Young sean@mess.org
Reviewed-by: Ricardo Ribalda ribalda@chromium.org
Reviewed-by: Laurent Pinchart laurent.pinchart@ideasonboard.com
Signed-off-by: Yang Yingliang yangyingliang@huawei.com
Acked-by: Sakari Ailus sakari.ailus@linux.intel.com
Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl
Stable-dep-of: a9ea1a3d88b7 ("media: uvcvideo: Fix crash during unbind if gpio unit is in use")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/media/cec/platform/stm32/stm32-cec.c  |  9 +++----
 drivers/media/i2c/ad5820.c                    | 18 +++++--------
 drivers/media/i2c/imx274.c                    |  5 ++--
 drivers/media/i2c/tc358743.c                  |  9 +++----
 .../platform/mediatek/mdp/mtk_mdp_comp.c      |  5 ++--
 .../platform/samsung/exynos4-is/media-dev.c   |  4 +--
 drivers/media/platform/st/stm32/stm32-dcmi.c  | 27 +++++++------------
 drivers/media/platform/ti/omap3isp/isp.c      |  3 +--
 .../media/platform/xilinx/xilinx-csi2rxss.c   |  8 +++---
 drivers/media/rc/gpio-ir-recv.c               | 10 +++----
 drivers/media/rc/gpio-ir-tx.c                 |  9 +++----
 drivers/media/rc/ir-rx51.c                    |  9 ++-----
 drivers/media/usb/uvc/uvc_driver.c            |  9 +++----
 13 files changed, 41 insertions(+), 84 deletions(-)
diff --git a/drivers/media/cec/platform/stm32/stm32-cec.c b/drivers/media/cec/platform/stm32/stm32-cec.c
index 40db7911b437b..7b2db46a57222 100644
--- a/drivers/media/cec/platform/stm32/stm32-cec.c
+++ b/drivers/media/cec/platform/stm32/stm32-cec.c
@@ -288,12 +288,9 @@ static int stm32_cec_probe(struct platform_device *pdev)
 		return ret;

 	cec->clk_cec = devm_clk_get(&pdev->dev, "cec");
-	if (IS_ERR(cec->clk_cec)) {
-		if (PTR_ERR(cec->clk_cec) != -EPROBE_DEFER)
-			dev_err(&pdev->dev, "Cannot get cec clock\n");
-
-		return PTR_ERR(cec->clk_cec);
-	}
+	if (IS_ERR(cec->clk_cec))
+		return dev_err_probe(&pdev->dev, PTR_ERR(cec->clk_cec),
+				     "Cannot get cec clock\n");

 	ret = clk_prepare(cec->clk_cec);
 	if (ret) {
diff --git a/drivers/media/i2c/ad5820.c b/drivers/media/i2c/ad5820.c
index 088c29c4e2529..56d22d02a0d91 100644
--- a/drivers/media/i2c/ad5820.c
+++ b/drivers/media/i2c/ad5820.c
@@ -301,21 +301,15 @@ static int ad5820_probe(struct i2c_client *client,
 		return -ENOMEM;

 	coil->vana = devm_regulator_get(&client->dev, "VANA");
-	if (IS_ERR(coil->vana)) {
-		ret = PTR_ERR(coil->vana);
-		if (ret != -EPROBE_DEFER)
-			dev_err(&client->dev, "could not get regulator for vana\n");
-		return ret;
-	}
+	if (IS_ERR(coil->vana))
+		return dev_err_probe(&client->dev, PTR_ERR(coil->vana),
+				     "could not get regulator for vana\n");

 	coil->enable_gpio = devm_gpiod_get_optional(&client->dev, "enable",
 						    GPIOD_OUT_LOW);
-	if (IS_ERR(coil->enable_gpio)) {
-		ret = PTR_ERR(coil->enable_gpio);
-		if (ret != -EPROBE_DEFER)
-			dev_err(&client->dev, "could not get enable gpio\n");
-		return ret;
-	}
+	if (IS_ERR(coil->enable_gpio))
+		return dev_err_probe(&client->dev, PTR_ERR(coil->enable_gpio),
+				     "could not get enable gpio\n");

 	mutex_init(&coil->power_lock);

diff --git a/drivers/media/i2c/imx274.c b/drivers/media/i2c/imx274.c
index a00761b1e18c2..9219f3c9594b0 100644
--- a/drivers/media/i2c/imx274.c
+++ b/drivers/media/i2c/imx274.c
@@ -2060,9 +2060,8 @@ static int imx274_probe(struct i2c_client *client)
 	imx274->reset_gpio = devm_gpiod_get_optional(dev, "reset",
 						     GPIOD_OUT_HIGH);
 	if (IS_ERR(imx274->reset_gpio)) {
-		if (PTR_ERR(imx274->reset_gpio) != -EPROBE_DEFER)
-			dev_err(dev, "Reset GPIO not setup in DT");
-		ret = PTR_ERR(imx274->reset_gpio);
+		ret = dev_err_probe(dev, PTR_ERR(imx274->reset_gpio),
+				    "Reset GPIO not setup in DT\n");
 		goto err_me;
 	}

diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
index 45dd91d1cd816..2c8189e04a131 100644
--- a/drivers/media/i2c/tc358743.c
+++ b/drivers/media/i2c/tc358743.c
@@ -1891,12 +1891,9 @@ static int tc358743_probe_of(struct tc358743_state *state)
 	int ret;

 	refclk = devm_clk_get(dev, "refclk");
-	if (IS_ERR(refclk)) {
-		if (PTR_ERR(refclk) != -EPROBE_DEFER)
-			dev_err(dev, "failed to get refclk: %ld\n",
-				PTR_ERR(refclk));
-		return PTR_ERR(refclk);
-	}
+	if (IS_ERR(refclk))
+		return dev_err_probe(dev, PTR_ERR(refclk),
+				     "failed to get refclk\n");

 	ep = of_graph_get_next_endpoint(dev->of_node, NULL);
 	if (!ep) {
diff --git a/drivers/media/platform/mediatek/mdp/mtk_mdp_comp.c b/drivers/media/platform/mediatek/mdp/mtk_mdp_comp.c
index 1e3833f1c9ae2..ad5fab2d8bfae 100644
--- a/drivers/media/platform/mediatek/mdp/mtk_mdp_comp.c
+++ b/drivers/media/platform/mediatek/mdp/mtk_mdp_comp.c
@@ -52,9 +52,8 @@ int mtk_mdp_comp_init(struct device *dev, struct device_node *node,
 	for (i = 0; i < ARRAY_SIZE(comp->clk); i++) {
 		comp->clk[i] = of_clk_get(node, i);
 		if (IS_ERR(comp->clk[i])) {
-			if (PTR_ERR(comp->clk[i]) != -EPROBE_DEFER)
-				dev_err(dev, "Failed to get clock\n");
-			ret = PTR_ERR(comp->clk[i]);
+			ret = dev_err_probe(dev, PTR_ERR(comp->clk[i]),
+					    "Failed to get clock\n");
 			goto put_dev;
 		}

diff --git a/drivers/media/platform/samsung/exynos4-is/media-dev.c b/drivers/media/platform/samsung/exynos4-is/media-dev.c
index 2f3071acb9c97..98a60f01129d4 100644
--- a/drivers/media/platform/samsung/exynos4-is/media-dev.c
+++ b/drivers/media/platform/samsung/exynos4-is/media-dev.c
@@ -1471,9 +1471,7 @@ static int fimc_md_probe(struct platform_device *pdev)

 	pinctrl = devm_pinctrl_get(dev);
 	if (IS_ERR(pinctrl)) {
-		ret = PTR_ERR(pinctrl);
-		if (ret != -EPROBE_DEFER)
-			dev_err(dev, "Failed to get pinctrl: %d\n", ret);
+		ret = dev_err_probe(dev, PTR_ERR(pinctrl), "Failed to get pinctrl\n");
 		goto err_clk;
 	}

diff --git a/drivers/media/platform/st/stm32/stm32-dcmi.c b/drivers/media/platform/st/stm32/stm32-dcmi.c
index 37458d4d9564b..06be28b361f1a 100644
--- a/drivers/media/platform/st/stm32/stm32-dcmi.c
+++ b/drivers/media/platform/st/stm32/stm32-dcmi.c
@@ -1946,12 +1946,9 @@ static int dcmi_probe(struct platform_device *pdev)
 		return -ENOMEM;

 	dcmi->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
-	if (IS_ERR(dcmi->rstc)) {
-		if (PTR_ERR(dcmi->rstc) != -EPROBE_DEFER)
-			dev_err(&pdev->dev, "Could not get reset control\n");
-
-		return PTR_ERR(dcmi->rstc);
-	}
+	if (IS_ERR(dcmi->rstc))
+		return dev_err_probe(&pdev->dev, PTR_ERR(dcmi->rstc),
+				     "Could not get reset control\n");

 	/* Get bus characteristics from devicetree */
 	np = of_graph_get_next_endpoint(np, NULL);
@@ -2003,20 +2000,14 @@ static int dcmi_probe(struct platform_device *pdev)
 	}

 	mclk = devm_clk_get(&pdev->dev, "mclk");
-	if (IS_ERR(mclk)) {
-		if (PTR_ERR(mclk) != -EPROBE_DEFER)
-			dev_err(&pdev->dev, "Unable to get mclk\n");
-		return PTR_ERR(mclk);
-	}
+	if (IS_ERR(mclk))
+		return dev_err_probe(&pdev->dev, PTR_ERR(mclk),
+				     "Unable to get mclk\n");

 	chan = dma_request_chan(&pdev->dev, "tx");
-	if (IS_ERR(chan)) {
-		ret = PTR_ERR(chan);
-		if (ret != -EPROBE_DEFER)
-			dev_err(&pdev->dev,
-				"Failed to request DMA channel: %d\n", ret);
-		return ret;
-	}
+	if (IS_ERR(chan))
+		return dev_err_probe(&pdev->dev, PTR_ERR(chan),
+				     "Failed to request DMA channel\n");

 	dcmi->dma_max_burst = UINT_MAX;
 	ret = dma_get_slave_caps(chan, &caps);
diff --git a/drivers/media/platform/ti/omap3isp/isp.c b/drivers/media/platform/ti/omap3isp/isp.c
index 11ae479ee89c8..e7327e38482de 100644
--- a/drivers/media/platform/ti/omap3isp/isp.c
+++ b/drivers/media/platform/ti/omap3isp/isp.c
@@ -1884,8 +1884,7 @@ static int isp_initialize_modules(struct isp_device *isp)

 	ret = omap3isp_ccp2_init(isp);
 	if (ret < 0) {
-		if (ret != -EPROBE_DEFER)
-			dev_err(isp->dev, "CCP2 initialization failed\n");
+		dev_err_probe(isp->dev, ret, "CCP2 initialization failed\n");
 		goto error_ccp2;
 	}

diff --git a/drivers/media/platform/xilinx/xilinx-csi2rxss.c b/drivers/media/platform/xilinx/xilinx-csi2rxss.c
index 29b53febc2e7a..d8a23f18cfbce 100644
--- a/drivers/media/platform/xilinx/xilinx-csi2rxss.c
+++ b/drivers/media/platform/xilinx/xilinx-csi2rxss.c
@@ -976,11 +976,9 @@ static int xcsi2rxss_probe(struct platform_device *pdev)
 	/* Reset GPIO */
 	xcsi2rxss->rst_gpio = devm_gpiod_get_optional(dev, "video-reset",
 						      GPIOD_OUT_HIGH);
-	if (IS_ERR(xcsi2rxss->rst_gpio)) {
-		if (PTR_ERR(xcsi2rxss->rst_gpio) != -EPROBE_DEFER)
-			dev_err(dev, "Video Reset GPIO not setup in DT");
-		return PTR_ERR(xcsi2rxss->rst_gpio);
-	}
+	if (IS_ERR(xcsi2rxss->rst_gpio))
+		return dev_err_probe(dev, PTR_ERR(xcsi2rxss->rst_gpio),
+				     "Video Reset GPIO not setup in DT\n");

 	ret = xcsi2rxss_parse_of(xcsi2rxss);
 	if (ret < 0)
diff --git a/drivers/media/rc/gpio-ir-recv.c b/drivers/media/rc/gpio-ir-recv.c
index 16795e07dc103..41ef8cdba28c4 100644
--- a/drivers/media/rc/gpio-ir-recv.c
+++ b/drivers/media/rc/gpio-ir-recv.c
@@ -74,13 +74,9 @@ static int gpio_ir_recv_probe(struct platform_device *pdev)
 		return -ENOMEM;

 	gpio_dev->gpiod = devm_gpiod_get(dev, NULL, GPIOD_IN);
-	if (IS_ERR(gpio_dev->gpiod)) {
-		rc = PTR_ERR(gpio_dev->gpiod);
-		/* Just try again if this happens */
-		if (rc != -EPROBE_DEFER)
-			dev_err(dev, "error getting gpio (%d)\n", rc);
-		return rc;
-	}
+	if (IS_ERR(gpio_dev->gpiod))
+		return dev_err_probe(dev, PTR_ERR(gpio_dev->gpiod),
+				     "error getting gpio\n");
 	gpio_dev->irq = gpiod_to_irq(gpio_dev->gpiod);
 	if (gpio_dev->irq < 0)
 		return gpio_dev->irq;
diff --git a/drivers/media/rc/gpio-ir-tx.c b/drivers/media/rc/gpio-ir-tx.c
index d3063ddb472e3..2b829c146db15 100644
--- a/drivers/media/rc/gpio-ir-tx.c
+++ b/drivers/media/rc/gpio-ir-tx.c
@@ -174,12 +174,9 @@ static int gpio_ir_tx_probe(struct platform_device *pdev)
 		return -ENOMEM;

 	gpio_ir->gpio = devm_gpiod_get(&pdev->dev, NULL, GPIOD_OUT_LOW);
-	if (IS_ERR(gpio_ir->gpio)) {
-		if (PTR_ERR(gpio_ir->gpio) != -EPROBE_DEFER)
-			dev_err(&pdev->dev, "Failed to get gpio (%ld)\n",
-				PTR_ERR(gpio_ir->gpio));
-		return PTR_ERR(gpio_ir->gpio);
-	}
+	if (IS_ERR(gpio_ir->gpio))
+		return dev_err_probe(&pdev->dev, PTR_ERR(gpio_ir->gpio),
+				     "Failed to get gpio\n");

 	rcdev->priv = gpio_ir;
 	rcdev->driver_name = DRIVER_NAME;
diff --git a/drivers/media/rc/ir-rx51.c b/drivers/media/rc/ir-rx51.c
index a3b1451832603..85080c3d20535 100644
--- a/drivers/media/rc/ir-rx51.c
+++ b/drivers/media/rc/ir-rx51.c
@@ -231,13 +231,8 @@ static int ir_rx51_probe(struct platform_device *dev)
 	struct rc_dev *rcdev;

 	pwm = pwm_get(&dev->dev, NULL);
-	if (IS_ERR(pwm)) {
-		int err = PTR_ERR(pwm);
-
-		if (err != -EPROBE_DEFER)
-			dev_err(&dev->dev, "pwm_get failed: %d\n", err);
-		return err;
-	}
+	if (IS_ERR(pwm))
+		return dev_err_probe(&dev->dev, PTR_ERR(pwm), "pwm_get failed\n");

 	/* Use default, in case userspace does not set the carrier */
 	ir_rx51.freq = DIV_ROUND_CLOSEST_ULL(pwm_get_period(pwm), NSEC_PER_SEC);
diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
index c8e72079b4278..92af9caf6b5db 100644
--- a/drivers/media/usb/uvc/uvc_driver.c
+++ b/drivers/media/usb/uvc/uvc_driver.c
@@ -1253,12 +1253,9 @@ static int uvc_gpio_parse(struct uvc_device *dev)
 		return PTR_ERR_OR_ZERO(gpio_privacy);

 	irq = gpiod_to_irq(gpio_privacy);
-	if (irq < 0) {
-		if (irq != EPROBE_DEFER)
-			dev_err(&dev->udev->dev,
-				"No IRQ for privacy GPIO (%d)\n", irq);
-		return irq;
-	}
+	if (irq < 0)
+		return dev_err_probe(&dev->udev->dev, irq,
+				     "No IRQ for privacy GPIO\n");

 	unit = uvc_alloc_entity(UVC_EXT_GPIO_UNIT, UVC_EXT_GPIO_UNIT_ID, 0, 1);
 	if (!unit)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ricardo Ribalda ribalda@chromium.org
[ Upstream commit a9ea1a3d88b7947ce8cadb2afceee7a54872bbc5 ]
We used the wrong device for the device-managed functions: the USB device, when we should have been using the interface device.
If we unbind the driver from the usb interface, the cleanup functions are never called. In our case, the IRQ is never disabled.
If an IRQ is triggered, it will try to access memory sections that have already been freed, causing an oops.
We cannot use devm_request_threaded_irq() here: the devm_* cleanup functions may run after the main structure has already been released by uvc_delete().
Luckily this bug has a small impact: only devices with GPIO units are affected, and the user has to explicitly unbind the driver; a plain disconnect will not trigger the error.
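The resulting pattern is a plain request/free pair whose lifetime the driver controls itself; a condensed sketch of the idea (the full hunks are in the diff below):

	/* Condensed sketch: request the IRQ manually so its lifetime is
	 * controlled by the driver, and release it in the unregister
	 * path while the structure it references is still valid. */
	ret = request_threaded_irq(irq, NULL, uvc_gpio_irq,
				   IRQF_ONESHOT | IRQF_TRIGGER_FALLING |
				   IRQF_TRIGGER_RISING,
				   "uvc_privacy_gpio", dev);
	/* ... and on unregister: */
	free_irq(irq, dev);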
Cc: stable@vger.kernel.org
Fixes: 2886477ff987 ("media: uvcvideo: Implement UVC_EXT_GPIO_UNIT")
Reviewed-by: Sergey Senozhatsky senozhatsky@chromium.org
Signed-off-by: Ricardo Ribalda ribalda@chromium.org
Reviewed-by: Laurent Pinchart laurent.pinchart@ideasonboard.com
Link: https://lore.kernel.org/r/20241106-uvc-crashrmmod-v6-1-fbf9781c6e83@chromium...
Signed-off-by: Laurent Pinchart laurent.pinchart@ideasonboard.com
Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/media/usb/uvc/uvc_driver.c | 28 +++++++++++++++++++++-------
 drivers/media/usb/uvc/uvcvideo.h   |  1 +
 2 files changed, 22 insertions(+), 7 deletions(-)
diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
index 92af9caf6b5db..47a6cedd5578c 100644
--- a/drivers/media/usb/uvc/uvc_driver.c
+++ b/drivers/media/usb/uvc/uvc_driver.c
@@ -1247,14 +1247,14 @@ static int uvc_gpio_parse(struct uvc_device *dev)
 	struct gpio_desc *gpio_privacy;
 	int irq;

-	gpio_privacy = devm_gpiod_get_optional(&dev->udev->dev, "privacy",
+	gpio_privacy = devm_gpiod_get_optional(&dev->intf->dev, "privacy",
 					       GPIOD_IN);
 	if (IS_ERR_OR_NULL(gpio_privacy))
 		return PTR_ERR_OR_ZERO(gpio_privacy);

 	irq = gpiod_to_irq(gpio_privacy);
 	if (irq < 0)
-		return dev_err_probe(&dev->udev->dev, irq,
+		return dev_err_probe(&dev->intf->dev, irq,
 				     "No IRQ for privacy GPIO\n");

 	unit = uvc_alloc_entity(UVC_EXT_GPIO_UNIT, UVC_EXT_GPIO_UNIT_ID, 0, 1);
@@ -1280,15 +1280,27 @@ static int uvc_gpio_parse(struct uvc_device *dev)
 static int uvc_gpio_init_irq(struct uvc_device *dev)
 {
 	struct uvc_entity *unit = dev->gpio_unit;
+	int ret;

 	if (!unit || unit->gpio.irq < 0)
 		return 0;

-	return devm_request_threaded_irq(&dev->udev->dev, unit->gpio.irq, NULL,
-					 uvc_gpio_irq,
-					 IRQF_ONESHOT | IRQF_TRIGGER_FALLING |
-					 IRQF_TRIGGER_RISING,
-					 "uvc_privacy_gpio", dev);
+	ret = request_threaded_irq(unit->gpio.irq, NULL, uvc_gpio_irq,
+				   IRQF_ONESHOT | IRQF_TRIGGER_FALLING |
+				   IRQF_TRIGGER_RISING,
+				   "uvc_privacy_gpio", dev);
+
+	unit->gpio.initialized = !ret;
+
+	return ret;
+}
+
+static void uvc_gpio_deinit(struct uvc_device *dev)
+{
+	if (!dev->gpio_unit || !dev->gpio_unit->gpio.initialized)
+		return;
+
+	free_irq(dev->gpio_unit->gpio.irq, dev);
 }

 /* ------------------------------------------------------------------------
@@ -1882,6 +1894,8 @@ static void uvc_unregister_video(struct uvc_device *dev)
 {
 	struct uvc_streaming *stream;

+	uvc_gpio_deinit(dev);
+
 	list_for_each_entry(stream, &dev->streams, list) {
 		/* Nothing to do here, continue. */
 		if (!video_is_registered(&stream->vdev))
diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
index 33e7475d4e64a..475bf185be8a8 100644
--- a/drivers/media/usb/uvc/uvcvideo.h
+++ b/drivers/media/usb/uvc/uvcvideo.h
@@ -227,6 +227,7 @@ struct uvc_entity {
 			u8 *bmControls;
 			struct gpio_desc *gpio_privacy;
 			int irq;
+			bool initialized;
 		} gpio;
 	};

6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ricardo Ribalda ribalda@chromium.org
[ Upstream commit 64627daf0c5f7838111f52bbbd1a597cb5d6871a ]
Avoid using the iterators after the list_for_each() constructs. This patch should be a NOP, but it makes cocci happier:
drivers/media/usb/uvc/uvc_ctrl.c:1861:44-50: ERROR: invalid reference to the index variable of the iterator on line 1850
drivers/media/usb/uvc/uvc_ctrl.c:2195:17-23: ERROR: invalid reference to the index variable of the iterator on line 2179
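The underlying hazard, as a generic sketch (wanted_id is a hypothetical parameter, not the driver's code): if the loop completes without a break, the iterator does not point at a valid element, so it must not be used afterwards.

	struct uvc_entity *entity = NULL, *iter;

	list_for_each_entry(iter, &chain->entities, chain) {
		if (iter->id == wanted_id) {	/* wanted_id: hypothetical */
			entity = iter;		/* record the match explicitly */
			break;
		}
	}
	if (!entity)	/* loop ran to completion: nothing found */
		return -ENOENT;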
Reviewed-by: Sergey Senozhatsky senozhatsky@chromium.org
Reviewed-by: Laurent Pinchart laurent.pinchart@ideasonboard.com
Signed-off-by: Ricardo Ribalda ribalda@chromium.org
Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl
Stable-dep-of: d9fecd096f67 ("media: uvcvideo: Only save async fh if success")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/media/usb/uvc/uvc_ctrl.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)
diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
index 1bad64f4499ae..986e94f7164a6 100644
--- a/drivers/media/usb/uvc/uvc_ctrl.c
+++ b/drivers/media/usb/uvc/uvc_ctrl.c
@@ -1786,16 +1786,18 @@ int __uvc_ctrl_commit(struct uvc_fh *handle, int rollback,
 	list_for_each_entry(entity, &chain->entities, chain) {
 		ret = uvc_ctrl_commit_entity(chain->dev, entity, rollback,
 					     &err_ctrl);
-		if (ret < 0)
+		if (ret < 0) {
+			if (ctrls)
+				ctrls->error_idx =
+					uvc_ctrl_find_ctrl_idx(entity, ctrls,
+							       err_ctrl);
 			goto done;
+		}
 	}

 	if (!rollback)
 		uvc_ctrl_send_events(handle, ctrls->controls, ctrls->count);
 done:
-	if (ret < 0 && ctrls)
-		ctrls->error_idx = uvc_ctrl_find_ctrl_idx(entity, ctrls,
-							  err_ctrl);
 	mutex_unlock(&chain->ctrl_mutex);
 	return ret;
 }
@@ -2100,7 +2102,7 @@ static int uvc_ctrl_init_xu_ctrl(struct uvc_device *dev,
 int uvc_xu_ctrl_query(struct uvc_video_chain *chain,
 		      struct uvc_xu_control_query *xqry)
 {
-	struct uvc_entity *entity;
+	struct uvc_entity *entity, *iter;
 	struct uvc_control *ctrl;
 	unsigned int i;
 	bool found;
@@ -2110,16 +2112,16 @@ int uvc_xu_ctrl_query(struct uvc_video_chain *chain,
 	int ret;

 	/* Find the extension unit. */
-	found = false;
-	list_for_each_entry(entity, &chain->entities, chain) {
-		if (UVC_ENTITY_TYPE(entity) == UVC_VC_EXTENSION_UNIT &&
-		    entity->id == xqry->unit) {
-			found = true;
+	entity = NULL;
+	list_for_each_entry(iter, &chain->entities, chain) {
+		if (UVC_ENTITY_TYPE(iter) == UVC_VC_EXTENSION_UNIT &&
+		    iter->id == xqry->unit) {
+			entity = iter;
 			break;
 		}
 	}

-	if (!found) {
+	if (!entity) {
 		uvc_dbg(chain->dev, CONTROL, "Extension unit %u not found\n",
 			xqry->unit);
 		return -ENOENT;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ricardo Ribalda ribalda@chromium.org
[ Upstream commit d9fecd096f67a4469536e040a8a10bbfb665918b ]
Currently we keep a reference to the active fh for any call to uvc_ctrl_set(), regardless of whether it is an actual set, just a try, or an operation that the device refused.
We should only keep the file handle if the device actually accepted applying the operation.
Cc: stable@vger.kernel.org
Fixes: e5225c820c05 ("media: uvcvideo: Send a control event when a Control Change interrupt arrives")
Suggested-by: Hans de Goede hdegoede@redhat.com
Reviewed-by: Hans de Goede hdegoede@redhat.com
Reviewed-by: Laurent Pinchart laurent.pinchart@ideasonboard.com
Signed-off-by: Ricardo Ribalda ribalda@chromium.org
Link: https://lore.kernel.org/r/20241203-uvc-fix-async-v6-1-26c867231118@chromium....
Signed-off-by: Laurent Pinchart laurent.pinchart@ideasonboard.com
Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/media/usb/uvc/uvc_ctrl.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
index 986e94f7164a6..6be1aff23e71c 100644
--- a/drivers/media/usb/uvc/uvc_ctrl.c
+++ b/drivers/media/usb/uvc/uvc_ctrl.c
@@ -1700,7 +1700,10 @@ int uvc_ctrl_begin(struct uvc_video_chain *chain)
 }

 static int uvc_ctrl_commit_entity(struct uvc_device *dev,
-	struct uvc_entity *entity, int rollback, struct uvc_control **err_ctrl)
+				  struct uvc_fh *handle,
+				  struct uvc_entity *entity,
+				  int rollback,
+				  struct uvc_control **err_ctrl)
 {
 	struct uvc_control *ctrl;
 	unsigned int i;
@@ -1748,6 +1751,10 @@ static int uvc_ctrl_commit_entity(struct uvc_device *dev,
 				*err_ctrl = ctrl;
 			return ret;
 		}
+
+		if (!rollback && handle &&
+		    ctrl->info.flags & UVC_CTRL_FLAG_ASYNCHRONOUS)
+			ctrl->handle = handle;
 	}

 	return 0;
@@ -1784,8 +1791,8 @@ int __uvc_ctrl_commit(struct uvc_fh *handle, int rollback,

 	/* Find the control. */
 	list_for_each_entry(entity, &chain->entities, chain) {
-		ret = uvc_ctrl_commit_entity(chain->dev, entity, rollback,
-					     &err_ctrl);
+		ret = uvc_ctrl_commit_entity(chain->dev, handle, entity,
+					     rollback, &err_ctrl);
 		if (ret < 0) {
 			if (ctrls)
 				ctrls->error_idx =
@@ -1927,9 +1934,6 @@ int uvc_ctrl_set(struct uvc_fh *handle,
 		mapping->set(mapping, value,
 			uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT));

-	if (ctrl->info.flags & UVC_CTRL_FLAG_ASYNCHRONOUS)
-		ctrl->handle = handle;
-
 	ctrl->dirty = 1;
 	ctrl->modified = 1;
 	return 0;
@@ -2258,7 +2262,7 @@ int uvc_ctrl_restore_values(struct uvc_device *dev)
 			ctrl->dirty = 1;
 		}

-		ret = uvc_ctrl_commit_entity(dev, entity, 0, NULL);
+		ret = uvc_ctrl_commit_entity(dev, NULL, entity, 0, NULL);
 		if (ret < 0)
 			return ret;
 	}
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ricardo Ribalda ribalda@chromium.org
[ Upstream commit 221cd51efe4565501a3dbf04cc011b537dcce7fb ]
When an async control is written, we copy a pointer to the file handle that started the operation. That pointer will be used when the device completes the operation, which could be any time in the future.
If the user closes that file descriptor, its structure will be freed, and there will be one dangling pointer per pending async control that the driver will then try to use.
Clean all the dangling pointers during release().
To avoid a performance penalty in the most common case (no async operation), a per-handle counter of pending async controls is introduced, so that the cleanup scan in release() can be skipped entirely when the counter is zero.
Cc: stable@vger.kernel.org
Fixes: e5225c820c05 ("media: uvcvideo: Send a control event when a Control Change interrupt arrives")
Reviewed-by: Hans de Goede hdegoede@redhat.com
Signed-off-by: Ricardo Ribalda ribalda@chromium.org
Reviewed-by: Laurent Pinchart laurent.pinchart@ideasonboard.com
Link: https://lore.kernel.org/r/20241203-uvc-fix-async-v6-3-26c867231118@chromium....
Signed-off-by: Laurent Pinchart laurent.pinchart@ideasonboard.com
Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/media/usb/uvc/uvc_ctrl.c | 59 ++++++++++++++++++++++++++++++--
 drivers/media/usb/uvc/uvc_v4l2.c |  2 ++
 drivers/media/usb/uvc/uvcvideo.h |  9 ++++-
 3 files changed, 67 insertions(+), 3 deletions(-)
diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
index 6be1aff23e71c..69f9f451ab400 100644
--- a/drivers/media/usb/uvc/uvc_ctrl.c
+++ b/drivers/media/usb/uvc/uvc_ctrl.c
@@ -1470,6 +1470,40 @@ static void uvc_ctrl_send_slave_event(struct uvc_video_chain *chain,
 	uvc_ctrl_send_event(chain, handle, ctrl, mapping, val, changes);
 }

+static void uvc_ctrl_set_handle(struct uvc_fh *handle, struct uvc_control *ctrl,
+				struct uvc_fh *new_handle)
+{
+	lockdep_assert_held(&handle->chain->ctrl_mutex);
+
+	if (new_handle) {
+		if (ctrl->handle)
+			dev_warn_ratelimited(&handle->stream->dev->udev->dev,
+					     "UVC non compliance: Setting an async control with a pending operation.");
+
+		if (new_handle == ctrl->handle)
+			return;
+
+		if (ctrl->handle) {
+			WARN_ON(!ctrl->handle->pending_async_ctrls);
+			if (ctrl->handle->pending_async_ctrls)
+				ctrl->handle->pending_async_ctrls--;
+		}
+
+		ctrl->handle = new_handle;
+		handle->pending_async_ctrls++;
+		return;
+	}
+
+	/* Cannot clear the handle for a control not owned by us.*/
+	if (WARN_ON(ctrl->handle != handle))
+		return;
+
+	ctrl->handle = NULL;
+	if (WARN_ON(!handle->pending_async_ctrls))
+		return;
+	handle->pending_async_ctrls--;
+}
+
 void uvc_ctrl_status_event(struct uvc_video_chain *chain,
 			   struct uvc_control *ctrl, const u8 *data)
 {
@@ -1480,7 +1514,8 @@ void uvc_ctrl_status_event(struct uvc_video_chain *chain,
 	mutex_lock(&chain->ctrl_mutex);

 	handle = ctrl->handle;
-	ctrl->handle = NULL;
+	if (handle)
+		uvc_ctrl_set_handle(handle, ctrl, NULL);

 	list_for_each_entry(mapping, &ctrl->info.mappings, list) {
 		s32 value = __uvc_ctrl_get_value(mapping, data);
@@ -1754,7 +1789,7 @@ static int uvc_ctrl_commit_entity(struct uvc_device *dev,

 		if (!rollback && handle &&
 		    ctrl->info.flags & UVC_CTRL_FLAG_ASYNCHRONOUS)
-			ctrl->handle = handle;
+			uvc_ctrl_set_handle(handle, ctrl, handle);
 	}

 	return 0;
@@ -2666,6 +2701,26 @@ int uvc_ctrl_init_device(struct uvc_device *dev)
 	return 0;
 }

+void uvc_ctrl_cleanup_fh(struct uvc_fh *handle)
+{
+	struct uvc_entity *entity;
+
+	guard(mutex)(&handle->chain->ctrl_mutex);
+
+	if (!handle->pending_async_ctrls)
+		return;
+
+	list_for_each_entry(entity, &handle->chain->dev->entities, list) {
+		for (unsigned int i = 0; i < entity->ncontrols; ++i) {
+			if (entity->controls[i].handle != handle)
+				continue;
+			uvc_ctrl_set_handle(handle, &entity->controls[i], NULL);
+		}
+	}
+
+	WARN_ON(handle->pending_async_ctrls);
+}
+
 /*
  * Cleanup device controls.
  */
diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
index 950b42d78a107..bd4677a6e653a 100644
--- a/drivers/media/usb/uvc/uvc_v4l2.c
+++ b/drivers/media/usb/uvc/uvc_v4l2.c
@@ -607,6 +607,8 @@ static int uvc_v4l2_release(struct file *file)

 	uvc_dbg(stream->dev, CALLS, "%s\n", __func__);

+	uvc_ctrl_cleanup_fh(handle);
+
 	/* Only free resources if this is a privileged handle. */
 	if (uvc_has_privileges(handle))
 		uvc_queue_release(&stream->queue);
diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
index 475bf185be8a8..45caa8523426d 100644
--- a/drivers/media/usb/uvc/uvcvideo.h
+++ b/drivers/media/usb/uvc/uvcvideo.h
@@ -331,7 +331,11 @@ struct uvc_video_chain {
 	struct uvc_entity *processing;		/* Processing unit */
 	struct uvc_entity *selector;		/* Selector unit */

-	struct mutex ctrl_mutex;		/* Protects ctrl.info */
+	struct mutex ctrl_mutex;		/*
+						 * Protects ctrl.info,
+						 * ctrl.handle and
+						 * uvc_fh.pending_async_ctrls
+						 */

 	struct v4l2_prio_state prio;		/* V4L2 priority state */
 	u32 caps;				/* V4L2 chain-wide caps */
@@ -585,6 +589,7 @@ struct uvc_fh {
 	struct uvc_video_chain *chain;
 	struct uvc_streaming *stream;
 	enum uvc_handle_state state;
+	unsigned int pending_async_ctrls;
 };

 struct uvc_driver {
@@ -769,6 +774,8 @@ int uvc_ctrl_is_accessible(struct uvc_video_chain *chain, u32 v4l2_id,
 int uvc_xu_ctrl_query(struct uvc_video_chain *chain,
 		      struct uvc_xu_control_query *xqry);

+void uvc_ctrl_cleanup_fh(struct uvc_fh *handle);
+
 /* Utility functions */
 struct usb_host_endpoint *uvc_find_endpoint(struct usb_host_interface *alts,
 					    u8 epaddr);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Roy Luo royluo@google.com
[ Upstream commit 0ef40f399aa2be8c04aee9b7430705612c104ce5 ]
udc device and gadget device are tightly coupled, yet there's no good way to correlate the two. Add a sysfs link in udc that points to the corresponding gadget device. An example use case: userspace configures a f_midi configfs driver and binds the udc device, then it tries to locate the corresponding midi device, which is a child device of the gadget device. The gadget device that's associated with the udc device has to be identified in order to index the midi device. Having a sysfs link would make things much easier.
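As a hypothetical userspace sketch of how the link could be consumed (the UDC name and path are illustrative, not mandated by the patch):

	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[256];
		/* Illustrative path: /sys/class/udc/<udc>/gadget is the new link. */
		ssize_t n = readlink("/sys/class/udc/dummy_udc.0/gadget",
				     buf, sizeof(buf) - 1);

		if (n < 0) {
			perror("readlink");
			return 1;
		}
		buf[n] = '\0';
		printf("gadget device: %s\n", buf);
		return 0;
	}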
Signed-off-by: Roy Luo royluo@google.com
Link: https://lore.kernel.org/r/20240307030922.3573161-1-royluo@google.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Stable-dep-of: 399a45e5237c ("usb: gadget: core: flush gadget workqueue after device removal")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/usb/gadget/udc/core.c | 9 +++++++++
 1 file changed, 9 insertions(+)
diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
index 6d7f8e98ba2a8..5085580eeb3ad 100644
--- a/drivers/usb/gadget/udc/core.c
+++ b/drivers/usb/gadget/udc/core.c
@@ -1419,8 +1419,16 @@ int usb_add_gadget(struct usb_gadget *gadget)
 	if (ret)
 		goto err_free_id;

+	ret = sysfs_create_link(&udc->dev.kobj,
+				&gadget->dev.kobj, "gadget");
+	if (ret)
+		goto err_del_gadget;
+
 	return 0;

+ err_del_gadget:
+	device_del(&gadget->dev);
+
  err_free_id:
 	ida_free(&gadget_id_numbers, gadget->id_number);

@@ -1529,6 +1537,7 @@ void usb_del_gadget(struct usb_gadget *gadget)
 	mutex_unlock(&udc_lock);

 	kobject_uevent(&udc->dev.kobj, KOBJ_REMOVE);
+	sysfs_remove_link(&udc->dev.kobj, "gadget");
 	flush_work(&gadget->work);
 	device_del(&gadget->dev);
 	ida_free(&gadget_id_numbers, gadget->id_number);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Roy Luo royluo@google.com
[ Upstream commit 399a45e5237ca14037120b1b895bd38a3b4492ea ]
device_del() can lead to new work being scheduled in the gadget->work workqueue. This is observed, for example, with the dwc3 driver with the following call stack:

  device_del()
    gadget_unbind_driver()
      usb_gadget_disconnect_locked()
        dwc3_gadget_pullup()
          dwc3_gadget_soft_disconnect()
            usb_gadget_set_state()
              schedule_work(&gadget->work)
Move flush_work() after device_del() to ensure the workqueue is cleaned up.
Fixes: 5702f75375aa9 ("usb: gadget: udc-core: move sysfs_notify() to a workqueue")
Cc: stable stable@kernel.org
Signed-off-by: Roy Luo royluo@google.com
Reviewed-by: Alan Stern stern@rowland.harvard.edu
Reviewed-by: Thinh Nguyen Thinh.Nguyen@synopsys.com
Link: https://lore.kernel.org/r/20250204233642.666991-1-royluo@google.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/usb/gadget/udc/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
index 5085580eeb3ad..5adb6e831126a 100644
--- a/drivers/usb/gadget/udc/core.c
+++ b/drivers/usb/gadget/udc/core.c
@@ -1538,8 +1538,8 @@ void usb_del_gadget(struct usb_gadget *gadget)

 	kobject_uevent(&udc->dev.kobj, KOBJ_REMOVE);
 	sysfs_remove_link(&udc->dev.kobj, "gadget");
-	flush_work(&gadget->work);
 	device_del(&gadget->dev);
+	flush_work(&gadget->work);
 	ida_free(&gadget_id_numbers, gadget->id_number);
 	cancel_work_sync(&udc->vbus_work);
 	device_unregister(&udc->dev);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jill Donahue jilliandonahue58@gmail.com
[ Upstream commit 4ab37fcb42832cdd3e9d5e50653285ca84d6686f ]
When using USB MIDI, the driver can attempt to acquire the same lock twice through a re-entrant call to f_midi_transmit(), causing a deadlock.
Fix it by using queue_work() to schedule the inner f_midi_transmit() via a high priority work queue from the completion handler.
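A minimal sketch of the pattern (the handler body is abridged and the function name here is illustrative; only the deferral is the point):

	static void f_midi_complete_sketch(struct usb_ep *ep, struct usb_request *req)
	{
		struct f_midi *midi = ep->driver_data;	/* abridged sketch */

		/* Calling f_midi_transmit() directly from the completion
		 * handler re-enters the transmit path and tries to take a
		 * lock that is already held. Deferring to a workqueue runs
		 * the transmit in a fresh context instead. */
		queue_work(system_highpri_wq, &midi->work);
	}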
Link: https://lore.kernel.org/all/CAArt=LjxU0fUZOj06X+5tkeGT+6RbXzpWg1h4t4Fwa_KGVA...
Fixes: d5daf49b58661 ("USB: gadget: midi: add midi function driver")
Cc: stable stable@kernel.org
Signed-off-by: Jill Donahue jilliandonahue58@gmail.com
Link: https://lore.kernel.org/r/20250211174805.1369265-1-jdonahue@fender.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/usb/gadget/function/f_midi.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/usb/gadget/function/f_midi.c b/drivers/usb/gadget/function/f_midi.c
index 1d8521103b661..dd1cfeeffb671 100644
--- a/drivers/usb/gadget/function/f_midi.c
+++ b/drivers/usb/gadget/function/f_midi.c
@@ -282,7 +282,7 @@ f_midi_complete(struct usb_ep *ep, struct usb_request *req)
 			/* Our transmit completed. See if there's more to go.
 			 * f_midi_transmit eats req, don't queue it again. */
 			req->length = 0;
-			f_midi_transmit(midi);
+			queue_work(system_highpri_wq, &midi->work);
 			return;
 		}
 		break;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: John Keeping jkeeping@inmusicbrands.com
[ Upstream commit 6b24e67b4056ba83b1e95e005b7e50fdb1cc6cf4 ]
Commit 2f45a4e289779 ("ASoC: rockchip: i2s_tdm: Fixup config for SND_SOC_DAIFMT_DSP_A/B") applied a partial change to fix the configuration for DSP A and DSP B formats.
The shift control also needs updating to set the correct offset for frame data compared to LRCK. Set the correct values.
Fixes: 081068fd64140 ("ASoC: rockchip: add support for i2s-tdm controller")
Signed-off-by: John Keeping jkeeping@inmusicbrands.com
Link: https://patch.msgid.link/20250204161311.2117240-1-jkeeping@inmusicbrands.com
Signed-off-by: Mark Brown broonie@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 sound/soc/rockchip/rockchip_i2s_tdm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/sound/soc/rockchip/rockchip_i2s_tdm.c b/sound/soc/rockchip/rockchip_i2s_tdm.c
index d20438cf8fc4a..ff743ba0a9441 100644
--- a/sound/soc/rockchip/rockchip_i2s_tdm.c
+++ b/sound/soc/rockchip/rockchip_i2s_tdm.c
@@ -453,11 +453,11 @@ static int rockchip_i2s_tdm_set_fmt(struct snd_soc_dai *cpu_dai,
 		break;
 	case SND_SOC_DAIFMT_DSP_A:
 		val = I2S_TXCR_TFS_TDM_PCM;
-		tdm_val = TDM_SHIFT_CTRL(0);
+		tdm_val = TDM_SHIFT_CTRL(2);
 		break;
 	case SND_SOC_DAIFMT_DSP_B:
 		val = I2S_TXCR_TFS_TDM_PCM;
-		tdm_val = TDM_SHIFT_CTRL(2);
+		tdm_val = TDM_SHIFT_CTRL(4);
 		break;
 	default:
 		ret = -EINVAL;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michael Ellerman mpe@ellerman.id.au
[ Upstream commit 8ae4f16f7d7b59cca55aeca6db7c9636ffe7fbaa ]
The stub versions of __real_pte() etc are only used with HPT & 4K pages, so move them into the hash-4k.h header.
Signed-off-by: Michael Ellerman mpe@ellerman.id.au
Link: https://msgid.link/20240821080729.872034-1-mpe@ellerman.id.au
Stable-dep-of: 61bcc752d1b8 ("powerpc/64s: Rewrite __real_pte() and __rpte_to_hidx() as static inline")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/powerpc/include/asm/book3s/64/hash-4k.h | 20 +++++++++++++++
 arch/powerpc/include/asm/book3s/64/pgtable.h | 26 --------------------
 2 files changed, 20 insertions(+), 26 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
index b6ac4f86c87b4..5a79dd66b2ed0 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
@@ -89,6 +89,26 @@ static inline int hash__hugepd_ok(hugepd_t hpd)
 }
 #endif

+/*
+ * With 4K page size the real_pte machinery is all nops.
+ */
+#define __real_pte(e, p, o)		((real_pte_t){(e)})
+#define __rpte_to_pte(r)	((r).pte)
+#define __rpte_to_hidx(r,index)	(pte_val(__rpte_to_pte(r)) >> H_PAGE_F_GIX_SHIFT)
+
+#define pte_iterate_hashed_subpages(rpte, psize, va, index, shift)	\
+	do {								\
+		index = 0;						\
+		shift = mmu_psize_defs[psize].shift;			\
+
+#define pte_iterate_hashed_end() } while(0)
+
+/*
+ * We expect this to be called only for user addresses or kernel virtual
+ * addresses other than the linear mapping.
+ */
+#define pte_pagesize_index(mm, addr, pte)	MMU_PAGE_4K
+
 /*
  * 4K PTE format is different from 64K PTE format. Saving the hash_slot is just
  * a matter of returning the PTE bits that need to be modified. On 64K PTE,
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index c436d84226540..fdbe0b381f3ae 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -318,32 +318,6 @@ extern unsigned long pci_io_base;

 #ifndef __ASSEMBLY__

-/*
- * This is the default implementation of various PTE accessors, it's
- * used in all cases except Book3S with 64K pages where we have a
- * concept of sub-pages
- */
-#ifndef __real_pte
-
-#define __real_pte(e, p, o)		((real_pte_t){(e)})
-#define __rpte_to_pte(r)	((r).pte)
-#define __rpte_to_hidx(r,index)	(pte_val(__rpte_to_pte(r)) >> H_PAGE_F_GIX_SHIFT)
-
-#define pte_iterate_hashed_subpages(rpte, psize, va, index, shift)	\
-	do {								\
-		index = 0;						\
-		shift = mmu_psize_defs[psize].shift;			\
-
-#define pte_iterate_hashed_end() } while(0)
-
-/*
- * We expect this to be called only for user addresses or kernel virtual
- * addresses other than the linear mapping.
- */
-#define pte_pagesize_index(mm, addr, pte)	MMU_PAGE_4K
-
-#endif /* __real_pte */
-
 static inline unsigned long pte_update(struct mm_struct *mm, unsigned long addr,
 				       pte_t *ptep, unsigned long clr,
 				       unsigned long set, int huge)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christophe Leroy christophe.leroy@csgroup.eu
[ Upstream commit 61bcc752d1b81fde3cae454ff20c1d3c359df500 ]
Rewrite __real_pte() and __rpte_to_hidx() as static inline to avoid the following warnings/errors when building with a 4K page size:
  CC      arch/powerpc/mm/book3s64/hash_tlb.o
arch/powerpc/mm/book3s64/hash_tlb.c: In function 'hpte_need_flush':
arch/powerpc/mm/book3s64/hash_tlb.c:49:16: error: variable 'offset' set but not used [-Werror=unused-but-set-variable]
   49 |         int i, offset;
      |                ^~~~~~

  CC      arch/powerpc/mm/book3s64/hash_native.o
arch/powerpc/mm/book3s64/hash_native.c: In function 'native_flush_hash_range':
arch/powerpc/mm/book3s64/hash_native.c:782:29: error: variable 'index' set but not used [-Werror=unused-but-set-variable]
  782 |         unsigned long hash, index, hidx, shift, slot;
      |                             ^~~~~
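Why the macro form trips -Werror=unused-but-set-variable while a static inline does not, sketched generically:

	/* With the macro, the 'o' argument is textually discarded, so a
	 * variable passed as 'o' is assigned but never read:
	 *
	 *   #define __real_pte(e, p, o)  ((real_pte_t){(e)})
	 *
	 * A static inline consumes all of its parameters as far as the
	 * compiler's data-flow analysis is concerned, so the caller's
	 * variable counts as used and the warning goes away: */
	static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep, int offset)
	{
		return (real_pte_t){pte};
	}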
Reported-by: kernel test robot lkp@intel.com
Closes: https://lore.kernel.org/oe-kbuild-all/202501081741.AYFwybsq-lkp@intel.com/
Fixes: ff31e105464d ("powerpc/mm/hash64: Store the slot information at the right offset for hugetlb")
Signed-off-by: Christophe Leroy christophe.leroy@csgroup.eu
Reviewed-by: Ritesh Harjani (IBM) ritesh.list@gmail.com
Signed-off-by: Madhavan Srinivasan maddy@linux.ibm.com
Link: https://patch.msgid.link/e0d340a5b7bd478ecbf245d826e6ab2778b74e06.1736706263...
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/powerpc/include/asm/book3s/64/hash-4k.h | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
index 5a79dd66b2ed0..433d164374cb6 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
@@ -92,9 +92,17 @@ static inline int hash__hugepd_ok(hugepd_t hpd)
 /*
  * With 4K page size the real_pte machinery is all nops.
  */
-#define __real_pte(e, p, o)		((real_pte_t){(e)})
+static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep, int offset)
+{
+	return (real_pte_t){pte};
+}
+
 #define __rpte_to_pte(r)	((r).pte)
-#define __rpte_to_hidx(r,index)	(pte_val(__rpte_to_pte(r)) >> H_PAGE_F_GIX_SHIFT)
+
+static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
+{
+	return pte_val(__rpte_to_pte(rpte)) >> H_PAGE_F_GIX_SHIFT;
+}

 #define pte_iterate_hashed_subpages(rpte, psize, va, index, shift)	\
 	do {								\
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kailang Yang kailang@realtek.com
[ Upstream commit 174448badb4409491bfba2e6b46f7aa078741c5e ]
The headset mic will not function when power_save=0.
Fixes: 1fd50509fe14 ("ALSA: hda/realtek: Update ALC225 depop procedure")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=219743
Signed-off-by: Kailang Yang kailang@realtek.com
Link: https://lore.kernel.org/0474a095ab0044d0939ec4bf4362423d@realtek.com
Signed-off-by: Takashi Iwai tiwai@suse.de
Signed-off-by: Sasha Levin sashal@kernel.org
---
 sound/pci/hda/patch_realtek.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index 183c8a587acfe..96725b6599ec9 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -3776,6 +3776,7 @@ static void alc225_init(struct hda_codec *codec)
 				    AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);

 		msleep(75);
+		alc_update_coef_idx(codec, 0x4a, 3 << 10, 0);
 		alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
 	}
 }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christophe Leroy christophe.leroy@csgroup.eu
[ Upstream commit d262a192d38e527faa5984629aabda2e0d1c4f54 ]
Erhard reported the following KASAN hit while booting his PowerMac G4 with a KASAN-enabled kernel 6.13-rc6:
BUG: KASAN: vmalloc-out-of-bounds in copy_to_kernel_nofault+0xd8/0x1c8
Write of size 8 at addr f1000000 by task chronyd/1293

CPU: 0 UID: 123 PID: 1293 Comm: chronyd Tainted: G W 6.13.0-rc6-PMacG4 #2
Tainted: [W]=WARN
Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
Call Trace:
[c2437590] [c1631a84] dump_stack_lvl+0x70/0x8c (unreliable)
[c24375b0] [c0504998] print_report+0xdc/0x504
[c2437610] [c050475c] kasan_report+0xf8/0x108
[c2437690] [c0505a3c] kasan_check_range+0x24/0x18c
[c24376a0] [c03fb5e4] copy_to_kernel_nofault+0xd8/0x1c8
[c24376c0] [c004c014] patch_instructions+0x15c/0x16c
[c2437710] [c00731a8] bpf_arch_text_copy+0x60/0x7c
[c2437730] [c0281168] bpf_jit_binary_pack_finalize+0x50/0xac
[c2437750] [c0073cf4] bpf_int_jit_compile+0xb30/0xdec
[c2437880] [c0280394] bpf_prog_select_runtime+0x15c/0x478
[c24378d0] [c1263428] bpf_prepare_filter+0xbf8/0xc14
[c2437990] [c12677ec] bpf_prog_create_from_user+0x258/0x2b4
[c24379d0] [c027111c] do_seccomp+0x3dc/0x1890
[c2437ac0] [c001d8e0] system_call_exception+0x2dc/0x420
[c2437f30] [c00281ac] ret_from_syscall+0x0/0x2c
--- interrupt: c00 at 0x5a1274
NIP: 005a1274 LR: 006a3b3c CTR: 005296c8
REGS: c2437f40 TRAP: 0c00 Tainted: G W (6.13.0-rc6-PMacG4)
MSR: 0200f932 <VEC,EE,PR,FP,ME,IR,DR,RI> CR: 24004422 XER: 00000000

GPR00: 00000166 af8f3fa0 a7ee3540 00000001 00000000 013b6500 005a5858 0200f932
GPR08: 00000000 00001fe9 013d5fc8 005296c8 2822244c 00b2fcd8 00000000 af8f4b57
GPR16: 00000000 00000001 00000000 00000000 00000000 00000001 00000000 00000002
GPR24: 00afdbb0 00000000 00000000 00000000 006e0004 013ce060 006e7c1c 00000001
NIP [005a1274] 0x5a1274
LR [006a3b3c] 0x6a3b3c
--- interrupt: c00

The buggy address belongs to the virtual mapping at
 [f1000000, f1002000) created by:
 text_area_cpu_up+0x20/0x190

The buggy address belongs to the physical page:
page: refcount:1 mapcount:0 mapping:00000000 index:0x0 pfn:0x76e30
flags: 0x80000000(zone=2)
raw: 80000000 00000000 00000122 00000000 00000000 00000000 ffffffff 00000001
raw: 00000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 f0ffff00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 f0ffff80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>f1000000: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
           ^
 f1000080: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
 f1000100: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
==================================================================
f8 corresponds to KASAN_VMALLOC_INVALID, which means the area is not initialised and hence not supposed to be used yet.
The powerpc text patching infrastructure allocates a virtual memory area using get_vm_area() and flags it as VM_ALLOC. But that flag is meant for vmalloc(), and vmalloc()-allocated memory is not supposed to be used before a call to __vmalloc_node_range(), which is never called for that area.
That went undetected until commit e4137f08816b ("mm, kasan, kmsan: instrument copy_from/to_kernel_nofault")
The area allocated by text_area_cpu_up() is not vmalloc memory; it is mapped directly on demand when needed by map_kernel_page(). There is no VM flag corresponding to such usage, so just pass no flag. That way the area will be unpoisoned and usable immediately.
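A one-line sketch of the fix and its rationale (flags values as in the diff below):

	/* A VM_ALLOC area is poisoned as KASAN_VMALLOC_INVALID at creation
	 * and only unpoisoned by the vmalloc() allocation path, which
	 * never runs for this on-demand-mapped area. */
	area = get_vm_area(PAGE_SIZE, VM_ALLOC);	/* before: stays poisoned */
	area = get_vm_area(PAGE_SIZE, 0);		/* after: usable at once */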
Reported-by: Erhard Furtner erhard_f@mailbox.org
Closes: https://lore.kernel.org/all/20250112135832.57c92322@yea/
Fixes: 37bc3e5fd764 ("powerpc/lib/code-patching: Use alternate map for patch_instruction()")
Signed-off-by: Christophe Leroy christophe.leroy@csgroup.eu
Signed-off-by: Madhavan Srinivasan maddy@linux.ibm.com
Link: https://patch.msgid.link/06621423da339b374f48c0886e3a5db18e896be8.1739342693...
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/powerpc/lib/code-patching.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index ad0cf3108dd09..65353cf2b0a8a 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -53,7 +53,7 @@ static int text_area_cpu_up(unsigned int cpu)
 	unsigned long addr;
 	int err;

-	area = get_vm_area(PAGE_SIZE, VM_ALLOC);
+	area = get_vm_area(PAGE_SIZE, 0);
 	if (!area) {
 		WARN_ONCE(1, "Failed to create text area for cpu %d\n",
 			  cpu);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 9593172d93b9f91c362baec4643003dc29802929 ]
syzkaller reported a use-after-free in geneve_find_dev() [0] without repro.
geneve_configure() links struct geneve_dev.next to net_generic(net, geneve_net_id)->geneve_list.
The net here could differ from dev_net(dev) if IFLA_NET_NS_PID, IFLA_NET_NS_FD, or IFLA_TARGET_NETNSID is set.
When dev_net(dev) is dismantled, geneve_exit_batch_rtnl() finally calls unregister_netdevice_queue() for each dev in the netns, and later the dev is freed.
However, its geneve_dev.next is still linked to the backend UDP socket netns.
Then, use-after-free will occur when another geneve dev is created in the netns.
Let's call geneve_dellink() instead in geneve_destroy_tunnels().
[0]:
BUG: KASAN: slab-use-after-free in geneve_find_dev drivers/net/geneve.c:1295 [inline]
BUG: KASAN: slab-use-after-free in geneve_configure+0x234/0x858 drivers/net/geneve.c:1343
Read of size 2 at addr ffff000054d6ee24 by task syz.1.4029/13441

CPU: 1 UID: 0 PID: 13441 Comm: syz.1.4029 Not tainted 6.13.0-g0ad9617c78ac #24 dc35ca22c79fb82e8e7bc5c9c9adafea898b1e3d
Hardware name: linux,dummy-virt (DT)
Call trace:
 show_stack+0x38/0x50 arch/arm64/kernel/stacktrace.c:466 (C)
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0xbc/0x108 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:378 [inline]
 print_report+0x16c/0x6f0 mm/kasan/report.c:489
 kasan_report+0xc0/0x120 mm/kasan/report.c:602
 __asan_report_load2_noabort+0x20/0x30 mm/kasan/report_generic.c:379
 geneve_find_dev drivers/net/geneve.c:1295 [inline]
 geneve_configure+0x234/0x858 drivers/net/geneve.c:1343
 geneve_newlink+0xb8/0x128 drivers/net/geneve.c:1634
 rtnl_newlink_create+0x23c/0x868 net/core/rtnetlink.c:3795
 __rtnl_newlink net/core/rtnetlink.c:3906 [inline]
 rtnl_newlink+0x1054/0x1630 net/core/rtnetlink.c:4021
 rtnetlink_rcv_msg+0x61c/0x918 net/core/rtnetlink.c:6911
 netlink_rcv_skb+0x1dc/0x398 net/netlink/af_netlink.c:2543
 rtnetlink_rcv+0x34/0x50 net/core/rtnetlink.c:6938
 netlink_unicast_kernel net/netlink/af_netlink.c:1322 [inline]
 netlink_unicast+0x618/0x838 net/netlink/af_netlink.c:1348
 netlink_sendmsg+0x5fc/0x8b0 net/netlink/af_netlink.c:1892
 sock_sendmsg_nosec net/socket.c:713 [inline]
 __sock_sendmsg net/socket.c:728 [inline]
 ____sys_sendmsg+0x410/0x6f8 net/socket.c:2568
 ___sys_sendmsg+0x178/0x1d8 net/socket.c:2622
 __sys_sendmsg net/socket.c:2654 [inline]
 __do_sys_sendmsg net/socket.c:2659 [inline]
 __se_sys_sendmsg net/socket.c:2657 [inline]
 __arm64_sys_sendmsg+0x12c/0x1c8 net/socket.c:2657
 __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
 invoke_syscall+0x90/0x278 arch/arm64/kernel/syscall.c:49
 el0_svc_common+0x13c/0x250 arch/arm64/kernel/syscall.c:132
 do_el0_svc+0x54/0x70 arch/arm64/kernel/syscall.c:151
 el0_svc+0x4c/0xa8 arch/arm64/kernel/entry-common.c:744
 el0t_64_sync_handler+0x78/0x108 arch/arm64/kernel/entry-common.c:762
 el0t_64_sync+0x198/0x1a0 arch/arm64/kernel/entry.S:600

Allocated by task 13247:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x30/0x68 mm/kasan/common.c:68
 kasan_save_alloc_info+0x44/0x58 mm/kasan/generic.c:568
 poison_kmalloc_redzone mm/kasan/common.c:377 [inline]
 __kasan_kmalloc+0x84/0xa0 mm/kasan/common.c:394
 kasan_kmalloc include/linux/kasan.h:260 [inline]
 __do_kmalloc_node mm/slub.c:4298 [inline]
 __kmalloc_node_noprof+0x2a0/0x560 mm/slub.c:4304
 __kvmalloc_node_noprof+0x9c/0x230 mm/util.c:645
 alloc_netdev_mqs+0xb8/0x11a0 net/core/dev.c:11470
 rtnl_create_link+0x2b8/0xb50 net/core/rtnetlink.c:3604
 rtnl_newlink_create+0x19c/0x868 net/core/rtnetlink.c:3780
 __rtnl_newlink net/core/rtnetlink.c:3906 [inline]
 rtnl_newlink+0x1054/0x1630 net/core/rtnetlink.c:4021
 rtnetlink_rcv_msg+0x61c/0x918 net/core/rtnetlink.c:6911
 netlink_rcv_skb+0x1dc/0x398 net/netlink/af_netlink.c:2543
 rtnetlink_rcv+0x34/0x50 net/core/rtnetlink.c:6938
 netlink_unicast_kernel net/netlink/af_netlink.c:1322 [inline]
 netlink_unicast+0x618/0x838 net/netlink/af_netlink.c:1348
 netlink_sendmsg+0x5fc/0x8b0 net/netlink/af_netlink.c:1892
 sock_sendmsg_nosec net/socket.c:713 [inline]
 __sock_sendmsg net/socket.c:728 [inline]
 ____sys_sendmsg+0x410/0x6f8 net/socket.c:2568
 ___sys_sendmsg+0x178/0x1d8 net/socket.c:2622
 __sys_sendmsg net/socket.c:2654 [inline]
 __do_sys_sendmsg net/socket.c:2659 [inline]
 __se_sys_sendmsg net/socket.c:2657 [inline]
 __arm64_sys_sendmsg+0x12c/0x1c8 net/socket.c:2657
 __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
 invoke_syscall+0x90/0x278 arch/arm64/kernel/syscall.c:49
 el0_svc_common+0x13c/0x250 arch/arm64/kernel/syscall.c:132
 do_el0_svc+0x54/0x70 arch/arm64/kernel/syscall.c:151
 el0_svc+0x4c/0xa8 arch/arm64/kernel/entry-common.c:744
 el0t_64_sync_handler+0x78/0x108 arch/arm64/kernel/entry-common.c:762
 el0t_64_sync+0x198/0x1a0 arch/arm64/kernel/entry.S:600

Freed by task 45:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x30/0x68 mm/kasan/common.c:68
 kasan_save_free_info+0x58/0x70 mm/kasan/generic.c:582
 poison_slab_object mm/kasan/common.c:247 [inline]
 __kasan_slab_free+0x48/0x68 mm/kasan/common.c:264
 kasan_slab_free include/linux/kasan.h:233 [inline]
 slab_free_hook mm/slub.c:2353 [inline]
 slab_free mm/slub.c:4613 [inline]
 kfree+0x140/0x420 mm/slub.c:4761
 kvfree+0x4c/0x68 mm/util.c:688
 netdev_release+0x94/0xc8 net/core/net-sysfs.c:2065
 device_release+0x98/0x1c0
 kobject_cleanup lib/kobject.c:689 [inline]
 kobject_release lib/kobject.c:720 [inline]
 kref_put include/linux/kref.h:65 [inline]
 kobject_put+0x2b0/0x438 lib/kobject.c:737
 netdev_run_todo+0xe5c/0xfc8 net/core/dev.c:11185
 rtnl_unlock+0x20/0x38 net/core/rtnetlink.c:151
 cleanup_net+0x4fc/0x8c0 net/core/net_namespace.c:648
 process_one_work+0x700/0x1398 kernel/workqueue.c:3236
 process_scheduled_works kernel/workqueue.c:3317 [inline]
 worker_thread+0x8c4/0xe10 kernel/workqueue.c:3398
 kthread+0x4bc/0x608 kernel/kthread.c:464
 ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:862

The buggy address belongs to the object at ffff000054d6e000
 which belongs to the cache kmalloc-cg-4k of size 4096
The buggy address is located 3620 bytes inside of
 freed 4096-byte region [ffff000054d6e000, ffff000054d6f000)

The buggy address belongs to the physical page:
page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x94d68
head: order:3 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
memcg:ffff000016276181
flags: 0x3fffe0000000040(head|node=0|zone=0|lastcpupid=0x1ffff)
page_type: f5(slab)
raw: 03fffe0000000040 ffff0000c000f500 dead000000000122 0000000000000000
raw: 0000000000000000 0000000000040004 00000001f5000000 ffff000016276181
head: 03fffe0000000040 ffff0000c000f500 dead000000000122 0000000000000000
head: 0000000000000000 0000000000040004 00000001f5000000 ffff000016276181
head: 03fffe0000000003 fffffdffc1535a01 ffffffffffffffff 0000000000000000
head: 0000000000000008 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 ffff000054d6ed00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff000054d6ed80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff000054d6ee00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                   ^
 ffff000054d6ee80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff000054d6ef00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
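For reference, geneve_dellink() is sketched below (abridged from memory of the upstream driver, shown for context rather than as part of the diff): it unlinks the device from the per-netns geneve_list before queueing it for unregistration, which is exactly the step the destroy path was missing.

	static void geneve_dellink(struct net_device *dev, struct list_head *head)
	{
		struct geneve_dev *geneve = netdev_priv(dev);

		/* drop the per-netns list linkage that would otherwise dangle ... */
		list_del(&geneve->next);
		/* ... then queue the device for unregistration as before */
		unregister_netdevice_queue(dev, head);
	}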
Fixes: 2d07dc79fe04 ("geneve: add initial netdev driver for GENEVE tunnels")
Reported-by: syzkaller syzkaller@googlegroups.com
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com
Link: https://patch.msgid.link/20250213043354.91368-1-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski kuba@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/geneve.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)
diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
index 27b570678c9fc..15b85eb3daa19 100644
--- a/drivers/net/geneve.c
+++ b/drivers/net/geneve.c
@@ -1970,16 +1970,11 @@ static void geneve_destroy_tunnels(struct net *net, struct list_head *head)
 	/* gather any geneve devices that were moved into this ns */
 	for_each_netdev_safe(net, dev, aux)
 		if (dev->rtnl_link_ops == &geneve_link_ops)
-			unregister_netdevice_queue(dev, head);
+			geneve_dellink(dev, head);

 	/* now gather any other geneve devices that were created in this ns */
-	list_for_each_entry_safe(geneve, next, &gn->geneve_list, next) {
-		/* If geneve->dev is in the same netns, it was already added
-		 * to the list by the previous loop.
-		 */
-		if (!net_eq(dev_net(geneve->dev), net))
-			unregister_netdevice_queue(geneve->dev, head);
-	}
+	list_for_each_entry_safe(geneve, next, &gn->geneve_list, next)
+		geneve_dellink(geneve->dev, head);
 }

static void __net_exit geneve_exit_batch_net(struct list_head *net_list)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vitaly Rodionov vitalyr@opensource.cirrus.com
[ Upstream commit 08b613b9e2ba431db3bd15cb68ca72472a50ef5c ]
This patch corrects the full-scale volume setting logic. On certain platforms, the full-scale volume bit is required. The current logic mistakenly sets this bit and incorrectly clears reserved bit 0, causing the headphone output to be muted.
Fixes: 342b6b610ae2 ("ALSA: hda/cs8409: Fix Full Scale Volume setting for all variants")
Signed-off-by: Vitaly Rodionov vitalyr@opensource.cirrus.com
Link: https://patch.msgid.link/20250214210736.30814-1-vitalyr@opensource.cirrus.co...
Signed-off-by: Takashi Iwai tiwai@suse.de
Signed-off-by: Sasha Levin sashal@kernel.org
---
 sound/pci/hda/patch_cs8409-tables.c |  6 +++---
 sound/pci/hda/patch_cs8409.c        | 20 +++++++++++---------
 sound/pci/hda/patch_cs8409.h        |  5 +++--
 3 files changed, 17 insertions(+), 14 deletions(-)
diff --git a/sound/pci/hda/patch_cs8409-tables.c b/sound/pci/hda/patch_cs8409-tables.c
index b288874e401e5..b2e1856ab8918 100644
--- a/sound/pci/hda/patch_cs8409-tables.c
+++ b/sound/pci/hda/patch_cs8409-tables.c
@@ -121,7 +121,7 @@ static const struct cs8409_i2c_param cs42l42_init_reg_seq[] = {
 	{ CS42L42_MIXER_CHA_VOL, 0x3F },
 	{ CS42L42_MIXER_CHB_VOL, 0x3F },
 	{ CS42L42_MIXER_ADC_VOL, 0x3f },
-	{ CS42L42_HP_CTL, 0x03 },
+	{ CS42L42_HP_CTL, 0x0D },
 	{ CS42L42_MIC_DET_CTL1, 0xB6 },
 	{ CS42L42_TIPSENSE_CTL, 0xC2 },
 	{ CS42L42_HS_CLAMP_DISABLE, 0x01 },
@@ -315,7 +315,7 @@ static const struct cs8409_i2c_param dolphin_c0_init_reg_seq[] = {
 	{ CS42L42_ASP_TX_SZ_EN, 0x01 },
 	{ CS42L42_PWR_CTL1, 0x0A },
 	{ CS42L42_PWR_CTL2, 0x84 },
-	{ CS42L42_HP_CTL, 0x03 },
+	{ CS42L42_HP_CTL, 0x0D },
 	{ CS42L42_MIXER_CHA_VOL, 0x3F },
 	{ CS42L42_MIXER_CHB_VOL, 0x3F },
 	{ CS42L42_MIXER_ADC_VOL, 0x3f },
@@ -371,7 +371,7 @@ static const struct cs8409_i2c_param dolphin_c1_init_reg_seq[] = {
 	{ CS42L42_ASP_TX_SZ_EN, 0x00 },
 	{ CS42L42_PWR_CTL1, 0x0E },
 	{ CS42L42_PWR_CTL2, 0x84 },
-	{ CS42L42_HP_CTL, 0x01 },
+	{ CS42L42_HP_CTL, 0x0D },
 	{ CS42L42_MIXER_CHA_VOL, 0x3F },
 	{ CS42L42_MIXER_CHB_VOL, 0x3F },
 	{ CS42L42_MIXER_ADC_VOL, 0x3f },
diff --git a/sound/pci/hda/patch_cs8409.c b/sound/pci/hda/patch_cs8409.c
index 892223d9e64ab..b003ac1990ba8 100644
--- a/sound/pci/hda/patch_cs8409.c
+++ b/sound/pci/hda/patch_cs8409.c
@@ -876,7 +876,7 @@ static void cs42l42_resume(struct sub_codec *cs42l42)
 		{ CS42L42_DET_INT_STATUS2, 0x00 },
 		{ CS42L42_TSRS_PLUG_STATUS, 0x00 },
 	};
-	int fsv_old, fsv_new;
+	unsigned int fsv;

 	/* Bring CS42L42 out of Reset */
 	spec->gpio_data = snd_hda_codec_read(codec, CS8409_PIN_AFG, 0, AC_VERB_GET_GPIO_DATA, 0);
@@ -893,13 +893,15 @@ static void cs42l42_resume(struct sub_codec *cs42l42)
 	/* Clear interrupts, by reading interrupt status registers */
 	cs8409_i2c_bulk_read(cs42l42, irq_regs, ARRAY_SIZE(irq_regs));

-	fsv_old = cs8409_i2c_read(cs42l42, CS42L42_HP_CTL);
-	if (cs42l42->full_scale_vol == CS42L42_FULL_SCALE_VOL_0DB)
-		fsv_new = fsv_old & ~CS42L42_FULL_SCALE_VOL_MASK;
-	else
-		fsv_new = fsv_old & CS42L42_FULL_SCALE_VOL_MASK;
-	if (fsv_new != fsv_old)
-		cs8409_i2c_write(cs42l42, CS42L42_HP_CTL, fsv_new);
+	fsv = cs8409_i2c_read(cs42l42, CS42L42_HP_CTL);
+	if (cs42l42->full_scale_vol) {
+		// Set the full scale volume bit
+		fsv |= CS42L42_FULL_SCALE_VOL_MASK;
+		cs8409_i2c_write(cs42l42, CS42L42_HP_CTL, fsv);
+	}
+	// Unmute analog channels A and B
+	fsv = (fsv & ~CS42L42_ANA_MUTE_AB);
+	cs8409_i2c_write(cs42l42, CS42L42_HP_CTL, fsv);

 	/* we have to explicitly allow unsol event handling even during the
 	 * resume phase so that the jack event is processed properly
@@ -921,7 +923,7 @@ static void cs42l42_suspend(struct sub_codec *cs42l42)
 		{ CS42L42_MIXER_CHA_VOL, 0x3F },
 		{ CS42L42_MIXER_ADC_VOL, 0x3F },
 		{ CS42L42_MIXER_CHB_VOL, 0x3F },
-		{ CS42L42_HP_CTL, 0x0F },
+		{ CS42L42_HP_CTL, 0x0D },
 		{ CS42L42_ASP_RX_DAI0_EN, 0x00 },
 		{ CS42L42_ASP_CLK_CFG, 0x00 },
 		{ CS42L42_PWR_CTL1, 0xFE },
diff --git a/sound/pci/hda/patch_cs8409.h b/sound/pci/hda/patch_cs8409.h
index 937e9387abdc7..bca81d49f201a 100644
--- a/sound/pci/hda/patch_cs8409.h
+++ b/sound/pci/hda/patch_cs8409.h
@@ -230,9 +230,10 @@ enum cs8409_coefficient_index_registers {
 #define CS42L42_PDN_TIMEOUT_US			(250000)
 #define CS42L42_PDN_SLEEP_US			(2000)
 #define CS42L42_INIT_TIMEOUT_MS			(45)
+#define CS42L42_ANA_MUTE_AB			(0x0C)
 #define CS42L42_FULL_SCALE_VOL_MASK		(2)
-#define CS42L42_FULL_SCALE_VOL_0DB		(1)
-#define CS42L42_FULL_SCALE_VOL_MINUS6DB	(0)
+#define CS42L42_FULL_SCALE_VOL_0DB		(0)
+#define CS42L42_FULL_SCALE_VOL_MINUS6DB	(1)

 /* Dell BULLSEYE / WARLOCK / CYBORG Specific Definitions */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nick Child nnac123@linux.ibm.com
[ Upstream commit 5cb431dcf8048572e9ffc6c30cdbd8832cbe502d ]
In ibmvnic_xmit(), if ibmvnic_tx_scrq_flush() returns H_CLOSED, then it will inform the upper-level networking functions to disable tx queues. H_CLOSED signals that the connection with the vnic server is down and that a transport event is expected to recover the device.
Previously, ibmvnic_tx_scrq_flush() was hard-coded to return success. Therefore, the queues would remain active until ibmvnic_cleanup() is called within do_reset().
The problem is that do_reset() depends on the RTNL lock. If several ibmvnic devices are resetting then there can be a long wait time until the last device can grab the lock. During this time the tx/rx queues still appear active to upper level functions.
FYI, we do make a call to netif_carrier_off() outside the RTNL lock but its calls to dev_deactivate() are also dependent on the RTNL lock.
As a result, large amounts of retransmissions were observed in a short period of time, eventually leading to ETIMEDOUT. This was specifically seen with HNV devices, likely because of even more RTNL dependencies.
Therefore, ensure the return code of ibmvnic_tx_scrq_flush() is propagated to the xmit function to allow for an earlier (and lock-less) response to a transport event.
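For illustration, the resulting pattern in ibmvnic_xmit() can be distilled as below. This is a condensed sketch of the diff that follows, not new code; the tx_err handling is elided:

        /* Propagate the flush result so a transport event (H_CLOSED)
         * stops the queues immediately, without waiting for do_reset()
         * to win the RTNL lock.
         */
        lpar_rc = ibmvnic_tx_scrq_flush(adapter, tx_scrq);
        if (lpar_rc != H_SUCCESS)
                goto tx_err;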
Signed-off-by: Nick Child nnac123@linux.ibm.com
Link: https://lore.kernel.org/r/20240416164128.387920-1-nnac123@linux.ibm.com
Signed-off-by: Paolo Abeni pabeni@redhat.com
Stable-dep-of: bdf5d13aa05e ("ibmvnic: Don't reference skb after sending to VIOS")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/ethernet/ibm/ibmvnic.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 6d17738c1c536..7fe1fefef9934 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -2181,7 +2181,7 @@ static int ibmvnic_tx_scrq_flush(struct ibmvnic_adapter *adapter,
 		ibmvnic_tx_scrq_clean_buffer(adapter, tx_scrq);
 	else
 		ind_bufp->index = 0;
-	return 0;
+	return rc;
 }
 
 static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
@@ -2234,7 +2234,9 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 		tx_dropped++;
 		tx_send_failed++;
 		ret = NETDEV_TX_OK;
-		ibmvnic_tx_scrq_flush(adapter, tx_scrq);
+		lpar_rc = ibmvnic_tx_scrq_flush(adapter, tx_scrq);
+		if (lpar_rc != H_SUCCESS)
+			goto tx_err;
 		goto out;
 	}
 
@@ -2249,8 +2251,10 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 		dev_kfree_skb_any(skb);
 		tx_send_failed++;
 		tx_dropped++;
-		ibmvnic_tx_scrq_flush(adapter, tx_scrq);
 		ret = NETDEV_TX_OK;
+		lpar_rc = ibmvnic_tx_scrq_flush(adapter, tx_scrq);
+		if (lpar_rc != H_SUCCESS)
+			goto tx_err;
 		goto out;
 	}
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nick Child nnac123@linux.ibm.com
[ Upstream commit 74839f7a82689bf5a21a5447cae8e3a7b7a606d2 ]
Firmware supports two hcalls to send a sub-crq request: H_SEND_SUB_CRQ_INDIRECT and H_SEND_SUB_CRQ. The indirect hcall allows for submission of batched messages while the other hcall is limited to only one message. This protocol is defined in PAPR section 17.2.3.3.
Previously, the ibmvnic xmit function only used the indirect hcall. This allowed the driver to batch its skbs. A single skb can occupy a few entries per hcall depending on whether FW requires skb header information or not. The FW only needs header information if the packet is segmented.
By this logic, if an skb is not GSO then it can fit in one sub-crq message and therefore is a candidate for H_SEND_SUB_CRQ. Batching skb transmission is only useful when there are more packets coming down the line (ie netdev_xmit_more is true).
As it turns out, H_SEND_SUB_CRQ induces less latency than H_SEND_SUB_CRQ_INDIRECT. Therefore, use H_SEND_SUB_CRQ where appropriate.
Small latency gains seen when doing TCP_RR_150 (request/response workload). Ftrace results (graph-time=1):

Previous:
 ibmvnic_xmit          = 29618270.83 us / 8860058.0 hits = AVG 3.34
 ibmvnic_tx_scrq_flush = 21972231.02 us / 6553972.0 hits = AVG 3.35
Now:
 ibmvnic_xmit          = 22153350.96 us / 8438942.0 hits = AVG 2.63
 ibmvnic_tx_scrq_flush = 15858922.4  us / 6244076.0 hits = AVG 2.54
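Condensing the selection logic from the diff below, the driver now picks the hcall roughly as follows. This is a hedged sketch; the real code sits inside the LSO else-if chain of ibmvnic_xmit() and its flush helper:

        /* One descriptor (not GSO), empty batch buffer, and nothing
         * queued behind this skb: the cheaper H_SEND_SUB_CRQ suffices.
         * Otherwise keep batching via H_SEND_SUB_CRQ_INDIRECT.
         */
        if (!skb_is_gso(skb) && !ind_bufp->index && !netdev_xmit_more())
                lpar_rc = ibmvnic_tx_scrq_flush(adapter, tx_scrq, false);
        else
                lpar_rc = ibmvnic_tx_scrq_flush(adapter, tx_scrq, true);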
Signed-off-by: Nick Child nnac123@linux.ibm.com
Link: https://patch.msgid.link/20240807211809.1259563-6-nnac123@linux.ibm.com
Signed-off-by: Jakub Kicinski kuba@kernel.org
Stable-dep-of: bdf5d13aa05e ("ibmvnic: Don't reference skb after sending to VIOS")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/ethernet/ibm/ibmvnic.c | 52 ++++++++++++++++++++++++++----
 1 file changed, 46 insertions(+), 6 deletions(-)
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 7fe1fefef9934..0b06fcd2d0f40 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -116,6 +116,7 @@ static void free_long_term_buff(struct ibmvnic_adapter *adapter,
				struct ibmvnic_long_term_buff *ltb);
 static void ibmvnic_disable_irqs(struct ibmvnic_adapter *adapter);
 static void flush_reset_queue(struct ibmvnic_adapter *adapter);
+static void print_subcrq_error(struct device *dev, int rc, const char *func);
 
 struct ibmvnic_stat {
 	char name[ETH_GSTRING_LEN];
@@ -2160,8 +2161,29 @@ static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
 	}
 }
 
+static int send_subcrq_direct(struct ibmvnic_adapter *adapter,
+			      u64 remote_handle, u64 *entry)
+{
+	unsigned int ua = adapter->vdev->unit_address;
+	struct device *dev = &adapter->vdev->dev;
+	int rc;
+
+	/* Make sure the hypervisor sees the complete request */
+	dma_wmb();
+	rc = plpar_hcall_norets(H_SEND_SUB_CRQ, ua,
+				cpu_to_be64(remote_handle),
+				cpu_to_be64(entry[0]), cpu_to_be64(entry[1]),
+				cpu_to_be64(entry[2]), cpu_to_be64(entry[3]));
+
+	if (rc)
+		print_subcrq_error(dev, rc, __func__);
+
+	return rc;
+}
+
 static int ibmvnic_tx_scrq_flush(struct ibmvnic_adapter *adapter,
-				 struct ibmvnic_sub_crq_queue *tx_scrq)
+				 struct ibmvnic_sub_crq_queue *tx_scrq,
+				 bool indirect)
 {
 	struct ibmvnic_ind_xmit_queue *ind_bufp;
 	u64 dma_addr;
@@ -2176,7 +2198,13 @@ static int ibmvnic_tx_scrq_flush(struct ibmvnic_adapter *adapter,
 
 	if (!entries)
 		return 0;
-	rc = send_subcrq_indirect(adapter, handle, dma_addr, entries);
+
+	if (indirect)
+		rc = send_subcrq_indirect(adapter, handle, dma_addr, entries);
+	else
+		rc = send_subcrq_direct(adapter, handle,
+					(u64 *)ind_bufp->indir_arr);
+
 	if (rc)
 		ibmvnic_tx_scrq_clean_buffer(adapter, tx_scrq);
 	else
@@ -2234,7 +2262,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 		tx_dropped++;
 		tx_send_failed++;
 		ret = NETDEV_TX_OK;
-		lpar_rc = ibmvnic_tx_scrq_flush(adapter, tx_scrq);
+		lpar_rc = ibmvnic_tx_scrq_flush(adapter, tx_scrq, true);
 		if (lpar_rc != H_SUCCESS)
 			goto tx_err;
 		goto out;
@@ -2252,7 +2280,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 		tx_send_failed++;
 		tx_dropped++;
 		ret = NETDEV_TX_OK;
-		lpar_rc = ibmvnic_tx_scrq_flush(adapter, tx_scrq);
+		lpar_rc = ibmvnic_tx_scrq_flush(adapter, tx_scrq, true);
 		if (lpar_rc != H_SUCCESS)
 			goto tx_err;
 		goto out;
@@ -2350,6 +2378,16 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 		tx_crq.v1.flags1 |= IBMVNIC_TX_LSO;
 		tx_crq.v1.mss = cpu_to_be16(skb_shinfo(skb)->gso_size);
 		hdrs += 2;
+	} else if (!ind_bufp->index && !netdev_xmit_more()) {
+		ind_bufp->indir_arr[0] = tx_crq;
+		ind_bufp->index = 1;
+		tx_buff->num_entries = 1;
+		netdev_tx_sent_queue(txq, skb->len);
+		lpar_rc = ibmvnic_tx_scrq_flush(adapter, tx_scrq, false);
+		if (lpar_rc != H_SUCCESS)
+			goto tx_err;
+
+		goto early_exit;
 	}
 
 	if ((*hdrs >> 7) & 1)
@@ -2359,7 +2397,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 	tx_buff->num_entries = num_entries;
 	/* flush buffer if current entry can not fit */
 	if (num_entries + ind_bufp->index > IBMVNIC_MAX_IND_DESCS) {
-		lpar_rc = ibmvnic_tx_scrq_flush(adapter, tx_scrq);
+		lpar_rc = ibmvnic_tx_scrq_flush(adapter, tx_scrq, true);
 		if (lpar_rc != H_SUCCESS)
 			goto tx_flush_err;
 	}
@@ -2367,15 +2405,17 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 	indir_arr[0] = tx_crq;
 	memcpy(&ind_bufp->indir_arr[ind_bufp->index], &indir_arr[0],
 	       num_entries * sizeof(struct ibmvnic_generic_scrq));
+
 	ind_bufp->index += num_entries;
 	if (__netdev_tx_sent_queue(txq, skb->len,
				   netdev_xmit_more() &&
				   ind_bufp->index < IBMVNIC_MAX_IND_DESCS)) {
-		lpar_rc = ibmvnic_tx_scrq_flush(adapter, tx_scrq);
+		lpar_rc = ibmvnic_tx_scrq_flush(adapter, tx_scrq, true);
 		if (lpar_rc != H_SUCCESS)
 			goto tx_err;
 	}
 
+early_exit:
 	if (atomic_add_return(num_entries, &tx_scrq->used) >=
	    adapter->req_tx_entries_per_subcrq) {
 		netdev_dbg(netdev, "Stopping queue %d\n", queue_num);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nick Child nnac123@linux.ibm.com
[ Upstream commit 2ee73c54a615b74d2e7ee6f20844fd3ba63fc485 ]
Allow tracking of packets sent with send_subcrq direct vs indirect. `ethtool -S <dev>` will now provide a counter of the number of uses of each xmit method. This metric will be useful in performance debugging.
Signed-off-by: Nick Child nnac123@linux.ibm.com
Reviewed-by: Simon Horman horms@kernel.org
Link: https://patch.msgid.link/20241001163531.1803152-1-nnac123@linux.ibm.com
Signed-off-by: Jakub Kicinski kuba@kernel.org
Stable-dep-of: bdf5d13aa05e ("ibmvnic: Don't reference skb after sending to VIOS")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/ethernet/ibm/ibmvnic.c | 23 ++++++++++++++++-------
 drivers/net/ethernet/ibm/ibmvnic.h |  3 ++-
 2 files changed, 18 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 0b06fcd2d0f40..b83877cafaf7f 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -2136,7 +2136,7 @@ static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
 		tx_buff = &tx_pool->tx_buff[index];
 		adapter->netdev->stats.tx_packets--;
 		adapter->netdev->stats.tx_bytes -= tx_buff->skb->len;
-		adapter->tx_stats_buffers[queue_num].packets--;
+		adapter->tx_stats_buffers[queue_num].batched_packets--;
 		adapter->tx_stats_buffers[queue_num].bytes -=
			tx_buff->skb->len;
 		dev_kfree_skb_any(tx_buff->skb);
@@ -2228,7 +2228,8 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 	unsigned int tx_map_failed = 0;
 	union sub_crq indir_arr[16];
 	unsigned int tx_dropped = 0;
-	unsigned int tx_packets = 0;
+	unsigned int tx_dpackets = 0;
+	unsigned int tx_bpackets = 0;
 	unsigned int tx_bytes = 0;
 	dma_addr_t data_dma_addr;
 	struct netdev_queue *txq;
@@ -2387,6 +2388,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 		if (lpar_rc != H_SUCCESS)
 			goto tx_err;
 
+		tx_dpackets++;
 		goto early_exit;
 	}
 
@@ -2415,6 +2417,8 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 			goto tx_err;
 	}
 
+	tx_bpackets++;
+
 early_exit:
 	if (atomic_add_return(num_entries, &tx_scrq->used) >=
@@ -2422,7 +2426,6 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 		netif_stop_subqueue(netdev, queue_num);
 	}
 
-	tx_packets++;
 	tx_bytes += skb->len;
 	txq_trans_cond_update(txq);
 	ret = NETDEV_TX_OK;
@@ -2452,10 +2455,11 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 	rcu_read_unlock();
 	netdev->stats.tx_dropped += tx_dropped;
 	netdev->stats.tx_bytes += tx_bytes;
-	netdev->stats.tx_packets += tx_packets;
+	netdev->stats.tx_packets += tx_bpackets + tx_dpackets;
 	adapter->tx_send_failed += tx_send_failed;
 	adapter->tx_map_failed += tx_map_failed;
-	adapter->tx_stats_buffers[queue_num].packets += tx_packets;
+	adapter->tx_stats_buffers[queue_num].batched_packets += tx_bpackets;
+	adapter->tx_stats_buffers[queue_num].direct_packets += tx_dpackets;
 	adapter->tx_stats_buffers[queue_num].bytes += tx_bytes;
 	adapter->tx_stats_buffers[queue_num].dropped_packets += tx_dropped;
 
@@ -3621,7 +3625,10 @@ static void ibmvnic_get_strings(struct net_device *dev, u32 stringset, u8 *data)
 		memcpy(data, ibmvnic_stats[i].name, ETH_GSTRING_LEN);
 
 	for (i = 0; i < adapter->req_tx_queues; i++) {
-		snprintf(data, ETH_GSTRING_LEN, "tx%d_packets", i);
+		snprintf(data, ETH_GSTRING_LEN, "tx%d_batched_packets", i);
+		data += ETH_GSTRING_LEN;
+
+		snprintf(data, ETH_GSTRING_LEN, "tx%d_direct_packets", i);
 		data += ETH_GSTRING_LEN;
 
 		snprintf(data, ETH_GSTRING_LEN, "tx%d_bytes", i);
@@ -3686,7 +3693,9 @@ static void ibmvnic_get_ethtool_stats(struct net_device *dev,
						      (adapter, ibmvnic_stats[i].offset));
 
 	for (j = 0; j < adapter->req_tx_queues; j++) {
-		data[i] = adapter->tx_stats_buffers[j].packets;
+		data[i] = adapter->tx_stats_buffers[j].batched_packets;
+		i++;
+		data[i] = adapter->tx_stats_buffers[j].direct_packets;
 		i++;
 		data[i] = adapter->tx_stats_buffers[j].bytes;
 		i++;
diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
index e5c6ff3d0c472..f923cdab03f57 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.h
+++ b/drivers/net/ethernet/ibm/ibmvnic.h
@@ -213,7 +213,8 @@ struct ibmvnic_statistics {
 
 #define NUM_TX_STATS 3
 struct ibmvnic_tx_queue_stats {
-	u64 packets;
+	u64 batched_packets;
+	u64 direct_packets;
 	u64 bytes;
 	u64 dropped_packets;
 };
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nick Child nnac123@linux.ibm.com
[ Upstream commit bdf5d13aa05ec314d4385b31ac974d6c7e0997c9 ]
Previously, after successfully flushing the xmit buffer to VIOS, the tx_bytes stat was incremented by the length of the skb.
It is invalid to access the skb memory after sending the buffer to the VIOS because, at any point after sending, the VIOS can trigger an interrupt to free this memory. A race between reading skb->len and freeing the skb is possible (especially during LPM) and will result in use-after-free:
==================================================================
BUG: KASAN: slab-use-after-free in ibmvnic_xmit+0x75c/0x1808 [ibmvnic]
Read of size 4 at addr c00000024eb48a70 by task hxecom/14495
<...>
Call Trace:
[c000000118f66cf0] [c0000000018cba6c] dump_stack_lvl+0x84/0xe8 (unreliable)
[c000000118f66d20] [c0000000006f0080] print_report+0x1a8/0x7f0
[c000000118f66df0] [c0000000006f08f0] kasan_report+0x128/0x1f8
[c000000118f66f00] [c0000000006f2868] __asan_load4+0xac/0xe0
[c000000118f66f20] [c0080000046eac84] ibmvnic_xmit+0x75c/0x1808 [ibmvnic]
[c000000118f67340] [c0000000014be168] dev_hard_start_xmit+0x150/0x358
<...>
Freed by task 0:
 kasan_save_stack+0x34/0x68
 kasan_save_track+0x2c/0x50
 kasan_save_free_info+0x64/0x108
 __kasan_mempool_poison_object+0x148/0x2d4
 napi_skb_cache_put+0x5c/0x194
 net_tx_action+0x154/0x5b8
 handle_softirqs+0x20c/0x60c
 do_softirq_own_stack+0x6c/0x88
<...>
The buggy address belongs to the object at c00000024eb48a00 which belongs to the cache skbuff_head_cache of size 224
==================================================================
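The fix below therefore snapshots the length while the skb is still guaranteed to be alive, roughly:

        skblen = skb->len;      /* cache before handing the skb to the VIOS */
        /* ... descriptor is built and flushed to the VIOS here ... */
        tx_bytes += skblen;     /* no skb dereference after the send */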
Fixes: 032c5e82847a ("Driver for IBM System i/p VNIC protocol")
Signed-off-by: Nick Child nnac123@linux.ibm.com
Reviewed-by: Simon Horman horms@kernel.org
Link: https://patch.msgid.link/20250214155233.235559-1-nnac123@linux.ibm.com
Signed-off-by: Jakub Kicinski kuba@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/ethernet/ibm/ibmvnic.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index b83877cafaf7f..44991cae94045 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -2234,6 +2234,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 	dma_addr_t data_dma_addr;
 	struct netdev_queue *txq;
 	unsigned long lpar_rc;
+	unsigned int skblen;
 	union sub_crq tx_crq;
 	unsigned int offset;
 	int num_entries = 1;
@@ -2336,6 +2337,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 	tx_buff->skb = skb;
 	tx_buff->index = bufidx;
 	tx_buff->pool_index = queue_num;
+	skblen = skb->len;
 
 	memset(&tx_crq, 0, sizeof(tx_crq));
 	tx_crq.v1.first = IBMVNIC_CRQ_CMD;
@@ -2426,7 +2428,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 		netif_stop_subqueue(netdev, queue_num);
 	}
 
-	tx_bytes += skb->len;
+	tx_bytes += skblen;
 	txq_trans_cond_update(txq);
 	ret = NETDEV_TX_OK;
 	goto out;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 4ccacf86491d33d2486b62d4d44864d7101b299d ]
Brad Spengler reported the list_del() corruption splat in gtp_net_exit_batch_rtnl(). [0]
Commit eb28fd76c0a0 ("gtp: Destroy device along with udp socket's netns dismantle.") added the for_each_netdev() loop in gtp_net_exit_batch_rtnl() to destroy devices in each netns as done in geneve and ip tunnels.
However, this could trigger ->dellink() twice for the same device during ->exit_batch_rtnl().
Say we have two netns, A and B, and a gtp device B that resides in netns B but whose UDP socket is in netns A.
1. cleanup_net() processes netns A and then B.
2. gtp_net_exit_batch_rtnl() finds the device B while iterating netns A's gn->gtp_dev_list and calls ->dellink().
[ device B is not yet unlinked from netns B as unregister_netdevice_many() has not been called. ]
3. gtp_net_exit_batch_rtnl() finds the device B while iterating netns B's for_each_netdev() and calls ->dellink().
gtp_dellink() cleans up the device's hash table, unlinks the dev from gn->gtp_dev_list, and calls unregister_netdevice_queue().
Basically, calling gtp_dellink() multiple times is fine unless CONFIG_DEBUG_LIST is enabled.
Let's remove for_each_netdev() in gtp_net_exit_batch_rtnl() and delegate the destruction to default_device_exit_batch() as done in bareudp.
[0]:
list_del corruption, ffff8880aaa62c00->next (autoslab_size_M_dev_P_net_core_dev_11127_8_1328_8_S_4096_A_64_n_139+0xc00/0x1000 [slab object]) is LIST_POISON1 (ffffffffffffff02) (prev is 0xffffffffffffff04)
kernel BUG at lib/list_debug.c:58!
Oops: invalid opcode: 0000 [#1] PREEMPT SMP KASAN
CPU: 1 UID: 0 PID: 1804 Comm: kworker/u8:7 Tainted: G T 6.12.13-grsec-full-20250211091339 #1
Tainted: [T]=RANDSTRUCT
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
Workqueue: netns cleanup_net
RIP: 0010:[<ffffffff84947381>] __list_del_entry_valid_or_report+0x141/0x200 lib/list_debug.c:58
Code: c2 76 91 31 c0 e8 9f b1 f7 fc 0f 0b 4d 89 f0 48 c7 c1 02 ff ff ff 48 89 ea 48 89 ee 48 c7 c7 e0 c2 76 91 31 c0 e8 7f b1 f7 fc <0f> 0b 4d 89 e8 48 c7 c1 04 ff ff ff 48 89 ea 48 89 ee 48 c7 c7 60
RSP: 0018:fffffe8040b4fbd0 EFLAGS: 00010283
RAX: 00000000000000cc RBX: dffffc0000000000 RCX: ffffffff818c4054
RDX: ffffffff84947381 RSI: ffffffff818d1512 RDI: 0000000000000000
RBP: ffff8880aaa62c00 R08: 0000000000000001 R09: fffffbd008169f32
R10: fffffe8040b4f997 R11: 0000000000000001 R12: a1988d84f24943e4
R13: ffffffffffffff02 R14: ffffffffffffff04 R15: ffff8880aaa62c08
RBX: kasan shadow of 0x0
RCX: __wake_up_klogd.part.0+0x74/0xe0 kernel/printk/printk.c:4554
RDX: __list_del_entry_valid_or_report+0x141/0x200 lib/list_debug.c:58
RSI: vprintk+0x72/0x100 kernel/printk/printk_safe.c:71
RBP: autoslab_size_M_dev_P_net_core_dev_11127_8_1328_8_S_4096_A_64_n_139+0xc00/0x1000 [slab object]
RSP: process kstack fffffe8040b4fbd0+0x7bd0/0x8000 [kworker/u8:7+netns 1804 ]
R09: kasan shadow of process kstack fffffe8040b4f990+0x7990/0x8000 [kworker/u8:7+netns 1804 ]
R10: process kstack fffffe8040b4f997+0x7997/0x8000 [kworker/u8:7+netns 1804 ]
R15: autoslab_size_M_dev_P_net_core_dev_11127_8_1328_8_S_4096_A_64_n_139+0xc08/0x1000 [slab object]
FS: 0000000000000000(0000) GS:ffff888116000000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000748f5372c000 CR3: 0000000015408000 CR4: 00000000003406f0
shadow CR4: 00000000003406f0
Stack:
 0000000000000000 ffffffff8a0c35e7 ffffffff8a0c3603 ffff8880aaa62c00
 ffff8880aaa62c00 0000000000000004 ffff88811145311c 0000000000000005
 0000000000000001 ffff8880aaa62000 fffffe8040b4fd40 ffffffff8a0c360d
Call Trace:
<TASK>
[<ffffffff8a0c360d>] __list_del_entry_valid include/linux/list.h:131 [inline] fffffe8040b4fc28
[<ffffffff8a0c360d>] __list_del_entry include/linux/list.h:248 [inline] fffffe8040b4fc28
[<ffffffff8a0c360d>] list_del include/linux/list.h:262 [inline] fffffe8040b4fc28
[<ffffffff8a0c360d>] gtp_dellink+0x16d/0x360 drivers/net/gtp.c:1557 fffffe8040b4fc28
[<ffffffff8a0d0404>] gtp_net_exit_batch_rtnl+0x124/0x2c0 drivers/net/gtp.c:2495 fffffe8040b4fc88
[<ffffffff8e705b24>] cleanup_net+0x5a4/0xbe0 net/core/net_namespace.c:635 fffffe8040b4fcd0
[<ffffffff81754c97>] process_one_work+0xbd7/0x2160 kernel/workqueue.c:3326 fffffe8040b4fd88
[<ffffffff81757195>] process_scheduled_works kernel/workqueue.c:3407 [inline] fffffe8040b4fec0
[<ffffffff81757195>] worker_thread+0x6b5/0xfa0 kernel/workqueue.c:3488 fffffe8040b4fec0
[<ffffffff817782a0>] kthread+0x360/0x4c0 kernel/kthread.c:397 fffffe8040b4ff78
[<ffffffff814d8594>] ret_from_fork+0x74/0xe0 arch/x86/kernel/process.c:172 fffffe8040b4ffb8
[<ffffffff8110f509>] ret_from_fork_asm+0x29/0xc0 arch/x86/entry/entry_64.S:399 fffffe8040b4ffe8
</TASK>
Modules linked in:
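With the loop removed, the exit handler reduces to walking only the per-netns gtp list (a sketch matching the diff below); devices that merely moved into the dying netns are torn down once, later, by default_device_exit_batch():

        list_for_each_entry(net, net_list, exit_list) {
                struct gtp_net *gn = net_generic(net, gtp_net_id);
                struct gtp_dev *gtp, *gtp_next;

                /* each device sits on exactly one gn->gtp_dev_list,
                 * so ->dellink() now runs at most once per device
                 */
                list_for_each_entry_safe(gtp, gtp_next, &gn->gtp_dev_list, list)
                        gtp_dellink(gtp->dev, dev_to_kill);
        }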
Fixes: eb28fd76c0a0 ("gtp: Destroy device along with udp socket's netns dismantle.")
Reported-by: Brad Spengler spender@grsecurity.net
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com
Link: https://patch.msgid.link/20250217203705.40342-2-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski kuba@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/gtp.c | 5 -----
 1 file changed, 5 deletions(-)
diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
index 0de3dcd07cb7e..797886f10868a 100644
--- a/drivers/net/gtp.c
+++ b/drivers/net/gtp.c
@@ -1894,11 +1894,6 @@ static void __net_exit gtp_net_exit_batch_rtnl(struct list_head *net_list,
 	list_for_each_entry(net, net_list, exit_list) {
 		struct gtp_net *gn = net_generic(net, gtp_net_id);
 		struct gtp_dev *gtp, *gtp_next;
-		struct net_device *dev;
-
-		for_each_netdev(net, dev)
-			if (dev->rtnl_link_ops == &gtp_link_ops)
-				gtp_dellink(dev, dev_to_kill);
 
 		list_for_each_entry_safe(gtp, gtp_next, &gn->gtp_dev_list, list)
 			gtp_dellink(gtp->dev, dev_to_kill);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 62fab6eef61f245dc8797e3a6a5b890ef40e8628 ]
As explained in the previous patch, iterating for_each_netdev() and gn->geneve_list during ->exit_batch_rtnl() could trigger ->dellink() twice for the same device.
If CONFIG_DEBUG_LIST is enabled, we will see a list_del() corruption splat in the 2nd call of geneve_dellink().
Let's remove for_each_netdev() in geneve_destroy_tunnels() and delegate that part to default_device_exit_batch().
Fixes: 9593172d93b9 ("geneve: Fix use-after-free in geneve_find_dev().")
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com
Link: https://patch.msgid.link/20250217203705.40342-3-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski kuba@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/geneve.c | 7 -------
 1 file changed, 7 deletions(-)
diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
index 15b85eb3daa19..3dd5c69b05cb7 100644
--- a/drivers/net/geneve.c
+++ b/drivers/net/geneve.c
@@ -1965,14 +1965,7 @@ static void geneve_destroy_tunnels(struct net *net, struct list_head *head)
 {
 	struct geneve_net *gn = net_generic(net, geneve_net_id);
 	struct geneve_dev *geneve, *next;
-	struct net_device *dev, *aux;
 
-	/* gather any geneve devices that were moved into this ns */
-	for_each_netdev_safe(net, dev, aux)
-		if (dev->rtnl_link_ops == &geneve_link_ops)
-			geneve_dellink(dev, head);
-
-	/* now gather any other geneve devices that were created in this ns */
 	list_for_each_entry_safe(geneve, next, &gn->geneve_list, next)
 		geneve_dellink(geneve->dev, head);
 }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Cong Wang xiyou.wangcong@gmail.com
[ Upstream commit 3e5796862c692ea608d96f0a1437f9290f44953a ]
This patch fixes a bug in TC flower filter where rules combining a specific destination port with a source port range weren't working correctly.
The specific case was when users tried to configure rules like:
  tc filter add dev ens38 ingress protocol ip flower ip_proto udp \
     dst_port 5000 src_port 2000-3000 action drop
The root cause was in the flow dissector code. While both FLOW_DISSECTOR_KEY_PORTS and FLOW_DISSECTOR_KEY_PORTS_RANGE flags were being set correctly in the classifier, the __skb_flow_dissect_ports() function was only populating one of them: whichever came first in the enum check. This meant that when the code needed both a specific port and a port range, one of them would be left as 0, causing the filter to not match packets as expected.
Fix it by removing the either/or logic and instead checking and populating both key types independently when they're in use.
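Condensed, the dissection path goes from either/or to both/and. A sketch of the resulting logic in __skb_flow_dissect_ports(), per the diff below:

        ports = __skb_flow_get_ports(skb, nhoff, ip_proto, data, hlen);

        /* fill every key type the classifier asked for: a rule mixing
         * a fixed port with a port range needs both keys populated
         */
        if (key_ports)
                key_ports->ports = ports;
        if (key_ports_range)
                key_ports_range->tp.ports = ports;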
Fixes: 8ffb055beae5 ("cls_flower: Fix the behavior using port ranges with hw-offload")
Reported-by: Qiang Zhang dtzq01@gmail.com
Closes: https://lore.kernel.org/netdev/CAPx+-5uvFxkhkz4=j_Xuwkezjn9U6kzKTD5jz4tZ9msS...
Cc: Yoshiki Komachi komachi.yoshiki@gmail.com
Cc: Jamal Hadi Salim jhs@mojatatu.com
Cc: Jiri Pirko jiri@resnulli.us
Signed-off-by: Cong Wang xiyou.wangcong@gmail.com
Reviewed-by: Ido Schimmel idosch@nvidia.com
Link: https://patch.msgid.link/20250218043210.732959-2-xiyou.wangcong@gmail.com
Signed-off-by: Jakub Kicinski kuba@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 net/core/flow_dissector.c | 31 +++++++++++++++++++------------
 1 file changed, 19 insertions(+), 12 deletions(-)
diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
index de17f13232381..41ad5c1cccf64 100644
--- a/net/core/flow_dissector.c
+++ b/net/core/flow_dissector.c
@@ -751,23 +751,30 @@ __skb_flow_dissect_ports(const struct sk_buff *skb,
			 void *target_container, const void *data,
			 int nhoff, u8 ip_proto, int hlen)
 {
-	enum flow_dissector_key_id dissector_ports = FLOW_DISSECTOR_KEY_MAX;
-	struct flow_dissector_key_ports *key_ports;
+	struct flow_dissector_key_ports_range *key_ports_range = NULL;
+	struct flow_dissector_key_ports *key_ports = NULL;
+	__be32 ports;
 
 	if (dissector_uses_key(flow_dissector, FLOW_DISSECTOR_KEY_PORTS))
-		dissector_ports = FLOW_DISSECTOR_KEY_PORTS;
-	else if (dissector_uses_key(flow_dissector,
-				    FLOW_DISSECTOR_KEY_PORTS_RANGE))
-		dissector_ports = FLOW_DISSECTOR_KEY_PORTS_RANGE;
+		key_ports = skb_flow_dissector_target(flow_dissector,
+						      FLOW_DISSECTOR_KEY_PORTS,
+						      target_container);
 
-	if (dissector_ports == FLOW_DISSECTOR_KEY_MAX)
+	if (dissector_uses_key(flow_dissector, FLOW_DISSECTOR_KEY_PORTS_RANGE))
+		key_ports_range = skb_flow_dissector_target(flow_dissector,
+							    FLOW_DISSECTOR_KEY_PORTS_RANGE,
+							    target_container);
+
+	if (!key_ports && !key_ports_range)
 		return;
 
-	key_ports = skb_flow_dissector_target(flow_dissector,
-					      dissector_ports,
-					      target_container);
-	key_ports->ports = __skb_flow_get_ports(skb, nhoff, ip_proto,
-						data, hlen);
+	ports = __skb_flow_get_ports(skb, nhoff, ip_proto, data, hlen);
+
+	if (key_ports)
+		key_ports->ports = ports;
+
+	if (key_ports_range)
+		key_ports_range->tp.ports = ports;
 }
 
 static void
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Cong Wang xiyou.wangcong@gmail.com
[ Upstream commit 69ab34f705fbfabcace64b5d53bb7a4450fac875 ]
Fix how port range keys are handled in __skb_flow_bpf_to_target() by:
- Separating PORTS and PORTS_RANGE key handling
- Using correct key_ports_range structure for range keys
- Properly initializing both key types independently
This ensures port range information is correctly stored in its dedicated structure rather than incorrectly using the regular ports key structure.
Fixes: 59fb9b62fb6c ("flow_dissector: Fix to use new variables for port ranges in bpf hook")
Reported-by: Qiang Zhang dtzq01@gmail.com
Closes: https://lore.kernel.org/netdev/CAPx+-5uvFxkhkz4=j_Xuwkezjn9U6kzKTD5jz4tZ9msS...
Cc: Yoshiki Komachi komachi.yoshiki@gmail.com
Cc: Jamal Hadi Salim jhs@mojatatu.com
Cc: Jiri Pirko jiri@resnulli.us
Signed-off-by: Cong Wang xiyou.wangcong@gmail.com
Link: https://patch.msgid.link/20250218043210.732959-4-xiyou.wangcong@gmail.com
Signed-off-by: Jakub Kicinski kuba@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 net/core/flow_dissector.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
index 41ad5c1cccf64..5f50e182acd57 100644
--- a/net/core/flow_dissector.c
+++ b/net/core/flow_dissector.c
@@ -829,6 +829,7 @@ static void __skb_flow_bpf_to_target(const struct bpf_flow_keys *flow_keys,
				     struct flow_dissector *flow_dissector,
				     void *target_container)
 {
+	struct flow_dissector_key_ports_range *key_ports_range = NULL;
 	struct flow_dissector_key_ports *key_ports = NULL;
 	struct flow_dissector_key_control *key_control;
 	struct flow_dissector_key_basic *key_basic;
@@ -873,20 +874,21 @@ static void __skb_flow_bpf_to_target(const struct bpf_flow_keys *flow_keys,
 		key_control->addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
 	}
 
-	if (dissector_uses_key(flow_dissector, FLOW_DISSECTOR_KEY_PORTS))
+	if (dissector_uses_key(flow_dissector, FLOW_DISSECTOR_KEY_PORTS)) {
 		key_ports = skb_flow_dissector_target(flow_dissector,
						      FLOW_DISSECTOR_KEY_PORTS,
						      target_container);
-	else if (dissector_uses_key(flow_dissector,
-				    FLOW_DISSECTOR_KEY_PORTS_RANGE))
-		key_ports = skb_flow_dissector_target(flow_dissector,
-						      FLOW_DISSECTOR_KEY_PORTS_RANGE,
-						      target_container);
-
-	if (key_ports) {
 		key_ports->src = flow_keys->sport;
 		key_ports->dst = flow_keys->dport;
 	}
+	if (dissector_uses_key(flow_dissector,
+			       FLOW_DISSECTOR_KEY_PORTS_RANGE)) {
+		key_ports_range = skb_flow_dissector_target(flow_dissector,
+							    FLOW_DISSECTOR_KEY_PORTS_RANGE,
+							    target_container);
+		key_ports_range->tp.src = flow_keys->sport;
+		key_ports_range->tp.dst = flow_keys->dport;
+	}
 
 	if (dissector_uses_key(flow_dissector, FLOW_DISSECTOR_KEY_FLOW_LABEL)) {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Breno Leitao leitao@debian.org
[ Upstream commit 4b5a28b38c4a0106c64416a1b2042405166b26ce ]
Add dedicated helper for finding devices by hardware address when holding rtnl_lock, similar to existing dev_getbyhwaddr_rcu(). This prevents PROVE_LOCKING warnings when rtnl_lock is held but RCU read lock is not.
Extract common address comparison logic into dev_addr_cmp().
The context for this change can be found in the following discussion:
Link: https://lore.kernel.org/all/20250206-scarlet-ermine-of-improvement-1fcac5@le...
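As a usage sketch (hypothetical caller, not part of the patch), the new helper is meant for paths that already run under RTNL:

        /* look up a device by MAC while holding RTNL; note that no
         * refcount is taken, so the pointer is only valid under the lock
         */
        rtnl_lock();
        dev = dev_getbyhwaddr(net, ARPHRD_ETHER, mac);
        if (dev)
                netdev_info(dev, "matched hardware address\n");
        rtnl_unlock();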
Cc: kuniyu@amazon.com
Cc: ushankar@purestorage.com
Suggested-by: Eric Dumazet edumazet@google.com
Signed-off-by: Breno Leitao leitao@debian.org
Reviewed-by: Kuniyuki Iwashima kuniyu@amazon.com
Link: https://patch.msgid.link/20250218-arm_fix_selftest-v5-1-d3d6892db9e1@debian....
Signed-off-by: Jakub Kicinski kuba@kernel.org
Stable-dep-of: 4eae0ee0f1e6 ("arp: switch to dev_getbyhwaddr() in arp_req_set_public()")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 include/linux/netdevice.h |  2 ++
 net/core/dev.c            | 37 ++++++++++++++++++++++++++++++++++---
 2 files changed, 36 insertions(+), 3 deletions(-)
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index d0b4920dee730..f44701b82ea80 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3011,6 +3011,8 @@ static inline struct net_device *first_net_device_rcu(struct net *net)
 }
 
 int netdev_boot_setup_check(struct net_device *dev);
+struct net_device *dev_getbyhwaddr(struct net *net, unsigned short type,
+				   const char *hwaddr);
 struct net_device *dev_getbyhwaddr_rcu(struct net *net, unsigned short type,
				       const char *hwaddr);
 struct net_device *dev_getfirstbyhwtype(struct net *net, unsigned short type);
diff --git a/net/core/dev.c b/net/core/dev.c
index 90559cb668039..212a909b48405 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -921,6 +921,12 @@ int netdev_get_name(struct net *net, char *name, int ifindex)
 	return ret;
 }
 
+static bool dev_addr_cmp(struct net_device *dev, unsigned short type,
+			 const char *ha)
+{
+	return dev->type == type && !memcmp(dev->dev_addr, ha, dev->addr_len);
+}
+
 /**
  * dev_getbyhwaddr_rcu - find a device by its hardware address
  * @net: the applicable net namespace
@@ -929,7 +935,7 @@ int netdev_get_name(struct net *net, char *name, int ifindex)
  *
  * Search for an interface by MAC address. Returns NULL if the device
  * is not found or a pointer to the device.
- * The caller must hold RCU or RTNL.
+ * The caller must hold RCU.
  * The returned device has not had its ref count increased
  * and the caller must therefore be careful about locking
  *
@@ -941,14 +947,39 @@ struct net_device *dev_getbyhwaddr_rcu(struct net *net, unsigned short type,
 	struct net_device *dev;
 
 	for_each_netdev_rcu(net, dev)
-		if (dev->type == type &&
-		    !memcmp(dev->dev_addr, ha, dev->addr_len))
+		if (dev_addr_cmp(dev, type, ha))
 			return dev;
 
 	return NULL;
 }
 EXPORT_SYMBOL(dev_getbyhwaddr_rcu);
 
+/**
+ * dev_getbyhwaddr() - find a device by its hardware address
+ * @net: the applicable net namespace
+ * @type: media type of device
+ * @ha: hardware address
+ *
+ * Similar to dev_getbyhwaddr_rcu(), but the owner needs to hold
+ * rtnl_lock.
+ *
+ * Context: rtnl_lock() must be held.
+ * Return: pointer to the net_device, or NULL if not found
+ */
+struct net_device *dev_getbyhwaddr(struct net *net, unsigned short type,
+				   const char *ha)
+{
+	struct net_device *dev;
+
+	ASSERT_RTNL();
+	for_each_netdev(net, dev)
+		if (dev_addr_cmp(dev, type, ha))
+			return dev;
+
+	return NULL;
+}
+EXPORT_SYMBOL(dev_getbyhwaddr);
+
 struct net_device *dev_getfirstbyhwtype(struct net *net, unsigned short type)
 {
 	struct net_device *dev, *ret = NULL;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Breno Leitao leitao@debian.org
[ Upstream commit 4eae0ee0f1e6256d0b0b9dd6e72f1d9cf8f72e08 ]
The arp_req_set_public() function is called with the rtnl lock held, which provides enough synchronization protection. This makes the RCU variant of dev_getbyhwaddr() unnecessary. Switch to using the simpler dev_getbyhwaddr() function since we already have the required rtnl locking.
This change helps maintain consistency in the networking code by using the appropriate helper function for the existing locking context. Since we're not holding the RCU read lock in arp_req_set_public(), the existing code could trigger false-positive locking warnings.
Fixes: 941666c2e3e0 ("net: RCU conversion of dev_getbyhwaddr() and arp_ioctl()")
Suggested-by: Kuniyuki Iwashima kuniyu@amazon.com
Reviewed-by: Kuniyuki Iwashima kuniyu@amazon.com
Signed-off-by: Breno Leitao leitao@debian.org
Link: https://patch.msgid.link/20250218-arm_fix_selftest-v5-2-d3d6892db9e1@debian....
Signed-off-by: Jakub Kicinski kuba@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 net/ipv4/arp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c
index 8f9b5568f1dc1..50e2b4939d8e9 100644
--- a/net/ipv4/arp.c
+++ b/net/ipv4/arp.c
@@ -1030,7 +1030,7 @@ static int arp_req_set_public(struct net *net, struct arpreq *r,
 	if (mask && mask != htonl(0xFFFFFFFF))
 		return -EINVAL;
 	if (!dev && (r->arp_flags & ATF_COM)) {
-		dev = dev_getbyhwaddr_rcu(net, r->arp_ha.sa_family,
+		dev = dev_getbyhwaddr(net, r->arp_ha.sa_family,
				      r->arp_ha.sa_data);
 		if (!dev)
 			return -ENODEV;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nick Hu nick.hu@sifive.com
[ Upstream commit a370295367b55662a32a4be92565fe72a5aa79bb ]
The external PHY will undergo a soft reset twice during the resume process when it wakes up from suspend. The first reset occurs when the axienet driver calls phylink_of_phy_connect(), and the second occurs when mdio_bus_phy_resume() invokes phy_init_hw(). The second soft reset of the external PHY does not reinitialize the internal PHY, which causes issues with the internal PHY and results in the PHY link being down. To prevent this, set the mac_managed_pm flag so that mdio_bus_phy_resume() is skipped.
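The fix is the one-line flag assignment shown in the diff below; in context, the probe path ends up as (sketch):

        lp->phylink_config.dev = &ndev->dev;
        lp->phylink_config.type = PHYLINK_NETDEV;
        /* MAC driver owns PHY power management: mdio_bus_phy_resume()
         * is skipped, so the external PHY is reset only once on resume
         */
        lp->phylink_config.mac_managed_pm = true;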
Fixes: a129b41fe0a8 ("Revert "net: phy: dp83867: perform soft reset and retain established link"")
Signed-off-by: Nick Hu nick.hu@sifive.com
Reviewed-by: Jacob Keller jacob.e.keller@intel.com
Link: https://patch.msgid.link/20250217055843.19799-1-nick.hu@sifive.com
Signed-off-by: Paolo Abeni pabeni@redhat.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/ethernet/xilinx/xilinx_axienet_main.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
index a957721581761..f227ed8e99345 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
@@ -2159,6 +2159,7 @@ static int axienet_probe(struct platform_device *pdev)
 
 	lp->phylink_config.dev = &ndev->dev;
 	lp->phylink_config.type = PHYLINK_NETDEV;
+	lp->phylink_config.mac_managed_pm = true;
 	lp->phylink_config.mac_capabilities = MAC_SYM_PAUSE | MAC_ASYM_PAUSE |
		MAC_10FD | MAC_100FD | MAC_1000FD;
 
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sabrina Dubroca sd@queasysnail.net
[ Upstream commit 9b6412e6979f6f9e0632075f8f008937b5cd4efd ]
Xiumei reported hitting the WARN in xfrm6_tunnel_net_exit while running tests that boil down to:
 - create a pair of netns
 - run a basic TCP test over ipcomp6
 - delete the pair of netns
The xfrm_state found on spi_byaddr was not deleted at the time we delete the netns, because we still have a reference on it. This lingering reference comes from a secpath (which holds a ref on the xfrm_state), which is still attached to an skb. This skb is not leaked, it ends up on sk_receive_queue and then gets defer-free'd by skb_attempt_defer_free.
The problem happens when we defer freeing an skb (push it on one CPU's defer_list), and don't flush that list before the netns is deleted. In that case, we still have a reference on the xfrm_state that we don't expect at this point.
We already drop the skb's dst in the TCP receive path when it's no longer needed, so let's also drop the secpath. At this point, tcp_filter has already called into the LSM hooks that may require the secpath, so it should not be needed anymore. However, in some of those places, the MPTCP extension has just been attached to the skb, so we cannot simply drop all extensions.
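The patch below captures both resets in one helper and adds a queueing wrapper that asserts nothing was left behind; conceptually:

static inline void tcp_cleanup_skb(struct sk_buff *skb)
{
        skb_dst_drop(skb);      /* dst is no longer needed at this point */
        secpath_reset(skb);     /* drop the ref pinning the xfrm_state */
}

tcp_add_receive_queue() then warns (under DEBUG_NET) if an skb still carries a dst or secpath when it is queued on sk_receive_queue.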
Fixes: 68822bdf76f1 ("net: generalize skb freeing deferral to per-cpu lists")
Reported-by: Xiumei Mu xmu@redhat.com
Signed-off-by: Sabrina Dubroca sd@queasysnail.net
Reviewed-by: Eric Dumazet edumazet@google.com
Link: https://patch.msgid.link/5055ba8f8f72bdcb602faa299faca73c280b7735.1739743613...
Signed-off-by: Paolo Abeni pabeni@redhat.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 include/net/tcp.h       | 14 ++++++++++++++
 net/ipv4/tcp_fastopen.c |  4 ++--
 net/ipv4/tcp_input.c    |  8 ++++----
 net/ipv4/tcp_ipv4.c     |  2 +-
 4 files changed, 21 insertions(+), 7 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index a770210fda9bc..14a00cdd31f42 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -40,6 +40,7 @@
 #include <net/inet_ecn.h>
 #include <net/dst.h>
 #include <net/mptcp.h>
+#include <net/xfrm.h>
 
 #include <linux/seq_file.h>
 #include <linux/memcontrol.h>
@@ -640,6 +641,19 @@ void tcp_fin(struct sock *sk);
 void tcp_check_space(struct sock *sk);
 void tcp_sack_compress_send_ack(struct sock *sk);
 
+static inline void tcp_cleanup_skb(struct sk_buff *skb)
+{
+	skb_dst_drop(skb);
+	secpath_reset(skb);
+}
+
+static inline void tcp_add_receive_queue(struct sock *sk, struct sk_buff *skb)
+{
+	DEBUG_NET_WARN_ON_ONCE(skb_dst(skb));
+	DEBUG_NET_WARN_ON_ONCE(secpath_exists(skb));
+	__skb_queue_tail(&sk->sk_receive_queue, skb);
+}
+
 /* tcp_timer.c */
 void tcp_init_xmit_timers(struct sock *);
 static inline void tcp_clear_xmit_timers(struct sock *sk)
diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
index d0b7ded591bd4..cb01c770d8cf5 100644
--- a/net/ipv4/tcp_fastopen.c
+++ b/net/ipv4/tcp_fastopen.c
@@ -178,7 +178,7 @@ void tcp_fastopen_add_skb(struct sock *sk, struct sk_buff *skb)
 	if (!skb)
 		return;
 
-	skb_dst_drop(skb);
+	tcp_cleanup_skb(skb);
 	/* segs_in has been initialized to 1 in tcp_create_openreq_child().
	 * Hence, reset segs_in to 0 before calling tcp_segs_in()
	 * to avoid double counting. Also, tcp_segs_in() expects
@@ -195,7 +195,7 @@ void tcp_fastopen_add_skb(struct sock *sk, struct sk_buff *skb)
 	TCP_SKB_CB(skb)->tcp_flags &= ~TCPHDR_SYN;
 
 	tp->rcv_nxt = TCP_SKB_CB(skb)->end_seq;
-	__skb_queue_tail(&sk->sk_receive_queue, skb);
+	tcp_add_receive_queue(sk, skb);
 	tp->syn_data_acked = 1;
 
 	/* u64_stats_update_begin(&tp->syncp) not needed here,
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 2379ee5511645..3b81f6df829ff 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -4836,7 +4836,7 @@ static void tcp_ofo_queue(struct sock *sk)
 		tcp_rcv_nxt_update(tp, TCP_SKB_CB(skb)->end_seq);
 		fin = TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN;
 		if (!eaten)
-			__skb_queue_tail(&sk->sk_receive_queue, skb);
+			tcp_add_receive_queue(sk, skb);
 		else
 			kfree_skb_partial(skb, fragstolen);
 
@@ -5027,7 +5027,7 @@ static int __must_check tcp_queue_rcv(struct sock *sk, struct sk_buff *skb,
				       skb, fragstolen)) ? 1 : 0;
 	tcp_rcv_nxt_update(tcp_sk(sk), TCP_SKB_CB(skb)->end_seq);
 	if (!eaten) {
-		__skb_queue_tail(&sk->sk_receive_queue, skb);
+		tcp_add_receive_queue(sk, skb);
 		skb_set_owner_r(skb, sk);
 	}
 	return eaten;
@@ -5110,7 +5110,7 @@ static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
 			__kfree_skb(skb);
 			return;
 		}
-		skb_dst_drop(skb);
+		tcp_cleanup_skb(skb);
 		__skb_pull(skb, tcp_hdr(skb)->doff * 4);
 
 	reason = SKB_DROP_REASON_NOT_SPECIFIED;
@@ -6041,7 +6041,7 @@ void tcp_rcv_established(struct sock *sk, struct sk_buff *skb)
 			NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPHPHITS);
 
 			/* Bulk data transfer: receiver */
-			skb_dst_drop(skb);
+			tcp_cleanup_skb(skb);
 			__skb_pull(skb, tcp_header_len);
 			eaten = tcp_queue_rcv(sk, skb, &fragstolen);
 
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 805b1a9eca1c5..7647f1ec0584e 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1791,7 +1791,7 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb,
	 */
 	skb_condense(skb);
 
-	skb_dst_drop(skb);
+	tcp_cleanup_skb(skb);
 
 	if (unlikely(tcp_checksum_complete(skb))) {
 		bh_unlock_sock(sk);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tomi Valkeinen tomi.valkeinen@ideasonboard.com
[ Upstream commit 576d96c5c896221b5bc8feae473739469a92e144 ]
K2G display controller does not support soft reset, but we can do the most important steps manually: mask the IRQs and disable the VPs.
Reviewed-by: Aradhya Bhatia a-bhatia1@ti.com
Link: https://lore.kernel.org/r/20231109-tidss-probe-v2-7-ac91b5ea35c0@ideasonboar...
Signed-off-by: Tomi Valkeinen tomi.valkeinen@ideasonboard.com
Stable-dep-of: a9a73f2661e6 ("drm/tidss: Fix race condition while handling interrupt registers")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/gpu/drm/tidss/tidss_dispc.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/tidss/tidss_dispc.c b/drivers/gpu/drm/tidss/tidss_dispc.c
index c986d432af507..d3e57e6de5dbb 100644
--- a/drivers/gpu/drm/tidss/tidss_dispc.c
+++ b/drivers/gpu/drm/tidss/tidss_dispc.c
@@ -2655,14 +2655,28 @@ static void dispc_init_errata(struct dispc_device *dispc)
 	}
 }
 
+/*
+ * K2G display controller does not support soft reset, so we do a basic manual
+ * reset here: make sure the IRQs are masked and VPs are disabled.
+ */
+static void dispc_softreset_k2g(struct dispc_device *dispc)
+{
+	dispc_set_irqenable(dispc, 0);
+	dispc_read_and_clear_irqstatus(dispc);
+
+	for (unsigned int vp_idx = 0; vp_idx < dispc->feat->num_vps; ++vp_idx)
+		VP_REG_FLD_MOD(dispc, vp_idx, DISPC_VP_CONTROL, 0, 0, 0);
+}
+
 static int dispc_softreset(struct dispc_device *dispc)
 {
 	u32 val;
 	int ret = 0;
 
-	/* K2G display controller does not support soft reset */
-	if (dispc->feat->subrev == DISPC_K2G)
+	if (dispc->feat->subrev == DISPC_K2G) {
+		dispc_softreset_k2g(dispc);
 		return 0;
+	}
 
 	/* Soft reset */
 	REG_FLD_MOD(dispc, DSS_SYSCONFIG, 1, 1, 1);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Devarsh Thakkar devarsht@ti.com
[ Upstream commit a9a73f2661e6f625d306c9b0ef082e4593f45a21 ]
The driver has a spinlock protecting the irq_masks field and the irq enable registers. However, it misses protecting the irq status registers, which can lead to races.
Take the spinlock when accessing irqstatus too.
Fixes: 32a1795f57ee ("drm/tidss: New driver for TI Keystone platform Display SubSystem")
Cc: stable@vger.kernel.org
Signed-off-by: Devarsh Thakkar devarsht@ti.com
[Tomi: updated the desc]
Reviewed-by: Jonathan Cormier jcormier@criticallink.com
Tested-by: Jonathan Cormier jcormier@criticallink.com
Reviewed-by: Aradhya Bhatia aradhya.bhatia@linux.dev
Signed-off-by: Tomi Valkeinen tomi.valkeinen@ideasonboard.com
Link: https://patchwork.freedesktop.org/patch/msgid/20241021-tidss-irq-fix-v1-6-82...
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/gpu/drm/tidss/tidss_dispc.c | 4 ++++
 drivers/gpu/drm/tidss/tidss_irq.c   | 2 ++
 2 files changed, 6 insertions(+)
diff --git a/drivers/gpu/drm/tidss/tidss_dispc.c b/drivers/gpu/drm/tidss/tidss_dispc.c
index d3e57e6de5dbb..38b2ae0d7ec1d 100644
--- a/drivers/gpu/drm/tidss/tidss_dispc.c
+++ b/drivers/gpu/drm/tidss/tidss_dispc.c
@@ -2661,8 +2661,12 @@ static void dispc_init_errata(struct dispc_device *dispc)
  */
 static void dispc_softreset_k2g(struct dispc_device *dispc)
 {
+	unsigned long flags;
+
+	spin_lock_irqsave(&dispc->tidss->wait_lock, flags);
 	dispc_set_irqenable(dispc, 0);
 	dispc_read_and_clear_irqstatus(dispc);
+	spin_unlock_irqrestore(&dispc->tidss->wait_lock, flags);
 
 	for (unsigned int vp_idx = 0; vp_idx < dispc->feat->num_vps; ++vp_idx)
 		VP_REG_FLD_MOD(dispc, vp_idx, DISPC_VP_CONTROL, 0, 0, 0);
diff --git a/drivers/gpu/drm/tidss/tidss_irq.c b/drivers/gpu/drm/tidss/tidss_irq.c
index 0c681c7600bcb..f13c7e434f8ed 100644
--- a/drivers/gpu/drm/tidss/tidss_irq.c
+++ b/drivers/gpu/drm/tidss/tidss_irq.c
@@ -60,7 +60,9 @@ static irqreturn_t tidss_irq_handler(int irq, void *arg)
 	unsigned int id;
 	dispc_irq_t irqstatus;
 
+	spin_lock(&tidss->wait_lock);
 	irqstatus = dispc_read_and_clear_irqstatus(tidss->dispc);
+	spin_unlock(&tidss->wait_lock);
 
 	for (id = 0; id < tidss->num_crtcs; id++) {
 		struct drm_crtc *crtc = tidss->crtcs[id];
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tomi Valkeinen tomi.valkeinen+renesas@ideasonboard.com
[ Upstream commit 6389e616fae8a101ce00068f7690461ab57b29d8 ]
The driver checks bit 16 (the CLOCKSET1_LOCK define) in the CLOCKSET1 register when waiting for the PPI clock. However, the right bit to check is bit 17 (the CLOCKSET1_LOCK_PHY define). Not only that, but there is nothing in the documentation about bit 16 for either V3U or V4H.
So, fix the check to use bit 17, and drop the define for bit 16.
Fixes: 155358310f01 ("drm: rcar-du: Add R-Car DSI driver")
Fixes: 11696c5e8924 ("drm: Place Renesas drivers in a separate dir")
Cc: stable@vger.kernel.org
Signed-off-by: Tomi Valkeinen tomi.valkeinen+renesas@ideasonboard.com
Reviewed-by: Laurent Pinchart laurent.pinchart+renesas@ideasonboard.com
Tested-by: Geert Uytterhoeven geert+renesas@glider.be
Signed-off-by: Tomi Valkeinen tomi.valkeinen@ideasonboard.com
Link: https://patchwork.freedesktop.org/patch/msgid/20241217-rcar-gh-dsi-v5-1-e774...
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/gpu/drm/rcar-du/rcar_mipi_dsi.c      | 2 +-
 drivers/gpu/drm/rcar-du/rcar_mipi_dsi_regs.h | 1 -
 2 files changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/rcar-du/rcar_mipi_dsi.c b/drivers/gpu/drm/rcar-du/rcar_mipi_dsi.c
index a7f2b7f66a176..9ec9c43971dfb 100644
--- a/drivers/gpu/drm/rcar-du/rcar_mipi_dsi.c
+++ b/drivers/gpu/drm/rcar-du/rcar_mipi_dsi.c
@@ -396,7 +396,7 @@ static int rcar_mipi_dsi_startup(struct rcar_mipi_dsi *dsi,
 	for (timeout = 10; timeout > 0; --timeout) {
 		if ((rcar_mipi_dsi_read(dsi, PPICLSR) & PPICLSR_STPST) &&
		    (rcar_mipi_dsi_read(dsi, PPIDLSR) & PPIDLSR_STPST) &&
-		    (rcar_mipi_dsi_read(dsi, CLOCKSET1) & CLOCKSET1_LOCK))
+		    (rcar_mipi_dsi_read(dsi, CLOCKSET1) & CLOCKSET1_LOCK_PHY))
 			break;
 
 		usleep_range(1000, 2000);
diff --git a/drivers/gpu/drm/rcar-du/rcar_mipi_dsi_regs.h b/drivers/gpu/drm/rcar-du/rcar_mipi_dsi_regs.h
index 2eaca54636f3d..1f1eb46c721fe 100644
--- a/drivers/gpu/drm/rcar-du/rcar_mipi_dsi_regs.h
+++ b/drivers/gpu/drm/rcar-du/rcar_mipi_dsi_regs.h
@@ -141,7 +141,6 @@
 
 #define CLOCKSET1			0x101c
 #define CLOCKSET1_LOCK_PHY		(1 << 17)
-#define CLOCKSET1_LOCK			(1 << 16)
 #define CLOCKSET1_CLKSEL		(1 << 8)
 #define CLOCKSET1_CLKINSEL_EXTAL	(0 << 2)
 #define CLOCKSET1_CLKINSEL_DIG		(1 << 2)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Shigeru Yoshida syoshida@redhat.com
[ Upstream commit 6b3d638ca897e099fa99bd6d02189d3176f80a47 ]
KMSAN reported a use-after-free issue in eth_skb_pkt_type()[1]. The cause of the issue was that eth_skb_pkt_type() accessed skb's data that didn't contain an Ethernet header. This occurs when bpf_prog_test_run_xdp() passes an invalid value as the user_size argument to bpf_test_init().
Fix this by returning an error when user_size is less than ETH_HLEN in bpf_test_init(). Additionally, remove the check for "if (user_size > size)" as it is unnecessary.
[1]
BUG: KMSAN: use-after-free in eth_skb_pkt_type include/linux/etherdevice.h:627 [inline]
BUG: KMSAN: use-after-free in eth_type_trans+0x4ee/0x980 net/ethernet/eth.c:165
 eth_skb_pkt_type include/linux/etherdevice.h:627 [inline]
 eth_type_trans+0x4ee/0x980 net/ethernet/eth.c:165
 __xdp_build_skb_from_frame+0x5a8/0xa50 net/core/xdp.c:635
 xdp_recv_frames net/bpf/test_run.c:272 [inline]
 xdp_test_run_batch net/bpf/test_run.c:361 [inline]
 bpf_test_run_xdp_live+0x2954/0x3330 net/bpf/test_run.c:390
 bpf_prog_test_run_xdp+0x148e/0x1b10 net/bpf/test_run.c:1318
 bpf_prog_test_run+0x5b7/0xa30 kernel/bpf/syscall.c:4371
 __sys_bpf+0x6a6/0xe20 kernel/bpf/syscall.c:5777
 __do_sys_bpf kernel/bpf/syscall.c:5866 [inline]
 __se_sys_bpf kernel/bpf/syscall.c:5864 [inline]
 __x64_sys_bpf+0xa4/0xf0 kernel/bpf/syscall.c:5864
 x64_sys_call+0x2ea0/0x3d90 arch/x86/include/generated/asm/syscalls_64.h:322
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xd9/0x1d0 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Uninit was created at:
 free_pages_prepare mm/page_alloc.c:1056 [inline]
 free_unref_page+0x156/0x1320 mm/page_alloc.c:2657
 __free_pages+0xa3/0x1b0 mm/page_alloc.c:4838
 bpf_ringbuf_free kernel/bpf/ringbuf.c:226 [inline]
 ringbuf_map_free+0xff/0x1e0 kernel/bpf/ringbuf.c:235
 bpf_map_free kernel/bpf/syscall.c:838 [inline]
 bpf_map_free_deferred+0x17c/0x310 kernel/bpf/syscall.c:862
 process_one_work kernel/workqueue.c:3229 [inline]
 process_scheduled_works+0xa2b/0x1b60 kernel/workqueue.c:3310
 worker_thread+0xedf/0x1550 kernel/workqueue.c:3391
 kthread+0x535/0x6b0 kernel/kthread.c:389
 ret_from_fork+0x6e/0x90 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

CPU: 1 UID: 0 PID: 17276 Comm: syz.1.16450 Not tainted 6.12.0-05490-g9bb88c659673 #8
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-3.fc41 04/01/2014
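The corrected bound check (condensed from the diff below) validates the caller-supplied length directly:

        /* user_size is the length the caller actually provided; anything
         * shorter than an Ethernet header must be rejected up front
         */
        if (user_size < ETH_HLEN || user_size > PAGE_SIZE - headroom - tailroom)
                return ERR_PTR(-EINVAL);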
Fixes: be3d72a2896c ("bpf: move user_size out of bpf_test_init")
Reported-by: syzkaller syzkaller@googlegroups.com
Suggested-by: Martin KaFai Lau martin.lau@linux.dev
Signed-off-by: Shigeru Yoshida syoshida@redhat.com
Signed-off-by: Martin KaFai Lau martin.lau@kernel.org
Acked-by: Stanislav Fomichev sdf@fomichev.me
Acked-by: Daniel Borkmann daniel@iogearbox.net
Link: https://patch.msgid.link/20250121150643.671650-1-syoshida@redhat.com
Signed-off-by: Alexei Starovoitov ast@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 net/bpf/test_run.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 64be562f0fe32..77b386b76d463 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -768,12 +768,9 @@ static void *bpf_test_init(const union bpf_attr *kattr, u32 user_size,
 	void __user *data_in = u64_to_user_ptr(kattr->test.data_in);
 	void *data;
 
-	if (size < ETH_HLEN || size > PAGE_SIZE - headroom - tailroom)
+	if (user_size < ETH_HLEN || user_size > PAGE_SIZE - headroom - tailroom)
 		return ERR_PTR(-EINVAL);
 
-	if (user_size > size)
-		return ERR_PTR(-EMSGSIZE);
-
 	size = SKB_DATA_ALIGN(size);
 	data = kzalloc(size + headroom + tailroom, GFP_USER);
 	if (!data)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jiayuan Chen mrpre@163.com
[ Upstream commit 0532a79efd68a4d9686b0385e4993af4b130ff82 ]
Added a new read_sock handler, allowing users to customize read operations instead of relying on the native socket's read_sock.
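As a usage sketch (hypothetical consumer; my_parse_msg() and my_rcv_msg() stand in for the mandatory callbacks defined elsewhere), the override is registered through strp_callbacks:

static int my_read_sock(struct strparser *strp, read_descriptor_t *desc,
			sk_read_actor_t recv_actor)
{
        /* custom read path: pull data for strp->sk and feed each
         * skb to recv_actor() instead of using sock->ops->read_sock
         */
        return 0;
}

static const struct strp_callbacks my_cb = {
        .parse_msg = my_parse_msg,      /* required, defined elsewhere */
        .rcv_msg   = my_rcv_msg,        /* required, defined elsewhere */
        .read_sock = my_read_sock,      /* new, optional override */
};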
Signed-off-by: Jiayuan Chen mrpre@163.com
Signed-off-by: Martin KaFai Lau martin.lau@kernel.org
Reviewed-by: Jakub Sitnicki jakub@cloudflare.com
Acked-by: John Fastabend john.fastabend@gmail.com
Link: https://patch.msgid.link/20250122100917.49845-2-mrpre@163.com
Stable-dep-of: 36b62df5683c ("bpf: Fix wrong copied_seq calculation")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 Documentation/networking/strparser.rst |  9 ++++++++-
 include/net/strparser.h                |  2 ++
 net/strparser/strparser.c              | 11 +++++++++--
 3 files changed, 19 insertions(+), 3 deletions(-)
diff --git a/Documentation/networking/strparser.rst b/Documentation/networking/strparser.rst
index 6cab1f74ae05a..7f623d1db72aa 100644
--- a/Documentation/networking/strparser.rst
+++ b/Documentation/networking/strparser.rst
@@ -112,7 +112,7 @@ Functions
 Callbacks
 =========
 
-There are six callbacks:
+There are seven callbacks:
 
 ::
 
@@ -182,6 +182,13 @@ There are six callbacks:
     the length of the message. skb->len - offset may be greater then
     full_len since strparser does not trim the skb.
 
+    ::
+
+	int (*read_sock)(struct strparser *strp, read_descriptor_t *desc,
+			 sk_read_actor_t recv_actor);
+
+    The read_sock callback is used by strparser instead of
+    sock->ops->read_sock, if provided.
 ::
 
     int (*read_sock_done)(struct strparser *strp, int err);
diff --git a/include/net/strparser.h b/include/net/strparser.h
index 41e2ce9e9e10f..0a83010b3a64a 100644
--- a/include/net/strparser.h
+++ b/include/net/strparser.h
@@ -43,6 +43,8 @@ struct strparser;
 struct strp_callbacks {
 	int (*parse_msg)(struct strparser *strp, struct sk_buff *skb);
 	void (*rcv_msg)(struct strparser *strp, struct sk_buff *skb);
+	int (*read_sock)(struct strparser *strp, read_descriptor_t *desc,
+			 sk_read_actor_t recv_actor);
 	int (*read_sock_done)(struct strparser *strp, int err);
 	void (*abort_parser)(struct strparser *strp, int err);
 	void (*lock)(struct strparser *strp);
diff --git a/net/strparser/strparser.c b/net/strparser/strparser.c
index 8299ceb3e3739..95696f42647ec 100644
--- a/net/strparser/strparser.c
+++ b/net/strparser/strparser.c
@@ -347,7 +347,10 @@ static int strp_read_sock(struct strparser *strp)
 	struct socket *sock = strp->sk->sk_socket;
 	read_descriptor_t desc;
 
-	if (unlikely(!sock || !sock->ops || !sock->ops->read_sock))
+	if (unlikely(!sock || !sock->ops))
+		return -EBUSY;
+
+	if (unlikely(!strp->cb.read_sock && !sock->ops->read_sock))
 		return -EBUSY;
 
 	desc.arg.data = strp;
@@ -355,7 +358,10 @@ static int strp_read_sock(struct strparser *strp)
 	desc.count = 1; /* give more than one skb per call */
 
 	/* sk should be locked here, so okay to do read_sock */
-	sock->ops->read_sock(strp->sk, &desc, strp_recv);
+	if (strp->cb.read_sock)
+		strp->cb.read_sock(strp, &desc, strp_recv);
+	else
+		sock->ops->read_sock(strp->sk, &desc, strp_recv);
 
 	desc.error = strp->cb.read_sock_done(strp, desc.error);
 
@@ -468,6 +474,7 @@ int strp_init(struct strparser *strp, struct sock *sk,
 	strp->cb.unlock = cb->unlock ? : strp_sock_unlock;
 	strp->cb.rcv_msg = cb->rcv_msg;
 	strp->cb.parse_msg = cb->parse_msg;
+	strp->cb.read_sock = cb->read_sock;
 	strp->cb.read_sock_done = cb->read_sock_done ? : default_read_sock_done;
 	strp->cb.abort_parser = cb->abort_parser ? : strp_abort_strp;
 
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jiayuan Chen mrpre@163.com
[ Upstream commit 36b62df5683c315ba58c950f1a9c771c796c30ec ]
'sk->copied_seq' was updated in the tcp_eat_skb() function when the action of a BPF program was SK_REDIRECT. For other actions, like SK_PASS, the update logic for 'sk->copied_seq' was moved to tcp_bpf_recvmsg_parser() to ensure the accuracy of the 'fionread' feature.
This works for the single stream_verdict scenario, as that change also modified the sk_data_ready->sk_psock_verdict_data_ready->tcp_read_skb path to stop updating 'sk->copied_seq'.
However, for programs where both stream_parser and stream_verdict are active (the strparser case), tcp_read_sock() is used instead of tcp_read_skb() (sk_data_ready->strp_data_ready->tcp_read_sock), and tcp_read_sock() still updates 'sk->copied_seq', leading to duplicate updates.
In summary, for strparser + SK_PASS, copied_seq is redundantly calculated in both tcp_read_sock() and tcp_bpf_recvmsg_parser().
The issue causes incorrect copied_seq calculations, which prevent correct data reads from the recv() interface in user-land.
We do not want to add new proto_ops to implement a new version of tcp_read_sock, as this would introduce code complexity [1].
We could have added noack and copied_seq to desc and then called ops->read_sock. Unfortunately, other modules don't fully initialize desc to zero. So, for now, we call tcp_read_sock_noack() directly in tcp_bpf.c.
[1]: https://lore.kernel.org/bpf/20241218053408.437295-1-mrpre@163.com
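For context, a minimal sketch (hypothetical callback names and a hypothetical u32 my_copied_seq; based on the strp_callbacks extension from the previous patch) of how a strparser user can supply its own read_sock hook:

static int my_read_sock(struct strparser *strp, read_descriptor_t *desc,
                        sk_read_actor_t recv_actor)
{
        /* read without acking, tracking a private copied_seq */
        return tcp_read_sock_noack(strp->sk, desc, recv_actor, true,
                                   &my_copied_seq);
}

static const struct strp_callbacks cb = {
        .rcv_msg   = my_rcv_msg,        /* hypothetical */
        .parse_msg = my_parse_msg,      /* hypothetical */
        .read_sock = my_read_sock,      /* used instead of sock->ops->read_sock */
};

/* strp_init(&strp, sk, &cb); */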
Fixes: e5c6de5fa025 ("bpf, sockmap: Incorrectly handling copied_seq") Suggested-by: Jakub Sitnicki jakub@cloudflare.com Signed-off-by: Jiayuan Chen mrpre@163.com Signed-off-by: Martin KaFai Lau martin.lau@kernel.org Reviewed-by: Jakub Sitnicki jakub@cloudflare.com Acked-by: John Fastabend john.fastabend@gmail.com Link: https://patch.msgid.link/20250122100917.49845-3-mrpre@163.com Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/skmsg.h | 2 ++ include/net/tcp.h | 8 ++++++++ net/core/skmsg.c | 7 +++++++ net/ipv4/tcp.c | 29 ++++++++++++++++++++++++----- net/ipv4/tcp_bpf.c | 36 ++++++++++++++++++++++++++++++++++++ 5 files changed, 77 insertions(+), 5 deletions(-)
diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h index 6ccfd9236387c..32bbebf5b71e3 100644 --- a/include/linux/skmsg.h +++ b/include/linux/skmsg.h @@ -87,6 +87,8 @@ struct sk_psock { struct sk_psock_progs progs; #if IS_ENABLED(CONFIG_BPF_STREAM_PARSER) struct strparser strp; + u32 copied_seq; + u32 ingress_bytes; #endif struct sk_buff_head ingress_skb; struct list_head ingress_msg; diff --git a/include/net/tcp.h b/include/net/tcp.h index 14a00cdd31f42..83e0362e3b721 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -700,6 +700,9 @@ void tcp_get_info(struct sock *, struct tcp_info *); /* Read 'sendfile()'-style from a TCP socket */ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc, sk_read_actor_t recv_actor); +int tcp_read_sock_noack(struct sock *sk, read_descriptor_t *desc, + sk_read_actor_t recv_actor, bool noack, + u32 *copied_seq); int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor); struct sk_buff *tcp_recv_skb(struct sock *sk, u32 seq, u32 *off); void tcp_read_done(struct sock *sk, size_t len); @@ -2351,6 +2354,11 @@ struct sk_psock; struct proto *tcp_bpf_get_proto(struct sock *sk, struct sk_psock *psock); int tcp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore); void tcp_bpf_clone(const struct sock *sk, struct sock *newsk); +#ifdef CONFIG_BPF_STREAM_PARSER +struct strparser; +int tcp_bpf_strp_read_sock(struct strparser *strp, read_descriptor_t *desc, + sk_read_actor_t recv_actor); +#endif /* CONFIG_BPF_STREAM_PARSER */ #endif /* CONFIG_BPF_SYSCALL */
#ifdef CONFIG_INET diff --git a/net/core/skmsg.c b/net/core/skmsg.c index 65764952bc681..5a790cd1121b1 100644 --- a/net/core/skmsg.c +++ b/net/core/skmsg.c @@ -547,6 +547,9 @@ static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb, return num_sge; }
+#if IS_ENABLED(CONFIG_BPF_STREAM_PARSER) + psock->ingress_bytes += len; +#endif copied = len; msg->sg.start = 0; msg->sg.size = copied; @@ -1140,6 +1143,10 @@ int sk_psock_init_strp(struct sock *sk, struct sk_psock *psock) if (!ret) sk_psock_set_state(psock, SK_PSOCK_RX_STRP_ENABLED);
+ if (sk_is_tcp(sk)) { + psock->strp.cb.read_sock = tcp_bpf_strp_read_sock; + psock->copied_seq = tcp_sk(sk)->copied_seq; + } return ret; }
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index e27a9a9bb1623..7d591a0cf0c70 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -1699,12 +1699,13 @@ EXPORT_SYMBOL(tcp_recv_skb); * or for 'peeking' the socket using this routine * (although both would be easy to implement). */ -int tcp_read_sock(struct sock *sk, read_descriptor_t *desc, - sk_read_actor_t recv_actor) +static int __tcp_read_sock(struct sock *sk, read_descriptor_t *desc, + sk_read_actor_t recv_actor, bool noack, + u32 *copied_seq) { struct sk_buff *skb; struct tcp_sock *tp = tcp_sk(sk); - u32 seq = tp->copied_seq; + u32 seq = *copied_seq; u32 offset; int copied = 0;
@@ -1758,9 +1759,12 @@ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc, tcp_eat_recv_skb(sk, skb); if (!desc->count) break; - WRITE_ONCE(tp->copied_seq, seq); + WRITE_ONCE(*copied_seq, seq); } - WRITE_ONCE(tp->copied_seq, seq); + WRITE_ONCE(*copied_seq, seq); + + if (noack) + goto out;
tcp_rcv_space_adjust(sk);
@@ -1769,10 +1773,25 @@ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc, tcp_recv_skb(sk, seq, &offset); tcp_cleanup_rbuf(sk, copied); } +out: return copied; } + +int tcp_read_sock(struct sock *sk, read_descriptor_t *desc, + sk_read_actor_t recv_actor) +{ + return __tcp_read_sock(sk, desc, recv_actor, false, + &tcp_sk(sk)->copied_seq); +} EXPORT_SYMBOL(tcp_read_sock);
+int tcp_read_sock_noack(struct sock *sk, read_descriptor_t *desc, + sk_read_actor_t recv_actor, bool noack, + u32 *copied_seq) +{ + return __tcp_read_sock(sk, desc, recv_actor, noack, copied_seq); +} + int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor) { struct sk_buff *skb; diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c index a8db010e9e611..bf10fa3c37b76 100644 --- a/net/ipv4/tcp_bpf.c +++ b/net/ipv4/tcp_bpf.c @@ -691,6 +691,42 @@ static int tcp_bpf_assert_proto_ops(struct proto *ops) ops->sendpage == tcp_sendpage ? 0 : -ENOTSUPP; }
+#if IS_ENABLED(CONFIG_BPF_STREAM_PARSER) +int tcp_bpf_strp_read_sock(struct strparser *strp, read_descriptor_t *desc, + sk_read_actor_t recv_actor) +{ + struct sock *sk = strp->sk; + struct sk_psock *psock; + struct tcp_sock *tp; + int copied = 0; + + tp = tcp_sk(sk); + rcu_read_lock(); + psock = sk_psock(sk); + if (WARN_ON_ONCE(!psock)) { + desc->error = -EINVAL; + goto out; + } + + psock->ingress_bytes = 0; + copied = tcp_read_sock_noack(sk, desc, recv_actor, true, + &psock->copied_seq); + if (copied < 0) + goto out; + /* recv_actor may redirect skb to another socket (SK_REDIRECT) or + * just put skb into ingress queue of current socket (SK_PASS). + * For SK_REDIRECT, we need to ack the frame immediately but for + * SK_PASS, we want to delay the ack until tcp_bpf_recvmsg_parser(). + */ + tp->copied_seq = psock->copied_seq - psock->ingress_bytes; + tcp_rcv_space_adjust(sk); + __tcp_cleanup_rbuf(sk, copied - psock->ingress_bytes); +out: + rcu_read_unlock(); + return copied; +} +#endif /* CONFIG_BPF_STREAM_PARSER */ + int tcp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore) { int family = sk->sk_family == AF_INET6 ? TCP_BPF_IPV6 : TCP_BPF_IPV4;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrey Vatoropin a.vatoropin@crpt.ru
[ Upstream commit 3fb3cb4350befc4f901c54e0cb4a2a47b1302e08 ]
Size of variable sd_gain equals four bytes - DA9150_QIF_SD_GAIN_SIZE. Size of variable shunt_val equals two bytes - DA9150_QIF_SHUNT_VAL_SIZE.
The expression sd_gain * shunt_val is currently evaluated using 32-bit arithmetic, so the multiplication may overflow.
As a value of type 'u64' is used as storage for the eventual result, put the ULL constant in the first position of each expression to give the compiler complete information about the proper arithmetic to use. According to C99, the guaranteed width of an 'unsigned long long' is at least 64 bits.
Remove the explicit cast to u64 as it is meaningless.
Just for the sake of consistency, apply the same trick to the other expression concerning 'iavg'.
Found by Linux Verification Center (linuxtesting.org) with SVACE.
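A standalone illustration (not driver code; the register values are hypothetical) of why the operand order matters:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t sd_gain = 100000;  /* hypothetical 4-byte register value */
        uint32_t shunt_val = 50000; /* hypothetical 2-byte register value */

        /* old form: sd_gain * shunt_val is evaluated in 32 bits and wraps
         * before 65536ULL (or the cast) ever widens the expression */
        uint64_t bad = (uint64_t)(sd_gain * shunt_val * 65536ULL);

        /* fixed form: the leading ULL constant forces 64-bit arithmetic
         * through the whole expression */
        uint64_t good = 65536ULL * sd_gain * shunt_val;

        printf("bad=%llu good=%llu\n",
               (unsigned long long)bad, (unsigned long long)good);
        return 0;
}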
Fixes: a419b4fd9138 ("power: Add support for DA9150 Fuel-Gauge") Signed-off-by: Andrey Vatoropin a.vatoropin@crpt.ru Link: https://lore.kernel.org/r/20250130090030.53422-1-a.vatoropin@crpt.ru Signed-off-by: Sebastian Reichel sebastian.reichel@collabora.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/power/supply/da9150-fg.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/power/supply/da9150-fg.c b/drivers/power/supply/da9150-fg.c index 8c5e2c49d6c1c..9d0a6ab698f58 100644 --- a/drivers/power/supply/da9150-fg.c +++ b/drivers/power/supply/da9150-fg.c @@ -248,9 +248,9 @@ static int da9150_fg_current_avg(struct da9150_fg *fg, DA9150_QIF_SD_GAIN_SIZE); da9150_fg_read_sync_end(fg);
- div = (u64) (sd_gain * shunt_val * 65536ULL); + div = 65536ULL * sd_gain * shunt_val; do_div(div, 1000000); - res = (u64) (iavg * 1000000ULL); + res = 1000000ULL * iavg; do_div(res, div);
val->intval = (int) res;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: David Hildenbrand david@redhat.com
[ Upstream commit b3fefbb30a1691533cb905006b69b2a474660744 ]
In case we have to retry the loop, we fail to unlock and put the folio. In that case, we will keep failing make_device_exclusive_range() because we cannot grab the folio lock, and we even return from the function with the folio locked and referenced, so make_device_exclusive_range() can effectively never succeed.
While at it, convert the other unlock+put to use a folio as well.
This was found by code inspection.
Fixes: 8f187163eb89 ("nouveau/svm: implement atomic SVM access") Signed-off-by: David Hildenbrand david@redhat.com Reviewed-by: Alistair Popple apopple@nvidia.com Tested-by: Alistair Popple apopple@nvidia.com Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://patchwork.freedesktop.org/patch/msgid/20250124181524.3584236-2-david... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/nouveau/nouveau_svm.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c index be6674fb1af71..d0bff13ac758d 100644 --- a/drivers/gpu/drm/nouveau/nouveau_svm.c +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c @@ -592,6 +592,7 @@ static int nouveau_atomic_range_fault(struct nouveau_svmm *svmm, unsigned long timeout = jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT); struct mm_struct *mm = svmm->notifier.mm; + struct folio *folio; struct page *page; unsigned long start = args->p.addr; unsigned long notifier_seq; @@ -618,12 +619,16 @@ static int nouveau_atomic_range_fault(struct nouveau_svmm *svmm, ret = -EINVAL; goto out; } + folio = page_folio(page);
mutex_lock(&svmm->mutex); if (!mmu_interval_read_retry(¬ifier->notifier, notifier_seq)) break; mutex_unlock(&svmm->mutex); + + folio_unlock(folio); + folio_put(folio); }
/* Map the page on the GPU. */ @@ -639,8 +644,8 @@ static int nouveau_atomic_range_fault(struct nouveau_svmm *svmm, ret = nvif_object_ioctl(&svmm->vmm->vmm.object, args, size, NULL); mutex_unlock(&svmm->mutex);
- unlock_page(page); - put_page(page); + folio_unlock(folio); + folio_put(folio);
out: mmu_interval_notifier_remove(¬ifier->notifier);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marijn Suijten marijn.suijten@somainline.org
[ Upstream commit 144429831f447223253a0e4376489f84ff37d1a7 ]
What used to be the input_10_bits boolean - feeding into the lowest bit of DSC_ENC - on MSM downstream turned into an accidental OR with the full bits_per_component number when it was ported to the upstream kernel.
On typical bpc=8 setups we don't notice this because line_buf_depth is always an odd value (it contains bpc+1) and will also set the 4th bit after left-shifting by 3 (hence this |= bits_per_component is a no-op).
Now that guards are being removed to allow more bits_per_component values besides 8 (possible since commit 49fd30a7153b ("drm/msm/dsi: use DRM DSC helpers for DSC setup")), a bpc of 10 will instead clash with the 5th bit which is convert_rgb. This is "fortunately" also always set to true by MSM's dsi_populate_dsc_params() already, but once a bpc of 12 starts being used it'll write into simple_422 which is normally false.
To solve all these overlaps, simply replicate the downstream code and only set this lowest bit if bits_per_component is equal to 10. It is unclear why DSC requires this only for bpc=10 but not bpc=12; also note that this lowest bit was not being set previously, even though a panel and patch on the list used it without any reported issues.
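For reference, the low-bit layout of DSC_ENC as packed by the ORs in the hunk below, and the overlaps this change avoids (a sketch derived purely from the shifts shown, not from hardware documentation):

/* DSC_ENC low bits:
 *   bit 0: input_10_bits    bit 1: convert_rgb
 *   bit 2: simple_422       bits 3..: line_buf_depth
 *
 * bpc = 8  (0b1000) only touches bit 3, which the odd line_buf_depth
 *                   (bpc + 1) << 3 already sets, so it is a no-op;
 * bpc = 10 (0b1010) additionally flips the convert_rgb bit;
 * bpc = 12 (0b1100) additionally flips the simple_422 bit.
 */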
Fixes: c110cfd1753e ("drm/msm/disp/dpu1: Add support for DSC") Signed-off-by: Marijn Suijten marijn.suijten@somainline.org Reviewed-by: Abhinav Kumar quic_abhinavk@quicinc.com Reviewed-by: Konrad Dybcio konrad.dybcio@oss.qualcomm.com Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Patchwork: https://patchwork.freedesktop.org/patch/636311/ Link: https://lore.kernel.org/r/20250211-dsc-10-bit-v1-1-1c85a9430d9a@somainline.o... Signed-off-by: Abhinav Kumar quic_abhinavk@quicinc.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c index c8f14555834a8..70a6dfe94faa5 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c @@ -46,6 +46,7 @@ static void dpu_hw_dsc_config(struct dpu_hw_dsc *hw_dsc, u32 slice_last_group_size; u32 det_thresh_flatness; bool is_cmd_mode = !(mode & DSC_MODE_VIDEO); + bool input_10_bits = dsc->bits_per_component == 10;
DPU_REG_WRITE(c, DSC_COMMON_MODE, mode);
@@ -62,7 +63,7 @@ static void dpu_hw_dsc_config(struct dpu_hw_dsc *hw_dsc, data |= (dsc->line_buf_depth << 3); data |= (dsc->simple_422 << 2); data |= (dsc->convert_rgb << 1); - data |= dsc->bits_per_component; + data |= input_10_bits;
DPU_REG_WRITE(c, DSC_ENC, data);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Caleb Sander Mateos csander@purestorage.com
[ Upstream commit 487a3ea7b1b8ba2ca7d2c2bb3c3594dc360d6261 ]
nvme_validate_passthru_nsid() logs an err message whose format string is split over 2 lines. There is a missing space between the two pieces, resulting in log lines like "... does not match nsid (1)of namespace". Add the missing space between ")" and "of". Also combine the format string pieces onto a single line to make the err message easier to grep.
Fixes: e7d4b5493a2d ("nvme: factor out a nvme_validate_passthru_nsid helper") Signed-off-by: Caleb Sander Mateos csander@purestorage.com Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Keith Busch kbusch@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/ioctl.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c index a02873792890e..acf73a91e87e7 100644 --- a/drivers/nvme/host/ioctl.c +++ b/drivers/nvme/host/ioctl.c @@ -262,8 +262,7 @@ static bool nvme_validate_passthru_nsid(struct nvme_ctrl *ctrl, { if (ns && nsid != ns->head->ns_id) { dev_err(ctrl->device, - "%s: nsid (%u) in cmd does not match nsid (%u)" - "of namespace\n", + "%s: nsid (%u) in cmd does not match nsid (%u) of namespace\n", current->comm, nsid, ns->head->ns_id); return false; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yan Zhai yan@cloudflare.com
[ Upstream commit 5644c6b50ffee0a56c1e01430a8c88e34decb120 ]
The generic_map_lookup_batch currently returns EINTR if it fails with ENOENT and retries several times on bpf_map_copy_value. The next batch would start from the same location, presuming it's a transient issue. This is incorrect if a map can actually have "holes", i.e. "get_next_key" can return a key that does not point to a valid value. At least the array-of-maps type may legitimately contain such holes. Right now, when these holes show up, generic batch lookup cannot proceed any more; it will always fail with EINTR errors.
Rather, do not retry in generic_map_lookup_batch. If it finds a non-existent element, skip to the next key. This simple solution comes at a price: transient errors may not be recovered, and the iteration might cycle back to the first key under parallel deletion. For example, Hou Tao houtao@huaweicloud.com pointed out the following scenario:
For LPM trie map: (1) ->map_get_next_key(map, prev_key, key) returns a valid key
(2) bpf_map_copy_value() returns -ENOENT, meaning the key must have been deleted concurrently.
(3) goto next_key, which swaps prev_key and key.
(4) ->map_get_next_key(map, prev_key, key) again: prev_key now points to a non-existent key, which LPM trie treats just like the prev_key=NULL case, so the returned key will be duplicated.
With the retry logic, the iteration can continue to the key next to the deleted one. But if we directly skip to the next key, the iteration loop would restart from the first key for the lpm_trie type.
However, not all races may be recovered. For example, if the current key is deleted after instead of before bpf_map_copy_value, or if prev_key also gets deleted, then the loop will still restart from the first key for lpm_trie anyway. For generic lookup it might be better to stay simple, i.e. just skip to the next key. To guarantee that the output keys are not duplicated, it is better to implement map-type-specific batch operations, which can properly lock the trie and synchronize with concurrent mutators.
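For illustration, a userspace sketch (hypothetical map fd and buffer sizes) of batched lookup via libbpf; after this change, holes no longer surface as spurious EINTR, and ENOENT only marks the end of iteration:

#include <errno.h>
#include <bpf/bpf.h>

static int dump_all(int map_fd, __u32 *keys, __u64 *vals, __u32 batch_sz)
{
        __u32 next_key = 0, count;
        void *in_batch = NULL;  /* NULL starts from the first key */
        int err;

        do {
                count = batch_sz;
                err = bpf_map_lookup_batch(map_fd, in_batch, &next_key,
                                           keys, vals, &count, NULL);
                if (err && errno != ENOENT)
                        return -errno;
                /* consume 'count' key/value pairs here ... */
                in_batch = &next_key;
        } while (!err);         /* ENOENT: no more keys */

        return 0;
}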
Fixes: cb4d03ab499d ("bpf: Add generic support for lookup batch op") Closes: https://lore.kernel.org/bpf/Z6JXtA1M5jAZx8xD@debian.debian/ Signed-off-by: Yan Zhai yan@cloudflare.com Acked-by: Hou Tao houtao1@huawei.com Link: https://lore.kernel.org/r/85618439eea75930630685c467ccefeac0942e2b.173917159... Signed-off-by: Alexei Starovoitov ast@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/bpf/syscall.c | 18 +++++------------- 1 file changed, 5 insertions(+), 13 deletions(-)
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index cfb361f4b00ea..7a4004f09bae7 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -1713,8 +1713,6 @@ int generic_map_update_batch(struct bpf_map *map, return err; }
-#define MAP_LOOKUP_RETRIES 3 - int generic_map_lookup_batch(struct bpf_map *map, const union bpf_attr *attr, union bpf_attr __user *uattr) @@ -1724,8 +1722,8 @@ int generic_map_lookup_batch(struct bpf_map *map, void __user *values = u64_to_user_ptr(attr->batch.values); void __user *keys = u64_to_user_ptr(attr->batch.keys); void *buf, *buf_prevkey, *prev_key, *key, *value; - int err, retry = MAP_LOOKUP_RETRIES; u32 value_size, cp, max_count; + int err;
if (attr->batch.elem_flags & ~BPF_F_LOCK) return -EINVAL; @@ -1771,14 +1769,8 @@ int generic_map_lookup_batch(struct bpf_map *map, err = bpf_map_copy_value(map, key, value, attr->batch.elem_flags);
- if (err == -ENOENT) { - if (retry) { - retry--; - continue; - } - err = -EINTR; - break; - } + if (err == -ENOENT) + goto next_key;
if (err) goto free_buf; @@ -1793,12 +1785,12 @@ int generic_map_lookup_batch(struct bpf_map *map, goto free_buf; }
+ cp++; +next_key: if (!prev_key) prev_key = buf_prevkey;
swap(prev_key, key); - retry = MAP_LOOKUP_RETRIES; - cp++; cond_resched(); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jessica Zhang quic_jesszhan@quicinc.com
commit f063ac6b55df03ed25996bdc84d9e1c50147cfa1 upstream.
Disable pingpong dither in dpu_encoder_helper_phys_cleanup().
This avoids the issue where an encoder unknowingly uses dither after reserving a pingpong block that was previously bound to an encoder that had enabled dither.
Cc: stable@vger.kernel.org Reported-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Closes: https://lore.kernel.org/all/jr7zbj5w7iq4apg3gofuvcwf4r2swzqjk7sshwcdjll4mn6c... Signed-off-by: Jessica Zhang quic_jesszhan@quicinc.com Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Reviewed-by: Abhinav Kumar quic_abhinavk@quicinc.com Fixes: 3c128638a07d ("drm/msm/dpu: add support for dither block in display") Patchwork: https://patchwork.freedesktop.org/patch/636517/ Link: https://lore.kernel.org/r/20250211-dither-disable-v1-1-ac2cb455f6b9@quicinc.... Signed-off-by: Abhinav Kumar quic_abhinavk@quicinc.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -2059,6 +2059,9 @@ void dpu_encoder_helper_phys_cleanup(str } }
+ if (phys_enc->hw_pp && phys_enc->hw_pp->ops.setup_dither) + phys_enc->hw_pp->ops.setup_dither(phys_enc->hw_pp, NULL); + /* reset the merge 3D HW block */ if (phys_enc->hw_pp && phys_enc->hw_pp->merge_3d) { phys_enc->hw_pp->merge_3d->ops.setup_3d_mode(phys_enc->hw_pp->merge_3d,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ville Syrjälä ville.syrjala@linux.intel.com
commit 07fb70d82e0df085980246bf17bc12537588795f upstream.
Any active plane needs to have its crtc included in the atomic state. For planes enabled via uapi that is all handled in the core. But when we use a plane for joiner, the uapi code thinks the plane is disabled and therefore doesn't have a crtc. So we need to pull those in by hand. We do it first thing in intel_joiner_add_affected_crtcs() so that any newly added crtc will subsequently pull in all of its joined crtcs as well.
The symptoms from failing to do this are: - duct tape in the form of commit 1d5b09f8daf8 ("drm/i915: Fix NULL ptr deref by checking new_crtc_state") - the plane's hw state will get overwritten by the disabled uapi state if it can't find the uapi counterpart plane in the atomic state from where it should copy the correct state
Cc: stable@vger.kernel.org Reviewed-by: Maarten Lankhorst maarten.lankhorst@linux.intel.com Signed-off-by: Ville Syrjälä ville.syrjala@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20250212164330.16891-2-ville.s... (cherry picked from commit 91077d1deb5374eb8be00fb391710f00e751dc4b) Signed-off-by: Rodrigo Vivi rodrigo.vivi@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/i915/display/intel_display.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+)
--- a/drivers/gpu/drm/i915/display/intel_display.c +++ b/drivers/gpu/drm/i915/display/intel_display.c @@ -6691,12 +6691,30 @@ static int intel_async_flip_check_hw(str static int intel_bigjoiner_add_affected_crtcs(struct intel_atomic_state *state) { struct drm_i915_private *i915 = to_i915(state->base.dev); + const struct intel_plane_state *plane_state; struct intel_crtc_state *crtc_state; + struct intel_plane *plane; struct intel_crtc *crtc; u8 affected_pipes = 0; u8 modeset_pipes = 0; int i;
+ /* + * Any plane which is in use by the joiner needs its crtc. + * Pull those in first as this will not have happened yet + * if the plane remains disabled according to uapi. + */ + for_each_new_intel_plane_in_state(state, plane, plane_state, i) { + crtc = to_intel_crtc(plane_state->hw.crtc); + if (!crtc) + continue; + + crtc_state = intel_atomic_get_crtc_state(&state->base, crtc); + if (IS_ERR(crtc_state)) + return PTR_ERR(crtc_state); + } + + /* Now pull in all joined crtcs */ for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) { affected_pipes |= crtc_state->bigjoiner_pipes; if (intel_crtc_needs_modeset(crtc_state))
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sumit Garg sumit.garg@linaro.org
commit 70b0d6b0a199c5a3ee6c72f5e61681ed6f759612 upstream.
OP-TEE supplicant is a user-space daemon, and it's possible for it to be hung, crashed, or killed in the middle of processing an OP-TEE RPC call. It becomes more complicated when there is incorrect shutdown ordering of the supplicant process vs the OP-TEE client application, which can eventually lead to a system hang-up waiting for the closure of the client application.
Allow the client process waiting in kernel for supplicant response to be killed rather than indefinitely waiting in an unkillable state. Also, a normal uninterruptible wait should not have resulted in the hung-task watchdog getting triggered, but the endless loop would.
This fixes issues observed during system reboot/shutdown when the supplicant hung for some reason, or crashed or was killed, leaving the client hung in an unkillable state. That in turn left the system hung, requiring a hard power off/on to recover.
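The distinction the fix relies on, in short:

/* wait_for_completion_interruptible() - wakes on any pending signal, so
 *   the old code had to guess whether interruption should be honoured;
 * wait_for_completion_killable()      - wakes only on fatal signals
 *   (e.g. SIGKILL), so ordinary signals cannot disturb the RPC, yet a
 *   client stuck on a dead supplicant can always be killed.
 */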
Fixes: 4fb0a5eb364d ("tee: add OP-TEE driver") Suggested-by: Arnd Bergmann arnd@arndb.de Cc: stable@vger.kernel.org Signed-off-by: Sumit Garg sumit.garg@linaro.org Reviewed-by: Arnd Bergmann arnd@arndb.de Reviewed-by: Jens Wiklander jens.wiklander@linaro.org Signed-off-by: Arnd Bergmann arnd@arndb.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tee/optee/supp.c | 35 ++++++++--------------------------- 1 file changed, 8 insertions(+), 27 deletions(-)
--- a/drivers/tee/optee/supp.c +++ b/drivers/tee/optee/supp.c @@ -80,7 +80,6 @@ u32 optee_supp_thrd_req(struct tee_conte struct optee *optee = tee_get_drvdata(ctx->teedev); struct optee_supp *supp = &optee->supp; struct optee_supp_req *req; - bool interruptable; u32 ret;
/* @@ -111,36 +110,18 @@ u32 optee_supp_thrd_req(struct tee_conte /* * Wait for supplicant to process and return result, once we've * returned from wait_for_completion(&req->c) successfully we have - * exclusive access again. + * exclusive access again. Allow the wait to be killable such that + * the wait doesn't turn into an indefinite state if the supplicant + * gets hung for some reason. */ - while (wait_for_completion_interruptible(&req->c)) { + if (wait_for_completion_killable(&req->c)) { mutex_lock(&supp->mutex); - interruptable = !supp->ctx; - if (interruptable) { - /* - * There's no supplicant available and since the - * supp->mutex currently is held none can - * become available until the mutex released - * again. - * - * Interrupting an RPC to supplicant is only - * allowed as a way of slightly improving the user - * experience in case the supplicant hasn't been - * started yet. During normal operation the supplicant - * will serve all requests in a timely manner and - * interrupting then wouldn't make sense. - */ - if (req->in_queue) { - list_del(&req->link); - req->in_queue = false; - } + if (req->in_queue) { + list_del(&req->link); + req->in_queue = false; } mutex_unlock(&supp->mutex); - - if (interruptable) { - req->ret = TEEC_ERROR_COMMUNICATION; - break; - } + req->ret = TEEC_ERROR_COMMUNICATION; }
ret = req->ret;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Gavrilov Ilia Ilia.Gavrilov@infotecs.ru
commit 07b598c0e6f06a0f254c88dafb4ad50f8a8c6eea upstream.
Syzkaller reports the following bug:
BUG: spinlock bad magic on CPU#1, syz-executor.0/7995
 lock: 0xffff88805303f3e0, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
CPU: 1 PID: 7995 Comm: syz-executor.0 Tainted: G E 5.10.209+ #1
Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 11/12/2020
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x119/0x179 lib/dump_stack.c:118
 debug_spin_lock_before kernel/locking/spinlock_debug.c:83 [inline]
 do_raw_spin_lock+0x1f6/0x270 kernel/locking/spinlock_debug.c:112
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:117 [inline]
 _raw_spin_lock_irqsave+0x50/0x70 kernel/locking/spinlock.c:159
 reset_per_cpu_data+0xe6/0x240 [drop_monitor]
 net_dm_cmd_trace+0x43d/0x17a0 [drop_monitor]
 genl_family_rcv_msg_doit+0x22f/0x330 net/netlink/genetlink.c:739
 genl_family_rcv_msg net/netlink/genetlink.c:783 [inline]
 genl_rcv_msg+0x341/0x5a0 net/netlink/genetlink.c:800
 netlink_rcv_skb+0x14d/0x440 net/netlink/af_netlink.c:2497
 genl_rcv+0x29/0x40 net/netlink/genetlink.c:811
 netlink_unicast_kernel net/netlink/af_netlink.c:1322 [inline]
 netlink_unicast+0x54b/0x800 net/netlink/af_netlink.c:1348
 netlink_sendmsg+0x914/0xe00 net/netlink/af_netlink.c:1916
 sock_sendmsg_nosec net/socket.c:651 [inline]
 __sock_sendmsg+0x157/0x190 net/socket.c:663
 ____sys_sendmsg+0x712/0x870 net/socket.c:2378
 ___sys_sendmsg+0xf8/0x170 net/socket.c:2432
 __sys_sendmsg+0xea/0x1b0 net/socket.c:2461
 do_syscall_64+0x30/0x40 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x62/0xc7
RIP: 0033:0x7f3f9815aee9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f3f972bf0c8 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007f3f9826d050 RCX: 00007f3f9815aee9
RDX: 0000000020000000 RSI: 0000000020001300 RDI: 0000000000000007
RBP: 00007f3f981b63bd R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000006e R14: 00007f3f9826d050 R15: 00007ffe01ee6768
If drop_monitor is built as a kernel module, syzkaller may have time to send a netlink NET_DM_CMD_START message during module loading. This calls the net_dm_monitor_start() function, which uses a spinlock that has not yet been initialized.
To fix this, let's place resource initialization above the registration of a generic netlink family.
Found by InfoTeCS on behalf of Linux Verification Center (linuxtesting.org) with Syzkaller.
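The general shape of the corrected ordering (a sketch with hypothetical names); nothing reachable through the netlink family may still be uninitialized when the family is registered:

static int __init my_init(void)
{
        int rc;

        init_per_cpu_state();   /* spinlocks, per-CPU data, ... first */

        rc = register_netdevice_notifier(&my_notifier);
        if (rc)
                return rc;

        /* from this point on, userspace can call into the handlers */
        rc = genl_register_family(&my_genl_family);
        if (rc)
                unregister_netdevice_notifier(&my_notifier);
        return rc;
}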
Fixes: 9a8afc8d3962 ("Network Drop Monitor: Adding drop monitor implementation & Netlink protocol") Cc: stable@vger.kernel.org Signed-off-by: Ilia Gavrilov Ilia.Gavrilov@infotecs.ru Reviewed-by: Ido Schimmel idosch@nvidia.com Link: https://patch.msgid.link/20250213152054.2785669-1-Ilia.Gavrilov@infotecs.ru Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/core/drop_monitor.c | 29 ++++++++++++++--------------- 1 file changed, 14 insertions(+), 15 deletions(-)
--- a/net/core/drop_monitor.c +++ b/net/core/drop_monitor.c @@ -1716,30 +1716,30 @@ static int __init init_net_drop_monitor( return -ENOSPC; }
- rc = genl_register_family(&net_drop_monitor_family); - if (rc) { - pr_err("Could not create drop monitor netlink family\n"); - return rc; + for_each_possible_cpu(cpu) { + net_dm_cpu_data_init(cpu); + net_dm_hw_cpu_data_init(cpu); } - WARN_ON(net_drop_monitor_family.mcgrp_offset != NET_DM_GRP_ALERT);
rc = register_netdevice_notifier(&dropmon_net_notifier); if (rc < 0) { pr_crit("Failed to register netdevice notifier\n"); + return rc; + } + + rc = genl_register_family(&net_drop_monitor_family); + if (rc) { + pr_err("Could not create drop monitor netlink family\n"); goto out_unreg; } + WARN_ON(net_drop_monitor_family.mcgrp_offset != NET_DM_GRP_ALERT);
rc = 0;
- for_each_possible_cpu(cpu) { - net_dm_cpu_data_init(cpu); - net_dm_hw_cpu_data_init(cpu); - } - goto out;
out_unreg: - genl_unregister_family(&net_drop_monitor_family); + WARN_ON(unregister_netdevice_notifier(&dropmon_net_notifier)); out: return rc; } @@ -1748,19 +1748,18 @@ static void exit_net_drop_monitor(void) { int cpu;
- BUG_ON(unregister_netdevice_notifier(&dropmon_net_notifier)); - /* * Because of the module_get/put we do in the trace state change path * we are guaranteed not to have any current users when we get here */ + BUG_ON(genl_unregister_family(&net_drop_monitor_family)); + + BUG_ON(unregister_netdevice_notifier(&dropmon_net_notifier));
for_each_possible_cpu(cpu) { net_dm_hw_cpu_data_fini(cpu); net_dm_cpu_data_fini(cpu); } - - BUG_ON(genl_unregister_family(&net_drop_monitor_family)); }
module_init(init_net_drop_monitor);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Haoxiang Li haoxiang_li2024@163.com
commit 878e7b11736e062514e58f3b445ff343e6705537 upstream.
Add check for the return value of nfp_app_ctrl_msg_alloc() in nfp_bpf_cmsg_alloc() to prevent null pointer dereference.
Fixes: ff3d43f7568c ("nfp: bpf: implement helpers for FW map ops") Cc: stable@vger.kernel.org Signed-off-by: Haoxiang Li haoxiang_li2024@163.com Link: https://patch.msgid.link/20250218030409.2425798-1-haoxiang_li2024@163.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/netronome/nfp/bpf/cmsg.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/net/ethernet/netronome/nfp/bpf/cmsg.c +++ b/drivers/net/ethernet/netronome/nfp/bpf/cmsg.c @@ -20,6 +20,8 @@ nfp_bpf_cmsg_alloc(struct nfp_app_bpf *b struct sk_buff *skb;
skb = nfp_app_ctrl_msg_alloc(bpf->app, size, GFP_KERNEL); + if (!skb) + return NULL; skb_put(skb, size);
return skb;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nikita Zhandarovich n.zhandarovich@fintech.ru
commit a8c9a453387640dbe45761970f41301a6985e7fa upstream.
If 'micfil->quality' received from micfil_quality_set() somehow ends up with an unpredictable value, the switch() statement will fail to initialize the local variable qsel before regmap_update_bits() tries to use it.
While it is unlikely, play it safe and add a default case that returns -EINVAL.
Found by Linux Verification Center (linuxtesting.org) with static analysis tool SVACE.
Fixes: bea1d61d5892 ("ASoC: fsl_micfil: rework quality setting") Cc: stable@vger.kernel.org Signed-off-by: Nikita Zhandarovich n.zhandarovich@fintech.ru Link: https://patch.msgid.link/20250116142436.22389-1-n.zhandarovich@fintech.ru Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/soc/fsl/fsl_micfil.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/sound/soc/fsl/fsl_micfil.c +++ b/sound/soc/fsl/fsl_micfil.c @@ -123,6 +123,8 @@ static int micfil_set_quality(struct fsl case QUALITY_VLOW2: qsel = MICFIL_QSEL_VLOW2_QUALITY; break; + default: + return -EINVAL; }
return regmap_update_bits(micfil->regmap, REG_MICFIL_CTRL2,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wentao Liang vulab@iscas.ac.cn
commit 822b7ec657e99b44b874e052d8540d8b54fe8569 upstream.
Check the return value of snd_ctl_rename_id() in snd_hda_create_dig_out_ctls(). Ensure that failures are properly handled.
[ Note: the error cannot happen practically because the only error condition in snd_ctl_rename_id() is the missing ID, but this is a rename, hence it must be present. But for the code consistency, it's safer to have always the proper return check -- tiwai ]
Fixes: 5c219a340850 ("ALSA: hda: Fix kctl->id initialization") Cc: stable@vger.kernel.org # 6.4+ Signed-off-by: Wentao Liang vulab@iscas.ac.cn Link: https://patch.msgid.link/20250213074543.1620-1-vulab@iscas.ac.cn Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/pci/hda/hda_codec.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/sound/pci/hda/hda_codec.c +++ b/sound/pci/hda/hda_codec.c @@ -2465,7 +2465,9 @@ int snd_hda_create_dig_out_ctls(struct h break; id = kctl->id; id.index = spdif_index; - snd_ctl_rename_id(codec->card, &kctl->id, &id); + err = snd_ctl_rename_id(codec->card, &kctl->id, &id); + if (err < 0) + return err; } bus->primary_dig_out_type = HDA_PCM_TYPE_HDMI; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: John Veness john-linux@pelago.org.uk
commit 6d1f86610f23b0bc334d6506a186f21a98f51392 upstream.
Allows the LED on the dedicated mute button on the HP ProBook 450 G4 laptop to change colour correctly.
Signed-off-by: John Veness john-linux@pelago.org.uk Cc: stable@vger.kernel.org Link: https://patch.msgid.link/2fb55d48-6991-4a42-b591-4c78f2fad8d7@pelago.org.uk Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/pci/hda/patch_conexant.c | 1 + 1 file changed, 1 insertion(+)
--- a/sound/pci/hda/patch_conexant.c +++ b/sound/pci/hda/patch_conexant.c @@ -1098,6 +1098,7 @@ static const struct snd_pci_quirk cxt506 SND_PCI_QUIRK(0x103c, 0x814f, "HP ZBook 15u G3", CXT_FIXUP_MUTE_LED_GPIO), SND_PCI_QUIRK(0x103c, 0x8174, "HP Spectre x360", CXT_FIXUP_HP_SPECTRE), SND_PCI_QUIRK(0x103c, 0x822e, "HP ProBook 440 G4", CXT_FIXUP_MUTE_LED_GPIO), + SND_PCI_QUIRK(0x103c, 0x8231, "HP ProBook 450 G4", CXT_FIXUP_MUTE_LED_GPIO), SND_PCI_QUIRK(0x103c, 0x828c, "HP EliteBook 840 G4", CXT_FIXUP_HP_DOCK), SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christian Brauner brauner@kernel.org
commit 56d5f3eba3f5de0efdd556de4ef381e109b973a9 upstream.
In [1] it was reported that the acct(2) system call can be used to trigger a NULL deref in cases where it is set to write to a file that triggers an internal lookup. This can, e.g., happen when pointing acct(2) to /sys/power/resume. At the point where the write to this file happens, the calling task has already exited and called exit_fs(). A lookup will thus trigger a NULL-deref when accessing current->fs.
Reorganize the code so that the final write happens from the workqueue but with the caller's credentials. This preserves the (strange) permission model and has almost no regression risk.
This API should stop existing, though.
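Condensed, the resulting split looks like this (names taken from the patch below; a sketch, not the full code):

/* exiting task, under acct->lock: snapshot only, no blocking I/O */
fill_ac(acct);
schedule_work(&acct->work);
wait_for_completion(&acct->done);

/* workqueue side: perform the write with the enabler's credentials */
cred = override_creds(file->f_cred);
if (check_free_space(acct) && file_start_write_trylock(file)) {
        loff_t pos = 0;         /* opened O_APPEND, position irrelevant */
        __kernel_write(file, &acct->ac, sizeof(acct_t), &pos);
        file_end_write(file);
}
revert_creds(cred);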
Link: https://lore.kernel.org/r/20250127091811.3183623-1-quzicheng@huawei.com [1] Link: https://lore.kernel.org/r/20250211-work-acct-v1-1-1c16aecab8b3@kernel.org Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Reported-by: Zicheng Qu quzicheng@huawei.com Cc: stable@vger.kernel.org Signed-off-by: Christian Brauner brauner@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/acct.c | 120 +++++++++++++++++++++++++++++++++------------------------- 1 file changed, 70 insertions(+), 50 deletions(-)
--- a/kernel/acct.c +++ b/kernel/acct.c @@ -104,48 +104,50 @@ struct bsd_acct_struct { atomic_long_t count; struct rcu_head rcu; struct mutex lock; - int active; + bool active; + bool check_space; unsigned long needcheck; struct file *file; struct pid_namespace *ns; struct work_struct work; struct completion done; + acct_t ac; };
-static void do_acct_process(struct bsd_acct_struct *acct); +static void fill_ac(struct bsd_acct_struct *acct); +static void acct_write_process(struct bsd_acct_struct *acct);
/* * Check the amount of free space and suspend/resume accordingly. */ -static int check_free_space(struct bsd_acct_struct *acct) +static bool check_free_space(struct bsd_acct_struct *acct) { struct kstatfs sbuf;
- if (time_is_after_jiffies(acct->needcheck)) - goto out; + if (!acct->check_space) + return acct->active;
/* May block */ if (vfs_statfs(&acct->file->f_path, &sbuf)) - goto out; + return acct->active;
if (acct->active) { u64 suspend = sbuf.f_blocks * SUSPEND; do_div(suspend, 100); if (sbuf.f_bavail <= suspend) { - acct->active = 0; + acct->active = false; pr_info("Process accounting paused\n"); } } else { u64 resume = sbuf.f_blocks * RESUME; do_div(resume, 100); if (sbuf.f_bavail >= resume) { - acct->active = 1; + acct->active = true; pr_info("Process accounting resumed\n"); } }
acct->needcheck = jiffies + ACCT_TIMEOUT*HZ; -out: return acct->active; }
@@ -190,7 +192,11 @@ static void acct_pin_kill(struct fs_pin { struct bsd_acct_struct *acct = to_acct(pin); mutex_lock(&acct->lock); - do_acct_process(acct); + /* + * Fill the accounting struct with the exiting task's info + * before punting to the workqueue. + */ + fill_ac(acct); schedule_work(&acct->work); wait_for_completion(&acct->done); cmpxchg(&acct->ns->bacct, pin, NULL); @@ -203,6 +209,9 @@ static void close_work(struct work_struc { struct bsd_acct_struct *acct = container_of(work, struct bsd_acct_struct, work); struct file *file = acct->file; + + /* We were fired by acct_pin_kill() which holds acct->lock. */ + acct_write_process(acct); if (file->f_op->flush) file->f_op->flush(file, NULL); __fput_sync(file); @@ -431,13 +440,27 @@ static u32 encode_float(u64 value) * do_exit() or when switching to a different output file. */
-static void fill_ac(acct_t *ac) +static void fill_ac(struct bsd_acct_struct *acct) { struct pacct_struct *pacct = ¤t->signal->pacct; + struct file *file = acct->file; + acct_t *ac = &acct->ac; u64 elapsed, run_time; time64_t btime; struct tty_struct *tty;
+ lockdep_assert_held(&acct->lock); + + if (time_is_after_jiffies(acct->needcheck)) { + acct->check_space = false; + + /* Don't fill in @ac if nothing will be written. */ + if (!acct->active) + return; + } else { + acct->check_space = true; + } + /* * Fill the accounting struct with the needed info as recorded * by the different kernel functions. @@ -485,64 +508,61 @@ static void fill_ac(acct_t *ac) ac->ac_majflt = encode_comp_t(pacct->ac_majflt); ac->ac_exitcode = pacct->ac_exitcode; spin_unlock_irq(¤t->sighand->siglock); -} -/* - * do_acct_process does all actual work. Caller holds the reference to file. - */ -static void do_acct_process(struct bsd_acct_struct *acct) -{ - acct_t ac; - unsigned long flim; - const struct cred *orig_cred; - struct file *file = acct->file; - - /* - * Accounting records are not subject to resource limits. - */ - flim = rlimit(RLIMIT_FSIZE); - current->signal->rlim[RLIMIT_FSIZE].rlim_cur = RLIM_INFINITY; - /* Perform file operations on behalf of whoever enabled accounting */ - orig_cred = override_creds(file->f_cred);
- /* - * First check to see if there is enough free_space to continue - * the process accounting system. - */ - if (!check_free_space(acct)) - goto out; - - fill_ac(&ac); /* we really need to bite the bullet and change layout */ - ac.ac_uid = from_kuid_munged(file->f_cred->user_ns, orig_cred->uid); - ac.ac_gid = from_kgid_munged(file->f_cred->user_ns, orig_cred->gid); + ac->ac_uid = from_kuid_munged(file->f_cred->user_ns, current_uid()); + ac->ac_gid = from_kgid_munged(file->f_cred->user_ns, current_gid()); #if ACCT_VERSION == 1 || ACCT_VERSION == 2 /* backward-compatible 16 bit fields */ - ac.ac_uid16 = ac.ac_uid; - ac.ac_gid16 = ac.ac_gid; + ac->ac_uid16 = ac->ac_uid; + ac->ac_gid16 = ac->ac_gid; #elif ACCT_VERSION == 3 { struct pid_namespace *ns = acct->ns;
- ac.ac_pid = task_tgid_nr_ns(current, ns); + ac->ac_pid = task_tgid_nr_ns(current, ns); rcu_read_lock(); - ac.ac_ppid = task_tgid_nr_ns(rcu_dereference(current->real_parent), - ns); + ac->ac_ppid = task_tgid_nr_ns(rcu_dereference(current->real_parent), ns); rcu_read_unlock(); } #endif +} + +static void acct_write_process(struct bsd_acct_struct *acct) +{ + struct file *file = acct->file; + const struct cred *cred; + acct_t *ac = &acct->ac; + + /* Perform file operations on behalf of whoever enabled accounting */ + cred = override_creds(file->f_cred); + /* - * Get freeze protection. If the fs is frozen, just skip the write - * as we could deadlock the system otherwise. + * First check to see if there is enough free_space to continue + * the process accounting system. Then get freeze protection. If + * the fs is frozen, just skip the write as we could deadlock + * the system otherwise. */ - if (file_start_write_trylock(file)) { + if (check_free_space(acct) && file_start_write_trylock(file)) { /* it's been opened O_APPEND, so position is irrelevant */ loff_t pos = 0; - __kernel_write(file, &ac, sizeof(acct_t), &pos); + __kernel_write(file, ac, sizeof(acct_t), &pos); file_end_write(file); } -out: + + revert_creds(cred); +} + +static void do_acct_process(struct bsd_acct_struct *acct) +{ + unsigned long flim; + + /* Accounting records are not subject to resource limits. */ + flim = rlimit(RLIMIT_FSIZE); + current->signal->rlim[RLIMIT_FSIZE].rlim_cur = RLIM_INFINITY; + fill_ac(acct); + acct_write_process(acct); current->signal->rlim[RLIMIT_FSIZE].rlim_cur = flim; - revert_creds(orig_cred); }
/**
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christian Brauner brauner@kernel.org
commit 890ed45bde808c422c3c27d3285fc45affa0f930 upstream.
There's no point in allowing anything kernel-internal, nor procfs or sysfs.
Link: https://lore.kernel.org/r/20250127091811.3183623-1-quzicheng@huawei.com Link: https://lore.kernel.org/r/20250211-work-acct-v1-2-1c16aecab8b3@kernel.org Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Reviewed-by: Amir Goldstein amir73il@gmail.com Reported-by: Zicheng Qu quzicheng@huawei.com Cc: stable@vger.kernel.org Signed-off-by: Christian Brauner brauner@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/acct.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+)
--- a/kernel/acct.c +++ b/kernel/acct.c @@ -244,6 +244,20 @@ static int acct_on(struct filename *path return -EACCES; }
+ /* Exclude kernel kernel internal filesystems. */ + if (file_inode(file)->i_sb->s_flags & (SB_NOUSER | SB_KERNMOUNT)) { + kfree(acct); + filp_close(file, NULL); + return -EINVAL; + } + + /* Exclude procfs and sysfs. */ + if (file_inode(file)->i_sb->s_iflags & SB_I_USERNS_VISIBLE) { + kfree(acct); + filp_close(file, NULL); + return -EINVAL; + } + if (!(file->f_mode & FMODE_CAN_WRITE)) { kfree(acct); filp_close(file, NULL);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ricardo Cañuelo Navarro rcn@igalia.com
commit 2ede647a6fde3e54a6bfda7cf01c716649655900 upstream.
Add a sanity check to madvise_dontneed_free() to address a corner case in madvise where a race condition causes the current vma being processed to be backed by a different page size.
During a madvise(MADV_DONTNEED) call on a memory region registered with a userfaultfd, there's a period of time where the process mm lock is temporarily released in order to send a UFFD_EVENT_REMOVE and let userspace handle the event. During this time, the vma covering the current address range may change due to an explicit mmap done concurrently by another thread.
If, after that change, the memory region, which was originally backed by 4KB pages, is now backed by hugepages, the end address is rounded down to a hugepage boundary to avoid data loss (see "Fixes" below). This rounding may cause the end address to be truncated to the same address as the start.
Make this corner case follow the same semantics as in other similar cases where the requested region has zero length (ie. return 0).
This will make madvise_walk_vmas() continue to the next vma in the range (this time holding the process mm lock) which, due to the prev pointer becoming stale because of the vma change, will be the same hugepage-backed vma that was just checked before. The next time madvise_dontneed_free() runs for this vma, if the start address isn't aligned to a hugepage boundary, it'll return -EINVAL, which is also in line with the madvise api.
From the userspace perspective, madvise() will return EINVAL because the start address isn't aligned according to the new vma alignment requirements (hugepage), even though it was correctly page-aligned when the call was issued.
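A condensed timeline of the race being handled (threads A and B; a sketch):

/*
 * A: madvise(MADV_DONTNEED) on a 4kB-backed, uffd-registered range
 * A:   mmap_lock dropped to deliver UFFD_EVENT_REMOVE to userspace
 * B:   mmap()s a hugetlb vma over the same address range
 * A:   retakes mmap_lock; 'end' is rounded down to a hugepage boundary
 *      and may now equal 'start'
 * A:   with this fix: treated as a zero-length request, return 0; a
 *      later pass with a start unaligned to the hugepage size returns
 *      -EINVAL, per the madvise API
 */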
Link: https://lkml.kernel.org/r/20250203075206.1452208-1-rcn@igalia.com Fixes: 8ebe0a5eaaeb ("mm,madvise,hugetlb: fix unexpected data loss with MADV_DONTNEED on hugetlbfs") Signed-off-by: Ricardo Cañuelo Navarro rcn@igalia.com Reviewed-by: Oscar Salvador osalvador@suse.de Cc: Florent Revest revest@google.com Cc: Rik van Riel riel@surriel.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/madvise.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-)
--- a/mm/madvise.c +++ b/mm/madvise.c @@ -879,7 +879,16 @@ static long madvise_dontneed_free(struct */ end = vma->vm_end; } - VM_WARN_ON(start >= end); + /* + * If the memory region between start and end was + * originally backed by 4kB pages and then remapped to + * be backed by hugepages while mmap_lock was dropped, + * the adjustment for hugetlb vma above may have rounded + * end down to the start address. + */ + if (start == end) + return 0; + VM_WARN_ON(start > end); }
if (behavior == MADV_DONTNEED || behavior == MADV_DONTNEED_LOCKED)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Niravkumar L Rabara niravkumar.l.rabara@intel.com
commit 2b9df00cded911e2ca2cfae5c45082166b24f8aa upstream.
Replace dma_request_channel() with dma_request_chan_by_mask() and use helper functions to return a proper error code instead of a fixed -EBUSY.
Fixes: ec4ba01e894d ("mtd: rawnand: Add new Cadence NAND driver to MTD subsystem") Cc: stable@vger.kernel.org Signed-off-by: Niravkumar L Rabara niravkumar.l.rabara@intel.com Signed-off-by: Miquel Raynal miquel.raynal@bootlin.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/mtd/nand/raw/cadence-nand-controller.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-)
--- a/drivers/mtd/nand/raw/cadence-nand-controller.c +++ b/drivers/mtd/nand/raw/cadence-nand-controller.c @@ -2863,11 +2863,10 @@ static int cadence_nand_init(struct cdns dma_cap_set(DMA_MEMCPY, mask);
if (cdns_ctrl->caps1->has_dma) { - cdns_ctrl->dmac = dma_request_channel(mask, NULL, NULL); - if (!cdns_ctrl->dmac) { - dev_err(cdns_ctrl->dev, - "Unable to get a DMA channel\n"); - ret = -EBUSY; + cdns_ctrl->dmac = dma_request_chan_by_mask(&mask); + if (IS_ERR(cdns_ctrl->dmac)) { + ret = dev_err_probe(cdns_ctrl->dev, PTR_ERR(cdns_ctrl->dmac), + "%d: Failed to get a DMA channel\n", ret); goto disable_irq; } }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Niravkumar L Rabara niravkumar.l.rabara@intel.com
commit d76d22b5096c5b05208fd982b153b3f182350b19 upstream.
Remap the slave DMA I/O resources to enhance driver portability. Using a physical address causes DMA translation failure when the ARM SMMU is enabled.
Fixes: ec4ba01e894d ("mtd: rawnand: Add new Cadence NAND driver to MTD subsystem") Cc: stable@vger.kernel.org Signed-off-by: Niravkumar L Rabara niravkumar.l.rabara@intel.com Signed-off-by: Miquel Raynal miquel.raynal@bootlin.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/mtd/nand/raw/cadence-nand-controller.c | 29 +++++++++++++++++++++---- 1 file changed, 25 insertions(+), 4 deletions(-)
--- a/drivers/mtd/nand/raw/cadence-nand-controller.c +++ b/drivers/mtd/nand/raw/cadence-nand-controller.c @@ -469,6 +469,8 @@ struct cdns_nand_ctrl { struct { void __iomem *virt; dma_addr_t dma; + dma_addr_t iova_dma; + u32 size; } io;
int irq; @@ -1830,11 +1832,11 @@ static int cadence_nand_slave_dma_transf }
if (dir == DMA_FROM_DEVICE) { - src_dma = cdns_ctrl->io.dma; + src_dma = cdns_ctrl->io.iova_dma; dst_dma = buf_dma; } else { src_dma = buf_dma; - dst_dma = cdns_ctrl->io.dma; + dst_dma = cdns_ctrl->io.iova_dma; }
tx = dmaengine_prep_dma_memcpy(cdns_ctrl->dmac, dst_dma, src_dma, len, @@ -2828,6 +2830,7 @@ cadence_nand_irq_cleanup(int irqnum, str static int cadence_nand_init(struct cdns_nand_ctrl *cdns_ctrl) { dma_cap_mask_t mask; + struct dma_device *dma_dev = cdns_ctrl->dmac->device; int ret;
cdns_ctrl->cdma_desc = dma_alloc_coherent(cdns_ctrl->dev, @@ -2871,6 +2874,16 @@ static int cadence_nand_init(struct cdns } }
+ cdns_ctrl->io.iova_dma = dma_map_resource(dma_dev->dev, cdns_ctrl->io.dma, + cdns_ctrl->io.size, + DMA_BIDIRECTIONAL, 0); + + ret = dma_mapping_error(dma_dev->dev, cdns_ctrl->io.iova_dma); + if (ret) { + dev_err(cdns_ctrl->dev, "Failed to map I/O resource to DMA\n"); + goto dma_release_chnl; + } + nand_controller_init(&cdns_ctrl->controller); INIT_LIST_HEAD(&cdns_ctrl->chips);
@@ -2881,18 +2894,22 @@ static int cadence_nand_init(struct cdns if (ret) { dev_err(cdns_ctrl->dev, "Failed to register MTD: %d\n", ret); - goto dma_release_chnl; + goto unmap_dma_resource; }
kfree(cdns_ctrl->buf); cdns_ctrl->buf = kzalloc(cdns_ctrl->buf_size, GFP_KERNEL); if (!cdns_ctrl->buf) { ret = -ENOMEM; - goto dma_release_chnl; + goto unmap_dma_resource; }
return 0;
+unmap_dma_resource: + dma_unmap_resource(dma_dev->dev, cdns_ctrl->io.iova_dma, + cdns_ctrl->io.size, DMA_BIDIRECTIONAL, 0); + dma_release_chnl: if (cdns_ctrl->dmac) dma_release_channel(cdns_ctrl->dmac); @@ -2914,6 +2931,8 @@ free_buf_desc: static void cadence_nand_remove(struct cdns_nand_ctrl *cdns_ctrl) { cadence_nand_chips_cleanup(cdns_ctrl); + dma_unmap_resource(cdns_ctrl->dmac->device->dev, cdns_ctrl->io.iova_dma, + cdns_ctrl->io.size, DMA_BIDIRECTIONAL, 0); cadence_nand_irq_cleanup(cdns_ctrl->irq, cdns_ctrl); kfree(cdns_ctrl->buf); dma_free_coherent(cdns_ctrl->dev, sizeof(struct cadence_nand_cdma_desc), @@ -2982,7 +3001,9 @@ static int cadence_nand_dt_probe(struct cdns_ctrl->io.virt = devm_platform_get_and_ioremap_resource(ofdev, 1, &res); if (IS_ERR(cdns_ctrl->io.virt)) return PTR_ERR(cdns_ctrl->io.virt); + cdns_ctrl->io.dma = res->start; + cdns_ctrl->io.size = resource_size(res);
dt->clk = devm_clk_get(cdns_ctrl->dev, "nf_clk"); if (IS_ERR(dt->clk))
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Niravkumar L Rabara niravkumar.l.rabara@intel.com
commit f37d135b42cb484bdecee93f56b9f483214ede78 upstream.
dma_map_single() uses the physical/bus device (DMA), but dma_unmap_single() was using the framework device (NAND controller), which is incorrect. Fix dma_unmap_single() to use the correct physical/bus device.
Fixes: ec4ba01e894d ("mtd: rawnand: Add new Cadence NAND driver to MTD subsystem") Cc: stable@vger.kernel.org Signed-off-by: Niravkumar L Rabara niravkumar.l.rabara@intel.com Signed-off-by: Miquel Raynal miquel.raynal@bootlin.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/mtd/nand/raw/cadence-nand-controller.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/mtd/nand/raw/cadence-nand-controller.c +++ b/drivers/mtd/nand/raw/cadence-nand-controller.c @@ -1858,12 +1858,12 @@ static int cadence_nand_slave_dma_transf dma_async_issue_pending(cdns_ctrl->dmac); wait_for_completion(&finished);
- dma_unmap_single(cdns_ctrl->dev, buf_dma, len, dir); + dma_unmap_single(dma_dev->dev, buf_dma, len, dir);
return 0;
err_unmap: - dma_unmap_single(cdns_ctrl->dev, buf_dma, len, dir); + dma_unmap_single(dma_dev->dev, buf_dma, len, dir);
err: dev_dbg(cdns_ctrl->dev, "Fall back to CPU I/O\n");
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Haoxiang Li haoxiang_li2024@163.com
commit 860ca5e50f73c2a1cef7eefc9d39d04e275417f7 upstream.
Add check for the return value of cifs_buf_get() and cifs_small_buf_get() in receive_encrypted_standard() to prevent null pointer dereference.
Fixes: eec04ea11969 ("smb: client: fix OOB in receive_encrypted_standard()") Cc: stable@vger.kernel.org Signed-off-by: Haoxiang Li haoxiang_li2024@163.com Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/smb/client/smb2ops.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/fs/smb/client/smb2ops.c +++ b/fs/smb/client/smb2ops.c @@ -5158,6 +5158,10 @@ one_more: next_buffer = (char *)cifs_buf_get(); else next_buffer = (char *)cifs_small_buf_get(); + if (!next_buffer) { + cifs_server_dbg(VFS, "No memory for (large) SMB response\n"); + return -1; + } memcpy(next_buffer, buf + next_cmd, pdu_length - next_cmd); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Komal Bajaj quic_kbajaj@quicinc.com
commit c158647c107358bf1be579f98e4bb705c1953292 upstream.
The previous implementation incorrectly configured the cmn_interrupt_2_enable register for interrupt handling. Using cmn_interrupt_2_enable to configure Tag and Data RAM ECC interrupts leads to issues like double handling of the interrupts (EL1 and EL3), as cmn_interrupt_2_enable is meant to be configured for interrupts which need to be handled by EL3.
EL1 LLCC EDAC driver needs to use cmn_interrupt_0_enable register to configure Tag, Data RAM ECC interrupts instead of cmn_interrupt_2_enable.
Fixes: 27450653f1db ("drivers: edac: Add EDAC driver support for QCOM SoCs") Signed-off-by: Komal Bajaj quic_kbajaj@quicinc.com Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Reviewed-by: Manivannan Sadhasivam manivannan.sadhasivam@linaro.org Cc: stable@kernel.org Link: https://lore.kernel.org/r/20241119064608.12326-1-quic_kbajaj@quicinc.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/edac/qcom_edac.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/edac/qcom_edac.c +++ b/drivers/edac/qcom_edac.c @@ -95,7 +95,7 @@ static int qcom_llcc_core_setup(struct l * Configure interrupt enable registers such that Tag, Data RAM related * interrupts are propagated to interrupt controller for servicing */ - ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_2_enable, + ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_0_enable, TRP0_INTERRUPT_ENABLE, TRP0_INTERRUPT_ENABLE); if (ret) @@ -113,7 +113,7 @@ static int qcom_llcc_core_setup(struct l if (ret) return ret;
- ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_2_enable, + ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_0_enable, DRP0_INTERRUPT_ENABLE, DRP0_INTERRUPT_ENABLE); if (ret)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sebastian Andrzej Siewior bigeasy@linutronix.de
commit 57b76bedc5c52c66968183b5ef57234894c25ce7 upstream.
The function tracer should record the preemption level at the point when the function is invoked. If the tracing subsystem decrements the preemption counter, it needs to correct for this before feeding the data into the trace buffer. This was broken by the commit cited below, which shifted the preempt-disabled section.
Use tracing_gen_ctx_dec() which properly subtracts one from the preemption counter on a preemptible kernel.
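As a sketch of the calling pattern (illustrative, not the full kernel code): since the commit cited below, the recursion lock taken by the function tracer disables preemption, so the callback sees the call site's preempt_count plus one, and the recorded context must subtract that level:

	bit = ftrace_test_recursion_trylock(ip, parent_ip);
	if (bit < 0)
		return;
	/* preemption is now disabled by the trylock above, so record
	 * the context with one preemption level subtracted */
	trace_ctx = tracing_gen_ctx_dec();
	...
	ftrace_test_recursion_unlock(bit);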
Cc: stable@vger.kernel.org Cc: Wander Lairson Costa wander@redhat.com Cc: Masami Hiramatsu mhiramat@kernel.org Cc: Mathieu Desnoyers mathieu.desnoyers@efficios.com Cc: Thomas Gleixner tglx@linutronix.de Link: https://lore.kernel.org/20250220140749.pfw8qoNZ@linutronix.de Fixes: ce5e48036c9e7 ("ftrace: disable preemption when recursion locked") Signed-off-by: Sebastian Andrzej Siewior bigeasy@linutronix.de Tested-by: Wander Lairson Costa wander@redhat.com Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/trace/trace_functions.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-)
--- a/kernel/trace/trace_functions.c +++ b/kernel/trace/trace_functions.c @@ -185,7 +185,7 @@ function_trace_call(unsigned long ip, un if (bit < 0) return;
- trace_ctx = tracing_gen_ctx(); + trace_ctx = tracing_gen_ctx_dec();
cpu = smp_processor_id(); data = per_cpu_ptr(tr->array_buffer.data, cpu); @@ -285,7 +285,6 @@ function_no_repeats_trace_call(unsigned struct trace_array *tr = op->private; struct trace_array_cpu *data; unsigned int trace_ctx; - unsigned long flags; int bit; int cpu;
@@ -312,8 +311,7 @@ function_no_repeats_trace_call(unsigned if (is_repeat_check(tr, last_info, ip, parent_ip)) goto out;
- local_save_flags(flags); - trace_ctx = tracing_gen_ctx_flags(flags); + trace_ctx = tracing_gen_ctx_dec(); process_repeats(tr, ip, parent_ip, last_info, trace_ctx);
trace_function(tr, ip, parent_ip, trace_ctx);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Steven Rostedt rostedt@goodmis.org
commit 8eb4b09e0bbd30981305643229fe7640ad41b667 upstream.
Check if a function is already in the manager ops of a subops. A manager ops contains multiple subops, and if two or more subops are tracing the same function, the manager ops only needs a single entry in its hash.
Cc: stable@vger.kernel.org Cc: Mark Rutland mark.rutland@arm.com Cc: Mathieu Desnoyers mathieu.desnoyers@efficios.com Cc: Andrew Morton akpm@linux-foundation.org Cc: Sven Schnelle svens@linux.ibm.com Cc: Vasily Gorbik gor@linux.ibm.com Cc: Alexander Gordeev agordeev@linux.ibm.com Link: https://lore.kernel.org/20250220202055.226762894@goodmis.org Fixes: 4f554e955614f ("ftrace: Add ftrace_set_filter_ips function") Tested-by: Heiko Carstens hca@linux.ibm.com Reviewed-by: Masami Hiramatsu (Google) mhiramat@kernel.org Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/trace/ftrace.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -5108,6 +5108,9 @@ __ftrace_match_addr(struct ftrace_hash * return -ENOENT; free_hash_entry(hash, entry); return 0; + } else if (__ftrace_lookup_ip(hash, ip) != NULL) { + /* Already exists */ + return 0; }
return add_hash_entry(hash, ip);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Patrick Bellasi derkling@google.com
commit 318e8c339c9a0891c389298bb328ed0762a9935e upstream.
In [1] the meaning of the synthetic IBPB flags has been redefined for a better separation of concerns: - ENTRY_IBPB -- issue IBPB on entry only - IBPB_ON_VMEXIT -- issue IBPB on VM-Exit only and the Retbleed mitigations have been updated to match this new semantics.
Commit [2] was merged shortly before [1], and their interaction was not handled properly. This resulted in IBPB not being triggered on VM-Exit in all SRSO mitigation configs requesting an IBPB there.
Specifically, an IBPB on VM-Exit is triggered only when X86_FEATURE_IBPB_ON_VMEXIT is set. However:
- X86_FEATURE_IBPB_ON_VMEXIT is not set for "spec_rstack_overflow=ibpb", because before [1] having X86_FEATURE_ENTRY_IBPB was enough. Hence, an IBPB is triggered on entry but the expected IBPB on VM-exit is not.
- X86_FEATURE_IBPB_ON_VMEXIT is not set also when "spec_rstack_overflow=ibpb-vmexit" if X86_FEATURE_ENTRY_IBPB is already set.
That's because before [1] this was effectively redundant. Hence, e.g. a "retbleed=ibpb spec_rstack_overflow=ibpb-vmexit" config mistakenly reports the machine still vulnerable to SRSO, despite an IBPB being triggered both on entry and VM-Exit, because of the selected Retbleed mitigation config.
- UNTRAIN_RET_VM still won't actually do anything unless CONFIG_MITIGATION_IBPB_ENTRY is set.
For "spec_rstack_overflow=ibpb", enable IBPB on both entry and VM-Exit and clear X86_FEATURE_RSB_VMEXIT which is made superfluous by X86_FEATURE_IBPB_ON_VMEXIT. This effectively makes this mitigation option similar to the one for 'retbleed=ibpb', thus re-order the code for the RETBLEED_MITIGATION_IBPB option to be less confusing by having all features enabling before the disabling of the not needed ones.
For "spec_rstack_overflow=ibpb-vmexit", guard this mitigation setting with CONFIG_MITIGATION_IBPB_ENTRY to ensure UNTRAIN_RET_VM sequence is effectively compiled in. Drop instead the CONFIG_MITIGATION_SRSO guard, since none of the SRSO compile cruft is required in this configuration. Also, check only that the required microcode is present to effectively enabled the IBPB on VM-Exit.
Finally, update the Kconfig description for CONFIG_MITIGATION_IBPB_ENTRY to also list all the SRSO config settings enabled by this guard.
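For reference, the intended behaviour after this change can be summarised as follows (derived from the description above, assuming AMD hardware with the required microcode; not an excerpt from the patch):

  spec_rstack_overflow=ibpb         ENTRY_IBPB and IBPB_ON_VMEXIT set;
                                    UNRET, RETHUNK and RSB_VMEXIT cleared
                                    as superfluous
  spec_rstack_overflow=ibpb-vmexit  IBPB_ON_VMEXIT set, now guarded by
                                    CONFIG_MITIGATION_IBPB_ENTRY (named
                                    CPU_IBPB_ENTRY in the 6.1 backport)
                                    rather than by the SRSO guard, with
                                    only the microcode presence checked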
Fixes: 864bcaa38ee4 ("x86/cpu/kvm: Provide UNTRAIN_RET_VM") [1] Fixes: d893832d0e1e ("x86/srso: Add IBPB on VMEXIT") [2] Reported-by: Yosry Ahmed yosryahmed@google.com Signed-off-by: Patrick Bellasi derkling@google.com Reviewed-by: Borislav Petkov (AMD) bp@alien8.de Cc: stable@kernel.org Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/Kconfig | 3 ++- arch/x86/kernel/cpu/bugs.c | 20 ++++++++++++++------ 2 files changed, 16 insertions(+), 7 deletions(-)
--- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -2506,7 +2506,8 @@ config CPU_IBPB_ENTRY depends on CPU_SUP_AMD && X86_64 default y help - Compile the kernel with support for the retbleed=ibpb mitigation. + Compile the kernel with support for the retbleed=ibpb and + spec_rstack_overflow={ibpb,ibpb-vmexit} mitigations.
config CPU_IBRS_ENTRY bool "Enable IBRS on kernel entry" --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1092,6 +1092,8 @@ do_cmd_auto:
case RETBLEED_MITIGATION_IBPB: setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB); + setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT); + mitigate_smt = true;
/* * IBPB on entry already obviates the need for @@ -1101,8 +1103,6 @@ do_cmd_auto: setup_clear_cpu_cap(X86_FEATURE_UNRET); setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
- mitigate_smt = true; - /* * There is no need for RSB filling: entry_ibpb() ensures * all predictions, including the RSB, are invalidated, @@ -2607,6 +2607,7 @@ static void __init srso_select_mitigatio if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY)) { if (has_microcode) { setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB); + setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT); srso_mitigation = SRSO_MITIGATION_IBPB;
/* @@ -2616,6 +2617,13 @@ static void __init srso_select_mitigatio */ setup_clear_cpu_cap(X86_FEATURE_UNRET); setup_clear_cpu_cap(X86_FEATURE_RETHUNK); + + /* + * There is no need for RSB filling: entry_ibpb() ensures + * all predictions, including the RSB, are invalidated, + * regardless of IBPB implementation. + */ + setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT); } } else { pr_err("WARNING: kernel not compiled with CPU_IBPB_ENTRY.\n"); @@ -2624,8 +2632,8 @@ static void __init srso_select_mitigatio break;
case SRSO_CMD_IBPB_ON_VMEXIT: - if (IS_ENABLED(CONFIG_CPU_SRSO)) { - if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) { + if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY)) { + if (has_microcode) { setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT); srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
@@ -2637,9 +2645,9 @@ static void __init srso_select_mitigatio setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT); } } else { - pr_err("WARNING: kernel not compiled with CPU_SRSO.\n"); + pr_err("WARNING: kernel not compiled with CPU_IBPB_ENTRY.\n"); goto pred_cmd; - } + } break;
default:
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paolo Valente paolo.valente@linaro.org
commit 9778369a2d6c5ed2b81a04164c4aa9da1bdb193d upstream.
Single-LUN multi-actuator SCSI drives, as well as all multi-actuator SATA drives, appear as a single device to the I/O subsystem [1]. Yet they address commands to different actuators internally, as a function of the Logical Block Address (LBA). A given sector is reachable by only one of the actuators. For example, Seagate’s Serial Advanced Technology Attachment (SATA) version contains two actuators and maps the lower half of the SATA LBA space to the lower actuator and the upper half to the upper actuator.
Evidently, to fully utilize the actuators, no actuator should be left idle or underutilized while there is pending I/O for it. The block layer must somehow control the load of each actuator individually. This commit lays the groundwork for allowing BFQ to provide such per-actuator control.
BFQ associates an I/O-request sync bfq_queue with each process doing synchronous I/O, or with a group of processes, in case of queue merging. Then BFQ serves one bfq_queue at a time. While in service, a bfq_queue is emptied in request-position order. Yet the same process, or group of processes, may generate I/O for different actuators. In this case, different streams of I/O (each for a different actuator) get all inserted into the same sync bfq_queue. So there is basically no individual control on when each stream is served, i.e., on when the I/O requests of the stream are picked from the bfq_queue and dispatched to the drive.
This commit enables BFQ to control the service of each actuator individually for synchronous I/O, by simply splitting each sync bfq_queue into N queues, one for each actuator. In other words, a sync bfq_queue is now associated with a pair (process, actuator). As a consequence of this split, the per-queue proportional-share policy implemented by BFQ will guarantee that the sync I/O generated for each actuator, by each process, receives its fair share of service.
This is just a preparatory patch. If the I/O of the same process happens to be sent to different queues, then each of these queues may undergo queue merging. To handle this event, the bfq_io_cq data structure must be properly extended. In addition, stable merging must be disabled to avoid loss of control on individual actuators. Finally, async queues must be split too. These issues are described in detail and addressed in the next commits. As for this commit, although multiple per-process bfq_queues are provided, the I/O of each process or group of processes is still sent to only one queue, regardless of the actuator the I/O is for. The forwarding to distinct bfq_queues will be enabled after addressing the above issues.
[1] https://www.linaro.org/blog/budget-fair-queueing-bfq-linux-io-scheduler-opti...
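To make the LBA-to-actuator mapping above concrete, here is a toy user-space sketch (not BFQ code) for a dual-actuator drive that splits its LBA space in halves, as in the Seagate example; the lookup actually used by BFQ is introduced later in the series and walks per-actuator ranges reported by the device:

#include <stdio.h>

/* Hypothetical helper: lower half of the LBA space -> actuator 0,
 * upper half -> actuator 1. */
static unsigned int actuator_index(unsigned long long sector,
				   unsigned long long nr_sectors)
{
	return sector < nr_sectors / 2 ? 0 : 1;
}

int main(void)
{
	unsigned long long nr_sectors = 1ULL << 32;

	printf("%u\n", actuator_index(4096, nr_sectors));		/* 0 */
	printf("%u\n", actuator_index(nr_sectors - 4096, nr_sectors));	/* 1 */
	return 0;
}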
Reviewed-by: Damien Le Moal damien.lemoal@opensource.wdc.com Signed-off-by: Gabriele Felici felicigb@gmail.com Signed-off-by: Carmine Zaccagnino carmine@carminezacc.com Signed-off-by: Paolo Valente paolo.valente@linaro.org Link: https://lore.kernel.org/r/20230103145503.71712-2-paolo.valente@linaro.org Signed-off-by: Jens Axboe axboe@kernel.dk Stable-dep-of: e8b8344de398 ("block, bfq: fix bfqq uaf in bfq_limit_depth()") [Hagar: needed contextual fixes] Signed-off-by: Hagar Hemdan hagarhem@amazon.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- block/bfq-cgroup.c | 97 ++++++++++++++++--------------- block/bfq-iosched.c | 160 ++++++++++++++++++++++++++++++++++------------------ block/bfq-iosched.h | 51 +++++++++++++--- 3 files changed, 197 insertions(+), 111 deletions(-)
--- a/block/bfq-cgroup.c +++ b/block/bfq-cgroup.c @@ -704,6 +704,46 @@ void bfq_bfqq_move(struct bfq_data *bfqd bfq_put_queue(bfqq); }
+static void bfq_sync_bfqq_move(struct bfq_data *bfqd, + struct bfq_queue *sync_bfqq, + struct bfq_io_cq *bic, + struct bfq_group *bfqg, + unsigned int act_idx) +{ + struct bfq_queue *bfqq; + + if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) { + /* We are the only user of this bfqq, just move it */ + if (sync_bfqq->entity.sched_data != &bfqg->sched_data) + bfq_bfqq_move(bfqd, sync_bfqq, bfqg); + return; + } + + /* + * The queue was merged to a different queue. Check + * that the merge chain still belongs to the same + * cgroup. + */ + for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq) + if (bfqq->entity.sched_data != &bfqg->sched_data) + break; + if (bfqq) { + /* + * Some queue changed cgroup so the merge is not valid + * anymore. We cannot easily just cancel the merge (by + * clearing new_bfqq) as there may be other processes + * using this queue and holding refs to all queues + * below sync_bfqq->new_bfqq. Similarly if the merge + * already happened, we need to detach from bfqq now + * so that we cannot merge bio to a request from the + * old cgroup. + */ + bfq_put_cooperator(sync_bfqq); + bfq_release_process_ref(bfqd, sync_bfqq); + bic_set_bfqq(bic, NULL, true, act_idx); + } +} + /** * __bfq_bic_change_cgroup - move @bic to @bfqg. * @bfqd: the queue descriptor. @@ -714,60 +754,25 @@ void bfq_bfqq_move(struct bfq_data *bfqd * sure that the reference to cgroup is valid across the call (see * comments in bfq_bic_update_cgroup on this issue) */ -static void *__bfq_bic_change_cgroup(struct bfq_data *bfqd, +static void __bfq_bic_change_cgroup(struct bfq_data *bfqd, struct bfq_io_cq *bic, struct bfq_group *bfqg) { - struct bfq_queue *async_bfqq = bic_to_bfqq(bic, false); - struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, true); - struct bfq_entity *entity; - - if (async_bfqq) { - entity = &async_bfqq->entity; + unsigned int act_idx;
- if (entity->sched_data != &bfqg->sched_data) { - bic_set_bfqq(bic, NULL, false); + for (act_idx = 0; act_idx < bfqd->num_actuators; act_idx++) { + struct bfq_queue *async_bfqq = bic_to_bfqq(bic, false, act_idx); + struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, true, act_idx); + + if (async_bfqq && + async_bfqq->entity.sched_data != &bfqg->sched_data) { + bic_set_bfqq(bic, NULL, false, act_idx); bfq_release_process_ref(bfqd, async_bfqq); } - }
- if (sync_bfqq) { - if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) { - /* We are the only user of this bfqq, just move it */ - if (sync_bfqq->entity.sched_data != &bfqg->sched_data) - bfq_bfqq_move(bfqd, sync_bfqq, bfqg); - } else { - struct bfq_queue *bfqq; - - /* - * The queue was merged to a different queue. Check - * that the merge chain still belongs to the same - * cgroup. - */ - for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq) - if (bfqq->entity.sched_data != - &bfqg->sched_data) - break; - if (bfqq) { - /* - * Some queue changed cgroup so the merge is - * not valid anymore. We cannot easily just - * cancel the merge (by clearing new_bfqq) as - * there may be other processes using this - * queue and holding refs to all queues below - * sync_bfqq->new_bfqq. Similarly if the merge - * already happened, we need to detach from - * bfqq now so that we cannot merge bio to a - * request from the old cgroup. - */ - bfq_put_cooperator(sync_bfqq); - bic_set_bfqq(bic, NULL, true); - bfq_release_process_ref(bfqd, sync_bfqq); - } - } + if (sync_bfqq) + bfq_sync_bfqq_move(bfqd, sync_bfqq, bic, bfqg, act_idx); } - - return bfqg; }
void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio) --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -377,16 +377,23 @@ static const unsigned long bfq_late_stab #define RQ_BIC(rq) ((struct bfq_io_cq *)((rq)->elv.priv[0])) #define RQ_BFQQ(rq) ((rq)->elv.priv[1])
-struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync) +struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync, + unsigned int actuator_idx) { - return bic->bfqq[is_sync]; + if (is_sync) + return bic->bfqq[1][actuator_idx]; + + return bic->bfqq[0][actuator_idx]; }
static void bfq_put_stable_ref(struct bfq_queue *bfqq);
-void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync) +void bic_set_bfqq(struct bfq_io_cq *bic, + struct bfq_queue *bfqq, + bool is_sync, + unsigned int actuator_idx) { - struct bfq_queue *old_bfqq = bic->bfqq[is_sync]; + struct bfq_queue *old_bfqq = bic->bfqq[is_sync][actuator_idx];
/* Clear bic pointer if bfqq is detached from this bic */ if (old_bfqq && old_bfqq->bic == bic) @@ -405,7 +412,10 @@ void bic_set_bfqq(struct bfq_io_cq *bic, * we cancel the stable merge if * bic->stable_merge_bfqq == bfqq. */ - bic->bfqq[is_sync] = bfqq; + if (is_sync) + bic->bfqq[1][actuator_idx] = bfqq; + else + bic->bfqq[0][actuator_idx] = bfqq;
if (bfqq && bic->stable_merge_bfqq == bfqq) { /* @@ -680,9 +690,9 @@ static void bfq_limit_depth(blk_opf_t op { struct bfq_data *bfqd = data->q->elevator->elevator_data; struct bfq_io_cq *bic = bfq_bic_lookup(data->q); - struct bfq_queue *bfqq = bic ? bic_to_bfqq(bic, op_is_sync(opf)) : NULL; int depth; unsigned limit = data->q->nr_requests; + unsigned int act_idx;
/* Sync reads have full depth available */ if (op_is_sync(opf) && !op_is_write(opf)) { @@ -692,14 +702,21 @@ static void bfq_limit_depth(blk_opf_t op limit = (limit * depth) >> bfqd->full_depth_shift; }
- /* - * Does queue (or any parent entity) exceed number of requests that - * should be available to it? Heavily limit depth so that it cannot - * consume more available requests and thus starve other entities. - */ - if (bfqq && bfqq_request_over_limit(bfqq, limit)) - depth = 1; + for (act_idx = 0; bic && act_idx < bfqd->num_actuators; act_idx++) { + struct bfq_queue *bfqq = + bic_to_bfqq(bic, op_is_sync(opf), act_idx);
+ /* + * Does queue (or any parent entity) exceed number of + * requests that should be available to it? Heavily + * limit depth so that it cannot consume more + * available requests and thus starve other entities. + */ + if (bfqq && bfqq_request_over_limit(bfqq, limit)) { + depth = 1; + break; + } + } bfq_log(bfqd, "[%s] wr_busy %d sync %d depth %u", __func__, bfqd->wr_busy_queues, op_is_sync(opf), depth); if (depth) @@ -1820,6 +1837,18 @@ static bool bfq_bfqq_higher_class_or_wei return bfqq_weight > in_serv_weight; }
+/* + * Get the index of the actuator that will serve bio. + */ +static unsigned int bfq_actuator_index(struct bfq_data *bfqd, struct bio *bio) +{ + /* + * Multi-actuator support not complete yet, so always return 0 + * for the moment (to keep incomplete mechanisms off). + */ + return 0; +} + static bool bfq_better_to_idle(struct bfq_queue *bfqq);
static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd, @@ -2150,7 +2179,7 @@ static void bfq_check_waker(struct bfq_d * We reset waker detection logic also if too much time has passed * since the first detection. If wakeups are rare, pointless idling * doesn't hurt throughput that much. The condition below makes sure - * we do not uselessly idle blocking waker in more than 1/64 cases. + * we do not uselessly idle blocking waker in more than 1/64 cases. */ if (bfqd->last_completed_rq_bfqq != bfqq->tentative_waker_bfqq || @@ -2486,7 +2515,8 @@ static bool bfq_bio_merge(struct request */ bfq_bic_update_cgroup(bic, bio);
- bfqd->bio_bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf)); + bfqd->bio_bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf), + bfq_actuator_index(bfqd, bio)); } else { bfqd->bio_bfqq = NULL; } @@ -3188,7 +3218,7 @@ static struct bfq_queue *bfq_merge_bfqqs /* * Merge queues (that is, let bic redirect its requests to new_bfqq) */ - bic_set_bfqq(bic, new_bfqq, true); + bic_set_bfqq(bic, new_bfqq, true, bfqq->actuator_idx); bfq_mark_bfqq_coop(new_bfqq); /* * new_bfqq now belongs to at least two bics (it is a shared queue): @@ -4818,11 +4848,8 @@ check_queue: */ if (bfq_bfqq_wait_request(bfqq) || (bfqq->dispatched != 0 && bfq_better_to_idle(bfqq))) { - struct bfq_queue *async_bfqq = - bfqq->bic && bfqq->bic->bfqq[0] && - bfq_bfqq_busy(bfqq->bic->bfqq[0]) && - bfqq->bic->bfqq[0]->next_rq ? - bfqq->bic->bfqq[0] : NULL; + unsigned int act_idx = bfqq->actuator_idx; + struct bfq_queue *async_bfqq = NULL; struct bfq_queue *blocked_bfqq = !hlist_empty(&bfqq->woken_list) ? container_of(bfqq->woken_list.first, @@ -4830,6 +4857,10 @@ check_queue: woken_list_node) : NULL;
+ if (bfqq->bic && bfqq->bic->bfqq[0][act_idx] && + bfq_bfqq_busy(bfqq->bic->bfqq[0][act_idx]) && + bfqq->bic->bfqq[0][act_idx]->next_rq) + async_bfqq = bfqq->bic->bfqq[0][act_idx]; /* * The next four mutually-exclusive ifs decide * whether to try injection, and choose the queue to @@ -4914,7 +4945,7 @@ check_queue: icq_to_bic(async_bfqq->next_rq->elv.icq) == bfqq->bic && bfq_serv_to_charge(async_bfqq->next_rq, async_bfqq) <= bfq_bfqq_budget_left(async_bfqq)) - bfqq = bfqq->bic->bfqq[0]; + bfqq = bfqq->bic->bfqq[0][act_idx]; else if (bfqq->waker_bfqq && bfq_bfqq_busy(bfqq->waker_bfqq) && bfqq->waker_bfqq->next_rq && @@ -5375,48 +5406,54 @@ static void bfq_exit_bfqq(struct bfq_dat bfq_release_process_ref(bfqd, bfqq); }
-static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync) +static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync, + unsigned int actuator_idx) { - struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync); + struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync, actuator_idx); struct bfq_data *bfqd;
if (bfqq) bfqd = bfqq->bfqd; /* NULL if scheduler already exited */
if (bfqq && bfqd) { - unsigned long flags; - - spin_lock_irqsave(&bfqd->lock, flags); - bic_set_bfqq(bic, NULL, is_sync); + bic_set_bfqq(bic, NULL, is_sync, actuator_idx); bfq_exit_bfqq(bfqd, bfqq); - spin_unlock_irqrestore(&bfqd->lock, flags); } }
static void bfq_exit_icq(struct io_cq *icq) { struct bfq_io_cq *bic = icq_to_bic(icq); + struct bfq_data *bfqd = bic_to_bfqd(bic); + unsigned long flags; + unsigned int act_idx; + /* + * If bfqd and thus bfqd->num_actuators is not available any + * longer, then cycle over all possible per-actuator bfqqs in + * next loop. We rely on bic being zeroed on creation, and + * therefore on its unused per-actuator fields being NULL. + */ + unsigned int num_actuators = BFQ_MAX_ACTUATORS;
- if (bic->stable_merge_bfqq) { - struct bfq_data *bfqd = bic->stable_merge_bfqq->bfqd; + /* + * bfqd is NULL if scheduler already exited, and in that case + * this is the last time these queues are accessed. + */ + if (bfqd) { + spin_lock_irqsave(&bfqd->lock, flags); + num_actuators = bfqd->num_actuators; + }
- /* - * bfqd is NULL if scheduler already exited, and in - * that case this is the last time bfqq is accessed. - */ - if (bfqd) { - unsigned long flags; + if (bic->stable_merge_bfqq) + bfq_put_stable_ref(bic->stable_merge_bfqq);
- spin_lock_irqsave(&bfqd->lock, flags); - bfq_put_stable_ref(bic->stable_merge_bfqq); - spin_unlock_irqrestore(&bfqd->lock, flags); - } else { - bfq_put_stable_ref(bic->stable_merge_bfqq); - } + for (act_idx = 0; act_idx < num_actuators; act_idx++) { + bfq_exit_icq_bfqq(bic, true, act_idx); + bfq_exit_icq_bfqq(bic, false, act_idx); }
- bfq_exit_icq_bfqq(bic, true); - bfq_exit_icq_bfqq(bic, false); + if (bfqd) + spin_unlock_irqrestore(&bfqd->lock, flags); }
/* @@ -5493,25 +5530,27 @@ static void bfq_check_ioprio_change(stru
bic->ioprio = ioprio;
- bfqq = bic_to_bfqq(bic, false); + bfqq = bic_to_bfqq(bic, false, bfq_actuator_index(bfqd, bio)); if (bfqq) { struct bfq_queue *old_bfqq = bfqq;
bfqq = bfq_get_queue(bfqd, bio, false, bic, true); - bic_set_bfqq(bic, bfqq, false); + bic_set_bfqq(bic, bfqq, false, bfq_actuator_index(bfqd, bio)); bfq_release_process_ref(bfqd, old_bfqq); }
- bfqq = bic_to_bfqq(bic, true); + bfqq = bic_to_bfqq(bic, true, bfq_actuator_index(bfqd, bio)); if (bfqq) bfq_set_next_ioprio_data(bfqq, bic); }
static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq, - struct bfq_io_cq *bic, pid_t pid, int is_sync) + struct bfq_io_cq *bic, pid_t pid, int is_sync, + unsigned int act_idx) { u64 now_ns = ktime_get_ns();
+ bfqq->actuator_idx = act_idx; RB_CLEAR_NODE(&bfqq->entity.rb_node); INIT_LIST_HEAD(&bfqq->fifo); INIT_HLIST_NODE(&bfqq->burst_list_node); @@ -5762,7 +5801,7 @@ static struct bfq_queue *bfq_get_queue(s
if (bfqq) { bfq_init_bfqq(bfqd, bfqq, bic, current->pid, - is_sync); + is_sync, bfq_actuator_index(bfqd, bio)); bfq_init_entity(&bfqq->entity, bfqg); bfq_log_bfqq(bfqd, bfqq, "allocated"); } else { @@ -6078,7 +6117,8 @@ static bool __bfq_insert_request(struct * then complete the merge and redirect it to * new_bfqq. */ - if (bic_to_bfqq(RQ_BIC(rq), 1) == bfqq) { + if (bic_to_bfqq(RQ_BIC(rq), true, + bfq_actuator_index(bfqd, rq->bio)) == bfqq) { while (bfqq != new_bfqq) bfqq = bfq_merge_bfqqs(bfqd, RQ_BIC(rq), bfqq); } @@ -6632,7 +6672,7 @@ bfq_split_bfqq(struct bfq_io_cq *bic, st return bfqq; }
- bic_set_bfqq(bic, NULL, true); + bic_set_bfqq(bic, NULL, true, bfqq->actuator_idx);
bfq_put_cooperator(bfqq);
@@ -6646,7 +6686,8 @@ static struct bfq_queue *bfq_get_bfqq_ha bool split, bool is_sync, bool *new_queue) { - struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync); + unsigned int act_idx = bfq_actuator_index(bfqd, bio); + struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync, act_idx);
if (likely(bfqq && bfqq != &bfqd->oom_bfqq)) return bfqq; @@ -6658,7 +6699,7 @@ static struct bfq_queue *bfq_get_bfqq_ha bfq_put_queue(bfqq); bfqq = bfq_get_queue(bfqd, bio, is_sync, bic, split);
- bic_set_bfqq(bic, bfqq, is_sync); + bic_set_bfqq(bic, bfqq, is_sync, act_idx); if (split && is_sync) { if ((bic->was_in_burst_list && bfqd->large_burst) || bic->saved_in_large_burst) @@ -7139,8 +7180,10 @@ static int bfq_init_queue(struct request * Our fallback bfqq if bfq_find_alloc_queue() runs into OOM issues. * Grab a permanent reference to it, so that the normal code flow * will not attempt to free it. + * Set zero as actuator index: we will pretend that + * all I/O requests are for the same actuator. */ - bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0); + bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0, 0); bfqd->oom_bfqq.ref++; bfqd->oom_bfqq.new_ioprio = BFQ_DEFAULT_QUEUE_IOPRIO; bfqd->oom_bfqq.new_ioprio_class = IOPRIO_CLASS_BE; @@ -7159,6 +7202,13 @@ static int bfq_init_queue(struct request
bfqd->queue = q;
+ /* + * Multi-actuator support not complete yet, unconditionally + * set to only one actuator for the moment (to keep incomplete + * mechanisms off). + */ + bfqd->num_actuators = 1; + INIT_LIST_HEAD(&bfqd->dispatch);
hrtimer_init(&bfqd->idle_slice_timer, CLOCK_MONOTONIC, --- a/block/bfq-iosched.h +++ b/block/bfq-iosched.h @@ -33,6 +33,14 @@ */ #define BFQ_SOFTRT_WEIGHT_FACTOR 100
+/* + * Maximum number of actuators supported. This constant is used simply + * to define the size of the static array that will contain + * per-actuator data. The current value is hopefully a good upper + * bound to the possible number of actuators of any actual drive. + */ +#define BFQ_MAX_ACTUATORS 8 + struct bfq_entity;
/** @@ -225,12 +233,14 @@ struct bfq_ttime { * struct bfq_queue - leaf schedulable entity. * * A bfq_queue is a leaf request queue; it can be associated with an - * io_context or more, if it is async or shared between cooperating - * processes. @cgroup holds a reference to the cgroup, to be sure that it - * does not disappear while a bfqq still references it (mostly to avoid - * races between request issuing and task migration followed by cgroup - * destruction). - * All the fields are protected by the queue lock of the containing bfqd. + * io_context or more, if it is async or shared between cooperating + * processes. Besides, it contains I/O requests for only one actuator + * (an io_context is associated with a different bfq_queue for each + * actuator it generates I/O for). @cgroup holds a reference to the + * cgroup, to be sure that it does not disappear while a bfqq still + * references it (mostly to avoid races between request issuing and + * task migration followed by cgroup destruction). All the fields are + * protected by the queue lock of the containing bfqd. */ struct bfq_queue { /* reference counter */ @@ -395,6 +405,9 @@ struct bfq_queue { * the woken queues when this queue exits. */ struct hlist_head woken_list; + + /* index of the actuator this queue is associated with */ + unsigned int actuator_idx; };
/** @@ -403,8 +416,17 @@ struct bfq_queue { struct bfq_io_cq { /* associated io_cq structure */ struct io_cq icq; /* must be the first member */ - /* array of two process queues, the sync and the async */ - struct bfq_queue *bfqq[2]; + /* + * Matrix of associated process queues: first row for async + * queues, second row sync queues. Each row contains one + * column for each actuator. An I/O request generated by the + * process is inserted into the queue pointed by bfqq[i][j] if + * the request is to be served by the j-th actuator of the + * drive, where i==0 or i==1, depending on whether the request + * is async or sync. So there is a distinct queue for each + * actuator. + */ + struct bfq_queue *bfqq[2][BFQ_MAX_ACTUATORS]; /* per (request_queue, blkcg) ioprio */ int ioprio; #ifdef CONFIG_BFQ_GROUP_IOSCHED @@ -768,6 +790,13 @@ struct bfq_data { */ unsigned int word_depths[2][2]; unsigned int full_depth_shift; + + /* + * Number of independent actuators. This is equal to 1 in + * case of single-actuator drives. + */ + unsigned int num_actuators; + };
enum bfqq_state_flags { @@ -964,8 +993,10 @@ struct bfq_group {
extern const int bfq_timeout;
-struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync); -void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync); +struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync, + unsigned int actuator_idx); +void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync, + unsigned int actuator_idx); struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic); void bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq); void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yu Kuai yukuai3@huawei.com
commit e8b8344de3980709080d86c157d24e7de07d70ad upstream.
Setting a newly allocated bfqq in the bic and removing a freed bfqq from the bic are both protected by bfqd->lock; however, bfq_limit_depth() dereferences the bfqq from the bic without the lock, which can lead to a UAF if the io_context is shared by multiple tasks.
For example, test bfq with io_uring can trigger following UAF in v6.6:
================================================================== BUG: KASAN: slab-use-after-free in bfqq_group+0x15/0x50
Call Trace: <TASK> dump_stack_lvl+0x47/0x80 print_address_description.constprop.0+0x66/0x300 print_report+0x3e/0x70 kasan_report+0xb4/0xf0 bfqq_group+0x15/0x50 bfqq_request_over_limit+0x130/0x9a0 bfq_limit_depth+0x1b5/0x480 __blk_mq_alloc_requests+0x2b5/0xa00 blk_mq_get_new_requests+0x11d/0x1d0 blk_mq_submit_bio+0x286/0xb00 submit_bio_noacct_nocheck+0x331/0x400 __block_write_full_folio+0x3d0/0x640 writepage_cb+0x3b/0xc0 write_cache_pages+0x254/0x6c0 write_cache_pages+0x254/0x6c0 do_writepages+0x192/0x310 filemap_fdatawrite_wbc+0x95/0xc0 __filemap_fdatawrite_range+0x99/0xd0 filemap_write_and_wait_range.part.0+0x4d/0xa0 blkdev_read_iter+0xef/0x1e0 io_read+0x1b6/0x8a0 io_issue_sqe+0x87/0x300 io_wq_submit_work+0xeb/0x390 io_worker_handle_work+0x24d/0x550 io_wq_worker+0x27f/0x6c0 ret_from_fork_asm+0x1b/0x30 </TASK>
Allocated by task 808602: kasan_save_stack+0x1e/0x40 kasan_set_track+0x21/0x30 __kasan_slab_alloc+0x83/0x90 kmem_cache_alloc_node+0x1b1/0x6d0 bfq_get_queue+0x138/0xfa0 bfq_get_bfqq_handle_split+0xe3/0x2c0 bfq_init_rq+0x196/0xbb0 bfq_insert_request.isra.0+0xb5/0x480 bfq_insert_requests+0x156/0x180 blk_mq_insert_request+0x15d/0x440 blk_mq_submit_bio+0x8a4/0xb00 submit_bio_noacct_nocheck+0x331/0x400 __blkdev_direct_IO_async+0x2dd/0x330 blkdev_write_iter+0x39a/0x450 io_write+0x22a/0x840 io_issue_sqe+0x87/0x300 io_wq_submit_work+0xeb/0x390 io_worker_handle_work+0x24d/0x550 io_wq_worker+0x27f/0x6c0 ret_from_fork+0x2d/0x50 ret_from_fork_asm+0x1b/0x30
Freed by task 808589: kasan_save_stack+0x1e/0x40 kasan_set_track+0x21/0x30 kasan_save_free_info+0x27/0x40 __kasan_slab_free+0x126/0x1b0 kmem_cache_free+0x10c/0x750 bfq_put_queue+0x2dd/0x770 __bfq_insert_request.isra.0+0x155/0x7a0 bfq_insert_request.isra.0+0x122/0x480 bfq_insert_requests+0x156/0x180 blk_mq_dispatch_plug_list+0x528/0x7e0 blk_mq_flush_plug_list.part.0+0xe5/0x590 __blk_flush_plug+0x3b/0x90 blk_finish_plug+0x40/0x60 do_writepages+0x19d/0x310 filemap_fdatawrite_wbc+0x95/0xc0 __filemap_fdatawrite_range+0x99/0xd0 filemap_write_and_wait_range.part.0+0x4d/0xa0 blkdev_read_iter+0xef/0x1e0 io_read+0x1b6/0x8a0 io_issue_sqe+0x87/0x300 io_wq_submit_work+0xeb/0x390 io_worker_handle_work+0x24d/0x550 io_wq_worker+0x27f/0x6c0 ret_from_fork+0x2d/0x50 ret_from_fork_asm+0x1b/0x30
Fix the problem by protecting bic_to_bfqq() with bfqd->lock.
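A sketch of the race being closed (illustrative, reconstructed from the description above):

  task A (bfq_limit_depth)          task B (merge/exit path)
  ------------------------          ------------------------
  bfqq = bic_to_bfqq(bic, ...);
                                    spin_lock_irq(&bfqd->lock);
                                    bic_set_bfqq(bic, NULL, ...);
                                    bfq_put_queue(bfqq); /* last ref, freed */
                                    spin_unlock_irq(&bfqd->lock);
  bfqq_request_over_limit(bfqq, ...)
    -> bfqq_group(bfqq)             /* use-after-free */

With the fix, the bic-to-bfqq lookup and the over-limit check both happen under bfqd->lock, so the queue cannot be freed in between.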
CC: Jan Kara jack@suse.cz Fixes: 76f1df88bbc2 ("bfq: Limit number of requests consumed by each cgroup") Signed-off-by: Yu Kuai yukuai3@huawei.com Link: https://lore.kernel.org/r/20241129091509.2227136-1-yukuai1@huaweicloud.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Hagar Hemdan hagarhem@amazon.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- block/bfq-iosched.c | 37 ++++++++++++++++++++++++------------- 1 file changed, 24 insertions(+), 13 deletions(-)
--- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -581,23 +581,31 @@ static struct request *bfq_choose_req(st #define BFQ_LIMIT_INLINE_DEPTH 16
#ifdef CONFIG_BFQ_GROUP_IOSCHED -static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit) +static bool bfqq_request_over_limit(struct bfq_data *bfqd, + struct bfq_io_cq *bic, blk_opf_t opf, + unsigned int act_idx, int limit) { - struct bfq_data *bfqd = bfqq->bfqd; - struct bfq_entity *entity = &bfqq->entity; struct bfq_entity *inline_entities[BFQ_LIMIT_INLINE_DEPTH]; struct bfq_entity **entities = inline_entities; - int depth, level, alloc_depth = BFQ_LIMIT_INLINE_DEPTH; - int class_idx = bfqq->ioprio_class - 1; + int alloc_depth = BFQ_LIMIT_INLINE_DEPTH; struct bfq_sched_data *sched_data; + struct bfq_entity *entity; + struct bfq_queue *bfqq; unsigned long wsum; bool ret = false; - - if (!entity->on_st_or_in_serv) - return false; + int depth; + int level;
retry: spin_lock_irq(&bfqd->lock); + bfqq = bic_to_bfqq(bic, op_is_sync(opf), act_idx); + if (!bfqq) + goto out; + + entity = &bfqq->entity; + if (!entity->on_st_or_in_serv) + goto out; + /* +1 for bfqq entity, root cgroup not included */ depth = bfqg_to_blkg(bfqq_group(bfqq))->blkcg->css.cgroup->level + 1; if (depth > alloc_depth) { @@ -642,7 +650,7 @@ retry: * class. */ wsum = 0; - for (i = 0; i <= class_idx; i++) { + for (i = 0; i <= bfqq->ioprio_class - 1; i++) { wsum = wsum * IOPRIO_BE_NR + sched_data->service_tree[i].wsum; } @@ -665,7 +673,9 @@ out: return ret; } #else -static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit) +static bool bfqq_request_over_limit(struct bfq_data *bfqd, + struct bfq_io_cq *bic, blk_opf_t opf, + unsigned int act_idx, int limit) { return false; } @@ -703,8 +713,9 @@ static void bfq_limit_depth(blk_opf_t op }
for (act_idx = 0; bic && act_idx < bfqd->num_actuators; act_idx++) { - struct bfq_queue *bfqq = - bic_to_bfqq(bic, op_is_sync(opf), act_idx); + /* Fast path to check if bfqq is already allocated. */ + if (!bic_to_bfqq(bic, op_is_sync(opf), act_idx)) + continue;
/* * Does queue (or any parent entity) exceed number of @@ -712,7 +723,7 @@ static void bfq_limit_depth(blk_opf_t op * limit depth so that it cannot consume more * available requests and thus starve other entities. */ - if (bfqq && bfqq_request_over_limit(bfqq, limit)) { + if (bfqq_request_over_limit(bfqd, bic, opf, act_idx, limit)) { depth = 1; break; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yunfei Dong yunfei.dong@mediatek.com
commit 9be85491619f1953b8a29590ca630be571941ffa upstream.
Fix a smatch static checker warning in vdec_h264_req_multi_if.c which points at a kernel crash when fb is NULL.
Fixes: 397edc703a10 ("media: mediatek: vcodec: add h264 decoder driver for mt8186") Signed-off-by: Yunfei Dong yunfei.dong@mediatek.com Reviewed-by: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com Signed-off-by: Sebastian Fricke sebastian.fricke@collabora.com Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl [ drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_h264_req_multi_if.c is renamed from drivers/media/platform/mediatek/vcodec/vdec/vdec_h264_req_multi_if.c since 0934d3759615 ("media: mediatek: vcodec: separate decoder and encoder"). The path is changed accordingly to apply the patch on 6.1.y. ] Signed-off-by: Wenshan Lan jetlan9@163.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/media/platform/mediatek/vcodec/vdec/vdec_h264_req_multi_if.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
--- a/drivers/media/platform/mediatek/vcodec/vdec/vdec_h264_req_multi_if.c +++ b/drivers/media/platform/mediatek/vcodec/vdec/vdec_h264_req_multi_if.c @@ -729,11 +729,16 @@ static int vdec_h264_slice_single_decode return vpu_dec_reset(vpu);
fb = inst->ctx->dev->vdec_pdata->get_cap_buffer(inst->ctx); + if (!fb) { + mtk_vcodec_err(inst, "fb buffer is NULL"); + return -ENOMEM; + } + src_buf_info = container_of(bs, struct mtk_video_dec_buf, bs_buffer); dst_buf_info = container_of(fb, struct mtk_video_dec_buf, frame_buffer);
- y_fb_dma = fb ? (u64)fb->base_y.dma_addr : 0; - c_fb_dma = fb ? (u64)fb->base_c.dma_addr : 0; + y_fb_dma = fb->base_y.dma_addr; + c_fb_dma = fb->base_c.dma_addr; mtk_vcodec_debug(inst, "[h264-dec] [%d] y_dma=%llx c_dma=%llx", inst->ctx->decoded_frame_cnt, y_fb_dma, c_fb_dma);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexander Dahl ada@thorsis.com
commit 329ca3eed4a9a161515a8714be6ba182321385c7 upstream.
Previously, the MR and SCR registers were just set to the supposedly required values from cached register values (with the cached register content initialized to zero).
None of the code fixed here considered the current register (cache) content, which would have made future support of cs_setup, cs_hold, and cs_inactive impossible.
Setting SCBR in atmel_qspi_setup() erases a possible DLYBS setting from atmel_qspi_set_cs_timing(). The DLYBS setting is applied by ORing over the current setting, without resetting the bits first. None of the writes to MR considered possible settings of DLYCS and DLYBCT.
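The underlying pattern of the fix is a mask-then-OR update of the cached register image, so that neighbouring bit fields survive each other's updates. A minimal user-space sketch (the field layout here is illustrative, not the real QSPI_SCR layout):

#include <stdint.h>
#include <stdio.h>

#define SCR_SCBR_MASK	0x0000ff00u
#define SCR_SCBR(x)	(((x) << 8) & SCR_SCBR_MASK)
#define SCR_DLYBS_MASK	0x00ff0000u
#define SCR_DLYBS(x)	(((x) << 16) & SCR_DLYBS_MASK)

int main(void)
{
	uint32_t scr = 0;

	/* set_cs_timing: clear the field first, then OR in the value */
	scr = (scr & ~SCR_DLYBS_MASK) | SCR_DLYBS(5);
	/* setup: updating SCBR no longer wipes the DLYBS field */
	scr = (scr & ~SCR_SCBR_MASK) | SCR_SCBR(9);
	printf("scr = 0x%08x\n", scr);	/* 0x00050900: both fields kept */
	return 0;
}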
Signed-off-by: Alexander Dahl ada@thorsis.com Fixes: f732646d0ccd ("spi: atmel-quadspi: Add support for configuring CS timing") Link: https://patch.msgid.link/20240918082744.379610-2-ada@thorsis.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/spi/atmel-quadspi.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-)
--- a/drivers/spi/atmel-quadspi.c +++ b/drivers/spi/atmel-quadspi.c @@ -388,9 +388,9 @@ static int atmel_qspi_set_cfg(struct atm * If the QSPI controller is set in regular SPI mode, set it in * Serial Memory Mode (SMM). */ - if (aq->mr != QSPI_MR_SMM) { - atmel_qspi_write(QSPI_MR_SMM, aq, QSPI_MR); - aq->mr = QSPI_MR_SMM; + if (!(aq->mr & QSPI_MR_SMM)) { + aq->mr |= QSPI_MR_SMM; + atmel_qspi_write(aq->scr, aq, QSPI_MR); }
/* Clear pending interrupts */ @@ -545,7 +545,8 @@ static int atmel_qspi_setup(struct spi_d if (ret < 0) return ret;
- aq->scr = QSPI_SCR_SCBR(scbr); + aq->scr &= ~QSPI_SCR_SCBR_MASK; + aq->scr |= QSPI_SCR_SCBR(scbr); atmel_qspi_write(aq->scr, aq, QSPI_SCR);
pm_runtime_mark_last_busy(ctrl->dev.parent); @@ -578,6 +579,7 @@ static int atmel_qspi_set_cs_timing(stru if (ret < 0) return ret;
+ aq->scr &= ~QSPI_SCR_DLYBS_MASK; aq->scr |= QSPI_SCR_DLYBS(cs_setup); atmel_qspi_write(aq->scr, aq, QSPI_SCR);
@@ -593,8 +595,8 @@ static void atmel_qspi_init(struct atmel atmel_qspi_write(QSPI_CR_SWRST, aq, QSPI_CR);
/* Set the QSPI controller by default in Serial Memory Mode */ - atmel_qspi_write(QSPI_MR_SMM, aq, QSPI_MR); - aq->mr = QSPI_MR_SMM; + aq->mr |= QSPI_MR_SMM; + atmel_qspi_write(aq->mr, aq, QSPI_MR);
/* Enable the QSPI controller */ atmel_qspi_write(QSPI_CR_QSPIEN, aq, QSPI_CR);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexander Dahl ada@thorsis.com
commit 162d9b5d2308c7e48efbc97d36babbf4d73b2c61 upstream.
aq->mr should go to MR, nothing else.
Fixes: 329ca3eed4a9 ("spi: atmel-quadspi: Avoid overwriting delay register settings") Signed-off-by: Alexander Dahl ada@thorsis.com Link: https://lore.kernel.org/linux-spi/20240926-macarena-wincing-7c4995487a29@tho... Link: https://patch.msgid.link/20240926090356.105789-1-ada@thorsis.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/spi/atmel-quadspi.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/spi/atmel-quadspi.c +++ b/drivers/spi/atmel-quadspi.c @@ -390,7 +390,7 @@ static int atmel_qspi_set_cfg(struct atm */ if (!(aq->mr & QSPI_MR_SMM)) { aq->mr |= QSPI_MR_SMM; - atmel_qspi_write(aq->scr, aq, QSPI_MR); + atmel_qspi_write(aq->mr, aq, QSPI_MR); }
/* Clear pending interrupts */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Xin Long lucien.xin@gmail.com
commit 4914109a8e1e494c6aa9852f9e84ec77a5fc643f upstream.
Currently, nf_conntrack_in() calling nf_ct_find_expectation() will remove the exp from the hash table. However, in some scenarios we expect the exp not to be removed when the created ct will not be confirmed, as in OVS and TC conntrack in the following patches.
This patch allows exp not to be removed by setting IPS_CONFIRMED in the status of the tmpl.
Signed-off-by: Xin Long lucien.xin@gmail.com Acked-by: Aaron Conole aconole@redhat.com Acked-by: Florian Westphal fw@strlen.de Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/netfilter/nf_conntrack_expect.h | 2 +- net/netfilter/nf_conntrack_core.c | 2 +- net/netfilter/nf_conntrack_expect.c | 4 ++-- net/netfilter/nft_ct.c | 2 ++ 4 files changed, 6 insertions(+), 4 deletions(-)
--- a/include/net/netfilter/nf_conntrack_expect.h +++ b/include/net/netfilter/nf_conntrack_expect.h @@ -100,7 +100,7 @@ nf_ct_expect_find_get(struct net *net, struct nf_conntrack_expect * nf_ct_find_expectation(struct net *net, const struct nf_conntrack_zone *zone, - const struct nf_conntrack_tuple *tuple); + const struct nf_conntrack_tuple *tuple, bool unlink);
void nf_ct_unlink_expect_report(struct nf_conntrack_expect *exp, u32 portid, int report); --- a/net/netfilter/nf_conntrack_core.c +++ b/net/netfilter/nf_conntrack_core.c @@ -1770,7 +1770,7 @@ init_conntrack(struct net *net, struct n cnet = nf_ct_pernet(net); if (cnet->expect_count) { spin_lock_bh(&nf_conntrack_expect_lock); - exp = nf_ct_find_expectation(net, zone, tuple); + exp = nf_ct_find_expectation(net, zone, tuple, !tmpl || nf_ct_is_confirmed(tmpl)); if (exp) { pr_debug("expectation arrives ct=%p exp=%p\n", ct, exp); --- a/net/netfilter/nf_conntrack_expect.c +++ b/net/netfilter/nf_conntrack_expect.c @@ -171,7 +171,7 @@ EXPORT_SYMBOL_GPL(nf_ct_expect_find_get) struct nf_conntrack_expect * nf_ct_find_expectation(struct net *net, const struct nf_conntrack_zone *zone, - const struct nf_conntrack_tuple *tuple) + const struct nf_conntrack_tuple *tuple, bool unlink) { struct nf_conntrack_net *cnet = nf_ct_pernet(net); struct nf_conntrack_expect *i, *exp = NULL; @@ -211,7 +211,7 @@ nf_ct_find_expectation(struct net *net, !refcount_inc_not_zero(&exp->master->ct_general.use))) return NULL;
- if (exp->flags & NF_CT_EXPECT_PERMANENT) { + if (exp->flags & NF_CT_EXPECT_PERMANENT || !unlink) { refcount_inc(&exp->use); return exp; } else if (del_timer(&exp->timeout)) { --- a/net/netfilter/nft_ct.c +++ b/net/netfilter/nft_ct.c @@ -272,6 +272,7 @@ static void nft_ct_set_zone_eval(const s regs->verdict.code = NF_DROP; return; } + __set_bit(IPS_CONFIRMED_BIT, &ct->status); }
nf_ct_set(skb, ct, IP_CT_NEW); @@ -378,6 +379,7 @@ static bool nft_ct_tmpl_alloc_pcpu(void) return false; }
+ __set_bit(IPS_CONFIRMED_BIT, &tmp->status); per_cpu(nft_ct_pcpu_template, cpu) = tmp; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Aharon Landau aharonl@nvidia.com
[ Upstream commit a2a88b8e22d1b202225d0e40b02ad068afab2ccb ]
mkc.log_page_size can be changed using UMR. Therefore, don't treat it as a cache entry property.
Remove it from struct mlx5_cache_ent.
All cache mkeys will be created with the default PAGE_SHIFT, and updated with the needed page_shift using UMR when they are passed to a user.
Link: https://lore.kernel.org/r/20230125222807.6921-2-michaelgur@nvidia.com Signed-off-by: Aharon Landau aharonl@nvidia.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Stable-dep-of: d97505baea64 ("RDMA/mlx5: Fix the recovery flow of the UMR QP") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/mlx5/mlx5_ib.h | 1 - drivers/infiniband/hw/mlx5/mr.c | 3 +-- drivers/infiniband/hw/mlx5/odp.c | 2 -- 3 files changed, 1 insertion(+), 5 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index 0ef347e91ffeb..10c87901da27c 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -759,7 +759,6 @@ struct mlx5_cache_ent { char name[4]; u32 order; u32 access_mode; - u32 page; unsigned int ndescs;
u8 disabled:1; diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index b81b03aa2a629..53fadd6edb68d 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -297,7 +297,7 @@ static void set_cache_mkc(struct mlx5_cache_ent *ent, void *mkc)
MLX5_SET(mkc, mkc, translations_octword_size, get_mkc_octo_size(ent->access_mode, ent->ndescs)); - MLX5_SET(mkc, mkc, log_page_size, ent->page); + MLX5_SET(mkc, mkc, log_page_size, PAGE_SHIFT); }
/* Asynchronously schedule new MRs to be populated in the cache. */ @@ -765,7 +765,6 @@ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev) if (ent->order > mkey_cache_max_order(dev)) continue;
- ent->page = PAGE_SHIFT; ent->ndescs = 1 << ent->order; ent->access_mode = MLX5_MKC_ACCESS_MODE_MTT; if ((dev->mdev->profile.mask & MLX5_PROF_MASK_MR_CACHE) && diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c index 87fbee8061003..a5c9baec8be85 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -1598,14 +1598,12 @@ void mlx5_odp_init_mkey_cache_entry(struct mlx5_cache_ent *ent)
switch (ent->order - 2) { case MLX5_IMR_MTT_CACHE_ENTRY: - ent->page = PAGE_SHIFT; ent->ndescs = MLX5_IMR_MTT_ENTRIES; ent->access_mode = MLX5_MKC_ACCESS_MODE_MTT; ent->limit = 0; break;
case MLX5_IMR_KSM_CACHE_ENTRY: - ent->page = MLX5_KSM_PAGE_SHIFT; ent->ndescs = mlx5_imr_ksm_entries; ent->access_mode = MLX5_MKC_ACCESS_MODE_KSM; ent->limit = 0;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Aharon Landau aharonl@nvidia.com
[ Upstream commit 18b1746bddf5e7f6b2618966596d9517172a5cd7 ]
The implicit ODP mkey doesn't have unique properties; it shares the same properties as the order-18 cache entry. There is no need to devote a special entry to it.
Link: https://lore.kernel.org/r/20230125222807.6921-3-michaelgur@nvidia.com Signed-off-by: Aharon Landau aharonl@nvidia.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Stable-dep-of: d97505baea64 ("RDMA/mlx5: Fix the recovery flow of the UMR QP") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/mlx5/odp.c | 20 +++++--------------- include/linux/mlx5/driver.h | 1 - 2 files changed, 5 insertions(+), 16 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c index a5c9baec8be85..5f0a17382de73 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -406,6 +406,7 @@ static void mlx5_ib_page_fault_resume(struct mlx5_ib_dev *dev, static struct mlx5_ib_mr *implicit_get_child_mr(struct mlx5_ib_mr *imr, unsigned long idx) { + int order = order_base_2(MLX5_IMR_MTT_ENTRIES); struct mlx5_ib_dev *dev = mr_to_mdev(imr); struct ib_umem_odp *odp; struct mlx5_ib_mr *mr; @@ -418,7 +419,8 @@ static struct mlx5_ib_mr *implicit_get_child_mr(struct mlx5_ib_mr *imr, if (IS_ERR(odp)) return ERR_CAST(odp);
- mr = mlx5_mr_cache_alloc(dev, &dev->cache.ent[MLX5_IMR_MTT_CACHE_ENTRY], + BUILD_BUG_ON(order > MKEY_CACHE_LAST_STD_ENTRY); + mr = mlx5_mr_cache_alloc(dev, &dev->cache.ent[order], imr->access_flags); if (IS_ERR(mr)) { ib_umem_odp_release(odp); @@ -1595,20 +1597,8 @@ void mlx5_odp_init_mkey_cache_entry(struct mlx5_cache_ent *ent) { if (!(ent->dev->odp_caps.general_caps & IB_ODP_SUPPORT_IMPLICIT)) return; - - switch (ent->order - 2) { - case MLX5_IMR_MTT_CACHE_ENTRY: - ent->ndescs = MLX5_IMR_MTT_ENTRIES; - ent->access_mode = MLX5_MKC_ACCESS_MODE_MTT; - ent->limit = 0; - break; - - case MLX5_IMR_KSM_CACHE_ENTRY: - ent->ndescs = mlx5_imr_ksm_entries; - ent->access_mode = MLX5_MKC_ACCESS_MODE_KSM; - ent->limit = 0; - break; - } + ent->ndescs = mlx5_imr_ksm_entries; + ent->access_mode = MLX5_MKC_ACCESS_MODE_KSM; }
static const struct ib_device_ops mlx5_ib_dev_odp_ops = { diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h index 3c3e0f26c2446..6cea62ca76d6b 100644 --- a/include/linux/mlx5/driver.h +++ b/include/linux/mlx5/driver.h @@ -744,7 +744,6 @@ enum {
enum { MKEY_CACHE_LAST_STD_ENTRY = 20, - MLX5_IMR_MTT_CACHE_ENTRY, MLX5_IMR_KSM_CACHE_ENTRY, MAX_MKEY_CACHE_ENTRIES };
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michael Guralnik michaelgur@nvidia.com
[ Upstream commit b9584517832858a0f78d6851d09b697a829514cd ]
Currently, the cache structure is a static linear array. Therefore, its size is limited to the number of entries in it and is not expandable. The entries are dedicated to mkeys of size 2^x and no access_flags. Mkeys with different properties are not cacheable.
In this patch, we change the cache structure to an RB-tree. This will allow extending the cache to support more entries with different mkey properties.
Link: https://lore.kernel.org/r/20230125222807.6921-4-michaelgur@nvidia.com Signed-off-by: Michael Guralnik michaelgur@nvidia.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Stable-dep-of: d97505baea64 ("RDMA/mlx5: Fix the recovery flow of the UMR QP") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/mlx5/mlx5_ib.h | 11 +- drivers/infiniband/hw/mlx5/mr.c | 160 ++++++++++++++++++++------- drivers/infiniband/hw/mlx5/odp.c | 8 +- 3 files changed, 132 insertions(+), 47 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index 10c87901da27c..bd998ac8c29c1 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -761,6 +761,8 @@ struct mlx5_cache_ent { u32 access_mode; unsigned int ndescs;
+ struct rb_node node; + u8 disabled:1; u8 fill_to_high_water:1;
@@ -790,8 +792,9 @@ struct mlx5r_async_create_mkey {
struct mlx5_mkey_cache { struct workqueue_struct *wq; - struct mlx5_cache_ent ent[MAX_MKEY_CACHE_ENTRIES]; - struct dentry *root; + struct rb_root rb_root; + struct mutex rb_lock; + struct dentry *fs_root; unsigned long last_add; };
@@ -1336,11 +1339,15 @@ void mlx5_ib_copy_pas(u64 *old, u64 *new, int step, int num); int mlx5_ib_get_cqe_size(struct ib_cq *ibcq); int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev); int mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev); +struct mlx5_cache_ent *mlx5r_cache_create_ent(struct mlx5_ib_dev *dev, + int order);
struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev, struct mlx5_cache_ent *ent, int access_flags);
+struct mlx5_ib_mr *mlx5_mr_cache_alloc_order(struct mlx5_ib_dev *dev, u32 order, + int access_flags); int mlx5_ib_check_mr_status(struct ib_mr *ibmr, u32 check_mask, struct ib_mr_status *mr_status); struct ib_wq *mlx5_ib_create_wq(struct ib_pd *pd, diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index 53fadd6edb68d..b3d83920d3cfb 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -515,18 +515,22 @@ static const struct file_operations limit_fops = {
static bool someone_adding(struct mlx5_mkey_cache *cache) { - unsigned int i; - - for (i = 0; i < MAX_MKEY_CACHE_ENTRIES; i++) { - struct mlx5_cache_ent *ent = &cache->ent[i]; - bool ret; + struct mlx5_cache_ent *ent; + struct rb_node *node; + bool ret;
+ mutex_lock(&cache->rb_lock); + for (node = rb_first(&cache->rb_root); node; node = rb_next(node)) { + ent = rb_entry(node, struct mlx5_cache_ent, node); xa_lock_irq(&ent->mkeys); ret = ent->stored < ent->limit; xa_unlock_irq(&ent->mkeys); - if (ret) + if (ret) { + mutex_unlock(&cache->rb_lock); return true; + } } + mutex_unlock(&cache->rb_lock); return false; }
@@ -637,6 +641,59 @@ static void delayed_cache_work_func(struct work_struct *work) __cache_work_func(ent); }
+static int mlx5_cache_ent_insert(struct mlx5_mkey_cache *cache, + struct mlx5_cache_ent *ent) +{ + struct rb_node **new = &cache->rb_root.rb_node, *parent = NULL; + struct mlx5_cache_ent *cur; + + mutex_lock(&cache->rb_lock); + /* Figure out where to put new node */ + while (*new) { + cur = rb_entry(*new, struct mlx5_cache_ent, node); + parent = *new; + if (ent->order < cur->order) + new = &((*new)->rb_left); + if (ent->order > cur->order) + new = &((*new)->rb_right); + if (ent->order == cur->order) { + mutex_unlock(&cache->rb_lock); + return -EEXIST; + } + } + + /* Add new node and rebalance tree. */ + rb_link_node(&ent->node, parent, new); + rb_insert_color(&ent->node, &cache->rb_root); + + mutex_unlock(&cache->rb_lock); + return 0; +} + +static struct mlx5_cache_ent *mkey_cache_ent_from_order(struct mlx5_ib_dev *dev, + unsigned int order) +{ + struct rb_node *node = dev->cache.rb_root.rb_node; + struct mlx5_cache_ent *cur, *smallest = NULL; + + /* + * Find the smallest ent with order >= requested_order. + */ + while (node) { + cur = rb_entry(node, struct mlx5_cache_ent, node); + if (cur->order > order) { + smallest = cur; + node = node->rb_left; + } + if (cur->order < order) + node = node->rb_right; + if (cur->order == order) + return cur; + } + + return smallest; +} + struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev, struct mlx5_cache_ent *ent, int access_flags) @@ -677,10 +734,16 @@ struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev, return mr; }
-static void clean_keys(struct mlx5_ib_dev *dev, int c) +struct mlx5_ib_mr *mlx5_mr_cache_alloc_order(struct mlx5_ib_dev *dev, + u32 order, int access_flags) +{ + struct mlx5_cache_ent *ent = mkey_cache_ent_from_order(dev, order); + + return mlx5_mr_cache_alloc(dev, ent, access_flags); +} + +static void clean_keys(struct mlx5_ib_dev *dev, struct mlx5_cache_ent *ent) { - struct mlx5_mkey_cache *cache = &dev->cache; - struct mlx5_cache_ent *ent = &cache->ent[c]; u32 mkey;
cancel_delayed_work(&ent->dwork); @@ -699,8 +762,8 @@ static void mlx5_mkey_cache_debugfs_cleanup(struct mlx5_ib_dev *dev) if (!mlx5_debugfs_root || dev->is_rep) return;
- debugfs_remove_recursive(dev->cache.root); - dev->cache.root = NULL; + debugfs_remove_recursive(dev->cache.fs_root); + dev->cache.fs_root = NULL; }
static void mlx5_mkey_cache_debugfs_init(struct mlx5_ib_dev *dev) @@ -713,12 +776,13 @@ static void mlx5_mkey_cache_debugfs_init(struct mlx5_ib_dev *dev) if (!mlx5_debugfs_root || dev->is_rep) return;
- cache->root = debugfs_create_dir("mr_cache", mlx5_debugfs_get_dev_root(dev->mdev)); + dir = mlx5_debugfs_get_dev_root(dev->mdev); + cache->fs_root = debugfs_create_dir("mr_cache", dir);
for (i = 0; i < MAX_MKEY_CACHE_ENTRIES; i++) { - ent = &cache->ent[i]; + ent = mkey_cache_ent_from_order(dev, i); sprintf(ent->name, "%d", ent->order); - dir = debugfs_create_dir(ent->name, cache->root); + dir = debugfs_create_dir(ent->name, cache->fs_root); debugfs_create_file("size", 0600, dir, ent, &size_fops); debugfs_create_file("limit", 0600, dir, ent, &limit_fops); debugfs_create_ulong("cur", 0400, dir, &ent->stored); @@ -733,6 +797,30 @@ static void delay_time_func(struct timer_list *t) WRITE_ONCE(dev->fill_delay, 0); }
+struct mlx5_cache_ent *mlx5r_cache_create_ent(struct mlx5_ib_dev *dev, + int order) +{ + struct mlx5_cache_ent *ent; + int ret; + + ent = kzalloc(sizeof(*ent), GFP_KERNEL); + if (!ent) + return ERR_PTR(-ENOMEM); + + xa_init_flags(&ent->mkeys, XA_FLAGS_LOCK_IRQ); + ent->order = order; + ent->dev = dev; + + INIT_DELAYED_WORK(&ent->dwork, delayed_cache_work_func); + + ret = mlx5_cache_ent_insert(&dev->cache, ent); + if (ret) { + kfree(ent); + return ERR_PTR(ret); + } + return ent; +} + int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev) { struct mlx5_mkey_cache *cache = &dev->cache; @@ -740,6 +828,8 @@ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev) int i;
mutex_init(&dev->slow_path_mutex); + mutex_init(&dev->cache.rb_lock); + dev->cache.rb_root = RB_ROOT; cache->wq = alloc_ordered_workqueue("mkey_cache", WQ_MEM_RECLAIM); if (!cache->wq) { mlx5_ib_warn(dev, "failed to create work queue\n"); @@ -749,13 +839,7 @@ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev) mlx5_cmd_init_async_ctx(dev->mdev, &dev->async_ctx); timer_setup(&dev->delay_timer, delay_time_func, 0); for (i = 0; i < MAX_MKEY_CACHE_ENTRIES; i++) { - ent = &cache->ent[i]; - xa_init_flags(&ent->mkeys, XA_FLAGS_LOCK_IRQ); - ent->order = i + 2; - ent->dev = dev; - ent->limit = 0; - - INIT_DELAYED_WORK(&ent->dwork, delayed_cache_work_func); + ent = mlx5r_cache_create_ent(dev, i);
if (i > MKEY_CACHE_LAST_STD_ENTRY) { mlx5_odp_init_mkey_cache_entry(ent); @@ -785,14 +869,16 @@ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev)
int mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev) { - unsigned int i; + struct rb_root *root = &dev->cache.rb_root; + struct mlx5_cache_ent *ent; + struct rb_node *node;
if (!dev->cache.wq) return 0;
- for (i = 0; i < MAX_MKEY_CACHE_ENTRIES; i++) { - struct mlx5_cache_ent *ent = &dev->cache.ent[i]; - + mutex_lock(&dev->cache.rb_lock); + for (node = rb_first(root); node; node = rb_next(node)) { + ent = rb_entry(node, struct mlx5_cache_ent, node); xa_lock_irq(&ent->mkeys); ent->disabled = true; xa_unlock_irq(&ent->mkeys); @@ -802,8 +888,15 @@ int mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev) mlx5_mkey_cache_debugfs_cleanup(dev); mlx5_cmd_cleanup_async_ctx(&dev->async_ctx);
- for (i = 0; i < MAX_MKEY_CACHE_ENTRIES; i++) - clean_keys(dev, i); + node = rb_first(root); + while (node) { + ent = rb_entry(node, struct mlx5_cache_ent, node); + node = rb_next(node); + clean_keys(dev, ent); + rb_erase(&ent->node, root); + kfree(ent); + } + mutex_unlock(&dev->cache.rb_lock);
destroy_workqueue(dev->cache.wq); del_timer_sync(&dev->delay_timer); @@ -876,19 +969,6 @@ static int mkey_cache_max_order(struct mlx5_ib_dev *dev) return MLX5_MAX_UMR_SHIFT; }
-static struct mlx5_cache_ent *mkey_cache_ent_from_order(struct mlx5_ib_dev *dev, - unsigned int order) -{ - struct mlx5_mkey_cache *cache = &dev->cache; - - if (order < cache->ent[0].order) - return &cache->ent[0]; - order = order - cache->ent[0].order; - if (order > MKEY_CACHE_LAST_STD_ENTRY) - return NULL; - return &cache->ent[order]; -} - static void set_mr_fields(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr, u64 length, int access_flags, u64 iova) { diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c index 5f0a17382de73..7f68940ca0d1e 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -420,8 +420,7 @@ static struct mlx5_ib_mr *implicit_get_child_mr(struct mlx5_ib_mr *imr, return ERR_CAST(odp);
BUILD_BUG_ON(order > MKEY_CACHE_LAST_STD_ENTRY); - mr = mlx5_mr_cache_alloc(dev, &dev->cache.ent[order], - imr->access_flags); + mr = mlx5_mr_cache_alloc_order(dev, order, imr->access_flags); if (IS_ERR(mr)) { ib_umem_odp_release(odp); return mr; @@ -495,9 +494,8 @@ struct mlx5_ib_mr *mlx5_ib_alloc_implicit_mr(struct mlx5_ib_pd *pd, if (IS_ERR(umem_odp)) return ERR_CAST(umem_odp);
- imr = mlx5_mr_cache_alloc(dev, - &dev->cache.ent[MLX5_IMR_KSM_CACHE_ENTRY], - access_flags); + imr = mlx5_mr_cache_alloc_order(dev, MLX5_IMR_KSM_CACHE_ENTRY, + access_flags); if (IS_ERR(imr)) { ib_umem_odp_release(umem_odp); return imr;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michael Guralnik michaelgur@nvidia.com
[ Upstream commit 73d09b2fe8336f5f37935e46418666ddbcd3c343 ]
Switch from using the mkey order to using the new struct as the key for the RB tree of cache entries.
The key consists of all the mkey properties that UMR operations can't modify; this key is used to define the cache entries and to search for and create cache mkeys.
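For reference (this sketch is not part of the patch), the lookup this enables is a standard rb-tree lower-bound walk over a composite key, with the size-like field compared last so the search can fall back to the closest larger entry. The type and function names below are hypothetical:

#include <linux/rbtree.h>

struct demo_key {
	u8 ats;
	unsigned int access_mode;
	unsigned int access_flags;
	unsigned int ndescs;	/* size-like field, compared last */
};

struct demo_ent {
	struct rb_node node;
	struct demo_key key;
};

static int demo_key_cmp(struct demo_key a, struct demo_key b)
{
	if (a.ats != b.ats)
		return a.ats - b.ats;
	if (a.access_mode != b.access_mode)
		return a.access_mode - b.access_mode;
	if (a.access_flags != b.access_flags)
		return a.access_flags - b.access_flags;
	return a.ndescs - b.ndescs;
}

/* Return the entry with the smallest key >= @key, or NULL. */
static struct demo_ent *demo_lower_bound(struct rb_root *root,
					 struct demo_key key)
{
	struct rb_node *node = root->rb_node;
	struct demo_ent *cur, *best = NULL;
	int cmp;

	while (node) {
		cur = rb_entry(node, struct demo_ent, node);
		cmp = demo_key_cmp(cur->key, key);
		if (cmp > 0) {
			best = cur;	/* candidate, look left for a smaller one */
			node = node->rb_left;
		} else if (cmp < 0) {
			node = node->rb_right;
		} else {
			return cur;	/* exact match */
		}
	}
	return best;
}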
Link: https://lore.kernel.org/r/20230125222807.6921-5-michaelgur@nvidia.com Signed-off-by: Michael Guralnik michaelgur@nvidia.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Stable-dep-of: d97505baea64 ("RDMA/mlx5: Fix the recovery flow of the UMR QP") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/mlx5/mlx5_ib.h | 27 ++-- drivers/infiniband/hw/mlx5/mr.c | 228 +++++++++++++++++++-------- drivers/infiniband/hw/mlx5/odp.c | 30 ++-- 3 files changed, 201 insertions(+), 84 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index bd998ac8c29c1..7c9d5648947e9 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -637,6 +637,13 @@ enum mlx5_mkey_type { MLX5_MKEY_INDIRECT_DEVX, };
+struct mlx5r_cache_rb_key { + u8 ats:1; + unsigned int access_mode; + unsigned int access_flags; + unsigned int ndescs; +}; + struct mlx5_ib_mkey { u32 key; enum mlx5_mkey_type type; @@ -757,11 +764,9 @@ struct mlx5_cache_ent { unsigned long reserved;
char name[4]; - u32 order; - u32 access_mode; - unsigned int ndescs;
struct rb_node node; + struct mlx5r_cache_rb_key rb_key;
u8 disabled:1; u8 fill_to_high_water:1; @@ -1340,14 +1345,13 @@ int mlx5_ib_get_cqe_size(struct ib_cq *ibcq); int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev); int mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev); struct mlx5_cache_ent *mlx5r_cache_create_ent(struct mlx5_ib_dev *dev, - int order); + struct mlx5r_cache_rb_key rb_key, + bool persistent_entry);
struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev, - struct mlx5_cache_ent *ent, - int access_flags); + int access_flags, int access_mode, + int ndescs);
-struct mlx5_ib_mr *mlx5_mr_cache_alloc_order(struct mlx5_ib_dev *dev, u32 order, - int access_flags); int mlx5_ib_check_mr_status(struct ib_mr *ibmr, u32 check_mask, struct ib_mr_status *mr_status); struct ib_wq *mlx5_ib_create_wq(struct ib_pd *pd, @@ -1370,7 +1374,7 @@ int mlx5r_odp_create_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq); void mlx5_ib_odp_cleanup_one(struct mlx5_ib_dev *ibdev); int __init mlx5_ib_odp_init(void); void mlx5_ib_odp_cleanup(void); -void mlx5_odp_init_mkey_cache_entry(struct mlx5_cache_ent *ent); +int mlx5_odp_init_mkey_cache(struct mlx5_ib_dev *dev); void mlx5_odp_populate_xlt(void *xlt, size_t idx, size_t nentries, struct mlx5_ib_mr *mr, int flags);
@@ -1389,7 +1393,10 @@ static inline int mlx5r_odp_create_eq(struct mlx5_ib_dev *dev, static inline void mlx5_ib_odp_cleanup_one(struct mlx5_ib_dev *ibdev) {} static inline int mlx5_ib_odp_init(void) { return 0; } static inline void mlx5_ib_odp_cleanup(void) {} -static inline void mlx5_odp_init_mkey_cache_entry(struct mlx5_cache_ent *ent) {} +static inline int mlx5_odp_init_mkey_cache(struct mlx5_ib_dev *dev) +{ + return 0; +} static inline void mlx5_odp_populate_xlt(void *xlt, size_t idx, size_t nentries, struct mlx5_ib_mr *mr, int flags) {}
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index b3d83920d3cfb..1060b30a837a0 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -292,11 +292,13 @@ static void set_cache_mkc(struct mlx5_cache_ent *ent, void *mkc) set_mkc_access_pd_addr_fields(mkc, 0, 0, ent->dev->umrc.pd); MLX5_SET(mkc, mkc, free, 1); MLX5_SET(mkc, mkc, umr_en, 1); - MLX5_SET(mkc, mkc, access_mode_1_0, ent->access_mode & 0x3); - MLX5_SET(mkc, mkc, access_mode_4_2, (ent->access_mode >> 2) & 0x7); + MLX5_SET(mkc, mkc, access_mode_1_0, ent->rb_key.access_mode & 0x3); + MLX5_SET(mkc, mkc, access_mode_4_2, + (ent->rb_key.access_mode >> 2) & 0x7);
MLX5_SET(mkc, mkc, translations_octword_size, - get_mkc_octo_size(ent->access_mode, ent->ndescs)); + get_mkc_octo_size(ent->rb_key.access_mode, + ent->rb_key.ndescs)); MLX5_SET(mkc, mkc, log_page_size, PAGE_SHIFT); }
@@ -594,8 +596,8 @@ static void __cache_work_func(struct mlx5_cache_ent *ent) if (err != -EAGAIN) { mlx5_ib_warn( dev, - "command failed order %d, err %d\n", - ent->order, err); + "add keys command failed, err %d\n", + err); queue_delayed_work(cache->wq, &ent->dwork, msecs_to_jiffies(1000)); } @@ -641,22 +643,49 @@ static void delayed_cache_work_func(struct work_struct *work) __cache_work_func(ent); }
+static int cache_ent_key_cmp(struct mlx5r_cache_rb_key key1, + struct mlx5r_cache_rb_key key2) +{ + int res; + + res = key1.ats - key2.ats; + if (res) + return res; + + res = key1.access_mode - key2.access_mode; + if (res) + return res; + + res = key1.access_flags - key2.access_flags; + if (res) + return res; + + /* + * keep ndescs the last in the compare table since the find function + * searches for an exact match on all properties and only closest + * match in size. + */ + return key1.ndescs - key2.ndescs; +} + static int mlx5_cache_ent_insert(struct mlx5_mkey_cache *cache, struct mlx5_cache_ent *ent) { struct rb_node **new = &cache->rb_root.rb_node, *parent = NULL; struct mlx5_cache_ent *cur; + int cmp;
mutex_lock(&cache->rb_lock); /* Figure out where to put new node */ while (*new) { cur = rb_entry(*new, struct mlx5_cache_ent, node); parent = *new; - if (ent->order < cur->order) + cmp = cache_ent_key_cmp(cur->rb_key, ent->rb_key); + if (cmp > 0) new = &((*new)->rb_left); - if (ent->order > cur->order) + if (cmp < 0) new = &((*new)->rb_right); - if (ent->order == cur->order) { + if (cmp == 0) { mutex_unlock(&cache->rb_lock); return -EEXIST; } @@ -670,40 +699,45 @@ static int mlx5_cache_ent_insert(struct mlx5_mkey_cache *cache, return 0; }
-static struct mlx5_cache_ent *mkey_cache_ent_from_order(struct mlx5_ib_dev *dev, - unsigned int order) +static struct mlx5_cache_ent * +mkey_cache_ent_from_rb_key(struct mlx5_ib_dev *dev, + struct mlx5r_cache_rb_key rb_key) { struct rb_node *node = dev->cache.rb_root.rb_node; struct mlx5_cache_ent *cur, *smallest = NULL; + int cmp;
/* * Find the smallest ent with order >= requested_order. */ while (node) { cur = rb_entry(node, struct mlx5_cache_ent, node); - if (cur->order > order) { + cmp = cache_ent_key_cmp(cur->rb_key, rb_key); + if (cmp > 0) { smallest = cur; node = node->rb_left; } - if (cur->order < order) + if (cmp < 0) node = node->rb_right; - if (cur->order == order) + if (cmp == 0) return cur; }
- return smallest; + return (smallest && + smallest->rb_key.access_mode == rb_key.access_mode && + smallest->rb_key.access_flags == rb_key.access_flags && + smallest->rb_key.ats == rb_key.ats) ? + smallest : + NULL; }
-struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev, - struct mlx5_cache_ent *ent, - int access_flags) +static struct mlx5_ib_mr *_mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev, + struct mlx5_cache_ent *ent, + int access_flags) { struct mlx5_ib_mr *mr; int err;
- if (!mlx5r_umr_can_reconfig(dev, 0, access_flags)) - return ERR_PTR(-EOPNOTSUPP); - mr = kzalloc(sizeof(*mr), GFP_KERNEL); if (!mr) return ERR_PTR(-ENOMEM); @@ -734,12 +768,44 @@ struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev, return mr; }
-struct mlx5_ib_mr *mlx5_mr_cache_alloc_order(struct mlx5_ib_dev *dev, - u32 order, int access_flags) +static int get_unchangeable_access_flags(struct mlx5_ib_dev *dev, + int access_flags) +{ + int ret = 0; + + if ((access_flags & IB_ACCESS_REMOTE_ATOMIC) && + MLX5_CAP_GEN(dev->mdev, atomic) && + MLX5_CAP_GEN(dev->mdev, umr_modify_atomic_disabled)) + ret |= IB_ACCESS_REMOTE_ATOMIC; + + if ((access_flags & IB_ACCESS_RELAXED_ORDERING) && + MLX5_CAP_GEN(dev->mdev, relaxed_ordering_write) && + !MLX5_CAP_GEN(dev->mdev, relaxed_ordering_write_umr)) + ret |= IB_ACCESS_RELAXED_ORDERING; + + if ((access_flags & IB_ACCESS_RELAXED_ORDERING) && + MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read) && + !MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read_umr)) + ret |= IB_ACCESS_RELAXED_ORDERING; + + return ret; +} + +struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev, + int access_flags, int access_mode, + int ndescs) { - struct mlx5_cache_ent *ent = mkey_cache_ent_from_order(dev, order); + struct mlx5r_cache_rb_key rb_key = { + .ndescs = ndescs, + .access_mode = access_mode, + .access_flags = get_unchangeable_access_flags(dev, access_flags) + }; + struct mlx5_cache_ent *ent = mkey_cache_ent_from_rb_key(dev, rb_key);
- return mlx5_mr_cache_alloc(dev, ent, access_flags); + if (!ent) + return ERR_PTR(-EOPNOTSUPP); + + return _mlx5_mr_cache_alloc(dev, ent, access_flags); }
static void clean_keys(struct mlx5_ib_dev *dev, struct mlx5_cache_ent *ent) @@ -766,28 +832,32 @@ static void mlx5_mkey_cache_debugfs_cleanup(struct mlx5_ib_dev *dev) dev->cache.fs_root = NULL; }
+static void mlx5_mkey_cache_debugfs_add_ent(struct mlx5_ib_dev *dev, + struct mlx5_cache_ent *ent) +{ + int order = order_base_2(ent->rb_key.ndescs); + struct dentry *dir; + + if (ent->rb_key.access_mode == MLX5_MKC_ACCESS_MODE_KSM) + order = MLX5_IMR_KSM_CACHE_ENTRY + 2; + + sprintf(ent->name, "%d", order); + dir = debugfs_create_dir(ent->name, dev->cache.fs_root); + debugfs_create_file("size", 0600, dir, ent, &size_fops); + debugfs_create_file("limit", 0600, dir, ent, &limit_fops); + debugfs_create_ulong("cur", 0400, dir, &ent->stored); + debugfs_create_u32("miss", 0600, dir, &ent->miss); +} + static void mlx5_mkey_cache_debugfs_init(struct mlx5_ib_dev *dev) { + struct dentry *dbg_root = mlx5_debugfs_get_dev_root(dev->mdev); struct mlx5_mkey_cache *cache = &dev->cache; - struct mlx5_cache_ent *ent; - struct dentry *dir; - int i;
if (!mlx5_debugfs_root || dev->is_rep) return;
- dir = mlx5_debugfs_get_dev_root(dev->mdev); - cache->fs_root = debugfs_create_dir("mr_cache", dir); - - for (i = 0; i < MAX_MKEY_CACHE_ENTRIES; i++) { - ent = mkey_cache_ent_from_order(dev, i); - sprintf(ent->name, "%d", ent->order); - dir = debugfs_create_dir(ent->name, cache->fs_root); - debugfs_create_file("size", 0600, dir, ent, &size_fops); - debugfs_create_file("limit", 0600, dir, ent, &limit_fops); - debugfs_create_ulong("cur", 0400, dir, &ent->stored); - debugfs_create_u32("miss", 0600, dir, &ent->miss); - } + cache->fs_root = debugfs_create_dir("mr_cache", dbg_root); }
static void delay_time_func(struct timer_list *t) @@ -798,9 +868,11 @@ static void delay_time_func(struct timer_list *t) }
struct mlx5_cache_ent *mlx5r_cache_create_ent(struct mlx5_ib_dev *dev, - int order) + struct mlx5r_cache_rb_key rb_key, + bool persistent_entry) { struct mlx5_cache_ent *ent; + int order; int ret;
ent = kzalloc(sizeof(*ent), GFP_KERNEL); @@ -808,7 +880,7 @@ struct mlx5_cache_ent *mlx5r_cache_create_ent(struct mlx5_ib_dev *dev, return ERR_PTR(-ENOMEM);
xa_init_flags(&ent->mkeys, XA_FLAGS_LOCK_IRQ); - ent->order = order; + ent->rb_key = rb_key; ent->dev = dev;
INIT_DELAYED_WORK(&ent->dwork, delayed_cache_work_func); @@ -818,13 +890,36 @@ struct mlx5_cache_ent *mlx5r_cache_create_ent(struct mlx5_ib_dev *dev, kfree(ent); return ERR_PTR(ret); } + + if (persistent_entry) { + if (rb_key.access_mode == MLX5_MKC_ACCESS_MODE_KSM) + order = MLX5_IMR_KSM_CACHE_ENTRY; + else + order = order_base_2(rb_key.ndescs) - 2; + + if ((dev->mdev->profile.mask & MLX5_PROF_MASK_MR_CACHE) && + !dev->is_rep && mlx5_core_is_pf(dev->mdev) && + mlx5r_umr_can_load_pas(dev, 0)) + ent->limit = dev->mdev->profile.mr_cache[order].limit; + else + ent->limit = 0; + + mlx5_mkey_cache_debugfs_add_ent(dev, ent); + } + return ent; }
int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev) { struct mlx5_mkey_cache *cache = &dev->cache; + struct rb_root *root = &dev->cache.rb_root; + struct mlx5r_cache_rb_key rb_key = { + .access_mode = MLX5_MKC_ACCESS_MODE_MTT, + }; struct mlx5_cache_ent *ent; + struct rb_node *node; + int ret; int i;
mutex_init(&dev->slow_path_mutex); @@ -838,33 +933,32 @@ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev)
mlx5_cmd_init_async_ctx(dev->mdev, &dev->async_ctx); timer_setup(&dev->delay_timer, delay_time_func, 0); - for (i = 0; i < MAX_MKEY_CACHE_ENTRIES; i++) { - ent = mlx5r_cache_create_ent(dev, i); - - if (i > MKEY_CACHE_LAST_STD_ENTRY) { - mlx5_odp_init_mkey_cache_entry(ent); - continue; + mlx5_mkey_cache_debugfs_init(dev); + for (i = 0; i <= mkey_cache_max_order(dev); i++) { + rb_key.ndescs = 1 << (i + 2); + ent = mlx5r_cache_create_ent(dev, rb_key, true); + if (IS_ERR(ent)) { + ret = PTR_ERR(ent); + goto err; } + }
- if (ent->order > mkey_cache_max_order(dev)) - continue; + ret = mlx5_odp_init_mkey_cache(dev); + if (ret) + goto err;
- ent->ndescs = 1 << ent->order; - ent->access_mode = MLX5_MKC_ACCESS_MODE_MTT; - if ((dev->mdev->profile.mask & MLX5_PROF_MASK_MR_CACHE) && - !dev->is_rep && mlx5_core_is_pf(dev->mdev) && - mlx5r_umr_can_load_pas(dev, 0)) - ent->limit = dev->mdev->profile.mr_cache[i].limit; - else - ent->limit = 0; + for (node = rb_first(root); node; node = rb_next(node)) { + ent = rb_entry(node, struct mlx5_cache_ent, node); xa_lock_irq(&ent->mkeys); queue_adjust_cache_locked(ent); xa_unlock_irq(&ent->mkeys); }
- mlx5_mkey_cache_debugfs_init(dev); - return 0; + +err: + mlx5_ib_warn(dev, "failed to create mkey cache entry\n"); + return ret; }
int mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev) @@ -965,7 +1059,7 @@ static int get_octo_len(u64 addr, u64 len, int page_shift) static int mkey_cache_max_order(struct mlx5_ib_dev *dev) { if (MLX5_CAP_GEN(dev->mdev, umr_extended_translation_offset)) - return MKEY_CACHE_LAST_STD_ENTRY + 2; + return MKEY_CACHE_LAST_STD_ENTRY; return MLX5_MAX_UMR_SHIFT; }
@@ -995,6 +1089,9 @@ static struct mlx5_ib_mr *alloc_cacheable_mr(struct ib_pd *pd, struct ib_umem *umem, u64 iova, int access_flags) { + struct mlx5r_cache_rb_key rb_key = { + .access_mode = MLX5_MKC_ACCESS_MODE_MTT, + }; struct mlx5_ib_dev *dev = to_mdev(pd->device); struct mlx5_cache_ent *ent; struct mlx5_ib_mr *mr; @@ -1007,8 +1104,11 @@ static struct mlx5_ib_mr *alloc_cacheable_mr(struct ib_pd *pd, 0, iova); if (WARN_ON(!page_size)) return ERR_PTR(-EINVAL); - ent = mkey_cache_ent_from_order( - dev, order_base_2(ib_umem_num_dma_blocks(umem, page_size))); + + rb_key.ndescs = ib_umem_num_dma_blocks(umem, page_size); + rb_key.ats = mlx5_umem_needs_ats(dev, umem, access_flags); + rb_key.access_flags = get_unchangeable_access_flags(dev, access_flags); + ent = mkey_cache_ent_from_rb_key(dev, rb_key); /* * Matches access in alloc_cache_mr(). If the MR can't come from the * cache then synchronously create an uncached one. @@ -1022,7 +1122,7 @@ static struct mlx5_ib_mr *alloc_cacheable_mr(struct ib_pd *pd, return mr; }
- mr = mlx5_mr_cache_alloc(dev, ent, access_flags); + mr = _mlx5_mr_cache_alloc(dev, ent, access_flags); if (IS_ERR(mr)) return mr;
@@ -1452,7 +1552,7 @@ static bool can_use_umr_rereg_pas(struct mlx5_ib_mr *mr, mlx5_umem_find_best_pgsz(new_umem, mkc, log_page_size, 0, iova); if (WARN_ON(!*page_size)) return false; - return (1ULL << mr->mmkey.cache_ent->order) >= + return (mr->mmkey.cache_ent->rb_key.ndescs) >= ib_umem_num_dma_blocks(new_umem, *page_size); }
diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c index 7f68940ca0d1e..96d4faabbff8a 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -406,7 +406,6 @@ static void mlx5_ib_page_fault_resume(struct mlx5_ib_dev *dev, static struct mlx5_ib_mr *implicit_get_child_mr(struct mlx5_ib_mr *imr, unsigned long idx) { - int order = order_base_2(MLX5_IMR_MTT_ENTRIES); struct mlx5_ib_dev *dev = mr_to_mdev(imr); struct ib_umem_odp *odp; struct mlx5_ib_mr *mr; @@ -419,8 +418,9 @@ static struct mlx5_ib_mr *implicit_get_child_mr(struct mlx5_ib_mr *imr, if (IS_ERR(odp)) return ERR_CAST(odp);
- BUILD_BUG_ON(order > MKEY_CACHE_LAST_STD_ENTRY); - mr = mlx5_mr_cache_alloc_order(dev, order, imr->access_flags); + mr = mlx5_mr_cache_alloc(dev, imr->access_flags, + MLX5_MKC_ACCESS_MODE_MTT, + MLX5_IMR_MTT_ENTRIES); if (IS_ERR(mr)) { ib_umem_odp_release(odp); return mr; @@ -494,8 +494,8 @@ struct mlx5_ib_mr *mlx5_ib_alloc_implicit_mr(struct mlx5_ib_pd *pd, if (IS_ERR(umem_odp)) return ERR_CAST(umem_odp);
- imr = mlx5_mr_cache_alloc_order(dev, MLX5_IMR_KSM_CACHE_ENTRY, - access_flags); + imr = mlx5_mr_cache_alloc(dev, access_flags, MLX5_MKC_ACCESS_MODE_KSM, + mlx5_imr_ksm_entries); if (IS_ERR(imr)) { ib_umem_odp_release(umem_odp); return imr; @@ -1591,12 +1591,22 @@ mlx5_ib_odp_destroy_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq) return err; }
-void mlx5_odp_init_mkey_cache_entry(struct mlx5_cache_ent *ent) +int mlx5_odp_init_mkey_cache(struct mlx5_ib_dev *dev) { - if (!(ent->dev->odp_caps.general_caps & IB_ODP_SUPPORT_IMPLICIT)) - return; - ent->ndescs = mlx5_imr_ksm_entries; - ent->access_mode = MLX5_MKC_ACCESS_MODE_KSM; + struct mlx5r_cache_rb_key rb_key = { + .access_mode = MLX5_MKC_ACCESS_MODE_KSM, + .ndescs = mlx5_imr_ksm_entries, + }; + struct mlx5_cache_ent *ent; + + if (!(dev->odp_caps.general_caps & IB_ODP_SUPPORT_IMPLICIT)) + return 0; + + ent = mlx5r_cache_create_ent(dev, rb_key, true); + if (IS_ERR(ent)) + return PTR_ERR(ent); + + return 0; }
static const struct ib_device_ops mlx5_ib_dev_odp_ops = {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michael Guralnik michaelgur@nvidia.com
[ Upstream commit dd1b913fb0d0e3e6d55e92d2319d954474dd66ac ]
Currently, when deregistering an MR, if the mkey doesn't belong to a cache entry, it will be destroyed. As a result, restarting applications with many non-cached mkeys is not efficient, since all the mkeys are destroyed and then recreated. This process takes a long time (for 100,000 MRs, it is ~20 seconds for dereg and ~28 seconds for re-reg).
To shorten the restart runtime, insert all cacheable mkeys into the cache. If no existing entry fits the mkey properties, create a temporary entry that does.
After a predetermined timeout, the cache entries will shrink to the initial high limit.
The mkeys will still be in the cache when they are consumed again after an application restart. Therefore, registration will be much faster (for 100,000 MRs, it is ~4 seconds for dereg and ~5 seconds for re-reg).
The temporary cache entries created to store the non-cache mkeys are not exposed through sysfs like the default cache entries.
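A rough sketch of the dereg-time flow described above; the helper names here are hypothetical stand-ins, not the driver's actual functions:

static int cache_mkey_on_dereg(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
{
	struct mlx5_cache_ent *ent;

	/* mkey came from the cache: just return it to its entry */
	if (mr->mmkey.cache_ent)
		return push_mkey_to_entry(mr->mmkey.cache_ent, mr->mmkey.key);

	/*
	 * Otherwise find an exact-fit entry, or create a temporary,
	 * non-exposed one that fits the mkey's properties.
	 */
	ent = find_exact_entry(dev, mr->mmkey.rb_key);
	if (!ent)
		ent = create_temp_entry(dev, mr->mmkey.rb_key);
	if (IS_ERR(ent))
		return PTR_ERR(ent);	/* caller falls back to destroying the mkey */

	mr->mmkey.cache_ent = ent;
	return push_mkey_to_entry(ent, mr->mmkey.key);
}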
Link: https://lore.kernel.org/r/20230125222807.6921-6-michaelgur@nvidia.com Signed-off-by: Michael Guralnik michaelgur@nvidia.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Stable-dep-of: d97505baea64 ("RDMA/mlx5: Fix the recovery flow of the UMR QP") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/mlx5/mlx5_ib.h | 2 + drivers/infiniband/hw/mlx5/mr.c | 55 +++++++++++++++++++++------- 2 files changed, 44 insertions(+), 13 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index 7c9d5648947e9..f345e2ae394d2 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -650,6 +650,8 @@ struct mlx5_ib_mkey { unsigned int ndescs; struct wait_queue_head wait; refcount_t usecount; + /* User Mkey must hold either a rb_key or a cache_ent. */ + struct mlx5r_cache_rb_key rb_key; struct mlx5_cache_ent *cache_ent; };
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index 1060b30a837a0..bf1ca7565be67 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -1110,15 +1110,14 @@ static struct mlx5_ib_mr *alloc_cacheable_mr(struct ib_pd *pd, rb_key.access_flags = get_unchangeable_access_flags(dev, access_flags); ent = mkey_cache_ent_from_rb_key(dev, rb_key); /* - * Matches access in alloc_cache_mr(). If the MR can't come from the - * cache then synchronously create an uncached one. + * If the MR can't come from the cache then synchronously create an uncached + * one. */ - if (!ent || ent->limit == 0 || - !mlx5r_umr_can_reconfig(dev, 0, access_flags) || - mlx5_umem_needs_ats(dev, umem, access_flags)) { + if (!ent) { mutex_lock(&dev->slow_path_mutex); mr = reg_create(pd, umem, iova, access_flags, page_size, false); mutex_unlock(&dev->slow_path_mutex); + mr->mmkey.rb_key = rb_key; return mr; }
@@ -1209,6 +1208,7 @@ static struct mlx5_ib_mr *reg_create(struct ib_pd *pd, struct ib_umem *umem, goto err_2; } mr->mmkey.type = MLX5_MKEY_MR; + mr->mmkey.ndescs = get_octo_len(iova, umem->length, mr->page_shift); mr->umem = umem; set_mr_fields(dev, mr, umem->length, access_flags, iova); kvfree(in); @@ -1747,6 +1747,40 @@ mlx5_free_priv_descs(struct mlx5_ib_mr *mr) } }
+static int cache_ent_find_and_store(struct mlx5_ib_dev *dev, + struct mlx5_ib_mr *mr) +{ + struct mlx5_mkey_cache *cache = &dev->cache; + struct mlx5_cache_ent *ent; + + if (mr->mmkey.cache_ent) { + xa_lock_irq(&mr->mmkey.cache_ent->mkeys); + mr->mmkey.cache_ent->in_use--; + xa_unlock_irq(&mr->mmkey.cache_ent->mkeys); + goto end; + } + + mutex_lock(&cache->rb_lock); + ent = mkey_cache_ent_from_rb_key(dev, mr->mmkey.rb_key); + mutex_unlock(&cache->rb_lock); + if (ent) { + if (ent->rb_key.ndescs == mr->mmkey.rb_key.ndescs) { + mr->mmkey.cache_ent = ent; + goto end; + } + } + + ent = mlx5r_cache_create_ent(dev, mr->mmkey.rb_key, false); + if (IS_ERR(ent)) + return PTR_ERR(ent); + + mr->mmkey.cache_ent = ent; + +end: + return push_mkey(mr->mmkey.cache_ent, false, + xa_mk_value(mr->mmkey.key)); +} + int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) { struct mlx5_ib_mr *mr = to_mmr(ibmr); @@ -1792,16 +1826,11 @@ int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) }
/* Stop DMA */ - if (mr->mmkey.cache_ent) { - xa_lock_irq(&mr->mmkey.cache_ent->mkeys); - mr->mmkey.cache_ent->in_use--; - xa_unlock_irq(&mr->mmkey.cache_ent->mkeys); - + if (mr->umem && mlx5r_umr_can_load_pas(dev, mr->umem->length)) if (mlx5r_umr_revoke_mr(mr) || - push_mkey(mr->mmkey.cache_ent, false, - xa_mk_value(mr->mmkey.key))) + cache_ent_find_and_store(dev, mr)) mr->mmkey.cache_ent = NULL; - } + if (!mr->mmkey.cache_ent) { rc = destroy_mkey(to_mdev(mr->ibmr.device), mr); if (rc)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michael Guralnik michaelgur@nvidia.com
[ Upstream commit 627122280c878cf5d3cda2d2c5a0a8f6a7e35cb7 ]
The non-cache mkeys are stored in the cache only to shorten application restart time. Don't store them longer than needed.
Configure cache entries that store non-cache MRs as temporary entries. If 30 seconds have passed and no user has reclaimed the temporarily cached mkeys, an asynchronous work item will destroy the mkey entries.
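The eviction described above follows the standard kernel delayed-work pattern; a minimal sketch (structure and function names simplified, not the literal driver code):

#include <linux/workqueue.h>
#include <linux/jiffies.h>

struct tmp_cache {
	struct delayed_work remove_ent_dwork;
	/* ... tree of entries, lock, ... */
};

static void remove_ent_work_func(struct work_struct *work)
{
	struct tmp_cache *cache = container_of(work, struct tmp_cache,
					       remove_ent_dwork.work);

	/*
	 * Walk the entries and destroy the mkeys of those marked
	 * temporary that no user has reclaimed in the meantime.
	 */
}

/* Called whenever a temporary entry is created: (re)arm the 30s timer. */
static void arm_tmp_gc(struct tmp_cache *cache, struct workqueue_struct *wq)
{
	mod_delayed_work(wq, &cache->remove_ent_dwork,
			 msecs_to_jiffies(30 * 1000));
}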
Link: https://lore.kernel.org/r/20230125222807.6921-7-michaelgur@nvidia.com Signed-off-by: Michael Guralnik michaelgur@nvidia.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Stable-dep-of: d97505baea64 ("RDMA/mlx5: Fix the recovery flow of the UMR QP") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/mlx5/mlx5_ib.h | 9 ++- drivers/infiniband/hw/mlx5/mr.c | 94 ++++++++++++++++++++++------ drivers/infiniband/hw/mlx5/odp.c | 2 +- 3 files changed, 82 insertions(+), 23 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index f345e2ae394d2..7c72e0e9db54a 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -770,6 +770,7 @@ struct mlx5_cache_ent { struct rb_node node; struct mlx5r_cache_rb_key rb_key;
+ u8 is_tmp:1; u8 disabled:1; u8 fill_to_high_water:1;
@@ -803,6 +804,7 @@ struct mlx5_mkey_cache { struct mutex rb_lock; struct dentry *fs_root; unsigned long last_add; + struct delayed_work remove_ent_dwork; };
struct mlx5_ib_port_resources { @@ -1346,9 +1348,10 @@ void mlx5_ib_copy_pas(u64 *old, u64 *new, int step, int num); int mlx5_ib_get_cqe_size(struct ib_cq *ibcq); int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev); int mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev); -struct mlx5_cache_ent *mlx5r_cache_create_ent(struct mlx5_ib_dev *dev, - struct mlx5r_cache_rb_key rb_key, - bool persistent_entry); +struct mlx5_cache_ent * +mlx5r_cache_create_ent_locked(struct mlx5_ib_dev *dev, + struct mlx5r_cache_rb_key rb_key, + bool persistent_entry);
struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev, int access_flags, int access_mode, diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index bf1ca7565be67..2c1a935734273 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -140,19 +140,16 @@ static void create_mkey_warn(struct mlx5_ib_dev *dev, int status, void *out) mlx5_cmd_out_err(dev->mdev, MLX5_CMD_OP_CREATE_MKEY, 0, out); }
- -static int push_mkey(struct mlx5_cache_ent *ent, bool limit_pendings, - void *to_store) +static int push_mkey_locked(struct mlx5_cache_ent *ent, bool limit_pendings, + void *to_store) { XA_STATE(xas, &ent->mkeys, 0); void *curr;
- xa_lock_irq(&ent->mkeys); if (limit_pendings && - (ent->reserved - ent->stored) > MAX_PENDING_REG_MR) { - xa_unlock_irq(&ent->mkeys); + (ent->reserved - ent->stored) > MAX_PENDING_REG_MR) return -EAGAIN; - } + while (1) { /* * This is cmpxchg (NULL, XA_ZERO_ENTRY) however this version @@ -191,6 +188,7 @@ static int push_mkey(struct mlx5_cache_ent *ent, bool limit_pendings, break; xa_lock_irq(&ent->mkeys); } + xa_lock_irq(&ent->mkeys); if (xas_error(&xas)) return xas_error(&xas); if (WARN_ON(curr)) @@ -198,6 +196,17 @@ static int push_mkey(struct mlx5_cache_ent *ent, bool limit_pendings, return 0; }
+static int push_mkey(struct mlx5_cache_ent *ent, bool limit_pendings, + void *to_store) +{ + int ret; + + xa_lock_irq(&ent->mkeys); + ret = push_mkey_locked(ent, limit_pendings, to_store); + xa_unlock_irq(&ent->mkeys); + return ret; +} + static void undo_push_reserve_mkey(struct mlx5_cache_ent *ent) { void *old; @@ -545,7 +554,7 @@ static void queue_adjust_cache_locked(struct mlx5_cache_ent *ent) { lockdep_assert_held(&ent->mkeys.xa_lock);
- if (ent->disabled || READ_ONCE(ent->dev->fill_delay)) + if (ent->disabled || READ_ONCE(ent->dev->fill_delay) || ent->is_tmp) return; if (ent->stored < ent->limit) { ent->fill_to_high_water = true; @@ -675,7 +684,6 @@ static int mlx5_cache_ent_insert(struct mlx5_mkey_cache *cache, struct mlx5_cache_ent *cur; int cmp;
- mutex_lock(&cache->rb_lock); /* Figure out where to put new node */ while (*new) { cur = rb_entry(*new, struct mlx5_cache_ent, node); @@ -695,7 +703,6 @@ static int mlx5_cache_ent_insert(struct mlx5_mkey_cache *cache, rb_link_node(&ent->node, parent, new); rb_insert_color(&ent->node, &cache->rb_root);
- mutex_unlock(&cache->rb_lock); return 0; }
@@ -867,9 +874,10 @@ static void delay_time_func(struct timer_list *t) WRITE_ONCE(dev->fill_delay, 0); }
-struct mlx5_cache_ent *mlx5r_cache_create_ent(struct mlx5_ib_dev *dev, - struct mlx5r_cache_rb_key rb_key, - bool persistent_entry) +struct mlx5_cache_ent * +mlx5r_cache_create_ent_locked(struct mlx5_ib_dev *dev, + struct mlx5r_cache_rb_key rb_key, + bool persistent_entry) { struct mlx5_cache_ent *ent; int order; @@ -882,6 +890,7 @@ struct mlx5_cache_ent *mlx5r_cache_create_ent(struct mlx5_ib_dev *dev, xa_init_flags(&ent->mkeys, XA_FLAGS_LOCK_IRQ); ent->rb_key = rb_key; ent->dev = dev; + ent->is_tmp = !persistent_entry;
INIT_DELAYED_WORK(&ent->dwork, delayed_cache_work_func);
@@ -905,11 +914,44 @@ struct mlx5_cache_ent *mlx5r_cache_create_ent(struct mlx5_ib_dev *dev, ent->limit = 0;
mlx5_mkey_cache_debugfs_add_ent(dev, ent); + } else { + mod_delayed_work(ent->dev->cache.wq, + &ent->dev->cache.remove_ent_dwork, + msecs_to_jiffies(30 * 1000)); }
return ent; }
+static void remove_ent_work_func(struct work_struct *work) +{ + struct mlx5_mkey_cache *cache; + struct mlx5_cache_ent *ent; + struct rb_node *cur; + + cache = container_of(work, struct mlx5_mkey_cache, + remove_ent_dwork.work); + mutex_lock(&cache->rb_lock); + cur = rb_last(&cache->rb_root); + while (cur) { + ent = rb_entry(cur, struct mlx5_cache_ent, node); + cur = rb_prev(cur); + mutex_unlock(&cache->rb_lock); + + xa_lock_irq(&ent->mkeys); + if (!ent->is_tmp) { + xa_unlock_irq(&ent->mkeys); + mutex_lock(&cache->rb_lock); + continue; + } + xa_unlock_irq(&ent->mkeys); + + clean_keys(ent->dev, ent); + mutex_lock(&cache->rb_lock); + } + mutex_unlock(&cache->rb_lock); +} + int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev) { struct mlx5_mkey_cache *cache = &dev->cache; @@ -925,6 +967,7 @@ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev) mutex_init(&dev->slow_path_mutex); mutex_init(&dev->cache.rb_lock); dev->cache.rb_root = RB_ROOT; + INIT_DELAYED_WORK(&dev->cache.remove_ent_dwork, remove_ent_work_func); cache->wq = alloc_ordered_workqueue("mkey_cache", WQ_MEM_RECLAIM); if (!cache->wq) { mlx5_ib_warn(dev, "failed to create work queue\n"); @@ -934,9 +977,10 @@ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev) mlx5_cmd_init_async_ctx(dev->mdev, &dev->async_ctx); timer_setup(&dev->delay_timer, delay_time_func, 0); mlx5_mkey_cache_debugfs_init(dev); + mutex_lock(&cache->rb_lock); for (i = 0; i <= mkey_cache_max_order(dev); i++) { rb_key.ndescs = 1 << (i + 2); - ent = mlx5r_cache_create_ent(dev, rb_key, true); + ent = mlx5r_cache_create_ent_locked(dev, rb_key, true); if (IS_ERR(ent)) { ret = PTR_ERR(ent); goto err; @@ -947,6 +991,7 @@ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev) if (ret) goto err;
+ mutex_unlock(&cache->rb_lock); for (node = rb_first(root); node; node = rb_next(node)) { ent = rb_entry(node, struct mlx5_cache_ent, node); xa_lock_irq(&ent->mkeys); @@ -957,6 +1002,7 @@ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev) return 0;
err: + mutex_unlock(&cache->rb_lock); mlx5_ib_warn(dev, "failed to create mkey cache entry\n"); return ret; } @@ -970,6 +1016,7 @@ int mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev) if (!dev->cache.wq) return 0;
+ cancel_delayed_work_sync(&dev->cache.remove_ent_dwork); mutex_lock(&dev->cache.rb_lock); for (node = rb_first(root); node; node = rb_next(node)) { ent = rb_entry(node, struct mlx5_cache_ent, node); @@ -1752,33 +1799,42 @@ static int cache_ent_find_and_store(struct mlx5_ib_dev *dev, { struct mlx5_mkey_cache *cache = &dev->cache; struct mlx5_cache_ent *ent; + int ret;
if (mr->mmkey.cache_ent) { xa_lock_irq(&mr->mmkey.cache_ent->mkeys); mr->mmkey.cache_ent->in_use--; - xa_unlock_irq(&mr->mmkey.cache_ent->mkeys); goto end; }
mutex_lock(&cache->rb_lock); ent = mkey_cache_ent_from_rb_key(dev, mr->mmkey.rb_key); - mutex_unlock(&cache->rb_lock); if (ent) { if (ent->rb_key.ndescs == mr->mmkey.rb_key.ndescs) { + if (ent->disabled) { + mutex_unlock(&cache->rb_lock); + return -EOPNOTSUPP; + } mr->mmkey.cache_ent = ent; + xa_lock_irq(&mr->mmkey.cache_ent->mkeys); + mutex_unlock(&cache->rb_lock); goto end; } }
- ent = mlx5r_cache_create_ent(dev, mr->mmkey.rb_key, false); + ent = mlx5r_cache_create_ent_locked(dev, mr->mmkey.rb_key, false); + mutex_unlock(&cache->rb_lock); if (IS_ERR(ent)) return PTR_ERR(ent);
mr->mmkey.cache_ent = ent; + xa_lock_irq(&mr->mmkey.cache_ent->mkeys);
end: - return push_mkey(mr->mmkey.cache_ent, false, - xa_mk_value(mr->mmkey.key)); + ret = push_mkey_locked(mr->mmkey.cache_ent, false, + xa_mk_value(mr->mmkey.key)); + xa_unlock_irq(&mr->mmkey.cache_ent->mkeys); + return ret; }
int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c index 96d4faabbff8a..6ba4aa1afdc2d 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -1602,7 +1602,7 @@ int mlx5_odp_init_mkey_cache(struct mlx5_ib_dev *dev) if (!(dev->odp_caps.general_caps & IB_ODP_SUPPORT_IMPLICIT)) return 0;
- ent = mlx5r_cache_create_ent(dev, rb_key, true); + ent = mlx5r_cache_create_ent_locked(dev, rb_key, true); if (IS_ERR(ent)) return PTR_ERR(ent);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Shay Drory shayd@nvidia.com
[ Upstream commit 57e7071683ef6148c9f5ea0ba84598d2ba681375 ]
Currently, mkeys are managed via an xarray. This implementation leads to a performance degradation when many MRs are deregistered in parallel, due to the xarray's internal implementation; for example, deregistering 1M MRs via 64 threads takes ~15% more time [1].
Hence, implement mkey management via a LIFO queue, which resolves the degradation.
[1] 2.8us in kernel v5.19 compared to 3.2us in kernel v6.4
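Condensed, the replacement data structure is a LIFO of page-sized arrays chained on a list; push and pop touch only the tail page, so the hot path is O(1). A sketch of the push side under simplified names (pop is symmetric):

#define KEYS_PER_PAGE \
	((PAGE_SIZE - sizeof(struct list_head)) / sizeof(u32))

struct keys_page {
	u32 keys[KEYS_PER_PAGE];
	struct list_head list;
};

struct keys_queue {
	struct list_head pages;	/* chain of keys_page */
	u32 num_pages;
	unsigned long ci;	/* count of stored keys */
	spinlock_t lock;
};

static int keys_push_locked(struct keys_queue *q, u32 key)
{
	struct keys_page *page;

	if (q->ci >= q->num_pages * KEYS_PER_PAGE) {
		/* tail page full: grow by one page */
		page = kzalloc(sizeof(*page), GFP_ATOMIC);
		if (!page)
			return -ENOMEM;
		list_add_tail(&page->list, &q->pages);
		q->num_pages++;
	} else {
		page = list_last_entry(&q->pages, struct keys_page, list);
	}
	page->keys[q->ci % KEYS_PER_PAGE] = key;
	q->ci++;
	return 0;
}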
Signed-off-by: Shay Drory shayd@nvidia.com Link: https://lore.kernel.org/r/fde3d4cfab0f32f0ccb231cd113298256e1502c5.169528338... Signed-off-by: Leon Romanovsky leon@kernel.org Stable-dep-of: d97505baea64 ("RDMA/mlx5: Fix the recovery flow of the UMR QP") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/mlx5/mlx5_ib.h | 21 +- drivers/infiniband/hw/mlx5/mr.c | 324 ++++++++++++--------------- drivers/infiniband/hw/mlx5/umr.c | 4 +- 3 files changed, 169 insertions(+), 180 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index 7c72e0e9db54a..024d2071c6a5d 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -760,10 +760,25 @@ struct umr_common { unsigned int state; };
+#define NUM_MKEYS_PER_PAGE \ + ((PAGE_SIZE - sizeof(struct list_head)) / sizeof(u32)) + +struct mlx5_mkeys_page { + u32 mkeys[NUM_MKEYS_PER_PAGE]; + struct list_head list; +}; +static_assert(sizeof(struct mlx5_mkeys_page) == PAGE_SIZE); + +struct mlx5_mkeys_queue { + struct list_head pages_list; + u32 num_pages; + unsigned long ci; + spinlock_t lock; /* sync list ops */ +}; + struct mlx5_cache_ent { - struct xarray mkeys; - unsigned long stored; - unsigned long reserved; + struct mlx5_mkeys_queue mkeys_queue; + u32 pending;
char name[4];
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index 2c1a935734273..b66b8346c2dc6 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -140,110 +140,47 @@ static void create_mkey_warn(struct mlx5_ib_dev *dev, int status, void *out) mlx5_cmd_out_err(dev->mdev, MLX5_CMD_OP_CREATE_MKEY, 0, out); }
-static int push_mkey_locked(struct mlx5_cache_ent *ent, bool limit_pendings, - void *to_store) +static int push_mkey_locked(struct mlx5_cache_ent *ent, u32 mkey) { - XA_STATE(xas, &ent->mkeys, 0); - void *curr; + unsigned long tmp = ent->mkeys_queue.ci % NUM_MKEYS_PER_PAGE; + struct mlx5_mkeys_page *page;
- if (limit_pendings && - (ent->reserved - ent->stored) > MAX_PENDING_REG_MR) - return -EAGAIN; - - while (1) { - /* - * This is cmpxchg (NULL, XA_ZERO_ENTRY) however this version - * doesn't transparently unlock. Instead we set the xas index to - * the current value of reserved every iteration. - */ - xas_set(&xas, ent->reserved); - curr = xas_load(&xas); - if (!curr) { - if (to_store && ent->stored == ent->reserved) - xas_store(&xas, to_store); - else - xas_store(&xas, XA_ZERO_ENTRY); - if (xas_valid(&xas)) { - ent->reserved++; - if (to_store) { - if (ent->stored != ent->reserved) - __xa_store(&ent->mkeys, - ent->stored, - to_store, - GFP_KERNEL); - ent->stored++; - queue_adjust_cache_locked(ent); - WRITE_ONCE(ent->dev->cache.last_add, - jiffies); - } - } - } - xa_unlock_irq(&ent->mkeys); - - /* - * Notice xas_nomem() must always be called as it cleans - * up any cached allocation. - */ - if (!xas_nomem(&xas, GFP_KERNEL)) - break; - xa_lock_irq(&ent->mkeys); + lockdep_assert_held(&ent->mkeys_queue.lock); + if (ent->mkeys_queue.ci >= + ent->mkeys_queue.num_pages * NUM_MKEYS_PER_PAGE) { + page = kzalloc(sizeof(*page), GFP_ATOMIC); + if (!page) + return -ENOMEM; + ent->mkeys_queue.num_pages++; + list_add_tail(&page->list, &ent->mkeys_queue.pages_list); + } else { + page = list_last_entry(&ent->mkeys_queue.pages_list, + struct mlx5_mkeys_page, list); } - xa_lock_irq(&ent->mkeys); - if (xas_error(&xas)) - return xas_error(&xas); - if (WARN_ON(curr)) - return -EINVAL; - return 0; -} - -static int push_mkey(struct mlx5_cache_ent *ent, bool limit_pendings, - void *to_store) -{ - int ret; - - xa_lock_irq(&ent->mkeys); - ret = push_mkey_locked(ent, limit_pendings, to_store); - xa_unlock_irq(&ent->mkeys); - return ret; -} - -static void undo_push_reserve_mkey(struct mlx5_cache_ent *ent) -{ - void *old; - - ent->reserved--; - old = __xa_erase(&ent->mkeys, ent->reserved); - WARN_ON(old); -} - -static void push_to_reserved(struct mlx5_cache_ent *ent, u32 mkey) -{ - void *old;
- old = __xa_store(&ent->mkeys, ent->stored, xa_mk_value(mkey), 0); - WARN_ON(old); - ent->stored++; + page->mkeys[tmp] = mkey; + ent->mkeys_queue.ci++; + return 0; }
-static u32 pop_stored_mkey(struct mlx5_cache_ent *ent) +static int pop_mkey_locked(struct mlx5_cache_ent *ent) { - void *old, *xa_mkey; - - ent->stored--; - ent->reserved--; + unsigned long tmp = (ent->mkeys_queue.ci - 1) % NUM_MKEYS_PER_PAGE; + struct mlx5_mkeys_page *last_page; + u32 mkey;
- if (ent->stored == ent->reserved) { - xa_mkey = __xa_erase(&ent->mkeys, ent->stored); - WARN_ON(!xa_mkey); - return (u32)xa_to_value(xa_mkey); + lockdep_assert_held(&ent->mkeys_queue.lock); + last_page = list_last_entry(&ent->mkeys_queue.pages_list, + struct mlx5_mkeys_page, list); + mkey = last_page->mkeys[tmp]; + last_page->mkeys[tmp] = 0; + ent->mkeys_queue.ci--; + if (ent->mkeys_queue.num_pages > 1 && !tmp) { + list_del(&last_page->list); + ent->mkeys_queue.num_pages--; + kfree(last_page); } - - xa_mkey = __xa_store(&ent->mkeys, ent->stored, XA_ZERO_ENTRY, - GFP_KERNEL); - WARN_ON(!xa_mkey || xa_is_err(xa_mkey)); - old = __xa_erase(&ent->mkeys, ent->reserved); - WARN_ON(old); - return (u32)xa_to_value(xa_mkey); + return mkey; }
static void create_mkey_callback(int status, struct mlx5_async_work *context) @@ -257,10 +194,10 @@ static void create_mkey_callback(int status, struct mlx5_async_work *context) if (status) { create_mkey_warn(dev, status, mkey_out->out); kfree(mkey_out); - xa_lock_irqsave(&ent->mkeys, flags); - undo_push_reserve_mkey(ent); + spin_lock_irqsave(&ent->mkeys_queue.lock, flags); + ent->pending--; WRITE_ONCE(dev->fill_delay, 1); - xa_unlock_irqrestore(&ent->mkeys, flags); + spin_unlock_irqrestore(&ent->mkeys_queue.lock, flags); mod_timer(&dev->delay_timer, jiffies + HZ); return; } @@ -269,11 +206,12 @@ static void create_mkey_callback(int status, struct mlx5_async_work *context) MLX5_GET(create_mkey_out, mkey_out->out, mkey_index)); WRITE_ONCE(dev->cache.last_add, jiffies);
- xa_lock_irqsave(&ent->mkeys, flags); - push_to_reserved(ent, mkey_out->mkey); + spin_lock_irqsave(&ent->mkeys_queue.lock, flags); + push_mkey_locked(ent, mkey_out->mkey); /* If we are doing fill_to_high_water then keep going. */ queue_adjust_cache_locked(ent); - xa_unlock_irqrestore(&ent->mkeys, flags); + ent->pending--; + spin_unlock_irqrestore(&ent->mkeys_queue.lock, flags); kfree(mkey_out); }
@@ -329,24 +267,28 @@ static int add_keys(struct mlx5_cache_ent *ent, unsigned int num) set_cache_mkc(ent, mkc); async_create->ent = ent;
- err = push_mkey(ent, true, NULL); - if (err) + spin_lock_irq(&ent->mkeys_queue.lock); + if (ent->pending >= MAX_PENDING_REG_MR) { + err = -EAGAIN; goto free_async_create; + } + ent->pending++; + spin_unlock_irq(&ent->mkeys_queue.lock);
err = mlx5_ib_create_mkey_cb(async_create); if (err) { mlx5_ib_warn(ent->dev, "create mkey failed %d\n", err); - goto err_undo_reserve; + goto err_create_mkey; } }
return 0;
-err_undo_reserve: - xa_lock_irq(&ent->mkeys); - undo_push_reserve_mkey(ent); - xa_unlock_irq(&ent->mkeys); +err_create_mkey: + spin_lock_irq(&ent->mkeys_queue.lock); + ent->pending--; free_async_create: + spin_unlock_irq(&ent->mkeys_queue.lock); kfree(async_create); return err; } @@ -379,36 +321,36 @@ static void remove_cache_mr_locked(struct mlx5_cache_ent *ent) { u32 mkey;
- lockdep_assert_held(&ent->mkeys.xa_lock); - if (!ent->stored) + lockdep_assert_held(&ent->mkeys_queue.lock); + if (!ent->mkeys_queue.ci) return; - mkey = pop_stored_mkey(ent); - xa_unlock_irq(&ent->mkeys); + mkey = pop_mkey_locked(ent); + spin_unlock_irq(&ent->mkeys_queue.lock); mlx5_core_destroy_mkey(ent->dev->mdev, mkey); - xa_lock_irq(&ent->mkeys); + spin_lock_irq(&ent->mkeys_queue.lock); }
static int resize_available_mrs(struct mlx5_cache_ent *ent, unsigned int target, bool limit_fill) - __acquires(&ent->mkeys) __releases(&ent->mkeys) + __acquires(&ent->mkeys_queue.lock) __releases(&ent->mkeys_queue.lock) { int err;
- lockdep_assert_held(&ent->mkeys.xa_lock); + lockdep_assert_held(&ent->mkeys_queue.lock);
while (true) { if (limit_fill) target = ent->limit * 2; - if (target == ent->reserved) + if (target == ent->pending + ent->mkeys_queue.ci) return 0; - if (target > ent->reserved) { - u32 todo = target - ent->reserved; + if (target > ent->pending + ent->mkeys_queue.ci) { + u32 todo = target - (ent->pending + ent->mkeys_queue.ci);
- xa_unlock_irq(&ent->mkeys); + spin_unlock_irq(&ent->mkeys_queue.lock); err = add_keys(ent, todo); if (err == -EAGAIN) usleep_range(3000, 5000); - xa_lock_irq(&ent->mkeys); + spin_lock_irq(&ent->mkeys_queue.lock); if (err) { if (err != -EAGAIN) return err; @@ -436,7 +378,7 @@ static ssize_t size_write(struct file *filp, const char __user *buf, * cannot free MRs that are in use. Compute the target value for stored * mkeys. */ - xa_lock_irq(&ent->mkeys); + spin_lock_irq(&ent->mkeys_queue.lock); if (target < ent->in_use) { err = -EINVAL; goto err_unlock; @@ -449,12 +391,12 @@ static ssize_t size_write(struct file *filp, const char __user *buf, err = resize_available_mrs(ent, target, false); if (err) goto err_unlock; - xa_unlock_irq(&ent->mkeys); + spin_unlock_irq(&ent->mkeys_queue.lock);
return count;
err_unlock: - xa_unlock_irq(&ent->mkeys); + spin_unlock_irq(&ent->mkeys_queue.lock); return err; }
@@ -465,7 +407,8 @@ static ssize_t size_read(struct file *filp, char __user *buf, size_t count, char lbuf[20]; int err;
- err = snprintf(lbuf, sizeof(lbuf), "%ld\n", ent->stored + ent->in_use); + err = snprintf(lbuf, sizeof(lbuf), "%ld\n", + ent->mkeys_queue.ci + ent->in_use); if (err < 0) return err;
@@ -494,10 +437,10 @@ static ssize_t limit_write(struct file *filp, const char __user *buf, * Upon set we immediately fill the cache to high water mark implied by * the limit. */ - xa_lock_irq(&ent->mkeys); + spin_lock_irq(&ent->mkeys_queue.lock); ent->limit = var; err = resize_available_mrs(ent, 0, true); - xa_unlock_irq(&ent->mkeys); + spin_unlock_irq(&ent->mkeys_queue.lock); if (err) return err; return count; @@ -533,9 +476,9 @@ static bool someone_adding(struct mlx5_mkey_cache *cache) mutex_lock(&cache->rb_lock); for (node = rb_first(&cache->rb_root); node; node = rb_next(node)) { ent = rb_entry(node, struct mlx5_cache_ent, node); - xa_lock_irq(&ent->mkeys); - ret = ent->stored < ent->limit; - xa_unlock_irq(&ent->mkeys); + spin_lock_irq(&ent->mkeys_queue.lock); + ret = ent->mkeys_queue.ci < ent->limit; + spin_unlock_irq(&ent->mkeys_queue.lock); if (ret) { mutex_unlock(&cache->rb_lock); return true; @@ -552,26 +495,26 @@ static bool someone_adding(struct mlx5_mkey_cache *cache) */ static void queue_adjust_cache_locked(struct mlx5_cache_ent *ent) { - lockdep_assert_held(&ent->mkeys.xa_lock); + lockdep_assert_held(&ent->mkeys_queue.lock);
if (ent->disabled || READ_ONCE(ent->dev->fill_delay) || ent->is_tmp) return; - if (ent->stored < ent->limit) { + if (ent->mkeys_queue.ci < ent->limit) { ent->fill_to_high_water = true; mod_delayed_work(ent->dev->cache.wq, &ent->dwork, 0); } else if (ent->fill_to_high_water && - ent->reserved < 2 * ent->limit) { + ent->mkeys_queue.ci + ent->pending < 2 * ent->limit) { /* * Once we start populating due to hitting a low water mark * continue until we pass the high water mark. */ mod_delayed_work(ent->dev->cache.wq, &ent->dwork, 0); - } else if (ent->stored == 2 * ent->limit) { + } else if (ent->mkeys_queue.ci == 2 * ent->limit) { ent->fill_to_high_water = false; - } else if (ent->stored > 2 * ent->limit) { + } else if (ent->mkeys_queue.ci > 2 * ent->limit) { /* Queue deletion of excess entries */ ent->fill_to_high_water = false; - if (ent->stored != ent->reserved) + if (ent->pending) queue_delayed_work(ent->dev->cache.wq, &ent->dwork, msecs_to_jiffies(1000)); else @@ -585,15 +528,16 @@ static void __cache_work_func(struct mlx5_cache_ent *ent) struct mlx5_mkey_cache *cache = &dev->cache; int err;
- xa_lock_irq(&ent->mkeys); + spin_lock_irq(&ent->mkeys_queue.lock); if (ent->disabled) goto out;
- if (ent->fill_to_high_water && ent->reserved < 2 * ent->limit && + if (ent->fill_to_high_water && + ent->mkeys_queue.ci + ent->pending < 2 * ent->limit && !READ_ONCE(dev->fill_delay)) { - xa_unlock_irq(&ent->mkeys); + spin_unlock_irq(&ent->mkeys_queue.lock); err = add_keys(ent, 1); - xa_lock_irq(&ent->mkeys); + spin_lock_irq(&ent->mkeys_queue.lock); if (ent->disabled) goto out; if (err) { @@ -611,7 +555,7 @@ static void __cache_work_func(struct mlx5_cache_ent *ent) msecs_to_jiffies(1000)); } } - } else if (ent->stored > 2 * ent->limit) { + } else if (ent->mkeys_queue.ci > 2 * ent->limit) { bool need_delay;
/* @@ -626,11 +570,11 @@ static void __cache_work_func(struct mlx5_cache_ent *ent) * the garbage collection work to try to run in next cycle, in * order to free CPU resources to other tasks. */ - xa_unlock_irq(&ent->mkeys); + spin_unlock_irq(&ent->mkeys_queue.lock); need_delay = need_resched() || someone_adding(cache) || !time_after(jiffies, READ_ONCE(cache->last_add) + 300 * HZ); - xa_lock_irq(&ent->mkeys); + spin_lock_irq(&ent->mkeys_queue.lock); if (ent->disabled) goto out; if (need_delay) { @@ -641,7 +585,7 @@ static void __cache_work_func(struct mlx5_cache_ent *ent) queue_adjust_cache_locked(ent); } out: - xa_unlock_irq(&ent->mkeys); + spin_unlock_irq(&ent->mkeys_queue.lock); }
static void delayed_cache_work_func(struct work_struct *work) @@ -749,25 +693,25 @@ static struct mlx5_ib_mr *_mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev, if (!mr) return ERR_PTR(-ENOMEM);
- xa_lock_irq(&ent->mkeys); + spin_lock_irq(&ent->mkeys_queue.lock); ent->in_use++;
- if (!ent->stored) { + if (!ent->mkeys_queue.ci) { queue_adjust_cache_locked(ent); ent->miss++; - xa_unlock_irq(&ent->mkeys); + spin_unlock_irq(&ent->mkeys_queue.lock); err = create_cache_mkey(ent, &mr->mmkey.key); if (err) { - xa_lock_irq(&ent->mkeys); + spin_lock_irq(&ent->mkeys_queue.lock); ent->in_use--; - xa_unlock_irq(&ent->mkeys); + spin_unlock_irq(&ent->mkeys_queue.lock); kfree(mr); return ERR_PTR(err); } } else { - mr->mmkey.key = pop_stored_mkey(ent); + mr->mmkey.key = pop_mkey_locked(ent); queue_adjust_cache_locked(ent); - xa_unlock_irq(&ent->mkeys); + spin_unlock_irq(&ent->mkeys_queue.lock); } mr->mmkey.cache_ent = ent; mr->mmkey.type = MLX5_MKEY_MR; @@ -820,14 +764,14 @@ static void clean_keys(struct mlx5_ib_dev *dev, struct mlx5_cache_ent *ent) u32 mkey;
cancel_delayed_work(&ent->dwork); - xa_lock_irq(&ent->mkeys); - while (ent->stored) { - mkey = pop_stored_mkey(ent); - xa_unlock_irq(&ent->mkeys); + spin_lock_irq(&ent->mkeys_queue.lock); + while (ent->mkeys_queue.ci) { + mkey = pop_mkey_locked(ent); + spin_unlock_irq(&ent->mkeys_queue.lock); mlx5_core_destroy_mkey(dev->mdev, mkey); - xa_lock_irq(&ent->mkeys); + spin_lock_irq(&ent->mkeys_queue.lock); } - xa_unlock_irq(&ent->mkeys); + spin_unlock_irq(&ent->mkeys_queue.lock); }
static void mlx5_mkey_cache_debugfs_cleanup(struct mlx5_ib_dev *dev) @@ -852,7 +796,7 @@ static void mlx5_mkey_cache_debugfs_add_ent(struct mlx5_ib_dev *dev, dir = debugfs_create_dir(ent->name, dev->cache.fs_root); debugfs_create_file("size", 0600, dir, ent, &size_fops); debugfs_create_file("limit", 0600, dir, ent, &limit_fops); - debugfs_create_ulong("cur", 0400, dir, &ent->stored); + debugfs_create_ulong("cur", 0400, dir, &ent->mkeys_queue.ci); debugfs_create_u32("miss", 0600, dir, &ent->miss); }
@@ -874,6 +818,31 @@ static void delay_time_func(struct timer_list *t) WRITE_ONCE(dev->fill_delay, 0); }
+static int mlx5r_mkeys_init(struct mlx5_cache_ent *ent) +{ + struct mlx5_mkeys_page *page; + + page = kzalloc(sizeof(*page), GFP_KERNEL); + if (!page) + return -ENOMEM; + INIT_LIST_HEAD(&ent->mkeys_queue.pages_list); + spin_lock_init(&ent->mkeys_queue.lock); + list_add_tail(&page->list, &ent->mkeys_queue.pages_list); + ent->mkeys_queue.num_pages++; + return 0; +} + +static void mlx5r_mkeys_uninit(struct mlx5_cache_ent *ent) +{ + struct mlx5_mkeys_page *page; + + WARN_ON(ent->mkeys_queue.ci || ent->mkeys_queue.num_pages > 1); + page = list_last_entry(&ent->mkeys_queue.pages_list, + struct mlx5_mkeys_page, list); + list_del(&page->list); + kfree(page); +} + struct mlx5_cache_ent * mlx5r_cache_create_ent_locked(struct mlx5_ib_dev *dev, struct mlx5r_cache_rb_key rb_key, @@ -887,7 +856,9 @@ mlx5r_cache_create_ent_locked(struct mlx5_ib_dev *dev, if (!ent) return ERR_PTR(-ENOMEM);
- xa_init_flags(&ent->mkeys, XA_FLAGS_LOCK_IRQ); + ret = mlx5r_mkeys_init(ent); + if (ret) + goto mkeys_err; ent->rb_key = rb_key; ent->dev = dev; ent->is_tmp = !persistent_entry; @@ -895,10 +866,8 @@ mlx5r_cache_create_ent_locked(struct mlx5_ib_dev *dev, INIT_DELAYED_WORK(&ent->dwork, delayed_cache_work_func);
ret = mlx5_cache_ent_insert(&dev->cache, ent); - if (ret) { - kfree(ent); - return ERR_PTR(ret); - } + if (ret) + goto ent_insert_err;
if (persistent_entry) { if (rb_key.access_mode == MLX5_MKC_ACCESS_MODE_KSM) @@ -921,6 +890,11 @@ mlx5r_cache_create_ent_locked(struct mlx5_ib_dev *dev, }
return ent; +ent_insert_err: + mlx5r_mkeys_uninit(ent); +mkeys_err: + kfree(ent); + return ERR_PTR(ret); }
static void remove_ent_work_func(struct work_struct *work) @@ -938,13 +912,13 @@ static void remove_ent_work_func(struct work_struct *work) cur = rb_prev(cur); mutex_unlock(&cache->rb_lock);
- xa_lock_irq(&ent->mkeys); + spin_lock_irq(&ent->mkeys_queue.lock); if (!ent->is_tmp) { - xa_unlock_irq(&ent->mkeys); + spin_unlock_irq(&ent->mkeys_queue.lock); mutex_lock(&cache->rb_lock); continue; } - xa_unlock_irq(&ent->mkeys); + spin_unlock_irq(&ent->mkeys_queue.lock);
clean_keys(ent->dev, ent); mutex_lock(&cache->rb_lock); @@ -994,9 +968,9 @@ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev) mutex_unlock(&cache->rb_lock); for (node = rb_first(root); node; node = rb_next(node)) { ent = rb_entry(node, struct mlx5_cache_ent, node); - xa_lock_irq(&ent->mkeys); + spin_lock_irq(&ent->mkeys_queue.lock); queue_adjust_cache_locked(ent); - xa_unlock_irq(&ent->mkeys); + spin_unlock_irq(&ent->mkeys_queue.lock); }
return 0; @@ -1020,9 +994,9 @@ int mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev) mutex_lock(&dev->cache.rb_lock); for (node = rb_first(root); node; node = rb_next(node)) { ent = rb_entry(node, struct mlx5_cache_ent, node); - xa_lock_irq(&ent->mkeys); + spin_lock_irq(&ent->mkeys_queue.lock); ent->disabled = true; - xa_unlock_irq(&ent->mkeys); + spin_unlock_irq(&ent->mkeys_queue.lock); cancel_delayed_work_sync(&ent->dwork); }
@@ -1035,6 +1009,7 @@ int mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev) node = rb_next(node); clean_keys(dev, ent); rb_erase(&ent->node, root); + mlx5r_mkeys_uninit(ent); kfree(ent); } mutex_unlock(&dev->cache.rb_lock); @@ -1802,7 +1777,7 @@ static int cache_ent_find_and_store(struct mlx5_ib_dev *dev, int ret;
if (mr->mmkey.cache_ent) { - xa_lock_irq(&mr->mmkey.cache_ent->mkeys); + spin_lock_irq(&mr->mmkey.cache_ent->mkeys_queue.lock); mr->mmkey.cache_ent->in_use--; goto end; } @@ -1816,7 +1791,7 @@ static int cache_ent_find_and_store(struct mlx5_ib_dev *dev, return -EOPNOTSUPP; } mr->mmkey.cache_ent = ent; - xa_lock_irq(&mr->mmkey.cache_ent->mkeys); + spin_lock_irq(&mr->mmkey.cache_ent->mkeys_queue.lock); mutex_unlock(&cache->rb_lock); goto end; } @@ -1828,12 +1803,11 @@ static int cache_ent_find_and_store(struct mlx5_ib_dev *dev, return PTR_ERR(ent);
mr->mmkey.cache_ent = ent; - xa_lock_irq(&mr->mmkey.cache_ent->mkeys); + spin_lock_irq(&mr->mmkey.cache_ent->mkeys_queue.lock);
end: - ret = push_mkey_locked(mr->mmkey.cache_ent, false, - xa_mk_value(mr->mmkey.key)); - xa_unlock_irq(&mr->mmkey.cache_ent->mkeys); + ret = push_mkey_locked(mr->mmkey.cache_ent, mr->mmkey.key); + spin_unlock_irq(&mr->mmkey.cache_ent->mkeys_queue.lock); return ret; }
diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c index cb5cee3dee2b6..fa000182d0b41 100644 --- a/drivers/infiniband/hw/mlx5/umr.c +++ b/drivers/infiniband/hw/mlx5/umr.c @@ -332,8 +332,8 @@ static int mlx5r_umr_post_send_wait(struct mlx5_ib_dev *dev, u32 mkey,
WARN_ON_ONCE(1); mlx5_ib_warn(dev, - "reg umr failed (%u). Trying to recover and resubmit the flushed WQEs\n", - umr_context.status); + "reg umr failed (%u). Trying to recover and resubmit the flushed WQEs, mkey = %u\n", + umr_context.status, mkey); mutex_lock(&umrc->lock); err = mlx5r_umr_recover(dev); mutex_unlock(&umrc->lock);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yishai Hadas yishaih@nvidia.com
[ Upstream commit d97505baea64d93538b16baf14ce7b8c1fbad746 ]
This patch addresses an issue in the recovery flow of the UMR QP, ensuring tasks do not get stuck, as highlighted by the call trace [1].
During recovery, before transitioning the QP to the RESET state, the software must wait for all outstanding WRs to complete.
Failing to do so can cause the firmware to skip sending some flushed CQEs with errors and simply discard them upon the RESET, as per the IB specification.
This race condition can result in lost CQEs and tasks becoming stuck.
To resolve this, the patch sends a final WR which serves only as a barrier before moving the QP state to RESET.
Once a CQE is received for that final WR, it guarantees that no outstanding WRs remain, making it safe to transition the QP to RESET and subsequently back to RTS, restoring proper functionality.
Note: For the barrier WR, we simply reuse the failed and ready WR. Since the QP is in an error state, it will only receive IB_WC_WR_FLUSH_ERR. However, as it serves only as a barrier we don't care about its status.
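In miniature, the ordering the patch establishes looks like the following sketch (condensed from the diff below; illustrative only, not the literal driver code):

    /* 1) Repost the failed WR on the error-state QP purely as a
     *    barrier; it will complete with IB_WC_WR_FLUSH_ERR. */
    err = mlx5r_umr_post_send(umrc->qp, mkey, &umr_context->cqe,
                              wqe, with_data);
    if (!err)
            /* 2) Its CQE guarantees every older WR has already been
             *    flushed to the CQ, so none can be lost across RESET. */
            wait_for_completion(&umr_context->done);

    /* 3) Only now is it safe to cycle the QP back into service. */
    attr.qp_state = IB_QPS_RESET;
    ib_modify_qp(umrc->qp, &attr, IB_QP_STATE);
    mlx5r_umr_qp_rst2rts(dev, umrc->qp);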
[1] INFO: task rdma_resource_l:1922 blocked for more than 120 seconds. Tainted: G W 6.12.0-rc7+ #1626 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:rdma_resource_l state:D stack:0 pid:1922 tgid:1922 ppid:1369 flags:0x00004004 Call Trace: <TASK> __schedule+0x420/0xd30 schedule+0x47/0x130 schedule_timeout+0x280/0x300 ? mark_held_locks+0x48/0x80 ? lockdep_hardirqs_on_prepare+0xe5/0x1a0 wait_for_completion+0x75/0x130 mlx5r_umr_post_send_wait+0x3c2/0x5b0 [mlx5_ib] ? __pfx_mlx5r_umr_done+0x10/0x10 [mlx5_ib] mlx5r_umr_revoke_mr+0x93/0xc0 [mlx5_ib] __mlx5_ib_dereg_mr+0x299/0x520 [mlx5_ib] ? _raw_spin_unlock_irq+0x24/0x40 ? wait_for_completion+0xfe/0x130 ? rdma_restrack_put+0x63/0xe0 [ib_core] ib_dereg_mr_user+0x5f/0x120 [ib_core] ? lock_release+0xc6/0x280 destroy_hw_idr_uobject+0x1d/0x60 [ib_uverbs] uverbs_destroy_uobject+0x58/0x1d0 [ib_uverbs] uobj_destroy+0x3f/0x70 [ib_uverbs] ib_uverbs_cmd_verbs+0x3e4/0xbb0 [ib_uverbs] ? __pfx_uverbs_destroy_def_handler+0x10/0x10 [ib_uverbs] ? __lock_acquire+0x64e/0x2080 ? mark_held_locks+0x48/0x80 ? find_held_lock+0x2d/0xa0 ? lock_acquire+0xc1/0x2f0 ? ib_uverbs_ioctl+0xcb/0x170 [ib_uverbs] ? __fget_files+0xc3/0x1b0 ib_uverbs_ioctl+0xe7/0x170 [ib_uverbs] ? ib_uverbs_ioctl+0xcb/0x170 [ib_uverbs] __x64_sys_ioctl+0x1b0/0xa70 do_syscall_64+0x6b/0x140 entry_SYSCALL_64_after_hwframe+0x76/0x7e RIP: 0033:0x7f99c918b17b RSP: 002b:00007ffc766d0468 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: 00007ffc766d0578 RCX: 00007f99c918b17b RDX: 00007ffc766d0560 RSI: 00000000c0181b01 RDI: 0000000000000003 RBP: 00007ffc766d0540 R08: 00007f99c8f99010 R09: 000000000000bd7e R10: 00007f99c94c1c70 R11: 0000000000000246 R12: 00007ffc766d0530 R13: 000000000000001c R14: 0000000040246a80 R15: 0000000000000000 </TASK>
Fixes: 158e71bb69e3 ("RDMA/mlx5: Add a umr recovery flow") Signed-off-by: Yishai Hadas yishaih@nvidia.com Reviewed-by: Michael Guralnik michaelgur@nvidia.com Link: https://patch.msgid.link/27b51b92ec42dfb09d8096fcbd51878f397ce6ec.1737290141... Signed-off-by: Leon Romanovsky leon@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/mlx5/umr.c | 83 +++++++++++++++++++++----------- 1 file changed, 56 insertions(+), 27 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c index fa000182d0b41..1a39e86178ece 100644 --- a/drivers/infiniband/hw/mlx5/umr.c +++ b/drivers/infiniband/hw/mlx5/umr.c @@ -199,30 +199,6 @@ void mlx5r_umr_resource_cleanup(struct mlx5_ib_dev *dev) ib_dealloc_pd(dev->umrc.pd); }
-static int mlx5r_umr_recover(struct mlx5_ib_dev *dev) -{ - struct umr_common *umrc = &dev->umrc; - struct ib_qp_attr attr; - int err; - - attr.qp_state = IB_QPS_RESET; - err = ib_modify_qp(umrc->qp, &attr, IB_QP_STATE); - if (err) { - mlx5_ib_dbg(dev, "Couldn't modify UMR QP\n"); - goto err; - } - - err = mlx5r_umr_qp_rst2rts(dev, umrc->qp); - if (err) - goto err; - - umrc->state = MLX5_UMR_STATE_ACTIVE; - return 0; - -err: - umrc->state = MLX5_UMR_STATE_ERR; - return err; -}
static int mlx5r_umr_post_send(struct ib_qp *ibqp, u32 mkey, struct ib_cqe *cqe, struct mlx5r_umr_wqe *wqe, bool with_data) @@ -270,6 +246,61 @@ static int mlx5r_umr_post_send(struct ib_qp *ibqp, u32 mkey, struct ib_cqe *cqe, return err; }
+static int mlx5r_umr_recover(struct mlx5_ib_dev *dev, u32 mkey, + struct mlx5r_umr_context *umr_context, + struct mlx5r_umr_wqe *wqe, bool with_data) +{ + struct umr_common *umrc = &dev->umrc; + struct ib_qp_attr attr; + int err; + + mutex_lock(&umrc->lock); + /* Preventing any further WRs to be sent now */ + if (umrc->state != MLX5_UMR_STATE_RECOVER) { + mlx5_ib_warn(dev, "UMR recovery encountered an unexpected state=%d\n", + umrc->state); + umrc->state = MLX5_UMR_STATE_RECOVER; + } + mutex_unlock(&umrc->lock); + + /* Sending a final/barrier WR (the failed one) and wait for its completion. + * This will ensure that all the previous WRs got a completion before + * we set the QP state to RESET. + */ + err = mlx5r_umr_post_send(umrc->qp, mkey, &umr_context->cqe, wqe, + with_data); + if (err) { + mlx5_ib_warn(dev, "UMR recovery post send failed, err %d\n", err); + goto err; + } + + /* Since the QP is in an error state, it will only receive + * IB_WC_WR_FLUSH_ERR. However, as it serves only as a barrier + * we don't care about its status. + */ + wait_for_completion(&umr_context->done); + + attr.qp_state = IB_QPS_RESET; + err = ib_modify_qp(umrc->qp, &attr, IB_QP_STATE); + if (err) { + mlx5_ib_warn(dev, "Couldn't modify UMR QP to RESET, err=%d\n", err); + goto err; + } + + err = mlx5r_umr_qp_rst2rts(dev, umrc->qp); + if (err) { + mlx5_ib_warn(dev, "Couldn't modify UMR QP to RTS, err=%d\n", err); + goto err; + } + + umrc->state = MLX5_UMR_STATE_ACTIVE; + return 0; + +err: + umrc->state = MLX5_UMR_STATE_ERR; + return err; +} + static void mlx5r_umr_done(struct ib_cq *cq, struct ib_wc *wc) { struct mlx5_ib_umr_context *context = @@ -334,9 +365,7 @@ static int mlx5r_umr_post_send_wait(struct mlx5_ib_dev *dev, u32 mkey, mlx5_ib_warn(dev, "reg umr failed (%u). Trying to recover and resubmit the flushed WQEs, mkey = %u\n", umr_context.status, mkey); - mutex_lock(&umrc->lock); - err = mlx5r_umr_recover(dev); - mutex_unlock(&umrc->lock); + err = mlx5r_umr_recover(dev, mkey, &umr_context, wqe, with_data); if (err) mlx5_ib_warn(dev, "couldn't recover UMR, err %d\n", err);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mark Zhang markzhang@nvidia.com
[ Upstream commit 12d044770e12c4205fa69535b4fa8a9981fea98f ]
When a DCT QP is created on an active lag, its dctc.port is assigned in a round-robin way, ranging from 1 to dev->lag_port. In this case, querying this QP may return qp_attr.port_num > 2. Fix this by setting qp->port when modifying a DCT QP, and reading port_num from qp->port instead of dctc.port when querying it.
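Condensed to its two assignments (taken from the hunks below; comments added for illustration):

    /* modify: the internal dctc.port is assigned round-robin across
     * the LAG ports and may exceed the device's IB port count, so
     * cache the port number the user actually requested. */
    qp->port = attr->port_num;

    /* query: report the cached value instead of the internal one. */
    qp_attr->port_num = mqp->port;   /* was MLX5_GET(dctc, dctc, port) */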
Fixes: 7c4b1ab9f167 ("IB/mlx5: Add DCT RoCE LAG support") Signed-off-by: Mark Zhang markzhang@nvidia.com Reviewed-by: Maher Sanalla msanalla@nvidia.com Link: https://patch.msgid.link/94c76bf0adbea997f87ffa27674e0a7118ad92a9.1737290358... Signed-off-by: Leon Romanovsky leon@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/mlx5/qp.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c index 8d132b726c64b..d782a494abcda 100644 --- a/drivers/infiniband/hw/mlx5/qp.c +++ b/drivers/infiniband/hw/mlx5/qp.c @@ -4466,6 +4466,8 @@ static int mlx5_ib_modify_dct(struct ib_qp *ibqp, struct ib_qp_attr *attr,
set_id = mlx5_ib_get_counters_id(dev, attr->port_num - 1); MLX5_SET(dctc, dctc, counter_set_id, set_id); + + qp->port = attr->port_num; } else if (cur_state == IB_QPS_INIT && new_state == IB_QPS_RTR) { struct mlx5_ib_modify_qp_resp resp = {}; u32 out[MLX5_ST_SZ_DW(create_dct_out)] = {}; @@ -4955,7 +4957,7 @@ static int mlx5_ib_dct_query_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *mqp, }
if (qp_attr_mask & IB_QP_PORT) - qp_attr->port_num = MLX5_GET(dctc, dctc, port); + qp_attr->port_num = mqp->port; if (qp_attr_mask & IB_QP_MIN_RNR_TIMER) qp_attr->min_rnr_timer = MLX5_GET(dctc, dctc, min_rnr_nak); if (qp_attr_mask & IB_QP_AV) {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vasiliy Kovalev kovalev@altlinux.org
[ Upstream commit c84e125fff2615b4d9c259e762596134eddd2f27 ]
The issue was caused by dput(upper) being called before ovl_dentry_update_reval(), while upper->d_flags was still being accessed in ovl_dentry_remote().
Move dput(upper) after its last use to prevent use-after-free.
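The rule being enforced is generic dentry refcounting: dput() may free the dentry, so it has to come after the last dereference. A minimal sketch of the pattern (helper names hypothetical):

    struct dentry *d = lookup_child(parent, name);  /* takes a reference */
    if (!IS_ERR(d)) {
            err = do_link(d);               /* uses d */
            if (!err)
                    update_reval(d);        /* last read of d->d_flags */
            dput(d);                        /* only now may d be freed */
    }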
BUG: KASAN: slab-use-after-free in ovl_dentry_remote fs/overlayfs/util.c:162 [inline] BUG: KASAN: slab-use-after-free in ovl_dentry_update_reval+0xd2/0xf0 fs/overlayfs/util.c:167
Call Trace: <TASK> __dump_stack lib/dump_stack.c:88 [inline] dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114 print_address_description mm/kasan/report.c:377 [inline] print_report+0xc3/0x620 mm/kasan/report.c:488 kasan_report+0xd9/0x110 mm/kasan/report.c:601 ovl_dentry_remote fs/overlayfs/util.c:162 [inline] ovl_dentry_update_reval+0xd2/0xf0 fs/overlayfs/util.c:167 ovl_link_up fs/overlayfs/copy_up.c:610 [inline] ovl_copy_up_one+0x2105/0x3490 fs/overlayfs/copy_up.c:1170 ovl_copy_up_flags+0x18d/0x200 fs/overlayfs/copy_up.c:1223 ovl_rename+0x39e/0x18c0 fs/overlayfs/dir.c:1136 vfs_rename+0xf84/0x20a0 fs/namei.c:4893 ... </TASK>
Fixes: b07d5cc93e1b ("ovl: update of dentry revalidate flags after copy up") Reported-by: syzbot+316db8a1191938280eb6@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=316db8a1191938280eb6 Signed-off-by: Vasiliy Kovalev kovalev@altlinux.org Link: https://lore.kernel.org/r/20250214215148.761147-1-kovalev@altlinux.org Reviewed-by: Amir Goldstein amir73il@gmail.com Signed-off-by: Christian Brauner brauner@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/overlayfs/copy_up.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c index 86d4b6975dbcb..203b88293f6bb 100644 --- a/fs/overlayfs/copy_up.c +++ b/fs/overlayfs/copy_up.c @@ -532,7 +532,6 @@ static int ovl_link_up(struct ovl_copy_up_ctx *c) err = PTR_ERR(upper); if (!IS_ERR(upper)) { err = ovl_do_link(ofs, ovl_dentry_upper(c->dentry), udir, upper); - dput(upper);
if (!err) { /* Restore timestamps on parent (best effort) */ @@ -540,6 +539,7 @@ static int ovl_link_up(struct ovl_copy_up_ctx *c) ovl_dentry_set_upper_alias(c->dentry); ovl_dentry_update_reval(c->dentry, upper); } + dput(upper); } inode_unlock(udir); if (err)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Stephen Brennan stephen.s.brennan@oracle.com
[ Upstream commit 0b108e83795c9c23101f584ef7e3ab4f1f120ef0 ]
The RPC_TASK_* constants are defined as macros, which means that most kernel builds will not contain their definitions in the debuginfo. However, it's quite useful for debuggers to be able to view the task state constant and interpret it correctly. Conversion to an enum will ensure the constants are present in debuginfo and can be interpreted by debuggers without needing to hard-code them and track their changes.
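The difference is easy to demonstrate in miniature; built with -g, only the enum variant leaves named constants in the DWARF debuginfo for a debugger to map values back to symbols:

    /* A macro is gone after preprocessing; no debuginfo records it: */
    #define RPC_TASK_RUNNING_MACRO  0

    /* Enumerators survive into DWARF as named constants: */
    enum {
            RPC_TASK_RUNNING,       /* = 0 */
            RPC_TASK_QUEUED,        /* = 1 */
    };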
Signed-off-by: Stephen Brennan stephen.s.brennan@oracle.com Signed-off-by: Anna Schumaker anna.schumaker@oracle.com Stable-dep-of: 5bbd6e863b15 ("SUNRPC: Prevent looping due to rpc_signal_task() races") Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/sunrpc/sched.h | 16 +++++++++------- 1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/include/linux/sunrpc/sched.h b/include/linux/sunrpc/sched.h index 8f9bee0e21c3b..f80b90aca380a 100644 --- a/include/linux/sunrpc/sched.h +++ b/include/linux/sunrpc/sched.h @@ -140,13 +140,15 @@ struct rpc_task_setup { #define RPC_WAS_SENT(t) ((t)->tk_flags & RPC_TASK_SENT) #define RPC_IS_MOVEABLE(t) ((t)->tk_flags & RPC_TASK_MOVEABLE)
-#define RPC_TASK_RUNNING 0 -#define RPC_TASK_QUEUED 1 -#define RPC_TASK_ACTIVE 2 -#define RPC_TASK_NEED_XMIT 3 -#define RPC_TASK_NEED_RECV 4 -#define RPC_TASK_MSG_PIN_WAIT 5 -#define RPC_TASK_SIGNALLED 6 +enum { + RPC_TASK_RUNNING, + RPC_TASK_QUEUED, + RPC_TASK_ACTIVE, + RPC_TASK_NEED_XMIT, + RPC_TASK_NEED_RECV, + RPC_TASK_MSG_PIN_WAIT, + RPC_TASK_SIGNALLED, +};
#define rpc_test_and_set_running(t) \ test_and_set_bit(RPC_TASK_RUNNING, &(t)->tk_runstate)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust trond.myklebust@hammerspace.com
[ Upstream commit 5bbd6e863b15a85221e49b9bdb2d5d8f0bb91f3d ]
If rpc_signal_task() is called while a task is in an rpc_call_done() callback function, and the latter calls rpc_restart_call(), the task can end up looping due to the RPC_TASK_SIGNALLED flag being set without the tk_rpc_status being set. Removing the redundant mechanism for signalling the task fixes the looping behaviour.
Reported-by: Li Lingfeng lilingfeng3@huawei.com Fixes: 39494194f93b ("SUNRPC: Fix races with rpc_killall_tasks()") Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com Reviewed-by: Jeff Layton jlayton@kernel.org Signed-off-by: Anna Schumaker anna.schumaker@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/sunrpc/sched.h | 3 +-- include/trace/events/sunrpc.h | 3 +-- net/sunrpc/sched.c | 2 -- 3 files changed, 2 insertions(+), 6 deletions(-)
diff --git a/include/linux/sunrpc/sched.h b/include/linux/sunrpc/sched.h index f80b90aca380a..a220b28904ca5 100644 --- a/include/linux/sunrpc/sched.h +++ b/include/linux/sunrpc/sched.h @@ -147,7 +147,6 @@ enum { RPC_TASK_NEED_XMIT, RPC_TASK_NEED_RECV, RPC_TASK_MSG_PIN_WAIT, - RPC_TASK_SIGNALLED, };
#define rpc_test_and_set_running(t) \ @@ -160,7 +159,7 @@ enum {
#define RPC_IS_ACTIVATED(t) test_bit(RPC_TASK_ACTIVE, &(t)->tk_runstate)
-#define RPC_SIGNALLED(t) test_bit(RPC_TASK_SIGNALLED, &(t)->tk_runstate) +#define RPC_SIGNALLED(t) (READ_ONCE(task->tk_rpc_status) == -ERESTARTSYS)
/* * Task priorities. diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h index ffe2679a13ced..b70f47a57bf6d 100644 --- a/include/trace/events/sunrpc.h +++ b/include/trace/events/sunrpc.h @@ -328,8 +328,7 @@ TRACE_EVENT(rpc_request, { (1UL << RPC_TASK_ACTIVE), "ACTIVE" }, \ { (1UL << RPC_TASK_NEED_XMIT), "NEED_XMIT" }, \ { (1UL << RPC_TASK_NEED_RECV), "NEED_RECV" }, \ - { (1UL << RPC_TASK_MSG_PIN_WAIT), "MSG_PIN_WAIT" }, \ - { (1UL << RPC_TASK_SIGNALLED), "SIGNALLED" }) + { (1UL << RPC_TASK_MSG_PIN_WAIT), "MSG_PIN_WAIT" })
DECLARE_EVENT_CLASS(rpc_task_running,
diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c index cef623ea15060..9b45fbdc90cab 100644 --- a/net/sunrpc/sched.c +++ b/net/sunrpc/sched.c @@ -864,8 +864,6 @@ void rpc_signal_task(struct rpc_task *task) if (!rpc_task_set_rpc_status(task, -ERESTARTSYS)) return; trace_rpc_task_signalled(task, task->tk_action); - set_bit(RPC_TASK_SIGNALLED, &task->tk_runstate); - smp_mb__after_atomic(); queue = READ_ONCE(task->tk_waitqueue); if (queue) rpc_wake_up_queued_task(queue, task);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mark Zhang markzhang@nvidia.com
[ Upstream commit 312b8f79eb05479628ee71357749815b2eeeeea8 ]
Move the call of the qp event handler from atomic to workqueue context, so that the handler is able to block. This is needed by the following patches.
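The mechanical shape of the change is the usual defer-to-process-context pattern; a minimal sketch with made-up names (the real patch additionally holds a reference on the QP until the work item has run, which is why the put moves into the handler):

    struct qp_event_work {
            struct work_struct work;
            int type;
    };

    static struct workqueue_struct *event_wq;

    static void handle_qp_event(struct work_struct *w)
    {
            struct qp_event_work *ew =
                    container_of(w, struct qp_event_work, work);

            /* Process context: the consumer callback may sleep here. */
            kfree(ew);
    }

    static void qp_event(int type)  /* may run in atomic context */
    {
            struct qp_event_work *ew = kzalloc(sizeof(*ew), GFP_ATOMIC);

            if (!ew)
                    return;
            ew->type = type;
            INIT_WORK(&ew->work, handle_qp_event);
            queue_work(event_wq, &ew->work);
    }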
Signed-off-by: Mark Zhang markzhang@nvidia.com Reviewed-by: Patrisious Haddad phaddad@nvidia.com Link: https://lore.kernel.org/r/0cd17b8331e445f03942f4bb28d447f24ac5669d.167282118... Signed-off-by: Leon Romanovsky leon@kernel.org Stable-dep-of: c534ffda781f ("RDMA/mlx5: Fix AH static rate parsing") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/mlx4/main.c | 8 ++ drivers/infiniband/hw/mlx4/mlx4_ib.h | 3 + drivers/infiniband/hw/mlx4/qp.c | 121 +++++++++++++++++------- drivers/infiniband/hw/mlx5/main.c | 7 ++ drivers/infiniband/hw/mlx5/qp.c | 119 ++++++++++++++++------- drivers/infiniband/hw/mlx5/qp.h | 2 + drivers/infiniband/hw/mlx5/qpc.c | 3 +- drivers/net/ethernet/mellanox/mlx4/qp.c | 14 ++- include/linux/mlx4/qp.h | 1 + include/rdma/ib_verbs.h | 2 +- 10 files changed, 202 insertions(+), 78 deletions(-)
diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c index 7c3dc86ab7f04..0f0b130cc8aac 100644 --- a/drivers/infiniband/hw/mlx4/main.c +++ b/drivers/infiniband/hw/mlx4/main.c @@ -3307,6 +3307,10 @@ static int __init mlx4_ib_init(void) if (!wq) return -ENOMEM;
+ err = mlx4_ib_qp_event_init(); + if (err) + goto clean_qp_event; + err = mlx4_ib_cm_init(); if (err) goto clean_wq; @@ -3328,6 +3332,9 @@ static int __init mlx4_ib_init(void) mlx4_ib_cm_destroy();
clean_wq: + mlx4_ib_qp_event_cleanup(); + +clean_qp_event: destroy_workqueue(wq); return err; } @@ -3337,6 +3344,7 @@ static void __exit mlx4_ib_cleanup(void) mlx4_unregister_interface(&mlx4_ib_interface); mlx4_ib_mcg_destroy(); mlx4_ib_cm_destroy(); + mlx4_ib_qp_event_cleanup(); destroy_workqueue(wq); }
diff --git a/drivers/infiniband/hw/mlx4/mlx4_ib.h b/drivers/infiniband/hw/mlx4/mlx4_ib.h index 6a3b0f121045e..17fee1e73a45a 100644 --- a/drivers/infiniband/hw/mlx4/mlx4_ib.h +++ b/drivers/infiniband/hw/mlx4/mlx4_ib.h @@ -940,4 +940,7 @@ int mlx4_ib_umem_calc_optimal_mtt_size(struct ib_umem *umem, u64 start_va, int mlx4_ib_cm_init(void); void mlx4_ib_cm_destroy(void);
+int mlx4_ib_qp_event_init(void); +void mlx4_ib_qp_event_cleanup(void); + #endif /* MLX4_IB_H */ diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c index ac479e81ddee8..9d08aa99f3cb0 100644 --- a/drivers/infiniband/hw/mlx4/qp.c +++ b/drivers/infiniband/hw/mlx4/qp.c @@ -102,6 +102,14 @@ enum mlx4_ib_source_type { MLX4_IB_RWQ_SRC = 1, };
+struct mlx4_ib_qp_event_work { + struct work_struct work; + struct mlx4_qp *qp; + enum mlx4_event type; +}; + +static struct workqueue_struct *mlx4_ib_qp_event_wq; + static int is_tunnel_qp(struct mlx4_ib_dev *dev, struct mlx4_ib_qp *qp) { if (!mlx4_is_master(dev->dev)) @@ -200,50 +208,77 @@ static void stamp_send_wqe(struct mlx4_ib_qp *qp, int n) } }
+static void mlx4_ib_handle_qp_event(struct work_struct *_work) +{ + struct mlx4_ib_qp_event_work *qpe_work = + container_of(_work, struct mlx4_ib_qp_event_work, work); + struct ib_qp *ibqp = &to_mibqp(qpe_work->qp)->ibqp; + struct ib_event event = {}; + + event.device = ibqp->device; + event.element.qp = ibqp; + + switch (qpe_work->type) { + case MLX4_EVENT_TYPE_PATH_MIG: + event.event = IB_EVENT_PATH_MIG; + break; + case MLX4_EVENT_TYPE_COMM_EST: + event.event = IB_EVENT_COMM_EST; + break; + case MLX4_EVENT_TYPE_SQ_DRAINED: + event.event = IB_EVENT_SQ_DRAINED; + break; + case MLX4_EVENT_TYPE_SRQ_QP_LAST_WQE: + event.event = IB_EVENT_QP_LAST_WQE_REACHED; + break; + case MLX4_EVENT_TYPE_WQ_CATAS_ERROR: + event.event = IB_EVENT_QP_FATAL; + break; + case MLX4_EVENT_TYPE_PATH_MIG_FAILED: + event.event = IB_EVENT_PATH_MIG_ERR; + break; + case MLX4_EVENT_TYPE_WQ_INVAL_REQ_ERROR: + event.event = IB_EVENT_QP_REQ_ERR; + break; + case MLX4_EVENT_TYPE_WQ_ACCESS_ERROR: + event.event = IB_EVENT_QP_ACCESS_ERR; + break; + default: + pr_warn("Unexpected event type %d on QP %06x\n", + qpe_work->type, qpe_work->qp->qpn); + goto out; + } + + ibqp->event_handler(&event, ibqp->qp_context); + +out: + mlx4_put_qp(qpe_work->qp); + kfree(qpe_work); +} + static void mlx4_ib_qp_event(struct mlx4_qp *qp, enum mlx4_event type) { - struct ib_event event; struct ib_qp *ibqp = &to_mibqp(qp)->ibqp; + struct mlx4_ib_qp_event_work *qpe_work;
if (type == MLX4_EVENT_TYPE_PATH_MIG) to_mibqp(qp)->port = to_mibqp(qp)->alt_port;
- if (ibqp->event_handler) { - event.device = ibqp->device; - event.element.qp = ibqp; - switch (type) { - case MLX4_EVENT_TYPE_PATH_MIG: - event.event = IB_EVENT_PATH_MIG; - break; - case MLX4_EVENT_TYPE_COMM_EST: - event.event = IB_EVENT_COMM_EST; - break; - case MLX4_EVENT_TYPE_SQ_DRAINED: - event.event = IB_EVENT_SQ_DRAINED; - break; - case MLX4_EVENT_TYPE_SRQ_QP_LAST_WQE: - event.event = IB_EVENT_QP_LAST_WQE_REACHED; - break; - case MLX4_EVENT_TYPE_WQ_CATAS_ERROR: - event.event = IB_EVENT_QP_FATAL; - break; - case MLX4_EVENT_TYPE_PATH_MIG_FAILED: - event.event = IB_EVENT_PATH_MIG_ERR; - break; - case MLX4_EVENT_TYPE_WQ_INVAL_REQ_ERROR: - event.event = IB_EVENT_QP_REQ_ERR; - break; - case MLX4_EVENT_TYPE_WQ_ACCESS_ERROR: - event.event = IB_EVENT_QP_ACCESS_ERR; - break; - default: - pr_warn("Unexpected event type %d " - "on QP %06x\n", type, qp->qpn); - return; - } + if (!ibqp->event_handler) + goto out_no_handler;
- ibqp->event_handler(&event, ibqp->qp_context); - } + qpe_work = kzalloc(sizeof(*qpe_work), GFP_ATOMIC); + if (!qpe_work) + goto out_no_handler; + + qpe_work->qp = qp; + qpe_work->type = type; + INIT_WORK(&qpe_work->work, mlx4_ib_handle_qp_event); + queue_work(mlx4_ib_qp_event_wq, &qpe_work->work); + return; + +out_no_handler: + mlx4_put_qp(qp); }
static void mlx4_ib_wq_event(struct mlx4_qp *qp, enum mlx4_event type) @@ -4472,3 +4507,17 @@ void mlx4_ib_drain_rq(struct ib_qp *qp)
handle_drain_completion(cq, &rdrain, dev); } + +int mlx4_ib_qp_event_init(void) +{ + mlx4_ib_qp_event_wq = alloc_ordered_workqueue("mlx4_ib_qp_event_wq", 0); + if (!mlx4_ib_qp_event_wq) + return -ENOMEM; + + return 0; +} + +void mlx4_ib_qp_event_cleanup(void) +{ + destroy_workqueue(mlx4_ib_qp_event_wq); +} diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c index 45a414e8d35fa..a22649617e017 100644 --- a/drivers/infiniband/hw/mlx5/main.c +++ b/drivers/infiniband/hw/mlx5/main.c @@ -4410,6 +4410,10 @@ static int __init mlx5_ib_init(void) return -ENOMEM; }
+ ret = mlx5_ib_qp_event_init(); + if (ret) + goto qp_event_err; + mlx5_ib_odp_init(); ret = mlx5r_rep_init(); if (ret) @@ -4427,6 +4431,8 @@ static int __init mlx5_ib_init(void) mp_err: mlx5r_rep_cleanup(); rep_err: + mlx5_ib_qp_event_cleanup(); +qp_event_err: destroy_workqueue(mlx5_ib_event_wq); free_page((unsigned long)xlt_emergency_page); return ret; @@ -4438,6 +4444,7 @@ static void __exit mlx5_ib_cleanup(void) auxiliary_driver_unregister(&mlx5r_mp_driver); mlx5r_rep_cleanup();
+ mlx5_ib_qp_event_cleanup(); destroy_workqueue(mlx5_ib_event_wq); free_page((unsigned long)xlt_emergency_page); } diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c index d782a494abcda..43c0123babd10 100644 --- a/drivers/infiniband/hw/mlx5/qp.c +++ b/drivers/infiniband/hw/mlx5/qp.c @@ -71,6 +71,14 @@ struct mlx5_modify_raw_qp_param { u32 port; };
+struct mlx5_ib_qp_event_work { + struct work_struct work; + struct mlx5_core_qp *qp; + int type; +}; + +static struct workqueue_struct *mlx5_ib_qp_event_wq; + static void get_cqs(enum ib_qp_type qp_type, struct ib_cq *ib_send_cq, struct ib_cq *ib_recv_cq, struct mlx5_ib_cq **send_cq, struct mlx5_ib_cq **recv_cq); @@ -302,51 +310,78 @@ int mlx5_ib_read_wqe_srq(struct mlx5_ib_srq *srq, int wqe_index, void *buffer, return mlx5_ib_read_user_wqe_srq(srq, wqe_index, buffer, buflen, bc); }
+static void mlx5_ib_handle_qp_event(struct work_struct *_work) +{ + struct mlx5_ib_qp_event_work *qpe_work = + container_of(_work, struct mlx5_ib_qp_event_work, work); + struct ib_qp *ibqp = &to_mibqp(qpe_work->qp)->ibqp; + struct ib_event event = {}; + + event.device = ibqp->device; + event.element.qp = ibqp; + switch (qpe_work->type) { + case MLX5_EVENT_TYPE_PATH_MIG: + event.event = IB_EVENT_PATH_MIG; + break; + case MLX5_EVENT_TYPE_COMM_EST: + event.event = IB_EVENT_COMM_EST; + break; + case MLX5_EVENT_TYPE_SQ_DRAINED: + event.event = IB_EVENT_SQ_DRAINED; + break; + case MLX5_EVENT_TYPE_SRQ_LAST_WQE: + event.event = IB_EVENT_QP_LAST_WQE_REACHED; + break; + case MLX5_EVENT_TYPE_WQ_CATAS_ERROR: + event.event = IB_EVENT_QP_FATAL; + break; + case MLX5_EVENT_TYPE_PATH_MIG_FAILED: + event.event = IB_EVENT_PATH_MIG_ERR; + break; + case MLX5_EVENT_TYPE_WQ_INVAL_REQ_ERROR: + event.event = IB_EVENT_QP_REQ_ERR; + break; + case MLX5_EVENT_TYPE_WQ_ACCESS_ERROR: + event.event = IB_EVENT_QP_ACCESS_ERR; + break; + default: + pr_warn("mlx5_ib: Unexpected event type %d on QP %06x\n", + qpe_work->type, qpe_work->qp->qpn); + goto out; + } + + ibqp->event_handler(&event, ibqp->qp_context); + +out: + mlx5_core_res_put(&qpe_work->qp->common); + kfree(qpe_work); +} + static void mlx5_ib_qp_event(struct mlx5_core_qp *qp, int type) { struct ib_qp *ibqp = &to_mibqp(qp)->ibqp; - struct ib_event event; + struct mlx5_ib_qp_event_work *qpe_work;
if (type == MLX5_EVENT_TYPE_PATH_MIG) { /* This event is only valid for trans_qps */ to_mibqp(qp)->port = to_mibqp(qp)->trans_qp.alt_port; }
- if (ibqp->event_handler) { - event.device = ibqp->device; - event.element.qp = ibqp; - switch (type) { - case MLX5_EVENT_TYPE_PATH_MIG: - event.event = IB_EVENT_PATH_MIG; - break; - case MLX5_EVENT_TYPE_COMM_EST: - event.event = IB_EVENT_COMM_EST; - break; - case MLX5_EVENT_TYPE_SQ_DRAINED: - event.event = IB_EVENT_SQ_DRAINED; - break; - case MLX5_EVENT_TYPE_SRQ_LAST_WQE: - event.event = IB_EVENT_QP_LAST_WQE_REACHED; - break; - case MLX5_EVENT_TYPE_WQ_CATAS_ERROR: - event.event = IB_EVENT_QP_FATAL; - break; - case MLX5_EVENT_TYPE_PATH_MIG_FAILED: - event.event = IB_EVENT_PATH_MIG_ERR; - break; - case MLX5_EVENT_TYPE_WQ_INVAL_REQ_ERROR: - event.event = IB_EVENT_QP_REQ_ERR; - break; - case MLX5_EVENT_TYPE_WQ_ACCESS_ERROR: - event.event = IB_EVENT_QP_ACCESS_ERR; - break; - default: - pr_warn("mlx5_ib: Unexpected event type %d on QP %06x\n", type, qp->qpn); - return; - } + if (!ibqp->event_handler) + goto out_no_handler;
- ibqp->event_handler(&event, ibqp->qp_context); - } + qpe_work = kzalloc(sizeof(*qpe_work), GFP_ATOMIC); + if (!qpe_work) + goto out_no_handler; + + qpe_work->qp = qp; + qpe_work->type = type; + INIT_WORK(&qpe_work->work, mlx5_ib_handle_qp_event); + queue_work(mlx5_ib_qp_event_wq, &qpe_work->work); + return; + +out_no_handler: + mlx5_core_res_put(&qp->common); }
static int set_rq_size(struct mlx5_ib_dev *dev, struct ib_qp_cap *cap, @@ -5752,3 +5787,17 @@ int mlx5_ib_qp_set_counter(struct ib_qp *qp, struct rdma_counter *counter) mutex_unlock(&mqp->mutex); return err; } + +int mlx5_ib_qp_event_init(void) +{ + mlx5_ib_qp_event_wq = alloc_ordered_workqueue("mlx5_ib_qp_event_wq", 0); + if (!mlx5_ib_qp_event_wq) + return -ENOMEM; + + return 0; +} + +void mlx5_ib_qp_event_cleanup(void) +{ + destroy_workqueue(mlx5_ib_qp_event_wq); +} diff --git a/drivers/infiniband/hw/mlx5/qp.h b/drivers/infiniband/hw/mlx5/qp.h index 5d4e140db99ce..fb2f4e030bb8f 100644 --- a/drivers/infiniband/hw/mlx5/qp.h +++ b/drivers/infiniband/hw/mlx5/qp.h @@ -44,4 +44,6 @@ void mlx5_core_res_put(struct mlx5_core_rsc_common *res); int mlx5_core_xrcd_alloc(struct mlx5_ib_dev *dev, u32 *xrcdn); int mlx5_core_xrcd_dealloc(struct mlx5_ib_dev *dev, u32 xrcdn); int mlx5_ib_qp_set_counter(struct ib_qp *qp, struct rdma_counter *counter); +int mlx5_ib_qp_event_init(void); +void mlx5_ib_qp_event_cleanup(void); #endif /* _MLX5_IB_QP_H */ diff --git a/drivers/infiniband/hw/mlx5/qpc.c b/drivers/infiniband/hw/mlx5/qpc.c index d4e7864c56f18..a824ff22f4615 100644 --- a/drivers/infiniband/hw/mlx5/qpc.c +++ b/drivers/infiniband/hw/mlx5/qpc.c @@ -135,7 +135,8 @@ static int rsc_event_notifier(struct notifier_block *nb, case MLX5_RES_SQ: qp = (struct mlx5_core_qp *)common; qp->event(qp, event_type); - break; + /* Need to put resource in event handler */ + return NOTIFY_OK; case MLX5_RES_DCT: dct = (struct mlx5_core_dct *)common; if (event_type == MLX5_EVENT_TYPE_DCT_DRAINED) diff --git a/drivers/net/ethernet/mellanox/mlx4/qp.c b/drivers/net/ethernet/mellanox/mlx4/qp.c index 48cfaa7eaf50c..913ed255990f4 100644 --- a/drivers/net/ethernet/mellanox/mlx4/qp.c +++ b/drivers/net/ethernet/mellanox/mlx4/qp.c @@ -46,6 +46,13 @@ #define MLX4_BF_QP_SKIP_MASK 0xc0 #define MLX4_MAX_BF_QP_RANGE 0x40
+void mlx4_put_qp(struct mlx4_qp *qp) +{ + if (refcount_dec_and_test(&qp->refcount)) + complete(&qp->free); +} +EXPORT_SYMBOL_GPL(mlx4_put_qp); + void mlx4_qp_event(struct mlx4_dev *dev, u32 qpn, int event_type) { struct mlx4_qp_table *qp_table = &mlx4_priv(dev)->qp_table; @@ -64,10 +71,8 @@ void mlx4_qp_event(struct mlx4_dev *dev, u32 qpn, int event_type) return; }
+ /* Need to call mlx4_put_qp() in event handler */ qp->event(qp, event_type); - - if (refcount_dec_and_test(&qp->refcount)) - complete(&qp->free); }
/* used for INIT/CLOSE port logic */ @@ -523,8 +528,7 @@ EXPORT_SYMBOL_GPL(mlx4_qp_remove);
void mlx4_qp_free(struct mlx4_dev *dev, struct mlx4_qp *qp) { - if (refcount_dec_and_test(&qp->refcount)) - complete(&qp->free); + mlx4_put_qp(qp); wait_for_completion(&qp->free);
mlx4_qp_free_icm(dev, qp->qpn); diff --git a/include/linux/mlx4/qp.h b/include/linux/mlx4/qp.h index b6b626157b03a..b9a7b1319f5d3 100644 --- a/include/linux/mlx4/qp.h +++ b/include/linux/mlx4/qp.h @@ -504,4 +504,5 @@ static inline u16 folded_qp(u32 q)
u16 mlx4_qp_roce_entropy(struct mlx4_dev *dev, u32 qpn);
+void mlx4_put_qp(struct mlx4_qp *qp); #endif /* MLX4_QP_H */ diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 5582509003264..68fd6d22adfd4 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -1162,7 +1162,7 @@ enum ib_qp_create_flags { */
struct ib_qp_init_attr { - /* Consumer's event_handler callback must not block */ + /* This callback occurs in workqueue context */ void (*event_handler)(struct ib_event *, void *);
void *qp_context;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Leon Romanovsky leonro@nvidia.com
[ Upstream commit 2ecfd946169e7f56534db2a5f6935858be3005ba ]
driver.h is a common header for the whole mlx5 code base, but struct mlx5_qp_table is used only in the mlx5_ib driver. So move that struct to be under the sole responsibility of mlx5_ib.
Link: https://lore.kernel.org/r/bec0dc1158e795813b135d1143147977f26bf668.168595349... Signed-off-by: Leon Romanovsky leonro@nvidia.com Stable-dep-of: c534ffda781f ("RDMA/mlx5: Fix AH static rate parsing") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/mlx5/mlx5_ib.h | 1 + drivers/infiniband/hw/mlx5/qp.h | 11 ++++++++++- include/linux/mlx5/driver.h | 9 --------- 3 files changed, 11 insertions(+), 10 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index 024d2071c6a5d..5c533023a51a4 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -25,6 +25,7 @@ #include <rdma/mlx5_user_ioctl_verbs.h>
#include "srq.h" +#include "qp.h"
#define mlx5_ib_dbg(_dev, format, arg...) \ dev_dbg(&(_dev)->ib_dev.dev, "%s:%d:(pid %d): " format, __func__, \ diff --git a/drivers/infiniband/hw/mlx5/qp.h b/drivers/infiniband/hw/mlx5/qp.h index fb2f4e030bb8f..e677fa0ca4226 100644 --- a/drivers/infiniband/hw/mlx5/qp.h +++ b/drivers/infiniband/hw/mlx5/qp.h @@ -6,7 +6,16 @@ #ifndef _MLX5_IB_QP_H #define _MLX5_IB_QP_H
-#include "mlx5_ib.h" +struct mlx5_ib_dev; + +struct mlx5_qp_table { + struct notifier_block nb; + + /* protect radix tree + */ + spinlock_t lock; + struct radix_tree_root tree; +};
int mlx5_init_qp_table(struct mlx5_ib_dev *dev); void mlx5_cleanup_qp_table(struct mlx5_ib_dev *dev); diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h index 6cea62ca76d6b..060610183fdf9 100644 --- a/include/linux/mlx5/driver.h +++ b/include/linux/mlx5/driver.h @@ -440,15 +440,6 @@ struct mlx5_core_health { struct delayed_work update_fw_log_ts_work; };
-struct mlx5_qp_table { - struct notifier_block nb; - - /* protect radix tree - */ - spinlock_t lock; - struct radix_tree_root tree; -}; - enum { MLX5_PF_NOTIFY_DISABLE_VF, MLX5_PF_NOTIFY_ENABLE_VF,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Or Har-Toov ohartoov@nvidia.com
[ Upstream commit 703289ce43f740b0096724300107df82d008552f ]
Add the new IBTA speed XDR, the rate that was added to the InfiniBand spec as part of XDR, supporting a signaling rate of 200Gb.
In order to report that value to rdma-core, add a new u32 field to the query_port response.
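For reference, the additions to the rate helpers in the diff below are plain arithmetic over the existing tables: ib_rate_to_mult() returns multiples of the 2.5 Gb/s SDR base, so 800 Gb/s gives 800 / 2.5 = 320 (consistent with the existing 600 Gb/s entry, 240), and ib_rate_to_mbps() keeps the existing per-unit figure of 53125, so 850000 = 16 * 53125, matching the existing 400 Gb/s (8 * 53125 = 425000) and 600 Gb/s (12 * 53125 = 637500) entries.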
Signed-off-by: Or Har-Toov ohartoov@nvidia.com Reviewed-by: Mark Zhang markzhang@nvidia.com Link: https://lore.kernel.org/r/9d235fc600a999e8274010f0e18b40fa60540e6c.169520415... Reviewed-by: Jacob Keller jacob.e.keller@intel.com Signed-off-by: Leon Romanovsky leon@kernel.org Stable-dep-of: c534ffda781f ("RDMA/mlx5: Fix AH static rate parsing") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/core/sysfs.c | 4 ++++ drivers/infiniband/core/uverbs_std_types_device.c | 3 ++- drivers/infiniband/core/verbs.c | 3 +++ include/rdma/ib_verbs.h | 2 ++ include/uapi/rdma/ib_user_ioctl_verbs.h | 3 ++- 5 files changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c index ec5efdc166601..9f97bef021497 100644 --- a/drivers/infiniband/core/sysfs.c +++ b/drivers/infiniband/core/sysfs.c @@ -342,6 +342,10 @@ static ssize_t rate_show(struct ib_device *ibdev, u32 port_num, speed = " NDR"; rate = 1000; break; + case IB_SPEED_XDR: + speed = " XDR"; + rate = 2000; + break; case IB_SPEED_SDR: default: /* default to SDR for invalid rates */ speed = " SDR"; diff --git a/drivers/infiniband/core/uverbs_std_types_device.c b/drivers/infiniband/core/uverbs_std_types_device.c index 049684880ae03..fb0555647336f 100644 --- a/drivers/infiniband/core/uverbs_std_types_device.c +++ b/drivers/infiniband/core/uverbs_std_types_device.c @@ -203,6 +203,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QUERY_PORT)(
copy_port_attr_to_resp(&attr, &resp.legacy_resp, ib_dev, port_num); resp.port_cap_flags2 = attr.port_cap_flags2; + resp.active_speed_ex = attr.active_speed;
return uverbs_copy_to_struct_or_zero(attrs, UVERBS_ATTR_QUERY_PORT_RESP, &resp, sizeof(resp)); @@ -461,7 +462,7 @@ DECLARE_UVERBS_NAMED_METHOD( UVERBS_ATTR_PTR_OUT( UVERBS_ATTR_QUERY_PORT_RESP, UVERBS_ATTR_STRUCT(struct ib_uverbs_query_port_resp_ex, - reserved), + active_speed_ex), UA_MANDATORY));
DECLARE_UVERBS_NAMED_METHOD( diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index b99b3cc283b65..90848546f1704 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -147,6 +147,7 @@ __attribute_const__ int ib_rate_to_mult(enum ib_rate rate) case IB_RATE_50_GBPS: return 20; case IB_RATE_400_GBPS: return 160; case IB_RATE_600_GBPS: return 240; + case IB_RATE_800_GBPS: return 320; default: return -1; } } @@ -176,6 +177,7 @@ __attribute_const__ enum ib_rate mult_to_ib_rate(int mult) case 20: return IB_RATE_50_GBPS; case 160: return IB_RATE_400_GBPS; case 240: return IB_RATE_600_GBPS; + case 320: return IB_RATE_800_GBPS; default: return IB_RATE_PORT_CURRENT; } } @@ -205,6 +207,7 @@ __attribute_const__ int ib_rate_to_mbps(enum ib_rate rate) case IB_RATE_50_GBPS: return 53125; case IB_RATE_400_GBPS: return 425000; case IB_RATE_600_GBPS: return 637500; + case IB_RATE_800_GBPS: return 850000; default: return -1; } } diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 68fd6d22adfd4..750effb875783 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -557,6 +557,7 @@ enum ib_port_speed { IB_SPEED_EDR = 32, IB_SPEED_HDR = 64, IB_SPEED_NDR = 128, + IB_SPEED_XDR = 256, };
enum ib_stat_flag { @@ -836,6 +837,7 @@ enum ib_rate { IB_RATE_50_GBPS = 20, IB_RATE_400_GBPS = 21, IB_RATE_600_GBPS = 22, + IB_RATE_800_GBPS = 23, };
/** diff --git a/include/uapi/rdma/ib_user_ioctl_verbs.h b/include/uapi/rdma/ib_user_ioctl_verbs.h index 7dd56210226f5..125fb9f0ef4ab 100644 --- a/include/uapi/rdma/ib_user_ioctl_verbs.h +++ b/include/uapi/rdma/ib_user_ioctl_verbs.h @@ -218,7 +218,8 @@ enum ib_uverbs_advise_mr_flag { struct ib_uverbs_query_port_resp_ex { struct ib_uverbs_query_port_resp legacy_resp; __u16 port_cap_flags2; - __u8 reserved[6]; + __u8 reserved[2]; + __u32 active_speed_ex; };
struct ib_uverbs_qp_cap {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Patrisious Haddad phaddad@nvidia.com
[ Upstream commit c534ffda781f44a1c6ac25ef6e0e444da38ca8af ]
Previously, the static rate wasn't translated according to our PRM; the lower 4 bits of the attribute value were simply used as-is.
Correctly translate static rate value passed in AH creation attribute according to our PRM expected values.
In addition, change the 800Gb mapping to zero, which is the PRM-specified value.
Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters") Signed-off-by: Patrisious Haddad phaddad@nvidia.com Reviewed-by: Maor Gottlieb maorg@nvidia.com Link: https://patch.msgid.link/18ef4cc5396caf80728341eb74738cd777596f60.1739187089... Signed-off-by: Leon Romanovsky leon@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/mlx5/ah.c | 3 ++- drivers/infiniband/hw/mlx5/qp.c | 6 +++--- drivers/infiniband/hw/mlx5/qp.h | 1 + 3 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/ah.c b/drivers/infiniband/hw/mlx5/ah.c index 505bc47fd575d..99036afb3aef0 100644 --- a/drivers/infiniband/hw/mlx5/ah.c +++ b/drivers/infiniband/hw/mlx5/ah.c @@ -67,7 +67,8 @@ static void create_ib_ah(struct mlx5_ib_dev *dev, struct mlx5_ib_ah *ah, ah->av.tclass = grh->traffic_class; }
- ah->av.stat_rate_sl = (rdma_ah_get_static_rate(ah_attr) << 4); + ah->av.stat_rate_sl = + (mlx5r_ib_rate(dev, rdma_ah_get_static_rate(ah_attr)) << 4);
if (ah_attr->type == RDMA_AH_ATTR_TYPE_ROCE) { if (init_attr->xmit_slave) diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c index 43c0123babd10..59dca0cd89052 100644 --- a/drivers/infiniband/hw/mlx5/qp.c +++ b/drivers/infiniband/hw/mlx5/qp.c @@ -3379,11 +3379,11 @@ static int ib_to_mlx5_rate_map(u8 rate) return 0; }
-static int ib_rate_to_mlx5(struct mlx5_ib_dev *dev, u8 rate) +int mlx5r_ib_rate(struct mlx5_ib_dev *dev, u8 rate) { u32 stat_rate_support;
- if (rate == IB_RATE_PORT_CURRENT) + if (rate == IB_RATE_PORT_CURRENT || rate == IB_RATE_800_GBPS) return 0;
if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_600_GBPS) @@ -3528,7 +3528,7 @@ static int mlx5_set_path(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp, sizeof(grh->dgid.raw)); }
- err = ib_rate_to_mlx5(dev, rdma_ah_get_static_rate(ah)); + err = mlx5r_ib_rate(dev, rdma_ah_get_static_rate(ah)); if (err < 0) return err; MLX5_SET(ads, path, stat_rate, err); diff --git a/drivers/infiniband/hw/mlx5/qp.h b/drivers/infiniband/hw/mlx5/qp.h index e677fa0ca4226..4abb77d551670 100644 --- a/drivers/infiniband/hw/mlx5/qp.h +++ b/drivers/infiniband/hw/mlx5/qp.h @@ -55,4 +55,5 @@ int mlx5_core_xrcd_dealloc(struct mlx5_ib_dev *dev, u32 xrcdn); int mlx5_ib_qp_set_counter(struct ib_qp *qp, struct rdma_counter *counter); int mlx5_ib_qp_event_init(void); void mlx5_ib_qp_event_cleanup(void); +int mlx5r_ib_rate(struct mlx5_ib_dev *dev, u8 rate); #endif /* _MLX5_IB_QP_H */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ye Bin yebin10@huawei.com
[ Upstream commit dce5c4afd035e8090a26e5d776b1682c0e649683 ]
After commit 1bad6c4a57ef ("scsi: zero per-cmd private driver data for each MQ I/O"), the xen-scsifront/virtio_scsi/snic drivers all removed code that explicitly zeroed driver-private command data.
In combination with commit 464a00c9e0ad ("scsi: core: Kill DRIVER_SENSE"), after virtio_scsi performs a capacity expansion, the first request will return a unit attention to indicate that the capacity has changed, and the original command is then retried. Because the driver-private command data was not cleared, the retried request would return the UA again and eventually time out and fail.
Zero the driver-private command data when a request is retried. This is done by moving the memset() from scsi_prepare_cmd(), which is skipped for retried requests that already carry RQF_DONTPREP, into scsi_queue_rq(), which runs on every dispatch.
Fixes: f7de50da1479 ("scsi: xen-scsifront: Remove code that zeroes driver-private command data") Fixes: c2bb87318baa ("scsi: virtio_scsi: Remove code that zeroes driver-private command data") Fixes: c3006a926468 ("scsi: snic: Remove code that zeroes driver-private command data") Signed-off-by: Ye Bin yebin10@huawei.com Reviewed-by: Bart Van Assche bvanassche@acm.org Link: https://lore.kernel.org/r/20250217021628.2929248-1-yebin@huaweicloud.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/scsi_lib.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index 72d31b2267ef4..8e75eb1b6eab8 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -1579,13 +1579,6 @@ static blk_status_t scsi_prepare_cmd(struct request *req) if (in_flight) __set_bit(SCMD_STATE_INFLIGHT, &cmd->state);
- /* - * Only clear the driver-private command data if the LLD does not supply - * a function to initialize that data. - */ - if (!shost->hostt->init_cmd_priv) - memset(cmd + 1, 0, shost->hostt->cmd_size); - cmd->prot_op = SCSI_PROT_NORMAL; if (blk_rq_bytes(req)) cmd->sc_data_direction = rq_dma_dir(req); @@ -1747,6 +1740,13 @@ static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx, if (!scsi_host_queue_ready(q, shost, sdev, cmd)) goto out_dec_target_busy;
+ /* + * Only clear the driver-private command data if the LLD does not supply + * a function to initialize that data. + */ + if (shost->hostt->cmd_size && !shost->hostt->init_cmd_priv) + memset(cmd + 1, 0, shost->hostt->cmd_size); + if (!(req->rq_flags & RQF_DONTPREP)) { ret = scsi_prepare_cmd(req); if (ret != BLK_STS_OK)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Patrisious Haddad phaddad@nvidia.com
[ Upstream commit e1a0bdbdfdf08428f0ede5ae49c7f4139ac73ef5 ]
When there is a failure during bind QP, the cleanup flow destroys the counter regardless of whether it was the one that created it, which is problematic: if the counter was created elsewhere, it could still be in use.
Fix that by destroying the counter only if it was created during this call.
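This is the common ownership-flag idiom for shared resources on error paths; a minimal sketch with hypothetical names:

    bool new = false;
    int err;

    if (!counter->id) {
            counter->id = alloc_counter();  /* created by this call */
            new = true;
    }

    err = bind_counter(qp, counter);
    if (err && new) {       /* undo only what this call allocated */
            free_counter(counter->id);
            counter->id = 0;
    }
    return err;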
Fixes: 45842fc627c7 ("IB/mlx5: Support statistic q counter configuration") Signed-off-by: Patrisious Haddad phaddad@nvidia.com Reviewed-by: Mark Zhang markzhang@nvidia.com Link: https://patch.msgid.link/25dfefddb0ebefa668c32e06a94d84e3216257cf.1740033937... Reviewed-by: Zhu Yanjun yanjun.zhu@linux.dev Signed-off-by: Leon Romanovsky leon@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/mlx5/counters.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/counters.c b/drivers/infiniband/hw/mlx5/counters.c index 3e1272695d993..9915504ad1e18 100644 --- a/drivers/infiniband/hw/mlx5/counters.c +++ b/drivers/infiniband/hw/mlx5/counters.c @@ -444,6 +444,7 @@ static int mlx5_ib_counter_bind_qp(struct rdma_counter *counter, struct ib_qp *qp) { struct mlx5_ib_dev *dev = to_mdev(qp->device); + bool new = false; int err;
if (!counter->id) { @@ -458,6 +459,7 @@ static int mlx5_ib_counter_bind_qp(struct rdma_counter *counter, return err; counter->id = MLX5_GET(alloc_q_counter_out, out, counter_set_id); + new = true; }
err = mlx5_ib_qp_set_counter(qp, counter); @@ -467,8 +469,10 @@ static int mlx5_ib_counter_bind_qp(struct rdma_counter *counter, return 0;
fail_set_counter: - mlx5_ib_counter_dealloc(counter); - counter->id = 0; + if (new) { + mlx5_ib_counter_dealloc(counter); + counter->id = 0; + }
return err; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Arnd Bergmann arnd@arndb.de
[ Upstream commit 1f7a4f98c11fbeb18ed21f3b3a497e90a50ad2e0 ]
There is a warning about unused variables when building with W=1 and no procfs:
net/sunrpc/cache.c:1660:30: error: 'cache_flush_proc_ops' defined but not used [-Werror=unused-const-variable=] 1660 | static const struct proc_ops cache_flush_proc_ops = { | ^~~~~~~~~~~~~~~~~~~~ net/sunrpc/cache.c:1622:30: error: 'content_proc_ops' defined but not used [-Werror=unused-const-variable=] 1622 | static const struct proc_ops content_proc_ops = { | ^~~~~~~~~~~~~~~~ net/sunrpc/cache.c:1598:30: error: 'cache_channel_proc_ops' defined but not used [-Werror=unused-const-variable=] 1598 | static const struct proc_ops cache_channel_proc_ops = { | ^~~~~~~~~~~~~~~~~~~~~~
These are only used inside an #ifdef, so replacing that with an IS_ENABLED() check lets the compiler see how they are used while still dropping them during dead code elimination.
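The idiom in outline: IS_ENABLED() evaluates to a compile-time constant 0 or 1, so the early return makes the rest of the function dead code that the compiler first type-checks (silencing the unused-variable warnings) and then eliminates:

    static int create_cache_proc_entries(struct cache_detail *cd, struct net *net)
    {
            if (!IS_ENABLED(CONFIG_PROC_FS))
                    return 0;
            /* ...proc_mkdir() etc., unchanged from the old #ifdef body... */
    }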
Fixes: dbf847ecb631 ("knfsd: allow cache_register to return error on failure") Reviewed-by: Jeff Layton jlayton@kernel.org Acked-by: Chuck Lever chuck.lever@oracle.com Signed-off-by: Arnd Bergmann arnd@arndb.de Signed-off-by: Anna Schumaker anna.schumaker@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/sunrpc/cache.c | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c index 94889df659f0f..7ac4648c7da7f 100644 --- a/net/sunrpc/cache.c +++ b/net/sunrpc/cache.c @@ -1675,12 +1675,14 @@ static void remove_cache_proc_entries(struct cache_detail *cd) } }
-#ifdef CONFIG_PROC_FS static int create_cache_proc_entries(struct cache_detail *cd, struct net *net) { struct proc_dir_entry *p; struct sunrpc_net *sn;
+ if (!IS_ENABLED(CONFIG_PROC_FS)) + return 0; + sn = net_generic(net, sunrpc_net_id); cd->procfs = proc_mkdir(cd->name, sn->proc_net_rpc); if (cd->procfs == NULL) @@ -1708,12 +1710,6 @@ static int create_cache_proc_entries(struct cache_detail *cd, struct net *net) remove_cache_proc_entries(cd); return -ENOMEM; } -#else /* CONFIG_PROC_FS */ -static int create_cache_proc_entries(struct cache_detail *cd, struct net *net) -{ - return 0; -} -#endif
void __init cache_initialize(void) {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai tiwai@suse.de
[ Upstream commit a3bdd8f5c2217e1cb35db02c2eed36ea20fb50f5 ]
We fixed the UAF issue in USB MIDI code by canceling the pending work at closing each MIDI output device in the commit below. However, this assumed that the device is the only one tied to the endpoint, and it resulted in unexpected data truncation when multiple devices are assigned to a single endpoint and opened simultaneously.
To address the unexpected MIDI message drops, simply replace cancel_work_sync() with flush_work(). The drain callback should already have been invoked before the close callback, hence the port->active flag must already be cleared. So this just ensures that the pending work is finished before freeing the resources.
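The distinction between the two workqueue calls is the whole fix: for a work item that is queued but not yet running, cancel_work_sync() removes it without executing it, discarding whatever output it would have sent, whereas flush_work() waits for it to run to completion:

    cancel_work_sync(&port->ep->work);  /* pending transfer may be dropped */
    flush_work(&port->ep->work);        /* pending transfer is completed  */

(The two lines are alternatives, shown together only for contrast.)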
Fixes: 0125de38122f ("ALSA: usb-audio: Cancel pending work at closing a MIDI substream") Reported-and-tested-by: John Keeping jkeeping@inmusicbrands.com Closes: https://lore.kernel.org/20250217111647.3368132-1-jkeeping@inmusicbrands.com Link: https://patch.msgid.link/20250218114024.23125-1-tiwai@suse.de Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- sound/usb/midi.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sound/usb/midi.c b/sound/usb/midi.c index 2839f6b6f09b4..eed71369c7af2 100644 --- a/sound/usb/midi.c +++ b/sound/usb/midi.c @@ -1145,7 +1145,7 @@ static int snd_usbmidi_output_close(struct snd_rawmidi_substream *substream) { struct usbmidi_out_port *port = substream->runtime->private_data;
- cancel_work_sync(&port->ep->work); + flush_work(&port->ep->work); return substream_open(substream, 0, 0); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Luiz Augusto von Dentz luiz.von.dentz@intel.com
[ Upstream commit b25120e1d5f2ebb3db00af557709041f47f7f3d0 ]
L2CAP_ECRED_CONN_RSP needs to respond with the DCIDs in the same order the SCIDs were received, but the order is reversed due to the use of list_add, which actually prepends channels to the list, so the response is reversed:
ACL Data RX: Handle 16 flags 0x02 dlen 26
LE L2CAP: Enhanced Credit Connection Request (0x17) ident 2 len 18 PSM: 39 (0x0027) MTU: 256 MPS: 251 Credits: 65535 Source CID: 116 Source CID: 117 Source CID: 118 Source CID: 119 Source CID: 120 < ACL Data TX: Handle 16 flags 0x00 dlen 26 LE L2CAP: Enhanced Credit Connection Response (0x18) ident 2 len 18 MTU: 517 MPS: 247 Credits: 3 Result: Connection successful (0x0000) Destination CID: 68 Destination CID: 67 Destination CID: 66 Destination CID: 65 Destination CID: 64
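The reversal comes straight from the list helpers: list_add() inserts at the head, so iteration sees the newest entry first, while list_add_tail() appends, preserving arrival order. In miniature:

    list_add(&a->list, &h);          /* h: a                      */
    list_add(&b->list, &h);          /* h: b, a   (newest first)  */

    list_add_tail(&c->list, &h2);    /* h2: c                     */
    list_add_tail(&d->list, &h2);    /* h2: c, d  (arrival order) */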
Also make sure the response doesn't include channels that are not in BT_CONNECT2 state, since chan->ident can be set to the same value, as in the following trace:
< ACL Data TX: Handle 16 flags 0x00 dlen 12 LE L2CAP: LE Flow Control Credit (0x16) ident 6 len 4 Source CID: 64 Credits: 1 ...
ACL Data RX: Handle 16 flags 0x02 dlen 18
LE L2CAP: Enhanced Credit Connection Request (0x17) ident 6 len 10 PSM: 39 (0x0027) MTU: 517 MPS: 251 Credits: 255 Source CID: 70 < ACL Data TX: Handle 16 flags 0x00 dlen 20 LE L2CAP: Enhanced Credit Connection Response (0x18) ident 6 len 12 MTU: 517 MPS: 247 Credits: 3 Result: Connection successful (0x0000) Destination CID: 64 Destination CID: 68
Closes: https://github.com/bluez/bluez/issues/1094 Fixes: 9aa9d9473f15 ("Bluetooth: L2CAP: Fix responding with wrong PDU type") Signed-off-by: Luiz Augusto von Dentz luiz.von.dentz@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/bluetooth/l2cap_core.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c index 2a8051fae08c7..36d6122f2e12d 100644 --- a/net/bluetooth/l2cap_core.c +++ b/net/bluetooth/l2cap_core.c @@ -656,7 +656,8 @@ void __l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan) test_bit(FLAG_HOLD_HCI_CONN, &chan->flags)) hci_conn_hold(conn->hcon);
- list_add(&chan->list, &conn->chan_l); + /* Append to the list since the order matters for ECRED */ + list_add_tail(&chan->list, &conn->chan_l); }
void l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan) @@ -3995,7 +3996,11 @@ static void l2cap_ecred_rsp_defer(struct l2cap_chan *chan, void *data) { struct l2cap_ecred_rsp_data *rsp = data;
- if (test_bit(FLAG_ECRED_CONN_REQ_SENT, &chan->flags)) + /* Check if channel for outgoing connection or if it wasn't deferred + * since in those cases it must be skipped. + */ + if (test_bit(FLAG_ECRED_CONN_REQ_SENT, &chan->flags) || + !test_and_clear_bit(FLAG_DEFER_SETUP, &chan->flags)) return;
/* Reset ident so only one response is sent */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Colin Ian King colin.i.king@gmail.com
[ Upstream commit 318b83b71242998814a570c3420c042ee6165fca ]
The variable nr_servers is no longer used; the last reference to it was removed in commit 45df8462730d ("afs: Fix server list handling"), so clean up the code by removing it.
Signed-off-by: Colin Ian King colin.i.king@gmail.com Signed-off-by: David Howells dhowells@redhat.com cc: Marc Dionne marc.dionne@auristor.com cc: linux-afs@lists.infradead.org Link: https://lore.kernel.org/r/20221020173923.21342-1-colin.i.king@gmail.com/ Stable-dep-of: add117e48df4 ("afs: Fix the server_list to unuse a displaced server rather than putting it") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/volume.c | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/fs/afs/volume.c b/fs/afs/volume.c index a146d70efa650..c028598a903c9 100644 --- a/fs/afs/volume.c +++ b/fs/afs/volume.c @@ -76,11 +76,7 @@ static struct afs_volume *afs_alloc_volume(struct afs_fs_context *params, { struct afs_server_list *slist; struct afs_volume *volume; - int ret = -ENOMEM, nr_servers = 0, i; - - for (i = 0; i < vldb->nr_servers; i++) - if (vldb->fs_mask[i] & type_mask) - nr_servers++; + int ret = -ENOMEM;
volume = kzalloc(sizeof(struct afs_volume), GFP_KERNEL); if (!volume)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: David Howells dhowells@redhat.com
[ Upstream commit ca0e79a46097d54e4af46c67c852479d97af35bb ]
Make it possible to find the afs_volume structs that are using an afs_server struct to aid in breaking volume callbacks.
The way this is done is that each afs_volume already has an array of afs_server_entry records that point to the servers where that volume might be found. An afs_volume backpointer and a list node are added to each entry, and each entry is then added to an RCU-traversable list on the afs_server to which it points.
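The payoff is on the reader side: once entries are threaded onto server->volumes under cell->vs_lock, the callback-break path can walk the list without taking any lock. A sketch of the intended traversal (the per-volume action is hypothetical):

    rcu_read_lock();
    list_for_each_entry_rcu(se, &server->volumes, slink)
            consider_breaking_callbacks(se->volume);    /* hypothetical */
    rcu_read_unlock();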
Signed-off-by: David Howells dhowells@redhat.com cc: Marc Dionne marc.dionne@auristor.com cc: linux-afs@lists.infradead.org Stable-dep-of: add117e48df4 ("afs: Fix the server_list to unuse a displaced server rather than putting it") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/cell.c | 1 + fs/afs/internal.h | 23 +++++---- fs/afs/server.c | 1 + fs/afs/server_list.c | 112 +++++++++++++++++++++++++++++++++++++++---- fs/afs/vl_alias.c | 2 +- fs/afs/volume.c | 36 ++++++++------ 6 files changed, 143 insertions(+), 32 deletions(-)
diff --git a/fs/afs/cell.c b/fs/afs/cell.c index 926cb1188eba6..7c0dce8eecadd 100644 --- a/fs/afs/cell.c +++ b/fs/afs/cell.c @@ -161,6 +161,7 @@ static struct afs_cell *afs_alloc_cell(struct afs_net *net, refcount_set(&cell->ref, 1); atomic_set(&cell->active, 0); INIT_WORK(&cell->manager, afs_manage_cell_work); + spin_lock_init(&cell->vs_lock); cell->volumes = RB_ROOT; INIT_HLIST_HEAD(&cell->proc_volumes); seqlock_init(&cell->volume_lock); diff --git a/fs/afs/internal.h b/fs/afs/internal.h index 097d5a5f07b1a..fd4310272ccc1 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -378,6 +378,7 @@ struct afs_cell { unsigned int debug_id;
/* The volumes belonging to this cell */ + spinlock_t vs_lock; /* Lock for server->volumes */ struct rb_root volumes; /* Tree of volumes on this server */ struct hlist_head proc_volumes; /* procfs volume list */ seqlock_t volume_lock; /* For volumes */ @@ -501,6 +502,7 @@ struct afs_server { struct hlist_node addr4_link; /* Link in net->fs_addresses4 */ struct hlist_node addr6_link; /* Link in net->fs_addresses6 */ struct hlist_node proc_link; /* Link in net->fs_proc */ + struct list_head volumes; /* RCU list of afs_server_entry objects */ struct work_struct initcb_work; /* Work for CB.InitCallBackState* */ struct afs_server *gc_next; /* Next server in manager's list */ time64_t unuse_time; /* Time at which last unused */ @@ -549,12 +551,14 @@ struct afs_server { */ struct afs_server_entry { struct afs_server *server; + struct afs_volume *volume; + struct list_head slink; /* Link in server->volumes */ };
struct afs_server_list { struct rcu_head rcu; - afs_volid_t vids[AFS_MAXTYPES]; /* Volume IDs */ refcount_t usage; + bool attached; /* T if attached to servers */ unsigned char nr_servers; unsigned char preferred; /* Preferred server */ unsigned short vnovol_mask; /* Servers to be skipped due to VNOVOL */ @@ -567,10 +571,9 @@ struct afs_server_list { * Live AFS volume management. */ struct afs_volume { - union { - struct rcu_head rcu; - afs_volid_t vid; /* volume ID */ - }; + struct rcu_head rcu; + afs_volid_t vid; /* The volume ID of this volume */ + afs_volid_t vids[AFS_MAXTYPES]; /* All associated volume IDs */ refcount_t ref; time64_t update_at; /* Time at which to next update */ struct afs_cell *cell; /* Cell to which belongs (pins ref) */ @@ -1450,10 +1453,14 @@ static inline struct afs_server_list *afs_get_serverlist(struct afs_server_list }
extern void afs_put_serverlist(struct afs_net *, struct afs_server_list *); -extern struct afs_server_list *afs_alloc_server_list(struct afs_cell *, struct key *, - struct afs_vldb_entry *, - u8); +struct afs_server_list *afs_alloc_server_list(struct afs_volume *volume, + struct key *key, + struct afs_vldb_entry *vldb); extern bool afs_annotate_server_list(struct afs_server_list *, struct afs_server_list *); +void afs_attach_volume_to_servers(struct afs_volume *volume, struct afs_server_list *slist); +void afs_reattach_volume_to_servers(struct afs_volume *volume, struct afs_server_list *slist, + struct afs_server_list *old); +void afs_detach_volume_from_servers(struct afs_volume *volume, struct afs_server_list *slist);
/* * super.c diff --git a/fs/afs/server.c b/fs/afs/server.c index 0bd2f5ba6900c..87381c2ffe374 100644 --- a/fs/afs/server.c +++ b/fs/afs/server.c @@ -236,6 +236,7 @@ static struct afs_server *afs_alloc_server(struct afs_cell *cell, server->addr_version = alist->version; server->uuid = *uuid; rwlock_init(&server->fs_lock); + INIT_LIST_HEAD(&server->volumes); INIT_WORK(&server->initcb_work, afs_server_init_callback_work); init_waitqueue_head(&server->probe_wq); INIT_LIST_HEAD(&server->probe_link); diff --git a/fs/afs/server_list.c b/fs/afs/server_list.c index b59896b1de0af..4d6369477f54e 100644 --- a/fs/afs/server_list.c +++ b/fs/afs/server_list.c @@ -24,13 +24,13 @@ void afs_put_serverlist(struct afs_net *net, struct afs_server_list *slist) /* * Build a server list from a VLDB record. */ -struct afs_server_list *afs_alloc_server_list(struct afs_cell *cell, +struct afs_server_list *afs_alloc_server_list(struct afs_volume *volume, struct key *key, - struct afs_vldb_entry *vldb, - u8 type_mask) + struct afs_vldb_entry *vldb) { struct afs_server_list *slist; struct afs_server *server; + unsigned int type_mask = 1 << volume->type; int ret = -ENOMEM, nr_servers = 0, i, j;
for (i = 0; i < vldb->nr_servers; i++) @@ -44,15 +44,12 @@ struct afs_server_list *afs_alloc_server_list(struct afs_cell *cell, refcount_set(&slist->usage, 1); rwlock_init(&slist->lock);
- for (i = 0; i < AFS_MAXTYPES; i++) - slist->vids[i] = vldb->vid[i]; - /* Make sure a records exists for each server in the list. */ for (i = 0; i < vldb->nr_servers; i++) { if (!(vldb->fs_mask[i] & type_mask)) continue;
- server = afs_lookup_server(cell, key, &vldb->fs_server[i], + server = afs_lookup_server(volume->cell, key, &vldb->fs_server[i], vldb->addr_version[i]); if (IS_ERR(server)) { ret = PTR_ERR(server); @@ -70,7 +67,7 @@ struct afs_server_list *afs_alloc_server_list(struct afs_cell *cell, break; if (j < slist->nr_servers) { if (slist->servers[j].server == server) { - afs_put_server(cell->net, server, + afs_put_server(volume->cell->net, server, afs_server_trace_put_slist_isort); continue; } @@ -81,6 +78,7 @@ struct afs_server_list *afs_alloc_server_list(struct afs_cell *cell, }
slist->servers[j].server = server; + slist->servers[j].volume = volume; slist->nr_servers++; }
@@ -92,7 +90,7 @@ struct afs_server_list *afs_alloc_server_list(struct afs_cell *cell, return slist;
error_2: - afs_put_serverlist(cell->net, slist); + afs_put_serverlist(volume->cell->net, slist); error: return ERR_PTR(ret); } @@ -127,3 +125,99 @@ bool afs_annotate_server_list(struct afs_server_list *new,
return true; } + +/* + * Attach a volume to the servers it is going to use. + */ +void afs_attach_volume_to_servers(struct afs_volume *volume, struct afs_server_list *slist) +{ + struct afs_server_entry *se, *pe; + struct afs_server *server; + struct list_head *p; + unsigned int i; + + spin_lock(&volume->cell->vs_lock); + + for (i = 0; i < slist->nr_servers; i++) { + se = &slist->servers[i]; + server = se->server; + + list_for_each(p, &server->volumes) { + pe = list_entry(p, struct afs_server_entry, slink); + if (volume->vid <= pe->volume->vid) + break; + } + list_add_tail_rcu(&se->slink, p); + } + + slist->attached = true; + spin_unlock(&volume->cell->vs_lock); +} + +/* + * Reattach a volume to the servers it is going to use when server list is + * replaced. We try to switch the attachment points to avoid rewalking the + * lists. + */ +void afs_reattach_volume_to_servers(struct afs_volume *volume, struct afs_server_list *new, + struct afs_server_list *old) +{ + unsigned int n = 0, o = 0; + + spin_lock(&volume->cell->vs_lock); + + while (n < new->nr_servers || o < old->nr_servers) { + struct afs_server_entry *pn = n < new->nr_servers ? &new->servers[n] : NULL; + struct afs_server_entry *po = o < old->nr_servers ? &old->servers[o] : NULL; + struct afs_server_entry *s; + struct list_head *p; + int diff; + + if (pn && po && pn->server == po->server) { + list_replace_rcu(&po->slink, &pn->slink); + n++; + o++; + continue; + } + + if (pn && po) + diff = memcmp(&pn->server->uuid, &po->server->uuid, + sizeof(pn->server->uuid)); + else + diff = pn ? -1 : 1; + + if (diff < 0) { + list_for_each(p, &pn->server->volumes) { + s = list_entry(p, struct afs_server_entry, slink); + if (volume->vid <= s->volume->vid) + break; + } + list_add_tail_rcu(&pn->slink, p); + n++; + } else { + list_del_rcu(&po->slink); + o++; + } + } + + spin_unlock(&volume->cell->vs_lock); +} + +/* + * Detach a volume from the servers it has been using. + */ +void afs_detach_volume_from_servers(struct afs_volume *volume, struct afs_server_list *slist) +{ + unsigned int i; + + if (!slist->attached) + return; + + spin_lock(&volume->cell->vs_lock); + + for (i = 0; i < slist->nr_servers; i++) + list_del_rcu(&slist->servers[i].slink); + + slist->attached = false; + spin_unlock(&volume->cell->vs_lock); +} diff --git a/fs/afs/vl_alias.c b/fs/afs/vl_alias.c index 83cf1bfbe343a..b2cc10df95308 100644 --- a/fs/afs/vl_alias.c +++ b/fs/afs/vl_alias.c @@ -126,7 +126,7 @@ static int afs_compare_volume_slists(const struct afs_volume *vol_a, lb = rcu_dereference(vol_b->servers);
for (i = 0; i < AFS_MAXTYPES; i++) - if (la->vids[i] != lb->vids[i]) + if (vol_a->vids[i] != vol_b->vids[i]) return 0;
while (a < la->nr_servers && b < lb->nr_servers) { diff --git a/fs/afs/volume.c b/fs/afs/volume.c index c028598a903c9..0f64b97581272 100644 --- a/fs/afs/volume.c +++ b/fs/afs/volume.c @@ -72,11 +72,11 @@ static void afs_remove_volume_from_cell(struct afs_volume *volume) */ static struct afs_volume *afs_alloc_volume(struct afs_fs_context *params, struct afs_vldb_entry *vldb, - unsigned long type_mask) + struct afs_server_list **_slist) { struct afs_server_list *slist; struct afs_volume *volume; - int ret = -ENOMEM; + int ret = -ENOMEM, i;
volume = kzalloc(sizeof(struct afs_volume), GFP_KERNEL); if (!volume) @@ -95,13 +95,16 @@ static struct afs_volume *afs_alloc_volume(struct afs_fs_context *params, rwlock_init(&volume->cb_v_break_lock); memcpy(volume->name, vldb->name, vldb->name_len + 1);
- slist = afs_alloc_server_list(params->cell, params->key, vldb, type_mask); + for (i = 0; i < AFS_MAXTYPES; i++) + volume->vids[i] = vldb->vid[i]; + + slist = afs_alloc_server_list(volume, params->key, vldb); if (IS_ERR(slist)) { ret = PTR_ERR(slist); goto error_1; }
- refcount_set(&slist->usage, 1); + *_slist = slist; rcu_assign_pointer(volume->servers, slist); trace_afs_volume(volume->vid, 1, afs_volume_trace_alloc); return volume; @@ -117,17 +120,19 @@ static struct afs_volume *afs_alloc_volume(struct afs_fs_context *params, * Look up or allocate a volume record. */ static struct afs_volume *afs_lookup_volume(struct afs_fs_context *params, - struct afs_vldb_entry *vldb, - unsigned long type_mask) + struct afs_vldb_entry *vldb) { + struct afs_server_list *slist; struct afs_volume *candidate, *volume;
- candidate = afs_alloc_volume(params, vldb, type_mask); + candidate = afs_alloc_volume(params, vldb, &slist); if (IS_ERR(candidate)) return candidate;
volume = afs_insert_volume_into_cell(params->cell, candidate); - if (volume != candidate) + if (volume == candidate) + afs_attach_volume_to_servers(volume, slist); + else afs_put_volume(params->net, candidate, afs_volume_trace_put_cell_dup); return volume; } @@ -208,8 +213,7 @@ struct afs_volume *afs_create_volume(struct afs_fs_context *params) goto error; }
- type_mask = 1UL << params->type; - volume = afs_lookup_volume(params, vldb, type_mask); + volume = afs_lookup_volume(params, vldb);
error: kfree(vldb); @@ -221,14 +225,17 @@ struct afs_volume *afs_create_volume(struct afs_fs_context *params) */ static void afs_destroy_volume(struct afs_net *net, struct afs_volume *volume) { + struct afs_server_list *slist = rcu_access_pointer(volume->servers); + _enter("%p", volume);
#ifdef CONFIG_AFS_FSCACHE ASSERTCMP(volume->cache, ==, NULL); #endif
+ afs_detach_volume_from_servers(volume, slist); afs_remove_volume_from_cell(volume); - afs_put_serverlist(net, rcu_access_pointer(volume->servers)); + afs_put_serverlist(net, slist); afs_put_cell(volume->cell, afs_cell_trace_put_vol); trace_afs_volume(volume->vid, refcount_read(&volume->ref), afs_volume_trace_free); @@ -362,8 +369,7 @@ static int afs_update_volume_status(struct afs_volume *volume, struct key *key) }
/* See if the volume's server list got updated. */ - new = afs_alloc_server_list(volume->cell, key, - vldb, (1 << volume->type)); + new = afs_alloc_server_list(volume, key, vldb); if (IS_ERR(new)) { ret = PTR_ERR(new); goto error_vldb; @@ -384,9 +390,11 @@ static int afs_update_volume_status(struct afs_volume *volume, struct key *key)
volume->update_at = ktime_get_real_seconds() + afs_volume_record_life; write_unlock(&volume->servers_lock); - ret = 0;
+ if (discard == old) + afs_reattach_volume_to_servers(volume, new, old); afs_put_serverlist(volume->cell->net, discard); + ret = 0; error_vldb: kfree(vldb); error:
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: David Howells dhowells@redhat.com
[ Upstream commit add117e48df4788a86a21bd0515833c0a6db1ad1 ]
When allocating and building an afs_server_list struct object from a VLDB record, we look up each server address to get the server record for it - but a server may have more than one entry in the record and we discard the duplicate pointers. Currently, however, when we discard, we only put a server record, not unuse it - but the lookup got us an active-user count.
The active-user count on an afs_server_list object determines its lifetime whereas the refcount keeps the memory backing it around. Failing to reduce the active-user counter prevents the record from being cleaned up and can lead to multiple copies being seen - and pointing to deleted afs_cell objects and other such things.
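For readers unfamiliar with the pattern, here is a minimal hedged sketch of the dual-counter lifetime scheme described above (hypothetical names and plain counters, not the actual fs/afs code):

    /* Illustration only: "ref" pins the backing memory, "active" marks
     * the record as in service and is what the manager checks before
     * reaping it.
     */
    struct server_rec {
            int ref;        /* keeps the memory around */
            int active;     /* keeps the record in service */
    };

    static void put_server(struct server_rec *s)
    {
            s->ref--;               /* drops only the memory reference */
    }

    static void unuse_server(struct server_rec *s)
    {
            s->active--;            /* lets the manager reap the record */
            put_server(s);          /* and drops the memory reference too */
    }

A lookup takes one of each count, so discarding a duplicate with a bare put leaks an active count and the record is never cleaned up, as described above.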
Fix this by switching the incorrect 'put' to an 'unuse' instead.
Without this, occasionally, a dead server record can be seen in /proc/net/afs/servers and list corruption may be observed:
list_del corruption. prev->next should be ffff888102423e40, but was 0000000000000000. (prev=ffff88810140cd38)
Fixes: 977e5f8ed0ab ("afs: Split the usage count on struct afs_server") Signed-off-by: David Howells dhowells@redhat.com cc: Marc Dionne marc.dionne@auristor.com cc: Simon Horman horms@kernel.org cc: linux-afs@lists.infradead.org Link: https://patch.msgid.link/20250218192250.296870-5-dhowells@redhat.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/server_list.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/afs/server_list.c b/fs/afs/server_list.c index 4d6369477f54e..89c75d934f79e 100644 --- a/fs/afs/server_list.c +++ b/fs/afs/server_list.c @@ -67,8 +67,8 @@ struct afs_server_list *afs_alloc_server_list(struct afs_volume *volume, break; if (j < slist->nr_servers) { if (slist->servers[j].server == server) { - afs_put_server(volume->cell->net, server, - afs_server_trace_put_slist_isort); + afs_unuse_server(volume->cell->net, server, + afs_server_trace_put_slist_isort); continue; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit 0e4427f8f587c4b603475468bb3aee9418574893 ]
After commit 22600596b675 ("ipv4: give an IPv4 dev to blackhole_netdev") IPv4 neighbors can be constructed on the blackhole net device, but they are constructed with an output function (neigh_direct_output()) that simply calls dev_queue_xmit(). The latter will transmit packets via 'skb->dev' which might not be the blackhole net device if dst_dev_put() switched 'dst->dev' to the blackhole net device while another CPU was using the dst entry in ip_output(), but after it already initialized 'skb->dev' from 'dst->dev'.
Specifically, the following can happen:
CPU1                                     CPU2

udp_sendmsg(sk1)                         udp_sendmsg(sk2)
  udp_send_skb()                         [...]
    ip_output()
      skb->dev = skb_dst(skb)->dev
                                         dst_dev_put()
                                           dst->dev = blackhole_netdev
    ip_finish_output2()
      resolves neigh on dst->dev
      neigh_output()
        neigh_direct_output()
          dev_queue_xmit()
This will result in IPv4 packets being sent without an Ethernet header via a valid net device:
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on enp9s0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
22:07:02.329668 20:00:40:11:18:fb > 45:00:00:44:f4:94, ethertype Unknown (0x58c6), length 68:
        0x0000:  8dda 74ca f1ae ca6c ca6c 0098 969c 0400  ..t....l.l......
        0x0010:  0000 4730 3f18 6800 0000 0000 0000 9971  ..G0?.h........q
        0x0020:  c4c9 9055 a157 0a70 9ead bf83 38ca ab38  ...U.W.p....8..8
        0x0030:  8add ab96 e052                           .....R
Fix by making sure that neighbors are constructed on top of the blackhole net device with an output function that simply consumes the packets, in a similar fashion to dst_discard_out() and blackhole_netdev_xmit().
Fixes: 8d7017fd621d ("blackhole_netdev: use blackhole_netdev to invalidate dst entries") Fixes: 22600596b675 ("ipv4: give an IPv4 dev to blackhole_netdev") Reported-by: Florian Meister fmei@sfs.com Closes: https://lore.kernel.org/netdev/20250210084931.23a5c2e4@hermes.local/ Signed-off-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Eric Dumazet edumazet@google.com Link: https://patch.msgid.link/20250220072559.782296-1-idosch@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/loopback.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+)
diff --git a/drivers/net/loopback.c b/drivers/net/loopback.c index 2e9742952c4e9..b213397672d22 100644 --- a/drivers/net/loopback.c +++ b/drivers/net/loopback.c @@ -246,8 +246,22 @@ static netdev_tx_t blackhole_netdev_xmit(struct sk_buff *skb, return NETDEV_TX_OK; }
+static int blackhole_neigh_output(struct neighbour *n, struct sk_buff *skb) +{ + kfree_skb(skb); + return 0; +} + +static int blackhole_neigh_construct(struct net_device *dev, + struct neighbour *n) +{ + n->output = blackhole_neigh_output; + return 0; +} + static const struct net_device_ops blackhole_netdev_ops = { .ndo_start_xmit = blackhole_netdev_xmit, + .ndo_neigh_construct = blackhole_neigh_construct, };
/* This is a dst-dummy device used specifically for invalidated
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jiri Slaby (SUSE) jirislaby@kernel.org
[ Upstream commit c180188ec02281126045414e90d08422a80f75b4 ]
Commit 7acf8a1e8a28 ("Replace 2 jiffies with sysctl netdev_budget_usecs to enable softirq tuning") added a possibility to set net_hotdata.netdev_budget_usecs, but added no lower bound checking.
Commit a4837980fd9f ("net: revert default NAPI poll timeout to 2 jiffies") made the *initial* value HZ-dependent, so the initial value is at least 2 jiffies even for lower HZ values (2 ms for 1000 Hz, 8 ms for 250 Hz, 20 ms for 100 Hz).
But a user still can set improper values by a sysctl. Set .extra1 (the lower bound) for net_hotdata.netdev_budget_usecs to the same value as in the latter commit. That is to 2 jiffies.
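As a quick sanity check of that bound, a standalone userspace snippet (illustration only, not kernel code) reproduces the per-HZ minimums quoted above:

    #include <stdio.h>

    int main(void)
    {
            const long USEC_PER_SEC = 1000000;
            const int hz[] = { 100, 250, 1000 };

            /* 2 jiffies expressed in microseconds, as in the patch */
            for (int i = 0; i < 3; i++)
                    printf("HZ=%d -> min netdev_budget_usecs = %ld\n",
                           hz[i], 2 * USEC_PER_SEC / hz[i]);
            return 0;
    }

This prints 20000, 8000 and 2000, i.e. the 20 ms, 8 ms and 2 ms figures from the commit above.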
Fixes: a4837980fd9f ("net: revert default NAPI poll timeout to 2 jiffies") Fixes: 7acf8a1e8a28 ("Replace 2 jiffies with sysctl netdev_budget_usecs to enable softirq tuning") Signed-off-by: Jiri Slaby (SUSE) jirislaby@kernel.org Cc: Dmitry Yakunin zeil@yandex-team.ru Cc: Konstantin Khlebnikov khlebnikov@yandex-team.ru Link: https://patch.msgid.link/20250220110752.137639-1-jirislaby@kernel.org Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/sysctl_net_core.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c index 47ca6d3ddbb56..75efc712bb9bc 100644 --- a/net/core/sysctl_net_core.c +++ b/net/core/sysctl_net_core.c @@ -30,6 +30,7 @@ static int min_sndbuf = SOCK_MIN_SNDBUF; static int min_rcvbuf = SOCK_MIN_RCVBUF; static int max_skb_frags = MAX_SKB_FRAGS; static int min_mem_pcpu_rsv = SK_MEMORY_PCPU_RESERVE; +static int netdev_budget_usecs_min = 2 * USEC_PER_SEC / HZ;
static int net_msg_warn; /* Unused, but still a sysctl */
@@ -554,7 +555,7 @@ static struct ctl_table net_core_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = SYSCTL_ZERO, + .extra1 = &netdev_budget_usecs_min, }, { .procname = "fb_tunnels_only_for_init_net",
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Peilin He he.peilin@zte.com.cn
[ Upstream commit db3efdcf70c752e8a8deb16071d8e693c3ef8746 ]
Introduce a tracepoint for icmp_send, which can help users to get more detailed information conveniently when abnormal ICMP events happen.
1. Giving a usecase example:
=============================
When an application experiences packet loss due to an unreachable UDP destination port, the kernel will send an exception message through the icmp_send function. By adding a trace point for icmp_send, developers or system administrators can obtain detailed information about the UDP packet loss, including the type, code, source address, destination address, source port, and destination port. This facilitates the troubleshooting of UDP packet loss issues, especially for network-service applications.
2. Operation Instructions:
==========================
Switch to the tracing directory.
        cd /sys/kernel/tracing
Filter for destination port unreachable.
        echo "type==3 && code==3" > events/icmp/icmp_send/filter
Enable trace event.
        echo 1 > events/icmp/icmp_send/enable
3. Result View:
================
 udp_client_erro-11370  [002] ...s.12  124.728002: icmp_send: icmp_send: type=3, code=3. From 127.0.0.1:41895 to 127.0.0.1:6666 ulen=23 skbaddr=00000000589b167a
Signed-off-by: Peilin He he.peilin@zte.com.cn Signed-off-by: xu xin xu.xin16@zte.com.cn Reviewed-by: Yunkai Zhang zhang.yunkai@zte.com.cn Cc: Yang Yang yang.yang29@zte.com.cn Cc: Liu Chun liu.chun2@zte.com.cn Cc: Xuexin Jiang jiang.xuexin@zte.com.cn Reviewed-by: Steven Rostedt (Google) rostedt@goodmis.org Signed-off-by: David S. Miller davem@davemloft.net Stable-dep-of: 27843ce6ba3d ("ipvlan: ensure network headers are in skb linear part") Signed-off-by: Sasha Levin sashal@kernel.org --- include/trace/events/icmp.h | 67 +++++++++++++++++++++++++++++++++++++ net/ipv4/icmp.c | 4 +++ 2 files changed, 71 insertions(+) create mode 100644 include/trace/events/icmp.h
diff --git a/include/trace/events/icmp.h b/include/trace/events/icmp.h new file mode 100644 index 0000000000000..31559796949a7 --- /dev/null +++ b/include/trace/events/icmp.h @@ -0,0 +1,67 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#undef TRACE_SYSTEM +#define TRACE_SYSTEM icmp + +#if !defined(_TRACE_ICMP_H) || defined(TRACE_HEADER_MULTI_READ) +#define _TRACE_ICMP_H + +#include <linux/icmp.h> +#include <linux/tracepoint.h> + +TRACE_EVENT(icmp_send, + + TP_PROTO(const struct sk_buff *skb, int type, int code), + + TP_ARGS(skb, type, code), + + TP_STRUCT__entry( + __field(const void *, skbaddr) + __field(int, type) + __field(int, code) + __array(__u8, saddr, 4) + __array(__u8, daddr, 4) + __field(__u16, sport) + __field(__u16, dport) + __field(unsigned short, ulen) + ), + + TP_fast_assign( + struct iphdr *iph = ip_hdr(skb); + struct udphdr *uh = udp_hdr(skb); + int proto_4 = iph->protocol; + __be32 *p32; + + __entry->skbaddr = skb; + __entry->type = type; + __entry->code = code; + + if (proto_4 != IPPROTO_UDP || (u8 *)uh < skb->head || + (u8 *)uh + sizeof(struct udphdr) + > skb_tail_pointer(skb)) { + __entry->sport = 0; + __entry->dport = 0; + __entry->ulen = 0; + } else { + __entry->sport = ntohs(uh->source); + __entry->dport = ntohs(uh->dest); + __entry->ulen = ntohs(uh->len); + } + + p32 = (__be32 *) __entry->saddr; + *p32 = iph->saddr; + + p32 = (__be32 *) __entry->daddr; + *p32 = iph->daddr; + ), + + TP_printk("icmp_send: type=%d, code=%d. From %pI4:%u to %pI4:%u ulen=%d skbaddr=%p", + __entry->type, __entry->code, + __entry->saddr, __entry->sport, __entry->daddr, + __entry->dport, __entry->ulen, __entry->skbaddr) +); + +#endif /* _TRACE_ICMP_H */ + +/* This part must be outside protection */ +#include <trace/define_trace.h> + diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index a21d32b3ae6c3..b05fa424ad5ce 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -93,6 +93,8 @@ #include <net/ip_fib.h> #include <net/l3mdev.h> #include <net/addrconf.h> +#define CREATE_TRACE_POINTS +#include <trace/events/icmp.h>
/* * Build xmit assembly blocks @@ -778,6 +780,8 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info, if (!fl4.saddr) fl4.saddr = htonl(INADDR_DUMMY);
+ trace_icmp_send(skb_in, type, code); + icmp_push_reply(sk, &icmp_param, &fl4, &ipc, &rt); ende: ip_rt_put(rt);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit 1c6f50b37f711b831d78973dad0df1da99ad0014 ]
Align the ICMP code to other callers of ip_route_input() and pass the full DS field. In the future this will allow us to perform a route lookup according to the full DSCP value.
No functional changes intended since the upper DSCP bits are masked when comparing against the TOS selectors in FIB rules and routes.
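For context, a hedged illustration of the masking difference this series revolves around (constants as defined in the kernel headers at the time of writing):

    /* RT_TOS() keeps only the legacy 4-bit TOS field, discarding the
     * upper DSCP bits; INET_DSCP_MASK keeps the full 6-bit DSCP and
     * drops only the two ECN bits.
     */
    #define IPTOS_TOS_MASK  0x1e
    #define RT_TOS(tos)     ((tos) & IPTOS_TOS_MASK)
    #define INET_DSCP_MASK  0xfc

Passing the full DS field therefore preserves the upper DSCP bits for a future full-DSCP lookup, while the FIB rule/route comparison still masks them for now.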
Signed-off-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Guillaume Nault gnault@redhat.com Acked-by: Florian Westphal fw@strlen.de Reviewed-by: David Ahern dsahern@kernel.org Link: https://patch.msgid.link/20240821125251.1571445-11-idosch@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Stable-dep-of: 27843ce6ba3d ("ipvlan: ensure network headers are in skb linear part") Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/icmp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index b05fa424ad5ce..3807a269e0755 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -550,7 +550,7 @@ static struct rtable *icmp_route_lookup(struct net *net, orefdst = skb_in->_skb_refdst; /* save old refdst */ skb_dst_set(skb_in, NULL); err = ip_route_input(skb_in, fl4_dec.daddr, fl4_dec.saddr, - RT_TOS(tos), rt2->dst.dev); + tos, rt2->dst.dev);
dst_release(&rt2->dst); rt2 = skb_rtable(skb_in);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit 4805646c42e51d2fbf142864d281473ad453ad5d ]
The function is called to resolve a route for an ICMP message that is sent in response to a situation. Based on the type of the generated ICMP message, the function is either passed the DS field of the packet that generated the ICMP message or a DS field that is derived from it.
Unmask the upper DSCP bits before resolving the output route via ip_route_output_key_hash() so that in the future the lookup could be performed according to the full DSCP value.
Signed-off-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Guillaume Nault gnault@redhat.com Signed-off-by: David S. Miller davem@davemloft.net Stable-dep-of: 27843ce6ba3d ("ipvlan: ensure network headers are in skb linear part") Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/icmp.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index 3807a269e0755..a154339845dd4 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -93,6 +93,7 @@ #include <net/ip_fib.h> #include <net/l3mdev.h> #include <net/addrconf.h> +#include <net/inet_dscp.h> #define CREATE_TRACE_POINTS #include <trace/events/icmp.h>
@@ -502,7 +503,7 @@ static struct rtable *icmp_route_lookup(struct net *net, fl4->saddr = saddr; fl4->flowi4_mark = mark; fl4->flowi4_uid = sock_net_uid(net, NULL); - fl4->flowi4_tos = RT_TOS(tos); + fl4->flowi4_tos = tos & INET_DSCP_MASK; fl4->flowi4_proto = IPPROTO_ICMP; fl4->fl4_icmp_type = type; fl4->fl4_icmp_code = code;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit 939cd1abf080c629552a9c5e6db4c0509d13e4c7 ]
Unmask the upper DSCP bits when calling ip_route_output_flow() so that in the future it could perform the FIB lookup according to the full DSCP value.
Signed-off-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Guillaume Nault gnault@redhat.com Signed-off-by: David S. Miller davem@davemloft.net Stable-dep-of: 27843ce6ba3d ("ipvlan: ensure network headers are in skb linear part") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ipvlan/ipvlan_core.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c index 1d49771d07f4c..d22a705ac4d6f 100644 --- a/drivers/net/ipvlan/ipvlan_core.c +++ b/drivers/net/ipvlan/ipvlan_core.c @@ -2,6 +2,8 @@ /* Copyright (c) 2014 Mahesh Bandewar maheshb@google.com */
+#include <net/inet_dscp.h> + #include "ipvlan.h"
static u32 ipvlan_jhash_secret __read_mostly; @@ -420,7 +422,7 @@ static noinline_for_stack int ipvlan_process_v4_outbound(struct sk_buff *skb) int err, ret = NET_XMIT_DROP; struct flowi4 fl4 = { .flowi4_oif = dev->ifindex, - .flowi4_tos = RT_TOS(ip4h->tos), + .flowi4_tos = ip4h->tos & INET_DSCP_MASK, .flowi4_flags = FLOWI_FLAG_ANYSRC, .flowi4_mark = skb->mark, .daddr = ip4h->daddr,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Guillaume Nault gnault@redhat.com
[ Upstream commit 913c83a610bb7dd8e5952a2b4663e1feec0b5de6 ]
Pass a dscp_t variable to icmp_route_lookup(), instead of a plain u8, to prevent accidental setting of ECN bits in ->flowi4_tos. Rename that variable ("tos" -> "dscp") to make the intent clear.
While there, reorganise the function parameters to fill up horizontal space.
Signed-off-by: Guillaume Nault gnault@redhat.com Reviewed-by: David Ahern dsahern@kernel.org Link: https://patch.msgid.link/294fead85c6035bcdc5fcf9a6bb4ce8798c45ba1.1727807926... Signed-off-by: Jakub Kicinski kuba@kernel.org Stable-dep-of: 27843ce6ba3d ("ipvlan: ensure network headers are in skb linear part") Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/icmp.c | 19 +++++++++---------- 1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index a154339845dd4..855fcef829e2c 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -484,13 +484,11 @@ static struct net_device *icmp_get_route_lookup_dev(struct sk_buff *skb) return route_lookup_dev; }
-static struct rtable *icmp_route_lookup(struct net *net, - struct flowi4 *fl4, +static struct rtable *icmp_route_lookup(struct net *net, struct flowi4 *fl4, struct sk_buff *skb_in, - const struct iphdr *iph, - __be32 saddr, u8 tos, u32 mark, - int type, int code, - struct icmp_bxm *param) + const struct iphdr *iph, __be32 saddr, + dscp_t dscp, u32 mark, int type, + int code, struct icmp_bxm *param) { struct net_device *route_lookup_dev; struct rtable *rt, *rt2; @@ -503,7 +501,7 @@ static struct rtable *icmp_route_lookup(struct net *net, fl4->saddr = saddr; fl4->flowi4_mark = mark; fl4->flowi4_uid = sock_net_uid(net, NULL); - fl4->flowi4_tos = tos & INET_DSCP_MASK; + fl4->flowi4_tos = inet_dscp_to_dsfield(dscp); fl4->flowi4_proto = IPPROTO_ICMP; fl4->fl4_icmp_type = type; fl4->fl4_icmp_code = code; @@ -551,7 +549,7 @@ static struct rtable *icmp_route_lookup(struct net *net, orefdst = skb_in->_skb_refdst; /* save old refdst */ skb_dst_set(skb_in, NULL); err = ip_route_input(skb_in, fl4_dec.daddr, fl4_dec.saddr, - tos, rt2->dst.dev); + inet_dscp_to_dsfield(dscp), rt2->dst.dev);
dst_release(&rt2->dst); rt2 = skb_rtable(skb_in); @@ -747,8 +745,9 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info, ipc.opt = &icmp_param.replyopts.opt; ipc.sockc.mark = mark;
- rt = icmp_route_lookup(net, &fl4, skb_in, iph, saddr, tos, mark, - type, code, &icmp_param); + rt = icmp_route_lookup(net, &fl4, skb_in, iph, saddr, + inet_dsfield_to_dscp(tos), mark, type, code, + &icmp_param); if (IS_ERR(rt)) goto out_unlock;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Guillaume Nault gnault@redhat.com
[ Upstream commit 7e863e5db6185b1add0df4cb01b31a4ed1c4b738 ]
Pass a dscp_t variable to ip_route_input(), instead of a plain u8, to prevent accidental setting of ECN bits in ->flowi4_tos.
Callers of ip_route_input() to consider are:
* input_action_end_dx4_finish() and input_action_end_dt4() in net/ipv6/seg6_local.c. These functions set the tos parameter to 0, which is already a valid dscp_t value, so they don't need to be adjusted for the new prototype.
* icmp_route_lookup(), which already has a dscp_t variable to pass as parameter. We just need to remove the inet_dscp_to_dsfield() conversion.
* br_nf_pre_routing_finish(), ip_options_rcv_srr() and ip4ip6_err(), which get the DSCP directly from IPv4 headers. Define a helper to read the .tos field of struct iphdr as dscp_t, so that these functions don't have to do the conversion manually.
While there, declare *iph as const in br_nf_pre_routing_finish(), declare its local variables in reverse-christmas-tree order and move the "err = ip_route_input()" assignment out of the conditional to avoid a checkpatch warning.
Signed-off-by: Guillaume Nault gnault@redhat.com Reviewed-by: David Ahern dsahern@kernel.org Link: https://patch.msgid.link/e9d40781d64d3d69f4c79ac8a008b8d67a033e8d.1727807926... Signed-off-by: Jakub Kicinski kuba@kernel.org Stable-dep-of: 27843ce6ba3d ("ipvlan: ensure network headers are in skb linear part") Signed-off-by: Sasha Levin sashal@kernel.org --- include/net/ip.h | 5 +++++ include/net/route.h | 5 +++-- net/bridge/br_netfilter_hooks.c | 8 +++++--- net/ipv4/icmp.c | 2 +- net/ipv4/ip_options.c | 3 ++- net/ipv6/ip6_tunnel.c | 4 ++-- 6 files changed, 18 insertions(+), 9 deletions(-)
diff --git a/include/net/ip.h b/include/net/ip.h index 9d754c4a53002..4ee23eb0814a3 100644 --- a/include/net/ip.h +++ b/include/net/ip.h @@ -409,6 +409,11 @@ int ip_decrease_ttl(struct iphdr *iph) return --iph->ttl; }
+static inline dscp_t ip4h_dscp(const struct iphdr *ip4h) +{ + return inet_dsfield_to_dscp(ip4h->tos); +} + static inline int ip_mtu_locked(const struct dst_entry *dst) { const struct rtable *rt = (const struct rtable *)dst; diff --git a/include/net/route.h b/include/net/route.h index f396176022377..4185e6da9ef85 100644 --- a/include/net/route.h +++ b/include/net/route.h @@ -203,12 +203,13 @@ int ip_route_use_hint(struct sk_buff *skb, __be32 dst, __be32 src, const struct sk_buff *hint);
static inline int ip_route_input(struct sk_buff *skb, __be32 dst, __be32 src, - u8 tos, struct net_device *devin) + dscp_t dscp, struct net_device *devin) { int err;
rcu_read_lock(); - err = ip_route_input_noref(skb, dst, src, tos, devin); + err = ip_route_input_noref(skb, dst, src, inet_dscp_to_dsfield(dscp), + devin); if (!err) { skb_dst_force(skb); if (!skb_dst(skb)) diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c index 5c6ed1d49b92c..b4d661fe7886d 100644 --- a/net/bridge/br_netfilter_hooks.c +++ b/net/bridge/br_netfilter_hooks.c @@ -366,9 +366,9 @@ br_nf_ipv4_daddr_was_changed(const struct sk_buff *skb, */ static int br_nf_pre_routing_finish(struct net *net, struct sock *sk, struct sk_buff *skb) { - struct net_device *dev = skb->dev, *br_indev; - struct iphdr *iph = ip_hdr(skb); struct nf_bridge_info *nf_bridge = nf_bridge_info_get(skb); + struct net_device *dev = skb->dev, *br_indev; + const struct iphdr *iph = ip_hdr(skb); struct rtable *rt; int err;
@@ -386,7 +386,9 @@ static int br_nf_pre_routing_finish(struct net *net, struct sock *sk, struct sk_ } nf_bridge->in_prerouting = 0; if (br_nf_ipv4_daddr_was_changed(skb, nf_bridge)) { - if ((err = ip_route_input(skb, iph->daddr, iph->saddr, iph->tos, dev))) { + err = ip_route_input(skb, iph->daddr, iph->saddr, + ip4h_dscp(iph), dev); + if (err) { struct in_device *in_dev = __in_dev_get_rcu(dev);
/* If err equals -EHOSTUNREACH the error is due to a diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index 855fcef829e2c..94501bb30c431 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -549,7 +549,7 @@ static struct rtable *icmp_route_lookup(struct net *net, struct flowi4 *fl4, orefdst = skb_in->_skb_refdst; /* save old refdst */ skb_dst_set(skb_in, NULL); err = ip_route_input(skb_in, fl4_dec.daddr, fl4_dec.saddr, - inet_dscp_to_dsfield(dscp), rt2->dst.dev); + dscp, rt2->dst.dev);
dst_release(&rt2->dst); rt2 = skb_rtable(skb_in); diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c index a9e22a098872f..b4c59708fc095 100644 --- a/net/ipv4/ip_options.c +++ b/net/ipv4/ip_options.c @@ -617,7 +617,8 @@ int ip_options_rcv_srr(struct sk_buff *skb, struct net_device *dev)
orefdst = skb->_skb_refdst; skb_dst_set(skb, NULL); - err = ip_route_input(skb, nexthop, iph->saddr, iph->tos, dev); + err = ip_route_input(skb, nexthop, iph->saddr, ip4h_dscp(iph), + dev); rt2 = skb_rtable(skb); if (err || (rt2->rt_type != RTN_UNICAST && rt2->rt_type != RTN_LOCAL)) { skb_dst_drop(skb); diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c index f3324f2a40466..a82d382193e41 100644 --- a/net/ipv6/ip6_tunnel.c +++ b/net/ipv6/ip6_tunnel.c @@ -628,8 +628,8 @@ ip4ip6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, } skb_dst_set(skb2, &rt->dst); } else { - if (ip_route_input(skb2, eiph->daddr, eiph->saddr, eiph->tos, - skb2->dev) || + if (ip_route_input(skb2, eiph->daddr, eiph->saddr, + ip4h_dscp(eiph), skb2->dev) || skb_dst(skb2)->dev->type != ARPHRD_TUNNEL6) goto out; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Guillaume Nault gnault@redhat.com
[ Upstream commit 0c30d6eedd1ec0c1382bcab9576d26413cd278a3 ]
Use ip4h_dscp() to get the DSCP from the IPv4 header, then convert the dscp_t value to __u8 with inet_dscp_to_dsfield().
Then, when we convert .flowi4_tos to dscp_t, we'll just have to drop the inet_dscp_to_dsfield() call.
Signed-off-by: Guillaume Nault gnault@redhat.com Reviewed-by: Ido Schimmel idosch@nvidia.com Link: https://patch.msgid.link/f48335504a05b3587e0081a9b4511e0761571ca5.1730292157... Signed-off-by: Jakub Kicinski kuba@kernel.org Stable-dep-of: 27843ce6ba3d ("ipvlan: ensure network headers are in skb linear part") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ipvlan/ipvlan_core.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c index d22a705ac4d6f..38eb40cba5aac 100644 --- a/drivers/net/ipvlan/ipvlan_core.c +++ b/drivers/net/ipvlan/ipvlan_core.c @@ -3,6 +3,7 @@ */
#include <net/inet_dscp.h> +#include <net/ip.h>
#include "ipvlan.h"
@@ -422,7 +423,7 @@ static noinline_for_stack int ipvlan_process_v4_outbound(struct sk_buff *skb) int err, ret = NET_XMIT_DROP; struct flowi4 fl4 = { .flowi4_oif = dev->ifindex, - .flowi4_tos = ip4h->tos & INET_DSCP_MASK, + .flowi4_tos = inet_dscp_to_dsfield(ip4h_dscp(ip4h)), .flowi4_flags = FLOWI_FLAG_ANYSRC, .flowi4_mark = skb->mark, .daddr = ip4h->daddr,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet edumazet@google.com
[ Upstream commit 27843ce6ba3d3122b65066550fe33fb8839f8aef ]
syzbot found that ipvlan_process_v6_outbound() was assuming the IPv6 network header is present in skb->head [1]
Add the needed pskb_network_may_pull() calls for both IPv4 and IPv6 handlers.
[1]
BUG: KMSAN: uninit-value in __ipv6_addr_type+0xa2/0x490 net/ipv6/addrconf_core.c:47
 __ipv6_addr_type+0xa2/0x490 net/ipv6/addrconf_core.c:47
 ipv6_addr_type include/net/ipv6.h:555 [inline]
 ip6_route_output_flags_noref net/ipv6/route.c:2616 [inline]
 ip6_route_output_flags+0x51/0x720 net/ipv6/route.c:2651
 ip6_route_output include/net/ip6_route.h:93 [inline]
 ipvlan_route_v6_outbound+0x24e/0x520 drivers/net/ipvlan/ipvlan_core.c:476
 ipvlan_process_v6_outbound drivers/net/ipvlan/ipvlan_core.c:491 [inline]
 ipvlan_process_outbound drivers/net/ipvlan/ipvlan_core.c:541 [inline]
 ipvlan_xmit_mode_l3 drivers/net/ipvlan/ipvlan_core.c:605 [inline]
 ipvlan_queue_xmit+0xd72/0x1780 drivers/net/ipvlan/ipvlan_core.c:671
 ipvlan_start_xmit+0x5b/0x210 drivers/net/ipvlan/ipvlan_main.c:223
 __netdev_start_xmit include/linux/netdevice.h:5150 [inline]
 netdev_start_xmit include/linux/netdevice.h:5159 [inline]
 xmit_one net/core/dev.c:3735 [inline]
 dev_hard_start_xmit+0x247/0xa20 net/core/dev.c:3751
 sch_direct_xmit+0x399/0xd40 net/sched/sch_generic.c:343
 qdisc_restart net/sched/sch_generic.c:408 [inline]
 __qdisc_run+0x14da/0x35d0 net/sched/sch_generic.c:416
 qdisc_run+0x141/0x4d0 include/net/pkt_sched.h:127
 net_tx_action+0x78b/0x940 net/core/dev.c:5484
 handle_softirqs+0x1a0/0x7c0 kernel/softirq.c:561
 __do_softirq+0x14/0x1a kernel/softirq.c:595
 do_softirq+0x9a/0x100 kernel/softirq.c:462
 __local_bh_enable_ip+0x9f/0xb0 kernel/softirq.c:389
 local_bh_enable include/linux/bottom_half.h:33 [inline]
 rcu_read_unlock_bh include/linux/rcupdate.h:919 [inline]
 __dev_queue_xmit+0x2758/0x57d0 net/core/dev.c:4611
 dev_queue_xmit include/linux/netdevice.h:3311 [inline]
 packet_xmit+0x9c/0x6c0 net/packet/af_packet.c:276
 packet_snd net/packet/af_packet.c:3132 [inline]
 packet_sendmsg+0x93e0/0xa7e0 net/packet/af_packet.c:3164
 sock_sendmsg_nosec net/socket.c:718 [inline]
Fixes: 2ad7bf363841 ("ipvlan: Initial check-in of the IPVLAN driver.") Reported-by: syzbot+93ab4a777bafb9d9f960@syzkaller.appspotmail.com Closes: https://lore.kernel.org/netdev/67b74f01.050a0220.14d86d.02d8.GAE@google.com/... Signed-off-by: Eric Dumazet edumazet@google.com Cc: Mahesh Bandewar maheshb@google.com Link: https://patch.msgid.link/20250220155336.61884-1-edumazet@google.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ipvlan/ipvlan_core.c | 21 ++++++++++++++++----- 1 file changed, 16 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c index 38eb40cba5aac..eea81a7334052 100644 --- a/drivers/net/ipvlan/ipvlan_core.c +++ b/drivers/net/ipvlan/ipvlan_core.c @@ -416,20 +416,25 @@ struct ipvl_addr *ipvlan_addr_lookup(struct ipvl_port *port, void *lyr3h,
static noinline_for_stack int ipvlan_process_v4_outbound(struct sk_buff *skb) { - const struct iphdr *ip4h = ip_hdr(skb); struct net_device *dev = skb->dev; struct net *net = dev_net(dev); - struct rtable *rt; int err, ret = NET_XMIT_DROP; + const struct iphdr *ip4h; + struct rtable *rt; struct flowi4 fl4 = { .flowi4_oif = dev->ifindex, - .flowi4_tos = inet_dscp_to_dsfield(ip4h_dscp(ip4h)), .flowi4_flags = FLOWI_FLAG_ANYSRC, .flowi4_mark = skb->mark, - .daddr = ip4h->daddr, - .saddr = ip4h->saddr, };
+ if (!pskb_network_may_pull(skb, sizeof(struct iphdr))) + goto err; + + ip4h = ip_hdr(skb); + fl4.daddr = ip4h->daddr; + fl4.saddr = ip4h->saddr; + fl4.flowi4_tos = inet_dscp_to_dsfield(ip4h_dscp(ip4h)); + rt = ip_route_output_flow(net, &fl4, NULL); if (IS_ERR(rt)) goto err; @@ -488,6 +493,12 @@ static int ipvlan_process_v6_outbound(struct sk_buff *skb) struct net_device *dev = skb->dev; int err, ret = NET_XMIT_DROP;
+ if (!pskb_network_may_pull(skb, sizeof(struct ipv6hdr))) { + DEV_STATS_INC(dev, tx_errors); + kfree_skb(skb); + return ret; + } + err = ipvlan_route_v6_outbound(dev, skb); if (unlikely(err)) { DEV_STATS_INC(dev, tx_errors);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sean Anderson sean.anderson@linux.dev
[ Upstream commit fa52f15c745ce55261b92873676f64f7348cfe82 ]
Stats calculations involve a RMW to add the stat update to the existing value. This is currently not protected by any synchronization mechanism, so data races are possible. Add a spinlock to protect the update. The reader side could be protected using u64_stats, but we would still need a spinlock for the update side anyway. And we always do an update immediately before reading the stats anyway.
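A hedged sketch of the resulting pattern (hypothetical stand-in struct, not the actual driver code) — the lock serializes the read-modify-write so no update can be lost:

    #include <linux/spinlock.h>

    struct macb_like {
            spinlock_t stats_lock;          /* protects the counters below */
            unsigned long rx_overruns;
    };

    static void record_rx_overrun(struct macb_like *bp)
    {
            spin_lock(&bp->stats_lock);     /* serialize the RMW */
            bp->rx_overruns++;
            spin_unlock(&bp->stats_lock);
    }

Without the lock, two contexts can both read N and both write back N+1, silently losing one increment.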
Fixes: 89e5785fc8a6 ("[PATCH] Atmel MACB ethernet driver") Signed-off-by: Sean Anderson sean.anderson@linux.dev Link: https://patch.msgid.link/20250220162950.95941-1-sean.anderson@linux.dev Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/cadence/macb.h | 2 ++ drivers/net/ethernet/cadence/macb_main.c | 12 ++++++++++-- 2 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h index 1aa578c1ca4ad..8d66de71ea604 100644 --- a/drivers/net/ethernet/cadence/macb.h +++ b/drivers/net/ethernet/cadence/macb.h @@ -1271,6 +1271,8 @@ struct macb { struct clk *rx_clk; struct clk *tsu_clk; struct net_device *dev; + /* Protects hw_stats and ethtool_stats */ + spinlock_t stats_lock; union { struct macb_stats macb; struct gem_stats gem; diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c index d44d53d697620..fc3342944dbcc 100644 --- a/drivers/net/ethernet/cadence/macb_main.c +++ b/drivers/net/ethernet/cadence/macb_main.c @@ -1936,10 +1936,12 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
if (status & MACB_BIT(ISR_ROVR)) { /* We missed at least one packet */ + spin_lock(&bp->stats_lock); if (macb_is_gem(bp)) bp->hw_stats.gem.rx_overruns++; else bp->hw_stats.macb.rx_overruns++; + spin_unlock(&bp->stats_lock);
if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE) queue_writel(queue, ISR, MACB_BIT(ISR_ROVR)); @@ -2999,6 +3001,7 @@ static struct net_device_stats *gem_get_stats(struct macb *bp) if (!netif_running(bp->dev)) return nstat;
+ spin_lock_irq(&bp->stats_lock); gem_update_stats(bp);
nstat->rx_errors = (hwstat->rx_frame_check_sequence_errors + @@ -3028,6 +3031,7 @@ static struct net_device_stats *gem_get_stats(struct macb *bp) nstat->tx_aborted_errors = hwstat->tx_excessive_collisions; nstat->tx_carrier_errors = hwstat->tx_carrier_sense_errors; nstat->tx_fifo_errors = hwstat->tx_underrun; + spin_unlock_irq(&bp->stats_lock);
return nstat; } @@ -3035,12 +3039,13 @@ static struct net_device_stats *gem_get_stats(struct macb *bp) static void gem_get_ethtool_stats(struct net_device *dev, struct ethtool_stats *stats, u64 *data) { - struct macb *bp; + struct macb *bp = netdev_priv(dev);
- bp = netdev_priv(dev); + spin_lock_irq(&bp->stats_lock); gem_update_stats(bp); memcpy(data, &bp->ethtool_stats, sizeof(u64) * (GEM_STATS_LEN + QUEUE_STATS_LEN * MACB_MAX_QUEUES)); + spin_unlock_irq(&bp->stats_lock); }
static int gem_get_sset_count(struct net_device *dev, int sset) @@ -3090,6 +3095,7 @@ static struct net_device_stats *macb_get_stats(struct net_device *dev) return gem_get_stats(bp);
/* read stats from hardware */ + spin_lock_irq(&bp->stats_lock); macb_update_stats(bp);
/* Convert HW stats into netdevice stats */ @@ -3123,6 +3129,7 @@ static struct net_device_stats *macb_get_stats(struct net_device *dev) nstat->tx_carrier_errors = hwstat->tx_carrier_errors; nstat->tx_fifo_errors = hwstat->tx_underruns; /* Don't know about heartbeat or window errors... */ + spin_unlock_irq(&bp->stats_lock);
return nstat; } @@ -4949,6 +4956,7 @@ static int macb_probe(struct platform_device *pdev) bp->usrio = macb_config->usrio;
spin_lock_init(&bp->lock); + spin_lock_init(&bp->stats_lock);
/* setup capabilities */ macb_configure_caps(bp, macb_config);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicolas Frattaroli nicolas.frattaroli@collabora.com
[ Upstream commit 5b0c02f9b8acf2a791e531bbc09acae2d51f4f9b ]
The ES8328 codec driver, which is also used for the ES8388 chip that appears to have an identical register map, claims that the output can either take the route from DAC->Mixer->Output or through DAC->Output directly. To the best of what I could find, this is not true, and creates problems.
Without DACCONTROL17 bit index 7 set for the left channel, as well as DACCONTROL20 bit index 7 set for the right channel, I cannot get any analog audio out on Left Out 2 and Right Out 2 respectively, despite the DAPM routes claiming that this should be possible. Furthermore, the same is the case for Left Out 1 and Right Out 1, showing that those two don't have a direct route from DAC to output bypassing the mixer either.
Those control bits toggle whether the DACs are fed (stale bread?) into their respective mixers. If one "unmutes" the mixer controls in alsamixer, then sure, the audio output works, but if it doesn't work without the mixer being fed the DAC input then evidently it's not a direct output from the DAC.
ES8328/ES8388 are seemingly not alone in this. ES8323, which uses a separate driver for what appears to be a very similar register map, simply flips those two bits on in its probe function, and then pretends there is no power management whatsoever for the individual controls. Fair enough.
My theory as to why nobody has noticed this up to this point is that everyone just assumes it's their fault when they had to unmute an additional control in ALSA.
Fix this in the es8328 driver by removing the erroneous direct route, then get rid of the playback switch controls and have those bits tied to the mixer's widget instead, which until now had no register to play with.
Fixes: 567e4f98922c ("ASoC: add es8328 codec driver") Signed-off-by: Nicolas Frattaroli nicolas.frattaroli@collabora.com Link: https://patch.msgid.link/20250222-es8328-route-bludgeoning-v1-1-99bfb7fb22d9... Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/codecs/es8328.c | 15 ++++----------- 1 file changed, 4 insertions(+), 11 deletions(-)
diff --git a/sound/soc/codecs/es8328.c b/sound/soc/codecs/es8328.c index 160adc706cc69..8182e9b37c03d 100644 --- a/sound/soc/codecs/es8328.c +++ b/sound/soc/codecs/es8328.c @@ -234,7 +234,6 @@ static const struct snd_kcontrol_new es8328_right_line_controls =
/* Left Mixer */ static const struct snd_kcontrol_new es8328_left_mixer_controls[] = { - SOC_DAPM_SINGLE("Playback Switch", ES8328_DACCONTROL17, 7, 1, 0), SOC_DAPM_SINGLE("Left Bypass Switch", ES8328_DACCONTROL17, 6, 1, 0), SOC_DAPM_SINGLE("Right Playback Switch", ES8328_DACCONTROL18, 7, 1, 0), SOC_DAPM_SINGLE("Right Bypass Switch", ES8328_DACCONTROL18, 6, 1, 0), @@ -244,7 +243,6 @@ static const struct snd_kcontrol_new es8328_left_mixer_controls[] = { static const struct snd_kcontrol_new es8328_right_mixer_controls[] = { SOC_DAPM_SINGLE("Left Playback Switch", ES8328_DACCONTROL19, 7, 1, 0), SOC_DAPM_SINGLE("Left Bypass Switch", ES8328_DACCONTROL19, 6, 1, 0), - SOC_DAPM_SINGLE("Playback Switch", ES8328_DACCONTROL20, 7, 1, 0), SOC_DAPM_SINGLE("Right Bypass Switch", ES8328_DACCONTROL20, 6, 1, 0), };
@@ -337,10 +335,10 @@ static const struct snd_soc_dapm_widget es8328_dapm_widgets[] = { SND_SOC_DAPM_DAC("Left DAC", "Left Playback", ES8328_DACPOWER, ES8328_DACPOWER_LDAC_OFF, 1),
- SND_SOC_DAPM_MIXER("Left Mixer", SND_SOC_NOPM, 0, 0, + SND_SOC_DAPM_MIXER("Left Mixer", ES8328_DACCONTROL17, 7, 0, &es8328_left_mixer_controls[0], ARRAY_SIZE(es8328_left_mixer_controls)), - SND_SOC_DAPM_MIXER("Right Mixer", SND_SOC_NOPM, 0, 0, + SND_SOC_DAPM_MIXER("Right Mixer", ES8328_DACCONTROL20, 7, 0, &es8328_right_mixer_controls[0], ARRAY_SIZE(es8328_right_mixer_controls)),
@@ -419,19 +417,14 @@ static const struct snd_soc_dapm_route es8328_dapm_routes[] = { { "Right Line Mux", "PGA", "Right PGA Mux" }, { "Right Line Mux", "Differential", "Differential Mux" },
- { "Left Out 1", NULL, "Left DAC" }, - { "Right Out 1", NULL, "Right DAC" }, - { "Left Out 2", NULL, "Left DAC" }, - { "Right Out 2", NULL, "Right DAC" }, - - { "Left Mixer", "Playback Switch", "Left DAC" }, + { "Left Mixer", NULL, "Left DAC" }, { "Left Mixer", "Left Bypass Switch", "Left Line Mux" }, { "Left Mixer", "Right Playback Switch", "Right DAC" }, { "Left Mixer", "Right Bypass Switch", "Right Line Mux" },
{ "Right Mixer", "Left Playback Switch", "Left DAC" }, { "Right Mixer", "Left Bypass Switch", "Left Line Mux" }, - { "Right Mixer", "Playback Switch", "Right DAC" }, + { "Right Mixer", NULL, "Right DAC" }, { "Right Mixer", "Right Bypass Switch", "Right Line Mux" },
{ "DAC DIG", NULL, "DAC STM" },
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Philo Lu lulie@linux.alibaba.com
[ Upstream commit de2c211868b9424f9aa9b3432c4430825bafb41b ]
We found an issue when using bpf_redirect with ipvs NAT mode after commit ff70202b2d1a ("dev_forward_skb: do not scrub skb mark within the same name space"). Particularly, we use bpf_redirect to return the skb directly back to the netif it comes from, i.e., xnet is false in skb_scrub_packet(), and then ipvs_property is preserved and SNAT is skipped in the rx path.
ipvs_property is already cleared when the netns is changed, as of commit 2b5ec1a5f973 ("netfilter/ipvs: clear ipvs_property flag when SKB net namespace changed"). This patch just clears it regardless of netns.
Fixes: 2b5ec1a5f973 ("netfilter/ipvs: clear ipvs_property flag when SKB net namespace changed") Signed-off-by: Philo Lu lulie@linux.alibaba.com Acked-by: Julian Anastasov ja@ssi.bg Link: https://patch.msgid.link/20250222033518.126087-1-lulie@linux.alibaba.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/skbuff.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 768b8d65a5baa..d8a3ada886ffb 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -5556,11 +5556,11 @@ void skb_scrub_packet(struct sk_buff *skb, bool xnet) skb->offload_fwd_mark = 0; skb->offload_l3_fwd_mark = 0; #endif + ipvs_reset(skb);
if (!xnet) return;
- ipvs_reset(skb); skb->mark = 0; skb_clear_tstamp(skb); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wang Hai wanghai38@huawei.com
[ Upstream commit 8d52da23b6c68a0f6bad83959ebb61a2cf623c4e ]
Recently a bug was discovered where the server had entered TCP_ESTABLISHED state, but the upper layers were not notified.
The same 5-tuple packet may be processed by different CPUs, so two CPUs may receive different ack packets at the same time when the state is TCP_NEW_SYN_RECV.
In that case, req->ts_recent in tcp_check_req may be changed concurrently, which will probably cause the newsk's ts_recent to be incorrectly large, so that tcp_validate_incoming will fail. At this point, newsk will not be able to enter the TCP_ESTABLISHED state.
cpu1                                       cpu2
tcp_check_req
                                           tcp_check_req
  req->ts_recent = rcv_tsval = t1
                                             req->ts_recent = rcv_tsval = t2
syn_recv_sock
  tcp_sk(child)->rx_opt.ts_recent = req->ts_recent = t2 // t1 < t2
tcp_child_process
  tcp_rcv_state_process
    tcp_validate_incoming
      tcp_paws_check
        if ((s32)(rx_opt->ts_recent - rx_opt->rcv_tsval) <= paws_win)
          // t2 - t1 > paws_win, failed
                                           tcp_v4_do_rcv
                                             tcp_rcv_state_process
                                               // TCP_ESTABLISHED
The cpu2's skb, or a newly received skb, will call tcp_v4_do_rcv to get the newsk into the TCP_ESTABLISHED state, but at this point it is no longer possible to notify the upper layer application. A notification mechanism could be added here, but that fix would be more complex, so the current fix is used instead.
In tcp_check_req, req->ts_recent is used to assign a value to tcp_sk(child)->rx_opt.ts_recent, so removing the change in req->ts_recent and changing tcp_sk(child)->rx_opt.ts_recent directly after owning the req fixes this bug.
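To make the failure mode concrete, a standalone toy reproduction of the quoted PAWS comparison (made-up values, not kernel code):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            uint32_t ts_recent = 5000;      /* t2, inherited by the child */
            uint32_t rcv_tsval = 1000;      /* t1, carried by the next segment */
            int32_t paws_win = 1;

            /* mirrors (s32)(rx_opt->ts_recent - rx_opt->rcv_tsval) <= paws_win */
            if ((int32_t)(ts_recent - rcv_tsval) <= paws_win)
                    printf("PAWS check passes\n");
            else
                    printf("PAWS check fails, segment dropped\n");
            return 0;
    }

With t2 - t1 = 4000 > paws_win, the check fails and the segment is discarded, matching the diagram above.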
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Wang Hai wanghai38@huawei.com Reviewed-by: Jason Xing kerneljasonxing@gmail.com Reviewed-by: Eric Dumazet edumazet@google.com Reviewed-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_minisocks.c | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c index c562cb965e742..bc94df0140bfd 100644 --- a/net/ipv4/tcp_minisocks.c +++ b/net/ipv4/tcp_minisocks.c @@ -735,12 +735,6 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
/* In sequence, PAWS is OK. */
- /* TODO: We probably should defer ts_recent change once - * we take ownership of @req. - */ - if (tmp_opt.saw_tstamp && !after(TCP_SKB_CB(skb)->seq, tcp_rsk(req)->rcv_nxt)) - WRITE_ONCE(req->ts_recent, tmp_opt.rcv_tsval); - if (TCP_SKB_CB(skb)->seq == tcp_rsk(req)->rcv_isn) { /* Truncate SYN, it is out of window starting at tcp_rsk(req)->rcv_isn + 1. */ @@ -789,6 +783,10 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb, if (!child) goto listen_overflow;
+ if (own_req && tmp_opt.saw_tstamp && + !after(TCP_SKB_CB(skb)->seq, tcp_rsk(req)->rcv_nxt)) + tcp_sk(child)->rx_opt.ts_recent = tmp_opt.rcv_tsval; + if (own_req && rsk_drop_req(req)) { reqsk_queue_removed(&inet_csk(req->rsk_listener)->icsk_accept_queue, req); inet_csk_reqsk_queue_drop_and_put(req->rsk_listener, req);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mohammad Heib mheib@redhat.com
[ Upstream commit 49806fe6e61b045b5be8610e08b5a3083c109aa0 ]
In certain cases, napi_get_frags() returns an skb that points to an old received fragment. This skb may have its skb->ip_summed, csum, and other fields set from previous fragment handling.
Some network drivers set skb->ip_summed to either CHECKSUM_COMPLETE or CHECKSUM_UNNECESSARY when getting the skb from napi_get_frags(), while others only set skb->ip_summed when RX checksum offload is enabled on the device, and do not set any value for skb->ip_summed when hardware checksum offload is disabled, assuming that skb->ip_summed was initialized to zero by napi_reuse_skb(). The ionic driver, for example, will ignore/unset any value of the ip_summed field if HW checksum offload is disabled. If the user disables checksum offload while traffic is flowing, that can lead to the following errors showing up in the kernel logs:

<IRQ>
 dump_stack_lvl+0x34/0x48
 __skb_gro_checksum_complete+0x7e/0x90
 tcp6_gro_receive+0xc6/0x190
 ipv6_gro_receive+0x1ec/0x430
 dev_gro_receive+0x188/0x360
 ? ionic_rx_clean+0x25a/0x460 [ionic]
 napi_gro_frags+0x13c/0x300
 ? __pfx_ionic_rx_service+0x10/0x10 [ionic]
 ionic_rx_service+0x67/0x80 [ionic]
 ionic_cq_service+0x58/0x90 [ionic]
 ionic_txrx_napi+0x64/0x1b0 [ionic]
 __napi_poll+0x27/0x170
 net_rx_action+0x29c/0x370
 handle_softirqs+0xce/0x270
 __irq_exit_rcu+0xa3/0xc0
 common_interrupt+0x80/0xa0
</IRQ>
This inconsistency sometimes leads to checksum validation issues in the upper layers of the network stack.
To resolve this, this patch clears the skb->ip_summed value for each skb reused by napi_reuse_skb(), making the caller responsible for setting the correct checksum status. This eliminates potential checksum validation issues caused by improper handling of skb->ip_summed.
Fixes: 76620aafd66f ("gro: New frags interface to avoid copying shinfo") Signed-off-by: Mohammad Heib mheib@redhat.com Reviewed-by: Shannon Nelson shannon.nelson@amd.com Reviewed-by: Eric Dumazet edumazet@google.com Link: https://patch.msgid.link/20250225112852.2507709-1-mheib@redhat.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/gro.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/net/core/gro.c b/net/core/gro.c index 47118e97ecfdd..c4cbf398c5f78 100644 --- a/net/core/gro.c +++ b/net/core/gro.c @@ -679,6 +679,7 @@ static void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb) skb->pkt_type = PACKET_HOST;
skb->encapsulation = 0; + skb->ip_summed = CHECKSUM_NONE; skb_shinfo(skb)->gso_type = 0; skb_shinfo(skb)->gso_size = 0; if (unlikely(skb->slow_gro)) {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Harshal Chaudhari hchaudhari@marvell.com
[ Upstream commit 2d253726ff7106b39a44483b6864398bba8a2f74 ]
A non-IP flow with a vlan tag was not working as expected while running the below command for vlan-priority. Fix that.
ethtool -N eth1 flow-type ether vlan 0x8000 vlan-mask 0x1fff action 0 loc 0
Fixes: 1274daede3ef ("net: mvpp2: cls: Add steering based on vlan Id and priority.") Signed-off-by: Harshal Chaudhari hchaudhari@marvell.com Reviewed-by: Maxime Chevallier maxime.chevallier@bootlin.com Link: https://patch.msgid.link/20250225042058.2643838-1-hchaudhari@marvell.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c index 40aeaa7bd739f..d2757cc116139 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c @@ -324,7 +324,7 @@ static const struct mvpp2_cls_flow cls_flows[MVPP2_N_PRS_FLOWS] = { MVPP2_PRS_RI_VLAN_MASK), /* Non IP flow, with vlan tag */ MVPP2_DEF_FLOW(MVPP22_FLOW_ETHERNET, MVPP2_FL_NON_IP_TAG, - MVPP22_CLS_HEK_OPT_VLAN, + MVPP22_CLS_HEK_TAGGED, 0, 0), };
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Shay Drory shayd@nvidia.com
[ Upstream commit 2f5a6014eb168a97b24153adccfa663d3b282767 ]
irq_pool_alloc() debug print can print a null string. Fix it by providing a default string to print.
Fixes: 71e084e26414 ("net/mlx5: Allocating a pool of MSI-X vectors for SFs") Signed-off-by: Shay Drory shayd@nvidia.com Reported-by: kernel test robot lkp@intel.com Closes: https://lore.kernel.org/oe-kbuild-all/202501141055.SwfIphN0-lkp@intel.com/ Reviewed-by: Moshe Shemesh moshe@nvidia.com Signed-off-by: Tariq Toukan tariqt@nvidia.com Reviewed-by: Kalesh AP kalesh-anakkur.purayil@broadcom.com Link: https://patch.msgid.link/20250225072608.526866-4-tariqt@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c index a6d3fc96e1685..10b9dc2aaf06f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c @@ -513,7 +513,7 @@ irq_pool_alloc(struct mlx5_core_dev *dev, int start, int size, char *name, pool->min_threshold = min_threshold * MLX5_EQ_REFS_PER_IRQ; pool->max_threshold = max_threshold * MLX5_EQ_REFS_PER_IRQ; mlx5_core_dbg(dev, "pool->name = %s, pool->size = %d, pool->start = %d", - name, size, start); + name ? name : "mlx5_pcif_pool", size, start); return pool; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Justin Iurman justin.iurman@uliege.be
[ Upstream commit 0600cf40e9b36fe17f9c9f04d4f9cef249eaa5e7 ]
Add static inline dst_dev_overhead() function to include/net/dst.h. This helper function is used by ioam6_iptunnel, rpl_iptunnel and seg6_iptunnel to get the dev's overhead based on a cache entry (dst_entry). If the cache is empty, the default and generic value skb->mac_len is returned. Otherwise, LL_RESERVED_SPACE() over dst's dev is returned.
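A minimal usage sketch of the intended call pattern (illustrative only; the local variable names are hypothetical, and the real call sites are in the tunnel patches that follow):

    struct dst_entry *dst = dst_cache_get(&cache);  /* may be NULL on first use */
    int err;

    /* size the headroom against the cached route's device, falling back
     * to skb->mac_len while the cache is still empty
     */
    err = skb_cow_head(skb, hdrlen + dst_dev_overhead(dst, skb));
    if (unlikely(err))
            return err;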
Signed-off-by: Justin Iurman justin.iurman@uliege.be Cc: Alexander Lobakin aleksander.lobakin@intel.com Cc: Vadim Fedorenko vadim.fedorenko@linux.dev Signed-off-by: Paolo Abeni pabeni@redhat.com Stable-dep-of: c64a0727f9b1 ("net: ipv6: fix dst ref loop on input in seg6 lwt") Signed-off-by: Sasha Levin sashal@kernel.org --- include/net/dst.h | 9 +++++++++ 1 file changed, 9 insertions(+)
diff --git a/include/net/dst.h b/include/net/dst.h index d67fda89cd0fa..3a1a6f94a8092 100644 --- a/include/net/dst.h +++ b/include/net/dst.h @@ -434,6 +434,15 @@ static inline void dst_set_expires(struct dst_entry *dst, int timeout) dst->expires = expires; }
+static inline unsigned int dst_dev_overhead(struct dst_entry *dst, + struct sk_buff *skb) +{ + if (likely(dst)) + return LL_RESERVED_SPACE(dst->dev); + + return skb->mac_len; +} + INDIRECT_CALLABLE_DECLARE(int ip6_output(struct net *, struct sock *, struct sk_buff *)); INDIRECT_CALLABLE_DECLARE(int ip_output(struct net *, struct sock *,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Justin Iurman justin.iurman@uliege.be
[ Upstream commit 40475b63761abb6f8fdef960d03228a08662c9c4 ]
This patch mitigates the two-reallocations issue with seg6_iptunnel by providing the dst_entry (in the cache) to the first call to skb_cow_head(). As a result, the very first iteration would still trigger two reallocations (i.e., empty cache), while next iterations would only trigger a single reallocation.
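Condensed from the diff below, the resulting order of operations on the input path looks roughly like this (error handling elided, comments mine):

    dst = dst_cache_get(&slwt->cache);  /* NULL only while the cache is cold */
    err = seg6_do_srh(skb, dst);        /* skb_cow_head() sized via dst_dev_overhead() */

    if (!dst) {
            ip6_route_input(skb);       /* cache miss: resolve and cache the route */
            dst = skb_dst(skb);
            err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); /* the second reallocation */
    }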
Performance tests before/after applying this patch, which clearly show the improvement:
- before: https://ibb.co/3Cg4sNH
- after: https://ibb.co/8rQ350r
Signed-off-by: Justin Iurman justin.iurman@uliege.be Cc: David Lebrun dlebrun@google.com Signed-off-by: Paolo Abeni pabeni@redhat.com Stable-dep-of: c64a0727f9b1 ("net: ipv6: fix dst ref loop on input in seg6 lwt") Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv6/seg6_iptunnel.c | 85 ++++++++++++++++++++++++---------------- 1 file changed, 52 insertions(+), 33 deletions(-)
diff --git a/net/ipv6/seg6_iptunnel.c b/net/ipv6/seg6_iptunnel.c index ae5299c277bcf..c161298c8b335 100644 --- a/net/ipv6/seg6_iptunnel.c +++ b/net/ipv6/seg6_iptunnel.c @@ -124,8 +124,8 @@ static __be32 seg6_make_flowlabel(struct net *net, struct sk_buff *skb, return flowlabel; }
-/* encapsulate an IPv6 packet within an outer IPv6 header with a given SRH */ -int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto) +static int __seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, + int proto, struct dst_entry *cache_dst) { struct dst_entry *dst = skb_dst(skb); struct net *net = dev_net(dst->dev); @@ -137,7 +137,7 @@ int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto) hdrlen = (osrh->hdrlen + 1) << 3; tot_len = hdrlen + sizeof(*hdr);
- err = skb_cow_head(skb, tot_len + skb->mac_len); + err = skb_cow_head(skb, tot_len + dst_dev_overhead(cache_dst, skb)); if (unlikely(err)) return err;
@@ -197,11 +197,18 @@ int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto)
return 0; } + +/* encapsulate an IPv6 packet within an outer IPv6 header with a given SRH */ +int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto) +{ + return __seg6_do_srh_encap(skb, osrh, proto, NULL); +} EXPORT_SYMBOL_GPL(seg6_do_srh_encap);
/* encapsulate an IPv6 packet within an outer IPv6 header with reduced SRH */ static int seg6_do_srh_encap_red(struct sk_buff *skb, - struct ipv6_sr_hdr *osrh, int proto) + struct ipv6_sr_hdr *osrh, int proto, + struct dst_entry *cache_dst) { __u8 first_seg = osrh->first_segment; struct dst_entry *dst = skb_dst(skb); @@ -230,7 +237,7 @@ static int seg6_do_srh_encap_red(struct sk_buff *skb,
tot_len = red_hdrlen + sizeof(struct ipv6hdr);
- err = skb_cow_head(skb, tot_len + skb->mac_len); + err = skb_cow_head(skb, tot_len + dst_dev_overhead(cache_dst, skb)); if (unlikely(err)) return err;
@@ -317,8 +324,8 @@ static int seg6_do_srh_encap_red(struct sk_buff *skb, return 0; }
-/* insert an SRH within an IPv6 packet, just after the IPv6 header */ -int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh) +static int __seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, + struct dst_entry *cache_dst) { struct ipv6hdr *hdr, *oldhdr; struct ipv6_sr_hdr *isrh; @@ -326,7 +333,7 @@ int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh)
hdrlen = (osrh->hdrlen + 1) << 3;
- err = skb_cow_head(skb, hdrlen + skb->mac_len); + err = skb_cow_head(skb, hdrlen + dst_dev_overhead(cache_dst, skb)); if (unlikely(err)) return err;
@@ -369,9 +376,8 @@ int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh)
return 0; } -EXPORT_SYMBOL_GPL(seg6_do_srh_inline);
-static int seg6_do_srh(struct sk_buff *skb) +static int seg6_do_srh(struct sk_buff *skb, struct dst_entry *cache_dst) { struct dst_entry *dst = skb_dst(skb); struct seg6_iptunnel_encap *tinfo; @@ -384,7 +390,7 @@ static int seg6_do_srh(struct sk_buff *skb) if (skb->protocol != htons(ETH_P_IPV6)) return -EINVAL;
- err = seg6_do_srh_inline(skb, tinfo->srh); + err = __seg6_do_srh_inline(skb, tinfo->srh, cache_dst); if (err) return err; break; @@ -402,9 +408,11 @@ static int seg6_do_srh(struct sk_buff *skb) return -EINVAL;
if (tinfo->mode == SEG6_IPTUN_MODE_ENCAP) - err = seg6_do_srh_encap(skb, tinfo->srh, proto); + err = __seg6_do_srh_encap(skb, tinfo->srh, + proto, cache_dst); else - err = seg6_do_srh_encap_red(skb, tinfo->srh, proto); + err = seg6_do_srh_encap_red(skb, tinfo->srh, + proto, cache_dst);
if (err) return err; @@ -425,11 +433,13 @@ static int seg6_do_srh(struct sk_buff *skb) skb_push(skb, skb->mac_len);
if (tinfo->mode == SEG6_IPTUN_MODE_L2ENCAP) - err = seg6_do_srh_encap(skb, tinfo->srh, - IPPROTO_ETHERNET); + err = __seg6_do_srh_encap(skb, tinfo->srh, + IPPROTO_ETHERNET, + cache_dst); else err = seg6_do_srh_encap_red(skb, tinfo->srh, - IPPROTO_ETHERNET); + IPPROTO_ETHERNET, + cache_dst);
if (err) return err; @@ -444,6 +454,13 @@ static int seg6_do_srh(struct sk_buff *skb) return 0; }
+/* insert an SRH within an IPv6 packet, just after the IPv6 header */ +int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh) +{ + return __seg6_do_srh_inline(skb, osrh, NULL); +} +EXPORT_SYMBOL_GPL(seg6_do_srh_inline); + static int seg6_input_finish(struct net *net, struct sock *sk, struct sk_buff *skb) { @@ -458,14 +475,15 @@ static int seg6_input_core(struct net *net, struct sock *sk, struct seg6_lwt *slwt; int err;
- err = seg6_do_srh(skb); - if (unlikely(err)) - goto drop; - slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate);
local_bh_disable(); dst = dst_cache_get(&slwt->cache); + local_bh_enable(); + + err = seg6_do_srh(skb, dst); + if (unlikely(err)) + goto drop;
skb_dst_drop(skb);
@@ -473,17 +491,18 @@ static int seg6_input_core(struct net *net, struct sock *sk, ip6_route_input(skb); dst = skb_dst(skb); if (!dst->error) { + local_bh_disable(); dst_cache_set_ip6(&slwt->cache, dst, &ipv6_hdr(skb)->saddr); + local_bh_enable(); } + + err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); + if (unlikely(err)) + goto drop; } else { skb_dst_set(skb, dst); } - local_bh_enable(); - - err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); - if (unlikely(err)) - goto drop;
if (static_branch_unlikely(&nf_hooks_lwtunnel_enabled)) return NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_OUT, @@ -529,16 +548,16 @@ static int seg6_output_core(struct net *net, struct sock *sk, struct seg6_lwt *slwt; int err;
- err = seg6_do_srh(skb); - if (unlikely(err)) - goto drop; - slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate);
local_bh_disable(); dst = dst_cache_get(&slwt->cache); local_bh_enable();
+ err = seg6_do_srh(skb, dst); + if (unlikely(err)) + goto drop; + if (unlikely(!dst)) { struct ipv6hdr *hdr = ipv6_hdr(skb); struct flowi6 fl6; @@ -560,15 +579,15 @@ static int seg6_output_core(struct net *net, struct sock *sk, local_bh_disable(); dst_cache_set_ip6(&slwt->cache, dst, &fl6.saddr); local_bh_enable(); + + err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); + if (unlikely(err)) + goto drop; }
skb_dst_drop(skb); skb_dst_set(skb, dst);
- err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); - if (unlikely(err)) - goto drop; - if (static_branch_unlikely(&nf_hooks_lwtunnel_enabled)) return NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_OUT, net, sk, skb, NULL, skb_dst(skb)->dev, dst_output);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Justin Iurman justin.iurman@uliege.be
[ Upstream commit c64a0727f9b1cbc63a5538c8c0014e9a175ad864 ]
Prevent a dst ref loop on input in seg6_iptunnel: cache the dst returned by ip6_route_input() only when its lwtstate differs from the seg6 lwt's own, since caching a dst that points back at the same lwtstate would create a dst reference loop.
Fixes: af4a2209b134 ("ipv6: sr: use dst_cache in seg6_input") Cc: David Lebrun dlebrun@google.com Cc: Ido Schimmel idosch@nvidia.com Reviewed-by: Ido Schimmel idosch@nvidia.com Signed-off-by: Justin Iurman justin.iurman@uliege.be Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv6/seg6_iptunnel.c | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/net/ipv6/seg6_iptunnel.c b/net/ipv6/seg6_iptunnel.c index c161298c8b335..b186d85ec5b3f 100644 --- a/net/ipv6/seg6_iptunnel.c +++ b/net/ipv6/seg6_iptunnel.c @@ -472,10 +472,18 @@ static int seg6_input_core(struct net *net, struct sock *sk, { struct dst_entry *orig_dst = skb_dst(skb); struct dst_entry *dst = NULL; + struct lwtunnel_state *lwtst; struct seg6_lwt *slwt; int err;
- slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate); + /* We cannot dereference "orig_dst" once ip6_route_input() or + * skb_dst_drop() is called. However, in order to detect a dst loop, we + * need the address of its lwtstate. So, save the address of lwtstate + * now and use it later as a comparison. + */ + lwtst = orig_dst->lwtstate; + + slwt = seg6_lwt_lwtunnel(lwtst);
local_bh_disable(); dst = dst_cache_get(&slwt->cache); @@ -490,7 +498,9 @@ static int seg6_input_core(struct net *net, struct sock *sk, if (!dst) { ip6_route_input(skb); dst = skb_dst(skb); - if (!dst->error) { + + /* cache only if we don't create a dst reference loop */ + if (!dst->error && lwtst != dst->lwtstate) { local_bh_disable(); dst_cache_set_ip6(&slwt->cache, dst, &ipv6_hdr(skb)->saddr);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Justin Iurman justin.iurman@uliege.be
[ Upstream commit 985ec6f5e6235242191370628acb73d7a9f0c0ea ]
This patch mitigates the two-reallocations issue with rpl_iptunnel by providing the dst_entry (in the cache) to the first call to skb_cow_head(). As a result, the very first iteration would still trigger two reallocations (i.e., empty cache), while next iterations would only trigger a single reallocation.
Performance tests before/after applying this patch, which clearly show there is no impact (they even show improvement):
- before: https://ibb.co/nQJhqwc
- after: https://ibb.co/4ZvW6wV
Signed-off-by: Justin Iurman justin.iurman@uliege.be Cc: Alexander Aring aahringo@redhat.com Signed-off-by: Paolo Abeni pabeni@redhat.com Stable-dep-of: 13e55fbaec17 ("net: ipv6: fix dst ref loop on input in rpl lwt") Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv6/rpl_iptunnel.c | 46 ++++++++++++++++++++++------------------- 1 file changed, 25 insertions(+), 21 deletions(-)
diff --git a/net/ipv6/rpl_iptunnel.c b/net/ipv6/rpl_iptunnel.c index c1d0f947a7c87..69b9bd90140dd 100644 --- a/net/ipv6/rpl_iptunnel.c +++ b/net/ipv6/rpl_iptunnel.c @@ -125,7 +125,8 @@ static void rpl_destroy_state(struct lwtunnel_state *lwt) }
static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt, - const struct ipv6_rpl_sr_hdr *srh) + const struct ipv6_rpl_sr_hdr *srh, + struct dst_entry *cache_dst) { struct ipv6_rpl_sr_hdr *isrh, *csrh; const struct ipv6hdr *oldhdr; @@ -153,7 +154,7 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
hdrlen = ((csrh->hdrlen + 1) << 3);
- err = skb_cow_head(skb, hdrlen + skb->mac_len); + err = skb_cow_head(skb, hdrlen + dst_dev_overhead(cache_dst, skb)); if (unlikely(err)) { kfree(buf); return err; @@ -186,7 +187,8 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt, return 0; }
-static int rpl_do_srh(struct sk_buff *skb, const struct rpl_lwt *rlwt) +static int rpl_do_srh(struct sk_buff *skb, const struct rpl_lwt *rlwt, + struct dst_entry *cache_dst) { struct dst_entry *dst = skb_dst(skb); struct rpl_iptunnel_encap *tinfo; @@ -196,7 +198,7 @@ static int rpl_do_srh(struct sk_buff *skb, const struct rpl_lwt *rlwt)
tinfo = rpl_encap_lwtunnel(dst->lwtstate);
- return rpl_do_srh_inline(skb, rlwt, tinfo->srh); + return rpl_do_srh_inline(skb, rlwt, tinfo->srh, cache_dst); }
static int rpl_output(struct net *net, struct sock *sk, struct sk_buff *skb) @@ -208,14 +210,14 @@ static int rpl_output(struct net *net, struct sock *sk, struct sk_buff *skb)
rlwt = rpl_lwt_lwtunnel(orig_dst->lwtstate);
- err = rpl_do_srh(skb, rlwt); - if (unlikely(err)) - goto drop; - local_bh_disable(); dst = dst_cache_get(&rlwt->cache); local_bh_enable();
+ err = rpl_do_srh(skb, rlwt, dst); + if (unlikely(err)) + goto drop; + if (unlikely(!dst)) { struct ipv6hdr *hdr = ipv6_hdr(skb); struct flowi6 fl6; @@ -237,15 +239,15 @@ static int rpl_output(struct net *net, struct sock *sk, struct sk_buff *skb) local_bh_disable(); dst_cache_set_ip6(&rlwt->cache, dst, &fl6.saddr); local_bh_enable(); + + err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); + if (unlikely(err)) + goto drop; }
skb_dst_drop(skb); skb_dst_set(skb, dst);
- err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); - if (unlikely(err)) - goto drop; - return dst_output(net, sk, skb);
drop: @@ -262,12 +264,13 @@ static int rpl_input(struct sk_buff *skb)
rlwt = rpl_lwt_lwtunnel(orig_dst->lwtstate);
- err = rpl_do_srh(skb, rlwt); - if (unlikely(err)) - goto drop; - local_bh_disable(); dst = dst_cache_get(&rlwt->cache); + local_bh_enable(); + + err = rpl_do_srh(skb, rlwt, dst); + if (unlikely(err)) + goto drop;
skb_dst_drop(skb);
@@ -275,17 +278,18 @@ static int rpl_input(struct sk_buff *skb) ip6_route_input(skb); dst = skb_dst(skb); if (!dst->error) { + local_bh_disable(); dst_cache_set_ip6(&rlwt->cache, dst, &ipv6_hdr(skb)->saddr); + local_bh_enable(); } + + err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); + if (unlikely(err)) + goto drop; } else { skb_dst_set(skb, dst); } - local_bh_enable(); - - err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); - if (unlikely(err)) - goto drop;
return dst_input(skb);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Justin Iurman justin.iurman@uliege.be
[ Upstream commit 13e55fbaec176119cff68a7e1693b251c8883c5f ]
Prevent a dst ref loop on input in rpl_iptunnel.
Fixes: a7a29f9c361f ("net: ipv6: add rpl sr tunnel") Cc: Alexander Aring alex.aring@gmail.com Cc: Ido Schimmel idosch@nvidia.com Reviewed-by: Ido Schimmel idosch@nvidia.com Signed-off-by: Justin Iurman justin.iurman@uliege.be Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv6/rpl_iptunnel.c | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/net/ipv6/rpl_iptunnel.c b/net/ipv6/rpl_iptunnel.c index 69b9bd90140dd..862ac1e2e191c 100644 --- a/net/ipv6/rpl_iptunnel.c +++ b/net/ipv6/rpl_iptunnel.c @@ -259,10 +259,18 @@ static int rpl_input(struct sk_buff *skb) { struct dst_entry *orig_dst = skb_dst(skb); struct dst_entry *dst = NULL; + struct lwtunnel_state *lwtst; struct rpl_lwt *rlwt; int err;
- rlwt = rpl_lwt_lwtunnel(orig_dst->lwtstate); + /* We cannot dereference "orig_dst" once ip6_route_input() or + * skb_dst_drop() is called. However, in order to detect a dst loop, we + * need the address of its lwtstate. So, save the address of lwtstate + * now and use it later as a comparison. + */ + lwtst = orig_dst->lwtstate; + + rlwt = rpl_lwt_lwtunnel(lwtst);
local_bh_disable(); dst = dst_cache_get(&rlwt->cache); @@ -277,7 +285,9 @@ static int rpl_input(struct sk_buff *skb) if (!dst) { ip6_route_input(skb); dst = skb_dst(skb); - if (!dst->error) { + + /* cache only if we don't create a dst reference loop */ + if (!dst->error && lwtst != dst->lwtstate) { local_bh_disable(); dst_cache_set_ip6(&rlwt->cache, dst, &ipv6_hdr(skb)->saddr);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: David Howells dhowells@redhat.com
[ Upstream commit c8070b78751955e59b42457b974bea4a4fe00187 ]
Make pin_user_pages*() leave a ZERO_PAGE unpinned if it extracts a pointer to it from the page tables and make unpin_user_page*() correspondingly ignore a ZERO_PAGE when unpinning. We don't want to risk overrunning a zero page's refcount as we're only allowed ~2 million pins on it - something that userspace can conceivably trigger.
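For reference, the ~2 million figure follows from the FOLL_PIN accounting described in Documentation/core-api/pin_user_pages.rst (each pin adds a 10-bit bias to the 32-bit page refcount):

    2^31 usable refcount range / 2^10 bias per pin = 2^21 ~= 2.1 million pins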
Add a pair of functions to test whether a page or a folio is a ZERO_PAGE.
Signed-off-by: David Howells dhowells@redhat.com cc: Christoph Hellwig hch@infradead.org cc: David Hildenbrand david@redhat.com cc: Lorenzo Stoakes lstoakes@gmail.com cc: Andrew Morton akpm@linux-foundation.org cc: Jens Axboe axboe@kernel.dk cc: Al Viro viro@zeniv.linux.org.uk cc: Matthew Wilcox willy@infradead.org cc: Jan Kara jack@suse.cz cc: Jeff Layton jlayton@kernel.org cc: Jason Gunthorpe jgg@nvidia.com cc: Logan Gunthorpe logang@deltatee.com cc: Hillf Danton hdanton@sina.com cc: Christian Brauner brauner@kernel.org cc: Linus Torvalds torvalds@linux-foundation.org cc: linux-fsdevel@vger.kernel.org cc: linux-block@vger.kernel.org cc: linux-kernel@vger.kernel.org cc: linux-mm@kvack.org Reviewed-by: Lorenzo Stoakes lstoakes@gmail.com Reviewed-by: Christoph Hellwig hch@lst.de Acked-by: David Hildenbrand david@redhat.com Link: https://lore.kernel.org/r/20230526214142.958751-2-dhowells@redhat.com Signed-off-by: Jens Axboe axboe@kernel.dk Stable-dep-of: bddf10d26e6e ("uprobes: Reject the shared zeropage in uprobe_write_opcode()") Signed-off-by: Sasha Levin sashal@kernel.org --- Documentation/core-api/pin_user_pages.rst | 6 +++++ include/linux/mm.h | 26 +++++++++++++++++-- mm/gup.c | 31 ++++++++++++++++++++++- 3 files changed, 60 insertions(+), 3 deletions(-)
diff --git a/Documentation/core-api/pin_user_pages.rst b/Documentation/core-api/pin_user_pages.rst index b18416f4500fe..7995ce2b9676a 100644 --- a/Documentation/core-api/pin_user_pages.rst +++ b/Documentation/core-api/pin_user_pages.rst @@ -113,6 +113,12 @@ pages: This also leads to limitations: there are only 31-10==21 bits available for a counter that increments 10 bits at a time.
+* Because of that limitation, special handling is applied to the zero pages + when using FOLL_PIN. We only pretend to pin a zero page - we don't alter its + refcount or pincount at all (it is permanent, so there's no need). The + unpinning functions also don't do anything to a zero page. This is + transparent to the caller. + * Callers must specifically request "dma-pinned tracking of pages". In other words, just calling get_user_pages() will not suffice; a new set of functions, pin_user_page() and related, must be used. diff --git a/include/linux/mm.h b/include/linux/mm.h index 971186f0b7b07..03357c196e0ba 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1610,6 +1610,28 @@ static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma, return page_maybe_dma_pinned(page); }
+/** + * is_zero_page - Query if a page is a zero page + * @page: The page to query + * + * This returns true if @page is one of the permanent zero pages. + */ +static inline bool is_zero_page(const struct page *page) +{ + return is_zero_pfn(page_to_pfn(page)); +} + +/** + * is_zero_folio - Query if a folio is a zero page + * @folio: The folio to query + * + * This returns true if @folio is one of the permanent zero pages. + */ +static inline bool is_zero_folio(const struct folio *folio) +{ + return is_zero_page(&folio->page); +} + /* MIGRATE_CMA and ZONE_MOVABLE do not allow pin pages */ #ifdef CONFIG_MIGRATION static inline bool is_longterm_pinnable_page(struct page *page) @@ -1620,8 +1642,8 @@ static inline bool is_longterm_pinnable_page(struct page *page) if (mt == MIGRATE_CMA || mt == MIGRATE_ISOLATE) return false; #endif - /* The zero page may always be pinned */ - if (is_zero_pfn(page_to_pfn(page))) + /* The zero page can be "pinned" but gets special handling. */ + if (is_zero_page(page)) return true;
/* Coherent device memory must always allow eviction. */ diff --git a/mm/gup.c b/mm/gup.c index e31d00443c4e6..b1daaa9d89aab 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -51,7 +51,8 @@ static inline void sanity_check_pinned_pages(struct page **pages, struct page *page = *pages; struct folio *folio = page_folio(page);
- if (!folio_test_anon(folio)) + if (is_zero_page(page) || + !folio_test_anon(folio)) continue; if (!folio_test_large(folio) || folio_test_hugetlb(folio)) VM_BUG_ON_PAGE(!PageAnonExclusive(&folio->page), page); @@ -128,6 +129,13 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags) else if (flags & FOLL_PIN) { struct folio *folio;
+ /* + * Don't take a pin on the zero page - it's not going anywhere + * and it is used in a *lot* of places. + */ + if (is_zero_page(page)) + return page_folio(page); + /* * Can't do FOLL_LONGTERM + FOLL_PIN gup fast path if not in a * right zone, so fail and let the caller fall back to the slow @@ -177,6 +185,8 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags) static void gup_put_folio(struct folio *folio, int refs, unsigned int flags) { if (flags & FOLL_PIN) { + if (is_zero_folio(folio)) + return; node_stat_mod_folio(folio, NR_FOLL_PIN_RELEASED, refs); if (folio_test_large(folio)) atomic_sub(refs, folio_pincount_ptr(folio)); @@ -217,6 +227,13 @@ bool __must_check try_grab_page(struct page *page, unsigned int flags) if (flags & FOLL_GET) folio_ref_inc(folio); else if (flags & FOLL_PIN) { + /* + * Don't take a pin on the zero page - it's not going anywhere + * and it is used in a *lot* of places. + */ + if (is_zero_page(page)) + return 0; + /* * Similar to try_grab_folio(): be sure to *also* * increment the normal page refcount field at least once, @@ -3149,6 +3166,9 @@ EXPORT_SYMBOL_GPL(get_user_pages_fast); * * FOLL_PIN means that the pages must be released via unpin_user_page(). Please * see Documentation/core-api/pin_user_pages.rst for further details. + * + * Note that if a zero_page is amongst the returned pages, it will not have + * pins in it and unpin_user_page() will not remove pins from it. */ int pin_user_pages_fast(unsigned long start, int nr_pages, unsigned int gup_flags, struct page **pages) @@ -3225,6 +3245,9 @@ EXPORT_SYMBOL_GPL(pin_user_pages_fast_only); * * FOLL_PIN means that the pages must be released via unpin_user_page(). Please * see Documentation/core-api/pin_user_pages.rst for details. + * + * Note that if a zero_page is amongst the returned pages, it will not have + * pins in it and unpin_user_page*() will not remove pins from it. */ long pin_user_pages_remote(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, @@ -3260,6 +3283,9 @@ EXPORT_SYMBOL(pin_user_pages_remote); * * FOLL_PIN means that the pages must be released via unpin_user_page(). Please * see Documentation/core-api/pin_user_pages.rst for details. + * + * Note that if a zero_page is amongst the returned pages, it will not have + * pins in it and unpin_user_page*() will not remove pins from it. */ long pin_user_pages(unsigned long start, unsigned long nr_pages, unsigned int gup_flags, struct page **pages, @@ -3282,6 +3308,9 @@ EXPORT_SYMBOL(pin_user_pages); * pin_user_pages_unlocked() is the FOLL_PIN variant of * get_user_pages_unlocked(). Behavior is the same, except that this one sets * FOLL_PIN and rejects FOLL_GET. + * + * Note that if a zero_page is amongst the returned pages, it will not have + * pins in it and unpin_user_page*() will not remove pins from it. */ long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages, struct page **pages, unsigned int gup_flags)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tong Tiangen tongtiangen@huawei.com
[ Upstream commit bddf10d26e6e5114e7415a0e442ec6f51a559468 ]
We triggered the following crash in syzkaller tests:
BUG: Bad page state in process syz.7.38 pfn:1eff3
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1eff3
flags: 0x3fffff00004004(referenced|reserved|node=0|zone=1|lastcpupid=0x1fffff)
raw: 003fffff00004004 ffffe6c6c07bfcc8 ffffe6c6c07bfcc8 0000000000000000
raw: 0000000000000000 0000000000000000 00000000fffffffe 0000000000000000
page dumped because: PAGE_FLAGS_CHECK_AT_FREE flag(s) set
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
Call Trace:
 <TASK>
 dump_stack_lvl+0x32/0x50
 bad_page+0x69/0xf0
 free_unref_page_prepare+0x401/0x500
 free_unref_page+0x6d/0x1b0
 uprobe_write_opcode+0x460/0x8e0
 install_breakpoint.part.0+0x51/0x80
 register_for_each_vma+0x1d9/0x2b0
 __uprobe_register+0x245/0x300
 bpf_uprobe_multi_link_attach+0x29b/0x4f0
 link_create+0x1e2/0x280
 __sys_bpf+0x75f/0xac0
 __x64_sys_bpf+0x1a/0x30
 do_syscall_64+0x56/0x100
 entry_SYSCALL_64_after_hwframe+0x78/0xe2
BUG: Bad rss-counter state mm:00000000452453e0 type:MM_FILEPAGES val:-1
The following syzkaller test case can be used to reproduce:
r2 = creat(&(0x7f0000000000)='./file0\x00', 0x8)
write$nbd(r2, &(0x7f0000000580)=ANY=[], 0x10)
r4 = openat(0xffffffffffffff9c, &(0x7f0000000040)='./file0\x00', 0x42, 0x0)
mmap$IORING_OFF_SQ_RING(&(0x7f0000ffd000/0x3000)=nil, 0x3000, 0x0, 0x12, r4, 0x0)
r5 = userfaultfd(0x80801)
ioctl$UFFDIO_API(r5, 0xc018aa3f, &(0x7f0000000040)={0xaa, 0x20})
r6 = userfaultfd(0x80801)
ioctl$UFFDIO_API(r6, 0xc018aa3f, &(0x7f0000000140))
ioctl$UFFDIO_REGISTER(r6, 0xc020aa00, &(0x7f0000000100)={{&(0x7f0000ffc000/0x4000)=nil, 0x4000}, 0x2})
ioctl$UFFDIO_ZEROPAGE(r5, 0xc020aa04, &(0x7f0000000000)={{&(0x7f0000ffd000/0x1000)=nil, 0x1000}})
r7 = bpf$PROG_LOAD(0x5, &(0x7f0000000140)={0x2, 0x3, &(0x7f0000000200)=ANY=[@ANYBLOB="1800000000120000000000000000000095"], &(0x7f0000000000)='GPL\x00', 0x7, 0x0, 0x0, 0x0, 0x0, '\x00', 0x0, @fallback=0x30, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x10, 0x0, @void, @value}, 0x94)
bpf$BPF_LINK_CREATE_XDP(0x1c, &(0x7f0000000040)={r7, 0x0, 0x30, 0x1e, @val=@uprobe_multi={&(0x7f0000000080)='./file0\x00', &(0x7f0000000100)=[0x2], 0x0, 0x0, 0x1}}, 0x40)
The cause is that the zero pfn is set into the PTE without increasing the RSS count in mfill_atomic_pte_zeropage(), and the refcount of the zero folio does not increase accordingly. Then, the operation on the same pfn is performed in uprobe_write_opcode()->__replace_page(), which unconditionally decreases the RSS count and old_folio's refcount.
Therefore, two bugs are introduced:
1. The RSS count is incorrect; when the process exits, check_mm() reports the error "Bad rss-counter".
2. The reserved folio (the zero folio) is freed once folio->refcount reaches zero, and then free_pages_prepare()->free_page_is_bad() reports the error "Bad page state".
There is more, the following warning could also theoretically be triggered:
__replace_page() -> ... -> folio_remove_rmap_pte() -> VM_WARN_ON_FOLIO(is_zero_folio(folio), folio)
Considering that a uprobe hit on the zero folio is a very rare case, just reject a zero old folio immediately after get_user_page_vma_remote().
[ mingo: Cleaned up the changelog ]
Fixes: 7396fa818d62 ("uprobes/core: Make background page replacement logic account for rss_stat counters") Fixes: 2b1444983508 ("uprobes, mm, x86: Add the ability to install and remove uprobes breakpoints") Signed-off-by: Tong Tiangen tongtiangen@huawei.com Signed-off-by: Ingo Molnar mingo@kernel.org Reviewed-by: David Hildenbrand david@redhat.com Reviewed-by: Oleg Nesterov oleg@redhat.com Cc: Peter Zijlstra peterz@infradead.org Cc: Masami Hiramatsu mhiramat@kernel.org Link: https://lore.kernel.org/r/20250224031149.1598949-1-tongtiangen@huawei.com Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/events/uprobes.c | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c index 9ee25351cecac..7a22db17f3b5e 100644 --- a/kernel/events/uprobes.c +++ b/kernel/events/uprobes.c @@ -484,6 +484,11 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm, if (ret <= 0) goto put_old;
+ if (is_zero_page(old_page)) { + ret = -EINVAL; + goto put_old; + } + if (WARN(!is_register && PageCompound(old_page), "uprobe unregister should never work on compound page\n")) { ret = -EINVAL;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pavel Begunkov asml.silence@gmail.com
[ Upstream commit 6ebf05189dfc6d0d597c99a6448a4d1064439a18 ]
Match the compat part of io_sendmsg_copy_hdr() with its counterpart and save msg_control.
Fixes: c55978024d123 ("io_uring/net: move receive multishot out of the generic msghdr path") Signed-off-by: Pavel Begunkov asml.silence@gmail.com Link: https://lore.kernel.org/r/2a8418821fe83d3b64350ad2b3c0303e9b732bbd.174049850... Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- io_uring/net.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/io_uring/net.c b/io_uring/net.c index dc7c1e44ec47b..d56e8a47e50f2 100644 --- a/io_uring/net.c +++ b/io_uring/net.c @@ -282,7 +282,9 @@ static int io_sendmsg_copy_hdr(struct io_kiocb *req, if (unlikely(ret)) return ret;
- return __get_compat_msghdr(&iomsg->msg, &cmsg, NULL); + ret = __get_compat_msghdr(&iomsg->msg, &cmsg, NULL); + sr->msg_control = iomsg->msg.msg_control_user; + return ret; } #endif
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Russell Senior russell@personaltelco.net
[ Upstream commit bebe35bb738b573c32a5033499cd59f20293f2a3 ]
I still have some Soekris net4826 in a Community Wireless Network I volunteer with. These devices use an AMD SC1100 SoC. I am running OpenWrt on them, which uses a patched kernel, that naturally has evolved over time. I haven't updated the ones in the field in a number of years (circa 2017), but have one in a test bed, where I have intermittently tried out test builds.
A few years ago, I noticed some trouble, particularly when "warm booting", that is, doing a reboot without removing power: the device was hanging after the kernel message:
[ 0.081615] Working around Cyrix MediaGX virtual DMA bugs.
If I removed power and then restarted, it would boot fine, continuing through the message above, thusly:
[ 0.081615] Working around Cyrix MediaGX virtual DMA bugs.
[ 0.090076] Enable Memory-Write-back mode on Cyrix/NSC processor.
[ 0.100000] Enable Memory access reorder on Cyrix/NSC processor.
[ 0.100070] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.110058] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
[ 0.120037] CPU: NSC Geode(TM) Integrated Processor by National Semi (family: 0x5, model: 0x9, stepping: 0x1)
[...]
In order to continue using modern tools, like ssh, to interact with the software on these old devices, I need modern builds of the OpenWrt firmware on the devices. I confirmed that the warm boot hang was still an issue in modern OpenWrt builds (currently using a patched linux v6.6.65).
Last night, I decided it was time to get to the bottom of the warm boot hang, and began bisecting. From preserved builds, I narrowed down the bisection window from late February to late May 2019. During this period, the OpenWrt builds were using 4.14.x. I was able to build using period-correct Ubuntu 18.04.6. After a number of bisection iterations, I identified a kernel bump from 4.14.112 to 4.14.113 as the commit that introduced the warm boot hang.
https://github.com/openwrt/openwrt/commit/07aaa7e3d62ad32767d7067107db64b6ad...
Looking at the upstream changes in the stable kernel between 4.14.112 and 4.14.113 (tig v4.14.112..v4.14.113), I spotted a likely suspect:
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
So, I tried reverting just that kernel change on top of the breaking OpenWrt commit, and my warm boot hang went away.
Presumably, the warm boot hang is due to some register not getting cleared in the same way that a loss of power does. That is approximately as much as I understand about the problem.
After more poking/prodding and coaching from Jonas Gorski, it looks like this test patch fixes the problem on my board. Tested against v6.6.67 and v4.14.113.
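Decoding the one-line change below, with the bit meanings taken from the old and new comments in the diff:

    old: setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x88);  /* 0x08 suspend-on-halt + 0x80 #SUSP pin enable */
    new: setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x08);  /* suspend-on-halt only */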
Fixes: 18fb053f9b82 ("x86/cpu/cyrix: Use correct macros for Cyrix calls on Geode processors") Debugged-by: Jonas Gorski jonas.gorski@gmail.com Signed-off-by: Russell Senior russell@personaltelco.net Signed-off-by: Ingo Molnar mingo@kernel.org Link: https://lore.kernel.org/r/CAHP3WfOgs3Ms4Z+L9i0-iBOE21sdMk5erAiJurPjnrL9LSsgR... Cc: Matthew Whitehead tedheadster@gmail.com Cc: Thomas Gleixner tglx@linutronix.de Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kernel/cpu/cyrix.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/cyrix.c b/arch/x86/kernel/cpu/cyrix.c index 9651275aecd1b..dfec2c61e3547 100644 --- a/arch/x86/kernel/cpu/cyrix.c +++ b/arch/x86/kernel/cpu/cyrix.c @@ -153,8 +153,8 @@ static void geode_configure(void) u8 ccr3; local_irq_save(flags);
- /* Suspend on halt power saving and enable #SUSP pin */ - setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x88); + /* Suspend on halt power saving */ + setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x08);
ccr3 = getCx86(CX86_CCR3); setCx86(CX86_CCR3, (ccr3 & 0x0f) | 0x10); /* enable MAPEN */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Chukun Pan amadeus@jmu.edu.cn
[ Upstream commit 3126ea9be66b53e607f87f067641ba724be24181 ]
The device tree of RK3568 did not specify reset-names before, so add a fallback to the old behaviour to stay compatible with old DTs.
Fixes: fbcbffbac994 ("phy: rockchip: naneng-combphy: fix phy reset") Cc: Jianfeng Liu liujianfeng1994@gmail.com Signed-off-by: Chukun Pan amadeus@jmu.edu.cn Reviewed-by: Jonas Karlman jonas@kwiboo.se Link: https://lore.kernel.org/r/20250106100001.1344418-2-amadeus@jmu.edu.cn Signed-off-by: Vinod Koul vkoul@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/phy/rockchip/phy-rockchip-naneng-combphy.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c b/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c index d97a7164c4964..2c73cc8dd1edb 100644 --- a/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c +++ b/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c @@ -299,7 +299,10 @@ static int rockchip_combphy_parse_dt(struct device *dev, struct rockchip_combphy
priv->ext_refclk = device_property_present(dev, "rockchip,ext-refclk");
- priv->phy_rst = devm_reset_control_get(dev, "phy"); + priv->phy_rst = devm_reset_control_get_exclusive(dev, "phy"); + /* fallback to old behaviour */ + if (PTR_ERR(priv->phy_rst) == -ENOENT) + priv->phy_rst = devm_reset_control_array_get_exclusive(dev); if (IS_ERR(priv->phy_rst)) return dev_err_probe(dev, PTR_ERR(priv->phy_rst), "failed to get phy reset\n");
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Steven Rostedt rostedt@goodmis.org
commit 6f86bdeab633a56d5c6dccf1a2c5989b6a5e323e upstream.
The following commands cause a crash:
~# cd /sys/kernel/tracing/events/rcu/rcu_callback
~# echo 'hist:name=bad:keys=common_pid:onmax(bogus).save(common_pid)' > trigger
bash: echo: write error: Invalid argument
~# echo 'hist:name=bad:keys=common_pid' > trigger
Because the following occurs:
event_trigger_write() {
  trigger_process_regex() {
    event_hist_trigger_parse() {

      data = event_trigger_alloc(..);

      event_trigger_register(.., data) {
        cmd_ops->reg(.., data, ..) [hist_register_trigger()] {
          data->ops->init() [event_hist_trigger_init()] {
            save_named_trigger(name, data) {
              list_add(&data->named_list, &named_triggers);
            }
          }
        }
      }

      ret = create_actions(); (return -EINVAL)
      if (ret)
        goto out_unreg;
      [..]
      ret = hist_trigger_enable(data, ...) {
        list_add_tail_rcu(&data->list, &file->triggers); <<<---- SKIPPED!!! (this is important!)
      [..]
out_unreg:
      event_hist_unregister(.., data) {
        cmd_ops->unreg(.., data, ..) [hist_unregister_trigger()] {
          list_for_each_entry(iter, &file->triggers, list) {
            if (!hist_trigger_match(data, iter, named_data, false)) <- never matches
              continue;
            [..]
            test = iter;
          }
          if (test && test->ops->free) <<<-- test is NULL
            test->ops->free(test) [event_hist_trigger_free()] {
              [..]
              if (data->name)
                del_named_trigger(data) {
                  list_del(&data->named_list); <<<<-- NEVER gets removed!
                }
            }
        }
      }

      [..]
      kfree(data); <<<-- frees item but it is still on list
The next time a named hist trigger is registered, this causes a use-after-free bug and the kernel can crash.
Move the code around such that if event_trigger_register() succeeds, the next thing called is hist_trigger_enable() which adds it to the list.
A bunch of actions are called only if get_named_trigger_data() returns false. But they do not need to be called after event_trigger_register(), so they can be moved up, allowing event_trigger_register() to be called just before hist_trigger_enable(), keeping the two together and allowing file->triggers to be properly populated. The resulting order is sketched below.
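Condensed from the diff, the fixed flow in event_hist_trigger_parse() becomes (error paths elided):

    if (!get_named_trigger_data(trigger_data)) {
            ret = create_actions(hist_data);        /* may fail: nothing registered yet */
            if (has_hist_vars(hist_data) || hist_data->n_var_refs)
                    ret = save_hist_vars(hist_data);
            ret = tracing_map_init(hist_data->map);
    }
    ret = event_trigger_register(cmd_ops, file, glob, trigger_data);
    ret = hist_trigger_enable(trigger_data, file);  /* adds to file->triggers */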
Cc: stable@vger.kernel.org Cc: Masami Hiramatsu mhiramat@kernel.org Cc: Mathieu Desnoyers mathieu.desnoyers@efficios.com Link: https://lore.kernel.org/20250227163944.1c37f85f@gandalf.local.home Fixes: 067fe038e70f6 ("tracing: Add variable reference handling to hist triggers") Reported-by: Tomas Glozar tglozar@redhat.com Tested-by: Tomas Glozar tglozar@redhat.com Reviewed-by: Tom Zanussi zanussi@kernel.org Closes: https://lore.kernel.org/all/CAP4=nvTsxjckSBTz=Oe_UYh8keD9_sZC4i++4h72mJLic4_... Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/trace/trace_events_hist.c | 30 +++++++++++++++--------------- 1 file changed, 15 insertions(+), 15 deletions(-)
--- a/kernel/trace/trace_events_hist.c +++ b/kernel/trace/trace_events_hist.c @@ -6551,27 +6551,27 @@ static int event_hist_trigger_parse(stru if (existing_hist_update_only(glob, trigger_data, file)) goto out_free;
- ret = event_trigger_register(cmd_ops, file, glob, trigger_data); - if (ret < 0) - goto out_free; + if (!get_named_trigger_data(trigger_data)) {
- if (get_named_trigger_data(trigger_data)) - goto enable; + ret = create_actions(hist_data); + if (ret) + goto out_free;
- ret = create_actions(hist_data); - if (ret) - goto out_unreg; + if (has_hist_vars(hist_data) || hist_data->n_var_refs) { + ret = save_hist_vars(hist_data); + if (ret) + goto out_free; + }
- if (has_hist_vars(hist_data) || hist_data->n_var_refs) { - ret = save_hist_vars(hist_data); + ret = tracing_map_init(hist_data->map); if (ret) - goto out_unreg; + goto out_free; }
- ret = tracing_map_init(hist_data->map); - if (ret) - goto out_unreg; -enable: + ret = event_trigger_register(cmd_ops, file, glob, trigger_data); + if (ret < 0) + goto out_free; + ret = hist_trigger_enable(trigger_data, file); if (ret) goto out_unreg;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nikolay Kuratov kniv@yandex-team.ru
commit a1a7eb89ca0b89dc1c326eeee2596f263291aca3 upstream.
Check whether the denominator expression x * (x - 1) * 1000 mod {2^32, 2^64} produces zero and skip the stddev computation in that case.
For now don't care about rec->counter * rec->counter overflow because rec->time * rec->time overflow will likely happen earlier.
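A concrete wrap-around case (my arithmetic, not from the patch): with a 32-bit unsigned long,

    counter = 2^29:
    2^29 * (2^29 - 1) * 1000 = 2^32 * 125 * (2^29 - 1) ≡ 0 (mod 2^32)

and counter values of 0 and 1 make the product exactly zero, which the removed counter <= 1 branch used to catch.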
Cc: stable@vger.kernel.org Cc: Wen Yang wenyang@linux.alibaba.com Cc: Mark Rutland mark.rutland@arm.com Cc: Mathieu Desnoyers mathieu.desnoyers@efficios.com Link: https://lore.kernel.org/20250206090156.1561783-1-kniv@yandex-team.ru Fixes: e31f7939c1c27 ("ftrace: Avoid potential division by zero in function profiler") Signed-off-by: Nikolay Kuratov kniv@yandex-team.ru Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/trace/ftrace.c | 27 ++++++++++++--------------- 1 file changed, 12 insertions(+), 15 deletions(-)
--- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -507,6 +507,7 @@ static int function_stat_show(struct seq static struct trace_seq s; unsigned long long avg; unsigned long long stddev; + unsigned long long stddev_denom; #endif mutex_lock(&ftrace_profile_lock);
@@ -528,23 +529,19 @@ static int function_stat_show(struct seq #ifdef CONFIG_FUNCTION_GRAPH_TRACER seq_puts(m, " ");
- /* Sample standard deviation (s^2) */ - if (rec->counter <= 1) - stddev = 0; - else { - /* - * Apply Welford's method: - * s^2 = 1 / (n * (n-1)) * (n * \Sum (x_i)^2 - (\Sum x_i)^2) - */ + /* + * Variance formula: + * s^2 = 1 / (n * (n-1)) * (n * \Sum (x_i)^2 - (\Sum x_i)^2) + * Maybe Welford's method is better here? + * Divide only by 1000 for ns^2 -> us^2 conversion. + * trace_print_graph_duration will divide by 1000 again. + */ + stddev = 0; + stddev_denom = rec->counter * (rec->counter - 1) * 1000; + if (stddev_denom) { stddev = rec->counter * rec->time_squared - rec->time * rec->time; - - /* - * Divide only 1000 for ns^2 -> us^2 conversion. - * trace_print_graph_duration will divide 1000 again. - */ - stddev = div64_ul(stddev, - rec->counter * (rec->counter - 1) * 1000); + stddev = div64_ul(stddev, stddev_denom); }
trace_seq_init(&s);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dmitry Panchenko dmitry@d-systems.ee
commit 9af3b4f2d879da01192d6168e6c651e7fb5b652d upstream.
Re-add the sample-rate quirk for the Pioneer DJM-900NXS2. This device does not work without setting the sample rate.
Signed-off-by: Dmitry Panchenko dmitry@d-systems.ee Cc: stable@vger.kernel.org Link: https://patch.msgid.link/20250220161540.3624660-1-dmitry@d-systems.ee Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/usb/quirks.c | 1 + 1 file changed, 1 insertion(+)
--- a/sound/usb/quirks.c +++ b/sound/usb/quirks.c @@ -1773,6 +1773,7 @@ void snd_usb_set_format_quirk(struct snd case USB_ID(0x534d, 0x2109): /* MacroSilicon MS2109 */ subs->stream_offset_adj = 2; break; + case USB_ID(0x2b73, 0x000a): /* Pioneer DJM-900NXS2 */ case USB_ID(0x2b73, 0x0013): /* Pioneer DJM-450 */ pioneer_djm_set_format_quirk(subs, 0x0082); break;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kan Liang kan.liang@linux.intel.com
commit 88ec7eedbbd21cad38707620ad6c48a4e9a87c18 upstream.
Perf doesn't work at low frequencies:
$ perf record -e cpu_core/instructions/ppp -F 120
Error:
The sys_perf_event_open() syscall returned with 22 (Invalid argument) for event (cpu_core/instructions/ppp).
"dmesg | grep -i perf" may provide additional information.
The limit_period() check avoids a low sampling period on a counter. It doesn't intend to limit the frequency.
The check in x86_pmu_hw_config() should be limited to the non-freq mode. The attr.sample_period and attr.sample_freq fields are a union, so attr.sample_period should not be used to indicate frequency mode.
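For context, the two fields really are overlaid in the uapi definition of struct perf_event_attr (include/uapi/linux/perf_event.h):

    union {
            __u64   sample_period;
            __u64   sample_freq;
    };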
Fixes: c46e665f0377 ("perf/x86: Add INST_RETIRED.ALL workarounds") Signed-off-by: Kan Liang kan.liang@linux.intel.com Signed-off-by: Ingo Molnar mingo@kernel.org Reviewed-by: Ravi Bangoria ravi.bangoria@amd.com Cc: Peter Zijlstra peterz@infradead.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20250117151913.3043942-1-kan.liang@linux.intel.com Closes: https://lore.kernel.org/lkml/20250115154949.3147-1-ravi.bangoria@amd.com/ Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/events/core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/events/core.c +++ b/arch/x86/events/core.c @@ -623,7 +623,7 @@ int x86_pmu_hw_config(struct perf_event if (event->attr.type == event->pmu->type) event->hw.config |= event->attr.config & X86_RAW_EVENT_MASK;
- if (event->attr.sample_period && x86_pmu.limit_period) { + if (!event->attr.freq && x86_pmu.limit_period) { s64 left = event->attr.sample_period; x86_pmu.limit_period(event, &left); if (left > event->attr.sample_period)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kan Liang kan.liang@linux.intel.com
commit 0d39844150546fa1415127c5fbae26db64070dd3 upstream.
A low attr::freq value cannot be set via IOC_PERIOD on some platforms.
The perf_event_check_period() introduced in:
81ec3f3c4c4d ("perf/x86: Add check_period PMU callback")
was intended to check the period, rather than the frequency. A low frequency may be mistakenly rejected by limit_period().
Fix it.
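The affected path is the period/frequency update ioctl; a minimal userspace sketch (illustrative only, assuming the event was opened with attr.freq = 1 so the value is interpreted as a frequency):

    __u64 value = 120;  /* a low frequency that used to be rejected */

    if (ioctl(perf_fd, PERF_EVENT_IOC_PERIOD, &value))
            perror("PERF_EVENT_IOC_PERIOD");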
Fixes: 81ec3f3c4c4d ("perf/x86: Add check_period PMU callback") Signed-off-by: Kan Liang kan.liang@linux.intel.com Signed-off-by: Ingo Molnar mingo@kernel.org Reviewed-by: Ravi Bangoria ravi.bangoria@amd.com Cc: Peter Zijlstra peterz@infradead.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20250117151913.3043942-2-kan.liang@linux.intel.com Closes: https://lore.kernel.org/lkml/20250115154949.3147-1-ravi.bangoria@amd.com/ Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/events/core.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-)
--- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -5679,14 +5679,15 @@ static int _perf_event_period(struct per if (!value) return -EINVAL;
- if (event->attr.freq && value > sysctl_perf_event_sample_rate) - return -EINVAL; - - if (perf_event_check_period(event, value)) - return -EINVAL; - - if (!event->attr.freq && (value & (1ULL << 63))) - return -EINVAL; + if (event->attr.freq) { + if (value > sysctl_perf_event_sample_rate) + return -EINVAL; + } else { + if (perf_event_check_period(event, value)) + return -EINVAL; + if (value & (1ULL << 63)) + return -EINVAL; + }
event_function_call(event, __perf_event_period, &value);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tom Chung chiahsuan.chung@amd.com
commit e8863f8b0316d8ee1e7e5291e8f2f72c91ac967d upstream.
[Why] PSR-SU may cause some glitching randomly on several panels.
[How] Temporarily disable the PSR-SU and fallback to PSR1 for all eDP panels.
Link: https://gitlab.freedesktop.org/drm/amd/-/issues/3388 Cc: Mario Limonciello mario.limonciello@amd.com Cc: Alex Deucher alexander.deucher@amd.com Reviewed-by: Sun peng Li sunpeng.li@amd.com Signed-off-by: Tom Chung chiahsuan.chung@amd.com Signed-off-by: Roman Li roman.li@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com (cherry picked from commit 6deeefb820d0efb0b36753622fb982d03b37b3ad) Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c @@ -51,7 +51,8 @@ static bool link_supports_psrsu(struct d !link->dpcd_caps.psr_info.psr2_su_y_granularity_cap) return false;
- return dc_dmub_check_min_version(dc->ctx->dmub_srv->dmub); + /* Temporarily disable PSR-SU to avoid glitches */ + return false; }
/*
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Roman Li Roman.Li@amd.com
commit 4de141b8b1b7991b607f77e5f4580e1c67c24717 upstream.
[Why] DC is not using amdgpu_irq_get/put to manage the HPD interrupt refcounts. So when amdgpu_irq_gpu_reset_resume_helper() reprograms all of the IRQs, HPD gets disabled.
[How] Use amdgpu_irq_get/put() for HPD init/fini in DM in order to sync refcounts
Cc: Mario Limonciello mario.limonciello@amd.com Cc: Alex Deucher alexander.deucher@amd.com Reviewed-by: Mario Limonciello mario.limonciello@amd.com Reviewed-by: Aurabindo Pillai aurabindo.pillai@amd.com Signed-off-by: Roman Li Roman.Li@amd.com Signed-off-by: Zaeem Mohamed zaeem.mohamed@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com (cherry picked from commit f3dde2ff7fcaacd77884502e8f572f2328e9c745) Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+)
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c @@ -885,6 +885,7 @@ void amdgpu_dm_hpd_init(struct amdgpu_de struct drm_device *dev = adev_to_drm(adev); struct drm_connector *connector; struct drm_connector_list_iter iter; + int i;
drm_connector_list_iter_begin(dev, &iter); drm_for_each_connector_iter(connector, &iter) { @@ -906,6 +907,12 @@ void amdgpu_dm_hpd_init(struct amdgpu_de } } drm_connector_list_iter_end(&iter); + + /* Update reference counts for HPDs */ + for (i = DC_IRQ_SOURCE_HPD1; i <= adev->mode_info.num_hpd; i++) { + if (amdgpu_irq_get(adev, &adev->hpd_irq, i - DC_IRQ_SOURCE_HPD1)) + drm_err(dev, "DM_IRQ: Failed get HPD for source=%d)!\n", i); + } }
/** @@ -921,6 +928,7 @@ void amdgpu_dm_hpd_fini(struct amdgpu_de struct drm_device *dev = adev_to_drm(adev); struct drm_connector *connector; struct drm_connector_list_iter iter; + int i;
drm_connector_list_iter_begin(dev, &iter); drm_for_each_connector_iter(connector, &iter) { @@ -941,4 +949,10 @@ void amdgpu_dm_hpd_fini(struct amdgpu_de } } drm_connector_list_iter_end(&iter); + + /* Update reference counts for HPDs */ + for (i = DC_IRQ_SOURCE_HPD1; i <= adev->mode_info.num_hpd; i++) { + if (amdgpu_irq_put(adev, &adev->hpd_irq, i - DC_IRQ_SOURCE_HPD1)) + drm_err(dev, "DM_IRQ: Failed put HPD for source=%d!\n", i); + } }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tyrone Ting kfting@nuvoton.com
commit dd1998e243f5fa25d348a384ba0b6c84d980f2b2 upstream.
The customer reports a soft lockup issue related to the i2c driver. After checking, the i2c module was doing a tx transfer when the BMC machine rebooted in the middle of the i2c transaction, so the i2c module kept its status without being reset.
Due to this i2c module status, the i2c irq handler keeps getting triggered, since it is registered during the kernel boot that follows the BMC warm reboot. The continuous triggering is only stopped by the soft lockup watchdog timer.
Disable the interrupt enable bit in the i2c module before calling devm_request_irq to fix this issue, since the relevant i2c status bit is read-only.
Here is the soft lockup log.
[ 28.176395] watchdog: BUG: soft lockup - CPU#0 stuck for 26s! [swapper/0:1]
[ 28.183351] Modules linked in:
[ 28.186407] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.15.120-yocto-s-dirty-bbebc78 #1
[ 28.201174] pstate: 40000005 (nZcv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 28.208128] pc : __do_softirq+0xb0/0x368
[ 28.212055] lr : __do_softirq+0x70/0x368
[ 28.215972] sp : ffffff8035ebca00
[ 28.219278] x29: ffffff8035ebca00 x28: 0000000000000002 x27: ffffff80071a3780
[ 28.226412] x26: ffffffc008bdc000 x25: ffffffc008bcc640 x24: ffffffc008be50c0
[ 28.233546] x23: ffffffc00800200c x22: 0000000000000000 x21: 000000000000001b
[ 28.240679] x20: 0000000000000000 x19: ffffff80001c3200 x18: ffffffffffffffff
[ 28.247812] x17: ffffffc02d2e0000 x16: ffffff8035eb8b40 x15: 00001e8480000000
[ 28.254945] x14: 02c3647e37dbfcb6 x13: 02c364f2ab14200c x12: 0000000002c364f2
[ 28.262078] x11: 00000000fa83b2da x10: 000000000000b67e x9 : ffffffc008010250
[ 28.269211] x8 : 000000009d983d00 x7 : 7fffffffffffffff x6 : 0000036d74732434
[ 28.276344] x5 : 00ffffffffffffff x4 : 0000000000000015 x3 : 0000000000000198
[ 28.283476] x2 : ffffffc02d2e0000 x1 : 00000000000000e0 x0 : ffffffc008bdcb40
[ 28.290611] Call trace:
[ 28.293052] __do_softirq+0xb0/0x368
[ 28.296625] __irq_exit_rcu+0xe0/0x100
[ 28.300374] irq_exit+0x14/0x20
[ 28.303513] handle_domain_irq+0x68/0x90
[ 28.307440] gic_handle_irq+0x78/0xb0
[ 28.311098] call_on_irq_stack+0x20/0x38
[ 28.315019] do_interrupt_handler+0x54/0x5c
[ 28.319199] el1_interrupt+0x2c/0x4c
[ 28.322777] el1h_64_irq_handler+0x14/0x20
[ 28.326872] el1h_64_irq+0x74/0x78
[ 28.330269] __setup_irq+0x454/0x780
[ 28.333841] request_threaded_irq+0xd0/0x1b4
[ 28.338107] devm_request_threaded_irq+0x84/0x100
[ 28.342809] npcm_i2c_probe_bus+0x188/0x3d0
[ 28.346990] platform_probe+0x6c/0xc4
[ 28.350653] really_probe+0xcc/0x45c
[ 28.354227] __driver_probe_device+0x8c/0x160
[ 28.358578] driver_probe_device+0x44/0xe0
[ 28.362670] __driver_attach+0x124/0x1d0
[ 28.366589] bus_for_each_dev+0x7c/0xe0
[ 28.370426] driver_attach+0x28/0x30
[ 28.373997] bus_add_driver+0x124/0x240
[ 28.377830] driver_register+0x7c/0x124
[ 28.381662] __platform_driver_register+0x2c/0x34
[ 28.386362] npcm_i2c_init+0x3c/0x5c
[ 28.389937] do_one_initcall+0x74/0x230
[ 28.393768] kernel_init_freeable+0x24c/0x2b4
[ 28.398126] kernel_init+0x28/0x130
[ 28.401614] ret_from_fork+0x10/0x20
[ 28.405189] Kernel panic - not syncing: softlockup: hung tasks
[ 28.411011] SMP: stopping secondary CPUs
[ 28.414933] Kernel Offset: disabled
[ 28.418412] CPU features: 0x00000000,00000802
[ 28.427644] Rebooting in 20 seconds..
Fixes: 56a1485b102e ("i2c: npcm7xx: Add Nuvoton NPCM I2C controller driver") Signed-off-by: Tyrone Ting kfting@nuvoton.com Cc: stable@vger.kernel.org # v5.8+ Reviewed-by: Tali Perry tali.perry1@gmail.com Signed-off-by: Andi Shyti andi.shyti@kernel.org Link: https://lore.kernel.org/r/20250220040029.27596-2-kfting@nuvoton.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/i2c/busses/i2c-npcm7xx.c | 7 +++++++ 1 file changed, 7 insertions(+)
--- a/drivers/i2c/busses/i2c-npcm7xx.c +++ b/drivers/i2c/busses/i2c-npcm7xx.c @@ -2335,6 +2335,13 @@ static int npcm_i2c_probe_bus(struct pla if (irq < 0) return irq;
+ /* + * Disable the interrupt to avoid the interrupt handler being triggered + * incorrectly by the asynchronous interrupt status since the machine + * might do a warm reset during the last smbus/i2c transfer session. + */ + npcm_i2c_int_enable(bus, false); + ret = devm_request_irq(bus->dev, irq, npcm_i2c_bus_irq, 0, dev_name(bus->dev), bus); if (ret)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nikita Zhandarovich n.zhandarovich@fintech.ru
commit 1cf9631d836b289bd5490776551961c883ae8a4f upstream.
Syzbot reports [1] a warning in usb_submit_urb() triggered by inconsistencies between expected and actually present endpoints in the gl620a driver. Since genelink_bind() does not properly verify whether the specified endpoints are in fact provided by the device (in this case, an artificially manufactured one), one may get a mismatch.
Fix the issue by resorting to a usbnet utility function usbnet_get_endpoints(), usually reserved for this very problem. Check for endpoints and return early before proceeding further if any are missing.
[1] Syzbot report:
usb 5-1: Manufacturer: syz
usb 5-1: SerialNumber: syz
usb 5-1: config 0 descriptor??
gl620a 5-1:0.23 usb0: register 'gl620a' at usb-dummy_hcd.0-1, ...
------------[ cut here ]------------
usb 5-1: BOGUS urb xfer, pipe 3 != type 1
WARNING: CPU: 2 PID: 1841 at drivers/usb/core/urb.c:503 usb_submit_urb+0xe4b/0x1730 drivers/usb/core/urb.c:503
Modules linked in:
CPU: 2 UID: 0 PID: 1841 Comm: kworker/2:2 Not tainted 6.12.0-syzkaller-07834-g06afb0f36106 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Workqueue: mld mld_ifc_work
RIP: 0010:usb_submit_urb+0xe4b/0x1730 drivers/usb/core/urb.c:503
...
Call Trace:
 <TASK>
 usbnet_start_xmit+0x6be/0x2780 drivers/net/usb/usbnet.c:1467
 __netdev_start_xmit include/linux/netdevice.h:5002 [inline]
 netdev_start_xmit include/linux/netdevice.h:5011 [inline]
 xmit_one net/core/dev.c:3590 [inline]
 dev_hard_start_xmit+0x9a/0x7b0 net/core/dev.c:3606
 sch_direct_xmit+0x1ae/0xc30 net/sched/sch_generic.c:343
 __dev_xmit_skb net/core/dev.c:3827 [inline]
 __dev_queue_xmit+0x13d4/0x43e0 net/core/dev.c:4400
 dev_queue_xmit include/linux/netdevice.h:3168 [inline]
 neigh_resolve_output net/core/neighbour.c:1514 [inline]
 neigh_resolve_output+0x5bc/0x950 net/core/neighbour.c:1494
 neigh_output include/net/neighbour.h:539 [inline]
 ip6_finish_output2+0xb1b/0x2070 net/ipv6/ip6_output.c:141
 __ip6_finish_output net/ipv6/ip6_output.c:215 [inline]
 ip6_finish_output+0x3f9/0x1360 net/ipv6/ip6_output.c:226
 NF_HOOK_COND include/linux/netfilter.h:303 [inline]
 ip6_output+0x1f8/0x540 net/ipv6/ip6_output.c:247
 dst_output include/net/dst.h:450 [inline]
 NF_HOOK include/linux/netfilter.h:314 [inline]
 NF_HOOK include/linux/netfilter.h:308 [inline]
 mld_sendpack+0x9f0/0x11d0 net/ipv6/mcast.c:1819
 mld_send_cr net/ipv6/mcast.c:2120 [inline]
 mld_ifc_work+0x740/0xca0 net/ipv6/mcast.c:2651
 process_one_work+0x9c5/0x1ba0 kernel/workqueue.c:3229
 process_scheduled_works kernel/workqueue.c:3310 [inline]
 worker_thread+0x6c8/0xf00 kernel/workqueue.c:3391
 kthread+0x2c1/0x3a0 kernel/kthread.c:389
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Reported-by: syzbot+d693c07c6f647e0388d3@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=d693c07c6f647e0388d3 Fixes: 47ee3051c856 ("[PATCH] USB: usbnet (5/9) module for genesys gl620a cables") Cc: stable@vger.kernel.org Signed-off-by: Nikita Zhandarovich n.zhandarovich@fintech.ru Link: https://patch.msgid.link/20250224172919.1220522-1-n.zhandarovich@fintech.ru Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/usb/gl620a.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-)
--- a/drivers/net/usb/gl620a.c +++ b/drivers/net/usb/gl620a.c @@ -179,9 +179,7 @@ static int genelink_bind(struct usbnet * { dev->hard_mtu = GL_RCV_BUF_SIZE; dev->net->hard_header_len += 4; - dev->in = usb_rcvbulkpipe(dev->udev, dev->driver_info->in); - dev->out = usb_sndbulkpipe(dev->udev, dev->driver_info->out); - return 0; + return usbnet_get_endpoints(dev, intf); }
static const struct driver_info genelink_info = {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wei Fang wei.fang@nxp.com
commit 39ab773e4c120f7f98d759415ccc2aca706bbc10 upstream.
When a DMA mapping error occurs while processing skb frags, it will free one more tx_swbd than expected, so fix this off-by-one issue.
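The replacement helper is easier to follow on separate lines; this is enetc_unwind_tx_frame() as introduced by the diff below, reconstructed for readability:

    static void enetc_unwind_tx_frame(struct enetc_bdr *tx_ring, int count, int i)
    {
            while (count--) {
                    struct enetc_tx_swbd *tx_swbd = &tx_ring->tx_swbd[i];

                    enetc_free_tx_frame(tx_ring, tx_swbd);
                    /* walk backwards through the ring, wrapping at index 0 */
                    if (i == 0)
                            i = tx_ring->bd_count;
                    i--;
            }
    }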
Fixes: d4fd0404c1c9 ("enetc: Introduce basic PF and VF ENETC ethernet drivers") Cc: stable@vger.kernel.org Suggested-by: Vladimir Oltean vladimir.oltean@nxp.com Suggested-by: Michal Swiatkowski michal.swiatkowski@linux.intel.com Signed-off-by: Wei Fang wei.fang@nxp.com Reviewed-by: Vladimir Oltean vladimir.oltean@nxp.com Reviewed-by: Claudiu Manoil claudiu.manoil@nxp.com Link: https://patch.msgid.link/20250224111251.1061098-2-wei.fang@nxp.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/freescale/enetc/enetc.c | 26 +++++++++++++++++++------- 1 file changed, 19 insertions(+), 7 deletions(-)
--- a/drivers/net/ethernet/freescale/enetc/enetc.c +++ b/drivers/net/ethernet/freescale/enetc/enetc.c @@ -123,6 +123,24 @@ static int enetc_ptp_parse(struct sk_buf return 0; }
+/** + * enetc_unwind_tx_frame() - Unwind the DMA mappings of a multi-buffer Tx frame + * @tx_ring: Pointer to the Tx ring on which the buffer descriptors are located + * @count: Number of Tx buffer descriptors which need to be unmapped + * @i: Index of the last successfully mapped Tx buffer descriptor + */ +static void enetc_unwind_tx_frame(struct enetc_bdr *tx_ring, int count, int i) +{ + while (count--) { + struct enetc_tx_swbd *tx_swbd = &tx_ring->tx_swbd[i]; + + enetc_free_tx_frame(tx_ring, tx_swbd); + if (i == 0) + i = tx_ring->bd_count; + i--; + } +} + static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb) { bool do_vlan, do_onestep_tstamp = false, do_twostep_tstamp = false; @@ -306,13 +324,7 @@ static int enetc_map_tx_buffs(struct ene dma_err: dev_err(tx_ring->dev, "DMA map error");
- do { - tx_swbd = &tx_ring->tx_swbd[i]; - enetc_free_tx_frame(tx_ring, tx_swbd); - if (i == 0) - i = tx_ring->bd_count; - i--; - } while (count--); + enetc_unwind_tx_frame(tx_ring, count, i);
return 0; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wei Fang wei.fang@nxp.com
commit da291996b16ebd10626d4b20288327b743aff110 upstream.
When creating a TSO header, if the skb is VLAN tagged, the extended BD will be used and 'count' should be increased by 2 instead of 1. Otherwise, when an error occurs, fewer tx_swbd entries will be freed than the actual number used.
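A minimal sketch of the corrected accounting (an illustrative fragment, not the literal function body): enetc_map_tx_tso_hdr() now returns the number of BDs it consumed instead of returning void, so the caller's 'count' stays accurate when a VLAN extension BD is written:

    int count = 1;                  /* the TSO header BD is always written */

    if (skb_vlan_tag_present(skb))  /* the VLAN tag needs an extension BD */
            count++;

    return count;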
Fixes: fb8629e2cbfc ("net: enetc: add support for software TSO") Cc: stable@vger.kernel.org Suggested-by: Vladimir Oltean vladimir.oltean@nxp.com Signed-off-by: Wei Fang wei.fang@nxp.com Reviewed-by: Vladimir Oltean vladimir.oltean@nxp.com Reviewed-by: Claudiu Manoil claudiu.manoil@nxp.com Link: https://patch.msgid.link/20250224111251.1061098-3-wei.fang@nxp.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/freescale/enetc/enetc.c | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-)
--- a/drivers/net/ethernet/freescale/enetc/enetc.c +++ b/drivers/net/ethernet/freescale/enetc/enetc.c @@ -329,14 +329,15 @@ dma_err: return 0; }
-static void enetc_map_tx_tso_hdr(struct enetc_bdr *tx_ring, struct sk_buff *skb, - struct enetc_tx_swbd *tx_swbd, - union enetc_tx_bd *txbd, int *i, int hdr_len, - int data_len) +static int enetc_map_tx_tso_hdr(struct enetc_bdr *tx_ring, struct sk_buff *skb, + struct enetc_tx_swbd *tx_swbd, + union enetc_tx_bd *txbd, int *i, int hdr_len, + int data_len) { union enetc_tx_bd txbd_tmp; u8 flags = 0, e_flags = 0; dma_addr_t addr; + int count = 1;
enetc_clear_tx_bd(&txbd_tmp); addr = tx_ring->tso_headers_dma + *i * TSO_HEADER_SIZE; @@ -379,7 +380,10 @@ static void enetc_map_tx_tso_hdr(struct /* Write the BD */ txbd_tmp.ext.e_flags = e_flags; *txbd = txbd_tmp; + count++; } + + return count; }
static int enetc_map_tx_tso_data(struct enetc_bdr *tx_ring, struct sk_buff *skb, @@ -511,9 +515,9 @@ static int enetc_map_tx_tso_buffs(struct
/* compute the csum over the L4 header */ csum = enetc_tso_hdr_csum(&tso, skb, hdr, hdr_len, &pos); - enetc_map_tx_tso_hdr(tx_ring, skb, tx_swbd, txbd, &i, hdr_len, data_len); + count += enetc_map_tx_tso_hdr(tx_ring, skb, tx_swbd, txbd, + &i, hdr_len, data_len); bd_data_num = 0; - count++;
while (data_len > 0) { int size;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wei Fang wei.fang@nxp.com
commit bbcbc906ab7b5834c1219cd17a38d78dba904aa0 upstream.
There is an issue with one-step timestamping over UDP/IP: the peer discards the sync packet because of a wrong UDP checksum. For ENETC v1, the software needs to update the UDP checksum when updating the originTimestamp field, so that the hardware can correctly update the UDP checksum when updating the correction field. Otherwise, the UDP checksum in the sync packet will be wrong.
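The repeated pattern in the diff below can be summarized with a small helper; this sketch uses a hypothetical name (update_be32_keep_csum) to show how inet_proto_csum_replace4() folds the old value out of the UDP checksum and the new value in before the field is overwritten:

    /* hypothetical helper: rewrite one 32-bit packet field while keeping
     * the UDP checksum in uh->check consistent */
    static void update_be32_keep_csum(struct udphdr *uh, struct sk_buff *skb,
                                      __be32 *field, __be32 new_val)
    {
            inet_proto_csum_replace4(&uh->check, skb, *field, new_val, false);
            *field = new_val;
    }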
Fixes: 7294380c5211 ("enetc: support PTP Sync packet one-step timestamping") Cc: stable@vger.kernel.org Signed-off-by: Wei Fang wei.fang@nxp.com Reviewed-by: Vladimir Oltean vladimir.oltean@nxp.com Tested-by: Vladimir Oltean vladimir.oltean@nxp.com Link: https://patch.msgid.link/20250224111251.1061098-6-wei.fang@nxp.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/freescale/enetc/enetc.c | 41 ++++++++++++++++++++++----- 1 file changed, 34 insertions(+), 7 deletions(-)
--- a/drivers/net/ethernet/freescale/enetc/enetc.c +++ b/drivers/net/ethernet/freescale/enetc/enetc.c @@ -231,9 +231,11 @@ static int enetc_map_tx_buffs(struct ene }
if (do_onestep_tstamp) { - u32 lo, hi, val; - u64 sec, nsec; + __be32 new_sec_l, new_nsec; + u32 lo, hi, nsec, val; + __be16 new_sec_h; u8 *data; + u64 sec;
lo = enetc_rd_hot(hw, ENETC_SICTR0); hi = enetc_rd_hot(hw, ENETC_SICTR1); @@ -247,13 +249,38 @@ static int enetc_map_tx_buffs(struct ene /* Update originTimestamp field of Sync packet * - 48 bits seconds field * - 32 bits nanseconds field + * + * In addition, the UDP checksum needs to be updated + * by software after updating originTimestamp field, + * otherwise the hardware will calculate the wrong + * checksum when updating the correction field and + * update it to the packet. */ data = skb_mac_header(skb); - *(__be16 *)(data + offset2) = - htons((sec >> 32) & 0xffff); - *(__be32 *)(data + offset2 + 2) = - htonl(sec & 0xffffffff); - *(__be32 *)(data + offset2 + 6) = htonl(nsec); + new_sec_h = htons((sec >> 32) & 0xffff); + new_sec_l = htonl(sec & 0xffffffff); + new_nsec = htonl(nsec); + if (udp) { + struct udphdr *uh = udp_hdr(skb); + __be32 old_sec_l, old_nsec; + __be16 old_sec_h; + + old_sec_h = *(__be16 *)(data + offset2); + inet_proto_csum_replace2(&uh->check, skb, old_sec_h, + new_sec_h, false); + + old_sec_l = *(__be32 *)(data + offset2 + 2); + inet_proto_csum_replace4(&uh->check, skb, old_sec_l, + new_sec_l, false); + + old_nsec = *(__be32 *)(data + offset2 + 6); + inet_proto_csum_replace4(&uh->check, skb, old_nsec, + new_nsec, false); + } + + *(__be16 *)(data + offset2) = new_sec_h; + *(__be32 *)(data + offset2 + 2) = new_sec_l; + *(__be32 *)(data + offset2 + 6) = new_nsec;
/* Configure single-step register */ val = ENETC_PM0_SINGLE_STEP_EN;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wei Fang wei.fang@nxp.com
commit 432a2cb3ee97a7c6ea578888fe81baad035b9307 upstream.
The 'xdp_tx' is used to count the number of XDP_TX frames sent, not the number of Tx BDs.
Fixes: 7ed2bc80074e ("net: enetc: add support for XDP_TX") Cc: stable@vger.kernel.org Signed-off-by: Wei Fang wei.fang@nxp.com Reviewed-by: Ioana Ciornei ioana.ciornei@nxp.com Reviewed-by: Vladimir Oltean vladimir.oltean@nxp.com Link: https://patch.msgid.link/20250224111251.1061098-4-wei.fang@nxp.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/freescale/enetc/enetc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/ethernet/freescale/enetc/enetc.c +++ b/drivers/net/ethernet/freescale/enetc/enetc.c @@ -1624,7 +1624,7 @@ static int enetc_clean_rx_ring_xdp(struc enetc_xdp_drop(rx_ring, orig_i, i); tx_ring->stats.xdp_tx_drops++; } else { - tx_ring->stats.xdp_tx += xdp_tx_bd_cnt; + tx_ring->stats.xdp_tx++; rx_ring->xdp.xdp_tx_in_flight += xdp_tx_bd_cnt; xdp_tx_frm_cnt++; /* The XDP_TX enqueue was successful, so we
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wei Fang wei.fang@nxp.com
commit 249df695c3ffe8c8d36d46c2580ce72410976f96 upstream.
There is an off-by-one issue in the err_chained_bd path: it frees one more tx_swbd than expected. The err_map_data path does not have this issue. To fix the off-by-one and make the two error paths consistent, keep the increments of 'i' and 'count' in sync and call enetc_unwind_tx_frame() for the error handling.
Fixes: fb8629e2cbfc ("net: enetc: add support for software TSO") Cc: stable@vger.kernel.org Suggested-by: Vladimir Oltean vladimir.oltean@nxp.com Signed-off-by: Wei Fang wei.fang@nxp.com Reviewed-by: Vladimir Oltean vladimir.oltean@nxp.com Reviewed-by: Claudiu Manoil claudiu.manoil@nxp.com Link: https://patch.msgid.link/20250224111251.1061098-9-wei.fang@nxp.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/freescale/enetc/enetc.c | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-)
--- a/drivers/net/ethernet/freescale/enetc/enetc.c +++ b/drivers/net/ethernet/freescale/enetc/enetc.c @@ -568,8 +568,13 @@ static int enetc_map_tx_tso_buffs(struct err = enetc_map_tx_tso_data(tx_ring, skb, tx_swbd, txbd, tso.data, size, size == data_len); - if (err) + if (err) { + if (i == 0) + i = tx_ring->bd_count; + i--; + goto err_map_data; + }
data_len -= size; count++; @@ -598,13 +603,7 @@ err_map_data: dev_err(tx_ring->dev, "DMA map error");
err_chained_bd: - do { - tx_swbd = &tx_ring->tx_swbd[i]; - enetc_free_tx_frame(tx_ring, tx_swbd); - if (i == 0) - i = tx_ring->bd_count; - i--; - } while (count--); + enetc_unwind_tx_frame(tx_ring, count, i);
return 0; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: BH Hsieh bhsieh@nvidia.com
commit 55f1a5f7c97c3c92ba469e16991a09274410ceb7 upstream.
It was observed that VBUS_OVERRIDE & ID_OVERRIDE might be programmed with unexpected values before the XUSB PADCTL driver takes over; this can also occur in virtualization scenarios.

For example, UEFI firmware programs ID_OVERRIDE=GROUNDED to set a type-c port to host mode and keeps that value across the handoff to the kernel. If the type-c port is connected to a USB host, the errors below can be observed right after the USB host mode driver is probed. The errors persist until the usb role class driver detects the type-c port as device mode and notifies the USB device mode driver to set both ID_OVERRIDE and VBUS_OVERRIDE to correct values via the XUSB PADCTL driver.
[ 173.765814] usb usb3-port2: Cannot enable. Maybe the USB cable is bad?
[ 173.765837] usb usb3-port2: config error
Taking virtualization into account, asserting an XUSB PADCTL reset would break XUSB functions used by other guest OSes, hence only reset the VBUS & ID OVERRIDE of the port in utmi_phy_init.
Fixes: bbf711682cd5 ("phy: tegra: xusb: Add Tegra186 support") Cc: stable@vger.kernel.org Change-Id: Ic63058d4d49b4a1f8f9ab313196e20ad131cc591 Signed-off-by: BH Hsieh bhsieh@nvidia.com Signed-off-by: Henry Lin henryl@nvidia.com Link: https://lore.kernel.org/r/20250122105943.8057-1-henryl@nvidia.com Signed-off-by: Vinod Koul vkoul@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/phy/tegra/xusb-tegra186.c | 11 +++++++++++ 1 file changed, 11 insertions(+)
--- a/drivers/phy/tegra/xusb-tegra186.c +++ b/drivers/phy/tegra/xusb-tegra186.c @@ -900,6 +900,7 @@ static int tegra186_utmi_phy_exit(struct unsigned int index = lane->index; struct device *dev = padctl->dev; int err; + u32 reg;
port = tegra_xusb_find_usb2_port(padctl, index); if (!port) { @@ -907,6 +908,16 @@ static int tegra186_utmi_phy_exit(struct return -ENODEV; }
+ if (port->mode == USB_DR_MODE_OTG || + port->mode == USB_DR_MODE_PERIPHERAL) { + /* reset VBUS&ID OVERRIDE */ + reg = padctl_readl(padctl, USB2_VBUS_ID); + reg &= ~VBUS_OVERRIDE; + reg &= ~ID_OVERRIDE(~0); + reg |= ID_OVERRIDE_FLOATING; + padctl_writel(padctl, reg, USB2_VBUS_ID); + } + if (port->supply && port->mode == USB_DR_MODE_HOST) { err = regulator_disable(port->supply); if (err) {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kaustabh Chakraborty kauschluss@disroot.org
commit e2158c953c973adb49383ddea2504faf08d375b7 upstream.
In exynos5_usbdrd_{pipe3,utmi}_set_refclk(), the masks PHYCLKRST_MPLL_MULTIPLIER_MASK and PHYCLKRST_SSC_REFCLKSEL_MASK are not inverted when applied to the register values. Fix it.
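The bug is a classic operator-precedence mistake: '~' binds tighter than '|', so 'reg &= ~A | B | C' inverts only A and leaves the B and C bits uncleared. A standalone user-space sketch (mask values are illustrative, not the driver's) demonstrating the difference:

    #include <stdint.h>
    #include <stdio.h>

    #define MASK_A 0x0000ff00u
    #define MASK_B 0x00ff0000u

    int main(void)
    {
            uint32_t reg = 0xffffffffu;
            uint32_t wrong = reg & (~MASK_A | MASK_B); /* only MASK_A inverted */
            uint32_t fixed = reg & ~(MASK_A | MASK_B); /* both masks cleared */

            printf("wrong = 0x%08x\n", wrong); /* 0xffff00ff: MASK_B bits survive */
            printf("fixed = 0x%08x\n", fixed); /* 0xff0000ff: both fields cleared */
            return 0;
    }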
Cc: stable@vger.kernel.org Fixes: 59025887fb08 ("phy: Add new Exynos5 USB 3.0 PHY driver") Signed-off-by: Kaustabh Chakraborty kauschluss@disroot.org Reviewed-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org Reviewed-by: Anand Moon linux.amoon@gmail.com Link: https://lore.kernel.org/r/20250209-exynos5-usbdrd-masks-v1-1-4f7f83f323d7@di... Signed-off-by: Vinod Koul vkoul@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/phy/samsung/phy-exynos5-usbdrd.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-)
--- a/drivers/phy/samsung/phy-exynos5-usbdrd.c +++ b/drivers/phy/samsung/phy-exynos5-usbdrd.c @@ -288,9 +288,9 @@ exynos5_usbdrd_pipe3_set_refclk(struct p reg |= PHYCLKRST_REFCLKSEL_EXT_REFCLK;
/* FSEL settings corresponding to reference clock */ - reg &= ~PHYCLKRST_FSEL_PIPE_MASK | - PHYCLKRST_MPLL_MULTIPLIER_MASK | - PHYCLKRST_SSC_REFCLKSEL_MASK; + reg &= ~(PHYCLKRST_FSEL_PIPE_MASK | + PHYCLKRST_MPLL_MULTIPLIER_MASK | + PHYCLKRST_SSC_REFCLKSEL_MASK); switch (phy_drd->extrefclk) { case EXYNOS5_FSEL_50MHZ: reg |= (PHYCLKRST_MPLL_MULTIPLIER_50M_REF | @@ -332,9 +332,9 @@ exynos5_usbdrd_utmi_set_refclk(struct ph reg &= ~PHYCLKRST_REFCLKSEL_MASK; reg |= PHYCLKRST_REFCLKSEL_EXT_REFCLK;
- reg &= ~PHYCLKRST_FSEL_UTMI_MASK | - PHYCLKRST_MPLL_MULTIPLIER_MASK | - PHYCLKRST_SSC_REFCLKSEL_MASK; + reg &= ~(PHYCLKRST_FSEL_UTMI_MASK | + PHYCLKRST_MPLL_MULTIPLIER_MASK | + PHYCLKRST_SSC_REFCLKSEL_MASK); reg |= PHYCLKRST_FSEL(phy_drd->extrefclk);
return reg;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paolo Abeni pabeni@redhat.com
commit f865c24bc55158313d5779fc81116023a6940ca3 upstream.
Syzkaller reported a lockdep splat in the PM control path:
WARNING: CPU: 0 PID: 6693 at ./include/net/sock.h:1711 sock_owned_by_me include/net/sock.h:1711 [inline]
WARNING: CPU: 0 PID: 6693 at ./include/net/sock.h:1711 msk_owned_by_me net/mptcp/protocol.h:363 [inline]
WARNING: CPU: 0 PID: 6693 at ./include/net/sock.h:1711 mptcp_pm_nl_addr_send_ack+0x57c/0x610 net/mptcp/pm_netlink.c:788
Modules linked in:
CPU: 0 UID: 0 PID: 6693 Comm: syz.0.205 Not tainted 6.14.0-rc2-syzkaller-00303-gad1b832bf1cf #0
Hardware name: Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
RIP: 0010:sock_owned_by_me include/net/sock.h:1711 [inline]
RIP: 0010:msk_owned_by_me net/mptcp/protocol.h:363 [inline]
RIP: 0010:mptcp_pm_nl_addr_send_ack+0x57c/0x610 net/mptcp/pm_netlink.c:788
Code: 5b 41 5c 41 5d 41 5e 41 5f 5d c3 cc cc cc cc e8 ca 7b d3 f5 eb b9 e8 c3 7b d3 f5 90 0f 0b 90 e9 dd fb ff ff e8 b5 7b d3 f5 90 <0f> 0b 90 e9 3e fb ff ff 44 89 f1 80 e1 07 38 c1 0f 8c eb fb ff ff
RSP: 0000:ffffc900034f6f60 EFLAGS: 00010283
RAX: ffffffff8bee3c2b RBX: 0000000000000001 RCX: 0000000000080000
RDX: ffffc90004d42000 RSI: 000000000000a407 RDI: 000000000000a408
RBP: ffffc900034f7030 R08: ffffffff8bee37f6 R09: 0100000000000000
R10: dffffc0000000000 R11: ffffed100bcc62e4 R12: ffff88805e6316e0
R13: ffff88805e630c00 R14: dffffc0000000000 R15: ffff88805e630c00
FS:  00007f7e9a7e96c0(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b2fd18ff8 CR3: 0000000032c24000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 mptcp_pm_remove_addr+0x103/0x1d0 net/mptcp/pm.c:59
 mptcp_pm_remove_anno_addr+0x1f4/0x2f0 net/mptcp/pm_netlink.c:1486
 mptcp_nl_remove_subflow_and_signal_addr net/mptcp/pm_netlink.c:1518 [inline]
 mptcp_pm_nl_del_addr_doit+0x118d/0x1af0 net/mptcp/pm_netlink.c:1629
 genl_family_rcv_msg_doit net/netlink/genetlink.c:1115 [inline]
 genl_family_rcv_msg net/netlink/genetlink.c:1195 [inline]
 genl_rcv_msg+0xb1f/0xec0 net/netlink/genetlink.c:1210
 netlink_rcv_skb+0x206/0x480 net/netlink/af_netlink.c:2543
 genl_rcv+0x28/0x40 net/netlink/genetlink.c:1219
 netlink_unicast_kernel net/netlink/af_netlink.c:1322 [inline]
 netlink_unicast+0x7f6/0x990 net/netlink/af_netlink.c:1348
 netlink_sendmsg+0x8de/0xcb0 net/netlink/af_netlink.c:1892
 sock_sendmsg_nosec net/socket.c:718 [inline]
 __sock_sendmsg+0x221/0x270 net/socket.c:733
 ____sys_sendmsg+0x53a/0x860 net/socket.c:2573
 ___sys_sendmsg net/socket.c:2627 [inline]
 __sys_sendmsg+0x269/0x350 net/socket.c:2659
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f7e9998cde9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f7e9a7e9038 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007f7e99ba5fa0 RCX: 00007f7e9998cde9
RDX: 000000002000c094 RSI: 0000400000000000 RDI: 0000000000000007
RBP: 00007f7e99a0e2a0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f7e99ba5fa0 R15: 00007fff49231088
Indeed the PM can try to send a RM_ADDR over a msk without acquiring first the msk socket lock.
The bugged code-path comes from an early optimization: when there are no subflows, the PM should (usually) not send RM_ADDR notifications.
The above statement is incorrect: without locks, another process could concurrently create a new subflow and cause the RM_ADDR generation.
Additionally, the supposed optimization is not very effective even performance-wise, as most MPTCP sockets should have at least one subflow: the MPC one.

Address the issue by removing the buggy code path; the existing 'slow path' correctly handles even the edge case.
Fixes: b6c08380860b ("mptcp: remove addr and subflow in PM netlink") Cc: stable@vger.kernel.org Reported-by: syzbot+cd3ce3d03a3393ae9700@syzkaller.appspotmail.com Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/546 Signed-off-by: Paolo Abeni pabeni@redhat.com Reviewed-by: Matthieu Baerts (NGI0) matttbe@kernel.org Signed-off-by: Matthieu Baerts (NGI0) matttbe@kernel.org Link: https://patch.msgid.link/20250224-net-mptcp-misc-fixes-v1-1-f550f636b435@ker... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/mptcp/pm_netlink.c | 5 ----- 1 file changed, 5 deletions(-)
--- a/net/mptcp/pm_netlink.c +++ b/net/mptcp/pm_netlink.c @@ -1554,11 +1554,6 @@ static int mptcp_nl_remove_subflow_and_s if (mptcp_pm_is_userspace(msk)) goto next;
- if (list_empty(&msk->conn_list)) { - mptcp_pm_remove_anno_addr(msk, addr, false); - goto next; - } - lock_sock(sk); remove_subflow = lookup_subflow_by_saddr(&msk->conn_list, addr); mptcp_pm_remove_anno_addr(msk, addr, remove_subflow &&
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Matthieu Baerts (NGI0) matttbe@kernel.org
commit 8668860b0ad32a13fcd6c94a0995b7aa7638c9ef upstream.
Before this patch, if the checksum was not used, the subflow was only reset if map_data_len was != 0. If there were no MPTCP options or an invalid mapping, map_data_len was not set to the data len, and then the subflow was not reset as it should have been, leaving the MPTCP connection in a wrong fallback mode.
This map_data_len condition was introduced to handle the reception of the infinite mapping. Instead, a new dedicated mapping error could have been returned and treated as a special case. However, commit 31bf11de146c ("mptcp: introduce MAPPING_BAD_CSUM") was introduced by Paolo Abeni soon after, and backported later on to stable. It better handles the csum case, and it means the exception for valid_csum_seen in subflow_can_fallback(), plus this one for the infinite mapping in subflow_check_data_avail(), are no longer needed.
In other words, the code can be simplified there: a fallback should only be done if msk->allow_infinite_fallback is set. This boolean is set to false once MPTCP-specific operations acting on the whole MPTCP connection vs the initial path have been done, e.g. a second path has been created, or an MPTCP re-injection -- yes, possible even with a single subflow. The subflow_can_fallback() helper can then be dropped, and replaced by this single condition.
This also makes the code clearer: a fallback should only be done if it is possible to do so.
While at it, no need to set map_data_len to 0 in get_mapping_status() for the infinite mapping case: it will be set to skb->len just after, at the end of subflow_check_data_avail(), and not read in between.
Fixes: f8d4bcacff3b ("mptcp: infinite mapping receiving") Cc: stable@vger.kernel.org Reported-by: Chester A. Unal chester.a.unal@xpedite-tech.com Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/544 Acked-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Matthieu Baerts (NGI0) matttbe@kernel.org Tested-by: Chester A. Unal chester.a.unal@xpedite-tech.com Link: https://patch.msgid.link/20250224-net-mptcp-misc-fixes-v1-2-f550f636b435@ker... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/mptcp/subflow.c | 15 +-------------- 1 file changed, 1 insertion(+), 14 deletions(-)
--- a/net/mptcp/subflow.c +++ b/net/mptcp/subflow.c @@ -1020,7 +1020,6 @@ static enum mapping_status get_mapping_s if (data_len == 0) { pr_debug("infinite mapping received\n"); MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPRX); - subflow->map_data_len = 0; return MAPPING_INVALID; }
@@ -1162,18 +1161,6 @@ static void subflow_sched_work_if_closed mptcp_schedule_work(sk); }
-static bool subflow_can_fallback(struct mptcp_subflow_context *subflow) -{ - struct mptcp_sock *msk = mptcp_sk(subflow->conn); - - if (subflow->mp_join) - return false; - else if (READ_ONCE(msk->csum_enabled)) - return !subflow->valid_csum_seen; - else - return READ_ONCE(msk->allow_infinite_fallback); -} - static void mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk) { struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk); @@ -1277,7 +1264,7 @@ fallback: return true; }
- if (!subflow_can_fallback(subflow) && subflow->map_data_len) { + if (!READ_ONCE(msk->allow_infinite_fallback)) { /* fatal protocol error, close the socket. * subflow_error_report() will introduce the appropriate barriers */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ard Biesheuvel ardb@kernel.org
commit 68f3ea7ee199ef77551e090dfef5a49046ea8443 upstream.
In the kernel, there are architectures (x86, arm64) that perform boot-time relocation (for KASLR) without relying on PIE codegen. In this case, all const global objects are emitted into .rodata, including const objects with fields that will be fixed up by the boot-time relocation code. This implies that .rodata (and .text in some cases) need to be writable at boot, but they will usually be mapped read-only as soon as the boot completes.
When using PIE codegen, the compiler will emit const global objects into .data.rel.ro rather than .rodata if the object contains fields that need such fixups at boot-time. This permits the linker to annotate such regions as requiring read-write access only at load time, but not at execution time (in user space), while keeping .rodata truly const (in user space, this is important for reducing the CoW footprint of dynamic executables).
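As a concrete illustration of the distinction (a sketch with made-up symbol names): a const object whose initializer contains an address needs a load-time relocation under PIE and is emitted into .data.rel.ro, while a const object holding only plain values stays in .rodata:

    static int target;

    /* const, but the initializer holds an address: under PIE codegen this
     * object lands in .data.rel.ro because it needs a load-time fixup */
    const int *const ptr_table[] = { &target };

    /* truly constant data: stays in .rodata */
    const int plain_table[] = { 1, 2, 3 };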
This distinction does not matter for the kernel, but it does imply that const data will end up in writable memory if the .data.rel.ro sections are not treated in a special way, as they will end up in the writable .data segment by default.
So emit .data.rel.ro into the .rodata segment.
Cc: stable@vger.kernel.org Signed-off-by: Ard Biesheuvel ardb@kernel.org Link: https://lore.kernel.org/r/20250221135704.431269-5-ardb+git@google.com Signed-off-by: Josh Poimboeuf jpoimboe@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/asm-generic/vmlinux.lds.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/include/asm-generic/vmlinux.lds.h +++ b/include/asm-generic/vmlinux.lds.h @@ -461,7 +461,7 @@ . = ALIGN((align)); \ .rodata : AT(ADDR(.rodata) - LOAD_OFFSET) { \ __start_rodata = .; \ - *(.rodata) *(.rodata.*) \ + *(.rodata) *(.rodata.*) *(.data.rel.ro*) \ SCHED_DATA \ RO_AFTER_INIT_DATA /* Read only after init */ \ . = ALIGN(8); \
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 82c387ef7568c0d96a918a5a78d9cad6256cfa15 upstream.
David reported a warning observed while loop testing kexec jump:
Interrupts enabled after irqrouter_resume+0x0/0x50
WARNING: CPU: 0 PID: 560 at drivers/base/syscore.c:103 syscore_resume+0x18a/0x220
 kernel_kexec+0xf6/0x180
 __do_sys_reboot+0x206/0x250
 do_syscall_64+0x95/0x180
The corresponding interrupt flag trace:
hardirqs last enabled at (15573): [<ffffffffa8281b8e>] __up_console_sem+0x7e/0x90
hardirqs last disabled at (15580): [<ffffffffa8281b73>] __up_console_sem+0x63/0x90
That means __up_console_sem() was invoked with interrupts enabled. Further instrumentation revealed that in the interrupt disabled section of kexec jump one of the syscore_suspend() callbacks woke up a task, which set the NEED_RESCHED flag. A later callback in the resume path invoked cond_resched() which in turn led to the invocation of the scheduler:
__cond_resched+0x21/0x60
down_timeout+0x18/0x60
acpi_os_wait_semaphore+0x4c/0x80
acpi_ut_acquire_mutex+0x3d/0x100
acpi_ns_get_node+0x27/0x60
acpi_ns_evaluate+0x1cb/0x2d0
acpi_rs_set_srs_method_data+0x156/0x190
acpi_pci_link_set+0x11c/0x290
irqrouter_resume+0x54/0x60
syscore_resume+0x6a/0x200
kernel_kexec+0x145/0x1c0
__do_sys_reboot+0xeb/0x240
do_syscall_64+0x95/0x180
This is a long standing problem, which probably got more visible with the recent printk changes. Something does a task wakeup and the scheduler sets the NEED_RESCHED flag. cond_resched() sees it set and invokes schedule() from a completely bogus context. The scheduler enables interrupts after context switching, which causes the above warning at the end.
Quite a few of the code paths in syscore_suspend()/resume() can result in triggering a wakeup with exactly the same consequences. They might not have done so yet, but as they share a lot of code with normal operations, it's just a question of time.
The problem only affects the PREEMPT_NONE and PREEMPT_VOLUNTARY scheduling models. Full preemption is not affected as cond_resched() is disabled and the preemption check preemptible() takes the interrupt disabled flag into account.
Cure the problem by adding a corresponding check into cond_resched().
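The change itself is a single extra condition; a simplified sketch of the affected path (surrounding kernel context omitted), matching the diff below:

    int __sched __cond_resched(void)
    {
            /* never call into the scheduler with interrupts disabled */
            if (should_resched(0) && !irqs_disabled()) {
                    preempt_schedule_common();
                    return 1;
            }
            return 0;
    }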
Reported-by: David Woodhouse dwmw@amazon.co.uk Suggested-by: Peter Zijlstra peterz@infradead.org Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Ingo Molnar mingo@kernel.org Tested-by: David Woodhouse dwmw@amazon.co.uk Cc: Linus Torvalds torvalds@linux-foundation.org Cc: stable@vger.kernel.org Closes: https://lore.kernel.org/all/7717fe2ac0ce5f0a2c43fdab8b11f4483d54a2a4.camel@i... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/sched/core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -8369,7 +8369,7 @@ SYSCALL_DEFINE0(sched_yield) #if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC) int __sched __cond_resched(void) { - if (should_resched(0)) { + if (should_resched(0) && !irqs_disabled()) { preempt_schedule_common(); return 1; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andreas Schwab schwab@suse.de
commit 599c44cd21f4967774e0acf58f734009be4aea9a upstream.
Make sure the compare value in the lr/sc loop is sign extended to match what lr.w does. Fortunately, due to the compiler keeping the register contents sign extended anyway the lack of the explicit extension didn't result in wrong code so far, but this cannot be relied upon.
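A user-space sketch of why the explicit sign extension matters on a 64-bit RISC-V (LP64) target: lr.w sign-extends the loaded 32-bit word into the 64-bit register, so a compare value with bit 31 set must be sign-extended the same way or the comparison can spuriously fail. The values here are illustrative:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint32_t oldval = 0x80000000u;        /* bit 31 set */
            int64_t loaded = (int32_t)oldval;     /* what lr.w produces */
            int64_t zero_ext = (uint64_t)oldval;  /* unextended compare value */
            int64_t sign_ext = (long)(int)oldval; /* the cast from the patch (LP64) */

            printf("loaded   = %lld\n", (long long)loaded);   /* -2147483648 */
            printf("zero_ext = %lld\n", (long long)zero_ext); /*  2147483648 */
            printf("sign_ext = %lld\n", (long long)sign_ext); /* -2147483648 */
            return 0;
    }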
Fixes: b90edb33010b ("RISC-V: Add futex support.") Signed-off-by: Andreas Schwab schwab@suse.de Reviewed-by: Alexandre Ghiti alexghiti@rivosinc.com Reviewed-by: Björn Töpel bjorn@rivosinc.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/mvmfrkv2vhz.fsf@suse.de Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/riscv/include/asm/futex.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/riscv/include/asm/futex.h +++ b/arch/riscv/include/asm/futex.h @@ -93,7 +93,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, _ASM_EXTABLE_UACCESS_ERR(1b, 3b, %[r]) \ _ASM_EXTABLE_UACCESS_ERR(2b, 3b, %[r]) \ : [r] "+r" (ret), [v] "=&r" (val), [u] "+m" (*uaddr), [t] "=&r" (tmp) - : [ov] "Jr" (oldval), [nv] "Jr" (newval) + : [ov] "Jr" ((long)(int)oldval), [nv] "Jr" (newval) : "memory"); __disable_user_access();
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sohaib Nadeem sohaib.nadeem@amd.com
commit 0484e05d048b66d01d1f3c1d2306010bb57d8738 upstream.
[why]: issues fixed:
- comparison with a wider integer type in a loop condition, which can cause infinite loops
- pointer dereference before the null check
Cc: Mario Limonciello mario.limonciello@amd.com Cc: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Reviewed-by: Josip Pavic josip.pavic@amd.com Acked-by: Aurabindo Pillai aurabindo.pillai@amd.com Signed-off-by: Sohaib Nadeem sohaib.nadeem@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com [ delete changes made in drivers/gpu/drm/amd/display/dc/link/link_validation.c for this file is not present in linux-6.1.y ] Signed-off-by: Jianqi Ren jianqi.ren.cn@windriver.com Signed-off-by: He Zhe zhe.he@windriver.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-)
--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c +++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c @@ -1862,19 +1862,21 @@ static enum bp_result get_firmware_info_ /* Vega12 */ smu_info_v3_2 = GET_IMAGE(struct atom_smu_info_v3_2, DATA_TABLES(smu_info)); - DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", smu_info_v3_2->gpuclk_ss_percentage); if (!smu_info_v3_2) return BP_RESULT_BADBIOSTABLE;
+ DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", smu_info_v3_2->gpuclk_ss_percentage); + info->default_engine_clk = smu_info_v3_2->bootup_dcefclk_10khz * 10; } else if (revision.minor == 3) { /* Vega20 */ smu_info_v3_3 = GET_IMAGE(struct atom_smu_info_v3_3, DATA_TABLES(smu_info)); - DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", smu_info_v3_3->gpuclk_ss_percentage); if (!smu_info_v3_3) return BP_RESULT_BADBIOSTABLE;
+ DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", smu_info_v3_3->gpuclk_ss_percentage); + info->default_engine_clk = smu_info_v3_3->bootup_dcefclk_10khz * 10; }
@@ -2439,10 +2441,11 @@ static enum bp_result get_integrated_inf info_v11 = GET_IMAGE(struct atom_integrated_system_info_v1_11, DATA_TABLES(integratedsysteminfo));
- DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", info_v11->gpuclk_ss_percentage); if (info_v11 == NULL) return BP_RESULT_BADBIOSTABLE;
+ DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", info_v11->gpuclk_ss_percentage); + info->gpu_cap_info = le32_to_cpu(info_v11->gpucapinfo); /* @@ -2654,11 +2657,12 @@ static enum bp_result get_integrated_inf
info_v2_1 = GET_IMAGE(struct atom_integrated_system_info_v2_1, DATA_TABLES(integratedsysteminfo)); - DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", info_v2_1->gpuclk_ss_percentage);
if (info_v2_1 == NULL) return BP_RESULT_BADBIOSTABLE;
+ DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", info_v2_1->gpuclk_ss_percentage); + info->gpu_cap_info = le32_to_cpu(info_v2_1->gpucapinfo); /* @@ -2816,11 +2820,11 @@ static enum bp_result get_integrated_inf info_v2_2 = GET_IMAGE(struct atom_integrated_system_info_v2_2, DATA_TABLES(integratedsysteminfo));
- DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", info_v2_2->gpuclk_ss_percentage); - if (info_v2_2 == NULL) return BP_RESULT_BADBIOSTABLE;
+ DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", info_v2_2->gpuclk_ss_percentage); + info->gpu_cap_info = le32_to_cpu(info_v2_2->gpucapinfo); /*
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: chr[] chris@rudorff.com
commit 91dcc66b34beb72dde8412421bdc1b4cd40e4fb8 upstream.
resume and the irq handler happily race in set_power_state()
* amdgpu_legacy_dpm_compute_clocks() needs lock
* protect irq work handler
* fix dpm_enabled usage
v2: fix clang build, integrate Lijo's comments (Alex)
Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/2524 Fixes: 3712e7a49459 ("drm/amd/pm: unified lock protections in amdgpu_dpm.c") Reviewed-by: Lijo Lazar lijo.lazar@amd.com Tested-by: Maciej S. Szmigiero mail@maciej.szmigiero.name # on Oland PRO Signed-off-by: chr[] chris@rudorff.com Signed-off-by: Alex Deucher alexander.deucher@amd.com (cherry picked from commit ee3dc9e204d271c9c7a8d4d38a0bce4745d33e71) Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c | 25 ++++++++++++++++++------ drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c | 8 +++++-- drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c | 26 +++++++++++++++++++------ 3 files changed, 45 insertions(+), 14 deletions(-)
--- a/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c +++ b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c @@ -3056,6 +3056,7 @@ static int kv_dpm_hw_init(void *handle) if (!amdgpu_dpm) return 0;
+ mutex_lock(&adev->pm.mutex); kv_dpm_setup_asic(adev); ret = kv_dpm_enable(adev); if (ret) @@ -3063,6 +3064,8 @@ static int kv_dpm_hw_init(void *handle) else adev->pm.dpm_enabled = true; amdgpu_legacy_dpm_compute_clocks(adev); + mutex_unlock(&adev->pm.mutex); + return ret; }
@@ -3080,32 +3083,42 @@ static int kv_dpm_suspend(void *handle) { struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ cancel_work_sync(&adev->pm.dpm.thermal.work); + if (adev->pm.dpm_enabled) { + mutex_lock(&adev->pm.mutex); + adev->pm.dpm_enabled = false; /* disable dpm */ kv_dpm_disable(adev); /* reset the power state */ adev->pm.dpm.current_ps = adev->pm.dpm.requested_ps = adev->pm.dpm.boot_ps; + mutex_unlock(&adev->pm.mutex); } return 0; }
static int kv_dpm_resume(void *handle) { - int ret; + int ret = 0; struct amdgpu_device *adev = (struct amdgpu_device *)handle;
- if (adev->pm.dpm_enabled) { + if (!amdgpu_dpm) + return 0; + + if (!adev->pm.dpm_enabled) { + mutex_lock(&adev->pm.mutex); /* asic init will reset to the boot state */ kv_dpm_setup_asic(adev); ret = kv_dpm_enable(adev); - if (ret) + if (ret) { adev->pm.dpm_enabled = false; - else + } else { adev->pm.dpm_enabled = true; - if (adev->pm.dpm_enabled) amdgpu_legacy_dpm_compute_clocks(adev); + } + mutex_unlock(&adev->pm.mutex); } - return 0; + return ret; }
static bool kv_dpm_is_idle(void *handle) --- a/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c +++ b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c @@ -1018,9 +1018,12 @@ void amdgpu_dpm_thermal_work_handler(str enum amd_pm_state_type dpm_state = POWER_STATE_TYPE_INTERNAL_THERMAL; int temp, size = sizeof(temp);
- if (!adev->pm.dpm_enabled) - return; + mutex_lock(&adev->pm.mutex);
+ if (!adev->pm.dpm_enabled) { + mutex_unlock(&adev->pm.mutex); + return; + } if (!pp_funcs->read_sensor(adev->powerplay.pp_handle, AMDGPU_PP_SENSOR_GPU_TEMP, (void *)&temp, @@ -1042,4 +1045,5 @@ void amdgpu_dpm_thermal_work_handler(str adev->pm.dpm.state = dpm_state;
amdgpu_legacy_dpm_compute_clocks(adev->powerplay.pp_handle); + mutex_unlock(&adev->pm.mutex); } --- a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c +++ b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c @@ -7796,6 +7796,7 @@ static int si_dpm_hw_init(void *handle) if (!amdgpu_dpm) return 0;
+ mutex_lock(&adev->pm.mutex); si_dpm_setup_asic(adev); ret = si_dpm_enable(adev); if (ret) @@ -7803,6 +7804,7 @@ static int si_dpm_hw_init(void *handle) else adev->pm.dpm_enabled = true; amdgpu_legacy_dpm_compute_clocks(adev); + mutex_unlock(&adev->pm.mutex); return ret; }
@@ -7820,32 +7822,44 @@ static int si_dpm_suspend(void *handle) { struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ cancel_work_sync(&adev->pm.dpm.thermal.work); + if (adev->pm.dpm_enabled) { + mutex_lock(&adev->pm.mutex); + adev->pm.dpm_enabled = false; /* disable dpm */ si_dpm_disable(adev); /* reset the power state */ adev->pm.dpm.current_ps = adev->pm.dpm.requested_ps = adev->pm.dpm.boot_ps; + mutex_unlock(&adev->pm.mutex); } + return 0; }
static int si_dpm_resume(void *handle) { - int ret; + int ret = 0; struct amdgpu_device *adev = (struct amdgpu_device *)handle;
- if (adev->pm.dpm_enabled) { + if (!amdgpu_dpm) + return 0; + + if (!adev->pm.dpm_enabled) { /* asic init will reset to the boot state */ + mutex_lock(&adev->pm.mutex); si_dpm_setup_asic(adev); ret = si_dpm_enable(adev); - if (ret) + if (ret) { adev->pm.dpm_enabled = false; - else + } else { adev->pm.dpm_enabled = true; - if (adev->pm.dpm_enabled) amdgpu_legacy_dpm_compute_clocks(adev); + } + mutex_unlock(&adev->pm.mutex); } - return 0; + + return ret; }
static bool si_dpm_is_idle(void *handle)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit c157d351460bcf202970e97e611cb6b54a3dd4a4 upstream.
The Intel idle driver is preferred over the ACPI processor idle driver, but fails to implement the workaround for Core2 generation CPUs, where the TSC stops in C2 and deeper C-states. This causes stalls and boot delays when the clocksource watchdog does not catch the unstable TSC before the CPU goes deep idle for the first time.
The ACPI driver marks the TSC unstable when it detects that the CPU supports C2 or deeper and the CPU does not have a non-stop TSC.
Add the equivalent workaround to the Intel idle driver to cure that.
Fixes: 18734958e9bf ("intel_idle: Use ACPI _CST for processor models without C-state tables") Reported-by: Fab Stz fabstz-it@yahoo.fr Signed-off-by: Thomas Gleixner tglx@linutronix.de Tested-by: Fab Stz fabstz-it@yahoo.fr Cc: All applicable stable@vger.kernel.org Closes: https://lore.kernel.org/all/10cf96aa-1276-4bd4-8966-c890377030c3@yahoo.fr Link: https://patch.msgid.link/87bjupfy7f.ffs@tglx Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/idle/intel_idle.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/drivers/idle/intel_idle.c +++ b/drivers/idle/intel_idle.c @@ -56,6 +56,7 @@ #include <asm/nospec-branch.h> #include <asm/mwait.h> #include <asm/msr.h> +#include <asm/tsc.h> #include <asm/fpu/api.h>
#define INTEL_IDLE_VERSION "0.5.1" @@ -1583,6 +1584,9 @@ static void __init intel_idle_init_cstat if (intel_idle_state_needs_timer_stop(state)) state->flags |= CPUIDLE_FLAG_TIMER_STOP;
+ if (cx->type > ACPI_STATE_C1 && !boot_cpu_has(X86_FEATURE_NONSTOP_TSC)) + mark_tsc_unstable("TSC halts in idle"); + state->enter = intel_idle; state->enter_s2idle = intel_idle_s2idle; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jiaxun Yang jiaxun.yang@flygoat.com
commit 11ba1728be3edb6928791f4c622f154ebe228ae6 upstream.
On architectures with delay slots, the architecture-level instruction pointer (or program counter) in pt_regs may differ from where the exception was triggered.

Introduce an exception_ip arch hook to invoke architecture code and determine the actual instruction pointer of the exception.
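The hook follows the usual arch-override pattern, reconstructed here from the diff below: MIPS provides an exception_ip() that resolves the delay-slot case, and the generic header falls back to instruction_pointer():

    /* generic fallback in include/linux/ptrace.h */
    #ifndef exception_ip
    #define exception_ip(x) instruction_pointer(x)
    #endif

    /* MIPS override in arch/mips/kernel/ptrace.c */
    unsigned long exception_ip(struct pt_regs *regs)
    {
            return exception_epc(regs);
    }
    EXPORT_SYMBOL(exception_ip);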
Link: https://lore.kernel.org/lkml/00d1b813-c55f-4365-8d81-d70258e10b16@app.fastma... Signed-off-by: Jiaxun Yang jiaxun.yang@flygoat.com Signed-off-by: Thomas Bogendoerfer tsbogend@alpha.franken.de Cc: Salvatore Bonaccorso carnil@debian.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/mips/include/asm/ptrace.h | 2 ++ arch/mips/kernel/ptrace.c | 7 +++++++ include/linux/ptrace.h | 4 ++++ 3 files changed, 13 insertions(+)
--- a/arch/mips/include/asm/ptrace.h +++ b/arch/mips/include/asm/ptrace.h @@ -155,6 +155,8 @@ static inline long regs_return_value(str }
#define instruction_pointer(regs) ((regs)->cp0_epc) +extern unsigned long exception_ip(struct pt_regs *regs); +#define exception_ip(regs) exception_ip(regs) #define profile_pc(regs) instruction_pointer(regs)
extern asmlinkage long syscall_trace_enter(struct pt_regs *regs); --- a/arch/mips/kernel/ptrace.c +++ b/arch/mips/kernel/ptrace.c @@ -31,6 +31,7 @@ #include <linux/seccomp.h> #include <linux/ftrace.h>
+#include <asm/branch.h> #include <asm/byteorder.h> #include <asm/cpu.h> #include <asm/cpu-info.h> @@ -48,6 +49,12 @@ #define CREATE_TRACE_POINTS #include <trace/events/syscalls.h>
+unsigned long exception_ip(struct pt_regs *regs) +{ + return exception_epc(regs); +} +EXPORT_SYMBOL(exception_ip); + /* * Called by kernel/ptrace.c when detaching.. * --- a/include/linux/ptrace.h +++ b/include/linux/ptrace.h @@ -402,6 +402,10 @@ static inline void user_single_step_repo #define current_user_stack_pointer() user_stack_pointer(current_pt_regs()) #endif
+#ifndef exception_ip +#define exception_ip(x) instruction_pointer(x) +#endif + extern int task_current_syscall(struct task_struct *target, struct syscall_info *info);
extern void sigaction_compat_abi(struct k_sigaction *act, struct k_sigaction *oact);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jiaxun Yang jiaxun.yang@flygoat.com
commit 8fa5070833886268e4fb646daaca99f725b378e9 upstream.
On architectures with delay slots, instruction_pointer() may differ from where the exception was triggered.

Use the exception_ip hook we just introduced when searching exception tables to get rid of the problem.
Fixes: 4bce37a68ff8 ("mips/mm: Convert to using lock_mm_and_find_vma()") Reported-by: Xi Ruoyao xry111@xry111.site Link: https://lore.kernel.org/r/75e9fd7b08562ad9b456a5bdaacb7cc220311cc9.camel@xry... Suggested-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Jiaxun Yang jiaxun.yang@flygoat.com Signed-off-by: Thomas Bogendoerfer tsbogend@alpha.franken.de Cc: Salvatore Bonaccorso carnil@debian.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/memory.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/mm/memory.c +++ b/mm/memory.c @@ -5323,7 +5323,7 @@ static inline bool get_mmap_lock_careful }
if (regs && !user_mode(regs)) { - unsigned long ip = instruction_pointer(regs); + unsigned long ip = exception_ip(regs); if (!search_exception_tables(ip)) return false; } @@ -5348,7 +5348,7 @@ static inline bool upgrade_mmap_lock_car { mmap_read_unlock(mm); if (regs && !user_mode(regs)) { - unsigned long ip = instruction_pointer(regs); + unsigned long ip = exception_ip(regs); if (!search_exception_tables(ip)) return false; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Phillip Lougher phillip@squashfs.org.uk
commit 9253c54e01b6505d348afbc02abaa4d9f8a01395 upstream.
Syzkaller has produced an out-of-bounds access in fill_meta_index().

That out-of-bounds access is ultimately caused by the inode having an inode number with the invalid value of zero, which was not checked.

The reason this causes the out-of-bounds access is due to the following sequence of events:
1. Fill_meta_index() is called to allocate (via empty_meta_index()) and fill a metadata index. It however suffers a data read error and aborts, invalidating the newly returned empty metadata index. It does this by setting the inode number of the index to zero, which means unused (zero is not a valid inode number).
2. When fill_meta_index() is subsequently called again on another read operation, locate_meta_index() returns the previous index because it matches the inode number of 0. Because this index has been returned it is expected to have been filled, and because it hasn't been, an out of bounds access is performed.
This patch adds a sanity check which checks that the inode number is not zero when the inode is created and returns -EINVAL if it is.
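The check, reconstructed from the diff below for readability: the inode number is now assigned (and validated) before the ID lookups instead of afterwards:

    inode->i_ino = le32_to_cpu(sqsh_ino->inode_number);
    if (inode->i_ino == 0)
            return -EINVAL;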
[phillip@squashfs.org.uk: whitespace fix] Link: https://lkml.kernel.org/r/20240409204723.446925-1-phillip@squashfs.org.uk Link: https://lkml.kernel.org/r/20240408220206.435788-1-phillip@squashfs.org.uk Signed-off-by: Phillip Lougher phillip@squashfs.org.uk Reported-by: "Ubisectech Sirius" bugreport@ubisectech.com Closes: https://lore.kernel.org/lkml/87f5c007-b8a5-41ae-8b57-431e924c5915.bugreport@... Cc: Christian Brauner brauner@kernel.org Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Xiangyu Chen xiangyu.chen@windriver.com Signed-off-by: He Zhe zhe.he@windriver.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/squashfs/inode.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
--- a/fs/squashfs/inode.c +++ b/fs/squashfs/inode.c @@ -48,6 +48,10 @@ static int squashfs_new_inode(struct sup gid_t i_gid; int err;
+ inode->i_ino = le32_to_cpu(sqsh_ino->inode_number); + if (inode->i_ino == 0) + return -EINVAL; + err = squashfs_get_id(sb, le16_to_cpu(sqsh_ino->uid), &i_uid); if (err) return err; @@ -58,7 +62,6 @@ static int squashfs_new_inode(struct sup
i_uid_write(inode, i_uid); i_gid_write(inode, i_gid); - inode->i_ino = le32_to_cpu(sqsh_ino->inode_number); inode->i_mtime.tv_sec = le32_to_cpu(sqsh_ino->mtime); inode->i_atime.tv_sec = inode->i_mtime.tv_sec; inode->i_ctime.tv_sec = inode->i_mtime.tv_sec;
Hi Greg,

Could you please help cherry-pick this commit from 6.1 to the 5.15 and 5.10 branches? Thanks!

This also impacts the 5.15 and 5.10 branches; I tried cherry-picking to 5.15/5.10 in my local setup and no conflicts occurred.
Here is 6.1 commit information:
Squashfs: check the inode number is not the invalid value of zero
author		Phillip Lougher phillip@squashfs.org.uk	2024-04-08 23:02:06 +0100
committer	Greg Kroah-Hartman gregkh@linuxfoundation.org	2025-03-07 16:56:51 +0100
commit		5b99dea79650b50909c50aba24fbae00f203f013 (patch)
Br,
Xiangyu
On 3/6/25 01:49, Greg Kroah-Hartman wrote:
[...]
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Quang Le quanglex97@gmail.com
commit 647cef20e649c576dff271e018d5d15d998b629d upstream.
Expected behaviour: in case we reach the scheduler's limit, pfifo_tail_enqueue() drops a packet from the scheduler's queue and decreases the scheduler's qlen by one. Then pfifo_tail_enqueue() enqueues the new packet and increases the scheduler's qlen by one. Finally, pfifo_tail_enqueue() returns the `NET_XMIT_CN` status code.

Weird behaviour: in case we set `sch->limit == 0` and trigger pfifo_tail_enqueue() on a scheduler that has no packet, the 'drop a packet' step does nothing. This means the scheduler's qlen still equals 0. We then continue to enqueue the new packet and increase the scheduler's qlen by one. In summary, pfifo_tail_enqueue() can be leveraged to increase qlen by one and return the `NET_XMIT_CN` status code.
The problem is: let's say we have two qdiscs: Qdisc_A and Qdisc_B.
- Qdisc_A's type must have a '->graft()' function to create a parent/child relationship. Let's say Qdisc_A's type is `hfsc`. Enqueueing a packet to this qdisc triggers `hfsc_enqueue`.
- Qdisc_B's type is pfifo_head_drop. Enqueueing a packet to this qdisc triggers `pfifo_tail_enqueue`.
- Qdisc_B is configured to have `sch->limit == 0`.
- Qdisc_A is configured to route the enqueued packet to Qdisc_B.

Enqueueing a packet through Qdisc_A leads to:
- hfsc_enqueue(Qdisc_A) -> pfifo_tail_enqueue(Qdisc_B)
- Qdisc_B->q.qlen += 1
- pfifo_tail_enqueue() returns `NET_XMIT_CN`
- hfsc_enqueue() checks for `NET_XMIT_SUCCESS`, sees `NET_XMIT_CN` => hfsc_enqueue() doesn't increase the qlen of Qdisc_A.
The whole process leads to a situation where Qdisc_A->q.qlen == 0 and Qdisc_B->q.qlen == 1. Replacing 'hfsc' with another type (for example: 'drr') still leads to the same problem. This violates the design, where the parent's qlen should equal the sum of its children's qlen.
Bug impact: This issue can be used for user->kernel privilege escalation when it is reachable.
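The fix is a guard at the top of pfifo_tail_enqueue(), reconstructed here from the diff below:

    /* a qdisc with limit == 0 can never hold a packet, so drop up front
     * instead of "dropping" from an empty queue and enqueueing anyway */
    if (unlikely(READ_ONCE(sch->limit) == 0))
            return qdisc_drop(skb, sch, to_free);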
Fixes: 57dbb2d83d10 ("sched: add head drop fifo queue") Reported-by: Quang Le quanglex97@gmail.com Signed-off-by: Quang Le quanglex97@gmail.com Signed-off-by: Cong Wang cong.wang@bytedance.com Link: https://patch.msgid.link/20250204005841.223511-2-xiyou.wangcong@gmail.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lee Jones lee@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/sched/sch_fifo.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/net/sched/sch_fifo.c +++ b/net/sched/sch_fifo.c @@ -39,6 +39,9 @@ static int pfifo_tail_enqueue(struct sk_ { unsigned int prev_backlog;
+ if (unlikely(READ_ONCE(sch->limit) == 0)) + return qdisc_drop(skb, sch, to_free); + if (likely(sch->q.qlen < sch->limit)) return qdisc_enqueue_tail(skb, sch);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Fullway Wang fullwaywang@outlook.com
commit 53dbe08504442dc7ba4865c09b3bbf5fe849681b upstream.
The return value of devm_kzalloc() needs to be checked to avoid a NULL pointer dereference. This is similar to CVE-2022-3113.
Link: https://lore.kernel.org/linux-media/PH7PR20MB5925094DAE3FD750C7E39E01BF712@P... Signed-off-by: Fullway Wang fullwaywang@outlook.com Signed-off-by: Mauro Carvalho Chehab mchehab@kernel.org Signed-off-by: Jianqi Ren jianqi.ren.cn@windriver.com Signed-off-by: He Zhe zhe.he@windriver.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/media/platform/mediatek/vcodec/mtk_vcodec_fw_scp.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_fw_scp.c +++ b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_fw_scp.c @@ -65,6 +65,8 @@ struct mtk_vcodec_fw *mtk_vcodec_fw_scp_ }
fw = devm_kzalloc(&dev->plat_dev->dev, sizeof(*fw), GFP_KERNEL); + if (!fw) + return ERR_PTR(-ENOMEM); fw->type = SCP; fw->ops = &mtk_vcodec_rproc_msg; fw->scp = scp;
Hi!
This is the start of the stable review cycle for the 6.1.130 release. There are 176 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
CIP testing did not find any problems here:
https://gitlab.com/cip-project/cip-testing/linux-stable-rc-ci/-/tree/linux-6...
Tested-by: Pavel Machek (CIP) pavel@denx.de
Best regards, Pavel
Hello,
On Wed, 5 Mar 2025 18:46:09 +0100 Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 6.1.130 release. There are 176 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 07 Mar 2025 17:44:26 +0000. Anything received after that time might be too late.
This rc kernel passes DAMON functionality test[1] on my test machine. Attaching the test results summary below. Please note that I retrieved the kernel from linux-stable-rc tree[2].
Tested-by: SeongJae Park sj@kernel.org
[1] https://github.com/damonitor/damon-tests/tree/next/corr [2] 34da6dd4fda1 ("Linux 6.1.130-rc1")
Thanks, SJ
[...]
---
ok 1 selftests: damon: debugfs_attrs.sh
ok 2 selftests: damon: debugfs_schemes.sh
ok 3 selftests: damon: debugfs_target_ids.sh
ok 4 selftests: damon: debugfs_empty_targets.sh
ok 5 selftests: damon: debugfs_huge_count_read_write.sh
ok 6 selftests: damon: debugfs_duplicate_context_creation.sh
ok 7 selftests: damon: sysfs.sh
ok 1 selftests: damon-tests: kunit.sh
ok 2 selftests: damon-tests: huge_count_read_write.sh
ok 3 selftests: damon-tests: buffer_overflow.sh
ok 4 selftests: damon-tests: rm_contexts.sh
ok 5 selftests: damon-tests: record_null_deref.sh
ok 6 selftests: damon-tests: dbgfs_target_ids_read_before_terminate_race.sh
ok 7 selftests: damon-tests: dbgfs_target_ids_pid_leak.sh
ok 8 selftests: damon-tests: damo_tests.sh
ok 9 selftests: damon-tests: masim-record.sh
ok 10 selftests: damon-tests: build_i386.sh
ok 11 selftests: damon-tests: build_arm64.sh # SKIP
ok 12 selftests: damon-tests: build_m68k.sh # SKIP
ok 13 selftests: damon-tests: build_i386_idle_flag.sh
ok 14 selftests: damon-tests: build_i386_highpte.sh
ok 15 selftests: damon-tests: build_nomemcg.sh
PASS
Am 05.03.2025 um 18:46 schrieb Greg Kroah-Hartman:
This is the start of the stable review cycle for the 6.1.130 release. There are 176 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Builds, boots and works on my 2-socket Ivy Bridge Xeon E5-2697 v2 server. No dmesg oddities or regressions found.
Tested-by: Peter Schneider pschneider1968@googlemail.com
Best regards, Peter Schneider
On 3/5/25 09:46, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.130 release. There are 176 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 07 Mar 2025 17:44:26 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.1.130-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.1.y and the diffstat can be found below.
thanks,
greg k-h
Built and booted successfully on RISC-V RV64 (HiFive Unmatched).
Tested-by: Ron Economos re@w6rz.net
On Wed, Mar 05, 2025 at 06:46:09PM +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.130 release. There are 176 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Tested-by: Mark Brown broonie@kernel.org
On Wed, 5 Mar 2025 at 23:21, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 6.1.130 release. There are 176 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 07 Mar 2025 17:44:26 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.1.130-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.1.y and the diffstat can be found below.
thanks,
greg k-h
Results from Linaro’s test farm. No regressions on arm64, arm, x86_64, and i386.
Tested-by: Linux Kernel Functional Testing lkft@linaro.org
## Build
* kernel: 6.1.130-rc1
* git: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
* git commit: 34da6dd4fda1d2daf1b0df768fe6224d0993e050
* git describe: v6.1.128-747-g34da6dd4fda1
* test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.1.y/build/v6.1.12...
## Test Regressions (compared to v6.1.128-570-gfdd3f50c8e3e)
## Metric Regressions (compared to v6.1.128-570-gfdd3f50c8e3e)
## Test Fixes (compared to v6.1.128-570-gfdd3f50c8e3e)
## Metric Fixes (compared to v6.1.128-570-gfdd3f50c8e3e)
## Test result summary
total: 75778, pass: 58508, fail: 3115, skip: 13933, xfail: 222
## Build Summary
* arc: 6 total, 5 passed, 1 failed
* arm: 139 total, 139 passed, 0 failed
* arm64: 46 total, 42 passed, 4 failed
* i386: 31 total, 25 passed, 6 failed
* mips: 30 total, 25 passed, 5 failed
* parisc: 5 total, 5 passed, 0 failed
* powerpc: 36 total, 33 passed, 3 failed
* riscv: 14 total, 13 passed, 1 failed
* s390: 18 total, 15 passed, 3 failed
* sh: 12 total, 10 passed, 2 failed
* sparc: 9 total, 8 passed, 1 failed
* x86_64: 38 total, 38 passed, 0 failed
## Test suites summary
* boot
* commands
* kselftest-arm64
* kselftest-breakpoints
* kselftest-capabilities
* kselftest-cgroup
* kselftest-clone3
* kselftest-core
* kselftest-cpu-hotplug
* kselftest-cpufreq
* kselftest-efivarfs
* kselftest-exec
* kselftest-fpu
* kselftest-ftrace
* kselftest-futex
* kselftest-gpio
* kselftest-intel_pstate
* kselftest-ipc
* kselftest-kcmp
* kselftest-kvm
* kselftest-livepatch
* kselftest-membarrier
* kselftest-memfd
* kselftest-mincore
* kselftest-mqueue
* kselftest-net
* kselftest-net-mptcp
* kselftest-openat2
* kselftest-ptrace
* kselftest-rseq
* kselftest-rtc
* kselftest-seccomp
* kselftest-sigaltstack
* kselftest-size
* kselftest-tc-testing
* kselftest-timers
* kselftest-tmpfs
* kselftest-tpm2
* kselftest-user_events
* kselftest-vDSO
* kselftest-x86
* kunit
* kvm-unit-tests
* libgpiod
* libhugetlbfs
* log-parser-boot
* log-parser-build-clang
* log-parser-build-gcc
* log-parser-test
* ltp-capability
* ltp-commands
* ltp-containers
* ltp-controllers
* ltp-cpuhotplug
* ltp-crypto
* ltp-cve
* ltp-dio
* ltp-fcntl-locktests
* ltp-filecaps
* ltp-fs
* ltp-fs_bind
* ltp-fs_perms_simple
* ltp-hugetlb
* ltp-ipc
* ltp-math
* ltp-mm
* ltp-nptl
* ltp-pty
* ltp-sched
* ltp-smoke
* ltp-syscalls
* ltp-tracing
* perf
* rcutorture
-- Linaro LKFT https://lkft.linaro.org
On 3/5/25 10:46, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.130 release. There are 176 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 07 Mar 2025 17:44:26 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.1.130-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.1.y and the diffstat can be found below.
thanks,
greg k-h
Compiled and booted on my test system. No dmesg regressions.
Tested-by: Shuah Khan skhan@linuxfoundation.org
thanks, -- Shuah