This is the start of the stable review cycle for the 6.4.10 release. There are 165 patches in this series, all of which will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 11 Aug 2023 10:36:10 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.4.10-rc1....
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.4.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 6.4.10-rc1
Andi Shyti andi.shyti@linux.intel.com drm/i915/gt: Enable the CCS_FLUSH bit in the pipe control and in the CS
Andi Shyti andi.shyti@linux.intel.com drm/i915/gt: Support aux invalidation on all engines
Jonathan Cavitt jonathan.cavitt@intel.com drm/i915/gt: Poll aux invalidation register bit on invalidation
Andi Shyti andi.shyti@linux.intel.com drm/i915/gt: Rename flags with bit_group_X according to the datasheet
Tejas Upadhyay tejas.upadhyay@intel.com drm/i915/gt: Add workaround 14016712196
Jonathan Cavitt jonathan.cavitt@intel.com drm/i915/gt: Ensure memory quiesced before invalidation
Andi Shyti andi.shyti@linux.intel.com drm/i915: Add the gen12_needs_ccs_aux_inv helper
Xu Yang xu.yang_2@nxp.com ARM: dts: nxp/imx6sll: fix wrong property name in usbphy node
Sean Christopherson seanjc@google.com selftests/rseq: Play nice with binaries statically linked against glibc 2.35+
Lijo Lazar lijo.lazar@amd.com drm/amdgpu: Use apt name for FW reserved region
Alexander Stein alexander.stein@ew.tq-group.com drm/imx/ipuv3: Fix front porch adjustment upon hactive aligning
Aneesh Kumar K.V aneesh.kumar@linux.ibm.com powerpc/mm/altmap: Fix altmap boundary check
Christophe JAILLET christophe.jaillet@wanadoo.fr mtd: rawnand: fsl_upm: Fix an off-by one test in fun_exec_op()
Arnd Bergmann arnd@arndb.de mtd: spi-nor: avoid holes in struct spi_mem_op
Chen-Yu Tsai wenst@chromium.org clk: mediatek: mt8183: Add back SSPM related clocks
Johan Jonker jbx6244@gmail.com mtd: rawnand: rockchip: Align hwecc vs. raw page helper layouts
Johan Jonker jbx6244@gmail.com mtd: rawnand: rockchip: fix oobfree offset and description
Roger Quadros rogerq@kernel.org mtd: rawnand: omap_elm: Fix incorrect type in assignment
Pavel Begunkov asml.silence@gmail.com io_uring: annotate offset timeout races
Chao Yu chao@kernel.org f2fs: fix to do sanity check on direct node in truncate_dnode()
Filipe Manana fdmanana@suse.com btrfs: remove BUG_ON()'s in add_new_free_space()
Jan Kara jack@suse.cz ext2: Drop fragment support
Jason Gunthorpe jgg@ziepe.ca mm/gup: do not return 0 from pin_user_pages_fast() for bad args
Jan Kara jack@suse.cz fs: Protect reconfiguration of sb read-write from racing writes
Alan Stern stern@rowland.harvard.edu net: usbnet: Fix WARNING in usbnet_start_xmit/usb_submit_urb
Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp debugobjects: Recheck debug_objects_enabled before reporting
Sungwoo Kim iam@sung-woo.kim Bluetooth: L2CAP: Fix use-after-free in l2cap_sock_ready_cb
Prince Kumar Maurya princekumarmaurya06@gmail.com fs/sysv: Null check to prevent null-ptr-deref bug
Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp kasan,kmsan: remove __GFP_KSWAPD_RECLAIM usage from kasan/kmsan
Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp fs/ntfs3: Use __GFP_NOWARN allocation at ntfs_load_attr_list()
Roman Gushchin roman.gushchin@linux.dev mm: kmem: fix a NULL pointer dereference in obj_stock_flush_required()
Linus Torvalds torvalds@linux-foundation.org file: reinstate f_pos locking optimization for regular files
Geert Uytterhoeven geert+renesas@glider.be clk: imx93: Propagate correct error in imx93_clocks_probe()
Stephen Rothwell sfr@canb.auug.org.au sunvnet: fix sparc64 build error after gso code split
Mike Kravetz mike.kravetz@oracle.com Revert "page cache: fix page_cache_next/prev_miss off by one"
Andi Shyti andi.shyti@linux.intel.com drm/i915/gt: Cleanup aux invalidation registers
Janusz Krzysztofik janusz.krzysztofik@linux.intel.com drm/i915: Fix premature release of request's reusable memory
Guchun Chen guchun.chen@amd.com drm/ttm: check null pointer before accessing when swapping
Aleksa Sarai cyphar@cyphar.com open: make RESOLVE_CACHED correctly test for O_TMPFILE
Mark Brown broonie@kernel.org arm64/ptrace: Don't enable SVE when setting streaming SVE
Mark Brown broonie@kernel.org arm64/ptrace: Flush FP state when setting ZT0
Mark Brown broonie@kernel.org arm64/fpsimd: Sync FPSIMD state with SVE for SME only systems
Mark Brown broonie@kernel.org arm64/fpsimd: Clear SME state in the target task when setting the VL
Mark Brown broonie@kernel.org arm64/fpsimd: Sync and zero pad FPSIMD state for streaming SVE
Mike Rapoport (IBM) rppt@kernel.org parisc/mm: preallocate fixmap page tables at init
Naveen N Rao naveen@kernel.org powerpc/ftrace: Create a dummy stackframe to fix stack unwind
Paulo Alcantara pc@manguebit.com smb: client: fix dfs link mount against w2k8
Jiri Olsa jolsa@kernel.org bpf: Disable preemption in bpf_event_output
Ilya Dryomov idryomov@gmail.com rbd: prevent busy loop when requesting exclusive lock
Michael Kelley mikelley@microsoft.com x86/hyperv: Disable IBT when hypercall page lacks ENDBR instruction
Paul Fertser fercerpav@gmail.com wifi: mt76: mt7615: do not advertise 5 GHz on first phy of MT7615D (DBDC)
Laszlo Ersek lersek@redhat.com net: tap_open(): set sk_uid from current_fsuid()
Laszlo Ersek lersek@redhat.com net: tun_chr_open(): set sk_uid from current_fsuid()
Dinh Nguyen dinguyen@kernel.org arm64: dts: stratix10: fix incorrect I2C property for SCL signal
Jiri Olsa jolsa@kernel.org bpf: Disable preemption in bpf_perf_event_output
Song Shuai suagrfillet@gmail.com riscv: Export va_kernel_pa_offset in vmcoreinfo
Arseniy Krasnov AVKrasnov@sberdevices.ru mtd: rawnand: meson: fix OOB available bytes for ECC
Olivier Maignial olivier.maignial@hotmail.fr mtd: spinand: winbond: Fix ecc_get_status
Olivier Maignial olivier.maignial@hotmail.fr mtd: spinand: toshiba: Fix ecc_get_status
Sungjong Seo sj1557.seo@samsung.com exfat: release s_lock before calling dir_emit()
Namjae Jeon linkinjeon@kernel.org exfat: check if filename entries exceeds max filename length
gaoming gaoming20@hihonor.com exfat: use kvmalloc_array/kvfree instead of kmalloc_array/kfree
Krzysztof Kozlowski krzysztof.kozlowski@linaro.org firmware: arm_scmi: Drop OF node reference in the transport channel setup
Xiubo Li xiubli@redhat.com ceph: defer stopping mdsc delayed_work
Ross Maynard bids.7405@bigpond.com USB: zaurus: Add ID for A-300/B-500/C-700
Ilya Dryomov idryomov@gmail.com libceph: fix potential hang in ceph_osdc_notify()
Song Shuai suagrfillet@gmail.com Documentation: kdump: Add va_kernel_pa_offset for RISCV64
Michael Kelley mikelley@microsoft.com scsi: storvsc: Limit max_sectors for virtual Fibre Channel devices
Steffen Maier maier@linux.ibm.com scsi: zfcp: Defer fc_rport blocking until after ADISC response
Boqun Feng boqun.feng@gmail.com rust: allocator: Prevent mis-aligned allocation
Stefano Garzarella sgarzare@redhat.com test/vsock: remove vsock_perf executable on `make clean`
Eric Dumazet edumazet@google.com tcp_metrics: fix data-race in tcpm_suck_dst() vs fastopen
Eric Dumazet edumazet@google.com tcp_metrics: annotate data-races around tm->tcpm_net
Eric Dumazet edumazet@google.com tcp_metrics: annotate data-races around tm->tcpm_vals[]
Eric Dumazet edumazet@google.com tcp_metrics: annotate data-races around tm->tcpm_lock
Eric Dumazet edumazet@google.com tcp_metrics: annotate data-races around tm->tcpm_stamp
Eric Dumazet edumazet@google.com tcp_metrics: fix addr_same() helper
Jonas Gorski jonas.gorski@bisdn.de prestera: fix fallback to previous version on same major version
Leon Romanovsky leon@kernel.org net/mlx5e: Set proper IPsec source port in L4 selector
Jianbo Liu jianbol@nvidia.com net/mlx5: fs_core: Skip the FTs in the same FS_TYPE_PRIO_CHAINS fs_prio
Jianbo Liu jianbol@nvidia.com net/mlx5: fs_core: Make find_closest_ft more generic
Benjamin Poirier bpoirier@nvidia.com vxlan: Fix nexthop hash size
Yue Haibing yuehaibing@huawei.com ip6mr: Fix skb_under_panic in ip6mr_cache_report()
Alexandra Winter wintera@linux.ibm.com s390/qeth: Don't call dev_close/dev_open (DOWN/UP)
Lin Ma linma@zju.edu.cn net: dcb: choose correct policy to parse DCB_ATTR_BCN
Michael Chan michael.chan@broadcom.com bnxt_en: Fix max_mtu setting for multi-buf XDP
Somnath Kotur somnath.kotur@broadcom.com bnxt_en: Fix page pool logic for page size >= 64K
Kuniyuki Iwashima kuniyu@amazon.com selftest: net: Assert on a proper value in so_incoming_cpu.c.
Mark Brown broonie@kernel.org net: netsec: Ignore 'phy-mode' on SynQuacer in DT mode
Yuanjun Gong ruc_gongyuanjun@163.com net: korina: handle clk prepare error in korina_probe()
Dan Carpenter dan.carpenter@linaro.org net: ll_temac: fix error checking of irq_of_parse_and_map()
Tomas Glozar tglozar@redhat.com bpf: sockmap: Remove preempt_disable in sock_map_sk_acquire
valis sec@valis.email net/sched: cls_route: No longer copy tcf_result on update to avoid use-after-free
valis sec@valis.email net/sched: cls_fw: No longer copy tcf_result on update to avoid use-after-free
valis sec@valis.email net/sched: cls_u32: No longer copy tcf_result on update to avoid use-after-free
Hou Tao houtao1@huawei.com bpf, cpumap: Handle skb as well when clean up ptr_ring
Hou Tao houtao1@huawei.com bpf, cpumap: Make sure kthread is running before map update returns
Andrii Nakryiko andrii@kernel.org bpf: Centralize permissions checks for all BPF map types
Andrii Nakryiko andrii@kernel.org bpf: Inline map creation logic in map_create() function
Andrii Nakryiko andrii@kernel.org bpf: Move unprivileged checks into map_create() and bpf_prog_load()
Michal Schmidt mschmidt@redhat.com octeon_ep: initialize mbox mutexes
Jakub Kicinski kuba@kernel.org bnxt: don't handle XDP in netpoll
Rafal Rogalski rafalx.rogalski@intel.com ice: Fix RDMA VSI removal during queue rebuild
Duoming Zhou duoming@zju.edu.cn net: usb: lan78xx: reorder cleanup operations to avoid UAF bugs
Kuniyuki Iwashima kuniyu@amazon.com net/sched: taprio: Limit TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME to INT_MAX.
Eric Dumazet edumazet@google.com net: annotate data-races around sk->sk_priority
Eric Dumazet edumazet@google.com net: add missing data-race annotation for sk_ll_usec
Eric Dumazet edumazet@google.com net: add missing data-race annotations around sk->sk_peek_off
Eric Dumazet edumazet@google.com net: annotate data-races around sk->sk_mark
Eric Dumazet edumazet@google.com net: add missing READ_ONCE(sk->sk_rcvbuf) annotation
Eric Dumazet edumazet@google.com net: add missing READ_ONCE(sk->sk_sndbuf) annotation
Eric Dumazet edumazet@google.com net: add missing READ_ONCE(sk->sk_rcvlowat) annotation
Eric Dumazet edumazet@google.com net: annotate data-races around sk->sk_max_pacing_rate
Eric Dumazet edumazet@google.com net: annotate data-race around sk->sk_txrehash
Eric Dumazet edumazet@google.com net: annotate data-races around sk->sk_reserved_mem
Richard Gobert richardbgobert@gmail.com net: gro: fix misuse of CB in udp socket lookup
Eric Dumazet edumazet@google.com net: move gso declarations and functions to their own files
Konstantin Khorenko khorenko@virtuozzo.com qed: Fix scheduling in a tasklet while getting stats
Thierry Reding treding@nvidia.com net: stmmac: tegra: Properly allocate clock bulk data
Chengfeng Ye dg573847474@gmail.com mISDN: hfcpci: Fix potential deadlock on &hc->lock
Jamal Hadi Salim jhs@mojatatu.com net: sched: cls_u32: Fix match key mis-addressing
Georg Müller georgmueller@gmx.net perf test uprobe_from_different_cu: Skip if there is no gcc
Yuanjun Gong ruc_gongyuanjun@163.com net: dsa: fix value check in bcm_sf2_sw_probe()
Lin Ma linma@zju.edu.cn rtnetlink: let rtnl_bridge_setlink checks IFLA_BRIDGE_MODE length
Lin Ma linma@zju.edu.cn bpf: Add length check for SK_DIAG_BPF_STORAGE_REQ_MAP_FD parsing
Shay Drory shayd@nvidia.com net/mlx5: Unregister devlink params in case interface is down
Chris Mi cmi@nvidia.com net/mlx5: fs_chains: Fix ft prio if ignore_flow_level is not supported
Jianbo Liu jianbol@nvidia.com net/mlx5e: kTLS, Fix protection domain in use syndrome when devlink reload
Dragos Tatulea dtatulea@nvidia.com net/mlx5e: xsk: Fix crash on regular rq reactivation
Dragos Tatulea dtatulea@nvidia.com net/mlx5e: xsk: Fix invalid buffer access for legacy rq
Jianbo Liu jianbol@nvidia.com net/mlx5e: Move representor neigh cleanup to profile cleanup_tx
Amir Tzin amirtz@nvidia.com net/mlx5e: Fix crash moving to switchdev mode when ntuple offload is set
Chris Mi cmi@nvidia.com net/mlx5e: Don't hold encap tbl lock if there is no encap action
Shay Drory shayd@nvidia.com net/mlx5: Honor user input for migratable port fn attr
Yuanjun Gong ruc_gongyuanjun@163.com net/mlx5e: fix return value check in mlx5e_ipsec_remove_trailer()
Zhengchao Shao shaozhengchao@huawei.com net/mlx5: fix potential memory leak in mlx5e_init_rep_rx
Zhengchao Shao shaozhengchao@huawei.com net/mlx5: DR, fix memory leak in mlx5dr_cmd_create_reformat_ctx
Zhengchao Shao shaozhengchao@huawei.com net/mlx5e: fix double free in macsec_fs_tx_create_crypto_table_groups
Ilan Peer ilan.peer@intel.com wifi: cfg80211: Fix return value in scan logic
Haixin Yu yuhaixin.yhx@linux.alibaba.com perf pmu arm64: Fix reading the PMU cpu slots in sysfs
Gao Xiang xiang@kernel.org erofs: fix wrong primary bvec selection on deduplicated extents
Heiko Carstens hca@linux.ibm.com KVM: s390: fix sthyi error handling
Sven Schnelle svens@linux.ibm.com s390/vmem: split pages when debug pagealloc is enabled
ndesaulniers@google.com ndesaulniers@google.com word-at-a-time: use the same return type for has_zero regardless of endianness
Durai Manickam KR durai.manickamkr@microchip.com ARM: dts: at91: sam9x60: fix the SOC detection
Claudiu Beznea claudiu.beznea@microchip.com ARM: dts: at91: use generic name for shutdown controller
Claudiu Beznea claudiu.beznea@microchip.com ARM: dts: at91: use clock-controller name for sckc nodes
Claudiu Beznea claudiu.beznea@microchip.com ARM: dts: at91: use clock-controller name for PMC nodes
Cristian Marussi cristian.marussi@arm.com firmware: arm_scmi: Fix chan_free cleanup on SMC
Lucas Stach l.stach@pengutronix.de soc: imx: imx8mp-blk-ctrl: register HSIO PLL clock as bus_power_dev child
Dmitry Baryshkov dmitry.baryshkov@linaro.org ARM: dts: nxp/imx: limit sk-imx53 supported frequencies
Yury Norov yury.norov@gmail.com lib/bitmap: workaround const_eval test build failure
Sukrut Bellary sukrut.bellary@linux.com firmware: arm_scmi: Fix signed error return values handling
Punit Agrawal punit.agrawal@bytedance.com firmware: smccc: Fix use of uninitialised results structure
Benjamin Gaignard benjamin.gaignard@collabora.com arm64: dts: freescale: Fix VPU G2 clock
Hugo Villeneuve hvilleneuve@dimonoff.com arm64: dts: imx8mn-var-som: add missing pull-up for onboard PHY reset pinmux
Yashwanth Varakala y.varakala@phytec.de arm64: dts: phycore-imx8mm: Correction in gpio-line-names
Yashwanth Varakala y.varakala@phytec.de arm64: dts: phycore-imx8mm: Label typo-fix of VPU
Tim Harvey tharvey@gateworks.com arm64: dts: imx8mm-venice-gw7904: disable disp_blk_ctrl
Tim Harvey tharvey@gateworks.com arm64: dts: imx8mm-venice-gw7903: disable disp_blk_ctrl
Robin Murphy robin.murphy@arm.com iommu/arm-smmu-v3: Document nesting-related errata
Robin Murphy robin.murphy@arm.com iommu/arm-smmu-v3: Add explicit feature for nesting
Robin Murphy robin.murphy@arm.com iommu/arm-smmu-v3: Document MMU-700 erratum 2812531
Robin Murphy robin.murphy@arm.com iommu/arm-smmu-v3: Work around MMU-600 erratum 1076982
Jann Horn jannh@google.com mm: lock_vma_under_rcu() must check vma->anon_vma under vma lock
-------------
Diffstat:
Documentation/admin-guide/kdump/vmcoreinfo.rst | 6 + Documentation/arm64/silicon-errata.rst | 4 + Makefile | 4 +- arch/arm/boot/dts/at91-qil_a9260.dts | 2 +- arch/arm/boot/dts/at91-sama5d27_som1_ek.dts | 2 +- arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts | 2 +- arch/arm/boot/dts/at91-sama5d2_xplained.dts | 2 +- arch/arm/boot/dts/at91rm9200.dtsi | 2 +- arch/arm/boot/dts/at91sam9260.dtsi | 4 +- arch/arm/boot/dts/at91sam9260ek.dts | 2 +- arch/arm/boot/dts/at91sam9261.dtsi | 4 +- arch/arm/boot/dts/at91sam9263.dtsi | 4 +- arch/arm/boot/dts/at91sam9g20.dtsi | 2 +- arch/arm/boot/dts/at91sam9g20ek_common.dtsi | 2 +- arch/arm/boot/dts/at91sam9g25.dtsi | 2 +- arch/arm/boot/dts/at91sam9g35.dtsi | 2 +- arch/arm/boot/dts/at91sam9g45.dtsi | 6 +- arch/arm/boot/dts/at91sam9n12.dtsi | 4 +- arch/arm/boot/dts/at91sam9rl.dtsi | 6 +- arch/arm/boot/dts/at91sam9x25.dtsi | 2 +- arch/arm/boot/dts/at91sam9x35.dtsi | 2 +- arch/arm/boot/dts/at91sam9x5.dtsi | 6 +- arch/arm/boot/dts/imx53-sk-imx53.dts | 10 + arch/arm/boot/dts/imx6sll.dtsi | 2 +- arch/arm/boot/dts/sam9x60.dtsi | 32 +-- arch/arm/boot/dts/sama5d2.dtsi | 6 +- arch/arm/boot/dts/sama5d3.dtsi | 6 +- arch/arm/boot/dts/sama5d3_emac.dtsi | 2 +- arch/arm/boot/dts/sama5d4.dtsi | 6 +- arch/arm/boot/dts/sama7g5.dtsi | 4 +- arch/arm/boot/dts/usb_a9260.dts | 2 +- arch/arm/boot/dts/usb_a9263.dts | 2 +- .../boot/dts/altera/socfpga_stratix10_socdk.dts | 2 +- .../dts/altera/socfpga_stratix10_socdk_nand.dts | 2 +- .../dts/freescale/imx8mm-phyboard-polis-rdk.dts | 2 +- .../boot/dts/freescale/imx8mm-phycore-som.dtsi | 4 +- .../boot/dts/freescale/imx8mm-venice-gw7903.dts | 4 + .../boot/dts/freescale/imx8mm-venice-gw7904.dts | 4 + arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi | 2 +- arch/arm64/boot/dts/freescale/imx8mq.dtsi | 2 +- arch/arm64/kernel/fpsimd.c | 9 +- arch/arm64/kernel/ptrace.c | 10 +- arch/parisc/mm/fixmap.c | 3 - arch/parisc/mm/init.c | 34 +++ arch/powerpc/include/asm/word-at-a-time.h | 2 +- arch/powerpc/kernel/trace/ftrace_mprofile.S | 9 +- arch/powerpc/mm/init_64.c | 3 +- arch/riscv/kernel/crash_core.c | 2 + arch/s390/kernel/sthyi.c | 6 +- arch/s390/kvm/intercept.c | 9 +- arch/s390/mm/vmem.c | 2 + arch/x86/hyperv/hv_init.c | 21 ++ drivers/block/rbd.c | 28 ++- drivers/clk/imx/clk-imx93.c | 2 +- drivers/clk/mediatek/clk-mt8183.c | 27 ++ drivers/firmware/arm_scmi/mailbox.c | 4 +- drivers/firmware/arm_scmi/raw_mode.c | 5 +- drivers/firmware/arm_scmi/smc.c | 21 +- drivers/firmware/smccc/soc_id.c | 5 +- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 35 +-- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h | 3 +- drivers/gpu/drm/i915/gt/gen8_engine_cs.c | 178 ++++++++++---- drivers/gpu/drm/i915/gt/gen8_engine_cs.h | 21 +- drivers/gpu/drm/i915/gt/intel_gpu_commands.h | 2 + drivers/gpu/drm/i915/gt/intel_gt_regs.h | 16 +- drivers/gpu/drm/i915/gt/intel_lrc.c | 17 +- drivers/gpu/drm/i915/i915_active.c | 99 +++++--- drivers/gpu/drm/i915/i915_request.c | 11 + drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c | 2 +- drivers/gpu/drm/ttm/ttm_bo.c | 3 +- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 50 ++++ drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 8 + drivers/isdn/hardware/mISDN/hfcpci.c | 10 +- drivers/mtd/nand/raw/fsl_upm.c | 2 +- drivers/mtd/nand/raw/meson_nand.c | 3 +- drivers/mtd/nand/raw/omap_elm.c | 24 +- drivers/mtd/nand/raw/rockchip-nand-controller.c | 45 ++-- drivers/mtd/nand/spi/toshiba.c | 4 +- drivers/mtd/nand/spi/winbond.c | 4 +- drivers/mtd/spi-nor/spansion.c | 4 +- drivers/net/dsa/bcm_sf2.c | 8 +- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 85 ++++--- 
drivers/net/ethernet/broadcom/bnxt/bnxt.h | 2 +- drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c | 14 +- drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h | 2 +- drivers/net/ethernet/broadcom/tg3.c | 1 + drivers/net/ethernet/intel/ice/ice_main.c | 18 ++ drivers/net/ethernet/korina.c | 3 +- .../ethernet/marvell/octeon_ep/octep_ctrl_mbox.c | 3 + .../net/ethernet/marvell/prestera/prestera_pci.c | 3 +- .../ethernet/mellanox/mlx5/core/en/tc_tun_encap.c | 3 - .../net/ethernet/mellanox/mlx5/core/en/xsk/rx.c | 5 +- .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 4 +- .../mellanox/mlx5/core/en_accel/ipsec_rxtx.c | 4 +- .../ethernet/mellanox/mlx5/core/en_accel/ktls.c | 8 - .../ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c | 29 ++- .../mellanox/mlx5/core/en_accel/macsec_fs.c | 1 + drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c | 10 + drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 29 ++- drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | 20 +- drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 21 +- .../ethernet/mellanox/mlx5/core/eswitch_offloads.c | 3 +- drivers/net/ethernet/mellanox/mlx5/core/fs_core.c | 105 ++++++-- .../ethernet/mellanox/mlx5/core/lib/fs_chains.c | 2 +- drivers/net/ethernet/mellanox/mlx5/core/main.c | 1 + .../ethernet/mellanox/mlx5/core/steering/dr_cmd.c | 5 +- drivers/net/ethernet/myricom/myri10ge/myri10ge.c | 1 + drivers/net/ethernet/qlogic/qed/qed_dev_api.h | 16 ++ drivers/net/ethernet/qlogic/qed/qed_fcoe.c | 19 +- drivers/net/ethernet/qlogic/qed/qed_fcoe.h | 17 +- drivers/net/ethernet/qlogic/qed/qed_hw.c | 26 +- drivers/net/ethernet/qlogic/qed/qed_iscsi.c | 19 +- drivers/net/ethernet/qlogic/qed/qed_iscsi.h | 8 +- drivers/net/ethernet/qlogic/qed/qed_l2.c | 19 +- drivers/net/ethernet/qlogic/qed/qed_l2.h | 24 ++ drivers/net/ethernet/qlogic/qed/qed_main.c | 6 +- drivers/net/ethernet/sfc/siena/tx_common.c | 1 + drivers/net/ethernet/sfc/tx_common.c | 1 + drivers/net/ethernet/socionext/netsec.c | 11 + drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c | 3 +- drivers/net/ethernet/sun/sunvnet_common.c | 1 + drivers/net/ethernet/xilinx/ll_temac_main.c | 12 +- drivers/net/tap.c | 3 +- drivers/net/tun.c | 2 +- drivers/net/usb/cdc_ether.c | 21 ++ drivers/net/usb/lan78xx.c | 7 +- drivers/net/usb/r8152.c | 1 + drivers/net/usb/usbnet.c | 6 + drivers/net/usb/zaurus.c | 21 ++ drivers/net/wireguard/device.c | 1 + drivers/net/wireless/intel/iwlwifi/mvm/tx.c | 1 + drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c | 6 +- drivers/s390/net/qeth_core.h | 1 - drivers/s390/net/qeth_core_main.c | 2 - drivers/s390/net/qeth_l2_main.c | 9 +- drivers/s390/net/qeth_l3_main.c | 8 +- drivers/s390/scsi/zfcp_fc.c | 6 +- drivers/scsi/storvsc_drv.c | 4 + drivers/soc/imx/imx8mp-blk-ctrl.c | 2 +- fs/btrfs/block-group.c | 51 ++-- fs/btrfs/block-group.h | 4 +- fs/btrfs/free-space-tree.c | 24 +- fs/ceph/mds_client.c | 4 +- fs/ceph/mds_client.h | 5 + fs/ceph/super.c | 10 + fs/erofs/zdata.c | 7 +- fs/exfat/balloc.c | 6 +- fs/exfat/dir.c | 36 +-- fs/ext2/ext2.h | 12 - fs/ext2/super.c | 23 +- fs/f2fs/f2fs.h | 1 - fs/f2fs/file.c | 5 - fs/f2fs/node.c | 14 +- fs/file.c | 18 +- fs/ntfs3/attrlist.c | 4 +- fs/open.c | 2 +- fs/smb/client/dfs.c | 6 +- fs/super.c | 11 +- fs/sysv/itree.c | 4 + include/asm-generic/word-at-a-time.h | 2 +- include/linux/f2fs_fs.h | 1 + include/linux/netdevice.h | 26 +- include/linux/skbuff.h | 71 ------ include/linux/spi/spi-mem.h | 4 + include/net/gro.h | 44 ++++ include/net/gso.h | 109 ++++++++ include/net/inet_sock.h | 7 +- include/net/ip.h | 2 +- include/net/route.h | 4 +- include/net/udp.h | 1 + 
include/net/vxlan.h | 4 +- io_uring/timeout.c | 2 +- kernel/bpf/bloom_filter.c | 3 - kernel/bpf/bpf_local_storage.c | 3 - kernel/bpf/bpf_struct_ops.c | 3 - kernel/bpf/cpumap.c | 39 +-- kernel/bpf/devmap.c | 3 - kernel/bpf/hashtab.c | 6 - kernel/bpf/lpm_trie.c | 3 - kernel/bpf/queue_stack_maps.c | 4 - kernel/bpf/reuseport_array.c | 3 - kernel/bpf/stackmap.c | 3 - kernel/bpf/syscall.c | 136 ++++++---- kernel/trace/bpf_trace.c | 17 +- lib/Makefile | 6 + lib/debugobjects.c | 9 + lib/test_bitmap.c | 8 +- mm/filemap.c | 26 +- mm/gup.c | 2 +- mm/kasan/generic.c | 4 +- mm/kasan/tags.c | 2 +- mm/kmsan/core.c | 6 +- mm/kmsan/instrumentation.c | 2 +- mm/memcontrol.c | 19 +- mm/memory.c | 28 ++- net/bluetooth/l2cap_sock.c | 2 + net/can/raw.c | 2 +- net/ceph/osd_client.c | 20 +- net/core/Makefile | 2 +- net/core/bpf_sk_storage.c | 5 +- net/core/dev.c | 70 +----- net/core/gro.c | 59 +---- net/core/gso.c | 273 +++++++++++++++++++++ net/core/rtnetlink.c | 8 +- net/core/skbuff.c | 142 +---------- net/core/sock.c | 45 ++-- net/core/sock_map.c | 6 - net/dcb/dcbnl.c | 2 +- net/dccp/ipv6.c | 4 +- net/ipv4/af_inet.c | 1 + net/ipv4/esp4_offload.c | 1 + net/ipv4/gre_offload.c | 1 + net/ipv4/inet_diag.c | 4 +- net/ipv4/ip_output.c | 9 +- net/ipv4/ip_sockglue.c | 2 +- net/ipv4/raw.c | 2 +- net/ipv4/route.c | 4 +- net/ipv4/tcp_ipv4.c | 4 +- net/ipv4/tcp_metrics.c | 70 ++++-- net/ipv4/tcp_offload.c | 1 + net/ipv4/udp.c | 9 +- net/ipv4/udp_offload.c | 8 +- net/ipv6/esp6_offload.c | 1 + net/ipv6/ip6_offload.c | 1 + net/ipv6/ip6_output.c | 1 + net/ipv6/ip6mr.c | 2 +- net/ipv6/ping.c | 2 +- net/ipv6/raw.c | 6 +- net/ipv6/route.c | 7 +- net/ipv6/tcp_ipv6.c | 9 +- net/ipv6/udp.c | 12 +- net/ipv6/udp_offload.c | 8 +- net/l2tp/l2tp_ip6.c | 2 +- net/mac80211/tx.c | 1 + net/mpls/af_mpls.c | 1 + net/mpls/mpls_gso.c | 1 + net/mptcp/sockopt.c | 2 +- net/netfilter/nf_flow_table_ip.c | 1 + net/netfilter/nfnetlink_queue.c | 1 + net/netfilter/nft_socket.c | 2 +- net/netfilter/xt_socket.c | 4 +- net/nsh/nsh.c | 1 + net/openvswitch/actions.c | 1 + net/openvswitch/datapath.c | 1 + net/packet/af_packet.c | 12 +- net/sched/act_police.c | 1 + net/sched/cls_fw.c | 1 - net/sched/cls_route.c | 1 - net/sched/cls_u32.c | 57 ++++- net/sched/sch_cake.c | 1 + net/sched/sch_netem.c | 1 + net/sched/sch_taprio.c | 16 +- net/sched/sch_tbf.c | 1 + net/sctp/offload.c | 1 + net/smc/af_smc.c | 2 +- net/unix/af_unix.c | 2 +- net/wireless/scan.c | 2 +- net/xdp/xsk.c | 2 +- net/xdp/xskmap.c | 4 - net/xfrm/xfrm_device.c | 1 + net/xfrm/xfrm_interface_core.c | 1 + net/xfrm/xfrm_output.c | 1 + net/xfrm/xfrm_policy.c | 2 +- rust/bindings/bindings_helper.h | 1 + rust/kernel/allocator.rs | 74 ++++-- tools/perf/arch/arm64/util/pmu.c | 7 +- .../tests/shell/test_uprobe_from_different_cu.sh | 8 +- .../selftests/bpf/prog_tests/unpriv_bpf_disabled.c | 6 +- tools/testing/selftests/net/so_incoming_cpu.c | 2 +- tools/testing/selftests/rseq/rseq.c | 28 ++- .../tc-testing/tc-tests/qdiscs/taprio.json | 25 ++ tools/testing/vsock/Makefile | 2 +- 272 files changed, 2293 insertions(+), 1234 deletions(-)
From: Jann Horn jannh@google.com
commit 657b5146955eba331e01b9a6ae89ce2e716ba306 upstream.
lock_vma_under_rcu() tries to guarantee that __anon_vma_prepare() can't be called in the VMA-locked page fault path by ensuring that vma->anon_vma is set.
However, this check happens before the VMA is locked, which means a concurrent move_vma() can call unlink_anon_vmas(), which disassociates the VMA's anon_vma.
This means we can get UAF in the following scenario:
  THREAD 1                            THREAD 2
  ========                            ========
  <page fault>
    lock_vma_under_rcu()
      rcu_read_lock()
      mas_walk()
      check vma->anon_vma

                                      mremap() syscall
                                        move_vma()
                                          vma_start_write()
                                            unlink_anon_vmas()
                                      <syscall end>

    handle_mm_fault()
      __handle_mm_fault()
        handle_pte_fault()
          do_pte_missing()
            do_anonymous_page()
              anon_vma_prepare()
                __anon_vma_prepare()
                  find_mergeable_anon_vma()
                    mas_walk() [looks up VMA X]

                                      munmap() syscall (deletes VMA X)

                  reusable_anon_vma() [called on freed VMA X]
This is a security bug if you can hit it, although an attacker would have to win two races at once where the first race window is only a few instructions wide.
This patch is based on some previous discussion with Linus Torvalds on the security list.
Cc: stable@vger.kernel.org
Fixes: 5e31275cc997 ("mm: add per-VMA lock and helper functions to control it")
Signed-off-by: Jann Horn jannh@google.com
Signed-off-by: Linus Torvalds torvalds@linux-foundation.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 mm/memory.c | 28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)
--- a/mm/memory.c +++ b/mm/memory.c @@ -5410,27 +5410,28 @@ retry: if (!vma_is_anonymous(vma)) goto inval;
- /* find_mergeable_anon_vma uses adjacent vmas which are not locked */ - if (!vma->anon_vma) - goto inval; - if (!vma_start_read(vma)) goto inval;
/* + * find_mergeable_anon_vma uses adjacent vmas which are not locked. + * This check must happen after vma_start_read(); otherwise, a + * concurrent mremap() with MREMAP_DONTUNMAP could dissociate the VMA + * from its anon_vma. + */ + if (unlikely(!vma->anon_vma)) + goto inval_end_read; + + /* * Due to the possibility of userfault handler dropping mmap_lock, avoid * it for now and fall back to page fault handling under mmap_lock. */ - if (userfaultfd_armed(vma)) { - vma_end_read(vma); - goto inval; - } + if (userfaultfd_armed(vma)) + goto inval_end_read;
/* Check since vm_start/vm_end might change before we lock the VMA */ - if (unlikely(address < vma->vm_start || address >= vma->vm_end)) { - vma_end_read(vma); - goto inval; - } + if (unlikely(address < vma->vm_start || address >= vma->vm_end)) + goto inval_end_read;
/* Check if the VMA got isolated after we found it */ if (vma->detached) { @@ -5442,6 +5443,9 @@ retry:
rcu_read_unlock(); return vma; + +inval_end_read: + vma_end_read(vma); inval: rcu_read_unlock(); count_vm_vma_lock_event(VMA_LOCK_ABORT);
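The fix above is a classic ordering correction: a condition that is checked before the lock is taken can be invalidated by a concurrent writer before the lock is actually acquired, so it has to be (re)checked afterwards. A minimal user-space sketch of the same pattern, using a plain pthread mutex as a stand-in for the per-VMA lock (a hypothetical illustration, not kernel code):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for a VMA whose anon_vma can be torn down. */
struct object {
	pthread_mutex_t lock;
	bool prepared;		/* plays the role of vma->anon_vma != NULL */
};

/* Reader path: the safe ordering is lock first, then (re)check. */
static bool use_object(struct object *obj)
{
	pthread_mutex_lock(&obj->lock);

	/*
	 * Checking 'prepared' before taking the lock would be racy: a
	 * concurrent teardown (cf. unlink_anon_vmas()) could clear it
	 * between the check and the lock.  Checking under the lock
	 * closes that window.
	 */
	if (!obj->prepared) {
		pthread_mutex_unlock(&obj->lock);
		return false;	/* fall back, like "goto inval_end_read" */
	}

	/* ... safely use the object here ... */
	pthread_mutex_unlock(&obj->lock);
	return true;
}

/* Writer path: teardown happens under the same lock. */
static void teardown_object(struct object *obj)
{
	pthread_mutex_lock(&obj->lock);
	obj->prepared = false;
	pthread_mutex_unlock(&obj->lock);
}

int main(void)
{
	struct object obj = { PTHREAD_MUTEX_INITIALIZER, true };

	teardown_object(&obj);
	printf("usable: %d\n", use_object(&obj));	/* prints 0 */
	return 0;
}

Build with something like "cc -O2 -pthread toctou.c"; the only point is that the 'prepared' test happens after the lock is held, mirroring how the patch moves the vma->anon_vma check after vma_start_read().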
From: Robin Murphy robin.murphy@arm.com
commit f322e8af35c7f23a8c08b595c38d6c855b2d836f upstream
MMU-600 versions prior to r1p0 fail to correctly generate a WFE wakeup event when the command queue transitions from full to non-full. We can easily work around this by simply hiding the SEV capability such that we fall back to polling for space in the queue - since MMU-600 implements MSIs we wouldn't expect to need SEV for sync completion either, so this should have little to no impact.
Signed-off-by: Robin Murphy robin.murphy@arm.com
Reviewed-by: Nicolin Chen nicolinc@nvidia.com
Tested-by: Nicolin Chen nicolinc@nvidia.com
Link: https://lore.kernel.org/r/08adbe3d01024d8382a478325f73b56851f76e49.168373125...
Signed-off-by: Will Deacon will@kernel.org
Signed-off-by: Easwar Hariharan eahariha@linux.microsoft.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 Documentation/arm64/silicon-errata.rst | 2 +
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 29 ++++++++++++++++++++++++++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 6 +++++
 3 files changed, 37 insertions(+)
--- a/Documentation/arm64/silicon-errata.rst +++ b/Documentation/arm64/silicon-errata.rst @@ -143,6 +143,8 @@ stable kernels. +----------------+-----------------+-----------------+-----------------------------+ | ARM | MMU-500 | #841119,826419 | N/A | +----------------+-----------------+-----------------+-----------------------------+ +| ARM | MMU-600 | #1076982 | N/A | ++----------------+-----------------+-----------------+-----------------------------+ +----------------+-----------------+-----------------+-----------------------------+ | Broadcom | Brahma-B53 | N/A | ARM64_ERRATUM_845719 | +----------------+-----------------+-----------------+-----------------------------+ --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -3429,6 +3429,33 @@ static int arm_smmu_device_reset(struct return 0; }
+#define IIDR_IMPLEMENTER_ARM 0x43b +#define IIDR_PRODUCTID_ARM_MMU_600 0x483 + +static void arm_smmu_device_iidr_probe(struct arm_smmu_device *smmu) +{ + u32 reg; + unsigned int implementer, productid, variant, revision; + + reg = readl_relaxed(smmu->base + ARM_SMMU_IIDR); + implementer = FIELD_GET(IIDR_IMPLEMENTER, reg); + productid = FIELD_GET(IIDR_PRODUCTID, reg); + variant = FIELD_GET(IIDR_VARIANT, reg); + revision = FIELD_GET(IIDR_REVISION, reg); + + switch (implementer) { + case IIDR_IMPLEMENTER_ARM: + switch (productid) { + case IIDR_PRODUCTID_ARM_MMU_600: + /* Arm erratum 1076982 */ + if (variant == 0 && revision <= 2) + smmu->features &= ~ARM_SMMU_FEAT_SEV; + break; + } + break; + } +} + static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu) { u32 reg; @@ -3635,6 +3662,8 @@ static int arm_smmu_device_hw_probe(stru
smmu->ias = max(smmu->ias, smmu->oas);
+ arm_smmu_device_iidr_probe(smmu); + if (arm_smmu_sva_supported(smmu)) smmu->features |= ARM_SMMU_FEAT_SVA;
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -69,6 +69,12 @@ #define IDR5_VAX GENMASK(11, 10) #define IDR5_VAX_52_BIT 1
+#define ARM_SMMU_IIDR 0x18 +#define IIDR_PRODUCTID GENMASK(31, 20) +#define IIDR_VARIANT GENMASK(19, 16) +#define IIDR_REVISION GENMASK(15, 12) +#define IIDR_IMPLEMENTER GENMASK(11, 0) + #define ARM_SMMU_CR0 0x20 #define CR0_ATSCHK (1 << 4) #define CR0_CMDQEN (1 << 3)
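The workaround itself follows a common probe-time pattern: decode the implementer/product/variant/revision fields of an ID register and mask off a capability bit for affected revisions. A small stand-alone sketch of that idea, with field positions mirroring the IIDR layout defined above (the feature flag and helper names are assumptions for illustration, not the driver code):

#include <stdint.h>
#include <stdio.h>

/* Field extraction in the spirit of the kernel's FIELD_GET(). */
#define FIELD(val, hi, lo)	(((val) >> (lo)) & ((1u << ((hi) - (lo) + 1)) - 1))

/* Stand-in for ARM_SMMU_FEAT_SEV. */
#define FEAT_SEV	(1u << 0)

/* Decode an IIDR-style register and mask features for known errata. */
static uint32_t apply_errata(uint32_t iidr, uint32_t features)
{
	unsigned int implementer = FIELD(iidr, 11, 0);
	unsigned int revision    = FIELD(iidr, 15, 12);
	unsigned int variant     = FIELD(iidr, 19, 16);
	unsigned int productid   = FIELD(iidr, 31, 20);

	/* Arm MMU-600 r0p0..r0p2: WFE wakeup may be missed, poll instead. */
	if (implementer == 0x43b && productid == 0x483 &&
	    variant == 0 && revision <= 2)
		features &= ~FEAT_SEV;

	return features;
}

int main(void)
{
	/* productid=0x483, variant=0, revision=1, implementer=0x43b */
	uint32_t iidr = (0x483u << 20) | (0u << 16) | (1u << 12) | 0x43b;
	uint32_t feats = apply_errata(iidr, FEAT_SEV);

	printf("SEV usable: %s\n", (feats & FEAT_SEV) ? "yes" : "no");
	return 0;
}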
From: Robin Murphy robin.murphy@arm.com
commit 309a15cb16bb075da1c99d46fb457db6a1a2669e upstream
To work around MMU-700 erratum 2812531 we need to ensure that certain sequences of commands cannot be issued without an intervening sync. In practice this falls out of our current command-batching machinery anyway - each batch only contains a single type of invalidation command, and ends with a sync. The only exception is when a batch is sufficiently large to need issuing across multiple command queue slots, wherein the earlier slots will not contain a sync and thus may in theory interleave with another batch being issued in parallel to create an affected sequence across the slot boundary.
Since MMU-700 supports range invalidate commands and thus we will prefer to use them (which also happens to avoid conditions for other errata), I'm not entirely sure it's even possible for a single high-level invalidate call to generate a batch of more than 63 commands, but for the sake of robustness and documentation, wire up an option to enforce that a sync is always inserted for every slot issued.
The other aspect is that the relative order of DVM commands cannot be controlled, so DVM cannot be used. Again that is already the status quo, but since we have at least defined ARM_SMMU_FEAT_BTM, we can explicitly disable it for documentation purposes even if it's not wired up anywhere yet.
Signed-off-by: Robin Murphy robin.murphy@arm.com
Reviewed-by: Nicolin Chen nicolinc@nvidia.com
Link: https://lore.kernel.org/r/330221cdfd0003cd51b6c04e7ff3566741ad8374.168373125...
Signed-off-by: Will Deacon will@kernel.org
Signed-off-by: Easwar Hariharan eahariha@linux.microsoft.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 Documentation/arm64/silicon-errata.rst | 2 ++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 12 ++++++++++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 1 +
 3 files changed, 15 insertions(+)
--- a/Documentation/arm64/silicon-errata.rst +++ b/Documentation/arm64/silicon-errata.rst @@ -145,6 +145,8 @@ stable kernels. +----------------+-----------------+-----------------+-----------------------------+ | ARM | MMU-600 | #1076982 | N/A | +----------------+-----------------+-----------------+-----------------------------+ +| ARM | MMU-700 | #2812531 | N/A | ++----------------+-----------------+-----------------+-----------------------------+ +----------------+-----------------+-----------------+-----------------------------+ | Broadcom | Brahma-B53 | N/A | ARM64_ERRATUM_845719 | +----------------+-----------------+-----------------+-----------------------------+ --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -894,6 +894,12 @@ static void arm_smmu_cmdq_batch_add(stru { int index;
+ if (cmds->num == CMDQ_BATCH_ENTRIES - 1 && + (smmu->options & ARM_SMMU_OPT_CMDQ_FORCE_SYNC)) { + arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, true); + cmds->num = 0; + } + if (cmds->num == CMDQ_BATCH_ENTRIES) { arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, false); cmds->num = 0; @@ -3431,6 +3437,7 @@ static int arm_smmu_device_reset(struct
#define IIDR_IMPLEMENTER_ARM 0x43b #define IIDR_PRODUCTID_ARM_MMU_600 0x483 +#define IIDR_PRODUCTID_ARM_MMU_700 0x487
static void arm_smmu_device_iidr_probe(struct arm_smmu_device *smmu) { @@ -3451,6 +3458,11 @@ static void arm_smmu_device_iidr_probe(s if (variant == 0 && revision <= 2) smmu->features &= ~ARM_SMMU_FEAT_SEV; break; + case IIDR_PRODUCTID_ARM_MMU_700: + /* Arm erratum 2812531 */ + smmu->features &= ~ARM_SMMU_FEAT_BTM; + smmu->options |= ARM_SMMU_OPT_CMDQ_FORCE_SYNC; + break; } break; } --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -650,6 +650,7 @@ struct arm_smmu_device { #define ARM_SMMU_OPT_SKIP_PREFETCH (1 << 0) #define ARM_SMMU_OPT_PAGE0_REGS_ONLY (1 << 1) #define ARM_SMMU_OPT_MSIPOLL (1 << 2) +#define ARM_SMMU_OPT_CMDQ_FORCE_SYNC (1 << 3) u32 options;
struct arm_smmu_cmdq cmdq;
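The CMDQ side of the workaround is plain batching logic: with the new option set, a batch is flushed one entry early with a trailing sync, so no command-queue slot is ever issued without one. A stand-alone sketch of that control flow (hypothetical batch type and issue function, not the SMMU driver itself):

#include <stdbool.h>
#include <stdio.h>

#define BATCH_ENTRIES	64		/* stand-in for CMDQ_BATCH_ENTRIES */

struct batch {
	int cmds[BATCH_ENTRIES];
	int num;
};

/* Pretend to issue 'num' queued commands, optionally followed by a sync. */
static void issue_cmdlist(const struct batch *b, bool sync)
{
	printf("issued %d cmds, sync=%d\n", b->num, sync);
}

/* Mirrors the logic this patch adds to arm_smmu_cmdq_batch_add(). */
static void batch_add(struct batch *b, int cmd, bool force_sync)
{
	/* Quirk: never let a full slot go out without a trailing sync. */
	if (b->num == BATCH_ENTRIES - 1 && force_sync) {
		issue_cmdlist(b, true);
		b->num = 0;
	}

	if (b->num == BATCH_ENTRIES) {	/* normal overflow path, no sync */
		issue_cmdlist(b, false);
		b->num = 0;
	}

	b->cmds[b->num++] = cmd;
}

int main(void)
{
	struct batch b = { .num = 0 };

	for (int i = 0; i < 200; i++)
		batch_add(&b, i, true);
	issue_cmdlist(&b, true);	/* final flush, as a batch submit would do */
	return 0;
}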
From: Robin Murphy robin.murphy@arm.com
commit 1d9777b9f3d55b4b6faf186ba4f1d6fb560c0523 upstream
In certain cases we may want to refuse to allow nested translation even when both stages are implemented, so let's add an explicit feature for nesting support which we can control in its own right. For now this merely serves as documentation, but it means a nice convenient check will be ready and waiting for the future nesting code.
Signed-off-by: Robin Murphy robin.murphy@arm.com
Reviewed-by: Nicolin Chen nicolinc@nvidia.com
Link: https://lore.kernel.org/r/136c3f4a3a84cc14a5a1978ace57dfd3ed67b688.168373125...
Signed-off-by: Will Deacon will@kernel.org
Signed-off-by: Easwar Hariharan eahariha@linux.microsoft.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 4 ++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 1 +
 2 files changed, 5 insertions(+)
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -3674,6 +3674,10 @@ static int arm_smmu_device_hw_probe(stru
smmu->ias = max(smmu->ias, smmu->oas);
+ if ((smmu->features & ARM_SMMU_FEAT_TRANS_S1) && + (smmu->features & ARM_SMMU_FEAT_TRANS_S2)) + smmu->features |= ARM_SMMU_FEAT_NESTING; + arm_smmu_device_iidr_probe(smmu);
if (arm_smmu_sva_supported(smmu)) --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -645,6 +645,7 @@ struct arm_smmu_device { #define ARM_SMMU_FEAT_BTM (1 << 16) #define ARM_SMMU_FEAT_SVA (1 << 17) #define ARM_SMMU_FEAT_E2H (1 << 18) +#define ARM_SMMU_FEAT_NESTING (1 << 19) u32 features;
#define ARM_SMMU_OPT_SKIP_PREFETCH (1 << 0)
From: Robin Murphy robin.murphy@arm.com
commit 0bfbfc526c70606bf0fad302e4821087cbecfaf4 upstream
Both MMU-600 and MMU-700 have similar errata around TLB invalidation while both stages of translation are active, which will need some consideration once nesting support is implemented. For now, though, it's very easy to make our implicit lack of nesting support explicit for those cases, so they're less likely to be missed in future.
Signed-off-by: Robin Murphy robin.murphy@arm.com
Reviewed-by: Nicolin Chen nicolinc@nvidia.com
Link: https://lore.kernel.org/r/696da78d32bb4491f898f11b0bb4d850a8aa7c6a.168373125...
Signed-off-by: Will Deacon will@kernel.org
Signed-off-by: Easwar Hariharan eahariha@linux.microsoft.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 Documentation/arm64/silicon-errata.rst | 4 ++--
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 5 +++++
 2 files changed, 7 insertions(+), 2 deletions(-)
--- a/Documentation/arm64/silicon-errata.rst +++ b/Documentation/arm64/silicon-errata.rst @@ -143,9 +143,9 @@ stable kernels. +----------------+-----------------+-----------------+-----------------------------+ | ARM | MMU-500 | #841119,826419 | N/A | +----------------+-----------------+-----------------+-----------------------------+ -| ARM | MMU-600 | #1076982 | N/A | +| ARM | MMU-600 | #1076982,1209401| N/A | +----------------+-----------------+-----------------+-----------------------------+ -| ARM | MMU-700 | #2812531 | N/A | +| ARM | MMU-700 | #2268618,2812531| N/A | +----------------+-----------------+-----------------+-----------------------------+ +----------------+-----------------+-----------------+-----------------------------+ | Broadcom | Brahma-B53 | N/A | ARM64_ERRATUM_845719 | --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -3457,11 +3457,16 @@ static void arm_smmu_device_iidr_probe(s /* Arm erratum 1076982 */ if (variant == 0 && revision <= 2) smmu->features &= ~ARM_SMMU_FEAT_SEV; + /* Arm erratum 1209401 */ + if (variant < 2) + smmu->features &= ~ARM_SMMU_FEAT_NESTING; break; case IIDR_PRODUCTID_ARM_MMU_700: /* Arm erratum 2812531 */ smmu->features &= ~ARM_SMMU_FEAT_BTM; smmu->options |= ARM_SMMU_OPT_CMDQ_FORCE_SYNC; + /* Arm errata 2268618, 2812531 */ + smmu->features &= ~ARM_SMMU_FEAT_NESTING; break; } break;
From: Tim Harvey tharvey@gateworks.com
[ Upstream commit 3e7d3c5e13b05dda9db92d98803a626378e75438 ]
The GW7903 does not connect the VDD_MIPI power rails, thus MIPI is disabled. However, we must also disable disp_blk_ctrl as it uses the pgc_mipi power domain, and without it being disabled imx8m-blk-ctrl will fail to probe:

imx8m-blk-ctrl 32e28000.blk-ctrl: error -ETIMEDOUT: failed to attach power domain "mipi-dsi"
imx8m-blk-ctrl: probe of 32e28000.blk-ctrl failed with error -110
Fixes: a72ba91e5bc7 ("arm64: dts: imx: Add i.mx8mm Gateworks gw7903 dts support")
Signed-off-by: Tim Harvey tharvey@gateworks.com
Signed-off-by: Shawn Guo shawnguo@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/arm64/boot/dts/freescale/imx8mm-venice-gw7903.dts | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw7903.dts b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw7903.dts index 363020a08c9b8..4660d086cb099 100644 --- a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw7903.dts +++ b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw7903.dts @@ -567,6 +567,10 @@ status = "okay"; };
+&disp_blk_ctrl { + status = "disabled"; +}; + &pgc_mipi { status = "disabled"; };
From: Tim Harvey tharvey@gateworks.com
[ Upstream commit f7a0b57524cf811ac06257a5099f1b7c19ee7310 ]
The GW7904 does not connect the VDD_MIPI power rails, thus MIPI is disabled. However, we must also disable disp_blk_ctrl as it uses the pgc_mipi power domain, and without it being disabled imx8m-blk-ctrl will fail to probe:

imx8m-blk-ctrl 32e28000.blk-ctrl: error -ETIMEDOUT: failed to attach power domain "mipi-dsi"
imx8m-blk-ctrl: probe of 32e28000.blk-ctrl failed with error -110
Fixes: b999bdaf0597 ("arm64: dts: imx: Add i.mx8mm Gateworks gw7904 dts support")
Signed-off-by: Tim Harvey tharvey@gateworks.com
Signed-off-by: Shawn Guo shawnguo@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/arm64/boot/dts/freescale/imx8mm-venice-gw7904.dts | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw7904.dts b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw7904.dts index 93088fa1c3b9c..d5b7168558124 100644 --- a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw7904.dts +++ b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw7904.dts @@ -628,6 +628,10 @@ status = "okay"; };
+&disp_blk_ctrl { + status = "disabled"; +}; + &pgc_mipi { status = "disabled"; };
From: Yashwanth Varakala y.varakala@phytec.de
[ Upstream commit cddeefc1663294fb74b31ff5029a83c0e819ff3a ]
Corrected the label of the VPU regulator node (buck 3) from reg_vdd_gpu to reg_vdd_vpu.
Fixes: ae6847f26ac9 ("arm64: dts: freescale: Add phyBOARD-Polis-i.MX8MM support")
Signed-off-by: Yashwanth Varakala y.varakala@phytec.de
Signed-off-by: Cem Tenruh c.tenruh@phytec.de
Signed-off-by: Shawn Guo shawnguo@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/arm64/boot/dts/freescale/imx8mm-phycore-som.dtsi | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/boot/dts/freescale/imx8mm-phycore-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-phycore-som.dtsi index 92616bc4f71f5..2dd179ec923d7 100644 --- a/arch/arm64/boot/dts/freescale/imx8mm-phycore-som.dtsi +++ b/arch/arm64/boot/dts/freescale/imx8mm-phycore-som.dtsi @@ -210,7 +210,7 @@ }; };
- reg_vdd_gpu: buck3 { + reg_vdd_vpu: buck3 { regulator-always-on; regulator-boot-on; regulator-max-microvolt = <1000000>;
From: Yashwanth Varakala y.varakala@phytec.de
[ Upstream commit 1ef0aa137a96c5f0564f2db0c556a4f0f60ce8f5 ]
Remove unused nINT_ETHPHY entry from gpio-line-names in gpio1 nodes of phyCORE-i.MX8MM and phyBOARD-Polis-i.MX8MM devicetrees.
Fixes: ae6847f26ac9 ("arm64: dts: freescale: Add phyBOARD-Polis-i.MX8MM support")
Signed-off-by: Yashwanth Varakala y.varakala@phytec.de
Signed-off-by: Cem Tenruh c.tenruh@phytec.de
Signed-off-by: Shawn Guo shawnguo@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/arm64/boot/dts/freescale/imx8mm-phyboard-polis-rdk.dts | 2 +-
 arch/arm64/boot/dts/freescale/imx8mm-phycore-som.dtsi | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/boot/dts/freescale/imx8mm-phyboard-polis-rdk.dts b/arch/arm64/boot/dts/freescale/imx8mm-phyboard-polis-rdk.dts index 03e7679217b24..479948f8a4b75 100644 --- a/arch/arm64/boot/dts/freescale/imx8mm-phyboard-polis-rdk.dts +++ b/arch/arm64/boot/dts/freescale/imx8mm-phyboard-polis-rdk.dts @@ -141,7 +141,7 @@ };
&gpio1 { - gpio-line-names = "nINT_ETHPHY", "LED_RED", "WDOG_INT", "X_RTC_INT", + gpio-line-names = "", "LED_RED", "WDOG_INT", "X_RTC_INT", "", "", "", "RESET_ETHPHY", "CAN_nINT", "CAN_EN", "nENABLE_FLATLINK", "", "USB_OTG_VBUS_EN", "", "LED_GREEN", "LED_BLUE"; diff --git a/arch/arm64/boot/dts/freescale/imx8mm-phycore-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-phycore-som.dtsi index 2dd179ec923d7..847f08537b48a 100644 --- a/arch/arm64/boot/dts/freescale/imx8mm-phycore-som.dtsi +++ b/arch/arm64/boot/dts/freescale/imx8mm-phycore-som.dtsi @@ -111,7 +111,7 @@ };
&gpio1 { - gpio-line-names = "nINT_ETHPHY", "", "WDOG_INT", "X_RTC_INT", + gpio-line-names = "", "", "WDOG_INT", "X_RTC_INT", "", "", "", "RESET_ETHPHY", "", "", "nENABLE_FLATLINK"; };
From: Hugo Villeneuve hvilleneuve@dimonoff.com
[ Upstream commit 253be5b53c2792fb4384f8005b05421e6f040ee3 ]
For SOMs with an onboard PHY, the RESET_N pull-up resistor is currently deactivated in the pinmux configuration. When the pinmux code selects the GPIO function for this pin, with a default direction of input, this prevents the RESET_N pin from being taken to the proper 3.3V level (deasserted), and this results in the PHY not being detected since it is held in reset.
Taken from the RESET_N pin description in the ADIN1300 datasheet: This pin requires a 1K pull-up resistor to AVDD_3P3.
Activate the pull-up resistor to fix the issue.
Fixes: ade0176dd8a0 ("arm64: dts: imx8mn-var-som: Add Variscite VAR-SOM-MX8MN System on Module")
Signed-off-by: Hugo Villeneuve hvilleneuve@dimonoff.com
Reviewed-by: Fabio Estevam festevam@gmail.com
Signed-off-by: Shawn Guo shawnguo@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi index cbd9d124c80d0..c9d4fb75c21d3 100644 --- a/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi +++ b/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi @@ -351,7 +351,7 @@ MX8MN_IOMUXC_ENET_RXC_ENET1_RGMII_RXC 0x91 MX8MN_IOMUXC_ENET_RX_CTL_ENET1_RGMII_RX_CTL 0x91 MX8MN_IOMUXC_ENET_TX_CTL_ENET1_RGMII_TX_CTL 0x1f - MX8MN_IOMUXC_GPIO1_IO09_GPIO1_IO9 0x19 + MX8MN_IOMUXC_GPIO1_IO09_GPIO1_IO9 0x159 >; };
From: Benjamin Gaignard benjamin.gaignard@collabora.com
[ Upstream commit b27bfc5103c72f84859bd32731b6a09eafdeda05 ]
Set the VPU G2 clock to 300 MHz, as described in the documentation. This fixes pixel errors occurring with large-resolution (>= 2560x1600) HEVC test streams when using the postprocessor to produce NV12.
Fixes: 4ac7e4a81272 ("arm64: dts: imx8mq: Enable both G1 and G2 VPU's with vpu-blk-ctrl")
Signed-off-by: Benjamin Gaignard benjamin.gaignard@collabora.com
Signed-off-by: Shawn Guo shawnguo@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/arm64/boot/dts/freescale/imx8mq.dtsi | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/boot/dts/freescale/imx8mq.dtsi b/arch/arm64/boot/dts/freescale/imx8mq.dtsi index 0492556a10dbc..345c70c6c697a 100644 --- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi +++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi @@ -770,7 +770,7 @@ <&clk IMX8MQ_SYS1_PLL_800M>, <&clk IMX8MQ_VPU_PLL>; assigned-clock-rates = <600000000>, - <600000000>, + <300000000>, <800000000>, <0>; };
From: Punit Agrawal punit.agrawal@bytedance.com
[ Upstream commit d05799d7b4a39fa71c65aa277128ac7c843ffcdc ]
Commit 35727af2b15d ("irqchip/gicv3: Workaround for NVIDIA erratum T241-FABRIC-4") moved the initialisation of the SoC version to arm_smccc_version_init() but forgot to update the results structure and its usage.
Fix the use of the uninitialised results structure and update the error strings.
Fixes: 35727af2b15d ("irqchip/gicv3: Workaround for NVIDIA erratum T241-FABRIC-4")
Signed-off-by: Punit Agrawal punit.agrawal@bytedance.com
Cc: Sudeep Holla sudeep.holla@arm.com
Cc: Marc Zyngier maz@kernel.org
Cc: Vikram Sethi vsethi@nvidia.com
Cc: Shanker Donthineni sdonthineni@nvidia.com
Acked-by: Marc Zyngier maz@kernel.org
Link: https://lore.kernel.org/r/20230717171702.424253-1-punit.agrawal@bytedance.co...
Signed-off-by: Sudeep Holla sudeep.holla@arm.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/firmware/smccc/soc_id.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/firmware/smccc/soc_id.c b/drivers/firmware/smccc/soc_id.c index 890eb454599a3..1990263fbba0e 100644 --- a/drivers/firmware/smccc/soc_id.c +++ b/drivers/firmware/smccc/soc_id.c @@ -34,7 +34,6 @@ static struct soc_device_attribute *soc_dev_attr;
static int __init smccc_soc_init(void) { - struct arm_smccc_res res; int soc_id_rev, soc_id_version; static char soc_id_str[20], soc_id_rev_str[12]; static char soc_id_jep106_id_str[12]; @@ -49,13 +48,13 @@ static int __init smccc_soc_init(void) }
if (soc_id_version < 0) { - pr_err("ARCH_SOC_ID(0) returned error: %lx\n", res.a0); + pr_err("Invalid SoC Version: %x\n", soc_id_version); return -EINVAL; }
soc_id_rev = arm_smccc_get_soc_id_revision(); if (soc_id_rev < 0) { - pr_err("ARCH_SOC_ID(1) returned error: %lx\n", res.a0); + pr_err("Invalid SoC Revision: %x\n", soc_id_rev); return -EINVAL; }
From: Sukrut Bellary sukrut.bellary@linux.com
[ Upstream commit 81b233b8dd72f2d1df3da8bd4bd4f8c5e84937b9 ]
Handle the signed error values returned by simple_write_to_buffer(). In case of an error, return the error code.
Fixes: 3c3d818a9317 ("firmware: arm_scmi: Add core raw transmission support")
Reported-by: Dan Carpenter dan.carpenter@linaro.org
Signed-off-by: Sukrut Bellary sukrut.bellary@linux.com
Reviewed-by: Cristian Marussi cristian.marussi@arm.com
Tested-by: Cristian Marussi cristian.marussi@arm.com
Reviewed-by: Dan Carpenter dan.carpenter@linaro.org
Link: https://lore.kernel.org/r/20230718085529.258899-1-sukrut.bellary@linux.com
Signed-off-by: Sudeep Holla sudeep.holla@arm.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/firmware/arm_scmi/raw_mode.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/firmware/arm_scmi/raw_mode.c b/drivers/firmware/arm_scmi/raw_mode.c index 6971dcf72fb99..0493aa3c12bf5 100644 --- a/drivers/firmware/arm_scmi/raw_mode.c +++ b/drivers/firmware/arm_scmi/raw_mode.c @@ -818,10 +818,13 @@ static ssize_t scmi_dbg_raw_mode_common_write(struct file *filp, * before sending it with a single RAW xfer. */ if (rd->tx_size < rd->tx_req_size) { - size_t cnt; + ssize_t cnt;
cnt = simple_write_to_buffer(rd->tx.buf, rd->tx.len, ppos, buf, count); + if (cnt < 0) + return cnt; + rd->tx_size += cnt; if (cnt < count) return cnt;
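The underlying bug pattern is storing a possibly negative ssize_t return value in an unsigned size_t, after which no error check can ever fire and the error is silently treated as a byte count. A minimal stand-alone illustration, with a hypothetical helper standing in for simple_write_to_buffer():

#include <stdio.h>
#include <sys/types.h>

/* Stand-in for simple_write_to_buffer(): returns bytes copied or -errno. */
static ssize_t copy_helper(int fail)
{
	return fail ? (ssize_t)-22 /* -EINVAL */ : 16;
}

int main(void)
{
	/*
	 * Buggy pattern: a size_t cannot hold a negative error code, so
	 * -22 silently becomes a huge byte count and looks like success
	 * (and a 'cnt < 0' test on it can never be true).
	 */
	size_t bad = copy_helper(1);
	printf("buggy: error mistaken for %zu bytes written\n", bad);

	/* Fixed pattern: keep the signed type and propagate the error. */
	ssize_t cnt = copy_helper(1);
	if (cnt < 0) {
		printf("fixed: returning error %zd to the caller\n", cnt);
		return 1;
	}
	return 0;
}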
From: Yury Norov yury.norov@gmail.com
[ Upstream commit 2356d198d2b4ddec24efea98271cb3be230bc787 ]
When building with Clang, and when KASAN and GCOV_PROFILE_ALL are both enabled, the test fails to build [1]:
lib/test_bitmap.c:920:2: error: call to '__compiletime_assert_239' declared with 'error' attribute: BUILD_BUG_ON failed: !__builtin_constant_p(res)
  BUILD_BUG_ON(!__builtin_constant_p(res));
  ^
include/linux/build_bug.h:50:2: note: expanded from macro 'BUILD_BUG_ON'
  BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
  ^
include/linux/build_bug.h:39:37: note: expanded from macro 'BUILD_BUG_ON_MSG'
  #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
  ^
include/linux/compiler_types.h:352:2: note: expanded from macro 'compiletime_assert'
  _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
  ^
include/linux/compiler_types.h:340:2: note: expanded from macro '_compiletime_assert'
  __compiletime_assert(condition, msg, prefix, suffix)
  ^
include/linux/compiler_types.h:333:4: note: expanded from macro '__compiletime_assert'
  prefix ## suffix(); \
  ^
<scratch space>:185:1: note: expanded from here
__compiletime_assert_239
Originally the issue was attributed to s390, which now appears to be wrong. It is not related to the bitmap code itself, but it does break the build for this particular configuration.
Disabling the const_eval test under that config could hide other bugs. Instead, work around it by disabling GCOV for test_bitmap until the compiler gets fixed.
[1] https://github.com/ClangBuiltLinux/linux/issues/1874
Reported-by: kernel test robot lkp@intel.com
Closes: https://lore.kernel.org/oe-kbuild-all/202307171254.yFcH97ej-lkp@intel.com/
Fixes: dc34d5036692 ("lib: test_bitmap: add compile-time optimization/evaluations assertions")
Co-developed-by: Nathan Chancellor nathan@kernel.org
Signed-off-by: Nathan Chancellor nathan@kernel.org
Signed-off-by: Yury Norov yury.norov@gmail.com
Reviewed-by: Nick Desaulniers ndesaulniers@google.com
Reviewed-by: Alexander Lobakin aleksander.lobakin@intel.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 lib/Makefile | 6 ++++++
 lib/test_bitmap.c | 8 ++++----
 2 files changed, 10 insertions(+), 4 deletions(-)
diff --git a/lib/Makefile b/lib/Makefile index 876fcdeae34ec..05d8ec332baac 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -82,7 +82,13 @@ obj-$(CONFIG_TEST_STATIC_KEYS) += test_static_key_base.o obj-$(CONFIG_TEST_DYNAMIC_DEBUG) += test_dynamic_debug.o obj-$(CONFIG_TEST_PRINTF) += test_printf.o obj-$(CONFIG_TEST_SCANF) += test_scanf.o + obj-$(CONFIG_TEST_BITMAP) += test_bitmap.o +ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_KASAN),yy) +# FIXME: Clang breaks test_bitmap_const_eval when KASAN and GCOV are enabled +GCOV_PROFILE_test_bitmap.o := n +endif + obj-$(CONFIG_TEST_UUID) += test_uuid.o obj-$(CONFIG_TEST_XARRAY) += test_xarray.o obj-$(CONFIG_TEST_MAPLE_TREE) += test_maple_tree.o diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c index a8005ad3bd589..37a9108c4f588 100644 --- a/lib/test_bitmap.c +++ b/lib/test_bitmap.c @@ -1149,6 +1149,10 @@ static void __init test_bitmap_print_buf(void) } }
+/* + * FIXME: Clang breaks compile-time evaluations when KASAN and GCOV are enabled. + * To workaround it, GCOV is force-disabled in Makefile for this configuration. + */ static void __init test_bitmap_const_eval(void) { DECLARE_BITMAP(bitmap, BITS_PER_LONG); @@ -1174,11 +1178,7 @@ static void __init test_bitmap_const_eval(void) * the compiler is fixed. */ bitmap_clear(bitmap, 0, BITS_PER_LONG); -#if defined(__s390__) && defined(__clang__) - if (!const_test_bit(7, bitmap)) -#else if (!test_bit(7, bitmap)) -#endif bitmap_set(bitmap, 5, 2);
/* Equals to `unsigned long bitopvar = BIT(20)` */
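For context, the assertion that trips in the report above is a compile-time check that the compiler managed to constant-fold a bitmap expression. A stand-alone sketch of that kind of check, using the GCC/Clang error attribute as a simplified analogue of the kernel's BUILD_BUG_ON() machinery (an illustration under those assumptions, not the kernel's actual macros):

#include <stdio.h>

/*
 * Minimal analogue of BUILD_BUG_ON(): if the condition is not proven false
 * at compile time, the call to the error-attributed function survives
 * optimization and the build fails, much like the report quoted above.
 */
#ifdef __OPTIMIZE__
__attribute__((error("BUILD_BUG_ON failed")))
extern void build_bug(void);
#define BUILD_BUG_ON(cond)	do { if (cond) build_bug(); } while (0)
#else
#define BUILD_BUG_ON(cond)	do { (void)(cond); } while (0)	/* no-op at -O0 */
#endif

int main(void)
{
	/*
	 * A pure constant expression always folds, so this assertion holds.
	 * The kernel's test_bitmap_const_eval() goes further: it builds the
	 * value in a local bitmap and relies on the optimizer to see through
	 * it, which is exactly what KASAN plus GCOV instrumentation under
	 * Clang defeats in the failure quoted above.
	 */
	BUILD_BUG_ON(!__builtin_constant_p(0x1100UL & (1UL << 8)));

	printf("compile-time evaluation check passed\n");
	return 0;
}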
From: Dmitry Baryshkov dmitry.baryshkov@linaro.org
[ Upstream commit c486762fb17c99fd642beea3e1e4744d093c262a ]
The SK-IMX53 board, bearing an i.MX536A CPU, is not stable when running at 1.2 GHz (the default i.MX53 maximum). The SoC is only rated up to 800 MHz, so disable the 1.2 GHz and 1 GHz operating points.
Fixes: 0b8576d8440a ("ARM: dts: imx: Add support for SK-iMX53 board")
Signed-off-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org
Reviewed-by: Fabio Estevam festevam@gmail.com
Signed-off-by: Shawn Guo shawnguo@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/arm/boot/dts/imx53-sk-imx53.dts | 10 ++++++++++
 1 file changed, 10 insertions(+)
diff --git a/arch/arm/boot/dts/imx53-sk-imx53.dts b/arch/arm/boot/dts/imx53-sk-imx53.dts index 103e73176e47d..1a00d290092ad 100644 --- a/arch/arm/boot/dts/imx53-sk-imx53.dts +++ b/arch/arm/boot/dts/imx53-sk-imx53.dts @@ -60,6 +60,16 @@ status = "okay"; };
+&cpu0 { + /* CPU rated to 800 MHz, not the default 1.2GHz. */ + operating-points = < + /* kHz uV */ + 166666 850000 + 400000 900000 + 800000 1050000 + >; +}; + &ecspi1 { pinctrl-names = "default"; pinctrl-0 = <&pinctrl_ecspi1>;
From: Lucas Stach l.stach@pengutronix.de
[ Upstream commit 53cab4d871690c49fac87c657cbf459e39c5b93b ]
The blk-ctrl device is deliberately placed outside of the GPC power domain as it needs to control the power sequencing of the blk-ctrl domains together with the GPC domains.
Clock runtime PM works by operating on the clock parent device, which doesn't translate into the necessary GPC power domain action if the clk parent is not part of the GPC power domain. Use the bus_power_device as the parent for the clock to trigger the proper GPC domain actions on clock runtime power management.
Fixes: 2cbee26e5d59 ("soc: imx: imx8mp-blk-ctrl: expose high performance PLL clock") Reported-by: Yannic Moog Y.Moog@phytec.de Signed-off-by: Lucas Stach l.stach@pengutronix.de Tested-by: Yannic Moog y.moog@phytec.de Signed-off-by: Shawn Guo shawnguo@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/soc/imx/imx8mp-blk-ctrl.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/soc/imx/imx8mp-blk-ctrl.c b/drivers/soc/imx/imx8mp-blk-ctrl.c index 870aecc0202ae..1c1fcab4979a4 100644 --- a/drivers/soc/imx/imx8mp-blk-ctrl.c +++ b/drivers/soc/imx/imx8mp-blk-ctrl.c @@ -164,7 +164,7 @@ static int imx8mp_hsio_blk_ctrl_probe(struct imx8mp_blk_ctrl *bc) clk_hsio_pll->hw.init = &init;
hw = &clk_hsio_pll->hw; - ret = devm_clk_hw_register(bc->dev, hw); + ret = devm_clk_hw_register(bc->bus_power_dev, hw); if (ret) return ret;
From: Cristian Marussi cristian.marussi@arm.com
[ Upstream commit d1ff11d7ad8704f8d615f6446041c221b2d2ec4d ]
SCMI transport based on SMC can optionally use an additional IRQ to signal message completion. The associated interrupt handler is currently allocated using devres but on shutdown the core SCMI stack will call .chan_free() well before any managed cleanup is invoked by devres. As a consequence, the arrival of a late reply to an in-flight pending transaction could still trigger the interrupt handler well after the SCMI core has cleaned up the channels, with unpleasant results.
Inhibit further message processing on the IRQ path by explicitly freeing the IRQ inside the .chan_free() callback itself.
Fixes: dd820ee21d5e ("firmware: arm_scmi: Augment SMC/HVC to allow optional interrupt") Reported-by: Bjorn Andersson andersson@kernel.org Signed-off-by: Cristian Marussi cristian.marussi@arm.com Link: https://lore.kernel.org/r/20230719173533.2739319-1-cristian.marussi@arm.com Signed-off-by: Sudeep Holla sudeep.holla@arm.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/firmware/arm_scmi/smc.c | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/drivers/firmware/arm_scmi/smc.c b/drivers/firmware/arm_scmi/smc.c index 93272e4bbd12b..d0c9fce44d322 100644 --- a/drivers/firmware/arm_scmi/smc.c +++ b/drivers/firmware/arm_scmi/smc.c @@ -23,6 +23,7 @@ /** * struct scmi_smc - Structure representing a SCMI smc transport * + * @irq: An optional IRQ for completion * @cinfo: SCMI channel info * @shmem: Transmit/Receive shared memory area * @shmem_lock: Lock to protect access to Tx/Rx shared memory area. @@ -33,6 +34,7 @@ */
struct scmi_smc { + int irq; struct scmi_chan_info *cinfo; struct scmi_shared_mem __iomem *shmem; /* Protect access to shmem area */ @@ -106,7 +108,7 @@ static int smc_chan_setup(struct scmi_chan_info *cinfo, struct device *dev, struct resource res; struct device_node *np; u32 func_id; - int ret, irq; + int ret;
if (!tx) return -ENODEV; @@ -142,11 +144,10 @@ static int smc_chan_setup(struct scmi_chan_info *cinfo, struct device *dev, * completion of a message is signaled by an interrupt rather than by * the return of the SMC call. */ - irq = of_irq_get_byname(cdev->of_node, "a2p"); - if (irq > 0) { - ret = devm_request_irq(dev, irq, smc_msg_done_isr, - IRQF_NO_SUSPEND, - dev_name(dev), scmi_info); + scmi_info->irq = of_irq_get_byname(cdev->of_node, "a2p"); + if (scmi_info->irq > 0) { + ret = request_irq(scmi_info->irq, smc_msg_done_isr, + IRQF_NO_SUSPEND, dev_name(dev), scmi_info); if (ret) { dev_err(dev, "failed to setup SCMI smc irq\n"); return ret; @@ -168,6 +169,10 @@ static int smc_chan_free(int id, void *p, void *data) struct scmi_chan_info *cinfo = p; struct scmi_smc *scmi_info = cinfo->transport_info;
+ /* Ignore any possible further reception on the IRQ path */ + if (scmi_info->irq > 0) + free_irq(scmi_info->irq, scmi_info); + cinfo->transport_info = NULL; scmi_info->cinfo = NULL;
From: Claudiu Beznea claudiu.beznea@microchip.com
[ Upstream commit d08f92bdfb2dc4a2a14237cfd8a22c568781797c ]
Use the generic clock-controller name for PMC nodes.
Signed-off-by: Claudiu Beznea claudiu.beznea@microchip.com Link: https://lore.kernel.org/r/20230517094119.2894220-2-claudiu.beznea@microchip.... Stable-dep-of: f6ad3c13f1b8 ("ARM: dts: at91: sam9x60: fix the SOC detection") Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/boot/dts/at91rm9200.dtsi | 2 +- arch/arm/boot/dts/at91sam9260.dtsi | 2 +- arch/arm/boot/dts/at91sam9261.dtsi | 2 +- arch/arm/boot/dts/at91sam9263.dtsi | 2 +- arch/arm/boot/dts/at91sam9g20.dtsi | 2 +- arch/arm/boot/dts/at91sam9g25.dtsi | 2 +- arch/arm/boot/dts/at91sam9g35.dtsi | 2 +- arch/arm/boot/dts/at91sam9g45.dtsi | 2 +- arch/arm/boot/dts/at91sam9n12.dtsi | 2 +- arch/arm/boot/dts/at91sam9rl.dtsi | 2 +- arch/arm/boot/dts/at91sam9x25.dtsi | 2 +- arch/arm/boot/dts/at91sam9x35.dtsi | 2 +- arch/arm/boot/dts/at91sam9x5.dtsi | 2 +- arch/arm/boot/dts/sam9x60.dtsi | 2 +- arch/arm/boot/dts/sama5d2.dtsi | 2 +- arch/arm/boot/dts/sama5d3.dtsi | 2 +- arch/arm/boot/dts/sama5d3_emac.dtsi | 2 +- arch/arm/boot/dts/sama5d4.dtsi | 2 +- arch/arm/boot/dts/sama7g5.dtsi | 2 +- 19 files changed, 19 insertions(+), 19 deletions(-)
diff --git a/arch/arm/boot/dts/at91rm9200.dtsi b/arch/arm/boot/dts/at91rm9200.dtsi index 6f9004ebf4245..37b500f6f3956 100644 --- a/arch/arm/boot/dts/at91rm9200.dtsi +++ b/arch/arm/boot/dts/at91rm9200.dtsi @@ -102,7 +102,7 @@ reg = <0xffffff00 0x100>; };
- pmc: pmc@fffffc00 { + pmc: clock-controller@fffffc00 { compatible = "atmel,at91rm9200-pmc", "syscon"; reg = <0xfffffc00 0x100>; interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>; diff --git a/arch/arm/boot/dts/at91sam9260.dtsi b/arch/arm/boot/dts/at91sam9260.dtsi index 789fe356dbf60..16e3b24b4dddb 100644 --- a/arch/arm/boot/dts/at91sam9260.dtsi +++ b/arch/arm/boot/dts/at91sam9260.dtsi @@ -115,7 +115,7 @@ reg = <0xffffee00 0x200>; };
- pmc: pmc@fffffc00 { + pmc: clock-controller@fffffc00 { compatible = "atmel,at91sam9260-pmc", "syscon"; reg = <0xfffffc00 0x100>; interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>; diff --git a/arch/arm/boot/dts/at91sam9261.dtsi b/arch/arm/boot/dts/at91sam9261.dtsi index ee0bd1aceb3f0..fe9ead867e2ab 100644 --- a/arch/arm/boot/dts/at91sam9261.dtsi +++ b/arch/arm/boot/dts/at91sam9261.dtsi @@ -599,7 +599,7 @@ }; };
- pmc: pmc@fffffc00 { + pmc: clock-controller@fffffc00 { compatible = "atmel,at91sam9261-pmc", "syscon"; reg = <0xfffffc00 0x100>; interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>; diff --git a/arch/arm/boot/dts/at91sam9263.dtsi b/arch/arm/boot/dts/at91sam9263.dtsi index 3ce9ea9873129..ee5e6ed44dd40 100644 --- a/arch/arm/boot/dts/at91sam9263.dtsi +++ b/arch/arm/boot/dts/at91sam9263.dtsi @@ -101,7 +101,7 @@ atmel,external-irqs = <30 31>; };
- pmc: pmc@fffffc00 { + pmc: clock-controller@fffffc00 { compatible = "atmel,at91sam9263-pmc", "syscon"; reg = <0xfffffc00 0x100>; interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>; diff --git a/arch/arm/boot/dts/at91sam9g20.dtsi b/arch/arm/boot/dts/at91sam9g20.dtsi index 708e1646b7f46..738a43ffd2281 100644 --- a/arch/arm/boot/dts/at91sam9g20.dtsi +++ b/arch/arm/boot/dts/at91sam9g20.dtsi @@ -41,7 +41,7 @@ atmel,adc-startup-time = <40>; };
- pmc: pmc@fffffc00 { + pmc: clock-controller@fffffc00 { compatible = "atmel,at91sam9g20-pmc", "atmel,at91sam9260-pmc", "syscon"; }; }; diff --git a/arch/arm/boot/dts/at91sam9g25.dtsi b/arch/arm/boot/dts/at91sam9g25.dtsi index d2f13afb35eaf..ec3c77221881c 100644 --- a/arch/arm/boot/dts/at91sam9g25.dtsi +++ b/arch/arm/boot/dts/at91sam9g25.dtsi @@ -26,7 +26,7 @@ >; };
- pmc: pmc@fffffc00 { + pmc: clock-controller@fffffc00 { compatible = "atmel,at91sam9g25-pmc", "atmel,at91sam9x5-pmc", "syscon"; }; }; diff --git a/arch/arm/boot/dts/at91sam9g35.dtsi b/arch/arm/boot/dts/at91sam9g35.dtsi index 48c2bc4a7753d..c9cfb93092ee6 100644 --- a/arch/arm/boot/dts/at91sam9g35.dtsi +++ b/arch/arm/boot/dts/at91sam9g35.dtsi @@ -25,7 +25,7 @@ >; };
- pmc: pmc@fffffc00 { + pmc: clock-controller@fffffc00 { compatible = "atmel,at91sam9g35-pmc", "atmel,at91sam9x5-pmc", "syscon"; }; }; diff --git a/arch/arm/boot/dts/at91sam9g45.dtsi b/arch/arm/boot/dts/at91sam9g45.dtsi index 95f5d76234dbb..76afeb31b7f54 100644 --- a/arch/arm/boot/dts/at91sam9g45.dtsi +++ b/arch/arm/boot/dts/at91sam9g45.dtsi @@ -129,7 +129,7 @@ reg = <0xffffea00 0x200>; };
- pmc: pmc@fffffc00 { + pmc: clock-controller@fffffc00 { compatible = "atmel,at91sam9g45-pmc", "syscon"; reg = <0xfffffc00 0x100>; interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>; diff --git a/arch/arm/boot/dts/at91sam9n12.dtsi b/arch/arm/boot/dts/at91sam9n12.dtsi index 83114d26f10d0..c2e7460fb7ff6 100644 --- a/arch/arm/boot/dts/at91sam9n12.dtsi +++ b/arch/arm/boot/dts/at91sam9n12.dtsi @@ -118,7 +118,7 @@ reg = <0xffffea00 0x200>; };
- pmc: pmc@fffffc00 { + pmc: clock-controller@fffffc00 { compatible = "atmel,at91sam9n12-pmc", "syscon"; reg = <0xfffffc00 0x200>; #clock-cells = <2>; diff --git a/arch/arm/boot/dts/at91sam9rl.dtsi b/arch/arm/boot/dts/at91sam9rl.dtsi index 364a2ff0a763d..a12e6c419fe3d 100644 --- a/arch/arm/boot/dts/at91sam9rl.dtsi +++ b/arch/arm/boot/dts/at91sam9rl.dtsi @@ -763,7 +763,7 @@ }; };
- pmc: pmc@fffffc00 { + pmc: clock-controller@fffffc00 { compatible = "atmel,at91sam9rl-pmc", "syscon"; reg = <0xfffffc00 0x100>; interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>; diff --git a/arch/arm/boot/dts/at91sam9x25.dtsi b/arch/arm/boot/dts/at91sam9x25.dtsi index 0fe8802e1242b..7036f5f045715 100644 --- a/arch/arm/boot/dts/at91sam9x25.dtsi +++ b/arch/arm/boot/dts/at91sam9x25.dtsi @@ -27,7 +27,7 @@ >; };
- pmc: pmc@fffffc00 { + pmc: clock-controller@fffffc00 { compatible = "atmel,at91sam9x25-pmc", "atmel,at91sam9x5-pmc", "syscon"; }; }; diff --git a/arch/arm/boot/dts/at91sam9x35.dtsi b/arch/arm/boot/dts/at91sam9x35.dtsi index 0bfa21f18f870..eb03b0497e371 100644 --- a/arch/arm/boot/dts/at91sam9x35.dtsi +++ b/arch/arm/boot/dts/at91sam9x35.dtsi @@ -26,7 +26,7 @@ >; };
- pmc: pmc@fffffc00 { + pmc: clock-controller@fffffc00 { compatible = "atmel,at91sam9x35-pmc", "atmel,at91sam9x5-pmc", "syscon"; }; }; diff --git a/arch/arm/boot/dts/at91sam9x5.dtsi b/arch/arm/boot/dts/at91sam9x5.dtsi index 0c26c925761b2..af19ef2a875c4 100644 --- a/arch/arm/boot/dts/at91sam9x5.dtsi +++ b/arch/arm/boot/dts/at91sam9x5.dtsi @@ -126,7 +126,7 @@ reg = <0xffffea00 0x200>; };
- pmc: pmc@fffffc00 { + pmc: clock-controller@fffffc00 { compatible = "atmel,at91sam9x5-pmc", "syscon"; reg = <0xfffffc00 0x200>; interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>; diff --git a/arch/arm/boot/dts/sam9x60.dtsi b/arch/arm/boot/dts/sam9x60.dtsi index e67ede940071f..89aafb9a8b0fe 100644 --- a/arch/arm/boot/dts/sam9x60.dtsi +++ b/arch/arm/boot/dts/sam9x60.dtsi @@ -1282,7 +1282,7 @@ }; };
- pmc: pmc@fffffc00 { + pmc: clock-controller@fffffc00 { compatible = "microchip,sam9x60-pmc", "syscon"; reg = <0xfffffc00 0x200>; interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>; diff --git a/arch/arm/boot/dts/sama5d2.dtsi b/arch/arm/boot/dts/sama5d2.dtsi index 14c35c12a115f..86009dd28e623 100644 --- a/arch/arm/boot/dts/sama5d2.dtsi +++ b/arch/arm/boot/dts/sama5d2.dtsi @@ -284,7 +284,7 @@ clock-names = "dma_clk"; };
- pmc: pmc@f0014000 { + pmc: clock-controller@f0014000 { compatible = "atmel,sama5d2-pmc", "syscon"; reg = <0xf0014000 0x160>; interrupts = <74 IRQ_TYPE_LEVEL_HIGH 7>; diff --git a/arch/arm/boot/dts/sama5d3.dtsi b/arch/arm/boot/dts/sama5d3.dtsi index bde8e92d60bb1..4524a16322d16 100644 --- a/arch/arm/boot/dts/sama5d3.dtsi +++ b/arch/arm/boot/dts/sama5d3.dtsi @@ -1001,7 +1001,7 @@ }; };
- pmc: pmc@fffffc00 { + pmc: clock-controller@fffffc00 { compatible = "atmel,sama5d3-pmc", "syscon"; reg = <0xfffffc00 0x120>; interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>; diff --git a/arch/arm/boot/dts/sama5d3_emac.dtsi b/arch/arm/boot/dts/sama5d3_emac.dtsi index 45226108850d2..5d7ce13de8ccf 100644 --- a/arch/arm/boot/dts/sama5d3_emac.dtsi +++ b/arch/arm/boot/dts/sama5d3_emac.dtsi @@ -30,7 +30,7 @@ }; };
- pmc: pmc@fffffc00 { + pmc: clock-controller@fffffc00 { };
macb1: ethernet@f802c000 { diff --git a/arch/arm/boot/dts/sama5d4.dtsi b/arch/arm/boot/dts/sama5d4.dtsi index af62157ae214f..e94f3a661f4bb 100644 --- a/arch/arm/boot/dts/sama5d4.dtsi +++ b/arch/arm/boot/dts/sama5d4.dtsi @@ -250,7 +250,7 @@ clock-names = "dma_clk"; };
- pmc: pmc@f0018000 { + pmc: clock-controller@f0018000 { compatible = "atmel,sama5d4-pmc", "syscon"; reg = <0xf0018000 0x120>; interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>; diff --git a/arch/arm/boot/dts/sama7g5.dtsi b/arch/arm/boot/dts/sama7g5.dtsi index 929ba73702e93..b55adb96a06ec 100644 --- a/arch/arm/boot/dts/sama7g5.dtsi +++ b/arch/arm/boot/dts/sama7g5.dtsi @@ -241,7 +241,7 @@ clocks = <&pmc PMC_TYPE_PERIPHERAL 11>; };
- pmc: pmc@e0018000 { + pmc: clock-controller@e0018000 { compatible = "microchip,sama7g5-pmc", "syscon"; reg = <0xe0018000 0x200>; interrupts = <GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
From: Claudiu Beznea claudiu.beznea@microchip.com
[ Upstream commit 3ecb546333089195b6a1508cb58627b0797a26ca ]
Use the generic clock-controller name for slow clock controller nodes.
Signed-off-by: Claudiu Beznea claudiu.beznea@microchip.com Link: https://lore.kernel.org/r/20230517094119.2894220-5-claudiu.beznea@microchip.... Stable-dep-of: f6ad3c13f1b8 ("ARM: dts: at91: sam9x60: fix the SOC detection") Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/boot/dts/at91sam9g45.dtsi | 2 +- arch/arm/boot/dts/at91sam9rl.dtsi | 2 +- arch/arm/boot/dts/at91sam9x5.dtsi | 2 +- arch/arm/boot/dts/sam9x60.dtsi | 2 +- arch/arm/boot/dts/sama5d2.dtsi | 2 +- arch/arm/boot/dts/sama5d3.dtsi | 2 +- arch/arm/boot/dts/sama5d4.dtsi | 2 +- 7 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/arch/arm/boot/dts/at91sam9g45.dtsi b/arch/arm/boot/dts/at91sam9g45.dtsi index 76afeb31b7f54..498cb92b29f96 100644 --- a/arch/arm/boot/dts/at91sam9g45.dtsi +++ b/arch/arm/boot/dts/at91sam9g45.dtsi @@ -923,7 +923,7 @@ status = "disabled"; };
- clk32k: sckc@fffffd50 { + clk32k: clock-controller@fffffd50 { compatible = "atmel,at91sam9x5-sckc"; reg = <0xfffffd50 0x4>; clocks = <&slow_xtal>; diff --git a/arch/arm/boot/dts/at91sam9rl.dtsi b/arch/arm/boot/dts/at91sam9rl.dtsi index a12e6c419fe3d..d7e8a115c916c 100644 --- a/arch/arm/boot/dts/at91sam9rl.dtsi +++ b/arch/arm/boot/dts/at91sam9rl.dtsi @@ -799,7 +799,7 @@ status = "disabled"; };
- clk32k: sckc@fffffd50 { + clk32k: clock-controller@fffffd50 { compatible = "atmel,at91sam9x5-sckc"; reg = <0xfffffd50 0x4>; clocks = <&slow_xtal>; diff --git a/arch/arm/boot/dts/at91sam9x5.dtsi b/arch/arm/boot/dts/at91sam9x5.dtsi index af19ef2a875c4..0123ee47151cb 100644 --- a/arch/arm/boot/dts/at91sam9x5.dtsi +++ b/arch/arm/boot/dts/at91sam9x5.dtsi @@ -154,7 +154,7 @@ clocks = <&pmc PMC_TYPE_CORE PMC_MCK>; };
- clk32k: sckc@fffffe50 { + clk32k: clock-controller@fffffe50 { compatible = "atmel,at91sam9x5-sckc"; reg = <0xfffffe50 0x4>; clocks = <&slow_xtal>; diff --git a/arch/arm/boot/dts/sam9x60.dtsi b/arch/arm/boot/dts/sam9x60.dtsi index 89aafb9a8b0fe..c8bedfa987e57 100644 --- a/arch/arm/boot/dts/sam9x60.dtsi +++ b/arch/arm/boot/dts/sam9x60.dtsi @@ -1322,7 +1322,7 @@ clocks = <&pmc PMC_TYPE_CORE PMC_MCK>; };
- clk32k: sckc@fffffe50 { + clk32k: clock-controller@fffffe50 { compatible = "microchip,sam9x60-sckc"; reg = <0xfffffe50 0x4>; clocks = <&slow_xtal>; diff --git a/arch/arm/boot/dts/sama5d2.dtsi b/arch/arm/boot/dts/sama5d2.dtsi index 86009dd28e623..5f632e3f039e6 100644 --- a/arch/arm/boot/dts/sama5d2.dtsi +++ b/arch/arm/boot/dts/sama5d2.dtsi @@ -704,7 +704,7 @@ status = "disabled"; };
- clk32k: sckc@f8048050 { + clk32k: clock-controller@f8048050 { compatible = "atmel,sama5d4-sckc"; reg = <0xf8048050 0x4>;
diff --git a/arch/arm/boot/dts/sama5d3.dtsi b/arch/arm/boot/dts/sama5d3.dtsi index 4524a16322d16..0eebf6c760b3d 100644 --- a/arch/arm/boot/dts/sama5d3.dtsi +++ b/arch/arm/boot/dts/sama5d3.dtsi @@ -1040,7 +1040,7 @@ status = "disabled"; };
- clk32k: sckc@fffffe50 { + clk32k: clock-controller@fffffe50 { compatible = "atmel,sama5d3-sckc"; reg = <0xfffffe50 0x4>; clocks = <&slow_xtal>; diff --git a/arch/arm/boot/dts/sama5d4.dtsi b/arch/arm/boot/dts/sama5d4.dtsi index e94f3a661f4bb..de6c829692327 100644 --- a/arch/arm/boot/dts/sama5d4.dtsi +++ b/arch/arm/boot/dts/sama5d4.dtsi @@ -761,7 +761,7 @@ status = "disabled"; };
- clk32k: sckc@fc068650 { + clk32k: clock-controller@fc068650 { compatible = "atmel,sama5d4-sckc"; reg = <0xfc068650 0x4>; #clock-cells = <0>;
From: Claudiu Beznea claudiu.beznea@microchip.com
[ Upstream commit 327ca228e58be498446244eb7cf39b892adda5d7 ]
Use the generic poweroff name for the shdwc node to comply with the device tree specifications.
Signed-off-by: Claudiu Beznea claudiu.beznea@microchip.com Acked-by: Nicolas Ferre nicolas.ferre@microchip.com Link: https://lore.kernel.org/r/20230616101646.879480-2-claudiu.beznea@microchip.c... Stable-dep-of: f6ad3c13f1b8 ("ARM: dts: at91: sam9x60: fix the SOC detection") Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/boot/dts/at91-qil_a9260.dts | 2 +- arch/arm/boot/dts/at91-sama5d27_som1_ek.dts | 2 +- arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts | 2 +- arch/arm/boot/dts/at91-sama5d2_xplained.dts | 2 +- arch/arm/boot/dts/at91sam9260.dtsi | 2 +- arch/arm/boot/dts/at91sam9260ek.dts | 2 +- arch/arm/boot/dts/at91sam9261.dtsi | 2 +- arch/arm/boot/dts/at91sam9263.dtsi | 2 +- arch/arm/boot/dts/at91sam9g20ek_common.dtsi | 2 +- arch/arm/boot/dts/at91sam9g45.dtsi | 2 +- arch/arm/boot/dts/at91sam9n12.dtsi | 2 +- arch/arm/boot/dts/at91sam9rl.dtsi | 2 +- arch/arm/boot/dts/at91sam9x5.dtsi | 2 +- arch/arm/boot/dts/sam9x60.dtsi | 2 +- arch/arm/boot/dts/sama5d2.dtsi | 2 +- arch/arm/boot/dts/sama5d3.dtsi | 2 +- arch/arm/boot/dts/sama5d4.dtsi | 2 +- arch/arm/boot/dts/sama7g5.dtsi | 2 +- arch/arm/boot/dts/usb_a9260.dts | 2 +- arch/arm/boot/dts/usb_a9263.dts | 2 +- 20 files changed, 20 insertions(+), 20 deletions(-)
diff --git a/arch/arm/boot/dts/at91-qil_a9260.dts b/arch/arm/boot/dts/at91-qil_a9260.dts index 9d26f99963483..5ccb3c139592d 100644 --- a/arch/arm/boot/dts/at91-qil_a9260.dts +++ b/arch/arm/boot/dts/at91-qil_a9260.dts @@ -108,7 +108,7 @@ status = "okay"; };
- shdwc@fffffd10 { + shdwc: poweroff@fffffd10 { atmel,wakeup-counter = <10>; atmel,wakeup-rtt-timer; }; diff --git a/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts b/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts index 52ddd0571f1c0..d0a6dbd377dfa 100644 --- a/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts +++ b/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts @@ -139,7 +139,7 @@ }; };
- shdwc@f8048010 { + poweroff@f8048010 { debounce-delay-us = <976>; atmel,wakeup-rtc-timer;
diff --git a/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts b/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts index bf1c9ca72a9f3..200b20515ab12 100644 --- a/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts +++ b/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts @@ -204,7 +204,7 @@ }; };
- shdwc@f8048010 { + poweroff@f8048010 { debounce-delay-us = <976>;
input@0 { diff --git a/arch/arm/boot/dts/at91-sama5d2_xplained.dts b/arch/arm/boot/dts/at91-sama5d2_xplained.dts index 2d53c47d7cc86..6680031387e8c 100644 --- a/arch/arm/boot/dts/at91-sama5d2_xplained.dts +++ b/arch/arm/boot/dts/at91-sama5d2_xplained.dts @@ -348,7 +348,7 @@ }; };
- shdwc@f8048010 { + poweroff@f8048010 { debounce-delay-us = <976>; atmel,wakeup-rtc-timer;
diff --git a/arch/arm/boot/dts/at91sam9260.dtsi b/arch/arm/boot/dts/at91sam9260.dtsi index 16e3b24b4dddb..35a007365b6a5 100644 --- a/arch/arm/boot/dts/at91sam9260.dtsi +++ b/arch/arm/boot/dts/at91sam9260.dtsi @@ -130,7 +130,7 @@ clocks = <&pmc PMC_TYPE_CORE PMC_SLOW>; };
- shdwc@fffffd10 { + shdwc: poweroff@fffffd10 { compatible = "atmel,at91sam9260-shdwc"; reg = <0xfffffd10 0x10>; clocks = <&pmc PMC_TYPE_CORE PMC_SLOW>; diff --git a/arch/arm/boot/dts/at91sam9260ek.dts b/arch/arm/boot/dts/at91sam9260ek.dts index bb72f050a4fef..720c15472c4a5 100644 --- a/arch/arm/boot/dts/at91sam9260ek.dts +++ b/arch/arm/boot/dts/at91sam9260ek.dts @@ -112,7 +112,7 @@ }; };
- shdwc@fffffd10 { + shdwc: poweroff@fffffd10 { atmel,wakeup-counter = <10>; atmel,wakeup-rtt-timer; }; diff --git a/arch/arm/boot/dts/at91sam9261.dtsi b/arch/arm/boot/dts/at91sam9261.dtsi index fe9ead867e2ab..528ffc6f6f962 100644 --- a/arch/arm/boot/dts/at91sam9261.dtsi +++ b/arch/arm/boot/dts/at91sam9261.dtsi @@ -614,7 +614,7 @@ clocks = <&slow_xtal>; };
- shdwc@fffffd10 { + poweroff@fffffd10 { compatible = "atmel,at91sam9260-shdwc"; reg = <0xfffffd10 0x10>; clocks = <&slow_xtal>; diff --git a/arch/arm/boot/dts/at91sam9263.dtsi b/arch/arm/boot/dts/at91sam9263.dtsi index ee5e6ed44dd40..75d8ff2d12c8a 100644 --- a/arch/arm/boot/dts/at91sam9263.dtsi +++ b/arch/arm/boot/dts/at91sam9263.dtsi @@ -158,7 +158,7 @@ clocks = <&slow_xtal>; };
- shdwc@fffffd10 { + poweroff@fffffd10 { compatible = "atmel,at91sam9260-shdwc"; reg = <0xfffffd10 0x10>; clocks = <&slow_xtal>; diff --git a/arch/arm/boot/dts/at91sam9g20ek_common.dtsi b/arch/arm/boot/dts/at91sam9g20ek_common.dtsi index 024af2db638eb..565b99e79c520 100644 --- a/arch/arm/boot/dts/at91sam9g20ek_common.dtsi +++ b/arch/arm/boot/dts/at91sam9g20ek_common.dtsi @@ -126,7 +126,7 @@ }; };
- shdwc@fffffd10 { + shdwc: poweroff@fffffd10 { atmel,wakeup-counter = <10>; atmel,wakeup-rtt-timer; }; diff --git a/arch/arm/boot/dts/at91sam9g45.dtsi b/arch/arm/boot/dts/at91sam9g45.dtsi index 498cb92b29f96..7cccc606e36cd 100644 --- a/arch/arm/boot/dts/at91sam9g45.dtsi +++ b/arch/arm/boot/dts/at91sam9g45.dtsi @@ -152,7 +152,7 @@ };
- shdwc@fffffd10 { + poweroff@fffffd10 { compatible = "atmel,at91sam9rl-shdwc"; reg = <0xfffffd10 0x10>; clocks = <&clk32k>; diff --git a/arch/arm/boot/dts/at91sam9n12.dtsi b/arch/arm/boot/dts/at91sam9n12.dtsi index c2e7460fb7ff6..16a9a908985da 100644 --- a/arch/arm/boot/dts/at91sam9n12.dtsi +++ b/arch/arm/boot/dts/at91sam9n12.dtsi @@ -140,7 +140,7 @@ clocks = <&pmc PMC_TYPE_CORE PMC_MCK>; };
- shdwc@fffffe10 { + poweroff@fffffe10 { compatible = "atmel,at91sam9x5-shdwc"; reg = <0xfffffe10 0x10>; clocks = <&clk32k>; diff --git a/arch/arm/boot/dts/at91sam9rl.dtsi b/arch/arm/boot/dts/at91sam9rl.dtsi index d7e8a115c916c..3d089ffbe1626 100644 --- a/arch/arm/boot/dts/at91sam9rl.dtsi +++ b/arch/arm/boot/dts/at91sam9rl.dtsi @@ -778,7 +778,7 @@ clocks = <&clk32k>; };
- shdwc@fffffd10 { + poweroff@fffffd10 { compatible = "atmel,at91sam9260-shdwc"; reg = <0xfffffd10 0x10>; clocks = <&clk32k>; diff --git a/arch/arm/boot/dts/at91sam9x5.dtsi b/arch/arm/boot/dts/at91sam9x5.dtsi index 0123ee47151cb..a1fed912f2eea 100644 --- a/arch/arm/boot/dts/at91sam9x5.dtsi +++ b/arch/arm/boot/dts/at91sam9x5.dtsi @@ -141,7 +141,7 @@ clocks = <&clk32k>; };
- shutdown_controller: shdwc@fffffe10 { + shutdown_controller: poweroff@fffffe10 { compatible = "atmel,at91sam9x5-shdwc"; reg = <0xfffffe10 0x10>; clocks = <&clk32k>; diff --git a/arch/arm/boot/dts/sam9x60.dtsi b/arch/arm/boot/dts/sam9x60.dtsi index c8bedfa987e57..8b53997675e75 100644 --- a/arch/arm/boot/dts/sam9x60.dtsi +++ b/arch/arm/boot/dts/sam9x60.dtsi @@ -1297,7 +1297,7 @@ clocks = <&clk32k 0>; };
- shutdown_controller: shdwc@fffffe10 { + shutdown_controller: poweroff@fffffe10 { compatible = "microchip,sam9x60-shdwc"; reg = <0xfffffe10 0x10>; clocks = <&clk32k 0>; diff --git a/arch/arm/boot/dts/sama5d2.dtsi b/arch/arm/boot/dts/sama5d2.dtsi index 5f632e3f039e6..8ae270fabfa82 100644 --- a/arch/arm/boot/dts/sama5d2.dtsi +++ b/arch/arm/boot/dts/sama5d2.dtsi @@ -680,7 +680,7 @@ clocks = <&clk32k>; };
- shutdown_controller: shdwc@f8048010 { + shutdown_controller: poweroff@f8048010 { compatible = "atmel,sama5d2-shdwc"; reg = <0xf8048010 0x10>; clocks = <&clk32k>; diff --git a/arch/arm/boot/dts/sama5d3.dtsi b/arch/arm/boot/dts/sama5d3.dtsi index 0eebf6c760b3d..d9e66700d1c20 100644 --- a/arch/arm/boot/dts/sama5d3.dtsi +++ b/arch/arm/boot/dts/sama5d3.dtsi @@ -1016,7 +1016,7 @@ clocks = <&clk32k>; };
- shutdown_controller: shutdown-controller@fffffe10 { + shutdown_controller: poweroff@fffffe10 { compatible = "atmel,at91sam9x5-shdwc"; reg = <0xfffffe10 0x10>; clocks = <&clk32k>; diff --git a/arch/arm/boot/dts/sama5d4.dtsi b/arch/arm/boot/dts/sama5d4.dtsi index de6c829692327..41284e013f531 100644 --- a/arch/arm/boot/dts/sama5d4.dtsi +++ b/arch/arm/boot/dts/sama5d4.dtsi @@ -740,7 +740,7 @@ clocks = <&clk32k>; };
- shutdown_controller: shdwc@fc068610 { + shutdown_controller: poweroff@fc068610 { compatible = "atmel,at91sam9x5-shdwc"; reg = <0xfc068610 0x10>; clocks = <&clk32k>; diff --git a/arch/arm/boot/dts/sama7g5.dtsi b/arch/arm/boot/dts/sama7g5.dtsi index b55adb96a06ec..9642a42d84e60 100644 --- a/arch/arm/boot/dts/sama7g5.dtsi +++ b/arch/arm/boot/dts/sama7g5.dtsi @@ -257,7 +257,7 @@ clocks = <&clk32k 0>; };
- shdwc: shdwc@e001d010 { + shdwc: poweroff@e001d010 { compatible = "microchip,sama7g5-shdwc", "syscon"; reg = <0xe001d010 0x10>; clocks = <&clk32k 0>; diff --git a/arch/arm/boot/dts/usb_a9260.dts b/arch/arm/boot/dts/usb_a9260.dts index 6cfa83921ac26..66f8da89007db 100644 --- a/arch/arm/boot/dts/usb_a9260.dts +++ b/arch/arm/boot/dts/usb_a9260.dts @@ -22,7 +22,7 @@
ahb { apb { - shdwc@fffffd10 { + shdwc: poweroff@fffffd10 { atmel,wakeup-counter = <10>; atmel,wakeup-rtt-timer; }; diff --git a/arch/arm/boot/dts/usb_a9263.dts b/arch/arm/boot/dts/usb_a9263.dts index b6cb9cdf81973..45745915b2e16 100644 --- a/arch/arm/boot/dts/usb_a9263.dts +++ b/arch/arm/boot/dts/usb_a9263.dts @@ -67,7 +67,7 @@ }; };
- shdwc@fffffd10 { + poweroff@fffffd10 { atmel,wakeup-counter = <10>; atmel,wakeup-rtt-timer; };
From: Durai Manickam KR durai.manickamkr@microchip.com
[ Upstream commit f6ad3c13f1b8c4e785cb7bd423887197142f47b0 ]
Remove the dbgu compatible strings from the UART submodules of the flexcom nodes for proper SoC detection.
Fixes: 99c808335877 ("ARM: dts: at91: sam9x60: Add missing flexcom definitions") Signed-off-by: Durai Manickam KR durai.manickamkr@microchip.com Link: https://lore.kernel.org/r/20230712100042.317856-1-durai.manickamkr@microchip... Signed-off-by: Arnd Bergmann arnd@arndb.de Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/boot/dts/sam9x60.dtsi | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/arch/arm/boot/dts/sam9x60.dtsi b/arch/arm/boot/dts/sam9x60.dtsi index 8b53997675e75..73d570a172690 100644 --- a/arch/arm/boot/dts/sam9x60.dtsi +++ b/arch/arm/boot/dts/sam9x60.dtsi @@ -172,7 +172,7 @@ status = "disabled";
uart4: serial@200 { - compatible = "microchip,sam9x60-dbgu", "microchip,sam9x60-usart", "atmel,at91sam9260-dbgu", "atmel,at91sam9260-usart"; + compatible = "microchip,sam9x60-usart", "atmel,at91sam9260-usart"; reg = <0x200 0x200>; interrupts = <13 IRQ_TYPE_LEVEL_HIGH 7>; dmas = <&dma0 @@ -240,7 +240,7 @@ status = "disabled";
uart5: serial@200 { - compatible = "microchip,sam9x60-dbgu", "microchip,sam9x60-usart", "atmel,at91sam9260-dbgu", "atmel,at91sam9260-usart"; + compatible = "microchip,sam9x60-usart", "atmel,at91sam9260-usart"; reg = <0x200 0x200>; atmel,usart-mode = <AT91_USART_MODE_SERIAL>; interrupts = <14 IRQ_TYPE_LEVEL_HIGH 7>; @@ -370,7 +370,7 @@ status = "disabled";
uart11: serial@200 { - compatible = "microchip,sam9x60-dbgu", "microchip,sam9x60-usart", "atmel,at91sam9260-dbgu", "atmel,at91sam9260-usart"; + compatible = "microchip,sam9x60-usart", "atmel,at91sam9260-usart"; reg = <0x200 0x200>; interrupts = <32 IRQ_TYPE_LEVEL_HIGH 7>; dmas = <&dma0 @@ -419,7 +419,7 @@ status = "disabled";
uart12: serial@200 { - compatible = "microchip,sam9x60-dbgu", "microchip,sam9x60-usart", "atmel,at91sam9260-dbgu", "atmel,at91sam9260-usart"; + compatible = "microchip,sam9x60-usart", "atmel,at91sam9260-usart"; reg = <0x200 0x200>; interrupts = <33 IRQ_TYPE_LEVEL_HIGH 7>; dmas = <&dma0 @@ -576,7 +576,7 @@ status = "disabled";
uart6: serial@200 { - compatible = "microchip,sam9x60-dbgu", "microchip,sam9x60-usart", "atmel,at91sam9260-dbgu", "atmel,at91sam9260-usart"; + compatible = "microchip,sam9x60-usart", "atmel,at91sam9260-usart"; reg = <0x200 0x200>; interrupts = <9 IRQ_TYPE_LEVEL_HIGH 7>; dmas = <&dma0 @@ -625,7 +625,7 @@ status = "disabled";
uart7: serial@200 { - compatible = "microchip,sam9x60-dbgu", "microchip,sam9x60-usart", "atmel,at91sam9260-dbgu", "atmel,at91sam9260-usart"; + compatible = "microchip,sam9x60-usart", "atmel,at91sam9260-usart"; reg = <0x200 0x200>; interrupts = <10 IRQ_TYPE_LEVEL_HIGH 7>; dmas = <&dma0 @@ -674,7 +674,7 @@ status = "disabled";
uart8: serial@200 { - compatible = "microchip,sam9x60-dbgu", "microchip,sam9x60-usart", "atmel,at91sam9260-dbgu", "atmel,at91sam9260-usart"; + compatible = "microchip,sam9x60-usart", "atmel,at91sam9260-usart"; reg = <0x200 0x200>; interrupts = <11 IRQ_TYPE_LEVEL_HIGH 7>; dmas = <&dma0 @@ -723,7 +723,7 @@ status = "disabled";
uart0: serial@200 { - compatible = "microchip,sam9x60-dbgu", "microchip,sam9x60-usart", "atmel,at91sam9260-dbgu", "atmel,at91sam9260-usart"; + compatible = "microchip,sam9x60-usart", "atmel,at91sam9260-usart"; reg = <0x200 0x200>; interrupts = <5 IRQ_TYPE_LEVEL_HIGH 7>; dmas = <&dma0 @@ -791,7 +791,7 @@ status = "disabled";
uart1: serial@200 { - compatible = "microchip,sam9x60-dbgu", "microchip,sam9x60-usart", "atmel,at91sam9260-dbgu", "atmel,at91sam9260-usart"; + compatible = "microchip,sam9x60-usart", "atmel,at91sam9260-usart"; reg = <0x200 0x200>; interrupts = <6 IRQ_TYPE_LEVEL_HIGH 7>; dmas = <&dma0 @@ -859,7 +859,7 @@ status = "disabled";
uart2: serial@200 { - compatible = "microchip,sam9x60-dbgu", "microchip,sam9x60-usart", "atmel,at91sam9260-dbgu", "atmel,at91sam9260-usart"; + compatible = "microchip,sam9x60-usart", "atmel,at91sam9260-usart"; reg = <0x200 0x200>; interrupts = <7 IRQ_TYPE_LEVEL_HIGH 7>; dmas = <&dma0 @@ -927,7 +927,7 @@ status = "disabled";
uart3: serial@200 { - compatible = "microchip,sam9x60-dbgu", "microchip,sam9x60-usart", "atmel,at91sam9260-dbgu", "atmel,at91sam9260-usart"; + compatible = "microchip,sam9x60-usart", "atmel,at91sam9260-usart"; reg = <0x200 0x200>; interrupts = <8 IRQ_TYPE_LEVEL_HIGH 7>; dmas = <&dma0 @@ -1050,7 +1050,7 @@ status = "disabled";
uart9: serial@200 { - compatible = "microchip,sam9x60-dbgu", "microchip,sam9x60-usart", "atmel,at91sam9260-dbgu", "atmel,at91sam9260-usart"; + compatible = "microchip,sam9x60-usart", "atmel,at91sam9260-usart"; reg = <0x200 0x200>; interrupts = <15 IRQ_TYPE_LEVEL_HIGH 7>; dmas = <&dma0 @@ -1099,7 +1099,7 @@ status = "disabled";
uart10: serial@200 { - compatible = "microchip,sam9x60-dbgu", "microchip,sam9x60-usart", "atmel,at91sam9260-dbgu", "atmel,at91sam9260-usart"; + compatible = "microchip,sam9x60-usart", "atmel,at91sam9260-usart"; reg = <0x200 0x200>; interrupts = <16 IRQ_TYPE_LEVEL_HIGH 7>; dmas = <&dma0
From: ndesaulniers@google.com ndesaulniers@google.com
[ Upstream commit 79e8328e5acbe691bbde029a52c89d70dcbc22f3 ]
Compiling big-endian targets with Clang produces the diagnostic:
fs/namei.c:2173:13: warning: use of bitwise '|' with boolean operands [-Wbitwise-instead-of-logical] } while (!(has_zero(a, &adata, &constants) | has_zero(b, &bdata, &constants))); ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ || fs/namei.c:2173:13: note: cast one or both operands to int to silence this warning
It appears that when has_zero was introduced, two definitions were produced with different signatures (in particular different return types).
Looking at the usage in hash_name() in fs/namei.c, I suspect that has_zero() is meant to be invoked twice per while loop iteration; using a logical-or would short-circuit the second call, so `bdata` would not be updated whenever `a` already contains a zero byte. So I think it's preferable to always return an unsigned long rather than a bool, instead of updating the while loop in hash_name() to use a logical-or rather than a bitwise-or.
[ Also changed powerpc version to do the same - Linus ]
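To see why the bitwise form matters in hash_name(), here is a minimal user-space sketch (illustrative only, not kernel code; has_zero_toy() is a made-up stand-in for has_zero()): with '|' both calls run on every iteration and both side effects happen, while '||' short-circuits and leaves the second output stale.

#include <stdio.h>

/* Toy stand-in for has_zero(): reports whether val is zero and records
 * val through *data as a side effect that must not be skipped. */
static unsigned long has_zero_toy(unsigned long val, unsigned long *data)
{
	*data = val;
	return val == 0;
}

int main(void)
{
	unsigned long a = 0, b = 42, adata = 1, bdata = 1;

	/* Bitwise '|': both operands are evaluated, bdata is updated. */
	if (has_zero_toy(a, &adata) | has_zero_toy(b, &bdata))
		printf("bitwise: adata=%lu bdata=%lu\n", adata, bdata);

	adata = 1;
	bdata = 1;
	/* Logical '||': short-circuits on the first true operand, so the
	 * second call never runs and bdata keeps its stale value. */
	if (has_zero_toy(a, &adata) || has_zero_toy(b, &bdata))
		printf("logical: adata=%lu bdata=%lu\n", adata, bdata);

	return 0;
}

Making has_zero() return unsigned long keeps both side effects in the existing bitwise expression while also matching the non-boolean operand types the warning is complaining about.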
Link: https://github.com/ClangBuiltLinux/linux/issues/1832 Link: https://lore.kernel.org/lkml/20230801-bitwise-v1-1-799bec468dc4@google.com/ Fixes: 36126f8f2ed8 ("word-at-a-time: make the interfaces truly generic") Debugged-by: Nathan Chancellor nathan@kernel.org Signed-off-by: Nick Desaulniers ndesaulniers@google.com Acked-by: Heiko Carstens hca@linux.ibm.com Cc: Arnd Bergmann arnd@arndb.de Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/powerpc/include/asm/word-at-a-time.h | 2 +- include/asm-generic/word-at-a-time.h | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/word-at-a-time.h b/arch/powerpc/include/asm/word-at-a-time.h index 46c31fb8748d5..30a12d2086871 100644 --- a/arch/powerpc/include/asm/word-at-a-time.h +++ b/arch/powerpc/include/asm/word-at-a-time.h @@ -34,7 +34,7 @@ static inline long find_zero(unsigned long mask) return leading_zero_bits >> 3; }
-static inline bool has_zero(unsigned long val, unsigned long *data, const struct word_at_a_time *c) +static inline unsigned long has_zero(unsigned long val, unsigned long *data, const struct word_at_a_time *c) { unsigned long rhs = val | c->low_bits; *data = rhs; diff --git a/include/asm-generic/word-at-a-time.h b/include/asm-generic/word-at-a-time.h index 20c93f08c9933..95a1d214108a5 100644 --- a/include/asm-generic/word-at-a-time.h +++ b/include/asm-generic/word-at-a-time.h @@ -38,7 +38,7 @@ static inline long find_zero(unsigned long mask) return (mask >> 8) ? byte : byte + 1; }
-static inline bool has_zero(unsigned long val, unsigned long *data, const struct word_at_a_time *c) +static inline unsigned long has_zero(unsigned long val, unsigned long *data, const struct word_at_a_time *c) { unsigned long rhs = val | c->low_bits; *data = rhs;
From: Sven Schnelle svens@linux.ibm.com
[ Upstream commit edc1e4b6e26536868ef819a735e04a5b32c10589 ]
Since commit bb1520d581a3 ("s390/mm: start kernel with DAT enabled") the kernel crashes early during boot when debug pagealloc is enabled:
mem auto-init: stack:off, heap alloc:off, heap free:off addressing exception: 0005 ilc:2 [#1] SMP DEBUG_PAGEALLOC Modules linked in: CPU: 0 PID: 0 Comm: swapper Not tainted 6.5.0-rc3-09759-gc5666c912155 #630 [..] Krnl Code: 00000000001325f6: ec5600248064 cgrj %r5,%r6,8,000000000013263e 00000000001325fc: eb880002000c srlg %r8,%r8,2 #0000000000132602: b2210051 ipte %r5,%r1,%r0,0 >0000000000132606: b90400d1 lgr %r13,%r1 000000000013260a: 41605008 la %r6,8(%r5) 000000000013260e: a7db1000 aghi %r13,4096 0000000000132612: b221006d ipte %r6,%r13,%r0,0 0000000000132616: e3d0d0000171 lay %r13,4096(%r13)
Call Trace: __kernel_map_pages+0x14e/0x320 __free_pages_ok+0x23a/0x5a8) free_low_memory_core_early+0x214/0x2c8 memblock_free_all+0x28/0x58 mem_init+0xb6/0x228 mm_core_init+0xb6/0x3b0 start_kernel+0x1d2/0x5a8 startup_continue+0x36/0x40 Kernel panic - not syncing: Fatal exception: panic_on_oops
This is caused by using large mappings on machines with EDAT1/EDAT2. Add the code to split the mappings into 4k pages if debug pagealloc is enabled by CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT or the debug_pagealloc kernel command line option.
Fixes: bb1520d581a3 ("s390/mm: start kernel with DAT enabled") Signed-off-by: Sven Schnelle svens@linux.ibm.com Reviewed-by: Heiko Carstens hca@linux.ibm.com Signed-off-by: Heiko Carstens hca@linux.ibm.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/s390/mm/vmem.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c index b9dcb4ae6c59a..05f4912380fac 100644 --- a/arch/s390/mm/vmem.c +++ b/arch/s390/mm/vmem.c @@ -761,6 +761,8 @@ void __init vmem_map_init(void) if (static_key_enabled(&cpu_has_bear)) set_memory_nx(0, 1); set_memory_nx(PAGE_SIZE, 1); + if (debug_pagealloc_enabled()) + set_memory_4k(0, ident_map_size >> PAGE_SHIFT);
pr_info("Write protected kernel read-only data: %luk\n", (unsigned long)(__end_rodata - _stext) >> 10);
From: Heiko Carstens hca@linux.ibm.com
[ Upstream commit 0c02cc576eac161601927b41634f80bfd55bfa9e ]
Commit 9fb6c9b3fea1 ("s390/sthyi: add cache to store hypervisor info") added cache handling for store hypervisor info. This also changed the possible return code for sthyi_fill().
Instead of only returning a condition code like the sthyi instruction would do, it can now also return a negative error value (-ENOMEM). handle_sthyi() was not changed accordingly. In case of an error, the negative error value would incorrectly be injected into the guest PSW.
Add proper error handling to prevent this, and update the comment which describes the possible return values of sthyi_fill().
Fixes: 9fb6c9b3fea1 ("s390/sthyi: add cache to store hypervisor info") Reviewed-by: Christian Borntraeger borntraeger@linux.ibm.com Link: https://lore.kernel.org/r/20230727182939.2050744-1-hca@linux.ibm.com Signed-off-by: Heiko Carstens hca@linux.ibm.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/s390/kernel/sthyi.c | 6 +++--- arch/s390/kvm/intercept.c | 9 ++++++--- 2 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/arch/s390/kernel/sthyi.c b/arch/s390/kernel/sthyi.c index 4d141e2c132e5..2ea7f208f0e73 100644 --- a/arch/s390/kernel/sthyi.c +++ b/arch/s390/kernel/sthyi.c @@ -459,9 +459,9 @@ static int sthyi_update_cache(u64 *rc) * * Fills the destination with system information returned by the STHYI * instruction. The data is generated by emulation or execution of STHYI, - * if available. The return value is the condition code that would be - * returned, the rc parameter is the return code which is passed in - * register R2 + 1. + * if available. The return value is either a negative error value or + * the condition code that would be returned, the rc parameter is the + * return code which is passed in register R2 + 1. */ int sthyi_fill(void *dst, u64 *rc) { diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c index 2cda8d9d7c6ef..f817006f9f936 100644 --- a/arch/s390/kvm/intercept.c +++ b/arch/s390/kvm/intercept.c @@ -389,8 +389,8 @@ static int handle_partial_execution(struct kvm_vcpu *vcpu) */ int handle_sthyi(struct kvm_vcpu *vcpu) { - int reg1, reg2, r = 0; - u64 code, addr, cc = 0, rc = 0; + int reg1, reg2, cc = 0, r = 0; + u64 code, addr, rc = 0; struct sthyi_sctns *sctns = NULL;
if (!test_kvm_facility(vcpu->kvm, 74)) @@ -421,7 +421,10 @@ int handle_sthyi(struct kvm_vcpu *vcpu) return -ENOMEM;
cc = sthyi_fill(sctns, &rc); - + if (cc < 0) { + free_page((unsigned long)sctns); + return cc; + } out: if (!cc) { if (kvm_s390_pv_cpu_is_protected(vcpu)) {
From: Gao Xiang hsiangkao@linux.alibaba.com
[ Upstream commit 94c43de73521d8ed7ebcfc6191d9dace1cbf7caa ]
When handling deduplicated compressed data, there can be multiple decompressed extents pointing to the same compressed data in one shot.
In such cases, the bvecs which belong to the longest extent will be selected as the primary bvecs for real decompressors to decode and the other duplicated bvecs will be directly copied from the primary bvecs.
Previously, only relative offsets of the longest extent were checked to decompress the primary bvecs. On rare occasions, it can be incorrect if there are several extents with the same start relative offset. As a result, some short bvecs could be selected for decompression and then cause data corruption.
For example, as Shijie Sun reported off-list, considering the following extents of a file:
 117:   903345..  915250 | 11905 :  385024..  389120 | 4096
 ...
 119:   919729..  930323 | 10594 :  385024..  389120 | 4096
 ...
 124:   968881..  980786 | 11905 :  385024..  389120 | 4096
The start relative offset is the same: 2225, but extent 119 (919729.. 930323) is shorter than the others.
Let's restrict the bvec length in addition to the start offset if bvecs are not full.
Reported-by: Shijie Sun sunshijie@xiaomi.com Fixes: 5c2a64252c5d ("erofs: introduce partial-referenced pclusters") Tested-by: Shijie Sun sunshijie@xiaomi.com Reviewed-by: Yue Hu huyue2@coolpad.com Reviewed-by: Chao Yu chao@kernel.org Signed-off-by: Gao Xiang hsiangkao@linux.alibaba.com Link: https://lore.kernel.org/r/20230719065459.60083-1-hsiangkao@linux.alibaba.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/erofs/zdata.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c index 4a1c238600c52..470988bb7867e 100644 --- a/fs/erofs/zdata.c +++ b/fs/erofs/zdata.c @@ -1110,10 +1110,11 @@ static void z_erofs_do_decompressed_bvec(struct z_erofs_decompress_backend *be, struct z_erofs_bvec *bvec) { struct z_erofs_bvec_item *item; + unsigned int pgnr;
- if (!((bvec->offset + be->pcl->pageofs_out) & ~PAGE_MASK)) { - unsigned int pgnr; - + if (!((bvec->offset + be->pcl->pageofs_out) & ~PAGE_MASK) && + (bvec->end == PAGE_SIZE || + bvec->offset + bvec->end == be->pcl->length)) { pgnr = (bvec->offset + be->pcl->pageofs_out) >> PAGE_SHIFT; DBG_BUGON(pgnr >= be->nr_pages); if (!be->decompressed_pages[pgnr]) {
From: Haixin Yu yuhaixin.yhx@linux.alibaba.com
[ Upstream commit 9754353d0ab123d71bf572a483ecc8b330ef36a3 ]
Commit f8ad6018ce3c065a ("perf pmu: Remove duplication around EVENT_SOURCE_DEVICE_PATH") uses sysfs__read_ull() to read a full sysfs path, which can never succeed because the path already contains the sysfs mount point, which sysfs__read_ull() adds again.
Fix it by reading the file using filename__read_ull(), which does not add the sysfs mount point.
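For illustration only (a user-space sketch with a made-up sysfs_path() helper and an example path, not the perf tools code): prepending the sysfs mount point to a path that is already absolute produces a filename that cannot exist, which is why reading the full path through the sysfs helper always fails.

#include <stdio.h>

#define SYSFS_MNT "/sys"

/* Made-up helper that mimics joining the sysfs mount point with a
 * relative entry, the way a sysfs read helper would. */
static void sysfs_path(char *buf, size_t len, const char *entry)
{
	snprintf(buf, len, "%s/%s", SYSFS_MNT, entry);
}

int main(void)
{
	char buf[256];

	/* Intended use: a path relative to the sysfs mount point. */
	sysfs_path(buf, sizeof(buf), "bus/event_source/devices/cpu/caps/slots");
	printf("%s\n", buf);	/* /sys/bus/... */

	/* Buggy use: an already-absolute path gets the mount point twice. */
	sysfs_path(buf, sizeof(buf), "/sys/bus/event_source/devices/cpu/caps/slots");
	printf("%s\n", buf);	/* /sys//sys/bus/... -- never exists */

	return 0;
}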
Fixes: f8ad6018ce3c065a ("perf pmu: Remove duplication around EVENT_SOURCE_DEVICE_PATH") Signed-off-by: Haixin Yu yuhaixin.yhx@linux.alibaba.com Tested-by: Jing Zhang renyu.zj@linux.alibaba.com Cc: Adrian Hunter adrian.hunter@intel.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Ian Rogers irogers@google.com Cc: Ingo Molnar mingo@redhat.com Cc: James Clark james.clark@arm.com Cc: Jiri Olsa jolsa@kernel.org Cc: John Garry john.g.garry@oracle.com Cc: Leo Yan leo.yan@linaro.org Cc: Mark Rutland mark.rutland@arm.com Cc: Mike Leach mike.leach@linaro.org Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Cc: Will Deacon will@kernel.org Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/ZL4G7rWXkfv-Ectq@B-Q60VQ05P-2326.local Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/arch/arm64/util/pmu.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/tools/perf/arch/arm64/util/pmu.c b/tools/perf/arch/arm64/util/pmu.c index ef1ed645097c6..ce0d1c7578348 100644 --- a/tools/perf/arch/arm64/util/pmu.c +++ b/tools/perf/arch/arm64/util/pmu.c @@ -56,10 +56,11 @@ double perf_pmu__cpu_slots_per_cycle(void) perf_pmu__pathname_scnprintf(path, sizeof(path), pmu->name, "caps/slots"); /* - * The value of slots is not greater than 32 bits, but sysfs__read_int - * can't read value with 0x prefix, so use sysfs__read_ull instead. + * The value of slots is not greater than 32 bits, but + * filename__read_int can't read value with 0x prefix, + * so use filename__read_ull instead. */ - sysfs__read_ull(path, &slots); + filename__read_ull(path, &slots); }
return slots ? (double)slots : NAN;
From: Ilan Peer ilan.peer@intel.com
[ Upstream commit fd7f08d92fcd7cc3eca0dd6c853f722a4c6176df ]
The reporter noticed a warning when running iwlwifi:
WARNING: CPU: 8 PID: 659 at mm/page_alloc.c:4453 __alloc_pages+0x329/0x340
As cfg80211_parse_colocated_ap() is not expected to return a negative value, return 0 instead of a negative value when cfg80211_calc_short_ssid() fails.
Fixes: c8cb5b854b40f ("nl80211/cfg80211: support 6 GHz scanning") Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217675 Signed-off-by: Ilan Peer ilan.peer@intel.com Signed-off-by: Kalle Valo kvalo@kernel.org Link: https://lore.kernel.org/r/20230723201043.3007430-1-ilan.peer@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/wireless/scan.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/wireless/scan.c b/net/wireless/scan.c index 396c63431e1f3..e9a3b0f724f18 100644 --- a/net/wireless/scan.c +++ b/net/wireless/scan.c @@ -640,7 +640,7 @@ static int cfg80211_parse_colocated_ap(const struct cfg80211_bss_ies *ies,
ret = cfg80211_calc_short_ssid(ies, &ssid_elem, &s_ssid_tmp); if (ret) - return ret; + return 0;
/* RNR IE may contain more than one NEIGHBOR_AP_INFO */ while (pos + sizeof(*ap_info) <= end) {
From: Zhengchao Shao shaozhengchao@huawei.com
[ Upstream commit aeb660171b0663847fa04806a96302ac6112ad26 ]
In function macsec_fs_tx_create_crypto_table_groups(), when the ft->g memory is successfully allocated but the 'in' memory allocation fails, the memory pointed to by ft->g is released once. Then, in function macsec_fs_tx_create(), macsec_fs_tx_destroy() is called and releases the memory pointed to by ft->g again. This causes a double-free problem.
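Purely as background (a user-space analogue with toy names, not the mlx5 driver code): the usual defence against this kind of double free is to set the pointer to NULL right after freeing it, so that a later cleanup path freeing it again becomes a harmless no-op; that is what the one-line ft->g = NULL in the patch below does.

#include <stdlib.h>

struct toy_flow_table {
	void *g;	/* group array owned by the table */
};

/* Error-path helper: free the group array and clear the pointer so a
 * later destroy path does not free it a second time. */
static void toy_groups_free(struct toy_flow_table *ft)
{
	free(ft->g);
	ft->g = NULL;	/* free(NULL) is a defined no-op */
}

/* Generic destroy path that may still run after the error path. */
static void toy_destroy(struct toy_flow_table *ft)
{
	free(ft->g);	/* safe: already NULL after toy_groups_free() */
	ft->g = NULL;
}

int main(void)
{
	struct toy_flow_table ft = { .g = malloc(64) };

	toy_groups_free(&ft);	/* error path runs first */
	toy_destroy(&ft);	/* later cleanup: no double free */
	return 0;
}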
Fixes: e467b283ffd5 ("net/mlx5e: Add MACsec TX steering rules") Signed-off-by: Zhengchao Shao shaozhengchao@huawei.com Reviewed-by: Simon Horman simon.horman@corigine.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec_fs.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec_fs.c index 7fc901a6ec5fc..414e285848813 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec_fs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec_fs.c @@ -161,6 +161,7 @@ static int macsec_fs_tx_create_crypto_table_groups(struct mlx5e_flow_table *ft)
if (!in) { kfree(ft->g); + ft->g = NULL; return -ENOMEM; }
From: Zhengchao Shao shaozhengchao@huawei.com
[ Upstream commit 5dd77585dd9d0e03dd1bceb95f0269a7eaf6b936 ]
When mlx5_cmd_exec() fails in mlx5dr_cmd_create_reformat_ctx(), the memory pointed to by 'in' is not released, which causes a memory leak. Move the memory release after mlx5_cmd_exec().
Fixes: 1d9186476e12 ("net/mlx5: DR, Add direct rule command utilities") Signed-off-by: Zhengchao Shao shaozhengchao@huawei.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c index 1aa525e509f10..293d2edd03d59 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c @@ -562,11 +562,12 @@ int mlx5dr_cmd_create_reformat_ctx(struct mlx5_core_dev *mdev,
err = mlx5_cmd_exec(mdev, in, inlen, out, sizeof(out)); if (err) - return err; + goto err_free_in;
*reformat_id = MLX5_GET(alloc_packet_reformat_context_out, out, packet_reformat_id); - kvfree(in);
+err_free_in: + kvfree(in); return err; }
From: Zhengchao Shao shaozhengchao@huawei.com
[ Upstream commit c6cf0b6097bf1bf1b2a89b521e9ecd26b581a93a ]
The memory pointed to by the priv->rx_res pointer is not freed in the error path of mlx5e_init_rep_rx, which can lead to a memory leak. Fix by freeing the memory in the error path, thereby making the error path identical to mlx5e_cleanup_rep_rx().
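As background on the pattern being fixed (an illustrative user-space sketch with toy resources, not the mlx5 code): the kernel's goto-unwind idiom releases resources in reverse order of acquisition, and every acquisition that precedes a possible failure needs a matching error label, which is what the added err_rx_res_free label provides.

#include <stdlib.h>

/* Toy init path using the goto-unwind error-handling idiom: each
 * failure jumps to a label that releases everything acquired so far,
 * in reverse order of acquisition. */
static int toy_init(void)
{
	void *rx_res, *drop_rq, *steering;

	rx_res = malloc(32);		/* step 1 */
	if (!rx_res)
		return -1;

	drop_rq = malloc(32);		/* step 2 */
	if (!drop_rq)
		goto err_rx_res_free;	/* undo step 1 */

	steering = malloc(32);		/* step 3 */
	if (!steering)
		goto err_close_drop_rq;	/* undo steps 2 and 1 */

	free(steering);
	free(drop_rq);
	free(rx_res);
	return 0;

err_close_drop_rq:
	free(drop_rq);
err_rx_res_free:
	free(rx_res);
	return -1;
}

int main(void)
{
	return toy_init() ? 1 : 0;
}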
Fixes: af8bbf730068 ("net/mlx5e: Convert mlx5e_flow_steering member of mlx5e_priv to pointer") Signed-off-by: Zhengchao Shao shaozhengchao@huawei.com Reviewed-by: Simon Horman simon.horman@corigine.com Reviewed-by: Tariq Toukan tariqt@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c index 3e7041bd5705e..95d8714765f70 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c @@ -964,7 +964,7 @@ static int mlx5e_init_rep_rx(struct mlx5e_priv *priv) err = mlx5e_open_drop_rq(priv, &priv->drop_rq); if (err) { mlx5_core_err(mdev, "open drop rq failed, %d\n", err); - return err; + goto err_rx_res_free; }
err = mlx5e_rx_res_init(priv->rx_res, priv->mdev, 0, @@ -998,6 +998,7 @@ static int mlx5e_init_rep_rx(struct mlx5e_priv *priv) mlx5e_rx_res_destroy(priv->rx_res); err_close_drop_rq: mlx5e_close_drop_rq(&priv->drop_rq); +err_rx_res_free: mlx5e_rx_res_free(priv->rx_res); priv->rx_res = NULL; err_free_fs:
From: Yuanjun Gong ruc_gongyuanjun@163.com
[ Upstream commit e5bcb7564d3bd0c88613c76963c5349be9c511c5 ]
mlx5e_ipsec_remove_trailer() should return an error code if pskb_trim() returns an unexpected value.
Fixes: 2ac9cfe78223 ("net/mlx5e: IPSec, Add Innova IPSec offload TX data path") Signed-off-by: Yuanjun Gong ruc_gongyuanjun@163.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c index eab5bc718771f..8d995e3048692 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c @@ -58,7 +58,9 @@ static int mlx5e_ipsec_remove_trailer(struct sk_buff *skb, struct xfrm_state *x)
trailer_len = alen + plen + 2;
- pskb_trim(skb, skb->len - trailer_len); + ret = pskb_trim(skb, skb->len - trailer_len); + if (unlikely(ret)) + return ret; if (skb->protocol == htons(ETH_P_IP)) { ipv4hdr->tot_len = htons(ntohs(ipv4hdr->tot_len) - trailer_len); ip_send_check(ipv4hdr);
From: Shay Drory shayd@nvidia.com
[ Upstream commit 0507f2c8be0d345fe7014147c027cea6dc1c00a4 ]
Currently, whenever a user sets the migratable port function attribute, the driver always turns the migratable capability on. Fix it by honoring the user input.
Fixes: e5b9642a33be ("net/mlx5: E-Switch, Implement devlink port function cmds to control migratable") Signed-off-by: Shay Drory shayd@nvidia.com Reviewed-by: Roi Dayan roid@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c index 8d19c20d3447e..178880ba7c7b3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c @@ -4073,7 +4073,7 @@ int mlx5_devlink_port_fn_migratable_set(struct devlink_port *port, bool enable, }
hca_caps = MLX5_ADDR_OF(query_hca_cap_out, query_ctx, capability); - MLX5_SET(cmd_hca_cap_2, hca_caps, migratable, 1); + MLX5_SET(cmd_hca_cap_2, hca_caps, migratable, enable);
err = mlx5_vport_set_other_func_cap(esw->dev, hca_caps, vport->vport, MLX5_SET_HCA_CAP_OP_MOD_GENERAL_DEVICE2);
From: Chris Mi cmi@nvidia.com
[ Upstream commit 93a331939d1d1c6c3422bc09ec43cac658594b34 ]
The cited commit holds the encap tbl lock unconditionally when setting up dests, but that may cause the following deadlock:
PID: 1063722 TASK: ffffa062ca5d0000 CPU: 13 COMMAND: "handler8" #0 [ffffb14de05b7368] __schedule at ffffffffa1d5aa91 #1 [ffffb14de05b7410] schedule at ffffffffa1d5afdb #2 [ffffb14de05b7430] schedule_preempt_disabled at ffffffffa1d5b528 #3 [ffffb14de05b7440] __mutex_lock at ffffffffa1d5d6cb #4 [ffffb14de05b74e8] mutex_lock_nested at ffffffffa1d5ddeb #5 [ffffb14de05b74f8] mlx5e_tc_tun_encap_dests_set at ffffffffc12f2096 [mlx5_core] #6 [ffffb14de05b7568] post_process_attr at ffffffffc12d9fc5 [mlx5_core] #7 [ffffb14de05b75a0] mlx5e_tc_add_fdb_flow at ffffffffc12de877 [mlx5_core] #8 [ffffb14de05b75f0] __mlx5e_add_fdb_flow at ffffffffc12e0eef [mlx5_core] #9 [ffffb14de05b7660] mlx5e_tc_add_flow at ffffffffc12e12f7 [mlx5_core] #10 [ffffb14de05b76b8] mlx5e_configure_flower at ffffffffc12e1686 [mlx5_core] #11 [ffffb14de05b7720] mlx5e_rep_indr_offload at ffffffffc12e3817 [mlx5_core] #12 [ffffb14de05b7730] mlx5e_rep_indr_setup_tc_cb at ffffffffc12e388a [mlx5_core] #13 [ffffb14de05b7740] tc_setup_cb_add at ffffffffa1ab2ba8 #14 [ffffb14de05b77a0] fl_hw_replace_filter at ffffffffc0bdec2f [cls_flower] #15 [ffffb14de05b7868] fl_change at ffffffffc0be6caa [cls_flower] #16 [ffffb14de05b7908] tc_new_tfilter at ffffffffa1ab71f0
[1031218.028143] wait_for_completion+0x24/0x30 [1031218.028589] mlx5e_update_route_decap_flows+0x9a/0x1e0 [mlx5_core] [1031218.029256] mlx5e_tc_fib_event_work+0x1ad/0x300 [mlx5_core] [1031218.029885] process_one_work+0x24e/0x510
There is actually no need to hold the encap tbl lock if there is no encap action. Fix it by checking whether an encap action exists before taking the encap tbl lock.
Fixes: 37c3b9fa7ccf ("net/mlx5e: Prevent encap offload when neigh update is running") Signed-off-by: Chris Mi cmi@nvidia.com Reviewed-by: Vlad Buslov vladbu@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../mellanox/mlx5/core/en/tc_tun_encap.c | 3 --- .../net/ethernet/mellanox/mlx5/core/en_tc.c | 21 ++++++++++++++++--- 2 files changed, 18 insertions(+), 6 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c index f0c3464f037f4..0c88cf47af01b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c @@ -1030,9 +1030,6 @@ int mlx5e_tc_tun_encap_dests_set(struct mlx5e_priv *priv, int out_index; int err = 0;
- if (!mlx5e_is_eswitch_flow(flow)) - return 0; - parse_attr = attr->parse_attr; esw_attr = attr->esw_attr; *vf_tun = false; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c index ed05ac8ae1de5..e002f013fa015 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -1725,6 +1725,19 @@ verify_attr_actions(u32 actions, struct netlink_ext_ack *extack) return 0; }
+static bool +has_encap_dests(struct mlx5_flow_attr *attr) +{ + struct mlx5_esw_flow_attr *esw_attr = attr->esw_attr; + int out_index; + + for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) + if (esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP) + return true; + + return false; +} + static int post_process_attr(struct mlx5e_tc_flow *flow, struct mlx5_flow_attr *attr, @@ -1737,9 +1750,11 @@ post_process_attr(struct mlx5e_tc_flow *flow, if (err) goto err_out;
- err = mlx5e_tc_tun_encap_dests_set(flow->priv, flow, attr, extack, &vf_tun); - if (err) - goto err_out; + if (mlx5e_is_eswitch_flow(flow) && has_encap_dests(attr)) { + err = mlx5e_tc_tun_encap_dests_set(flow->priv, flow, attr, extack, &vf_tun); + if (err) + goto err_out; + }
if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) { err = mlx5e_tc_attach_mod_hdr(flow->priv, flow, attr);
From: Amir Tzin amirtz@nvidia.com
[ Upstream commit 3ec43c1b082a8804472430e1253544d75f4b540e ]
Moving to switchdev mode with ntuple offload on causes the kernel to crash since fs->arfs is freed during the nic profile cleanup flow.
Ntuple offload is not supported in switchdev mode and it is already unset by the mlx5 fix-features ndo in switchdev mode. Verify fs->arfs is valid before disabling it.
trace: [] RIP: 0010:_raw_spin_lock_bh+0x17/0x30 [] arfs_del_rules+0x44/0x1a0 [mlx5_core] [] mlx5e_arfs_disable+0xe/0x20 [mlx5_core] [] mlx5e_handle_feature+0x3d/0xb0 [mlx5_core] [] ? __rtnl_unlock+0x25/0x50 [] mlx5e_set_features+0xfe/0x160 [mlx5_core] [] __netdev_update_features+0x278/0xa50 [] ? netdev_run_todo+0x5e/0x2a0 [] netdev_update_features+0x22/0x70 [] ? _cond_resched+0x15/0x30 [] mlx5e_attach_netdev+0x12a/0x1e0 [mlx5_core] [] mlx5e_netdev_attach_profile+0xa1/0xc0 [mlx5_core] [] mlx5e_netdev_change_profile+0x77/0xe0 [mlx5_core] [] mlx5e_vport_rep_load+0x1ed/0x290 [mlx5_core] [] mlx5_esw_offloads_rep_load+0x88/0xd0 [mlx5_core] [] esw_offloads_load_rep.part.38+0x31/0x50 [mlx5_core] [] esw_offloads_enable+0x6c5/0x710 [mlx5_core] [] mlx5_eswitch_enable_locked+0x1bb/0x290 [mlx5_core] [] mlx5_devlink_eswitch_mode_set+0x14f/0x320 [mlx5_core] [] devlink_nl_cmd_eswitch_set_doit+0x94/0x120 [] genl_family_rcv_msg_doit.isra.17+0x113/0x150 [] genl_family_rcv_msg+0xb7/0x170 [] ? devlink_nl_cmd_port_split_doit+0x100/0x100 [] genl_rcv_msg+0x47/0xa0 [] ? genl_family_rcv_msg+0x170/0x170 [] netlink_rcv_skb+0x4c/0x130 [] genl_rcv+0x24/0x40 [] netlink_unicast+0x19a/0x230 [] netlink_sendmsg+0x204/0x3d0 [] sock_sendmsg+0x50/0x60
Fixes: 90b22b9bcd24 ("net/mlx5e: Disable Rx ntuple offload for uplink representor") Signed-off-by: Amir Tzin amirtz@nvidia.com Reviewed-by: Aya Levin ayal@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c index bed0c2d043e70..329d8c90facdd 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c @@ -135,6 +135,16 @@ static void arfs_del_rules(struct mlx5e_flow_steering *fs);
int mlx5e_arfs_disable(struct mlx5e_flow_steering *fs) { + /* Moving to switchdev mode, fs->arfs is freed by mlx5e_nic_profile + * cleanup_rx callback and it is not recreated when + * mlx5e_uplink_rep_profile is loaded as mlx5e_create_flow_steering() + * is not called by the uplink_rep profile init_rx callback. Thus, if + * ntuple is set, moving to switchdev flow will enter this function + * with fs->arfs nullified. + */ + if (!mlx5e_fs_get_arfs(fs)) + return 0; + arfs_del_rules(fs);
return arfs_disable(fs);
From: Jianbo Liu jianbol@nvidia.com
[ Upstream commit d03b6e6f31820b84f7449cca022047f36c42bc3f ]
For IP tunnel encapsulation in ECMP (Equal-Cost Multipath) mode, as the flow is duplicated to the peer eswitch, the related neighbour information on the peer uplink representor is created as well.
In the cited commit, eswitch devcom unpair is moved to the uplink unload API, specifically profile->cleanup_tx. If there is an encap rule offloaded in ECMP mode, when one eswitch does unpair (because the driver is unloaded, for instance) and the peer rule from the peer eswitch is going to be deleted, a use-after-free error is triggered while accessing the neigh info, as it is already cleaned up in the uplink's profile->disable, which runs before its profile->cleanup_tx.
To fix this issue, move the neigh cleanup to the profile's cleanup_tx callback, after mlx5e_cleanup_uplink_rep_tx() is called. The neigh init is moved to init_tx for symmetry.
[ 2453.376299] BUG: KASAN: slab-use-after-free in mlx5e_rep_neigh_entry_release+0x109/0x3a0 [mlx5_core] [ 2453.379125] Read of size 4 at addr ffff888127af9008 by task modprobe/2496
[ 2453.381542] CPU: 7 PID: 2496 Comm: modprobe Tainted: G B 6.4.0-rc7+ #15 [ 2453.383386] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014 [ 2453.384335] Call Trace: [ 2453.384625] <TASK> [ 2453.384891] dump_stack_lvl+0x33/0x50 [ 2453.385285] print_report+0xc2/0x610 [ 2453.385667] ? __virt_addr_valid+0xb1/0x130 [ 2453.386091] ? mlx5e_rep_neigh_entry_release+0x109/0x3a0 [mlx5_core] [ 2453.386757] kasan_report+0xae/0xe0 [ 2453.387123] ? mlx5e_rep_neigh_entry_release+0x109/0x3a0 [mlx5_core] [ 2453.387798] mlx5e_rep_neigh_entry_release+0x109/0x3a0 [mlx5_core] [ 2453.388465] mlx5e_rep_encap_entry_detach+0xa6/0xe0 [mlx5_core] [ 2453.389111] mlx5e_encap_dealloc+0xa7/0x100 [mlx5_core] [ 2453.389706] mlx5e_tc_tun_encap_dests_unset+0x61/0xb0 [mlx5_core] [ 2453.390361] mlx5_free_flow_attr_actions+0x11e/0x340 [mlx5_core] [ 2453.391015] ? complete_all+0x43/0xd0 [ 2453.391398] ? free_flow_post_acts+0x38/0x120 [mlx5_core] [ 2453.392004] mlx5e_tc_del_fdb_flow+0x4ae/0x690 [mlx5_core] [ 2453.392618] mlx5e_tc_del_fdb_peers_flow+0x308/0x370 [mlx5_core] [ 2453.393276] mlx5e_tc_clean_fdb_peer_flows+0xf5/0x140 [mlx5_core] [ 2453.393925] mlx5_esw_offloads_unpair+0x86/0x540 [mlx5_core] [ 2453.394546] ? mlx5_esw_offloads_set_ns_peer.isra.0+0x180/0x180 [mlx5_core] [ 2453.395268] ? down_write+0xaa/0x100 [ 2453.395652] mlx5_esw_offloads_devcom_event+0x203/0x530 [mlx5_core] [ 2453.396317] mlx5_devcom_send_event+0xbb/0x190 [mlx5_core] [ 2453.396917] mlx5_esw_offloads_devcom_cleanup+0xb0/0xd0 [mlx5_core] [ 2453.397582] mlx5e_tc_esw_cleanup+0x42/0x120 [mlx5_core] [ 2453.398182] mlx5e_rep_tc_cleanup+0x15/0x30 [mlx5_core] [ 2453.398768] mlx5e_cleanup_rep_tx+0x6c/0x80 [mlx5_core] [ 2453.399367] mlx5e_detach_netdev+0xee/0x120 [mlx5_core] [ 2453.399957] mlx5e_netdev_change_profile+0x84/0x170 [mlx5_core] [ 2453.400598] mlx5e_vport_rep_unload+0xe0/0xf0 [mlx5_core] [ 2453.403781] mlx5_eswitch_unregister_vport_reps+0x15e/0x190 [mlx5_core] [ 2453.404479] ? mlx5_eswitch_register_vport_reps+0x200/0x200 [mlx5_core] [ 2453.405170] ? up_write+0x39/0x60 [ 2453.405529] ? kernfs_remove_by_name_ns+0xb7/0xe0 [ 2453.405985] auxiliary_bus_remove+0x2e/0x40 [ 2453.406405] device_release_driver_internal+0x243/0x2d0 [ 2453.406900] ? kobject_put+0x42/0x2d0 [ 2453.407284] bus_remove_device+0x128/0x1d0 [ 2453.407687] device_del+0x240/0x550 [ 2453.408053] ? waiting_for_supplier_show+0xe0/0xe0 [ 2453.408511] ? kobject_put+0xfa/0x2d0 [ 2453.408889] ? __kmem_cache_free+0x14d/0x280 [ 2453.409310] mlx5_rescan_drivers_locked.part.0+0xcd/0x2b0 [mlx5_core] [ 2453.409973] mlx5_unregister_device+0x40/0x50 [mlx5_core] [ 2453.410561] mlx5_uninit_one+0x3d/0x110 [mlx5_core] [ 2453.411111] remove_one+0x89/0x130 [mlx5_core] [ 2453.411628] pci_device_remove+0x59/0xf0 [ 2453.412026] device_release_driver_internal+0x243/0x2d0 [ 2453.412511] ? parse_option_str+0x14/0x90 [ 2453.412915] driver_detach+0x7b/0xf0 [ 2453.413289] bus_remove_driver+0xb5/0x160 [ 2453.413685] pci_unregister_driver+0x3f/0xf0 [ 2453.414104] mlx5_cleanup+0xc/0x20 [mlx5_core]
Fixes: 2be5bd42a5bb ("net/mlx5: Handle pairing of E-switch via uplink un/load APIs") Signed-off-by: Jianbo Liu jianbol@nvidia.com Reviewed-by: Vlad Buslov vladbu@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/mellanox/mlx5/core/en_rep.c | 17 +++++++---------- 1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c index 95d8714765f70..ad63d1f9a611f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c @@ -1112,6 +1112,10 @@ static int mlx5e_init_rep_tx(struct mlx5e_priv *priv) return err; }
+ err = mlx5e_rep_neigh_init(rpriv); + if (err) + goto err_neigh_init; + if (rpriv->rep->vport == MLX5_VPORT_UPLINK) { err = mlx5e_init_uplink_rep_tx(rpriv); if (err) @@ -1128,6 +1132,8 @@ static int mlx5e_init_rep_tx(struct mlx5e_priv *priv) if (rpriv->rep->vport == MLX5_VPORT_UPLINK) mlx5e_cleanup_uplink_rep_tx(rpriv); err_init_tx: + mlx5e_rep_neigh_cleanup(rpriv); +err_neigh_init: mlx5e_destroy_tises(priv); return err; } @@ -1141,22 +1147,17 @@ static void mlx5e_cleanup_rep_tx(struct mlx5e_priv *priv) if (rpriv->rep->vport == MLX5_VPORT_UPLINK) mlx5e_cleanup_uplink_rep_tx(rpriv);
+ mlx5e_rep_neigh_cleanup(rpriv); mlx5e_destroy_tises(priv); }
static void mlx5e_rep_enable(struct mlx5e_priv *priv) { - struct mlx5e_rep_priv *rpriv = priv->ppriv; - mlx5e_set_netdev_mtu_boundaries(priv); - mlx5e_rep_neigh_init(rpriv); }
static void mlx5e_rep_disable(struct mlx5e_priv *priv) { - struct mlx5e_rep_priv *rpriv = priv->ppriv; - - mlx5e_rep_neigh_cleanup(rpriv); }
static int mlx5e_update_rep_rx(struct mlx5e_priv *priv) @@ -1206,7 +1207,6 @@ static int uplink_rep_async_event(struct notifier_block *nb, unsigned long event
static void mlx5e_uplink_rep_enable(struct mlx5e_priv *priv) { - struct mlx5e_rep_priv *rpriv = priv->ppriv; struct net_device *netdev = priv->netdev; struct mlx5_core_dev *mdev = priv->mdev; u16 max_mtu; @@ -1228,7 +1228,6 @@ static void mlx5e_uplink_rep_enable(struct mlx5e_priv *priv) mlx5_notifier_register(mdev, &priv->events_nb); mlx5e_dcbnl_initialize(priv); mlx5e_dcbnl_init_app(priv); - mlx5e_rep_neigh_init(rpriv); mlx5e_rep_bridge_init(priv);
netdev->wanted_features |= NETIF_F_HW_TC; @@ -1243,7 +1242,6 @@ static void mlx5e_uplink_rep_enable(struct mlx5e_priv *priv)
static void mlx5e_uplink_rep_disable(struct mlx5e_priv *priv) { - struct mlx5e_rep_priv *rpriv = priv->ppriv; struct mlx5_core_dev *mdev = priv->mdev;
rtnl_lock(); @@ -1253,7 +1251,6 @@ static void mlx5e_uplink_rep_disable(struct mlx5e_priv *priv) rtnl_unlock();
mlx5e_rep_bridge_cleanup(priv); - mlx5e_rep_neigh_cleanup(rpriv); mlx5e_dcbnl_delete_app(priv); mlx5_notifier_unregister(mdev, &priv->events_nb); mlx5e_rep_tc_disable(priv);
From: Dragos Tatulea dtatulea@nvidia.com
[ Upstream commit e0f52298fee449fec37e3e3c32df60008b509b16 ]
The below crash can be encountered when using xdpsock in rx mode with the legacy rq: the buffer gets released in the XDP_REDIRECT path and then once again in the driver. This fix sets the skip-release flag to avoid releasing the buffer on the driver side.
XSK handling of buffers for the legacy rq was relying on the caller to set the skip release flag. But the referenced fix started using fragment counts for pages instead of the skip flag.
Crash log: general protection fault, probably for non-canonical address 0xffff8881217e3a: 0000 [#1] SMP CPU: 0 PID: 14 Comm: ksoftirqd/0 Not tainted 6.5.0-rc1+ #31 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014 RIP: 0010:bpf_prog_03b13f331978c78c+0xf/0x28 Code: ... RSP: 0018:ffff88810082fc98 EFLAGS: 00010246 RAX: 0000000000000000 RBX: ffff888138404901 RCX: c0ffffc900027cbc RDX: ffffffffa000b514 RSI: 00ffff8881217e32 RDI: ffff888138404901 RBP: ffff88810082fc98 R08: 0000000000091100 R09: 0000000000000006 R10: 0000000000000800 R11: 0000000000000800 R12: ffffc9000027a000 R13: ffff8881217e2dc0 R14: ffff8881217e2910 R15: ffff8881217e2f00 FS: 0000000000000000(0000) GS:ffff88852c800000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000564cb2e2cde0 CR3: 000000010e603004 CR4: 0000000000370eb0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: <TASK> ? die_addr+0x32/0x80 ? exc_general_protection+0x192/0x390 ? asm_exc_general_protection+0x22/0x30 ? 0xffffffffa000b514 ? bpf_prog_03b13f331978c78c+0xf/0x28 mlx5e_xdp_handle+0x48/0x670 [mlx5_core] ? dev_gro_receive+0x3b5/0x6e0 mlx5e_xsk_skb_from_cqe_linear+0x6e/0x90 [mlx5_core] mlx5e_handle_rx_cqe+0x55/0x100 [mlx5_core] mlx5e_poll_rx_cq+0x87/0x6e0 [mlx5_core] mlx5e_napi_poll+0x45e/0x6b0 [mlx5_core] __napi_poll+0x25/0x1a0 net_rx_action+0x28a/0x300 __do_softirq+0xcd/0x279 ? sort_range+0x20/0x20 run_ksoftirqd+0x1a/0x20 smpboot_thread_fn+0xa2/0x130 kthread+0xc9/0xf0 ? kthread_complete_and_exit+0x20/0x20 ret_from_fork+0x1f/0x30 </TASK> Modules linked in: mlx5_ib mlx5_core rpcrdma rdma_ucm ib_iser libiscsi scsi_transport_iscsi ib_umad rdma_cm ib_ipoib iw_cm ib_cm ib_uverbs ib_core xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat br_netfilter overlay zram zsmalloc fuse [last unloaded: mlx5_core] ---[ end trace 0000000000000000 ]---
Fixes: 7abd955a58fb ("net/mlx5e: RX, Fix page_pool page fragment tracking for XDP") Signed-off-by: Dragos Tatulea dtatulea@nvidia.com Reviewed-by: Tariq Toukan tariqt@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c index d97e6df66f454..b8dd744536553 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c @@ -323,8 +323,11 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq, net_prefetch(mxbuf->xdp.data);
prog = rcu_dereference(rq->xdp_prog); - if (likely(prog && mlx5e_xdp_handle(rq, prog, mxbuf))) + if (likely(prog && mlx5e_xdp_handle(rq, prog, mxbuf))) { + if (likely(__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))) + wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE); return NULL; /* page/packet was consumed by XDP */ + }
/* XDP_PASS: copy the data from the UMEM to a new SKB. The frame reuse * will be handled by mlx5e_free_rx_wqe.
From: Dragos Tatulea dtatulea@nvidia.com
[ Upstream commit 39646d9bcd1a65d2396328026626859a1dab59d7 ]
When the regular rq is reactivated after the XSK socket is closed, it could be reading stale cqes, which eventually corrupts the rq. This leads to no more traffic being received on the regular rq and a crash on the next close or deactivation of the rq.
Kal Cutter Conley reported this issue as a crash on the release path when the xdpsock sample program is stopped (killed) and restarted in sequence while traffic is running.
This patch flushes all cqes during the rq flush. The cqe flushing is done in the reset state of the rq. The mlx5e_rq_to_ready code is moved into the flush function to allow for this.
Fixes: 082a9edf12fe ("net/mlx5e: xsk: Flush RQ on XSK activation to save memory") Reported-by: Kal Cutter Conley kal.conley@dectris.com Closes: https://lore.kernel.org/xdp-newbies/CAHApi-nUAs4TeFWUDV915CZJo07XVg2Vp63-no7... Signed-off-by: Dragos Tatulea dtatulea@nvidia.com Reviewed-by: Tariq Toukan tariqt@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/mellanox/mlx5/core/en_main.c | 29 ++++++++++++++----- 1 file changed, 21 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index a5bdf78955d76..f084513fbead4 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -1036,7 +1036,23 @@ static int mlx5e_modify_rq_state(struct mlx5e_rq *rq, int curr_state, int next_s return err; }
-static int mlx5e_rq_to_ready(struct mlx5e_rq *rq, int curr_state) +static void mlx5e_flush_rq_cq(struct mlx5e_rq *rq) +{ + struct mlx5_cqwq *cqwq = &rq->cq.wq; + struct mlx5_cqe64 *cqe; + + if (test_bit(MLX5E_RQ_STATE_MINI_CQE_ENHANCED, &rq->state)) { + while ((cqe = mlx5_cqwq_get_cqe_enahnced_comp(cqwq))) + mlx5_cqwq_pop(cqwq); + } else { + while ((cqe = mlx5_cqwq_get_cqe(cqwq))) + mlx5_cqwq_pop(cqwq); + } + + mlx5_cqwq_update_db_record(cqwq); +} + +int mlx5e_flush_rq(struct mlx5e_rq *rq, int curr_state) { struct net_device *dev = rq->netdev; int err; @@ -1046,6 +1062,10 @@ static int mlx5e_rq_to_ready(struct mlx5e_rq *rq, int curr_state) netdev_err(dev, "Failed to move rq 0x%x to reset\n", rq->rqn); return err; } + + mlx5e_free_rx_descs(rq); + mlx5e_flush_rq_cq(rq); + err = mlx5e_modify_rq_state(rq, MLX5_RQC_STATE_RST, MLX5_RQC_STATE_RDY); if (err) { netdev_err(dev, "Failed to move rq 0x%x to ready\n", rq->rqn); @@ -1055,13 +1075,6 @@ static int mlx5e_rq_to_ready(struct mlx5e_rq *rq, int curr_state) return 0; }
-int mlx5e_flush_rq(struct mlx5e_rq *rq, int curr_state) -{ - mlx5e_free_rx_descs(rq); - - return mlx5e_rq_to_ready(rq, curr_state); -} - static int mlx5e_modify_rq_vsd(struct mlx5e_rq *rq, bool vsd) { struct mlx5_core_dev *mdev = rq->mdev;
From: Jianbo Liu jianbol@nvidia.com
[ Upstream commit 3e4cf1dd2ce413f4be3e2c9062fb470e2ad2be88 ]
There are DEK objects cached in the DEK pool after kTLS is used, and they are freed only in mlx5e_ktls_cleanup().
mlx5e_destroy_mdev_resources() is called in mlx5e_suspend() to free mdev resources, including the protection domain (PD). However, the PD is still referenced by the cached DEK objects in this case, because profile->cleanup() (and therefore mlx5e_ktls_cleanup()) is called after mlx5e_suspend() during devlink reload. So the following FW syndrome is generated:
mlx5_cmd_out_err:803:(pid 12948): DEALLOC_PD(0x801) op_mod(0x0) failed, status bad resource state(0x9), syndrome (0xef0c8a), err(-22)
To avoid this syndrome, move the DEK pool destruction to mlx5e_ktls_cleanup_tx(), which is called by profile->cleanup_tx(), and move the pool creation to mlx5e_ktls_init_tx() for symmetry.
Fixes: f741db1a5171 ("net/mlx5e: kTLS, Improve connection rate by using fast update encryption key") Signed-off-by: Jianbo Liu jianbol@nvidia.com Reviewed-by: Tariq Toukan tariqt@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../mellanox/mlx5/core/en_accel/ktls.c | 8 ----- .../mellanox/mlx5/core/en_accel/ktls_tx.c | 29 +++++++++++++++++-- 2 files changed, 26 insertions(+), 11 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c index cf704f106b7c2..984fa04bd331b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c @@ -188,7 +188,6 @@ static void mlx5e_tls_debugfs_init(struct mlx5e_tls *tls,
int mlx5e_ktls_init(struct mlx5e_priv *priv) { - struct mlx5_crypto_dek_pool *dek_pool; struct mlx5e_tls *tls;
if (!mlx5e_is_ktls_device(priv->mdev)) @@ -199,12 +198,6 @@ int mlx5e_ktls_init(struct mlx5e_priv *priv) return -ENOMEM; tls->mdev = priv->mdev;
- dek_pool = mlx5_crypto_dek_pool_create(priv->mdev, MLX5_ACCEL_OBJ_TLS_KEY); - if (IS_ERR(dek_pool)) { - kfree(tls); - return PTR_ERR(dek_pool); - } - tls->dek_pool = dek_pool; priv->tls = tls;
mlx5e_tls_debugfs_init(tls, priv->dfs_root); @@ -222,7 +215,6 @@ void mlx5e_ktls_cleanup(struct mlx5e_priv *priv) debugfs_remove_recursive(tls->debugfs.dfs); tls->debugfs.dfs = NULL;
- mlx5_crypto_dek_pool_destroy(tls->dek_pool); kfree(priv->tls); priv->tls = NULL; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c index 0e4c0a093293a..c49363dd6bf9a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c @@ -908,28 +908,51 @@ static void mlx5e_tls_tx_debugfs_init(struct mlx5e_tls *tls,
int mlx5e_ktls_init_tx(struct mlx5e_priv *priv) { + struct mlx5_crypto_dek_pool *dek_pool; struct mlx5e_tls *tls = priv->tls; + int err; + + if (!mlx5e_is_ktls_device(priv->mdev)) + return 0; + + /* DEK pool could be used by either or both of TX and RX. But we have to + * put the creation here to avoid syndrome when doing devlink reload. + */ + dek_pool = mlx5_crypto_dek_pool_create(priv->mdev, MLX5_ACCEL_OBJ_TLS_KEY); + if (IS_ERR(dek_pool)) + return PTR_ERR(dek_pool); + tls->dek_pool = dek_pool;
if (!mlx5e_is_ktls_tx(priv->mdev)) return 0;
priv->tls->tx_pool = mlx5e_tls_tx_pool_init(priv->mdev, &priv->tls->sw_stats); - if (!priv->tls->tx_pool) - return -ENOMEM; + if (!priv->tls->tx_pool) { + err = -ENOMEM; + goto err_tx_pool_init; + }
mlx5e_tls_tx_debugfs_init(tls, tls->debugfs.dfs);
return 0; + +err_tx_pool_init: + mlx5_crypto_dek_pool_destroy(dek_pool); + return err; }
void mlx5e_ktls_cleanup_tx(struct mlx5e_priv *priv) { if (!mlx5e_is_ktls_tx(priv->mdev)) - return; + goto dek_pool_destroy;
debugfs_remove_recursive(priv->tls->debugfs.dfs_tx); priv->tls->debugfs.dfs_tx = NULL;
mlx5e_tls_tx_pool_cleanup(priv->tls->tx_pool); priv->tls->tx_pool = NULL; + +dek_pool_destroy: + if (mlx5e_is_ktls_device(priv->mdev)) + mlx5_crypto_dek_pool_destroy(priv->tls->dek_pool); }
From: Chris Mi cmi@nvidia.com
[ Upstream commit 61eab651f6e96791cfad6db45f1107c398699b2d ]
The cited commit sets the ft prio to fs_base_prio. But if ignore_flow_level is not supported, the ft prio must be set based on the tc filter prio; otherwise, all the ft prios on the same chain are the same, which is invalid when ignore_flow_level is not supported.
Fix it by setting the ft prio based on the tc filter prio and setting fs_base_prio to 0 for the fdb.
Fixes: 8e80e5648092 ("net/mlx5: fs_chains: Refactor to detach chains from tc usage") Signed-off-by: Chris Mi cmi@nvidia.com Reviewed-by: Paul Blakey paulb@nvidia.com Reviewed-by: Roi Dayan roid@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 1 - drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c | 2 +- 2 files changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c index 178880ba7c7b3..c1f419b36289c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c @@ -1376,7 +1376,6 @@ esw_chains_create(struct mlx5_eswitch *esw, struct mlx5_flow_table *miss_fdb)
esw_init_chains_offload_flags(esw, &attr.flags); attr.ns = MLX5_FLOW_NAMESPACE_FDB; - attr.fs_base_prio = FDB_TC_OFFLOAD; attr.max_grp_num = esw->params.large_group_num; attr.default_ft = miss_fdb; attr.mapping = esw->offloads.reg_c0_obj_pool; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c index db9df9798ffac..a80ecb672f33d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c @@ -178,7 +178,7 @@ mlx5_chains_create_table(struct mlx5_fs_chains *chains, if (!mlx5_chains_ignore_flow_level_supported(chains) || (chain == 0 && prio == 1 && level == 0)) { ft_attr.level = chains->fs_base_level; - ft_attr.prio = chains->fs_base_prio; + ft_attr.prio = chains->fs_base_prio + prio - 1; ns = (chains->ns == MLX5_FLOW_NAMESPACE_FDB) ? mlx5_get_fdb_sub_ns(chains->dev, chain) : mlx5_get_flow_namespace(chains->dev, chains->ns);
From: Shay Drory shayd@nvidia.com
[ Upstream commit 53d737dfd3d7b023fa9fa445ea3f3db0ac9da402 ]
Currently, in case an interface is down, the mlx5 driver doesn't unregister its devlink params, which leads to the WARN below [1]. Fix it by unregistering the devlink params in that case as well.
[1] [ 295.244769 ] WARNING: CPU: 15 PID: 1 at net/core/devlink.c:9042 devlink_free+0x174/0x1fc [ 295.488379 ] CPU: 15 PID: 1 Comm: shutdown Tainted: G S OE 5.15.0-1017.19.3.g0677e61-bluefield #g0677e61 [ 295.509330 ] Hardware name: https://www.mellanox.com BlueField SoC/BlueField SoC, BIOS 4.2.0.12761 Jun 6 2023 [ 295.543096 ] pc : devlink_free+0x174/0x1fc [ 295.551104 ] lr : mlx5_devlink_free+0x18/0x2c [mlx5_core] [ 295.561816 ] sp : ffff80000809b850 [ 295.711155 ] Call trace: [ 295.716030 ] devlink_free+0x174/0x1fc [ 295.723346 ] mlx5_devlink_free+0x18/0x2c [mlx5_core] [ 295.733351 ] mlx5_sf_dev_remove+0x98/0xb0 [mlx5_core] [ 295.743534 ] auxiliary_bus_remove+0x2c/0x50 [ 295.751893 ] __device_release_driver+0x19c/0x280 [ 295.761120 ] device_release_driver+0x34/0x50 [ 295.769649 ] bus_remove_device+0xdc/0x170 [ 295.777656 ] device_del+0x17c/0x3a4 [ 295.784620 ] mlx5_sf_dev_remove+0x28/0xf0 [mlx5_core] [ 295.794800 ] mlx5_sf_dev_table_destroy+0x98/0x110 [mlx5_core] [ 295.806375 ] mlx5_unload+0x34/0xd0 [mlx5_core] [ 295.815339 ] mlx5_unload_one+0x70/0xe4 [mlx5_core] [ 295.824998 ] shutdown+0xb0/0xd8 [mlx5_core] [ 295.833439 ] pci_device_shutdown+0x3c/0xa0 [ 295.841651 ] device_shutdown+0x170/0x340 [ 295.849486 ] __do_sys_reboot+0x1f4/0x2a0 [ 295.857322 ] __arm64_sys_reboot+0x2c/0x40 [ 295.865329 ] invoke_syscall+0x78/0x100 [ 295.872817 ] el0_svc_common.constprop.0+0x54/0x184 [ 295.882392 ] do_el0_svc+0x30/0xac [ 295.889008 ] el0_svc+0x48/0x160 [ 295.895278 ] el0t_64_sync_handler+0xa4/0x130 [ 295.903807 ] el0t_64_sync+0x1a4/0x1a8 [ 295.911120 ] ---[ end trace 4f1d2381d00d9dce ]---
Fixes: fe578cbb2f05 ("net/mlx5: Move devlink registration before mlx5_load") Signed-off-by: Shay Drory shayd@nvidia.com Reviewed-by: Maher Sanalla msanalla@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/main.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c index d6ee016deae17..c7a06c8bbb7a3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c @@ -1456,6 +1456,7 @@ void mlx5_uninit_one(struct mlx5_core_dev *dev) if (!test_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state)) { mlx5_core_warn(dev, "%s: interface is down, NOP\n", __func__); + mlx5_devlink_params_unregister(priv_to_devlink(dev)); mlx5_cleanup_once(dev); goto out; }
From: Lin Ma linma@zju.edu.cn
[ Upstream commit bcc29b7f5af6797702c2306a7aacb831fc5ce9cb ]
The nla_for_each_nested parsing in bpf_sk_storage_diag_alloc() does not check the length of the nested attribute. This can lead to an out-of-attribute read and allow a malformed nlattr (e.g., length 0) to be viewed as a 4-byte integer.
This patch adds an additional check when the nlattr is being counted, making sure the later nla_get_u32() only accesses attributes with the correct length.
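As a rough illustration of this class of bug, here is a minimal, standalone userspace sketch (not the kernel code; the attribute type value is made up) showing why the payload length must be validated before the value is read as a u32:

  #include <linux/netlink.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Payload length of an attribute, like nla_len() reports in-kernel. */
  static int payload_len(const struct nlattr *nla)
  {
          return nla->nla_len - NLA_HDRLEN;
  }

  int main(void)
  {
          struct {
                  struct nlattr hdr;
                  uint32_t payload;       /* stands in for adjacent memory */
          } attr = {
                  /* Malformed: the header claims there is no payload at all. */
                  .hdr = { .nla_len = NLA_HDRLEN, .nla_type = 1 },
                  .payload = 0xabababab,
          };

          if (payload_len(&attr.hdr) != sizeof(uint32_t)) {
                  puts("reject: attribute too short for a u32");
                  return 0;
          }
          /* Without the check above, a 0-length attribute would be "read"
           * as whatever 4 bytes happen to follow its header.
           */
          printf("fd = %u\n", (unsigned int)attr.payload);
          return 0;
  }

The kernel-side fix below applies the same idea: reject the attribute while counting it, so the later nla_get_u32() never sees a short payload.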
Fixes: 1ed4d92458a9 ("bpf: INET_DIAG support in bpf_sk_storage") Suggested-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lin Ma linma@zju.edu.cn Reviewed-by: Jakub Kicinski kuba@kernel.org Link: https://lore.kernel.org/r/20230725023330.422856-1-linma@zju.edu.cn Signed-off-by: Martin KaFai Lau martin.lau@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/bpf_sk_storage.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c index d4172534dfa8d..cca7594be92ec 100644 --- a/net/core/bpf_sk_storage.c +++ b/net/core/bpf_sk_storage.c @@ -496,8 +496,11 @@ bpf_sk_storage_diag_alloc(const struct nlattr *nla_stgs) return ERR_PTR(-EPERM);
nla_for_each_nested(nla, nla_stgs, rem) { - if (nla_type(nla) == SK_DIAG_BPF_STORAGE_REQ_MAP_FD) + if (nla_type(nla) == SK_DIAG_BPF_STORAGE_REQ_MAP_FD) { + if (nla_len(nla) != sizeof(u32)) + return ERR_PTR(-EINVAL); nr_maps++; + } }
diag = kzalloc(struct_size(diag, maps, nr_maps), GFP_KERNEL);
From: Lin Ma linma@zju.edu.cn
[ Upstream commit d73ef2d69c0dba5f5a1cb9600045c873bab1fb7f ]
There are 9 ndo_bridge_setlink handlers in total in the current kernel, which are 1) bnxt_bridge_setlink, 2) be_ndo_bridge_setlink, 3) i40e_ndo_bridge_setlink, 4) ice_bridge_setlink, 5) ixgbe_ndo_bridge_setlink, 6) mlx5e_bridge_setlink, 7) nfp_net_bridge_setlink, 8) qeth_l2_bridge_setlink and 9) br_setlink.
By investigating the code, we find that handlers 1-7 parse and use the nlattr IFLA_BRIDGE_MODE, but 3 and 4 forget to do the nla_len check. This can lead to an out-of-attribute read and allow a malformed nlattr (e.g., length 0) to be viewed as a 2-byte integer.
To avoid such issues, also for other ndo_bridge_setlink handlers in the future, this patch adds the nla_len check in rtnl_bridge_setlink and returns an error early if the length mismatches. To make this work, the break is removed from the IFLA_BRIDGE_FLAGS parsing so that nla_for_each_nested iterates over every attribute.
Fixes: b1edc14a3fbf ("ice: Implement ice_bridge_getlink and ice_bridge_setlink") Fixes: 51616018dd1b ("i40e: Add support for getlink, setlink ndo ops") Suggested-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Lin Ma linma@zju.edu.cn Acked-by: Nikolay Aleksandrov razor@blackwall.org Reviewed-by: Hangbin Liu liuhangbin@gmail.com Link: https://lore.kernel.org/r/20230726075314.1059224-1-linma@zju.edu.cn Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/rtnetlink.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c index 2fe6a3379aaed..aa1743b2b770b 100644 --- a/net/core/rtnetlink.c +++ b/net/core/rtnetlink.c @@ -5139,13 +5139,17 @@ static int rtnl_bridge_setlink(struct sk_buff *skb, struct nlmsghdr *nlh, br_spec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC); if (br_spec) { nla_for_each_nested(attr, br_spec, rem) { - if (nla_type(attr) == IFLA_BRIDGE_FLAGS) { + if (nla_type(attr) == IFLA_BRIDGE_FLAGS && !have_flags) { if (nla_len(attr) < sizeof(flags)) return -EINVAL;
have_flags = true; flags = nla_get_u16(attr); - break; + } + + if (nla_type(attr) == IFLA_BRIDGE_MODE) { + if (nla_len(attr) < sizeof(u16)) + return -EINVAL; } } }
From: Yuanjun Gong ruc_gongyuanjun@163.com
[ Upstream commit dadc5b86cc9459581f37fe755b431adc399ea393 ]
In bcm_sf2_sw_probe(), check the return value of clk_prepare_enable() and return the error code if clk_prepare_enable() fails.
Fixes: e9ec5c3bd238 ("net: dsa: bcm_sf2: request and handle clocks") Signed-off-by: Yuanjun Gong ruc_gongyuanjun@163.com Reviewed-by: Florian Fainelli florian.fainelli@broadcom.com Link: https://lore.kernel.org/r/20230726170506.16547-1-ruc_gongyuanjun@163.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/dsa/bcm_sf2.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c index cde253d27bd08..72374b066f64a 100644 --- a/drivers/net/dsa/bcm_sf2.c +++ b/drivers/net/dsa/bcm_sf2.c @@ -1436,7 +1436,9 @@ static int bcm_sf2_sw_probe(struct platform_device *pdev) if (IS_ERR(priv->clk)) return PTR_ERR(priv->clk);
- clk_prepare_enable(priv->clk); + ret = clk_prepare_enable(priv->clk); + if (ret) + return ret;
priv->clk_mdiv = devm_clk_get_optional(&pdev->dev, "sw_switch_mdiv"); if (IS_ERR(priv->clk_mdiv)) { @@ -1444,7 +1446,9 @@ static int bcm_sf2_sw_probe(struct platform_device *pdev) goto out_clk; }
- clk_prepare_enable(priv->clk_mdiv); + ret = clk_prepare_enable(priv->clk_mdiv); + if (ret) + goto out_clk;
ret = bcm_sf2_sw_rst(priv); if (ret) {
From: Georg Müller georgmueller@gmx.net
[ Upstream commit 98ce8e4a9dcfb448b30a2d7a16190f4a00382377 ]
Without gcc, the test will fail.
On cleanup, ignore probe removal errors. Otherwise, in case of an error adding the probe, the temporary directory is not removed.
Fixes: 56cbeacf14353057 ("perf probe: Add test for regression introduced by switch to die_get_decl_file()") Signed-off-by: Georg Müller georgmueller@gmx.net Acked-by: Ian Rogers irogers@google.com Cc: Adrian Hunter adrian.hunter@intel.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Georg Müller georgmueller@gmx.net Cc: Ingo Molnar mingo@redhat.com Cc: Jiri Olsa jolsa@kernel.org Cc: Mark Rutland mark.rutland@arm.com Cc: Masami Hiramatsu mhiramat@kernel.org Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Link: https://lore.kernel.org/r/20230728151812.454806-2-georgmueller@gmx.net Link: https://lore.kernel.org/r/CAP-5=fUP6UuLgRty3t2=fQsQi3k4hDMz415vWdp1x88QMvZ8u... Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/tests/shell/test_uprobe_from_different_cu.sh | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/tools/perf/tests/shell/test_uprobe_from_different_cu.sh b/tools/perf/tests/shell/test_uprobe_from_different_cu.sh index 00d2e0e2e0c28..319f36ebb9a40 100644 --- a/tools/perf/tests/shell/test_uprobe_from_different_cu.sh +++ b/tools/perf/tests/shell/test_uprobe_from_different_cu.sh @@ -4,6 +4,12 @@
set -e
+# skip if there's no gcc +if ! [ -x "$(command -v gcc)" ]; then + echo "failed: no gcc compiler" + exit 2 +fi + temp_dir=$(mktemp -d /tmp/perf-uprobe-different-cu-sh.XXXXXXXXXX)
cleanup() @@ -11,7 +17,7 @@ cleanup() trap - EXIT TERM INT if [[ "${temp_dir}" =~ ^/tmp/perf-uprobe-different-cu-sh.*$ ]]; then echo "--- Cleaning up ---" - perf probe -x ${temp_dir}/testfile -d foo + perf probe -x ${temp_dir}/testfile -d foo || true rm -f "${temp_dir}/"* rmdir "${temp_dir}" fi
From: Jamal Hadi Salim jhs@mojatatu.com
[ Upstream commit e68409db995380d1badacba41ff24996bd396171 ]
A match entry is uniquely identified with an "address" or "path" in the form of: hashtable ID(12b):bucketid(8b):nodeid(12b).
When creating table match entries, the hash table id, bucket id and node (match entry) id all need to be either specified by the user or filled in with reasonable in-kernel defaults. The in-kernel default for a table id is 0x800 (the omnipresent root table); for the bucketid it is 0x0. Prior to this fix there was no default for a nodeid, i.e. the code assumed that the user passed the correct nodeid, and if the user passed a nodeid of 0 (as Mingi Cho did) then that is what was used. But a nodeid of 0 is reserved for identifying the table. This is not a problem until we dump. The dump code notices that the nodeid is zero, assumes it is referencing a table, and therefore references the table struct tc_u_hnode instead of what was created, i.e. the match entry struct tc_u_knode.
Mingi does an equivalent of: tc filter add dev dummy0 parent 10: prio 1 handle 0x1000 \ protocol ip u32 match ip src 10.0.0.1/32 classid 10:1 action ok
Essentially specifying a table id of 0, a bucketid of 1 and a nodeid of zero. Tableid 0 is remapped to the default of 0x800. Bucketid 1 is ignored and defaults to 0x00. The nodeid was assumed to be what Mingi passed - 0x000.
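As a rough decode of that handle (a standalone userspace sketch; the HANDLE_* macros are hypothetical stand-ins mirroring the uapi TC_U32_* helpers for the addressing described above):

  #include <stdint.h>
  #include <stdio.h>

  #define HANDLE_HTID(h)   (((h) >> 20) & 0xFFFu)  /* hash table id  */
  #define HANDLE_BUCKET(h) (((h) >> 12) & 0xFFu)   /* bucket id      */
  #define HANDLE_NODE(h)   ((h) & 0xFFFu)          /* match entry id */

  int main(void)
  {
          uint32_t handle = 0x1000;     /* the handle from the command above */

          /* 0x1000 decodes to htid 0, bucket 1, node 0. In the kernel,
           * an htid of 0 is remapped to the root table 0x800, the
           * bucketid from the handle is ignored, and - after this fix -
           * a nodeid of 0 means "allocate the next free id from the
           * table's pool" instead of being stored as-is.
           */
          printf("htid=0x%03x bucket=0x%02x node=0x%03x\n",
                 (unsigned int)HANDLE_HTID(handle),
                 (unsigned int)HANDLE_BUCKET(handle),
                 (unsigned int)HANDLE_NODE(handle));
          return 0;
  }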
dumping before fix shows: ~$ tc filter ls dev dummy0 parent 10: filter protocol ip pref 1 u32 chain 0 filter protocol ip pref 1 u32 chain 0 fh 800: ht divisor 1 filter protocol ip pref 1 u32 chain 0 fh 800: ht divisor -30591
Note that the last line reports a table instead of a match entry (you can tell this because it says "ht divisor..."). As a result of reporting the wrong data type (misinterpreting struct tc_u_knode as struct tc_u_hnode), the divisor is reported with a value of -30591. Mingi identified this as part of the heap address (physmap_base is 0xffff8880 (-30591 - 1)).
The fix is to ensure that when table entry matches are added and no nodeid is specified (i.e. nodeid == 0), we get the next available nodeid from the table's pool.
After the fix, this is what the dump shows: $ tc filter ls dev dummy0 parent 10: filter protocol ip pref 1 u32 chain 0 filter protocol ip pref 1 u32 chain 0 fh 800: ht divisor 1 filter protocol ip pref 1 u32 chain 0 fh 800::800 order 2048 key ht 800 bkt 0 flowid 10:1 not_in_hw match 0a000001/ffffffff at 12 action order 1: gact action pass random type none pass val 0 index 1 ref 1 bind 1
Reported-by: Mingi Cho mgcho.minic@gmail.com Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Jamal Hadi Salim jhs@mojatatu.com Link: https://lore.kernel.org/r/20230726135151.416917-1-jhs@mojatatu.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/sched/cls_u32.c | 56 ++++++++++++++++++++++++++++++++++++++++----- 1 file changed, 50 insertions(+), 6 deletions(-)
diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c index 5abf31e432caf..907e58841fe80 100644 --- a/net/sched/cls_u32.c +++ b/net/sched/cls_u32.c @@ -1024,18 +1024,62 @@ static int u32_change(struct net *net, struct sk_buff *in_skb, return -EINVAL; }
+ /* At this point, we need to derive the new handle that will be used to + * uniquely map the identity of this table match entry. The + * identity of the entry that we need to construct is 32 bits made of: + * htid(12b):bucketid(8b):node/entryid(12b) + * + * At this point _we have the table(ht)_ in which we will insert this + * entry. We carry the table's id in variable "htid". + * Note that earlier code picked the ht selection either by a) the user + * providing the htid specified via TCA_U32_HASH attribute or b) when + * no such attribute is passed then the root ht, is default to at ID + * 0x[800][00][000]. Rule: the root table has a single bucket with ID 0. + * If OTOH the user passed us the htid, they may also pass a bucketid of + * choice. 0 is fine. For example a user htid is 0x[600][01][000] it is + * indicating hash bucketid of 1. Rule: the entry/node ID _cannot_ be + * passed via the htid, so even if it was non-zero it will be ignored. + * + * We may also have a handle, if the user passed one. The handle also + * carries the same addressing of htid(12b):bucketid(8b):node/entryid(12b). + * Rule: the bucketid on the handle is ignored even if one was passed; + * rather the value on "htid" is always assumed to be the bucketid. + */ if (handle) { + /* Rule: The htid from handle and tableid from htid must match */ if (TC_U32_HTID(handle) && TC_U32_HTID(handle ^ htid)) { NL_SET_ERR_MSG_MOD(extack, "Handle specified hash table address mismatch"); return -EINVAL; } - handle = htid | TC_U32_NODE(handle); - err = idr_alloc_u32(&ht->handle_idr, NULL, &handle, handle, - GFP_KERNEL); - if (err) - return err; - } else + /* Ok, so far we have a valid htid(12b):bucketid(8b) but we + * need to finalize the table entry identification with the last + * part - the node/entryid(12b)). Rule: Nodeid _cannot be 0_ for + * entries. Rule: nodeid of 0 is reserved only for tables(see + * earlier code which processes TC_U32_DIVISOR attribute). + * Rule: The nodeid can only be derived from the handle (and not + * htid). + * Rule: if the handle specified zero for the node id example + * 0x60000000, then pick a new nodeid from the pool of IDs + * this hash table has been allocating from. + * If OTOH it is specified (i.e for example the user passed a + * handle such as 0x60000123), then we use it generate our final + * handle which is used to uniquely identify the match entry. + */ + if (!TC_U32_NODE(handle)) { + handle = gen_new_kid(ht, htid); + } else { + handle = htid | TC_U32_NODE(handle); + err = idr_alloc_u32(&ht->handle_idr, NULL, &handle, + handle, GFP_KERNEL); + if (err) + return err; + } + } else { + /* The user did not give us a handle; lets just generate one + * from the table's pool of nodeids. + */ handle = gen_new_kid(ht, htid); + }
if (tb[TCA_U32_SEL] == NULL) { NL_SET_ERR_MSG_MOD(extack, "Selector not specified");
From: Chengfeng Ye dg573847474@gmail.com
[ Upstream commit 56c6be35fcbed54279df0a2c9e60480a61841d6f ]
As &hc->lock is acquired by both the timer callback _hfcpci_softirq() and the hardirq handler hfcpci_int(), the timer should disable IRQs before acquiring the lock; otherwise a deadlock can happen if the timer is preempted by the hard IRQ.
Possible deadlock scenario:

hfcpci_softirq() (timer)
 -> _hfcpci_softirq()
 -> spin_lock(&hc->lock);
    <irq interruption>
    -> hfcpci_int()
    -> spin_lock(&hc->lock); (deadlock here)
This flaw was found by an experimental static analysis tool I am developing for irq-related deadlock.
The tentative patch fixes the potential deadlock by using spin_lock_irq() in the timer callback.
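For reference, a minimal kernel-style sketch of the locking rule (not the hfcpci code; the names are hypothetical): a lock shared with a hardirq handler must be taken with interrupts disabled from any context the hardirq can preempt, such as a timer callback.

  #include <linux/interrupt.h>
  #include <linux/spinlock.h>
  #include <linux/timer.h>

  static DEFINE_SPINLOCK(shared_lock);

  static irqreturn_t card_irq(int irq, void *dev_id)
  {
          spin_lock(&shared_lock);      /* hardirq: interrupts already off */
          /* ... touch state shared with the timer ... */
          spin_unlock(&shared_lock);
          return IRQ_HANDLED;
  }

  static void card_timer(struct timer_list *t)
  {
          unsigned long flags;

          /* Keep the local hardirq from preempting while the lock is
           * held; a plain spin_lock() here is what allows the deadlock
           * described above.
           */
          spin_lock_irqsave(&shared_lock, flags);
          /* ... touch state shared with the hardirq ... */
          spin_unlock_irqrestore(&shared_lock, flags);
  }

The patch itself uses spin_lock_irq()/spin_unlock_irq(), which is equivalent here because the softirq entry point runs with interrupts enabled; spin_lock_irqsave() is simply the variant that is safe from any context.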
Fixes: b36b654a7e82 ("mISDN: Create /sys/class/mISDN") Signed-off-by: Chengfeng Ye dg573847474@gmail.com Link: https://lore.kernel.org/r/20230727085619.7419-1-dg573847474@gmail.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/isdn/hardware/mISDN/hfcpci.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/isdn/hardware/mISDN/hfcpci.c b/drivers/isdn/hardware/mISDN/hfcpci.c index c0331b2680108..fe391de1aba32 100644 --- a/drivers/isdn/hardware/mISDN/hfcpci.c +++ b/drivers/isdn/hardware/mISDN/hfcpci.c @@ -839,7 +839,7 @@ hfcpci_fill_fifo(struct bchannel *bch) *z1t = cpu_to_le16(new_z1); /* now send data */ if (bch->tx_idx < bch->tx_skb->len) return; - dev_kfree_skb(bch->tx_skb); + dev_kfree_skb_any(bch->tx_skb); if (get_next_bframe(bch)) goto next_t_frame; return; @@ -895,7 +895,7 @@ hfcpci_fill_fifo(struct bchannel *bch) } bz->za[new_f1].z1 = cpu_to_le16(new_z1); /* for next buffer */ bz->f1 = new_f1; /* next frame */ - dev_kfree_skb(bch->tx_skb); + dev_kfree_skb_any(bch->tx_skb); get_next_bframe(bch); }
@@ -1119,7 +1119,7 @@ tx_birq(struct bchannel *bch) if (bch->tx_skb && bch->tx_idx < bch->tx_skb->len) hfcpci_fill_fifo(bch); else { - dev_kfree_skb(bch->tx_skb); + dev_kfree_skb_any(bch->tx_skb); if (get_next_bframe(bch)) hfcpci_fill_fifo(bch); } @@ -2277,7 +2277,7 @@ _hfcpci_softirq(struct device *dev, void *unused) return 0;
if (hc->hw.int_m2 & HFCPCI_IRQ_ENABLE) { - spin_lock(&hc->lock); + spin_lock_irq(&hc->lock); bch = Sel_BCS(hc, hc->hw.bswapped ? 2 : 1); if (bch && bch->state == ISDN_P_B_RAW) { /* B1 rx&tx */ main_rec_hfcpci(bch); @@ -2288,7 +2288,7 @@ _hfcpci_softirq(struct device *dev, void *unused) main_rec_hfcpci(bch); tx_birq(bch); } - spin_unlock(&hc->lock); + spin_unlock_irq(&hc->lock); } return 0; }
From: Thierry Reding treding@nvidia.com
[ Upstream commit a0b1b2055be34c0ec1371764d040164cde1ead79 ]
The clock data is an array of struct clk_bulk_data, so make sure to allocate enough memory.
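As a minimal kernel-style sketch of the bug class (a hypothetical driver, not dwmac-tegra itself): devm_kzalloc(dev, sizeof(*clks), ...) only covers a single clk_bulk_data element, so filling in one slot per clock corrupts memory; the array has to be sized by the number of clock names.

  #include <linux/clk.h>
  #include <linux/device.h>
  #include <linux/kernel.h>
  #include <linux/slab.h>

  static const char * const example_clk_names[] = { "rx", "tx", "ptp" };

  static int example_get_clocks(struct device *dev, struct clk_bulk_data **out)
  {
          struct clk_bulk_data *clks;
          unsigned int i;

          /* One slot per clock name, not sizeof(*clks) for a single slot. */
          clks = devm_kcalloc(dev, ARRAY_SIZE(example_clk_names),
                              sizeof(*clks), GFP_KERNEL);
          if (!clks)
                  return -ENOMEM;

          for (i = 0; i < ARRAY_SIZE(example_clk_names); i++)
                  clks[i].id = example_clk_names[i];

          *out = clks;
          return devm_clk_bulk_get(dev, ARRAY_SIZE(example_clk_names), clks);
  }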
Fixes: d8ca113724e7 ("net: stmmac: tegra: Add MGBE support") Signed-off-by: Thierry Reding treding@nvidia.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c index bdf990cf2f310..0880048ccdddc 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c @@ -234,7 +234,8 @@ static int tegra_mgbe_probe(struct platform_device *pdev) res.addr = mgbe->regs; res.irq = irq;
- mgbe->clks = devm_kzalloc(&pdev->dev, sizeof(*mgbe->clks), GFP_KERNEL); + mgbe->clks = devm_kcalloc(&pdev->dev, ARRAY_SIZE(mgbe_clks), + sizeof(*mgbe->clks), GFP_KERNEL); if (!mgbe->clks) return -ENOMEM;
From: Konstantin Khorenko khorenko@virtuozzo.com
[ Upstream commit e346e231b42bcae6822a6326acfb7b741e9e6026 ]
Here we've got to a situation where a tasklet called usleep_range() in the PTT acquire logic, and thus we hit the "scheduling while atomic" BUG().
BUG: scheduling while atomic: swapper/24/0/0x00000100
[<ffffffffb41c6199>] schedule+0x29/0x70 [<ffffffffb41c5512>] schedule_hrtimeout_range_clock+0xb2/0x150 [<ffffffffb41c55c3>] schedule_hrtimeout_range+0x13/0x20 [<ffffffffb41c3bcf>] usleep_range+0x4f/0x70 [<ffffffffc08d3e58>] qed_ptt_acquire+0x38/0x100 [qed] [<ffffffffc08eac48>] _qed_get_vport_stats+0x458/0x580 [qed] [<ffffffffc08ead8c>] qed_get_vport_stats+0x1c/0xd0 [qed] [<ffffffffc08dffd3>] qed_get_protocol_stats+0x93/0x100 [qed] qed_mcp_send_protocol_stats case MFW_DRV_MSG_GET_LAN_STATS: case MFW_DRV_MSG_GET_FCOE_STATS: case MFW_DRV_MSG_GET_ISCSI_STATS: case MFW_DRV_MSG_GET_RDMA_STATS: [<ffffffffc08e36d8>] qed_mcp_handle_events+0x2d8/0x890 [qed] qed_int_assertion qed_int_attentions [<ffffffffc08d9490>] qed_int_sp_dpc+0xa50/0xdc0 [qed] [<ffffffffb3aa7623>] tasklet_action+0x83/0x140 [<ffffffffb41d9125>] __do_softirq+0x125/0x2bb [<ffffffffb41d560c>] call_softirq+0x1c/0x30 [<ffffffffb3a30645>] do_softirq+0x65/0xa0 [<ffffffffb3aa78d5>] irq_exit+0x105/0x110 [<ffffffffb41d8996>] do_IRQ+0x56/0xf0
Fix this by making the caller provide the context, i.e. whether it could be in an atomic flow or not, when getting stats from the QED driver. Based on the provided context, the QED driver decides whether to schedule out or not when acquiring the PTT BAR window.
We faced the BUG_ON() while getting vport stats, but according to the code the same issue could happen for fcoe and iscsi statistics as well, so fix them too.
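A kernel-style sketch of the resulting pattern (simplified; try_grab_window() is a hypothetical helper, not a qed function): the caller passes a hint about its context, and the wait loop chooses a busy-wait or a sleeping wait accordingly, so the atomic path never schedules.

  #include <linux/delay.h>
  #include <linux/errno.h>
  #include <linux/types.h>

  static bool try_grab_window(void);

  static int acquire_window(bool is_atomic)
  {
          unsigned int i, attempts;

          /* Busy-waiting uses shorter but more numerous delays. */
          attempts = is_atomic ? 100000 : 1000;

          for (i = 0; i < attempts; i++) {
                  if (try_grab_window())
                          return 0;

                  if (is_atomic)
                          udelay(10);               /* no scheduling allowed */
                  else
                          usleep_range(1000, 2000); /* may sleep */
          }

          return -EBUSY;
  }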
Fixes: 6c75424612a7 ("qed: Add support for NCSI statistics.") Fixes: 1e128c81290a ("qed: Add support for hardware offloaded FCoE.") Fixes: 2f2b2614e893 ("qed: Provide iSCSI statistics to management") Cc: Sudarsana Kalluru skalluru@marvell.com Cc: David Miller davem@davemloft.net Cc: Manish Chopra manishc@marvell.com
Signed-off-by: Konstantin Khorenko khorenko@virtuozzo.com Reviewed-by: Simon Horman horms@kernel.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/qlogic/qed/qed_dev_api.h | 16 ++++++++++++ drivers/net/ethernet/qlogic/qed/qed_fcoe.c | 19 ++++++++++---- drivers/net/ethernet/qlogic/qed/qed_fcoe.h | 17 ++++++++++-- drivers/net/ethernet/qlogic/qed/qed_hw.c | 26 ++++++++++++++++--- drivers/net/ethernet/qlogic/qed/qed_iscsi.c | 19 ++++++++++---- drivers/net/ethernet/qlogic/qed/qed_iscsi.h | 8 ++++-- drivers/net/ethernet/qlogic/qed/qed_l2.c | 19 ++++++++++---- drivers/net/ethernet/qlogic/qed/qed_l2.h | 24 +++++++++++++++++ drivers/net/ethernet/qlogic/qed/qed_main.c | 6 ++--- 9 files changed, 128 insertions(+), 26 deletions(-)
diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev_api.h b/drivers/net/ethernet/qlogic/qed/qed_dev_api.h index f8682356d0cf4..94d4f9413ab7a 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_dev_api.h +++ b/drivers/net/ethernet/qlogic/qed/qed_dev_api.h @@ -193,6 +193,22 @@ void qed_hw_remove(struct qed_dev *cdev); */ struct qed_ptt *qed_ptt_acquire(struct qed_hwfn *p_hwfn);
+/** + * qed_ptt_acquire_context(): Allocate a PTT window honoring the context + * atomicy. + * + * @p_hwfn: HW device data. + * @is_atomic: Hint from the caller - if the func can sleep or not. + * + * Context: The function should not sleep in case is_atomic == true. + * Return: struct qed_ptt. + * + * Should be called at the entry point to the driver + * (at the beginning of an exported function). + */ +struct qed_ptt *qed_ptt_acquire_context(struct qed_hwfn *p_hwfn, + bool is_atomic); + /** * qed_ptt_release(): Release PTT Window. * diff --git a/drivers/net/ethernet/qlogic/qed/qed_fcoe.c b/drivers/net/ethernet/qlogic/qed/qed_fcoe.c index 3764190b948eb..04602ac947087 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_fcoe.c +++ b/drivers/net/ethernet/qlogic/qed/qed_fcoe.c @@ -693,13 +693,14 @@ static void _qed_fcoe_get_pstats(struct qed_hwfn *p_hwfn, }
static int qed_fcoe_get_stats(struct qed_hwfn *p_hwfn, - struct qed_fcoe_stats *p_stats) + struct qed_fcoe_stats *p_stats, + bool is_atomic) { struct qed_ptt *p_ptt;
memset(p_stats, 0, sizeof(*p_stats));
- p_ptt = qed_ptt_acquire(p_hwfn); + p_ptt = qed_ptt_acquire_context(p_hwfn, is_atomic);
if (!p_ptt) { DP_ERR(p_hwfn, "Failed to acquire ptt\n"); @@ -973,19 +974,27 @@ static int qed_fcoe_destroy_conn(struct qed_dev *cdev, QED_SPQ_MODE_EBLOCK, NULL); }
+static int qed_fcoe_stats_context(struct qed_dev *cdev, + struct qed_fcoe_stats *stats, + bool is_atomic) +{ + return qed_fcoe_get_stats(QED_AFFIN_HWFN(cdev), stats, is_atomic); +} + static int qed_fcoe_stats(struct qed_dev *cdev, struct qed_fcoe_stats *stats) { - return qed_fcoe_get_stats(QED_AFFIN_HWFN(cdev), stats); + return qed_fcoe_stats_context(cdev, stats, false); }
void qed_get_protocol_stats_fcoe(struct qed_dev *cdev, - struct qed_mcp_fcoe_stats *stats) + struct qed_mcp_fcoe_stats *stats, + bool is_atomic) { struct qed_fcoe_stats proto_stats;
/* Retrieve FW statistics */ memset(&proto_stats, 0, sizeof(proto_stats)); - if (qed_fcoe_stats(cdev, &proto_stats)) { + if (qed_fcoe_stats_context(cdev, &proto_stats, is_atomic)) { DP_VERBOSE(cdev, QED_MSG_STORAGE, "Failed to collect FCoE statistics\n"); return; diff --git a/drivers/net/ethernet/qlogic/qed/qed_fcoe.h b/drivers/net/ethernet/qlogic/qed/qed_fcoe.h index 19c85adf4ceb1..214e8299ecb4e 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_fcoe.h +++ b/drivers/net/ethernet/qlogic/qed/qed_fcoe.h @@ -28,8 +28,20 @@ int qed_fcoe_alloc(struct qed_hwfn *p_hwfn); void qed_fcoe_setup(struct qed_hwfn *p_hwfn);
void qed_fcoe_free(struct qed_hwfn *p_hwfn); +/** + * qed_get_protocol_stats_fcoe(): Fills provided statistics + * struct with statistics. + * + * @cdev: Qed dev pointer. + * @stats: Points to struct that will be filled with statistics. + * @is_atomic: Hint from the caller - if the func can sleep or not. + * + * Context: The function should not sleep in case is_atomic == true. + * Return: Void. + */ void qed_get_protocol_stats_fcoe(struct qed_dev *cdev, - struct qed_mcp_fcoe_stats *stats); + struct qed_mcp_fcoe_stats *stats, + bool is_atomic); #else /* CONFIG_QED_FCOE */ static inline int qed_fcoe_alloc(struct qed_hwfn *p_hwfn) { @@ -40,7 +52,8 @@ static inline void qed_fcoe_setup(struct qed_hwfn *p_hwfn) {} static inline void qed_fcoe_free(struct qed_hwfn *p_hwfn) {}
static inline void qed_get_protocol_stats_fcoe(struct qed_dev *cdev, - struct qed_mcp_fcoe_stats *stats) + struct qed_mcp_fcoe_stats *stats, + bool is_atomic) { } #endif /* CONFIG_QED_FCOE */ diff --git a/drivers/net/ethernet/qlogic/qed/qed_hw.c b/drivers/net/ethernet/qlogic/qed/qed_hw.c index 554f30b0cfd5e..6263f847b6b92 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_hw.c +++ b/drivers/net/ethernet/qlogic/qed/qed_hw.c @@ -23,7 +23,10 @@ #include "qed_reg_addr.h" #include "qed_sriov.h"
-#define QED_BAR_ACQUIRE_TIMEOUT 1000 +#define QED_BAR_ACQUIRE_TIMEOUT_USLEEP_CNT 1000 +#define QED_BAR_ACQUIRE_TIMEOUT_USLEEP 1000 +#define QED_BAR_ACQUIRE_TIMEOUT_UDELAY_CNT 100000 +#define QED_BAR_ACQUIRE_TIMEOUT_UDELAY 10
/* Invalid values */ #define QED_BAR_INVALID_OFFSET (cpu_to_le32(-1)) @@ -84,12 +87,22 @@ void qed_ptt_pool_free(struct qed_hwfn *p_hwfn) }
struct qed_ptt *qed_ptt_acquire(struct qed_hwfn *p_hwfn) +{ + return qed_ptt_acquire_context(p_hwfn, false); +} + +struct qed_ptt *qed_ptt_acquire_context(struct qed_hwfn *p_hwfn, bool is_atomic) { struct qed_ptt *p_ptt; - unsigned int i; + unsigned int i, count; + + if (is_atomic) + count = QED_BAR_ACQUIRE_TIMEOUT_UDELAY_CNT; + else + count = QED_BAR_ACQUIRE_TIMEOUT_USLEEP_CNT;
/* Take the free PTT from the list */ - for (i = 0; i < QED_BAR_ACQUIRE_TIMEOUT; i++) { + for (i = 0; i < count; i++) { spin_lock_bh(&p_hwfn->p_ptt_pool->lock);
if (!list_empty(&p_hwfn->p_ptt_pool->free_list)) { @@ -105,7 +118,12 @@ struct qed_ptt *qed_ptt_acquire(struct qed_hwfn *p_hwfn) }
spin_unlock_bh(&p_hwfn->p_ptt_pool->lock); - usleep_range(1000, 2000); + + if (is_atomic) + udelay(QED_BAR_ACQUIRE_TIMEOUT_UDELAY); + else + usleep_range(QED_BAR_ACQUIRE_TIMEOUT_USLEEP, + QED_BAR_ACQUIRE_TIMEOUT_USLEEP * 2); }
DP_NOTICE(p_hwfn, "PTT acquire timeout - failed to allocate PTT\n"); diff --git a/drivers/net/ethernet/qlogic/qed/qed_iscsi.c b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c index 511ab214eb9c8..980e7289b4814 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_iscsi.c +++ b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c @@ -999,13 +999,14 @@ static void _qed_iscsi_get_pstats(struct qed_hwfn *p_hwfn, }
static int qed_iscsi_get_stats(struct qed_hwfn *p_hwfn, - struct qed_iscsi_stats *stats) + struct qed_iscsi_stats *stats, + bool is_atomic) { struct qed_ptt *p_ptt;
memset(stats, 0, sizeof(*stats));
- p_ptt = qed_ptt_acquire(p_hwfn); + p_ptt = qed_ptt_acquire_context(p_hwfn, is_atomic); if (!p_ptt) { DP_ERR(p_hwfn, "Failed to acquire ptt\n"); return -EAGAIN; @@ -1336,9 +1337,16 @@ static int qed_iscsi_destroy_conn(struct qed_dev *cdev, QED_SPQ_MODE_EBLOCK, NULL); }
+static int qed_iscsi_stats_context(struct qed_dev *cdev, + struct qed_iscsi_stats *stats, + bool is_atomic) +{ + return qed_iscsi_get_stats(QED_AFFIN_HWFN(cdev), stats, is_atomic); +} + static int qed_iscsi_stats(struct qed_dev *cdev, struct qed_iscsi_stats *stats) { - return qed_iscsi_get_stats(QED_AFFIN_HWFN(cdev), stats); + return qed_iscsi_stats_context(cdev, stats, false); }
static int qed_iscsi_change_mac(struct qed_dev *cdev, @@ -1358,13 +1366,14 @@ static int qed_iscsi_change_mac(struct qed_dev *cdev, }
void qed_get_protocol_stats_iscsi(struct qed_dev *cdev, - struct qed_mcp_iscsi_stats *stats) + struct qed_mcp_iscsi_stats *stats, + bool is_atomic) { struct qed_iscsi_stats proto_stats;
/* Retrieve FW statistics */ memset(&proto_stats, 0, sizeof(proto_stats)); - if (qed_iscsi_stats(cdev, &proto_stats)) { + if (qed_iscsi_stats_context(cdev, &proto_stats, is_atomic)) { DP_VERBOSE(cdev, QED_MSG_STORAGE, "Failed to collect ISCSI statistics\n"); return; diff --git a/drivers/net/ethernet/qlogic/qed/qed_iscsi.h b/drivers/net/ethernet/qlogic/qed/qed_iscsi.h index dec2b00259d42..974cb8d26608c 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_iscsi.h +++ b/drivers/net/ethernet/qlogic/qed/qed_iscsi.h @@ -39,11 +39,14 @@ void qed_iscsi_free(struct qed_hwfn *p_hwfn); * * @cdev: Qed dev pointer. * @stats: Points to struct that will be filled with statistics. + * @is_atomic: Hint from the caller - if the func can sleep or not. * + * Context: The function should not sleep in case is_atomic == true. * Return: Void. */ void qed_get_protocol_stats_iscsi(struct qed_dev *cdev, - struct qed_mcp_iscsi_stats *stats); + struct qed_mcp_iscsi_stats *stats, + bool is_atomic); #else /* IS_ENABLED(CONFIG_QED_ISCSI) */ static inline int qed_iscsi_alloc(struct qed_hwfn *p_hwfn) { @@ -56,7 +59,8 @@ static inline void qed_iscsi_free(struct qed_hwfn *p_hwfn) {}
static inline void qed_get_protocol_stats_iscsi(struct qed_dev *cdev, - struct qed_mcp_iscsi_stats *stats) {} + struct qed_mcp_iscsi_stats *stats, + bool is_atomic) {} #endif /* IS_ENABLED(CONFIG_QED_ISCSI) */
#endif diff --git a/drivers/net/ethernet/qlogic/qed/qed_l2.c b/drivers/net/ethernet/qlogic/qed/qed_l2.c index 7776d3bdd459a..970b9aabbc3d7 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_l2.c +++ b/drivers/net/ethernet/qlogic/qed/qed_l2.c @@ -1863,7 +1863,8 @@ static void __qed_get_vport_stats(struct qed_hwfn *p_hwfn, }
static void _qed_get_vport_stats(struct qed_dev *cdev, - struct qed_eth_stats *stats) + struct qed_eth_stats *stats, + bool is_atomic) { u8 fw_vport = 0; int i; @@ -1872,10 +1873,11 @@ static void _qed_get_vport_stats(struct qed_dev *cdev,
for_each_hwfn(cdev, i) { struct qed_hwfn *p_hwfn = &cdev->hwfns[i]; - struct qed_ptt *p_ptt = IS_PF(cdev) ? qed_ptt_acquire(p_hwfn) - : NULL; + struct qed_ptt *p_ptt; bool b_get_port_stats;
+ p_ptt = IS_PF(cdev) ? qed_ptt_acquire_context(p_hwfn, is_atomic) + : NULL; if (IS_PF(cdev)) { /* The main vport index is relative first */ if (qed_fw_vport(p_hwfn, 0, &fw_vport)) { @@ -1900,6 +1902,13 @@ static void _qed_get_vport_stats(struct qed_dev *cdev, }
void qed_get_vport_stats(struct qed_dev *cdev, struct qed_eth_stats *stats) +{ + qed_get_vport_stats_context(cdev, stats, false); +} + +void qed_get_vport_stats_context(struct qed_dev *cdev, + struct qed_eth_stats *stats, + bool is_atomic) { u32 i;
@@ -1908,7 +1917,7 @@ void qed_get_vport_stats(struct qed_dev *cdev, struct qed_eth_stats *stats) return; }
- _qed_get_vport_stats(cdev, stats); + _qed_get_vport_stats(cdev, stats, is_atomic);
if (!cdev->reset_stats) return; @@ -1960,7 +1969,7 @@ void qed_reset_vport_stats(struct qed_dev *cdev) if (!cdev->reset_stats) { DP_INFO(cdev, "Reset stats not allocated\n"); } else { - _qed_get_vport_stats(cdev, cdev->reset_stats); + _qed_get_vport_stats(cdev, cdev->reset_stats, false); cdev->reset_stats->common.link_change_count = 0; } } diff --git a/drivers/net/ethernet/qlogic/qed/qed_l2.h b/drivers/net/ethernet/qlogic/qed/qed_l2.h index a538cf478c14e..2d2f82c785ad2 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_l2.h +++ b/drivers/net/ethernet/qlogic/qed/qed_l2.h @@ -249,8 +249,32 @@ qed_sp_eth_rx_queues_update(struct qed_hwfn *p_hwfn, enum spq_mode comp_mode, struct qed_spq_comp_cb *p_comp_data);
+/** + * qed_get_vport_stats(): Fills provided statistics + * struct with statistics. + * + * @cdev: Qed dev pointer. + * @stats: Points to struct that will be filled with statistics. + * + * Return: Void. + */ void qed_get_vport_stats(struct qed_dev *cdev, struct qed_eth_stats *stats);
+/** + * qed_get_vport_stats_context(): Fills provided statistics + * struct with statistics. + * + * @cdev: Qed dev pointer. + * @stats: Points to struct that will be filled with statistics. + * @is_atomic: Hint from the caller - if the func can sleep or not. + * + * Context: The function should not sleep in case is_atomic == true. + * Return: Void. + */ +void qed_get_vport_stats_context(struct qed_dev *cdev, + struct qed_eth_stats *stats, + bool is_atomic); + void qed_reset_vport_stats(struct qed_dev *cdev);
/** diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c index f5af83342856f..c278f8893042b 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_main.c +++ b/drivers/net/ethernet/qlogic/qed/qed_main.c @@ -3092,7 +3092,7 @@ void qed_get_protocol_stats(struct qed_dev *cdev,
switch (type) { case QED_MCP_LAN_STATS: - qed_get_vport_stats(cdev, ð_stats); + qed_get_vport_stats_context(cdev, ð_stats, true); stats->lan_stats.ucast_rx_pkts = eth_stats.common.rx_ucast_pkts; stats->lan_stats.ucast_tx_pkts = @@ -3100,10 +3100,10 @@ void qed_get_protocol_stats(struct qed_dev *cdev, stats->lan_stats.fcs_err = -1; break; case QED_MCP_FCOE_STATS: - qed_get_protocol_stats_fcoe(cdev, &stats->fcoe_stats); + qed_get_protocol_stats_fcoe(cdev, &stats->fcoe_stats, true); break; case QED_MCP_ISCSI_STATS: - qed_get_protocol_stats_iscsi(cdev, &stats->iscsi_stats); + qed_get_protocol_stats_iscsi(cdev, &stats->iscsi_stats, true); break; default: DP_VERBOSE(cdev, QED_MSG_SP,
From: Eric Dumazet edumazet@google.com
[ Upstream commit d457a0e329b0bfd3a1450e0b1a18cd2b47a25a08 ]
Move declarations into include/net/gso.h and code into net/core/gso.c
Signed-off-by: Eric Dumazet edumazet@google.com Cc: Stanislav Fomichev sdf@google.com Reviewed-by: Simon Horman simon.horman@corigine.com Reviewed-by: David Ahern dsahern@kernel.org Link: https://lore.kernel.org/r/20230608191738.3947077-1-edumazet@google.com Signed-off-by: Jakub Kicinski kuba@kernel.org Stable-dep-of: 7938cd154368 ("net: gro: fix misuse of CB in udp socket lookup") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/broadcom/tg3.c | 1 + .../net/ethernet/myricom/myri10ge/myri10ge.c | 1 + drivers/net/ethernet/sfc/siena/tx_common.c | 1 + drivers/net/ethernet/sfc/tx_common.c | 1 + drivers/net/tap.c | 1 + drivers/net/usb/r8152.c | 1 + drivers/net/wireguard/device.c | 1 + drivers/net/wireless/intel/iwlwifi/mvm/tx.c | 1 + include/linux/netdevice.h | 26 +- include/linux/skbuff.h | 71 ----- include/net/gro.h | 1 + include/net/gso.h | 109 +++++++ include/net/udp.h | 1 + net/core/Makefile | 2 +- net/core/dev.c | 70 +---- net/core/gro.c | 59 +--- net/core/gso.c | 273 ++++++++++++++++++ net/core/skbuff.c | 142 +-------- net/ipv4/af_inet.c | 1 + net/ipv4/esp4_offload.c | 1 + net/ipv4/gre_offload.c | 1 + net/ipv4/ip_output.c | 1 + net/ipv4/tcp_offload.c | 1 + net/ipv4/udp.c | 1 + net/ipv4/udp_offload.c | 1 + net/ipv6/esp6_offload.c | 1 + net/ipv6/ip6_offload.c | 1 + net/ipv6/ip6_output.c | 1 + net/ipv6/udp_offload.c | 1 + net/mac80211/tx.c | 1 + net/mpls/af_mpls.c | 1 + net/mpls/mpls_gso.c | 1 + net/netfilter/nf_flow_table_ip.c | 1 + net/netfilter/nfnetlink_queue.c | 1 + net/nsh/nsh.c | 1 + net/openvswitch/actions.c | 1 + net/openvswitch/datapath.c | 1 + net/sched/act_police.c | 1 + net/sched/sch_cake.c | 1 + net/sched/sch_netem.c | 1 + net/sched/sch_taprio.c | 1 + net/sched/sch_tbf.c | 1 + net/sctp/offload.c | 1 + net/xfrm/xfrm_device.c | 1 + net/xfrm/xfrm_interface_core.c | 1 + net/xfrm/xfrm_output.c | 1 + 46 files changed, 425 insertions(+), 365 deletions(-) create mode 100644 include/net/gso.h create mode 100644 net/core/gso.c
diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c index a52cf9aae4988..5ef073a79ce94 100644 --- a/drivers/net/ethernet/broadcom/tg3.c +++ b/drivers/net/ethernet/broadcom/tg3.c @@ -57,6 +57,7 @@ #include <linux/crc32poly.h>
#include <net/checksum.h> +#include <net/gso.h> #include <net/ip.h>
#include <linux/io.h> diff --git a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c index c5687d94ea885..7b7e1c5b00f47 100644 --- a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c +++ b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c @@ -66,6 +66,7 @@ #include <linux/slab.h> #include <linux/prefetch.h> #include <net/checksum.h> +#include <net/gso.h> #include <net/ip.h> #include <net/tcp.h> #include <asm/byteorder.h> diff --git a/drivers/net/ethernet/sfc/siena/tx_common.c b/drivers/net/ethernet/sfc/siena/tx_common.c index 93a32d61944f0..a7a9ab304e136 100644 --- a/drivers/net/ethernet/sfc/siena/tx_common.c +++ b/drivers/net/ethernet/sfc/siena/tx_common.c @@ -12,6 +12,7 @@ #include "efx.h" #include "nic_common.h" #include "tx_common.h" +#include <net/gso.h>
static unsigned int efx_tx_cb_page_count(struct efx_tx_queue *tx_queue) { diff --git a/drivers/net/ethernet/sfc/tx_common.c b/drivers/net/ethernet/sfc/tx_common.c index 755aa92bf8236..9f2393d343715 100644 --- a/drivers/net/ethernet/sfc/tx_common.c +++ b/drivers/net/ethernet/sfc/tx_common.c @@ -12,6 +12,7 @@ #include "efx.h" #include "nic_common.h" #include "tx_common.h" +#include <net/gso.h>
static unsigned int efx_tx_cb_page_count(struct efx_tx_queue *tx_queue) { diff --git a/drivers/net/tap.c b/drivers/net/tap.c index d30d730ed5a71..9137fb8c1c420 100644 --- a/drivers/net/tap.c +++ b/drivers/net/tap.c @@ -18,6 +18,7 @@ #include <linux/fs.h> #include <linux/uio.h>
+#include <net/gso.h> #include <net/net_namespace.h> #include <net/rtnetlink.h> #include <net/sock.h> diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c index 0999a58ca9d26..0738baa5b82e4 100644 --- a/drivers/net/usb/r8152.c +++ b/drivers/net/usb/r8152.c @@ -27,6 +27,7 @@ #include <linux/firmware.h> #include <crypto/hash.h> #include <linux/usb/r8152.h> +#include <net/gso.h>
/* Information for net-next */ #define NETNEXT_VERSION "12" diff --git a/drivers/net/wireguard/device.c b/drivers/net/wireguard/device.c index d58e9f818d3b7..258dcc1039216 100644 --- a/drivers/net/wireguard/device.c +++ b/drivers/net/wireguard/device.c @@ -20,6 +20,7 @@ #include <linux/icmp.h> #include <linux/suspend.h> #include <net/dst_metadata.h> +#include <net/gso.h> #include <net/icmp.h> #include <net/rtnetlink.h> #include <net/ip_tunnels.h> diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c index 00719e1304386..682733193d3de 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c @@ -7,6 +7,7 @@ #include <linux/ieee80211.h> #include <linux/etherdevice.h> #include <linux/tcp.h> +#include <net/gso.h> #include <net/ip.h> #include <net/ipv6.h>
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 68adc8af29efb..9291c04a2e09d 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -4827,13 +4827,6 @@ int skb_crc32c_csum_help(struct sk_buff *skb); int skb_csum_hwoffload_help(struct sk_buff *skb, const netdev_features_t features);
-struct sk_buff *__skb_gso_segment(struct sk_buff *skb, - netdev_features_t features, bool tx_path); -struct sk_buff *skb_eth_gso_segment(struct sk_buff *skb, - netdev_features_t features, __be16 type); -struct sk_buff *skb_mac_gso_segment(struct sk_buff *skb, - netdev_features_t features); - struct netdev_bonding_info { ifslave slave; ifbond master; @@ -4856,11 +4849,6 @@ static inline void ethtool_notify(struct net_device *dev, unsigned int cmd, } #endif
-static inline -struct sk_buff *skb_gso_segment(struct sk_buff *skb, netdev_features_t features) -{ - return __skb_gso_segment(skb, features, true); -} __be16 skb_network_protocol(struct sk_buff *skb, int *depth);
static inline bool can_checksum_protocol(netdev_features_t features, @@ -4987,6 +4975,7 @@ netdev_features_t passthru_features_check(struct sk_buff *skb, struct net_device *dev, netdev_features_t features); netdev_features_t netif_skb_features(struct sk_buff *skb); +void skb_warn_bad_offload(const struct sk_buff *skb);
static inline bool net_gso_ok(netdev_features_t features, int gso_type) { @@ -5035,19 +5024,6 @@ void netif_set_tso_max_segs(struct net_device *dev, unsigned int segs); void netif_inherit_tso_max(struct net_device *to, const struct net_device *from);
-static inline void skb_gso_error_unwind(struct sk_buff *skb, __be16 protocol, - int pulled_hlen, u16 mac_offset, - int mac_len) -{ - skb->protocol = protocol; - skb->encapsulation = 1; - skb_push(skb, pulled_hlen); - skb_reset_transport_header(skb); - skb->mac_header = mac_offset; - skb->network_header = skb->mac_header + mac_len; - skb->mac_len = mac_len; -} - static inline bool netif_is_macsec(const struct net_device *dev) { return dev->priv_flags & IFF_MACSEC; diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 0b40417457cd1..fdd9db2612968 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -3992,8 +3992,6 @@ int skb_zerocopy(struct sk_buff *to, struct sk_buff *from, void skb_split(struct sk_buff *skb, struct sk_buff *skb1, const u32 len); int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen); void skb_scrub_packet(struct sk_buff *skb, bool xnet); -bool skb_gso_validate_network_len(const struct sk_buff *skb, unsigned int mtu); -bool skb_gso_validate_mac_len(const struct sk_buff *skb, unsigned int len); struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features); struct sk_buff *skb_segment_list(struct sk_buff *skb, netdev_features_t features, unsigned int offset); @@ -4859,75 +4857,6 @@ static inline struct sec_path *skb_sec_path(const struct sk_buff *skb) #endif }
-/* Keeps track of mac header offset relative to skb->head. - * It is useful for TSO of Tunneling protocol. e.g. GRE. - * For non-tunnel skb it points to skb_mac_header() and for - * tunnel skb it points to outer mac header. - * Keeps track of level of encapsulation of network headers. - */ -struct skb_gso_cb { - union { - int mac_offset; - int data_offset; - }; - int encap_level; - __wsum csum; - __u16 csum_start; -}; -#define SKB_GSO_CB_OFFSET 32 -#define SKB_GSO_CB(skb) ((struct skb_gso_cb *)((skb)->cb + SKB_GSO_CB_OFFSET)) - -static inline int skb_tnl_header_len(const struct sk_buff *inner_skb) -{ - return (skb_mac_header(inner_skb) - inner_skb->head) - - SKB_GSO_CB(inner_skb)->mac_offset; -} - -static inline int gso_pskb_expand_head(struct sk_buff *skb, int extra) -{ - int new_headroom, headroom; - int ret; - - headroom = skb_headroom(skb); - ret = pskb_expand_head(skb, extra, 0, GFP_ATOMIC); - if (ret) - return ret; - - new_headroom = skb_headroom(skb); - SKB_GSO_CB(skb)->mac_offset += (new_headroom - headroom); - return 0; -} - -static inline void gso_reset_checksum(struct sk_buff *skb, __wsum res) -{ - /* Do not update partial checksums if remote checksum is enabled. */ - if (skb->remcsum_offload) - return; - - SKB_GSO_CB(skb)->csum = res; - SKB_GSO_CB(skb)->csum_start = skb_checksum_start(skb) - skb->head; -} - -/* Compute the checksum for a gso segment. First compute the checksum value - * from the start of transport header to SKB_GSO_CB(skb)->csum_start, and - * then add in skb->csum (checksum from csum_start to end of packet). - * skb->csum and csum_start are then updated to reflect the checksum of the - * resultant packet starting from the transport header-- the resultant checksum - * is in the res argument (i.e. normally zero or ~ of checksum of a pseudo - * header. - */ -static inline __sum16 gso_make_checksum(struct sk_buff *skb, __wsum res) -{ - unsigned char *csum_start = skb_transport_header(skb); - int plen = (skb->head + SKB_GSO_CB(skb)->csum_start) - csum_start; - __wsum partial = SKB_GSO_CB(skb)->csum; - - SKB_GSO_CB(skb)->csum = res; - SKB_GSO_CB(skb)->csum_start = csum_start - skb->head; - - return csum_fold(csum_partial(csum_start, plen, partial)); -} - static inline bool skb_is_gso(const struct sk_buff *skb) { return skb_shinfo(skb)->gso_size; diff --git a/include/net/gro.h b/include/net/gro.h index a4fab706240d2..972ff42d3a829 100644 --- a/include/net/gro.h +++ b/include/net/gro.h @@ -446,5 +446,6 @@ static inline void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb, gro_normal_list(napi); }
+extern struct list_head offload_base;
#endif /* _NET_IPV6_GRO_H */ diff --git a/include/net/gso.h b/include/net/gso.h new file mode 100644 index 0000000000000..29975440cad51 --- /dev/null +++ b/include/net/gso.h @@ -0,0 +1,109 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ + +#ifndef _NET_GSO_H +#define _NET_GSO_H + +#include <linux/skbuff.h> + +/* Keeps track of mac header offset relative to skb->head. + * It is useful for TSO of Tunneling protocol. e.g. GRE. + * For non-tunnel skb it points to skb_mac_header() and for + * tunnel skb it points to outer mac header. + * Keeps track of level of encapsulation of network headers. + */ +struct skb_gso_cb { + union { + int mac_offset; + int data_offset; + }; + int encap_level; + __wsum csum; + __u16 csum_start; +}; +#define SKB_GSO_CB_OFFSET 32 +#define SKB_GSO_CB(skb) ((struct skb_gso_cb *)((skb)->cb + SKB_GSO_CB_OFFSET)) + +static inline int skb_tnl_header_len(const struct sk_buff *inner_skb) +{ + return (skb_mac_header(inner_skb) - inner_skb->head) - + SKB_GSO_CB(inner_skb)->mac_offset; +} + +static inline int gso_pskb_expand_head(struct sk_buff *skb, int extra) +{ + int new_headroom, headroom; + int ret; + + headroom = skb_headroom(skb); + ret = pskb_expand_head(skb, extra, 0, GFP_ATOMIC); + if (ret) + return ret; + + new_headroom = skb_headroom(skb); + SKB_GSO_CB(skb)->mac_offset += (new_headroom - headroom); + return 0; +} + +static inline void gso_reset_checksum(struct sk_buff *skb, __wsum res) +{ + /* Do not update partial checksums if remote checksum is enabled. */ + if (skb->remcsum_offload) + return; + + SKB_GSO_CB(skb)->csum = res; + SKB_GSO_CB(skb)->csum_start = skb_checksum_start(skb) - skb->head; +} + +/* Compute the checksum for a gso segment. First compute the checksum value + * from the start of transport header to SKB_GSO_CB(skb)->csum_start, and + * then add in skb->csum (checksum from csum_start to end of packet). + * skb->csum and csum_start are then updated to reflect the checksum of the + * resultant packet starting from the transport header-- the resultant checksum + * is in the res argument (i.e. normally zero or ~ of checksum of a pseudo + * header. 
+ */ +static inline __sum16 gso_make_checksum(struct sk_buff *skb, __wsum res) +{ + unsigned char *csum_start = skb_transport_header(skb); + int plen = (skb->head + SKB_GSO_CB(skb)->csum_start) - csum_start; + __wsum partial = SKB_GSO_CB(skb)->csum; + + SKB_GSO_CB(skb)->csum = res; + SKB_GSO_CB(skb)->csum_start = csum_start - skb->head; + + return csum_fold(csum_partial(csum_start, plen, partial)); +} + +struct sk_buff *__skb_gso_segment(struct sk_buff *skb, + netdev_features_t features, bool tx_path); + +static inline struct sk_buff *skb_gso_segment(struct sk_buff *skb, + netdev_features_t features) +{ + return __skb_gso_segment(skb, features, true); +} + +struct sk_buff *skb_eth_gso_segment(struct sk_buff *skb, + netdev_features_t features, __be16 type); + +struct sk_buff *skb_mac_gso_segment(struct sk_buff *skb, + netdev_features_t features); + +bool skb_gso_validate_network_len(const struct sk_buff *skb, unsigned int mtu); + +bool skb_gso_validate_mac_len(const struct sk_buff *skb, unsigned int len); + +static inline void skb_gso_error_unwind(struct sk_buff *skb, __be16 protocol, + int pulled_hlen, u16 mac_offset, + int mac_len) +{ + skb->protocol = protocol; + skb->encapsulation = 1; + skb_push(skb, pulled_hlen); + skb_reset_transport_header(skb); + skb->mac_header = mac_offset; + skb->network_header = skb->mac_header + mac_len; + skb->mac_len = mac_len; +} + +#endif /* _NET_GSO_H */ diff --git a/include/net/udp.h b/include/net/udp.h index de4b528522bb9..94f3486c43e33 100644 --- a/include/net/udp.h +++ b/include/net/udp.h @@ -21,6 +21,7 @@ #include <linux/list.h> #include <linux/bug.h> #include <net/inet_sock.h> +#include <net/gso.h> #include <net/sock.h> #include <net/snmp.h> #include <net/ip.h> diff --git a/net/core/Makefile b/net/core/Makefile index 8f367813bc681..731db2eaa6107 100644 --- a/net/core/Makefile +++ b/net/core/Makefile @@ -13,7 +13,7 @@ obj-y += dev.o dev_addr_lists.o dst.o netevent.o \ neighbour.o rtnetlink.o utils.o link_watch.o filter.o \ sock_diag.o dev_ioctl.o tso.o sock_reuseport.o \ fib_notifier.o xdp.o flow_offload.o gro.o \ - netdev-genl.o netdev-genl-gen.o + netdev-genl.o netdev-genl-gen.o gso.o
obj-$(CONFIG_NETDEV_ADDR_LIST_TEST) += dev_addr_lists_test.o
diff --git a/net/core/dev.c b/net/core/dev.c index c29f3e1db3ca7..44a4eb76a659e 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -3209,7 +3209,7 @@ static u16 skb_tx_hash(const struct net_device *dev, return (u16) reciprocal_scale(skb_get_hash(skb), qcount) + qoffset; }
-static void skb_warn_bad_offload(const struct sk_buff *skb) +void skb_warn_bad_offload(const struct sk_buff *skb) { static const netdev_features_t null_features; struct net_device *dev = skb->dev; @@ -3338,74 +3338,6 @@ __be16 skb_network_protocol(struct sk_buff *skb, int *depth) return vlan_get_protocol_and_depth(skb, type, depth); }
-/* openvswitch calls this on rx path, so we need a different check. - */ -static inline bool skb_needs_check(struct sk_buff *skb, bool tx_path) -{ - if (tx_path) - return skb->ip_summed != CHECKSUM_PARTIAL && - skb->ip_summed != CHECKSUM_UNNECESSARY; - - return skb->ip_summed == CHECKSUM_NONE; -} - -/** - * __skb_gso_segment - Perform segmentation on skb. - * @skb: buffer to segment - * @features: features for the output path (see dev->features) - * @tx_path: whether it is called in TX path - * - * This function segments the given skb and returns a list of segments. - * - * It may return NULL if the skb requires no segmentation. This is - * only possible when GSO is used for verifying header integrity. - * - * Segmentation preserves SKB_GSO_CB_OFFSET bytes of previous skb cb. - */ -struct sk_buff *__skb_gso_segment(struct sk_buff *skb, - netdev_features_t features, bool tx_path) -{ - struct sk_buff *segs; - - if (unlikely(skb_needs_check(skb, tx_path))) { - int err; - - /* We're going to init ->check field in TCP or UDP header */ - err = skb_cow_head(skb, 0); - if (err < 0) - return ERR_PTR(err); - } - - /* Only report GSO partial support if it will enable us to - * support segmentation on this frame without needing additional - * work. - */ - if (features & NETIF_F_GSO_PARTIAL) { - netdev_features_t partial_features = NETIF_F_GSO_ROBUST; - struct net_device *dev = skb->dev; - - partial_features |= dev->features & dev->gso_partial_features; - if (!skb_gso_ok(skb, features | partial_features)) - features &= ~NETIF_F_GSO_PARTIAL; - } - - BUILD_BUG_ON(SKB_GSO_CB_OFFSET + - sizeof(*SKB_GSO_CB(skb)) > sizeof(skb->cb)); - - SKB_GSO_CB(skb)->mac_offset = skb_headroom(skb); - SKB_GSO_CB(skb)->encap_level = 0; - - skb_reset_mac_header(skb); - skb_reset_mac_len(skb); - - segs = skb_mac_gso_segment(skb, features); - - if (segs != skb && unlikely(skb_needs_check(skb, tx_path) && !IS_ERR(segs))) - skb_warn_bad_offload(skb); - - return segs; -} -EXPORT_SYMBOL(__skb_gso_segment);
/* Take action when hardware reception checksum errors are detected. */ #ifdef CONFIG_BUG diff --git a/net/core/gro.c b/net/core/gro.c index 2d84165cb4f1d..2f1b6524bddc5 100644 --- a/net/core/gro.c +++ b/net/core/gro.c @@ -10,7 +10,7 @@ #define GRO_MAX_HEAD (MAX_HEADER + 128)
static DEFINE_SPINLOCK(offload_lock); -static struct list_head offload_base __read_mostly = LIST_HEAD_INIT(offload_base); +struct list_head offload_base __read_mostly = LIST_HEAD_INIT(offload_base); /* Maximum number of GRO_NORMAL skbs to batch up for list-RX */ int gro_normal_batch __read_mostly = 8;
@@ -92,63 +92,6 @@ void dev_remove_offload(struct packet_offload *po) } EXPORT_SYMBOL(dev_remove_offload);
-/** - * skb_eth_gso_segment - segmentation handler for ethernet protocols. - * @skb: buffer to segment - * @features: features for the output path (see dev->features) - * @type: Ethernet Protocol ID - */ -struct sk_buff *skb_eth_gso_segment(struct sk_buff *skb, - netdev_features_t features, __be16 type) -{ - struct sk_buff *segs = ERR_PTR(-EPROTONOSUPPORT); - struct packet_offload *ptype; - - rcu_read_lock(); - list_for_each_entry_rcu(ptype, &offload_base, list) { - if (ptype->type == type && ptype->callbacks.gso_segment) { - segs = ptype->callbacks.gso_segment(skb, features); - break; - } - } - rcu_read_unlock(); - - return segs; -} -EXPORT_SYMBOL(skb_eth_gso_segment); - -/** - * skb_mac_gso_segment - mac layer segmentation handler. - * @skb: buffer to segment - * @features: features for the output path (see dev->features) - */ -struct sk_buff *skb_mac_gso_segment(struct sk_buff *skb, - netdev_features_t features) -{ - struct sk_buff *segs = ERR_PTR(-EPROTONOSUPPORT); - struct packet_offload *ptype; - int vlan_depth = skb->mac_len; - __be16 type = skb_network_protocol(skb, &vlan_depth); - - if (unlikely(!type)) - return ERR_PTR(-EINVAL); - - __skb_pull(skb, vlan_depth); - - rcu_read_lock(); - list_for_each_entry_rcu(ptype, &offload_base, list) { - if (ptype->type == type && ptype->callbacks.gso_segment) { - segs = ptype->callbacks.gso_segment(skb, features); - break; - } - } - rcu_read_unlock(); - - __skb_push(skb, skb->data - skb_mac_header(skb)); - - return segs; -} -EXPORT_SYMBOL(skb_mac_gso_segment);
int skb_gro_receive(struct sk_buff *p, struct sk_buff *skb) { diff --git a/net/core/gso.c b/net/core/gso.c new file mode 100644 index 0000000000000..9e1803bfc9c6c --- /dev/null +++ b/net/core/gso.c @@ -0,0 +1,273 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +#include <linux/skbuff.h> +#include <linux/sctp.h> +#include <net/gso.h> +#include <net/gro.h> + +/** + * skb_eth_gso_segment - segmentation handler for ethernet protocols. + * @skb: buffer to segment + * @features: features for the output path (see dev->features) + * @type: Ethernet Protocol ID + */ +struct sk_buff *skb_eth_gso_segment(struct sk_buff *skb, + netdev_features_t features, __be16 type) +{ + struct sk_buff *segs = ERR_PTR(-EPROTONOSUPPORT); + struct packet_offload *ptype; + + rcu_read_lock(); + list_for_each_entry_rcu(ptype, &offload_base, list) { + if (ptype->type == type && ptype->callbacks.gso_segment) { + segs = ptype->callbacks.gso_segment(skb, features); + break; + } + } + rcu_read_unlock(); + + return segs; +} +EXPORT_SYMBOL(skb_eth_gso_segment); + +/** + * skb_mac_gso_segment - mac layer segmentation handler. + * @skb: buffer to segment + * @features: features for the output path (see dev->features) + */ +struct sk_buff *skb_mac_gso_segment(struct sk_buff *skb, + netdev_features_t features) +{ + struct sk_buff *segs = ERR_PTR(-EPROTONOSUPPORT); + struct packet_offload *ptype; + int vlan_depth = skb->mac_len; + __be16 type = skb_network_protocol(skb, &vlan_depth); + + if (unlikely(!type)) + return ERR_PTR(-EINVAL); + + __skb_pull(skb, vlan_depth); + + rcu_read_lock(); + list_for_each_entry_rcu(ptype, &offload_base, list) { + if (ptype->type == type && ptype->callbacks.gso_segment) { + segs = ptype->callbacks.gso_segment(skb, features); + break; + } + } + rcu_read_unlock(); + + __skb_push(skb, skb->data - skb_mac_header(skb)); + + return segs; +} +EXPORT_SYMBOL(skb_mac_gso_segment); +/* openvswitch calls this on rx path, so we need a different check. + */ +static bool skb_needs_check(const struct sk_buff *skb, bool tx_path) +{ + if (tx_path) + return skb->ip_summed != CHECKSUM_PARTIAL && + skb->ip_summed != CHECKSUM_UNNECESSARY; + + return skb->ip_summed == CHECKSUM_NONE; +} + +/** + * __skb_gso_segment - Perform segmentation on skb. + * @skb: buffer to segment + * @features: features for the output path (see dev->features) + * @tx_path: whether it is called in TX path + * + * This function segments the given skb and returns a list of segments. + * + * It may return NULL if the skb requires no segmentation. This is + * only possible when GSO is used for verifying header integrity. + * + * Segmentation preserves SKB_GSO_CB_OFFSET bytes of previous skb cb. + */ +struct sk_buff *__skb_gso_segment(struct sk_buff *skb, + netdev_features_t features, bool tx_path) +{ + struct sk_buff *segs; + + if (unlikely(skb_needs_check(skb, tx_path))) { + int err; + + /* We're going to init ->check field in TCP or UDP header */ + err = skb_cow_head(skb, 0); + if (err < 0) + return ERR_PTR(err); + } + + /* Only report GSO partial support if it will enable us to + * support segmentation on this frame without needing additional + * work. 
+ */ + if (features & NETIF_F_GSO_PARTIAL) { + netdev_features_t partial_features = NETIF_F_GSO_ROBUST; + struct net_device *dev = skb->dev; + + partial_features |= dev->features & dev->gso_partial_features; + if (!skb_gso_ok(skb, features | partial_features)) + features &= ~NETIF_F_GSO_PARTIAL; + } + + BUILD_BUG_ON(SKB_GSO_CB_OFFSET + + sizeof(*SKB_GSO_CB(skb)) > sizeof(skb->cb)); + + SKB_GSO_CB(skb)->mac_offset = skb_headroom(skb); + SKB_GSO_CB(skb)->encap_level = 0; + + skb_reset_mac_header(skb); + skb_reset_mac_len(skb); + + segs = skb_mac_gso_segment(skb, features); + + if (segs != skb && unlikely(skb_needs_check(skb, tx_path) && !IS_ERR(segs))) + skb_warn_bad_offload(skb); + + return segs; +} +EXPORT_SYMBOL(__skb_gso_segment); + +/** + * skb_gso_transport_seglen - Return length of individual segments of a gso packet + * + * @skb: GSO skb + * + * skb_gso_transport_seglen is used to determine the real size of the + * individual segments, including Layer4 headers (TCP/UDP). + * + * The MAC/L2 or network (IP, IPv6) headers are not accounted for. + */ +static unsigned int skb_gso_transport_seglen(const struct sk_buff *skb) +{ + const struct skb_shared_info *shinfo = skb_shinfo(skb); + unsigned int thlen = 0; + + if (skb->encapsulation) { + thlen = skb_inner_transport_header(skb) - + skb_transport_header(skb); + + if (likely(shinfo->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6))) + thlen += inner_tcp_hdrlen(skb); + } else if (likely(shinfo->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6))) { + thlen = tcp_hdrlen(skb); + } else if (unlikely(skb_is_gso_sctp(skb))) { + thlen = sizeof(struct sctphdr); + } else if (shinfo->gso_type & SKB_GSO_UDP_L4) { + thlen = sizeof(struct udphdr); + } + /* UFO sets gso_size to the size of the fragmentation + * payload, i.e. the size of the L4 (UDP) header is already + * accounted for. + */ + return thlen + shinfo->gso_size; +} + +/** + * skb_gso_network_seglen - Return length of individual segments of a gso packet + * + * @skb: GSO skb + * + * skb_gso_network_seglen is used to determine the real size of the + * individual segments, including Layer3 (IP, IPv6) and L4 headers (TCP/UDP). + * + * The MAC/L2 header is not accounted for. + */ +static unsigned int skb_gso_network_seglen(const struct sk_buff *skb) +{ + unsigned int hdr_len = skb_transport_header(skb) - + skb_network_header(skb); + + return hdr_len + skb_gso_transport_seglen(skb); +} + +/** + * skb_gso_mac_seglen - Return length of individual segments of a gso packet + * + * @skb: GSO skb + * + * skb_gso_mac_seglen is used to determine the real size of the + * individual segments, including MAC/L2, Layer3 (IP, IPv6) and L4 + * headers (TCP/UDP). + */ +static unsigned int skb_gso_mac_seglen(const struct sk_buff *skb) +{ + unsigned int hdr_len = skb_transport_header(skb) - skb_mac_header(skb); + + return hdr_len + skb_gso_transport_seglen(skb); +} + +/** + * skb_gso_size_check - check the skb size, considering GSO_BY_FRAGS + * + * There are a couple of instances where we have a GSO skb, and we + * want to determine what size it would be after it is segmented. + * + * We might want to check: + * - L3+L4+payload size (e.g. IP forwarding) + * - L2+L3+L4+payload size (e.g. sanity check before passing to driver) + * + * This is a helper to do that correctly considering GSO_BY_FRAGS. + * + * @skb: GSO skb + * + * @seg_len: The segmented length (from skb_gso_*_seglen). In the + * GSO_BY_FRAGS case this will be [header sizes + GSO_BY_FRAGS]. + * + * @max_len: The maximum permissible length. 
+ * + * Returns true if the segmented length <= max length. + */ +static inline bool skb_gso_size_check(const struct sk_buff *skb, + unsigned int seg_len, + unsigned int max_len) { + const struct skb_shared_info *shinfo = skb_shinfo(skb); + const struct sk_buff *iter; + + if (shinfo->gso_size != GSO_BY_FRAGS) + return seg_len <= max_len; + + /* Undo this so we can re-use header sizes */ + seg_len -= GSO_BY_FRAGS; + + skb_walk_frags(skb, iter) { + if (seg_len + skb_headlen(iter) > max_len) + return false; + } + + return true; +} + +/** + * skb_gso_validate_network_len - Will a split GSO skb fit into a given MTU? + * + * @skb: GSO skb + * @mtu: MTU to validate against + * + * skb_gso_validate_network_len validates if a given skb will fit a + * wanted MTU once split. It considers L3 headers, L4 headers, and the + * payload. + */ +bool skb_gso_validate_network_len(const struct sk_buff *skb, unsigned int mtu) +{ + return skb_gso_size_check(skb, skb_gso_network_seglen(skb), mtu); +} +EXPORT_SYMBOL_GPL(skb_gso_validate_network_len); + +/** + * skb_gso_validate_mac_len - Will a split GSO skb fit in a given length? + * + * @skb: GSO skb + * @len: length to validate against + * + * skb_gso_validate_mac_len validates if a given skb will fit a wanted + * length once split, including L2, L3 and L4 headers and the payload. + */ +bool skb_gso_validate_mac_len(const struct sk_buff *skb, unsigned int len) +{ + return skb_gso_size_check(skb, skb_gso_mac_seglen(skb), len); +} +EXPORT_SYMBOL_GPL(skb_gso_validate_mac_len); + diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 1b6a1d99869dc..593ec18e3f007 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -67,6 +67,7 @@ #include <net/dst.h> #include <net/sock.h> #include <net/checksum.h> +#include <net/gso.h> #include <net/ip6_checksum.h> #include <net/xfrm.h> #include <net/mpls.h> @@ -5789,147 +5790,6 @@ void skb_scrub_packet(struct sk_buff *skb, bool xnet) } EXPORT_SYMBOL_GPL(skb_scrub_packet);
-/** - * skb_gso_transport_seglen - Return length of individual segments of a gso packet - * - * @skb: GSO skb - * - * skb_gso_transport_seglen is used to determine the real size of the - * individual segments, including Layer4 headers (TCP/UDP). - * - * The MAC/L2 or network (IP, IPv6) headers are not accounted for. - */ -static unsigned int skb_gso_transport_seglen(const struct sk_buff *skb) -{ - const struct skb_shared_info *shinfo = skb_shinfo(skb); - unsigned int thlen = 0; - - if (skb->encapsulation) { - thlen = skb_inner_transport_header(skb) - - skb_transport_header(skb); - - if (likely(shinfo->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6))) - thlen += inner_tcp_hdrlen(skb); - } else if (likely(shinfo->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6))) { - thlen = tcp_hdrlen(skb); - } else if (unlikely(skb_is_gso_sctp(skb))) { - thlen = sizeof(struct sctphdr); - } else if (shinfo->gso_type & SKB_GSO_UDP_L4) { - thlen = sizeof(struct udphdr); - } - /* UFO sets gso_size to the size of the fragmentation - * payload, i.e. the size of the L4 (UDP) header is already - * accounted for. - */ - return thlen + shinfo->gso_size; -} - -/** - * skb_gso_network_seglen - Return length of individual segments of a gso packet - * - * @skb: GSO skb - * - * skb_gso_network_seglen is used to determine the real size of the - * individual segments, including Layer3 (IP, IPv6) and L4 headers (TCP/UDP). - * - * The MAC/L2 header is not accounted for. - */ -static unsigned int skb_gso_network_seglen(const struct sk_buff *skb) -{ - unsigned int hdr_len = skb_transport_header(skb) - - skb_network_header(skb); - - return hdr_len + skb_gso_transport_seglen(skb); -} - -/** - * skb_gso_mac_seglen - Return length of individual segments of a gso packet - * - * @skb: GSO skb - * - * skb_gso_mac_seglen is used to determine the real size of the - * individual segments, including MAC/L2, Layer3 (IP, IPv6) and L4 - * headers (TCP/UDP). - */ -static unsigned int skb_gso_mac_seglen(const struct sk_buff *skb) -{ - unsigned int hdr_len = skb_transport_header(skb) - skb_mac_header(skb); - - return hdr_len + skb_gso_transport_seglen(skb); -} - -/** - * skb_gso_size_check - check the skb size, considering GSO_BY_FRAGS - * - * There are a couple of instances where we have a GSO skb, and we - * want to determine what size it would be after it is segmented. - * - * We might want to check: - * - L3+L4+payload size (e.g. IP forwarding) - * - L2+L3+L4+payload size (e.g. sanity check before passing to driver) - * - * This is a helper to do that correctly considering GSO_BY_FRAGS. - * - * @skb: GSO skb - * - * @seg_len: The segmented length (from skb_gso_*_seglen). In the - * GSO_BY_FRAGS case this will be [header sizes + GSO_BY_FRAGS]. - * - * @max_len: The maximum permissible length. - * - * Returns true if the segmented length <= max length. - */ -static inline bool skb_gso_size_check(const struct sk_buff *skb, - unsigned int seg_len, - unsigned int max_len) { - const struct skb_shared_info *shinfo = skb_shinfo(skb); - const struct sk_buff *iter; - - if (shinfo->gso_size != GSO_BY_FRAGS) - return seg_len <= max_len; - - /* Undo this so we can re-use header sizes */ - seg_len -= GSO_BY_FRAGS; - - skb_walk_frags(skb, iter) { - if (seg_len + skb_headlen(iter) > max_len) - return false; - } - - return true; -} - -/** - * skb_gso_validate_network_len - Will a split GSO skb fit into a given MTU? 
- * - * @skb: GSO skb - * @mtu: MTU to validate against - * - * skb_gso_validate_network_len validates if a given skb will fit a - * wanted MTU once split. It considers L3 headers, L4 headers, and the - * payload. - */ -bool skb_gso_validate_network_len(const struct sk_buff *skb, unsigned int mtu) -{ - return skb_gso_size_check(skb, skb_gso_network_seglen(skb), mtu); -} -EXPORT_SYMBOL_GPL(skb_gso_validate_network_len); - -/** - * skb_gso_validate_mac_len - Will a split GSO skb fit in a given length? - * - * @skb: GSO skb - * @len: length to validate against - * - * skb_gso_validate_mac_len validates if a given skb will fit a wanted - * length once split, including L2, L3 and L4 headers and the payload. - */ -bool skb_gso_validate_mac_len(const struct sk_buff *skb, unsigned int len) -{ - return skb_gso_size_check(skb, skb_gso_mac_seglen(skb), len); -} -EXPORT_SYMBOL_GPL(skb_gso_validate_mac_len); - static struct sk_buff *skb_reorder_vlan_header(struct sk_buff *skb) { int mac_len, meta_len; diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c index 4a76ebf793b85..10ebe39dcc873 100644 --- a/net/ipv4/af_inet.c +++ b/net/ipv4/af_inet.c @@ -100,6 +100,7 @@ #include <net/ip_fib.h> #include <net/inet_connection_sock.h> #include <net/gro.h> +#include <net/gso.h> #include <net/tcp.h> #include <net/udp.h> #include <net/udplite.h> diff --git a/net/ipv4/esp4_offload.c b/net/ipv4/esp4_offload.c index ee848be59e65a..10e96ed6c9e39 100644 --- a/net/ipv4/esp4_offload.c +++ b/net/ipv4/esp4_offload.c @@ -17,6 +17,7 @@ #include <linux/err.h> #include <linux/module.h> #include <net/gro.h> +#include <net/gso.h> #include <net/ip.h> #include <net/xfrm.h> #include <net/esp.h> diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c index 2b9cb5398335b..311e70bfce407 100644 --- a/net/ipv4/gre_offload.c +++ b/net/ipv4/gre_offload.c @@ -11,6 +11,7 @@ #include <net/protocol.h> #include <net/gre.h> #include <net/gro.h> +#include <net/gso.h>
static struct sk_buff *gre_gso_segment(struct sk_buff *skb, netdev_features_t features) diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c index a1bead441026e..d95e40a47098a 100644 --- a/net/ipv4/ip_output.c +++ b/net/ipv4/ip_output.c @@ -73,6 +73,7 @@ #include <net/arp.h> #include <net/icmp.h> #include <net/checksum.h> +#include <net/gso.h> #include <net/inetpeer.h> #include <net/inet_ecn.h> #include <net/lwtunnel.h> diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c index 4851211aa60d6..9c51ee9ccd4c0 100644 --- a/net/ipv4/tcp_offload.c +++ b/net/ipv4/tcp_offload.c @@ -9,6 +9,7 @@ #include <linux/indirect_call_wrapper.h> #include <linux/skbuff.h> #include <net/gro.h> +#include <net/gso.h> #include <net/tcp.h> #include <net/protocol.h>
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c index 9482def1f3103..c6b790001aa77 100644 --- a/net/ipv4/udp.c +++ b/net/ipv4/udp.c @@ -103,6 +103,7 @@ #include <net/ip_tunnels.h> #include <net/route.h> #include <net/checksum.h> +#include <net/gso.h> #include <net/xfrm.h> #include <trace/events/udp.h> #include <linux/static_key.h> diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c index 4a61832e7f69b..f402946da344b 100644 --- a/net/ipv4/udp_offload.c +++ b/net/ipv4/udp_offload.c @@ -8,6 +8,7 @@
#include <linux/skbuff.h> #include <net/gro.h> +#include <net/gso.h> #include <net/udp.h> #include <net/protocol.h> #include <net/inet_common.h> diff --git a/net/ipv6/esp6_offload.c b/net/ipv6/esp6_offload.c index 7723402689973..a189e08370a5e 100644 --- a/net/ipv6/esp6_offload.c +++ b/net/ipv6/esp6_offload.c @@ -17,6 +17,7 @@ #include <linux/err.h> #include <linux/module.h> #include <net/gro.h> +#include <net/gso.h> #include <net/ip.h> #include <net/xfrm.h> #include <net/esp.h> diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c index 00dc2e3b01845..d6314287338da 100644 --- a/net/ipv6/ip6_offload.c +++ b/net/ipv6/ip6_offload.c @@ -16,6 +16,7 @@ #include <net/tcp.h> #include <net/udp.h> #include <net/gro.h> +#include <net/gso.h>
#include "ip6_offload.h"
diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c index 9554cf46ed888..4a27fab1d09a3 100644 --- a/net/ipv6/ip6_output.c +++ b/net/ipv6/ip6_output.c @@ -42,6 +42,7 @@ #include <net/sock.h> #include <net/snmp.h>
+#include <net/gso.h> #include <net/ipv6.h> #include <net/ndisc.h> #include <net/protocol.h> diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c index e0e10f6bcdc18..09fa7a42cb937 100644 --- a/net/ipv6/udp_offload.c +++ b/net/ipv6/udp_offload.c @@ -14,6 +14,7 @@ #include <net/ip6_checksum.h> #include "ip6_offload.h" #include <net/gro.h> +#include <net/gso.h>
static struct sk_buff *udp6_ufo_fragment(struct sk_buff *skb, netdev_features_t features) diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c index 13b522dab0a3d..39ca4a8fe7b32 100644 --- a/net/mac80211/tx.c +++ b/net/mac80211/tx.c @@ -26,6 +26,7 @@ #include <net/codel_impl.h> #include <asm/unaligned.h> #include <net/fq_impl.h> +#include <net/gso.h>
#include "ieee80211_i.h" #include "driver-ops.h" diff --git a/net/mpls/af_mpls.c b/net/mpls/af_mpls.c index dc5165d3eec4e..bf6e81d562631 100644 --- a/net/mpls/af_mpls.c +++ b/net/mpls/af_mpls.c @@ -12,6 +12,7 @@ #include <linux/nospec.h> #include <linux/vmalloc.h> #include <linux/percpu.h> +#include <net/gso.h> #include <net/ip.h> #include <net/dst.h> #include <net/sock.h> diff --git a/net/mpls/mpls_gso.c b/net/mpls/mpls_gso.c index 1482259de9b5d..533d082f0701e 100644 --- a/net/mpls/mpls_gso.c +++ b/net/mpls/mpls_gso.c @@ -14,6 +14,7 @@ #include <linux/netdev_features.h> #include <linux/netdevice.h> #include <linux/skbuff.h> +#include <net/gso.h> #include <net/mpls.h>
static struct sk_buff *mpls_gso_segment(struct sk_buff *skb, diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c index 3bbaf9c7ea46a..7eba00f6c6b6a 100644 --- a/net/netfilter/nf_flow_table_ip.c +++ b/net/netfilter/nf_flow_table_ip.c @@ -8,6 +8,7 @@ #include <linux/ipv6.h> #include <linux/netdevice.h> #include <linux/if_ether.h> +#include <net/gso.h> #include <net/ip.h> #include <net/ipv6.h> #include <net/ip6_route.h> diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c index e311462f6d98d..556bc902af00f 100644 --- a/net/netfilter/nfnetlink_queue.c +++ b/net/netfilter/nfnetlink_queue.c @@ -30,6 +30,7 @@ #include <linux/netfilter/nf_conntrack_common.h> #include <linux/list.h> #include <linux/cgroup-defs.h> +#include <net/gso.h> #include <net/sock.h> #include <net/tcp_states.h> #include <net/netfilter/nf_queue.h> diff --git a/net/nsh/nsh.c b/net/nsh/nsh.c index 0f23e5e8e03eb..f4a38bd6a7e04 100644 --- a/net/nsh/nsh.c +++ b/net/nsh/nsh.c @@ -8,6 +8,7 @@ #include <linux/module.h> #include <linux/netdevice.h> #include <linux/skbuff.h> +#include <net/gso.h> #include <net/nsh.h> #include <net/tun_proto.h>
diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c index a8cf9a88758ef..8074ea00d577e 100644 --- a/net/openvswitch/actions.c +++ b/net/openvswitch/actions.c @@ -17,6 +17,7 @@ #include <linux/if_vlan.h>
#include <net/dst.h> +#include <net/gso.h> #include <net/ip.h> #include <net/ipv6.h> #include <net/ip6_fib.h> diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c index 58f530f60172a..a6d2a0b1aa21e 100644 --- a/net/openvswitch/datapath.c +++ b/net/openvswitch/datapath.c @@ -35,6 +35,7 @@ #include <linux/rculist.h> #include <linux/dmi.h> #include <net/genetlink.h> +#include <net/gso.h> #include <net/net_namespace.h> #include <net/netns/generic.h> #include <net/pkt_cls.h> diff --git a/net/sched/act_police.c b/net/sched/act_police.c index 2e9dce03d1ecc..f3121c5a85e9f 100644 --- a/net/sched/act_police.c +++ b/net/sched/act_police.c @@ -16,6 +16,7 @@ #include <linux/init.h> #include <linux/slab.h> #include <net/act_api.h> +#include <net/gso.h> #include <net/netlink.h> #include <net/pkt_cls.h> #include <net/tc_act/tc_police.h> diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c index 891e007d5c0bf..9cff99558694d 100644 --- a/net/sched/sch_cake.c +++ b/net/sched/sch_cake.c @@ -65,6 +65,7 @@ #include <linux/reciprocal_div.h> #include <net/netlink.h> #include <linux/if_vlan.h> +#include <net/gso.h> #include <net/pkt_sched.h> #include <net/pkt_cls.h> #include <net/tcp.h> diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c index b93ec2a3454eb..38d9aa0cd30e7 100644 --- a/net/sched/sch_netem.c +++ b/net/sched/sch_netem.c @@ -21,6 +21,7 @@ #include <linux/reciprocal_div.h> #include <linux/rbtree.h>
+#include <net/gso.h> #include <net/netlink.h> #include <net/pkt_sched.h> #include <net/inet_ecn.h> diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c index 4caf80ddc6721..f681af138179c 100644 --- a/net/sched/sch_taprio.c +++ b/net/sched/sch_taprio.c @@ -20,6 +20,7 @@ #include <linux/spinlock.h> #include <linux/rcupdate.h> #include <linux/time.h> +#include <net/gso.h> #include <net/netlink.h> #include <net/pkt_sched.h> #include <net/pkt_cls.h> diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c index 277ad11f4d613..17d2d00ddb182 100644 --- a/net/sched/sch_tbf.c +++ b/net/sched/sch_tbf.c @@ -13,6 +13,7 @@ #include <linux/string.h> #include <linux/errno.h> #include <linux/skbuff.h> +#include <net/gso.h> #include <net/netlink.h> #include <net/sch_generic.h> #include <net/pkt_cls.h> diff --git a/net/sctp/offload.c b/net/sctp/offload.c index eb874e3c399a5..502095173d885 100644 --- a/net/sctp/offload.c +++ b/net/sctp/offload.c @@ -22,6 +22,7 @@ #include <net/sctp/sctp.h> #include <net/sctp/checksum.h> #include <net/protocol.h> +#include <net/gso.h>
static __le32 sctp_gso_make_checksum(struct sk_buff *skb) { diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c index 408f5e55744ed..533697e2488f2 100644 --- a/net/xfrm/xfrm_device.c +++ b/net/xfrm/xfrm_device.c @@ -15,6 +15,7 @@ #include <linux/slab.h> #include <linux/spinlock.h> #include <net/dst.h> +#include <net/gso.h> #include <net/xfrm.h> #include <linux/notifier.h>
diff --git a/net/xfrm/xfrm_interface_core.c b/net/xfrm/xfrm_interface_core.c index 35279c220bd78..a3319965470a7 100644 --- a/net/xfrm/xfrm_interface_core.c +++ b/net/xfrm/xfrm_interface_core.c @@ -33,6 +33,7 @@ #include <linux/uaccess.h> #include <linux/atomic.h>
+#include <net/gso.h> #include <net/icmp.h> #include <net/ip.h> #include <net/ipv6.h> diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c index 369e5de8558ff..662c83beb345e 100644 --- a/net/xfrm/xfrm_output.c +++ b/net/xfrm/xfrm_output.c @@ -13,6 +13,7 @@ #include <linux/slab.h> #include <linux/spinlock.h> #include <net/dst.h> +#include <net/gso.h> #include <net/icmp.h> #include <net/inet_ecn.h> #include <net/xfrm.h>
From: Richard Gobert richardbgobert@gmail.com
[ Upstream commit 7938cd15436873f649f31cb867bac2d88ca564d0 ]
This patch fixes a misuse of IP{6}CB(skb) in GRO when calling `udp6_lib_lookup2` while handling UDP tunnels: `udp6_lib_lookup2` fetches the device from the CB. The fix changes it to fetch the device from `skb->dev` instead. The l3mdev case requires special attention, since it involves both a master and a slave device.
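For readers less familiar with the control-block aliasing behind this: NAPI_GRO_CB() and IP{6}CB() both overlay skb->cb, so while a packet is being aggregated the inet{6}_iif()/inet{6}_sdif() helpers parse GRO state rather than IP control data. A minimal sketch, under that assumption, of the calling convention the new helpers expect (the function name is hypothetical; the real call sites are in the diff below):

#include <linux/ip.h>
#include <net/gro.h>
#include <net/udp.h>

/* Hypothetical GRO-path UDP tunnel lookup: the caller holds the RCU read
 * lock and skb->dev is valid, so the interface indexes come from the
 * device (via the new helper) rather than from IPCB, which GRO reuses.
 */
static struct sock *example_udp4_gro_lookup(struct sk_buff *skb,
                                            __be16 sport, __be16 dport)
{
        const struct iphdr *iph = skb_gro_network_header(skb);
        struct net *net = dev_net(skb->dev);
        int iif, sdif;

        inet_get_iif_sdif(skb, &iif, &sdif);

        return __udp4_lib_lookup(net, iph->saddr, sport, iph->daddr, dport,
                                 iif, sdif, net->ipv4.udp_table, NULL);
}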
Fixes: a6024562ffd7 ("udp: Add GRO functions to UDP socket") Reported-by: Gal Pressman gal@nvidia.com Signed-off-by: Richard Gobert richardbgobert@gmail.com Reviewed-by: David Ahern dsahern@kernel.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- include/net/gro.h | 43 ++++++++++++++++++++++++++++++++++++++++++ net/ipv4/udp.c | 8 ++++++-- net/ipv4/udp_offload.c | 7 +++++-- net/ipv6/udp.c | 8 ++++++-- net/ipv6/udp_offload.c | 7 +++++-- 5 files changed, 65 insertions(+), 8 deletions(-)
diff --git a/include/net/gro.h b/include/net/gro.h index 972ff42d3a829..d3d318e7d917b 100644 --- a/include/net/gro.h +++ b/include/net/gro.h @@ -446,6 +446,49 @@ static inline void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb, gro_normal_list(napi); }
+/* This function is the alternative of 'inet_iif' and 'inet_sdif' + * functions in case we can not rely on fields of IPCB. + * + * The caller must verify skb_valid_dst(skb) is false and skb->dev is initialized. + * The caller must hold the RCU read lock. + */ +static inline void inet_get_iif_sdif(const struct sk_buff *skb, int *iif, int *sdif) +{ + *iif = inet_iif(skb) ?: skb->dev->ifindex; + *sdif = 0; + +#if IS_ENABLED(CONFIG_NET_L3_MASTER_DEV) + if (netif_is_l3_slave(skb->dev)) { + struct net_device *master = netdev_master_upper_dev_get_rcu(skb->dev); + + *sdif = *iif; + *iif = master ? master->ifindex : 0; + } +#endif +} + +/* This function is the alternative of 'inet6_iif' and 'inet6_sdif' + * functions in case we can not rely on fields of IP6CB. + * + * The caller must verify skb_valid_dst(skb) is false and skb->dev is initialized. + * The caller must hold the RCU read lock. + */ +static inline void inet6_get_iif_sdif(const struct sk_buff *skb, int *iif, int *sdif) +{ + /* using skb->dev->ifindex because skb_dst(skb) is not initialized */ + *iif = skb->dev->ifindex; + *sdif = 0; + +#if IS_ENABLED(CONFIG_NET_L3_MASTER_DEV) + if (netif_is_l3_slave(skb->dev)) { + struct net_device *master = netdev_master_upper_dev_get_rcu(skb->dev); + + *sdif = *iif; + *iif = master ? master->ifindex : 0; + } +#endif +} + extern struct list_head offload_base;
#endif /* _NET_IPV6_GRO_H */ diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c index c6b790001aa77..6d327d6d978c5 100644 --- a/net/ipv4/udp.c +++ b/net/ipv4/udp.c @@ -114,6 +114,7 @@ #include <net/sock_reuseport.h> #include <net/addrconf.h> #include <net/udp_tunnel.h> +#include <net/gro.h> #if IS_ENABLED(CONFIG_IPV6) #include <net/ipv6_stubs.h> #endif @@ -555,10 +556,13 @@ struct sock *udp4_lib_lookup_skb(const struct sk_buff *skb, { const struct iphdr *iph = ip_hdr(skb); struct net *net = dev_net(skb->dev); + int iif, sdif; + + inet_get_iif_sdif(skb, &iif, &sdif);
return __udp4_lib_lookup(net, iph->saddr, sport, - iph->daddr, dport, inet_iif(skb), - inet_sdif(skb), net->ipv4.udp_table, NULL); + iph->daddr, dport, iif, + sdif, net->ipv4.udp_table, NULL); }
/* Must be called under rcu_read_lock(). diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c index f402946da344b..0f46b3c2e4ac5 100644 --- a/net/ipv4/udp_offload.c +++ b/net/ipv4/udp_offload.c @@ -609,10 +609,13 @@ static struct sock *udp4_gro_lookup_skb(struct sk_buff *skb, __be16 sport, { const struct iphdr *iph = skb_gro_network_header(skb); struct net *net = dev_net(skb->dev); + int iif, sdif; + + inet_get_iif_sdif(skb, &iif, &sdif);
return __udp4_lib_lookup(net, iph->saddr, sport, - iph->daddr, dport, inet_iif(skb), - inet_sdif(skb), net->ipv4.udp_table, NULL); + iph->daddr, dport, iif, + sdif, net->ipv4.udp_table, NULL); }
INDIRECT_CALLABLE_SCOPE diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c index d594a0425749b..27292d44df654 100644 --- a/net/ipv6/udp.c +++ b/net/ipv6/udp.c @@ -51,6 +51,7 @@ #include <net/inet6_hashtables.h> #include <net/busy_poll.h> #include <net/sock_reuseport.h> +#include <net/gro.h>
#include <linux/proc_fs.h> #include <linux/seq_file.h> @@ -300,10 +301,13 @@ struct sock *udp6_lib_lookup_skb(const struct sk_buff *skb, { const struct ipv6hdr *iph = ipv6_hdr(skb); struct net *net = dev_net(skb->dev); + int iif, sdif; + + inet6_get_iif_sdif(skb, &iif, &sdif);
return __udp6_lib_lookup(net, &iph->saddr, sport, - &iph->daddr, dport, inet6_iif(skb), - inet6_sdif(skb), net->ipv4.udp_table, NULL); + &iph->daddr, dport, iif, + sdif, net->ipv4.udp_table, NULL); }
/* Must be called under rcu_read_lock(). diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c index 09fa7a42cb937..6b95ba241ebe2 100644 --- a/net/ipv6/udp_offload.c +++ b/net/ipv6/udp_offload.c @@ -118,10 +118,13 @@ static struct sock *udp6_gro_lookup_skb(struct sk_buff *skb, __be16 sport, { const struct ipv6hdr *iph = skb_gro_network_header(skb); struct net *net = dev_net(skb->dev); + int iif, sdif; + + inet6_get_iif_sdif(skb, &iif, &sdif);
return __udp6_lib_lookup(net, &iph->saddr, sport, - &iph->daddr, dport, inet6_iif(skb), - inet6_sdif(skb), net->ipv4.udp_table, NULL); + &iph->daddr, dport, iif, + sdif, net->ipv4.udp_table, NULL); }
INDIRECT_CALLABLE_SCOPE
From: Eric Dumazet edumazet@google.com
[ Upstream commit fe11fdcb4207907d80cda2e73777465d68131e66 ]
sk_getsockopt() runs locklessly. This means sk->sk_reserved_mem can be read while other threads are changing its value.
Add missing annotations where they are needed.
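The same pattern is applied by the remaining sk_getsockopt() patches in this series: the writer (normally running under lock_sock()) and the lockless reader are both annotated so the compiler cannot tear or refetch the access. A minimal sketch of the idea, using illustrative helper names rather than the actual call sites:

#include <net/sock.h>

/* Writer side: update the field with WRITE_ONCE() so concurrent lockless
 * readers observe either the old or the new value, never a torn store.
 */
static void example_charge_reserved_mem(struct sock *sk, u32 bytes)
{
        WRITE_ONCE(sk->sk_reserved_mem, sk->sk_reserved_mem + bytes);
}

/* Reader side, e.g. a getsockopt() handler running without the socket
 * lock: pair the read with READ_ONCE().
 */
static u32 example_read_reserved_mem(const struct sock *sk)
{
        return READ_ONCE(sk->sk_reserved_mem);
}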
Fixes: 2bb2f5fb21b0 ("net: add new socket option SO_RESERVE_MEM") Signed-off-by: Eric Dumazet edumazet@google.com Cc: Wei Wang weiwan@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/sock.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/net/core/sock.c b/net/core/sock.c index 4a0edccf86066..7b88290ddc6e7 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -1003,7 +1003,7 @@ static void sock_release_reserved_memory(struct sock *sk, int bytes) bytes = round_down(bytes, PAGE_SIZE);
WARN_ON(bytes > sk->sk_reserved_mem); - sk->sk_reserved_mem -= bytes; + WRITE_ONCE(sk->sk_reserved_mem, sk->sk_reserved_mem - bytes); sk_mem_reclaim(sk); }
@@ -1040,7 +1040,8 @@ static int sock_reserve_memory(struct sock *sk, int bytes) } sk->sk_forward_alloc += pages << PAGE_SHIFT;
- sk->sk_reserved_mem += pages << PAGE_SHIFT; + WRITE_ONCE(sk->sk_reserved_mem, + sk->sk_reserved_mem + (pages << PAGE_SHIFT));
return 0; } @@ -1925,7 +1926,7 @@ int sk_getsockopt(struct sock *sk, int level, int optname, break;
case SO_RESERVE_MEM: - v.val = sk->sk_reserved_mem; + v.val = READ_ONCE(sk->sk_reserved_mem); break;
case SO_TXREHASH:
From: Eric Dumazet edumazet@google.com
[ Upstream commit c76a0328899bbe226f8adeb88b8da9e4167bd316 ]
sk_getsockopt() runs locklessly. This means sk->sk_txrehash can be read while other threads are changing its value.
Other locations were handled in commit cb6cd2cec799 ("tcp: Change SYN ACK retransmit behaviour to account for rehash")
Fixes: 26859240e4ee ("txhash: Add socket option to control TX hash rethink behavior") Signed-off-by: Eric Dumazet edumazet@google.com Cc: Akhmat Karakotov hmukos@yandex-team.ru Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/sock.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/net/core/sock.c b/net/core/sock.c index 7b88290ddc6e7..b25511e7e8103 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -1523,7 +1523,9 @@ int sk_setsockopt(struct sock *sk, int level, int optname, } if ((u8)val == SOCK_TXREHASH_DEFAULT) val = READ_ONCE(sock_net(sk)->core.sysctl_txrehash); - /* Paired with READ_ONCE() in tcp_rtx_synack() */ + /* Paired with READ_ONCE() in tcp_rtx_synack() + * and sk_getsockopt(). + */ WRITE_ONCE(sk->sk_txrehash, (u8)val); break;
@@ -1930,7 +1932,8 @@ int sk_getsockopt(struct sock *sk, int level, int optname, break;
case SO_TXREHASH: - v.val = sk->sk_txrehash; + /* Paired with WRITE_ONCE() in sk_setsockopt() */ + v.val = READ_ONCE(sk->sk_txrehash); break;
default:
From: Eric Dumazet edumazet@google.com
[ Upstream commit ea7f45ef77b39e72244d282e47f6cb1ef4135cd2 ]
sk_getsockopt() runs locklessly. This means sk->sk_max_pacing_rate can be read while other threads are changing its value.
Fixes: 62748f32d501 ("net: introduce SO_MAX_PACING_RATE") Signed-off-by: Eric Dumazet edumazet@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/sock.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/net/core/sock.c b/net/core/sock.c index b25511e7e8103..b87f498072251 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -1428,7 +1428,8 @@ int sk_setsockopt(struct sock *sk, int level, int optname, cmpxchg(&sk->sk_pacing_status, SK_PACING_NONE, SK_PACING_NEEDED); - sk->sk_max_pacing_rate = ulval; + /* Pairs with READ_ONCE() from sk_getsockopt() */ + WRITE_ONCE(sk->sk_max_pacing_rate, ulval); sk->sk_pacing_rate = min(sk->sk_pacing_rate, ulval); break; } @@ -1855,12 +1856,14 @@ int sk_getsockopt(struct sock *sk, int level, int optname, #endif
case SO_MAX_PACING_RATE: + /* The READ_ONCE() pair with the WRITE_ONCE() in sk_setsockopt() */ if (sizeof(v.ulval) != sizeof(v.val) && len >= sizeof(v.ulval)) { lv = sizeof(v.ulval); - v.ulval = sk->sk_max_pacing_rate; + v.ulval = READ_ONCE(sk->sk_max_pacing_rate); } else { /* 32bit version */ - v.val = min_t(unsigned long, sk->sk_max_pacing_rate, ~0U); + v.val = min_t(unsigned long, ~0U, + READ_ONCE(sk->sk_max_pacing_rate)); } break;
From: Eric Dumazet edumazet@google.com
[ Upstream commit e6d12bdb435d23ff6c1890c852d85408a2f496ee ]
In a prior commit, I forgot to change sk_getsockopt() when reading sk->sk_rcvlowat locklessly.
Fixes: eac66402d1c3 ("net: annotate sk->sk_rcvlowat lockless reads") Signed-off-by: Eric Dumazet edumazet@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/sock.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/core/sock.c b/net/core/sock.c index b87f498072251..aed5d09a41c4b 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -1719,7 +1719,7 @@ int sk_getsockopt(struct sock *sk, int level, int optname, break;
case SO_RCVLOWAT: - v.val = sk->sk_rcvlowat; + v.val = READ_ONCE(sk->sk_rcvlowat); break;
case SO_SNDLOWAT:
From: Eric Dumazet edumazet@google.com
[ Upstream commit 74bc084327c643499474ba75df485607da37dd6e ]
In a prior commit, I forgot to change sk_getsockopt() when reading sk->sk_sndbuf locklessly.
Fixes: e292f05e0df7 ("tcp: annotate sk->sk_sndbuf lockless reads") Signed-off-by: Eric Dumazet edumazet@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/sock.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/core/sock.c b/net/core/sock.c index aed5d09a41c4b..c5dfeb6d4fec6 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -1626,7 +1626,7 @@ int sk_getsockopt(struct sock *sk, int level, int optname, break;
case SO_SNDBUF: - v.val = sk->sk_sndbuf; + v.val = READ_ONCE(sk->sk_sndbuf); break;
case SO_RCVBUF:
From: Eric Dumazet edumazet@google.com
[ Upstream commit b4b553253091cafe9ec38994acf42795e073bef5 ]
In a prior commit, I forgot to change sk_getsockopt() when reading sk->sk_rcvbuf locklessly.
Fixes: ebb3b78db7bf ("tcp: annotate sk->sk_rcvbuf lockless reads") Signed-off-by: Eric Dumazet edumazet@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/sock.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/core/sock.c b/net/core/sock.c index c5dfeb6d4fec6..8c69610753ec2 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -1630,7 +1630,7 @@ int sk_getsockopt(struct sock *sk, int level, int optname, break;
case SO_RCVBUF: - v.val = sk->sk_rcvbuf; + v.val = READ_ONCE(sk->sk_rcvbuf); break;
case SO_REUSEADDR:
From: Eric Dumazet edumazet@google.com
[ Upstream commit 3c5b4d69c358a9275a8de98f87caf6eda644b086 ]
sk->sk_mark is often read locklessly while other threads may be changing its value.
Fixes: 4a19ec5800fc ("[NET]: Introducing socket mark socket option.") Signed-off-by: Eric Dumazet edumazet@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- include/net/inet_sock.h | 7 ++++--- include/net/ip.h | 2 +- include/net/route.h | 4 ++-- net/can/raw.c | 2 +- net/core/sock.c | 4 ++-- net/dccp/ipv6.c | 4 ++-- net/ipv4/inet_diag.c | 4 ++-- net/ipv4/ip_output.c | 4 ++-- net/ipv4/route.c | 4 ++-- net/ipv4/tcp_ipv4.c | 2 +- net/ipv6/ping.c | 2 +- net/ipv6/raw.c | 4 ++-- net/ipv6/route.c | 7 ++++--- net/ipv6/tcp_ipv6.c | 6 +++--- net/ipv6/udp.c | 4 ++-- net/l2tp/l2tp_ip6.c | 2 +- net/mptcp/sockopt.c | 2 +- net/netfilter/nft_socket.c | 2 +- net/netfilter/xt_socket.c | 4 ++-- net/packet/af_packet.c | 6 +++--- net/smc/af_smc.c | 2 +- net/xdp/xsk.c | 2 +- net/xfrm/xfrm_policy.c | 2 +- 23 files changed, 42 insertions(+), 40 deletions(-)
diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h index caa20a9055310..0bb32bfc61832 100644 --- a/include/net/inet_sock.h +++ b/include/net/inet_sock.h @@ -107,11 +107,12 @@ static inline struct inet_request_sock *inet_rsk(const struct request_sock *sk)
static inline u32 inet_request_mark(const struct sock *sk, struct sk_buff *skb) { - if (!sk->sk_mark && - READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fwmark_accept)) + u32 mark = READ_ONCE(sk->sk_mark); + + if (!mark && READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fwmark_accept)) return skb->mark;
- return sk->sk_mark; + return mark; }
static inline int inet_request_bound_dev_if(const struct sock *sk, diff --git a/include/net/ip.h b/include/net/ip.h index 83a1a9bc3ceb1..530e7257e4389 100644 --- a/include/net/ip.h +++ b/include/net/ip.h @@ -93,7 +93,7 @@ static inline void ipcm_init_sk(struct ipcm_cookie *ipcm, { ipcm_init(ipcm);
- ipcm->sockc.mark = inet->sk.sk_mark; + ipcm->sockc.mark = READ_ONCE(inet->sk.sk_mark); ipcm->sockc.tsflags = inet->sk.sk_tsflags; ipcm->oif = READ_ONCE(inet->sk.sk_bound_dev_if); ipcm->addr = inet->inet_saddr; diff --git a/include/net/route.h b/include/net/route.h index bcc367cf3aa2d..9ca0f72868b76 100644 --- a/include/net/route.h +++ b/include/net/route.h @@ -168,7 +168,7 @@ static inline struct rtable *ip_route_output_ports(struct net *net, struct flowi __be16 dport, __be16 sport, __u8 proto, __u8 tos, int oif) { - flowi4_init_output(fl4, oif, sk ? sk->sk_mark : 0, tos, + flowi4_init_output(fl4, oif, sk ? READ_ONCE(sk->sk_mark) : 0, tos, RT_SCOPE_UNIVERSE, proto, sk ? inet_sk_flowi_flags(sk) : 0, daddr, saddr, dport, sport, sock_net_uid(net, sk)); @@ -301,7 +301,7 @@ static inline void ip_route_connect_init(struct flowi4 *fl4, __be32 dst, if (inet_sk(sk)->transparent) flow_flags |= FLOWI_FLAG_ANYSRC;
- flowi4_init_output(fl4, oif, sk->sk_mark, ip_sock_rt_tos(sk), + flowi4_init_output(fl4, oif, READ_ONCE(sk->sk_mark), ip_sock_rt_tos(sk), ip_sock_rt_scope(sk), protocol, flow_flags, dst, src, dport, sport, sk->sk_uid); } diff --git a/net/can/raw.c b/net/can/raw.c index f64469b98260f..f8e3866157a33 100644 --- a/net/can/raw.c +++ b/net/can/raw.c @@ -873,7 +873,7 @@ static int raw_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
skb->dev = dev; skb->priority = sk->sk_priority; - skb->mark = sk->sk_mark; + skb->mark = READ_ONCE(sk->sk_mark); skb->tstamp = sockc.transmit_time;
skb_setup_tx_timestamp(skb, sockc.tsflags); diff --git a/net/core/sock.c b/net/core/sock.c index 8c69610753ec2..9298dffbe46b8 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -984,7 +984,7 @@ EXPORT_SYMBOL(sock_set_rcvbuf); static void __sock_set_mark(struct sock *sk, u32 val) { if (val != sk->sk_mark) { - sk->sk_mark = val; + WRITE_ONCE(sk->sk_mark, val); sk_dst_reset(sk); } } @@ -1799,7 +1799,7 @@ int sk_getsockopt(struct sock *sk, int level, int optname, optval, optlen, len);
case SO_MARK: - v.val = sk->sk_mark; + v.val = READ_ONCE(sk->sk_mark); break;
case SO_RCVMARK: diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c index 93c98990d7263..94b69a50c8b50 100644 --- a/net/dccp/ipv6.c +++ b/net/dccp/ipv6.c @@ -238,8 +238,8 @@ static int dccp_v6_send_response(const struct sock *sk, struct request_sock *req opt = ireq->ipv6_opt; if (!opt) opt = rcu_dereference(np->opt); - err = ip6_xmit(sk, skb, &fl6, sk->sk_mark, opt, np->tclass, - sk->sk_priority); + err = ip6_xmit(sk, skb, &fl6, READ_ONCE(sk->sk_mark), opt, + np->tclass, sk->sk_priority); rcu_read_unlock(); err = net_xmit_eval(err); } diff --git a/net/ipv4/inet_diag.c b/net/ipv4/inet_diag.c index b812eb36f0e36..f7426926a1041 100644 --- a/net/ipv4/inet_diag.c +++ b/net/ipv4/inet_diag.c @@ -150,7 +150,7 @@ int inet_diag_msg_attrs_fill(struct sock *sk, struct sk_buff *skb, } #endif
- if (net_admin && nla_put_u32(skb, INET_DIAG_MARK, sk->sk_mark)) + if (net_admin && nla_put_u32(skb, INET_DIAG_MARK, READ_ONCE(sk->sk_mark))) goto errout;
if (ext & (1 << (INET_DIAG_CLASS_ID - 1)) || @@ -799,7 +799,7 @@ int inet_diag_bc_sk(const struct nlattr *bc, struct sock *sk) entry.ifindex = sk->sk_bound_dev_if; entry.userlocks = sk_fullsock(sk) ? sk->sk_userlocks : 0; if (sk_fullsock(sk)) - entry.mark = sk->sk_mark; + entry.mark = READ_ONCE(sk->sk_mark); else if (sk->sk_state == TCP_NEW_SYN_RECV) entry.mark = inet_rsk(inet_reqsk(sk))->ir_mark; else if (sk->sk_state == TCP_TIME_WAIT) diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c index d95e40a47098a..80c94749eafe2 100644 --- a/net/ipv4/ip_output.c +++ b/net/ipv4/ip_output.c @@ -186,7 +186,7 @@ int ip_build_and_send_pkt(struct sk_buff *skb, const struct sock *sk,
skb->priority = sk->sk_priority; if (!skb->mark) - skb->mark = sk->sk_mark; + skb->mark = READ_ONCE(sk->sk_mark);
/* Send it out. */ return ip_local_out(net, skb->sk, skb); @@ -529,7 +529,7 @@ int __ip_queue_xmit(struct sock *sk, struct sk_buff *skb, struct flowi *fl,
/* TODO : should we use skb->sk here instead of sk ? */ skb->priority = sk->sk_priority; - skb->mark = sk->sk_mark; + skb->mark = READ_ONCE(sk->sk_mark);
res = ip_local_out(net, sk, skb); rcu_read_unlock(); diff --git a/net/ipv4/route.c b/net/ipv4/route.c index 98d7e6ba7493b..92fede388d520 100644 --- a/net/ipv4/route.c +++ b/net/ipv4/route.c @@ -518,7 +518,7 @@ static void __build_flow_key(const struct net *net, struct flowi4 *fl4, const struct inet_sock *inet = inet_sk(sk);
oif = sk->sk_bound_dev_if; - mark = sk->sk_mark; + mark = READ_ONCE(sk->sk_mark); tos = ip_sock_rt_tos(sk); scope = ip_sock_rt_scope(sk); prot = inet->hdrincl ? IPPROTO_RAW : sk->sk_protocol; @@ -552,7 +552,7 @@ static void build_sk_flow_key(struct flowi4 *fl4, const struct sock *sk) inet_opt = rcu_dereference(inet->inet_opt); if (inet_opt && inet_opt->opt.srr) daddr = inet_opt->opt.faddr; - flowi4_init_output(fl4, sk->sk_bound_dev_if, sk->sk_mark, + flowi4_init_output(fl4, sk->sk_bound_dev_if, READ_ONCE(sk->sk_mark), ip_sock_rt_tos(sk) & IPTOS_RT_MASK, ip_sock_rt_scope(sk), inet->hdrincl ? IPPROTO_RAW : sk->sk_protocol, diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index f37d13ee7b4cc..48429f0ee23b0 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -931,7 +931,7 @@ static void tcp_v4_send_ack(const struct sock *sk, ctl_sk = this_cpu_read(ipv4_tcp_sk); sock_net_set(ctl_sk, net); ctl_sk->sk_mark = (sk->sk_state == TCP_TIME_WAIT) ? - inet_twsk(sk)->tw_mark : sk->sk_mark; + inet_twsk(sk)->tw_mark : READ_ONCE(sk->sk_mark); ctl_sk->sk_priority = (sk->sk_state == TCP_TIME_WAIT) ? inet_twsk(sk)->tw_priority : sk->sk_priority; transmit_time = tcp_transmit_time(sk); diff --git a/net/ipv6/ping.c b/net/ipv6/ping.c index f804c11e2146c..c2c291827a2ce 100644 --- a/net/ipv6/ping.c +++ b/net/ipv6/ping.c @@ -120,7 +120,7 @@ static int ping_v6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
ipcm6_init_sk(&ipc6, np); ipc6.sockc.tsflags = sk->sk_tsflags; - ipc6.sockc.mark = sk->sk_mark; + ipc6.sockc.mark = READ_ONCE(sk->sk_mark);
fl6.flowi6_oif = oif;
diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c index 44ee7a2e72ac2..a90a09658a71a 100644 --- a/net/ipv6/raw.c +++ b/net/ipv6/raw.c @@ -774,12 +774,12 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) */ memset(&fl6, 0, sizeof(fl6));
- fl6.flowi6_mark = sk->sk_mark; + fl6.flowi6_mark = READ_ONCE(sk->sk_mark); fl6.flowi6_uid = sk->sk_uid;
ipcm6_init(&ipc6); ipc6.sockc.tsflags = sk->sk_tsflags; - ipc6.sockc.mark = sk->sk_mark; + ipc6.sockc.mark = fl6.flowi6_mark;
if (sin6) { if (addr_len < SIN6_LEN_RFC2133) diff --git a/net/ipv6/route.c b/net/ipv6/route.c index 392aaa373b667..d5c6be77ec1ea 100644 --- a/net/ipv6/route.c +++ b/net/ipv6/route.c @@ -2951,7 +2951,8 @@ void ip6_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, __be32 mtu) if (!oif && skb->dev) oif = l3mdev_master_ifindex(skb->dev);
- ip6_update_pmtu(skb, sock_net(sk), mtu, oif, sk->sk_mark, sk->sk_uid); + ip6_update_pmtu(skb, sock_net(sk), mtu, oif, READ_ONCE(sk->sk_mark), + sk->sk_uid);
dst = __sk_dst_get(sk); if (!dst || !dst->obsolete || @@ -3172,8 +3173,8 @@ void ip6_redirect_no_header(struct sk_buff *skb, struct net *net, int oif)
void ip6_sk_redirect(struct sk_buff *skb, struct sock *sk) { - ip6_redirect(skb, sock_net(sk), sk->sk_bound_dev_if, sk->sk_mark, - sk->sk_uid); + ip6_redirect(skb, sock_net(sk), sk->sk_bound_dev_if, + READ_ONCE(sk->sk_mark), sk->sk_uid); } EXPORT_SYMBOL_GPL(ip6_sk_redirect);
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index f7c248a7f8d1d..346c9bcd5849d 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -568,8 +568,8 @@ static int tcp_v6_send_synack(const struct sock *sk, struct dst_entry *dst, opt = ireq->ipv6_opt; if (!opt) opt = rcu_dereference(np->opt); - err = ip6_xmit(sk, skb, fl6, skb->mark ? : sk->sk_mark, opt, - tclass, sk->sk_priority); + err = ip6_xmit(sk, skb, fl6, skb->mark ? : READ_ONCE(sk->sk_mark), + opt, tclass, sk->sk_priority); rcu_read_unlock(); err = net_xmit_eval(err); } @@ -943,7 +943,7 @@ static void tcp_v6_send_response(const struct sock *sk, struct sk_buff *skb, u32 if (sk->sk_state == TCP_TIME_WAIT) mark = inet_twsk(sk)->tw_mark; else - mark = sk->sk_mark; + mark = READ_ONCE(sk->sk_mark); skb_set_delivery_time(buff, tcp_transmit_time(sk), true); } if (txhash) { diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c index 27292d44df654..8521729fb2375 100644 --- a/net/ipv6/udp.c +++ b/net/ipv6/udp.c @@ -628,7 +628,7 @@ int __udp6_lib_err(struct sk_buff *skb, struct inet6_skb_parm *opt, if (type == NDISC_REDIRECT) { if (tunnel) { ip6_redirect(skb, sock_net(sk), inet6_iif(skb), - sk->sk_mark, sk->sk_uid); + READ_ONCE(sk->sk_mark), sk->sk_uid); } else { ip6_sk_redirect(skb, sk); } @@ -1360,7 +1360,7 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) ipcm6_init(&ipc6); ipc6.gso_size = READ_ONCE(up->gso_size); ipc6.sockc.tsflags = sk->sk_tsflags; - ipc6.sockc.mark = sk->sk_mark; + ipc6.sockc.mark = READ_ONCE(sk->sk_mark);
/* destination address check */ if (sin6) { diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c index 5137ea1861ce2..bce4132b0a5c8 100644 --- a/net/l2tp/l2tp_ip6.c +++ b/net/l2tp/l2tp_ip6.c @@ -519,7 +519,7 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) /* Get and verify the address */ memset(&fl6, 0, sizeof(fl6));
- fl6.flowi6_mark = sk->sk_mark; + fl6.flowi6_mark = READ_ONCE(sk->sk_mark); fl6.flowi6_uid = sk->sk_uid;
ipcm6_init(&ipc6); diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c index d4258869ac48f..64fcfc3d5270f 100644 --- a/net/mptcp/sockopt.c +++ b/net/mptcp/sockopt.c @@ -102,7 +102,7 @@ static void mptcp_sol_socket_sync_intval(struct mptcp_sock *msk, int optname, in break; case SO_MARK: if (READ_ONCE(ssk->sk_mark) != sk->sk_mark) { - ssk->sk_mark = sk->sk_mark; + WRITE_ONCE(ssk->sk_mark, sk->sk_mark); sk_dst_reset(ssk); } break; diff --git a/net/netfilter/nft_socket.c b/net/netfilter/nft_socket.c index 85f8df87efdaa..1dd336a3ce786 100644 --- a/net/netfilter/nft_socket.c +++ b/net/netfilter/nft_socket.c @@ -107,7 +107,7 @@ static void nft_socket_eval(const struct nft_expr *expr, break; case NFT_SOCKET_MARK: if (sk_fullsock(sk)) { - *dest = sk->sk_mark; + *dest = READ_ONCE(sk->sk_mark); } else { regs->verdict.code = NFT_BREAK; return; diff --git a/net/netfilter/xt_socket.c b/net/netfilter/xt_socket.c index 7013f55f05d1e..76e01f292aaff 100644 --- a/net/netfilter/xt_socket.c +++ b/net/netfilter/xt_socket.c @@ -77,7 +77,7 @@ socket_match(const struct sk_buff *skb, struct xt_action_param *par,
if (info->flags & XT_SOCKET_RESTORESKMARK && !wildcard && transparent && sk_fullsock(sk)) - pskb->mark = sk->sk_mark; + pskb->mark = READ_ONCE(sk->sk_mark);
if (sk != skb->sk) sock_gen_put(sk); @@ -138,7 +138,7 @@ socket_mt6_v1_v2_v3(const struct sk_buff *skb, struct xt_action_param *par)
if (info->flags & XT_SOCKET_RESTORESKMARK && !wildcard && transparent && sk_fullsock(sk)) - pskb->mark = sk->sk_mark; + pskb->mark = READ_ONCE(sk->sk_mark);
if (sk != skb->sk) sock_gen_put(sk); diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c index a2dbeb264f260..6f033c334c7b4 100644 --- a/net/packet/af_packet.c +++ b/net/packet/af_packet.c @@ -2051,7 +2051,7 @@ static int packet_sendmsg_spkt(struct socket *sock, struct msghdr *msg, skb->protocol = proto; skb->dev = dev; skb->priority = sk->sk_priority; - skb->mark = sk->sk_mark; + skb->mark = READ_ONCE(sk->sk_mark); skb->tstamp = sockc.transmit_time;
skb_setup_tx_timestamp(skb, sockc.tsflags); @@ -2586,7 +2586,7 @@ static int tpacket_fill_skb(struct packet_sock *po, struct sk_buff *skb, skb->protocol = proto; skb->dev = dev; skb->priority = po->sk.sk_priority; - skb->mark = po->sk.sk_mark; + skb->mark = READ_ONCE(po->sk.sk_mark); skb->tstamp = sockc->transmit_time; skb_setup_tx_timestamp(skb, sockc->tsflags); skb_zcopy_set_nouarg(skb, ph.raw); @@ -2988,7 +2988,7 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len) goto out_unlock;
sockcm_init(&sockc, sk); - sockc.mark = sk->sk_mark; + sockc.mark = READ_ONCE(sk->sk_mark); if (msg->msg_controllen) { err = sock_cmsg_send(sk, msg, &sockc); if (unlikely(err)) diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c index 538e9c6ec8c98..fa6b54c1411cb 100644 --- a/net/smc/af_smc.c +++ b/net/smc/af_smc.c @@ -445,7 +445,7 @@ static void smc_copy_sock_settings(struct sock *nsk, struct sock *osk, nsk->sk_rcvbuf = osk->sk_rcvbuf; nsk->sk_sndtimeo = osk->sk_sndtimeo; nsk->sk_rcvtimeo = osk->sk_rcvtimeo; - nsk->sk_mark = osk->sk_mark; + nsk->sk_mark = READ_ONCE(osk->sk_mark); nsk->sk_priority = osk->sk_priority; nsk->sk_rcvlowat = osk->sk_rcvlowat; nsk->sk_bound_dev_if = osk->sk_bound_dev_if; diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c index 32dd55b9ce8a8..35e518eaaebae 100644 --- a/net/xdp/xsk.c +++ b/net/xdp/xsk.c @@ -505,7 +505,7 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
skb->dev = dev; skb->priority = xs->sk.sk_priority; - skb->mark = xs->sk.sk_mark; + skb->mark = READ_ONCE(xs->sk.sk_mark); skb_shinfo(skb)->destructor_arg = (void *)(long)desc->addr; skb->destructor = xsk_destruct_skb;
diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c index e7617c9959c31..d6b405782b636 100644 --- a/net/xfrm/xfrm_policy.c +++ b/net/xfrm/xfrm_policy.c @@ -2250,7 +2250,7 @@ static struct xfrm_policy *xfrm_sk_policy_lookup(const struct sock *sk, int dir,
match = xfrm_selector_match(&pol->selector, fl, family); if (match) { - if ((sk->sk_mark & pol->mark.m) != pol->mark.v || + if ((READ_ONCE(sk->sk_mark) & pol->mark.m) != pol->mark.v || pol->if_id != if_id) { pol = NULL; goto out;
From: Eric Dumazet edumazet@google.com
[ Upstream commit 11695c6e966b0ec7ed1d16777d294cef865a5c91 ]
sk_getsockopt() runs locklessly, thus we need to annotate the read of sk->sk_peek_off.
While we are at it, add corresponding annotations to sk_set_peek_off() and unix_set_peek_off().
Fixes: b9bb53f3836f ("sock: convert sk_peek_offset functions to WRITE_ONCE") Signed-off-by: Eric Dumazet edumazet@google.com Cc: Willem de Bruijn willemb@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/sock.c | 4 ++-- net/unix/af_unix.c | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-)
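[Not part of the patch: a minimal, self-contained C sketch of the annotation pattern used throughout these lockless-read fixes. READ_ONCE()/WRITE_ONCE() below are simplified volatile stand-ins so the example builds outside the kernel tree; in the kernel they come from include/asm-generic/rwonce.h. The annotations add no locking, they only stop the compiler from tearing, fusing or re-loading the access and document that the lockless read is intentional.]

#include <stdio.h>

/* simplified stand-ins for the kernel macros (illustration only) */
#define WRITE_ONCE(x, val)	(*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)		(*(volatile __typeof__(x) *)&(x))

struct example_sock {
	int peek_off;		/* written under the socket lock, read locklessly */
};

/* writer side, e.g. a setsockopt() path holding the socket lock */
static void example_set_peek_off(struct example_sock *sk, int val)
{
	WRITE_ONCE(sk->peek_off, val);
}

/* reader side, e.g. a getsockopt() path that takes no lock */
static int example_get_peek_off(struct example_sock *sk)
{
	return READ_ONCE(sk->peek_off);
}

int main(void)
{
	struct example_sock sk = { .peek_off = 0 };

	example_set_peek_off(&sk, 64);
	printf("peek_off=%d\n", example_get_peek_off(&sk));
	return 0;
}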
diff --git a/net/core/sock.c b/net/core/sock.c index 9298dffbe46b8..f9fc2a0130a9f 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -1818,7 +1818,7 @@ int sk_getsockopt(struct sock *sk, int level, int optname, if (!sock->ops->set_peek_off) return -EOPNOTSUPP;
- v.val = sk->sk_peek_off; + v.val = READ_ONCE(sk->sk_peek_off); break; case SO_NOFCS: v.val = sock_flag(sk, SOCK_NOFCS); @@ -3127,7 +3127,7 @@ EXPORT_SYMBOL(__sk_mem_reclaim);
int sk_set_peek_off(struct sock *sk, int val) { - sk->sk_peek_off = val; + WRITE_ONCE(sk->sk_peek_off, val); return 0; } EXPORT_SYMBOL_GPL(sk_set_peek_off); diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c index e7728b57a8c70..10615878e3961 100644 --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -780,7 +780,7 @@ static int unix_set_peek_off(struct sock *sk, int val) if (mutex_lock_interruptible(&u->iolock)) return -EINTR;
- sk->sk_peek_off = val; + WRITE_ONCE(sk->sk_peek_off, val); mutex_unlock(&u->iolock);
return 0;
From: Eric Dumazet edumazet@google.com
[ Upstream commit e5f0d2dd3c2faa671711dac6d3ff3cef307bcfe3 ]
In a prior commit I forgot that sk_getsockopt() reads sk->sk_ll_usec without holding a lock.
Fixes: 0dbffbb5335a ("net: annotate data race around sk_ll_usec") Signed-off-by: Eric Dumazet edumazet@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/sock.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/core/sock.c b/net/core/sock.c index f9fc2a0130a9f..c0c495e0a474e 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -1848,7 +1848,7 @@ int sk_getsockopt(struct sock *sk, int level, int optname,
#ifdef CONFIG_NET_RX_BUSY_POLL case SO_BUSY_POLL: - v.val = sk->sk_ll_usec; + v.val = READ_ONCE(sk->sk_ll_usec); break; case SO_PREFER_BUSY_POLL: v.val = READ_ONCE(sk->sk_prefer_busy_poll);
From: Eric Dumazet edumazet@google.com
[ Upstream commit 8bf43be799d4b242ea552a14db10456446be843e ]
sk_getsockopt() runs locklessly. This means sk->sk_priority can be read while other threads are changing its value.
Other reads also happen without socket lock being held.
Add missing annotations where needed.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Eric Dumazet edumazet@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/sock.c | 6 +++--- net/ipv4/ip_output.c | 4 ++-- net/ipv4/ip_sockglue.c | 2 +- net/ipv4/raw.c | 2 +- net/ipv4/tcp_ipv4.c | 2 +- net/ipv6/raw.c | 2 +- net/ipv6/tcp_ipv6.c | 3 ++- net/packet/af_packet.c | 6 +++--- 8 files changed, 14 insertions(+), 13 deletions(-)
diff --git a/net/core/sock.c b/net/core/sock.c index c0c495e0a474e..1f31a97100d4f 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -800,7 +800,7 @@ EXPORT_SYMBOL(sock_no_linger); void sock_set_priority(struct sock *sk, u32 priority) { lock_sock(sk); - sk->sk_priority = priority; + WRITE_ONCE(sk->sk_priority, priority); release_sock(sk); } EXPORT_SYMBOL(sock_set_priority); @@ -1210,7 +1210,7 @@ int sk_setsockopt(struct sock *sk, int level, int optname, if ((val >= 0 && val <= 6) || sockopt_ns_capable(sock_net(sk)->user_ns, CAP_NET_RAW) || sockopt_ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN)) - sk->sk_priority = val; + WRITE_ONCE(sk->sk_priority, val); else ret = -EPERM; break; @@ -1672,7 +1672,7 @@ int sk_getsockopt(struct sock *sk, int level, int optname, break;
case SO_PRIORITY: - v.val = sk->sk_priority; + v.val = READ_ONCE(sk->sk_priority); break;
case SO_LINGER: diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c index 80c94749eafe2..6f6f63cf9224f 100644 --- a/net/ipv4/ip_output.c +++ b/net/ipv4/ip_output.c @@ -184,7 +184,7 @@ int ip_build_and_send_pkt(struct sk_buff *skb, const struct sock *sk, ip_options_build(skb, &opt->opt, daddr, rt); }
- skb->priority = sk->sk_priority; + skb->priority = READ_ONCE(sk->sk_priority); if (!skb->mark) skb->mark = READ_ONCE(sk->sk_mark);
@@ -528,7 +528,7 @@ int __ip_queue_xmit(struct sock *sk, struct sk_buff *skb, struct flowi *fl, skb_shinfo(skb)->gso_segs ?: 1);
/* TODO : should we use skb->sk here instead of sk ? */ - skb->priority = sk->sk_priority; + skb->priority = READ_ONCE(sk->sk_priority); skb->mark = READ_ONCE(sk->sk_mark);
res = ip_local_out(net, sk, skb); diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c index 8e97d8d4cc9d9..d41bce8927b2c 100644 --- a/net/ipv4/ip_sockglue.c +++ b/net/ipv4/ip_sockglue.c @@ -592,7 +592,7 @@ void __ip_sock_set_tos(struct sock *sk, int val) } if (inet_sk(sk)->tos != val) { inet_sk(sk)->tos = val; - sk->sk_priority = rt_tos2priority(val); + WRITE_ONCE(sk->sk_priority, rt_tos2priority(val)); sk_dst_reset(sk); } } diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c index eadf1c9ef7e49..fb31624019435 100644 --- a/net/ipv4/raw.c +++ b/net/ipv4/raw.c @@ -348,7 +348,7 @@ static int raw_send_hdrinc(struct sock *sk, struct flowi4 *fl4, goto error; skb_reserve(skb, hlen);
- skb->priority = sk->sk_priority; + skb->priority = READ_ONCE(sk->sk_priority); skb->mark = sockc->mark; skb->tstamp = sockc->transmit_time; skb_dst_set(skb, &rt->dst); diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 48429f0ee23b0..498dd4acdeec8 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -933,7 +933,7 @@ static void tcp_v4_send_ack(const struct sock *sk, ctl_sk->sk_mark = (sk->sk_state == TCP_TIME_WAIT) ? inet_twsk(sk)->tw_mark : READ_ONCE(sk->sk_mark); ctl_sk->sk_priority = (sk->sk_state == TCP_TIME_WAIT) ? - inet_twsk(sk)->tw_priority : sk->sk_priority; + inet_twsk(sk)->tw_priority : READ_ONCE(sk->sk_priority); transmit_time = tcp_transmit_time(sk); ip_send_unicast_reply(ctl_sk, skb, &TCP_SKB_CB(skb)->header.h4.opt, diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c index a90a09658a71a..d85d2082aeb77 100644 --- a/net/ipv6/raw.c +++ b/net/ipv6/raw.c @@ -614,7 +614,7 @@ static int rawv6_send_hdrinc(struct sock *sk, struct msghdr *msg, int length, skb_reserve(skb, hlen);
skb->protocol = htons(ETH_P_IPV6); - skb->priority = sk->sk_priority; + skb->priority = READ_ONCE(sk->sk_priority); skb->mark = sockc->mark; skb->tstamp = sockc->transmit_time;
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index 346c9bcd5849d..3155692a0e06b 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -1132,7 +1132,8 @@ static void tcp_v6_reqsk_send_ack(const struct sock *sk, struct sk_buff *skb, tcp_time_stamp_raw() + tcp_rsk(req)->ts_off, READ_ONCE(req->ts_recent), sk->sk_bound_dev_if, tcp_v6_md5_do_lookup(sk, &ipv6_hdr(skb)->saddr, l3index), - ipv6_get_dsfield(ipv6_hdr(skb)), 0, sk->sk_priority, + ipv6_get_dsfield(ipv6_hdr(skb)), 0, + READ_ONCE(sk->sk_priority), READ_ONCE(tcp_rsk(req)->txhash)); }
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c index 6f033c334c7b4..a753246ef1657 100644 --- a/net/packet/af_packet.c +++ b/net/packet/af_packet.c @@ -2050,7 +2050,7 @@ static int packet_sendmsg_spkt(struct socket *sock, struct msghdr *msg,
skb->protocol = proto; skb->dev = dev; - skb->priority = sk->sk_priority; + skb->priority = READ_ONCE(sk->sk_priority); skb->mark = READ_ONCE(sk->sk_mark); skb->tstamp = sockc.transmit_time;
@@ -2585,7 +2585,7 @@ static int tpacket_fill_skb(struct packet_sock *po, struct sk_buff *skb,
skb->protocol = proto; skb->dev = dev; - skb->priority = po->sk.sk_priority; + skb->priority = READ_ONCE(po->sk.sk_priority); skb->mark = READ_ONCE(po->sk.sk_mark); skb->tstamp = sockc->transmit_time; skb_setup_tx_timestamp(skb, sockc->tsflags); @@ -3061,7 +3061,7 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
skb->protocol = proto; skb->dev = dev; - skb->priority = sk->sk_priority; + skb->priority = READ_ONCE(sk->sk_priority); skb->mark = sockc.mark; skb->tstamp = sockc.transmit_time;
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit e739718444f7bf2fa3d70d101761ad83056ca628 ]
syzkaller found a zero-division error [0] in div_s64_rem(), called from get_cycle_time_elapsed(), where sched->cycle_time is the divisor.
We have tests in parse_taprio_schedule() so that cycle_time will never be 0, and actually cycle_time is not 0 in get_cycle_time_elapsed().
The problem is that the types of divisor are different; cycle_time is s64, but the argument of div_s64_rem() is s32.
syzkaller fed the input below, and 0x100000000 was cast to s32, becoming 0.
@TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME={0xc, 0x8, 0x100000000}
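[Not part of the patch: a standalone C illustration of the truncation described above, using the value from the reproducer. On the usual two's-complement ABIs the low 32 bits of 0x100000000 are all zero, so a 32-bit divisor parameter receives 0.]

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int64_t cycle_time = 0x100000000LL;	/* s64 value from the reproducer */
	int32_t divisor = (int32_t)cycle_time;	/* what a 32-bit divisor parameter sees */

	/* prints: cycle_time=4294967296 divisor=0 */
	printf("cycle_time=%lld divisor=%d\n",
	       (long long)cycle_time, divisor);

	/* any later division by 'divisor' is now a division by zero */
	return 0;
}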
We use s64 for cycle_time to cast it to ktime_t, so let's keep it and set max for cycle_time.
While at it, we prevent overflow in setup_txtime() and add another test in parse_taprio_schedule() to check if cycle_time overflows.
Also, we add a new tdc test case for this issue.
[0]:
divide error: 0000 [#1] PREEMPT SMP KASAN NOPTI
CPU: 1 PID: 103 Comm: kworker/1:3 Not tainted 6.5.0-rc1-00330-g60cc1f7d0605 #3
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
Workqueue: ipv6_addrconf addrconf_dad_work
RIP: 0010:div_s64_rem include/linux/math64.h:42 [inline]
RIP: 0010:get_cycle_time_elapsed net/sched/sch_taprio.c:223 [inline]
RIP: 0010:find_entry_to_transmit+0x252/0x7e0 net/sched/sch_taprio.c:344
Code: 3c 02 00 0f 85 5e 05 00 00 48 8b 4c 24 08 4d 8b bd 40 01 00 00 48 8b 7c 24 48 48 89 c8 4c 29 f8 48 63 f7 48 99 48 89 74 24 70 <48> f7 fe 48 29 d1 48 8d 04 0f 49 89 cc 48 89 44 24 20 49 8d 85 10
RSP: 0018:ffffc90000acf260 EFLAGS: 00010206
RAX: 177450e0347560cf RBX: 0000000000000000 RCX: 177450e0347560cf
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000100000000
RBP: 0000000000000056 R08: 0000000000000000 R09: ffffed10020a0934
R10: ffff8880105049a7 R11: ffff88806cf3a520 R12: ffff888010504800
R13: ffff88800c00d800 R14: ffff8880105049a0 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff88806cf00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f0edf84f0e8 CR3: 000000000d73c002 CR4: 0000000000770ee0
PKRU: 55555554
Call Trace:
 <TASK>
 get_packet_txtime net/sched/sch_taprio.c:508 [inline]
 taprio_enqueue_one+0x900/0xff0 net/sched/sch_taprio.c:577
 taprio_enqueue+0x378/0xae0 net/sched/sch_taprio.c:658
 dev_qdisc_enqueue+0x46/0x170 net/core/dev.c:3732
 __dev_xmit_skb net/core/dev.c:3821 [inline]
 __dev_queue_xmit+0x1b2f/0x3000 net/core/dev.c:4169
 dev_queue_xmit include/linux/netdevice.h:3088 [inline]
 neigh_resolve_output net/core/neighbour.c:1552 [inline]
 neigh_resolve_output+0x4a7/0x780 net/core/neighbour.c:1532
 neigh_output include/net/neighbour.h:544 [inline]
 ip6_finish_output2+0x924/0x17d0 net/ipv6/ip6_output.c:135
 __ip6_finish_output+0x620/0xaa0 net/ipv6/ip6_output.c:196
 ip6_finish_output net/ipv6/ip6_output.c:207 [inline]
 NF_HOOK_COND include/linux/netfilter.h:292 [inline]
 ip6_output+0x206/0x410 net/ipv6/ip6_output.c:228
 dst_output include/net/dst.h:458 [inline]
 NF_HOOK.constprop.0+0xea/0x260 include/linux/netfilter.h:303
 ndisc_send_skb+0x872/0xe80 net/ipv6/ndisc.c:508
 ndisc_send_ns+0xb5/0x130 net/ipv6/ndisc.c:666
 addrconf_dad_work+0xc14/0x13f0 net/ipv6/addrconf.c:4175
 process_one_work+0x92c/0x13a0 kernel/workqueue.c:2597
 worker_thread+0x60f/0x1240 kernel/workqueue.c:2748
 kthread+0x2fe/0x3f0 kernel/kthread.c:389
 ret_from_fork+0x2c/0x50 arch/x86/entry/entry_64.S:308
 </TASK>
Modules linked in:
Fixes: 4cfd5779bd6e ("taprio: Add support for txtime-assist mode") Reported-by: syzkaller syzkaller@googlegroups.com Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Co-developed-by: Eric Dumazet edumazet@google.com Co-developed-by: Pedro Tammela pctammela@mojatatu.com Acked-by: Vinicius Costa Gomes vinicius.gomes@intel.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/sched/sch_taprio.c | 15 +++++++++-- .../tc-testing/tc-tests/qdiscs/taprio.json | 25 +++++++++++++++++++ 2 files changed, 38 insertions(+), 2 deletions(-)
diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c index f681af138179c..97afa244e54f5 100644 --- a/net/sched/sch_taprio.c +++ b/net/sched/sch_taprio.c @@ -1013,6 +1013,11 @@ static const struct nla_policy taprio_tc_policy[TCA_TAPRIO_TC_ENTRY_MAX + 1] = { TC_FP_PREEMPTIBLE), };
+static struct netlink_range_validation_signed taprio_cycle_time_range = { + .min = 0, + .max = INT_MAX, +}; + static const struct nla_policy taprio_policy[TCA_TAPRIO_ATTR_MAX + 1] = { [TCA_TAPRIO_ATTR_PRIOMAP] = { .len = sizeof(struct tc_mqprio_qopt) @@ -1021,7 +1026,8 @@ static const struct nla_policy taprio_policy[TCA_TAPRIO_ATTR_MAX + 1] = { [TCA_TAPRIO_ATTR_SCHED_BASE_TIME] = { .type = NLA_S64 }, [TCA_TAPRIO_ATTR_SCHED_SINGLE_ENTRY] = { .type = NLA_NESTED }, [TCA_TAPRIO_ATTR_SCHED_CLOCKID] = { .type = NLA_S32 }, - [TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME] = { .type = NLA_S64 }, + [TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME] = + NLA_POLICY_FULL_RANGE_SIGNED(NLA_S64, &taprio_cycle_time_range), [TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME_EXTENSION] = { .type = NLA_S64 }, [TCA_TAPRIO_ATTR_FLAGS] = { .type = NLA_U32 }, [TCA_TAPRIO_ATTR_TXTIME_DELAY] = { .type = NLA_U32 }, @@ -1157,6 +1163,11 @@ static int parse_taprio_schedule(struct taprio_sched *q, struct nlattr **tb, return -EINVAL; }
+ if (cycle < 0 || cycle > INT_MAX) { + NL_SET_ERR_MSG(extack, "'cycle_time' is too big"); + return -EINVAL; + } + new->cycle_time = cycle; }
@@ -1345,7 +1356,7 @@ static void setup_txtime(struct taprio_sched *q, struct sched_gate_list *sched, ktime_t base) { struct sched_entry *entry; - u32 interval = 0; + u64 interval = 0;
list_for_each_entry(entry, &sched->entries, list) { entry->next_txtime = ktime_add_ns(base, interval); diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/taprio.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/taprio.json index a44455372646a..08d4861c2e782 100644 --- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/taprio.json +++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/taprio.json @@ -131,5 +131,30 @@ "teardown": [ "echo "1" > /sys/bus/netdevsim/del_device" ] + }, + { + "id": "3e1e", + "name": "Add taprio Qdisc with an invalid cycle-time", + "category": [ + "qdisc", + "taprio" + ], + "plugins": { + "requires": "nsPlugin" + }, + "setup": [ + "echo "1 1 8" > /sys/bus/netdevsim/new_device", + "$TC qdisc add dev $ETH root handle 1: taprio num_tc 3 map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 queues 1@0 1@0 1@0 base-time 1000000000 sched-entry S 01 300000 flags 0x1 clockid CLOCK_TAI cycle-time 4294967296 || /bin/true", + "$IP link set dev $ETH up", + "$IP addr add 10.10.10.10/24 dev $ETH" + ], + "cmdUnderTest": "/bin/true", + "expExitCode": "0", + "verifyCmd": "$TC qdisc show dev $ETH", + "matchPattern": "qdisc taprio 1: root refcnt", + "matchCount": "0", + "teardown": [ + "echo "1" > /sys/bus/netdevsim/del_device" + ] } ]
From: Duoming Zhou duoming@zju.edu.cn
[ Upstream commit 1e7417c188d0a83fb385ba2dbe35fd2563f2b6f3 ]
The timer dev->stat_monitor can schedule the delayed work dev->wq and the delayed work dev->wq can also arm the dev->stat_monitor timer.
When the device is detaching, the net_device will be deallocated, but the net_device private data could still be dereferenced in the delayed work or the timer handler. As a result, UAF bugs will happen.
One racy situation is shown below:
      (Thread 1)                 |      (Thread 2)
lan78xx_stat_monitor()           | ...
                                 | lan78xx_disconnect()
 lan78xx_defer_kevent()          | ...
  ...                            |  cancel_delayed_work_sync(&dev->wq);
  schedule_delayed_work()        | ...
 (wait some time)                |  free_netdev(net); //free net_device
lan78xx_delayedwork()            |
 //use net_device private data   |
  dev-> //use                    |
Although we use cancel_delayed_work_sync() to cancel the delayed work in lan78xx_disconnect(), it could still be scheduled in timer handler lan78xx_stat_monitor().
Another racy situation is shown below:
      (Thread 1)                 |      (Thread 2)
lan78xx_delayedwork              |
 mod_timer()                     | lan78xx_disconnect()
                                 |  cancel_delayed_work_sync()
 (wait some time)                |  if (timer_pending(&dev->stat_monitor))
                                 |   del_timer_sync(&dev->stat_monitor);
lan78xx_stat_monitor()           | ...
 lan78xx_defer_kevent()          |  free_netdev(net); //free
 //use net_device private data   |
  dev-> //use                    |
Although we use del_timer_sync() to delete the timer, timer_pending() returns 0 while the timer handler is running, so the del_timer_sync() call is skipped and the timer could be re-armed.

In order to mitigate this bug, we use timer_shutdown_sync() to shut down the timer and then use cancel_delayed_work_sync() to cancel the delayed work. As a result, the net_device can be deallocated safely.

What's more, EVENT_DEV_DISCONNECT is set in dev->flags in lan78xx_disconnect(), but EVENT_STAT_UPDATE could still be set afterwards by lan78xx_stat_monitor(). So this patch puts the set_bit() behind timer_shutdown_sync().
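[Not the lan78xx code: a kernel-style sketch, with a made-up example_dev, of the teardown ordering the patch establishes for a self-re-arming timer/delayed-work pair. timer_shutdown_sync() waits for a running handler and forbids any future re-arming, so the work cancelled afterwards stays cancelled.]

#include <linux/bitops.h>
#include <linux/timer.h>
#include <linux/workqueue.h>

#define EXAMPLE_EVENT_DISCONNECT 0

struct example_dev {
	struct timer_list stat_monitor;	/* its handler schedules ->wq */
	struct delayed_work wq;		/* the work may re-arm ->stat_monitor */
	unsigned long flags;
};

static void example_teardown(struct example_dev *dev)
{
	/* stop the timer for good: waits for the handler, blocks re-arming */
	timer_shutdown_sync(&dev->stat_monitor);

	/* no new events can be flagged by the timer handler after this point */
	set_bit(EXAMPLE_EVENT_DISCONNECT, &dev->flags);

	/* the work cannot be re-scheduled behind our back any more */
	cancel_delayed_work_sync(&dev->wq);
}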
Fixes: 77dfff5bb7e2 ("lan78xx: Fix race condition in disconnect handling") Signed-off-by: Duoming Zhou duoming@zju.edu.cn Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/usb/lan78xx.c | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c index c458c030fadf6..59cde06aa7f60 100644 --- a/drivers/net/usb/lan78xx.c +++ b/drivers/net/usb/lan78xx.c @@ -4224,8 +4224,6 @@ static void lan78xx_disconnect(struct usb_interface *intf) if (!dev) return;
- set_bit(EVENT_DEV_DISCONNECT, &dev->flags); - netif_napi_del(&dev->napi);
udev = interface_to_usbdev(intf); @@ -4233,6 +4231,8 @@ static void lan78xx_disconnect(struct usb_interface *intf)
unregister_netdev(net);
+ timer_shutdown_sync(&dev->stat_monitor); + set_bit(EVENT_DEV_DISCONNECT, &dev->flags); cancel_delayed_work_sync(&dev->wq);
phydev = net->phydev; @@ -4247,9 +4247,6 @@ static void lan78xx_disconnect(struct usb_interface *intf)
usb_scuttle_anchored_urbs(&dev->deferred);
- if (timer_pending(&dev->stat_monitor)) - del_timer_sync(&dev->stat_monitor); - lan78xx_unbind(dev, intf);
lan78xx_free_tx_resources(dev);
On Wed, 09 Aug 2023, Greg Kroah-Hartman wrote:
[...]
Fixes: 77dfff5bb7e2 ("lan78xx: Fix race condition in disconnect handling")
Any idea why this stopped at linux-6.4.y? The aforementioned Fixes: commit also exists in linux-6.1.y and linux-5.15.y. I don't see any earlier backport attempts or failure reports that would otherwise explain this.
On Mon, Jan 08, 2024 at 02:52:24PM +0000, Lee Jones wrote:
On Wed, 09 Aug 2023, Greg Kroah-Hartman wrote:
[...]
Any idea why this stopped at linux-6.4.y? The aforementioned Fixes: commit also exists in linux-6.1.y and linux-5.15.y. I don't see any earlier backport attempts or failure reports that would otherwise explain this.
Did you try to build it:
drivers/net/usb/lan78xx.c: In function ‘lan78xx_disconnect’:
drivers/net/usb/lan78xx.c:4234:9: error: implicit declaration of function ‘timer_shutdown_sync’ [-Werror=implicit-function-declaration]
 4234 |         timer_shutdown_sync(&dev->stat_monitor);
      |         ^~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
That's a good reason to not include it...
Also, is it really an issue or just showing up on a CVE-checker somewhere? (odds on the latter, based on the author of this commit...)
If anyone really cares about it, I figured they would submit a working patch :)
thanks,
greg k-h
On Mon, 08 Jan 2024, Greg Kroah-Hartman wrote:
On Mon, Jan 08, 2024 at 02:52:24PM +0000, Lee Jones wrote:
On Wed, 09 Aug 2023, Greg Kroah-Hartman wrote:
[...]
Did you try to build it:
No, I just noticed that it was missing.
drivers/net/usb/lan78xx.c: In function ‘lan78xx_disconnect’: drivers/net/usb/lan78xx.c:4234:9: error: implicit declaration of function ‘timer_shutdown_sync’ [-Werror=implicit-function-declaration] 4234 | timer_shutdown_sync(&dev->stat_monitor); | ^~~~~~~~~~~~~~~~~~~ cc1: all warnings being treated as errors
That's a good reason to not include it...
It's a perfect reason not to include it.
The issue is not that the patch is not present. It's more the lack of transparency in terms of searchable information on why it was not included.
I was under the impression that a report is usually sent out when a patch failed to apply for any reason?
[...]
On Mon, Jan 08, 2024 at 04:58:06PM +0000, Lee Jones wrote:
On Mon, 08 Jan 2024, Greg Kroah-Hartman wrote:
On Mon, Jan 08, 2024 at 02:52:24PM +0000, Lee Jones wrote:
On Wed, 09 Aug 2023, Greg Kroah-Hartman wrote:
[...]
The issue is not that the patch is not present. It's more the lack of transparency in terms of searchable information on why it was not included.
I was under the impression that a report is usually sent out when a patch failed to apply for any reason?
For patches that are explicitly tagged for stable inclusion, yes, that will happen. That is not the case for this commit.
For patches that only have a "Fixes:" tag on it, those are gotten to on a "best effort" basis when we get a chance, as those were obviously not explicitly asked to be backported. And when they are backported, if they fail, they will fail silently as the author/maintainer was not explicitly asking them to be applied to a stable tree, so it would just be noise to complain about it.
So, it's lucky that this patch was backported at all to any stable tree :)
thanks,
greg k-h
On Mon, 08 Jan 2024, Greg Kroah-Hartman wrote:
On Mon, Jan 08, 2024 at 04:58:06PM +0000, Lee Jones wrote:
On Mon, 08 Jan 2024, Greg Kroah-Hartman wrote:
On Mon, Jan 08, 2024 at 02:52:24PM +0000, Lee Jones wrote:
On Wed, 09 Aug 2023, Greg Kroah-Hartman wrote:
[...]
For patches that are explicitly tagged for stable inclusion, yes, that will happen. That is not the case for this commit.
For patches that only have a "Fixes:" tag on it, those are gotten to on a "best effort" basis when we get a chance, as those were obviously not explicitly asked to be backported. And when they are backported, if they fail, they will fail silently as the author/maintainer was not explicitly asking them to be applied to a stable tree, so it would just be noise to complain about it.
So, it's lucky that this patch was backported at all to any stable tree :)
That's fair to a point.
Just know that if there are no other means to determine the actions taken place behind closed doors, then these queries are likely to reoccur.
It would be far nicer if an automated mail was sent out when a failed backport attempt were made in all cases. Even if we drop the individual contributor/maintainer addresses and only ping the mailing lists, since at least it then becomes helpfully searchable on LORE. Is it really more work to duplicate the workflow between intended Stable inclusions and any other attempt?
On Tue, Jan 09, 2024 at 08:32:51AM +0000, Lee Jones wrote:
On Mon, 08 Jan 2024, Greg Kroah-Hartman wrote:
On Mon, Jan 08, 2024 at 04:58:06PM +0000, Lee Jones wrote:
On Mon, 08 Jan 2024, Greg Kroah-Hartman wrote:
On Mon, Jan 08, 2024 at 02:52:24PM +0000, Lee Jones wrote:
On Wed, 09 Aug 2023, Greg Kroah-Hartman wrote:
[...]
Just know that if there are no other means to determine the actions taken place behind closed doors, then these queries are likely to reoccur.
There are no "closed doors" here, everything we apply is sent to the public, so the lack of an email will mean it was not applied. We can't spam the lists with "well I tried this patch on this tree and it didn't work" messages all the time, as that would be a mess, especially as changes like this were NOT asked to be backported, we just took the time to attempt to do so on our own accord.
Again, patches that are explicitly asked to be applied, and fail, will be notified if they fail. Patches that are not asked to be applied, and we try to do so anyway because it looks like they might be relevant, and they fail, will not be applied as it would seem that the original author was correct in that it shouldn't have been applied.
It would be far nicer if an automated mail was sent out when a failed backport attempt were made in all cases. Even if we drop the individual contributor/maintainer addresses and only ping the mailing lists, since at least it then becomes helpfully searchable on LORE. Is it really more work to duplicate the workflow between intended Stable inclusions and any other attempt?
It's a different patch stream and workflow. One that some people have suggested that we maybe just stop doing entirely (i.e. don't take anything that is not explicitly marked), but that would mean that many subsystems would never get any backports done for them because they never mark anything. So we do the best that we can, and do not bother the subsystems that do not want to be part of the stable backport process, which is totally legitimate as it is extra effort on their part and that is the first rule of the stable kernel process when we created them: - we will not impose any extra work on any maintainer or developer if they do not want to do it.
Also, when asking "why wasn't this patch applied?", 90% of the time a simple "let me go apply it and build it" will show that it either doesn't apply, or breaks the build, which will save people a round-trip in emails as that's the obvious answer :)
thanks,
greg k-h
From: Rafal Rogalski rafalx.rogalski@intel.com
[ Upstream commit 4b31fd4d77ffa430d0b74ba1885ea0a41594f202 ]
During qdisc create/delete, it is necessary to rebuild the queue of VSIs. An error occurred because the VSIs created by RDMA were still active.
Add a check for whether RDMA is active. If it is, disallow qdisc changes and write a message to the system log.
Fixes: 348048e724a0 ("ice: Implement iidc operations") Signed-off-by: Rafal Rogalski rafalx.rogalski@intel.com Signed-off-by: Mateusz Palczewski mateusz.palczewski@intel.com Signed-off-by: Kamil Maziarz kamil.maziarz@intel.com Tested-by: Bharathi Sreenivas bharathi.sreenivas@intel.com Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Link: https://lore.kernel.org/r/20230728171243.2446101-1-anthony.l.nguyen@intel.co... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/intel/ice/ice_main.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+)
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index fbe70458fda27..34e8e7cb1bc54 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -9055,6 +9055,7 @@ ice_setup_tc(struct net_device *netdev, enum tc_setup_type type, { struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_pf *pf = np->vsi->back; + bool locked = false; int err;
switch (type) { @@ -9064,10 +9065,27 @@ ice_setup_tc(struct net_device *netdev, enum tc_setup_type type, ice_setup_tc_block_cb, np, np, true); case TC_SETUP_QDISC_MQPRIO: + if (pf->adev) { + mutex_lock(&pf->adev_mutex); + device_lock(&pf->adev->dev); + locked = true; + if (pf->adev->dev.driver) { + netdev_err(netdev, "Cannot change qdisc when RDMA is active\n"); + err = -EBUSY; + goto adev_unlock; + } + } + /* setup traffic classifier for receive side */ mutex_lock(&pf->tc_mutex); err = ice_setup_tc_mqprio_qdisc(netdev, type_data); mutex_unlock(&pf->tc_mutex); + +adev_unlock: + if (locked) { + device_unlock(&pf->adev->dev); + mutex_unlock(&pf->adev_mutex); + } return err; default: return -EOPNOTSUPP;
From: Jakub Kicinski kuba@kernel.org
[ Upstream commit 37b61cda9c1606cd8b6445d900ca9dc03185e8b6 ]
Similarly to other recently fixed drivers, make sure we don't try to access XDP or page pool APIs when the NAPI budget is 0. A NAPI budget of 0 may mean that we are in netpoll.
This may result in running software IRQs in hard IRQ context, leading to deadlocks or crashes.
To make sure bnapi->tx_pkts doesn't get wiped without the events being handled, move clearing the field into the handler itself. Remember to clear tx_pkts after a reset (bnxt_enable_napi()), as it's technically possible that netpoll will accumulate some tx_pkts and then a reset will happen, leaving tx_pkts out of sync with reality.
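[Not the bnxt code: a tiny standalone sketch of the pattern just described; names are made up. With budget == 0 (netpoll, hard-IRQ context) the completion handler returns before doing any work that could hit page-pool/XDP paths, and tx_pkts is cleared by the handler itself so the count survives the skipped call.]

#include <stdio.h>

struct example_napi {
	int tx_pkts;	/* TX completions counted by the poll loop */
};

static void example_tx_int(struct example_napi *bnapi, int budget)
{
	int nr_pkts = bnapi->tx_pkts;

	if (!budget)	/* netpoll: defer the work, keep the count */
		return;

	printf("reclaiming %d TX completions\n", nr_pkts);
	bnapi->tx_pkts = 0;	/* cleared here, not by the caller */
}

int main(void)
{
	struct example_napi bnapi = { .tx_pkts = 3 };

	example_tx_int(&bnapi, 0);	/* netpoll case: nothing reclaimed */
	example_tx_int(&bnapi, 64);	/* normal NAPI poll: reclaim and clear */
	return 0;
}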
Fixes: 322b87ca55f2 ("bnxt_en: add page_pool support") Reviewed-by: Andy Gospodarek gospo@broadcom.com Reviewed-by: Michael Chan michael.chan@broadcom.com Link: https://lore.kernel.org/r/20230728205020.2784844-1-kuba@kernel.org Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 26 +++++++++++-------- drivers/net/ethernet/broadcom/bnxt/bnxt.h | 2 +- drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c | 8 +++++- drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h | 2 +- 4 files changed, 24 insertions(+), 14 deletions(-)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index b499bc9c4e067..0b314bf4fbe65 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -633,12 +633,13 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev) return NETDEV_TX_OK; }
-static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts) +static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int budget) { struct bnxt_tx_ring_info *txr = bnapi->tx_ring; struct netdev_queue *txq = netdev_get_tx_queue(bp->dev, txr->txq_index); u16 cons = txr->tx_cons; struct pci_dev *pdev = bp->pdev; + int nr_pkts = bnapi->tx_pkts; int i; unsigned int tx_bytes = 0;
@@ -688,6 +689,7 @@ static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts) dev_kfree_skb_any(skb); }
+ bnapi->tx_pkts = 0; WRITE_ONCE(txr->tx_cons, cons);
__netif_txq_completed_wake(txq, nr_pkts, tx_bytes, @@ -2573,12 +2575,11 @@ static int __bnxt_poll_work(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, return rx_pkts; }
-static void __bnxt_poll_work_done(struct bnxt *bp, struct bnxt_napi *bnapi) +static void __bnxt_poll_work_done(struct bnxt *bp, struct bnxt_napi *bnapi, + int budget) { - if (bnapi->tx_pkts) { - bnapi->tx_int(bp, bnapi, bnapi->tx_pkts); - bnapi->tx_pkts = 0; - } + if (bnapi->tx_pkts) + bnapi->tx_int(bp, bnapi, budget);
if ((bnapi->events & BNXT_RX_EVENT) && !(bnapi->in_reset)) { struct bnxt_rx_ring_info *rxr = bnapi->rx_ring; @@ -2607,7 +2608,7 @@ static int bnxt_poll_work(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, */ bnxt_db_cq(bp, &cpr->cp_db, cpr->cp_raw_cons);
- __bnxt_poll_work_done(bp, bnapi); + __bnxt_poll_work_done(bp, bnapi, budget); return rx_pkts; }
@@ -2738,7 +2739,7 @@ static int __bnxt_poll_cqs(struct bnxt *bp, struct bnxt_napi *bnapi, int budget) }
static void __bnxt_poll_cqs_done(struct bnxt *bp, struct bnxt_napi *bnapi, - u64 dbr_type) + u64 dbr_type, int budget) { struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring; int i; @@ -2754,7 +2755,7 @@ static void __bnxt_poll_cqs_done(struct bnxt *bp, struct bnxt_napi *bnapi, cpr2->had_work_done = 0; } } - __bnxt_poll_work_done(bp, bnapi); + __bnxt_poll_work_done(bp, bnapi, budget); }
static int bnxt_poll_p5(struct napi_struct *napi, int budget) @@ -2784,7 +2785,8 @@ static int bnxt_poll_p5(struct napi_struct *napi, int budget) if (cpr->has_more_work) break;
- __bnxt_poll_cqs_done(bp, bnapi, DBR_TYPE_CQ_ARMALL); + __bnxt_poll_cqs_done(bp, bnapi, DBR_TYPE_CQ_ARMALL, + budget); cpr->cp_raw_cons = raw_cons; if (napi_complete_done(napi, work_done)) BNXT_DB_NQ_ARM_P5(&cpr->cp_db, @@ -2814,7 +2816,7 @@ static int bnxt_poll_p5(struct napi_struct *napi, int budget) } raw_cons = NEXT_RAW_CMP(raw_cons); } - __bnxt_poll_cqs_done(bp, bnapi, DBR_TYPE_CQ); + __bnxt_poll_cqs_done(bp, bnapi, DBR_TYPE_CQ, budget); if (raw_cons != cpr->cp_raw_cons) { cpr->cp_raw_cons = raw_cons; BNXT_DB_NQ_P5(&cpr->cp_db, raw_cons); @@ -9433,6 +9435,8 @@ static void bnxt_enable_napi(struct bnxt *bp) cpr->sw_stats.rx.rx_resets++; bnapi->in_reset = false;
+ bnapi->tx_pkts = 0; + if (bnapi->rx_ring) { INIT_WORK(&cpr->dim.work, bnxt_dim_work); cpr->dim.mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE; diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h index 080e73496066b..bb95c3dc5270f 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h @@ -1005,7 +1005,7 @@ struct bnxt_napi { struct bnxt_tx_ring_info *tx_ring;
void (*tx_int)(struct bnxt *, struct bnxt_napi *, - int); + int budget); int tx_pkts; u8 events;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c index 4efa5fe6972b2..7f2f9a317d473 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c @@ -125,16 +125,20 @@ static void __bnxt_xmit_xdp_redirect(struct bnxt *bp, dma_unmap_len_set(tx_buf, len, 0); }
-void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts) +void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int budget) { struct bnxt_tx_ring_info *txr = bnapi->tx_ring; struct bnxt_rx_ring_info *rxr = bnapi->rx_ring; bool rx_doorbell_needed = false; + int nr_pkts = bnapi->tx_pkts; struct bnxt_sw_tx_bd *tx_buf; u16 tx_cons = txr->tx_cons; u16 last_tx_cons = tx_cons; int i, j, frags;
+ if (!budget) + return; + for (i = 0; i < nr_pkts; i++) { tx_buf = &txr->tx_buf_ring[tx_cons];
@@ -161,6 +165,8 @@ void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts) } tx_cons = NEXT_TX(tx_cons); } + + bnapi->tx_pkts = 0; WRITE_ONCE(txr->tx_cons, tx_cons); if (rx_doorbell_needed) { tx_buf = &txr->tx_buf_ring[last_tx_cons]; diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h index ea430d6961df3..5e412c5655ba5 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h @@ -16,7 +16,7 @@ struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp, struct bnxt_tx_ring_info *txr, dma_addr_t mapping, u32 len, struct xdp_buff *xdp); -void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts); +void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int budget); bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, struct xdp_buff xdp, struct page *page, u8 **data_ptr, unsigned int *len, u8 *event);
From: Michal Schmidt mschmidt@redhat.com
[ Upstream commit 611e1b016c7beceec5ae82ac62d4a7ca224c8f9d ]
The two mbox-related mutexes are destroyed in octep_ctrl_mbox_uninit(), but the corresponding mutex_init calls were missing. A "DEBUG_LOCKS_WARN_ON(lock->magic != lock)" warning was emitted with CONFIG_DEBUG_MUTEXES on.
Initialize the two mutexes in octep_ctrl_mbox_init().
Fixes: 577f0d1b1c5f ("octeon_ep: add separate mailbox command and response queues") Signed-off-by: Michal Schmidt mschmidt@redhat.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Link: https://lore.kernel.org/r/20230729151516.24153-1-mschmidt@redhat.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_mbox.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_mbox.c b/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_mbox.c index 035ead7935c74..dab61cc1acb57 100644 --- a/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_mbox.c +++ b/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_mbox.c @@ -98,6 +98,9 @@ int octep_ctrl_mbox_init(struct octep_ctrl_mbox *mbox) writeq(OCTEP_CTRL_MBOX_STATUS_INIT, OCTEP_CTRL_MBOX_INFO_HOST_STATUS(mbox->barmem));
+ mutex_init(&mbox->h2fq_lock); + mutex_init(&mbox->f2hq_lock); + mbox->h2fq.sz = readl(OCTEP_CTRL_MBOX_H2FQ_SZ(mbox->barmem)); mbox->h2fq.hw_prod = OCTEP_CTRL_MBOX_H2FQ_PROD(mbox->barmem); mbox->h2fq.hw_cons = OCTEP_CTRL_MBOX_H2FQ_CONS(mbox->barmem);
From: Andrii Nakryiko andrii@kernel.org
[ Upstream commit 1d28635abcf1914425d6516e641978011984c58a ]
Make each bpf() syscall command a bit more self-contained, making it easier to further enhance it. We move sysctl_unprivileged_bpf_disabled handling down to map_create() and bpf_prog_load(), two special commands in this regard.
Also swap the order of checks, calling bpf_capable() only if sysctl_unprivileged_bpf_disabled is true, avoiding unnecessary audit messages.
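A minimal sketch of the ordering point, with demo_* stand-ins for the real sysctl_unprivileged_bpf_disabled and bpf_capable() symbols (illustrative only):

#include <errno.h>

static int demo_unpriv_bpf_disabled;	/* mirrors the sysctl knob */

static int demo_bpf_capable(void)	/* the call that may emit audit records */
{
	return 0;
}

static int demo_map_create(void)
{
	/*
	 * The old code computed "bpf_capable() || !sysctl_..." and therefore
	 * always evaluated the capability check. With the swapped order and
	 * short-circuit evaluation, it only runs when the sysctl is set.
	 */
	if (demo_unpriv_bpf_disabled && !demo_bpf_capable())
		return -EPERM;

	/* ... proceed with map creation ... */
	return 0;
}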
Signed-off-by: Andrii Nakryiko andrii@kernel.org Signed-off-by: Daniel Borkmann daniel@iogearbox.net Acked-by: Stanislav Fomichev sdf@google.com Link: https://lore.kernel.org/bpf/20230613223533.3689589-2-andrii@kernel.org Stable-dep-of: 640a604585aa ("bpf, cpumap: Make sure kthread is running before map update returns") Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/bpf/syscall.c | 34 +++++++++++++++++++--------------- 1 file changed, 19 insertions(+), 15 deletions(-)
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 5524fcf6fb2a4..0a7238125e1a4 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -1157,6 +1157,15 @@ static int map_create(union bpf_attr *attr) !node_online(numa_node))) return -EINVAL;
+ /* Intent here is for unprivileged_bpf_disabled to block BPF map + * creation for unprivileged users; other actions depend + * on fd availability and access to bpffs, so are dependent on + * object creation success. Even with unprivileged BPF disabled, + * capability checks are still carried out. + */ + if (sysctl_unprivileged_bpf_disabled && !bpf_capable()) + return -EPERM; + /* find map type and init map: hashtable vs rbtree vs bloom vs ... */ map = find_and_alloc_map(attr); if (IS_ERR(map)) @@ -2535,6 +2544,16 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size) /* eBPF programs must be GPL compatible to use GPL-ed functions */ is_gpl = license_is_gpl_compatible(license);
+ /* Intent here is for unprivileged_bpf_disabled to block BPF program + * creation for unprivileged users; other actions depend + * on fd availability and access to bpffs, so are dependent on + * object creation success. Even with unprivileged BPF disabled, + * capability checks are still carried out for these + * and other operations. + */ + if (sysctl_unprivileged_bpf_disabled && !bpf_capable()) + return -EPERM; + if (attr->insn_cnt == 0 || attr->insn_cnt > (bpf_capable() ? BPF_COMPLEXITY_LIMIT_INSNS : BPF_MAXINSNS)) return -E2BIG; @@ -5018,23 +5037,8 @@ static int bpf_prog_bind_map(union bpf_attr *attr) static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size) { union bpf_attr attr; - bool capable; int err;
- capable = bpf_capable() || !sysctl_unprivileged_bpf_disabled; - - /* Intent here is for unprivileged_bpf_disabled to block key object - * creation commands for unprivileged users; other actions depend - * of fd availability and access to bpffs, so are dependent on - * object creation success. Capabilities are later verified for - * operations such as load and map create, so even with unprivileged - * BPF disabled, capability checks are still carried out for these - * and other operations. - */ - if (!capable && - (cmd == BPF_MAP_CREATE || cmd == BPF_PROG_LOAD)) - return -EPERM; - err = bpf_check_uarg_tail_zero(uattr, sizeof(attr), size); if (err) return err;
From: Andrii Nakryiko andrii@kernel.org
[ Upstream commit 22db41226b679768df8f0a4ff5de8e58f625f45b ]
Currently find_and_alloc_map() performs two separate functions: some argument sanity checking and part of the map creation workflow. Neither of those functions is self-sufficient; both are augmented by further checks and initialization logic in the caller (the map_create() function). So unify all the sanity checks, permission checks, and creation and initialization logic in one linear piece of code in map_create() instead. This also makes it easier to further enhance permission checks and keep them located in one place.
Signed-off-by: Andrii Nakryiko andrii@kernel.org Signed-off-by: Daniel Borkmann daniel@iogearbox.net Acked-by: Stanislav Fomichev sdf@google.com Link: https://lore.kernel.org/bpf/20230613223533.3689589-3-andrii@kernel.org Stable-dep-of: 640a604585aa ("bpf, cpumap: Make sure kthread is running before map update returns") Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/bpf/syscall.c | 57 +++++++++++++++++++------------------------- 1 file changed, 24 insertions(+), 33 deletions(-)
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 0a7238125e1a4..8fddf0eea9bf2 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -109,37 +109,6 @@ const struct bpf_map_ops bpf_map_offload_ops = { .map_mem_usage = bpf_map_offload_map_mem_usage, };
-static struct bpf_map *find_and_alloc_map(union bpf_attr *attr) -{ - const struct bpf_map_ops *ops; - u32 type = attr->map_type; - struct bpf_map *map; - int err; - - if (type >= ARRAY_SIZE(bpf_map_types)) - return ERR_PTR(-EINVAL); - type = array_index_nospec(type, ARRAY_SIZE(bpf_map_types)); - ops = bpf_map_types[type]; - if (!ops) - return ERR_PTR(-EINVAL); - - if (ops->map_alloc_check) { - err = ops->map_alloc_check(attr); - if (err) - return ERR_PTR(err); - } - if (attr->map_ifindex) - ops = &bpf_map_offload_ops; - if (!ops->map_mem_usage) - return ERR_PTR(-EINVAL); - map = ops->map_alloc(attr); - if (IS_ERR(map)) - return map; - map->ops = ops; - map->map_type = type; - return map; -} - static void bpf_map_write_active_inc(struct bpf_map *map) { atomic64_inc(&map->writecnt); @@ -1127,7 +1096,9 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf, /* called via syscall */ static int map_create(union bpf_attr *attr) { + const struct bpf_map_ops *ops; int numa_node = bpf_map_attr_numa_node(attr); + u32 map_type = attr->map_type; struct bpf_map *map; int f_flags; int err; @@ -1157,6 +1128,25 @@ static int map_create(union bpf_attr *attr) !node_online(numa_node))) return -EINVAL;
+ /* find map type and init map: hashtable vs rbtree vs bloom vs ... */ + map_type = attr->map_type; + if (map_type >= ARRAY_SIZE(bpf_map_types)) + return -EINVAL; + map_type = array_index_nospec(map_type, ARRAY_SIZE(bpf_map_types)); + ops = bpf_map_types[map_type]; + if (!ops) + return -EINVAL; + + if (ops->map_alloc_check) { + err = ops->map_alloc_check(attr); + if (err) + return err; + } + if (attr->map_ifindex) + ops = &bpf_map_offload_ops; + if (!ops->map_mem_usage) + return -EINVAL; + /* Intent here is for unprivileged_bpf_disabled to block BPF map * creation for unprivileged users; other actions depend * on fd availability and access to bpffs, so are dependent on @@ -1166,10 +1156,11 @@ static int map_create(union bpf_attr *attr) if (sysctl_unprivileged_bpf_disabled && !bpf_capable()) return -EPERM;
- /* find map type and init map: hashtable vs rbtree vs bloom vs ... */ - map = find_and_alloc_map(attr); + map = ops->map_alloc(attr); if (IS_ERR(map)) return PTR_ERR(map); + map->ops = ops; + map->map_type = map_type;
err = bpf_obj_name_cpy(map->name, attr->map_name, sizeof(attr->map_name));
From: Andrii Nakryiko andrii@kernel.org
[ Upstream commit 6c3eba1c5e283fd2bb1c076dbfcb47f569c3bfde ]
This allows making more centralized decisions later on, and generally makes it very explicit which maps are privileged and which are not (e.g., LRU_HASH and LRU_PERCPU_HASH, which are privileged HASH variants, as opposed to unprivileged HASH and HASH_PERCPU; now this is explicit and easy to verify).
Signed-off-by: Andrii Nakryiko andrii@kernel.org Signed-off-by: Daniel Borkmann daniel@iogearbox.net Acked-by: Stanislav Fomichev sdf@google.com Link: https://lore.kernel.org/bpf/20230613223533.3689589-4-andrii@kernel.org Stable-dep-of: 640a604585aa ("bpf, cpumap: Make sure kthread is running before map update returns") Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/bpf/bloom_filter.c | 3 -- kernel/bpf/bpf_local_storage.c | 3 -- kernel/bpf/bpf_struct_ops.c | 3 -- kernel/bpf/cpumap.c | 4 -- kernel/bpf/devmap.c | 3 -- kernel/bpf/hashtab.c | 6 --- kernel/bpf/lpm_trie.c | 3 -- kernel/bpf/queue_stack_maps.c | 4 -- kernel/bpf/reuseport_array.c | 3 -- kernel/bpf/stackmap.c | 3 -- kernel/bpf/syscall.c | 47 +++++++++++++++++++ net/core/sock_map.c | 4 -- net/xdp/xskmap.c | 4 -- .../bpf/prog_tests/unpriv_bpf_disabled.c | 6 ++- 14 files changed, 52 insertions(+), 44 deletions(-)
diff --git a/kernel/bpf/bloom_filter.c b/kernel/bpf/bloom_filter.c index 540331b610a97..addf3dd57b59b 100644 --- a/kernel/bpf/bloom_filter.c +++ b/kernel/bpf/bloom_filter.c @@ -86,9 +86,6 @@ static struct bpf_map *bloom_map_alloc(union bpf_attr *attr) int numa_node = bpf_map_attr_numa_node(attr); struct bpf_bloom_filter *bloom;
- if (!bpf_capable()) - return ERR_PTR(-EPERM); - if (attr->key_size != 0 || attr->value_size == 0 || attr->max_entries == 0 || attr->map_flags & ~BLOOM_CREATE_FLAG_MASK || diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c index 47d9948d768f0..b5149cfce7d4d 100644 --- a/kernel/bpf/bpf_local_storage.c +++ b/kernel/bpf/bpf_local_storage.c @@ -723,9 +723,6 @@ int bpf_local_storage_map_alloc_check(union bpf_attr *attr) !attr->btf_key_type_id || !attr->btf_value_type_id) return -EINVAL;
- if (!bpf_capable()) - return -EPERM; - if (attr->value_size > BPF_LOCAL_STORAGE_MAX_VALUE_SIZE) return -E2BIG;
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c index d3f0a4825fa61..116a0ce378ecd 100644 --- a/kernel/bpf/bpf_struct_ops.c +++ b/kernel/bpf/bpf_struct_ops.c @@ -655,9 +655,6 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr) const struct btf_type *t, *vt; struct bpf_map *map;
- if (!bpf_capable()) - return ERR_PTR(-EPERM); - st_ops = bpf_struct_ops_find_value(attr->btf_vmlinux_value_type_id); if (!st_ops) return ERR_PTR(-ENOTSUPP); diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c index 3da63be602d1c..6ae02be7a48e3 100644 --- a/kernel/bpf/cpumap.c +++ b/kernel/bpf/cpumap.c @@ -28,7 +28,6 @@ #include <linux/sched.h> #include <linux/workqueue.h> #include <linux/kthread.h> -#include <linux/capability.h> #include <trace/events/xdp.h> #include <linux/btf_ids.h>
@@ -89,9 +88,6 @@ static struct bpf_map *cpu_map_alloc(union bpf_attr *attr) u32 value_size = attr->value_size; struct bpf_cpu_map *cmap;
- if (!bpf_capable()) - return ERR_PTR(-EPERM); - /* check sanity of attributes */ if (attr->max_entries == 0 || attr->key_size != 4 || (value_size != offsetofend(struct bpf_cpumap_val, qsize) && diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c index 802692fa3905c..49cc0b5671c61 100644 --- a/kernel/bpf/devmap.c +++ b/kernel/bpf/devmap.c @@ -160,9 +160,6 @@ static struct bpf_map *dev_map_alloc(union bpf_attr *attr) struct bpf_dtab *dtab; int err;
- if (!capable(CAP_NET_ADMIN)) - return ERR_PTR(-EPERM); - dtab = bpf_map_area_alloc(sizeof(*dtab), NUMA_NO_NODE); if (!dtab) return ERR_PTR(-ENOMEM); diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c index 9901efee4339d..56d3da7d0bc66 100644 --- a/kernel/bpf/hashtab.c +++ b/kernel/bpf/hashtab.c @@ -422,12 +422,6 @@ static int htab_map_alloc_check(union bpf_attr *attr) BUILD_BUG_ON(offsetof(struct htab_elem, fnode.next) != offsetof(struct htab_elem, hash_node.pprev));
- if (lru && !bpf_capable()) - /* LRU implementation is much complicated than other - * maps. Hence, limit to CAP_BPF. - */ - return -EPERM; - if (zero_seed && !capable(CAP_SYS_ADMIN)) /* Guard against local DoS, and discourage production use. */ return -EPERM; diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c index e0d3ddf2037ab..17c7e7782a1f7 100644 --- a/kernel/bpf/lpm_trie.c +++ b/kernel/bpf/lpm_trie.c @@ -544,9 +544,6 @@ static struct bpf_map *trie_alloc(union bpf_attr *attr) { struct lpm_trie *trie;
- if (!bpf_capable()) - return ERR_PTR(-EPERM); - /* check sanity of attributes */ if (attr->max_entries == 0 || !(attr->map_flags & BPF_F_NO_PREALLOC) || diff --git a/kernel/bpf/queue_stack_maps.c b/kernel/bpf/queue_stack_maps.c index 601609164ef34..8d2ddcb7566b7 100644 --- a/kernel/bpf/queue_stack_maps.c +++ b/kernel/bpf/queue_stack_maps.c @@ -7,7 +7,6 @@ #include <linux/bpf.h> #include <linux/list.h> #include <linux/slab.h> -#include <linux/capability.h> #include <linux/btf_ids.h> #include "percpu_freelist.h"
@@ -46,9 +45,6 @@ static bool queue_stack_map_is_full(struct bpf_queue_stack *qs) /* Called from syscall */ static int queue_stack_map_alloc_check(union bpf_attr *attr) { - if (!bpf_capable()) - return -EPERM; - /* check sanity of attributes */ if (attr->max_entries == 0 || attr->key_size != 0 || attr->value_size == 0 || diff --git a/kernel/bpf/reuseport_array.c b/kernel/bpf/reuseport_array.c index cbf2d8d784b89..4b4f9670f1a9a 100644 --- a/kernel/bpf/reuseport_array.c +++ b/kernel/bpf/reuseport_array.c @@ -151,9 +151,6 @@ static struct bpf_map *reuseport_array_alloc(union bpf_attr *attr) int numa_node = bpf_map_attr_numa_node(attr); struct reuseport_array *array;
- if (!bpf_capable()) - return ERR_PTR(-EPERM); - /* allocate all map elements and zero-initialize them */ array = bpf_map_area_alloc(struct_size(array, ptrs, attr->max_entries), numa_node); if (!array) diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c index b25fce425b2c6..458bb80b14d57 100644 --- a/kernel/bpf/stackmap.c +++ b/kernel/bpf/stackmap.c @@ -74,9 +74,6 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr) u64 cost, n_buckets; int err;
- if (!bpf_capable()) - return ERR_PTR(-EPERM); - if (attr->map_flags & ~STACK_CREATE_FLAG_MASK) return ERR_PTR(-EINVAL);
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 8fddf0eea9bf2..f715ec5d541ad 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -1156,6 +1156,53 @@ static int map_create(union bpf_attr *attr) if (sysctl_unprivileged_bpf_disabled && !bpf_capable()) return -EPERM;
+ /* check privileged map type permissions */ + switch (map_type) { + case BPF_MAP_TYPE_ARRAY: + case BPF_MAP_TYPE_PERCPU_ARRAY: + case BPF_MAP_TYPE_PROG_ARRAY: + case BPF_MAP_TYPE_PERF_EVENT_ARRAY: + case BPF_MAP_TYPE_CGROUP_ARRAY: + case BPF_MAP_TYPE_ARRAY_OF_MAPS: + case BPF_MAP_TYPE_HASH: + case BPF_MAP_TYPE_PERCPU_HASH: + case BPF_MAP_TYPE_HASH_OF_MAPS: + case BPF_MAP_TYPE_RINGBUF: + case BPF_MAP_TYPE_USER_RINGBUF: + case BPF_MAP_TYPE_CGROUP_STORAGE: + case BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE: + /* unprivileged */ + break; + case BPF_MAP_TYPE_SK_STORAGE: + case BPF_MAP_TYPE_INODE_STORAGE: + case BPF_MAP_TYPE_TASK_STORAGE: + case BPF_MAP_TYPE_CGRP_STORAGE: + case BPF_MAP_TYPE_BLOOM_FILTER: + case BPF_MAP_TYPE_LPM_TRIE: + case BPF_MAP_TYPE_REUSEPORT_SOCKARRAY: + case BPF_MAP_TYPE_STACK_TRACE: + case BPF_MAP_TYPE_QUEUE: + case BPF_MAP_TYPE_STACK: + case BPF_MAP_TYPE_LRU_HASH: + case BPF_MAP_TYPE_LRU_PERCPU_HASH: + case BPF_MAP_TYPE_STRUCT_OPS: + case BPF_MAP_TYPE_CPUMAP: + if (!bpf_capable()) + return -EPERM; + break; + case BPF_MAP_TYPE_SOCKMAP: + case BPF_MAP_TYPE_SOCKHASH: + case BPF_MAP_TYPE_DEVMAP: + case BPF_MAP_TYPE_DEVMAP_HASH: + case BPF_MAP_TYPE_XSKMAP: + if (!capable(CAP_NET_ADMIN)) + return -EPERM; + break; + default: + WARN(1, "unsupported map type %d", map_type); + return -EPERM; + } + map = ops->map_alloc(attr); if (IS_ERR(map)) return PTR_ERR(map); diff --git a/net/core/sock_map.c b/net/core/sock_map.c index 00afb66cd0950..19538d6287144 100644 --- a/net/core/sock_map.c +++ b/net/core/sock_map.c @@ -32,8 +32,6 @@ static struct bpf_map *sock_map_alloc(union bpf_attr *attr) { struct bpf_stab *stab;
- if (!capable(CAP_NET_ADMIN)) - return ERR_PTR(-EPERM); if (attr->max_entries == 0 || attr->key_size != 4 || (attr->value_size != sizeof(u32) && @@ -1085,8 +1083,6 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr) struct bpf_shtab *htab; int i, err;
- if (!capable(CAP_NET_ADMIN)) - return ERR_PTR(-EPERM); if (attr->max_entries == 0 || attr->key_size == 0 || (attr->value_size != sizeof(u32) && diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c index 2c1427074a3bb..e1c526f97ce31 100644 --- a/net/xdp/xskmap.c +++ b/net/xdp/xskmap.c @@ -5,7 +5,6 @@
#include <linux/bpf.h> #include <linux/filter.h> -#include <linux/capability.h> #include <net/xdp_sock.h> #include <linux/slab.h> #include <linux/sched.h> @@ -68,9 +67,6 @@ static struct bpf_map *xsk_map_alloc(union bpf_attr *attr) int numa_node; u64 size;
- if (!capable(CAP_NET_ADMIN)) - return ERR_PTR(-EPERM); - if (attr->max_entries == 0 || attr->key_size != 4 || attr->value_size != 4 || attr->map_flags & ~(BPF_F_NUMA_NODE | BPF_F_RDONLY | BPF_F_WRONLY)) diff --git a/tools/testing/selftests/bpf/prog_tests/unpriv_bpf_disabled.c b/tools/testing/selftests/bpf/prog_tests/unpriv_bpf_disabled.c index 8383a99f610fd..0adf8d9475cb2 100644 --- a/tools/testing/selftests/bpf/prog_tests/unpriv_bpf_disabled.c +++ b/tools/testing/selftests/bpf/prog_tests/unpriv_bpf_disabled.c @@ -171,7 +171,11 @@ static void test_unpriv_bpf_disabled_negative(struct test_unpriv_bpf_disabled *s prog_insns, prog_insn_cnt, &load_opts), -EPERM, "prog_load_fails");
- for (i = BPF_MAP_TYPE_HASH; i <= BPF_MAP_TYPE_BLOOM_FILTER; i++) + /* some map types require particular correct parameters which could be + * sanity-checked before enforcing -EPERM, so only validate that + * the simple ARRAY and HASH maps are failing with -EPERM + */ + for (i = BPF_MAP_TYPE_HASH; i <= BPF_MAP_TYPE_ARRAY; i++) ASSERT_EQ(bpf_map_create(i, NULL, sizeof(int), sizeof(int), 1, NULL), -EPERM, "map_create_fails");
From: Hou Tao houtao1@huawei.com
[ Upstream commit 640a604585aa30f93e39b17d4d6ba69fcb1e66c9 ]
The following warning was reported when running stress-mode enabled xdp_redirect_cpu with some RT threads:
------------[ cut here ]------------
WARNING: CPU: 4 PID: 65 at kernel/bpf/cpumap.c:135
CPU: 4 PID: 65 Comm: kworker/4:1 Not tainted 6.5.0-rc2+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996)
Workqueue: events cpu_map_kthread_stop
RIP: 0010:put_cpu_map_entry+0xda/0x220
......
Call Trace:
 <TASK>
 ? show_regs+0x65/0x70
 ? __warn+0xa5/0x240
 ......
 ? put_cpu_map_entry+0xda/0x220
 cpu_map_kthread_stop+0x41/0x60
 process_one_work+0x6b0/0xb80
 worker_thread+0x96/0x720
 kthread+0x1a5/0x1f0
 ret_from_fork+0x3a/0x70
 ret_from_fork_asm+0x1b/0x30
 </TASK>
The root cause is the same as in commit 436901649731 ("bpf: cpumap: Fix memory leak in cpu_map_update_elem"). The kthread is stopped prematurely by kthread_stop() in cpu_map_kthread_stop(), and kthread() doesn't call cpu_map_kthread_run() at all, but the XDP program has already queued some frames or skbs into the ptr_ring. So when __cpu_map_ring_cleanup() checks the ptr_ring, it will find it was not emptied and report a warning.
An alternative fix is to use __cpu_map_ring_cleanup() to drop these pending frames or skbs when kthread_stop() returns -EINTR, but it may confuse the user, because these frames or skbs have been handled correctly by the XDP program. So instead of dropping these frames or skbs, just make sure the per-cpu kthread is running before __cpu_map_entry_alloc() returns.
After applying the fix, the error handling for kthread_stop() becomes unnecessary because it will always return 0, so just remove it.
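A sketch of the handshake this relies on, using the standard completion and kthread APIs; the demo_* names are simplified stand-ins, not the cpumap code itself:

#include <linux/completion.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/topology.h>

struct demo_entry {
	struct task_struct *kthread;
	struct completion kthread_running;
};

static int demo_kthread_fn(void *data)
{
	struct demo_entry *e = data;

	/* Tell the creator that the thread body has really started. */
	complete(&e->kthread_running);

	set_current_state(TASK_INTERRUPTIBLE);
	while (!kthread_should_stop()) {
		/* ... drain queued frames/skbs here ... */
		schedule();
		set_current_state(TASK_INTERRUPTIBLE);
	}
	__set_current_state(TASK_RUNNING);
	return 0;
}

static int demo_entry_start(struct demo_entry *e, int cpu)
{
	init_completion(&e->kthread_running);
	e->kthread = kthread_create_on_node(demo_kthread_fn, e, cpu_to_node(cpu),
					    "demo/%d", cpu);
	if (IS_ERR(e->kthread))
		return PTR_ERR(e->kthread);

	kthread_bind(e->kthread, cpu);
	wake_up_process(e->kthread);

	/*
	 * Don't return until the kthread is known to be running, so a later
	 * kthread_stop() can't fire before the thread ever ran and leave
	 * queued frames behind.
	 */
	wait_for_completion(&e->kthread_running);
	return 0;
}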
Fixes: 6710e1126934 ("bpf: introduce new bpf cpu map type BPF_MAP_TYPE_CPUMAP") Signed-off-by: Hou Tao houtao1@huawei.com Reviewed-by: Pu Lehui pulehui@huawei.com Acked-by: Jesper Dangaard Brouer hawk@kernel.org Link: https://lore.kernel.org/r/20230729095107.1722450-2-houtao@huaweicloud.com Signed-off-by: Martin KaFai Lau martin.lau@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/bpf/cpumap.c | 21 +++++++++++---------- 1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c index 6ae02be7a48e3..7eeb200251640 100644 --- a/kernel/bpf/cpumap.c +++ b/kernel/bpf/cpumap.c @@ -28,6 +28,7 @@ #include <linux/sched.h> #include <linux/workqueue.h> #include <linux/kthread.h> +#include <linux/completion.h> #include <trace/events/xdp.h> #include <linux/btf_ids.h>
@@ -73,6 +74,7 @@ struct bpf_cpu_map_entry { struct rcu_head rcu;
struct work_struct kthread_stop_wq; + struct completion kthread_running; };
struct bpf_cpu_map { @@ -153,7 +155,6 @@ static void put_cpu_map_entry(struct bpf_cpu_map_entry *rcpu) static void cpu_map_kthread_stop(struct work_struct *work) { struct bpf_cpu_map_entry *rcpu; - int err;
rcpu = container_of(work, struct bpf_cpu_map_entry, kthread_stop_wq);
@@ -163,14 +164,7 @@ static void cpu_map_kthread_stop(struct work_struct *work) rcu_barrier();
/* kthread_stop will wake_up_process and wait for it to complete */ - err = kthread_stop(rcpu->kthread); - if (err) { - /* kthread_stop may be called before cpu_map_kthread_run - * is executed, so we need to release the memory related - * to rcpu. - */ - put_cpu_map_entry(rcpu); - } + kthread_stop(rcpu->kthread); }
static void cpu_map_bpf_prog_run_skb(struct bpf_cpu_map_entry *rcpu, @@ -298,11 +292,11 @@ static int cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames, return nframes; }
- static int cpu_map_kthread_run(void *data) { struct bpf_cpu_map_entry *rcpu = data;
+ complete(&rcpu->kthread_running); set_current_state(TASK_INTERRUPTIBLE);
/* When kthread gives stop order, then rcpu have been disconnected @@ -467,6 +461,7 @@ __cpu_map_entry_alloc(struct bpf_map *map, struct bpf_cpumap_val *value, goto free_ptr_ring;
/* Setup kthread */ + init_completion(&rcpu->kthread_running); rcpu->kthread = kthread_create_on_node(cpu_map_kthread_run, rcpu, numa, "cpumap/%d/map:%d", cpu, map->id); @@ -480,6 +475,12 @@ __cpu_map_entry_alloc(struct bpf_map *map, struct bpf_cpumap_val *value, kthread_bind(rcpu->kthread, cpu); wake_up_process(rcpu->kthread);
+ /* Make sure kthread has been running, so kthread_stop() will not + * stop the kthread prematurely and all pending frames or skbs + * will be handled by the kthread before kthread_stop() returns. + */ + wait_for_completion(&rcpu->kthread_running); + return rcpu;
free_prog:
From: Hou Tao houtao1@huawei.com
[ Upstream commit 7c62b75cd1a792e14b037fa4f61f9b18914e7de1 ]
The following warning was reported when running xdp_redirect_cpu with both skb-mode and stress-mode enabled:
------------[ cut here ]------------
Incorrect XDP memory type (-2128176192) usage
WARNING: CPU: 7 PID: 1442 at net/core/xdp.c:405
Modules linked in:
CPU: 7 PID: 1442 Comm: kworker/7:0 Tainted: G 6.5.0-rc2+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996)
Workqueue: events __cpu_map_entry_free
RIP: 0010:__xdp_return+0x1e4/0x4a0
......
Call Trace:
 <TASK>
 ? show_regs+0x65/0x70
 ? __warn+0xa5/0x240
 ? __xdp_return+0x1e4/0x4a0
 ......
 xdp_return_frame+0x4d/0x150
 __cpu_map_entry_free+0xf9/0x230
 process_one_work+0x6b0/0xb80
 worker_thread+0x96/0x720
 kthread+0x1a5/0x1f0
 ret_from_fork+0x3a/0x70
 ret_from_fork_asm+0x1b/0x30
 </TASK>
The reason for the warning is twofold. One is that the kthread cpu_map_kthread_run() is stopped prematurely. The other is that __cpu_map_ring_cleanup() doesn't handle skb mode and treats skbs in the ptr_ring as XDP frames.
The prematurely-stopped kthread is fixed by the preceding patch, so the ptr_ring will be empty when __cpu_map_ring_cleanup() is called. But, as the comment in __cpu_map_ring_cleanup() says, handle and free skbs in the ptr_ring as well, to "catch any broken behaviour gracefully".
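A standalone illustration of the low-bit pointer tagging that lets skbs and XDP frames share one ring; the names below are stand-ins (plain userspace C), not the kernel's helpers:

#include <stdint.h>
#include <stdio.h>

#define SKB_TAG 0x1UL	/* bit 0 of a stored pointer marks an skb */

static void *tag_skb(void *skb)
{
	return (void *)((uintptr_t)skb | SKB_TAG);
}

static void cleanup_one(void *ptr)
{
	if ((uintptr_t)ptr & SKB_TAG) {
		ptr = (void *)((uintptr_t)ptr & ~SKB_TAG);
		printf("free as skb:         %p\n", ptr);	/* kfree_skb() in the kernel */
	} else {
		printf("return as XDP frame: %p\n", ptr);	/* xdp_return_frame() */
	}
}

int main(void)
{
	long frame, skb;	/* pointers to these are at least 4-byte aligned */

	cleanup_one(&frame);		/* untagged -> treated as an xdp_frame */
	cleanup_one(tag_skb(&skb));	/* tagged   -> treated as an skb */
	return 0;
}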
Fixes: 11941f8a8536 ("bpf: cpumap: Implement generic cpumap") Signed-off-by: Hou Tao houtao1@huawei.com Acked-by: Jesper Dangaard Brouer hawk@kernel.org Link: https://lore.kernel.org/r/20230729095107.1722450-3-houtao@huaweicloud.com Signed-off-by: Martin KaFai Lau martin.lau@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/bpf/cpumap.c | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c index 7eeb200251640..286ab3db0fde8 100644 --- a/kernel/bpf/cpumap.c +++ b/kernel/bpf/cpumap.c @@ -131,11 +131,17 @@ static void __cpu_map_ring_cleanup(struct ptr_ring *ring) * invoked cpu_map_kthread_stop(). Catch any broken behaviour * gracefully and warn once. */ - struct xdp_frame *xdpf; + void *ptr;
- while ((xdpf = ptr_ring_consume(ring))) - if (WARN_ON_ONCE(xdpf)) - xdp_return_frame(xdpf); + while ((ptr = ptr_ring_consume(ring))) { + WARN_ON_ONCE(1); + if (unlikely(__ptr_test_bit(0, &ptr))) { + __ptr_clear_bit(0, &ptr); + kfree_skb(ptr); + continue; + } + xdp_return_frame(ptr); + } }
static void put_cpu_map_entry(struct bpf_cpu_map_entry *rcpu)
From: valis sec@valis.email
[ Upstream commit 3044b16e7c6fe5d24b1cdbcf1bd0a9d92d1ebd81 ]
When u32_change() is called on an existing filter, the whole tcf_result struct is always copied into the new instance of the filter.
This causes a problem when updating a filter bound to a class, as tcf_unbind_filter() is always called on the old instance in the success path, decreasing filter_cnt of the still referenced class and allowing it to be deleted, leading to a use-after-free.
Fix this by no longer copying the tcf_result struct from the old filter.
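A toy model of the imbalance, with filter_cnt standing in for the class reference that tcf_bind_filter()/tcf_unbind_filter() manage (illustrative only, not the qdisc code):

#include <stddef.h>

struct demo_class {
	int filter_cnt;		/* how many filters are bound to the class */
};

struct demo_filter {
	struct demo_class *cl;	/* the class referenced via tcf_result */
};

/* Buggy update: new copies old's result, but only old is unbound. */
static void demo_update_buggy(struct demo_filter *old, struct demo_filter *new)
{
	new->cl = old->cl;	/* new now points at the class ...           */
	old->cl->filter_cnt--;	/* ... but the only bound count is dropped,
				 * so the class can be deleted while new
				 * still references it (use-after-free).
				 */
}

/* Fixed update: don't copy the result; new binds a class itself if needed. */
static void demo_update_fixed(struct demo_filter *old, struct demo_filter *new)
{
	(void)old;
	new->cl = NULL;
}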
Fixes: de5df63228fc ("net: sched: cls_u32 changes to knode must appear atomic to readers") Reported-by: valis sec@valis.email Reported-by: M A Ramdhan ramdhan@starlabs.sg Signed-off-by: valis sec@valis.email Signed-off-by: Jamal Hadi Salim jhs@mojatatu.com Reviewed-by: Victor Nogueira victor@mojatatu.com Reviewed-by: Pedro Tammela pctammela@mojatatu.com Reviewed-by: M A Ramdhan ramdhan@starlabs.sg Link: https://lore.kernel.org/r/20230729123202.72406-2-jhs@mojatatu.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/sched/cls_u32.c | 1 - 1 file changed, 1 deletion(-)
diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c index 907e58841fe80..da4c179a4d418 100644 --- a/net/sched/cls_u32.c +++ b/net/sched/cls_u32.c @@ -826,7 +826,6 @@ static struct tc_u_knode *u32_init_knode(struct net *net, struct tcf_proto *tp,
new->ifindex = n->ifindex; new->fshift = n->fshift; - new->res = n->res; new->flags = n->flags; RCU_INIT_POINTER(new->ht_down, ht);
From: valis sec@valis.email
[ Upstream commit 76e42ae831991c828cffa8c37736ebfb831ad5ec ]
When fw_change() is called on an existing filter, the whole tcf_result struct is always copied into the new instance of the filter.
This causes a problem when updating a filter bound to a class, as tcf_unbind_filter() is always called on the old instance in the success path, decreasing filter_cnt of the still referenced class and allowing it to be deleted, leading to a use-after-free.
Fix this by no longer copying the tcf_result struct from the old filter.
Fixes: e35a8ee5993b ("net: sched: fw use RCU") Reported-by: valis sec@valis.email Reported-by: Bing-Jhong Billy Jheng billy@starlabs.sg Signed-off-by: valis sec@valis.email Signed-off-by: Jamal Hadi Salim jhs@mojatatu.com Reviewed-by: Victor Nogueira victor@mojatatu.com Reviewed-by: Pedro Tammela pctammela@mojatatu.com Reviewed-by: M A Ramdhan ramdhan@starlabs.sg Link: https://lore.kernel.org/r/20230729123202.72406-3-jhs@mojatatu.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/sched/cls_fw.c | 1 - 1 file changed, 1 deletion(-)
diff --git a/net/sched/cls_fw.c b/net/sched/cls_fw.c index 8641f80593179..c49d6af0e0480 100644 --- a/net/sched/cls_fw.c +++ b/net/sched/cls_fw.c @@ -267,7 +267,6 @@ static int fw_change(struct net *net, struct sk_buff *in_skb, return -ENOBUFS;
fnew->id = f->id; - fnew->res = f->res; fnew->ifindex = f->ifindex; fnew->tp = f->tp;
From: valis sec@valis.email
[ Upstream commit b80b829e9e2c1b3f7aae34855e04d8f6ecaf13c8 ]
When route4_change() is called on an existing filter, the whole tcf_result struct is always copied into the new instance of the filter.
This causes a problem when updating a filter bound to a class, as tcf_unbind_filter() is always called on the old instance in the success path, decreasing filter_cnt of the still referenced class and allowing it to be deleted, leading to a use-after-free.
Fix this by no longer copying the tcf_result struct from the old filter.
Fixes: 1109c00547fc ("net: sched: RCU cls_route") Reported-by: valis sec@valis.email Reported-by: Bing-Jhong Billy Jheng billy@starlabs.sg Signed-off-by: valis sec@valis.email Signed-off-by: Jamal Hadi Salim jhs@mojatatu.com Reviewed-by: Victor Nogueira victor@mojatatu.com Reviewed-by: Pedro Tammela pctammela@mojatatu.com Reviewed-by: M A Ramdhan ramdhan@starlabs.sg Link: https://lore.kernel.org/r/20230729123202.72406-4-jhs@mojatatu.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/sched/cls_route.c | 1 - 1 file changed, 1 deletion(-)
diff --git a/net/sched/cls_route.c b/net/sched/cls_route.c index d0c53724d3e86..1e20bbd687f1d 100644 --- a/net/sched/cls_route.c +++ b/net/sched/cls_route.c @@ -513,7 +513,6 @@ static int route4_change(struct net *net, struct sk_buff *in_skb, if (fold) { f->id = fold->id; f->iif = fold->iif; - f->res = fold->res; f->handle = fold->handle;
f->tp = fold->tp;
From: Tomas Glozar tglozar@redhat.com
[ Upstream commit 13d2618b48f15966d1adfe1ff6a1985f5eef40ba ]
Disabling preemption in sock_map_sk_acquire conflicts with GFP_ATOMIC allocation later in sk_psock_init_link on PREEMPT_RT kernels, since GFP_ATOMIC might sleep on RT (see bpf: Make BPF and PREEMPT_RT co-exist patchset notes for details).
This causes calling bpf_map_update_elem on BPF_MAP_TYPE_SOCKMAP maps to BUG (sleeping function called from invalid context) on RT kernels.
preempt_disable was introduced together with lock_sk and rcu_read_lock in commit 99ba2b5aba24e ("bpf: sockhash, disallow bpf_tcp_close and update in parallel"), probably to match disabled migration of BPF programs, and is no longer necessary.
Remove preempt_disable to fix BUG in sock_map_update_common on RT.
Signed-off-by: Tomas Glozar tglozar@redhat.com Reviewed-by: Jakub Sitnicki jakub@cloudflare.com Link: https://lore.kernel.org/all/20200224140131.461979697@linutronix.de/ Fixes: 99ba2b5aba24 ("bpf: sockhash, disallow bpf_tcp_close and update in parallel") Reviewed-by: John Fastabend john.fastabend@gmail.com Link: https://lore.kernel.org/r/20230728064411.305576-1-tglozar@redhat.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/sock_map.c | 2 -- 1 file changed, 2 deletions(-)
diff --git a/net/core/sock_map.c b/net/core/sock_map.c index 19538d6287144..08ab108206bf8 100644 --- a/net/core/sock_map.c +++ b/net/core/sock_map.c @@ -115,7 +115,6 @@ static void sock_map_sk_acquire(struct sock *sk) __acquires(&sk->sk_lock.slock) { lock_sock(sk); - preempt_disable(); rcu_read_lock(); }
@@ -123,7 +122,6 @@ static void sock_map_sk_release(struct sock *sk) __releases(&sk->sk_lock.slock) { rcu_read_unlock(); - preempt_enable(); release_sock(sk); }
From: Dan Carpenter dan.carpenter@linaro.org
[ Upstream commit ef45e8400f5bb66b03cc949f76c80e2a118447de ]
Most kernel functions return negative error codes, but some IRQ functions return zero on error. In this code, irq_of_parse_and_map() returns zero on error and platform_get_irq() returns negative error codes. We need to handle both cases appropriately.
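A sketch of the resulting pattern, with demo_check_irq() as an illustrative helper rather than the driver's code:

#include <linux/device.h>
#include <linux/errno.h>

static int demo_check_irq(struct device *dev, int irq, const char *what)
{
	/*
	 * irq_of_parse_and_map() returns 0 on failure, platform_get_irq()
	 * returns a negative errno; treat anything <= 0 as an error and map
	 * 0 to -EINVAL so dev_err_probe() is given a real error code.
	 */
	if (irq <= 0)
		return dev_err_probe(dev, irq ?: -EINVAL,
				     "could not get %s irq\n", what);
	return 0;
}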
Fixes: 8425c41d1ef7 ("net: ll_temac: Extend support to non-device-tree platforms") Signed-off-by: Dan Carpenter dan.carpenter@linaro.org Acked-by: Esben Haabendal esben@geanix.com Reviewed-by: Yang Yingliang yangyingliang@huawei.com Reviewed-by: Harini Katakam harini.katakam@amd.com Link: https://lore.kernel.org/r/3d0aef75-06e0-45a5-a2a6-2cc4738d4143@moroto.mounta... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/xilinx/ll_temac_main.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c index e0ac1bcd9925c..49f303353ecb0 100644 --- a/drivers/net/ethernet/xilinx/ll_temac_main.c +++ b/drivers/net/ethernet/xilinx/ll_temac_main.c @@ -1567,12 +1567,16 @@ static int temac_probe(struct platform_device *pdev) }
/* Error handle returned DMA RX and TX interrupts */ - if (lp->rx_irq < 0) - return dev_err_probe(&pdev->dev, lp->rx_irq, + if (lp->rx_irq <= 0) { + rc = lp->rx_irq ?: -EINVAL; + return dev_err_probe(&pdev->dev, rc, "could not get DMA RX irq\n"); - if (lp->tx_irq < 0) - return dev_err_probe(&pdev->dev, lp->tx_irq, + } + if (lp->tx_irq <= 0) { + rc = lp->tx_irq ?: -EINVAL; + return dev_err_probe(&pdev->dev, rc, "could not get DMA TX irq\n"); + }
if (temac_np) { /* Retrieve the MAC address */
From: Yuanjun Gong ruc_gongyuanjun@163.com
[ Upstream commit 0b6291ad1940c403734312d0e453e8dac9148f69 ]
In korina_probe(), the return value of clk_prepare_enable() should be checked since it might fail. We can use devm_clk_get_optional_enabled() instead of devm_clk_get_optional() and clk_prepare_enable() to automatically handle the error.
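A sketch of the devm pattern, with the korina-specific details stripped out and illustrative names:

#include <linux/clk.h>
#include <linux/err.h>
#include <linux/platform_device.h>

static int demo_get_mdio_clk(struct platform_device *pdev, unsigned long *rate)
{
	struct clk *clk;

	/*
	 * Gets the optional clock and prepare+enables it in one call; the
	 * devm core disables and unprepares it automatically on driver
	 * detach or probe failure, so no manual error handling is needed.
	 */
	clk = devm_clk_get_optional_enabled(&pdev->dev, "mdioclk");
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	*rate = clk ? clk_get_rate(clk) : 200000000; /* max possible input clk */
	return 0;
}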
Fixes: e4cd854ec487 ("net: korina: Get mdio input clock via common clock framework") Signed-off-by: Yuanjun Gong ruc_gongyuanjun@163.com Link: https://lore.kernel.org/r/20230731090535.21416-1-ruc_gongyuanjun@163.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/korina.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/korina.c b/drivers/net/ethernet/korina.c index 2b9335cb4bb3a..8537578e1cf1d 100644 --- a/drivers/net/ethernet/korina.c +++ b/drivers/net/ethernet/korina.c @@ -1302,11 +1302,10 @@ static int korina_probe(struct platform_device *pdev) else if (of_get_ethdev_address(pdev->dev.of_node, dev) < 0) eth_hw_addr_random(dev);
- clk = devm_clk_get_optional(&pdev->dev, "mdioclk"); + clk = devm_clk_get_optional_enabled(&pdev->dev, "mdioclk"); if (IS_ERR(clk)) return PTR_ERR(clk); if (clk) { - clk_prepare_enable(clk); lp->mii_clock_freq = clk_get_rate(clk); } else { lp->mii_clock_freq = 200000000; /* max possible input clk */
From: Mark Brown broonie@kernel.org
[ Upstream commit f3bb7759a924713bc54d15f6d0d70733b5935fad ]
As documented in acd7aaf51b20 ("netsec: ignore 'phy-mode' device property on ACPI systems") the SocioNext SynQuacer platform ships with firmware defining the PHY mode as RGMII even though the physical configuration of the PHY is for TX and RX delays. Since bbc4d71d63549bc ("net: phy: realtek: fix rtl8211e rx/tx delay config") this has caused misconfiguration of the PHY, rendering the network unusable.
This was worked around for ACPI by ignoring the phy-mode property, but the system is also used with DT. For DT, instead force a working PHY mode if we're running on a SynQuacer. As well as the standard EDK2 firmware with DT, there are also some of these systems that use u-boot and might not initialise the PHY if not netbooting. Newer firmware images for at least EDK2 are available from Linaro, so print a warning when doing this.
Fixes: 533dd11a12f6 ("net: socionext: Add Synquacer NetSec driver") Signed-off-by: Mark Brown broonie@kernel.org Acked-by: Ard Biesheuvel ardb@kernel.org Acked-by: Ilias Apalodimas ilias.apalodimas@linaro.org Reviewed-by: Andrew Lunn andrew@lunn.ch Link: https://lore.kernel.org/r/20230731-synquacer-net-v3-1-944be5f06428@kernel.or... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/socionext/netsec.c | 11 +++++++++++ 1 file changed, 11 insertions(+)
diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c index 2d7347b71c41b..0dcd6a568b061 100644 --- a/drivers/net/ethernet/socionext/netsec.c +++ b/drivers/net/ethernet/socionext/netsec.c @@ -1851,6 +1851,17 @@ static int netsec_of_probe(struct platform_device *pdev, return err; }
+ /* + * SynQuacer is physically configured with TX and RX delays + * but the standard firmware claimed otherwise for a long + * time, ignore it. + */ + if (of_machine_is_compatible("socionext,developer-box") && + priv->phy_interface != PHY_INTERFACE_MODE_RGMII_ID) { + dev_warn(&pdev->dev, "Outdated firmware reports incorrect PHY mode, overriding\n"); + priv->phy_interface = PHY_INTERFACE_MODE_RGMII_ID; + } + priv->phy_np = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0); if (!priv->phy_np) { dev_err(&pdev->dev, "missing required property 'phy-handle'\n");
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 3ff1617450eceb290ac17120fc172815e09a93cf ]
Dan Carpenter reported an error spotted by Smatch.
./tools/testing/selftests/net/so_incoming_cpu.c:163 create_clients() error: uninitialized symbol 'ret'.
The returned value of sched_setaffinity() should be checked with ASSERT_EQ(), but the value was not saved in a proper variable, resulting in an error above.
Let's save the return value of sched_setaffinity().
Fixes: 6df96146b202 ("selftest: Add test for SO_INCOMING_CPU.") Reported-by: Dan Carpenter dan.carpenter@linaro.org Closes: https://lore.kernel.org/linux-kselftest/fe376760-33b6-4fc9-88e8-178e809af1ac... Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Link: https://lore.kernel.org/r/20230731181553.5392-1-kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/selftests/net/so_incoming_cpu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/net/so_incoming_cpu.c b/tools/testing/selftests/net/so_incoming_cpu.c index 0e04f9fef9867..a148181641026 100644 --- a/tools/testing/selftests/net/so_incoming_cpu.c +++ b/tools/testing/selftests/net/so_incoming_cpu.c @@ -159,7 +159,7 @@ void create_clients(struct __test_metadata *_metadata, /* Make sure SYN will be processed on the i-th CPU * and finally distributed to the i-th listener. */ - sched_setaffinity(0, sizeof(cpu_set), &cpu_set); + ret = sched_setaffinity(0, sizeof(cpu_set), &cpu_set); ASSERT_EQ(ret, 0);
for (j = 0; j < CLIENT_PER_SERVER; j++) {
From: Somnath Kotur somnath.kotur@broadcom.com
[ Upstream commit f6974b4c2d8e1062b5a52228ee47293c15b4ee1e ]
The RXBD length field on all bnxt chips is 16-bit and so we cannot support a full page when the native page size is 64K or greater. The non-XDP (non-page-pool) code path has logic to handle this, but the XDP page pool code path does not. Add the missing logic to use page_pool_dev_alloc_frag() to allocate 32K chunks if the page size is 64K or greater.
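A sketch of the allocation split, with DEMO_RX_PAGE_SIZE and demo_alloc_rx_page() as illustrative stand-ins for the driver's BNXT_RX_PAGE_SIZE and helper; the page pool is assumed to have been created with PP_FLAG_PAGE_FRAG on 64K-page systems:

#include <linux/mm.h>
#include <linux/sizes.h>
#include <net/page_pool.h>

#define DEMO_RX_PAGE_SIZE	SZ_32K	/* largest length a 16-bit RXBD can describe */

static struct page *demo_alloc_rx_page(struct page_pool *pool,
				       unsigned int *offset)
{
	struct page *page;

	if (PAGE_SIZE > DEMO_RX_PAGE_SIZE) {
		/* carve a 32K fragment out of the (64K or larger) page */
		page = page_pool_dev_alloc_frag(pool, offset, DEMO_RX_PAGE_SIZE);
	} else {
		page = page_pool_dev_alloc_pages(pool);
		*offset = 0;
	}
	return page;
}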
Fixes: 9f4b28301ce6 ("bnxt: XDP multibuffer enablement") Link: https://lore.kernel.org/netdev/20230728231829.235716-2-michael.chan@broadcom... Reviewed-by: Andy Gospodarek andrew.gospodarek@broadcom.com Signed-off-by: Somnath Kotur somnath.kotur@broadcom.com Signed-off-by: Michael Chan michael.chan@broadcom.com Link: https://lore.kernel.org/r/20230731142043.58855-2-michael.chan@broadcom.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 42 ++++++++++++------- drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c | 6 +-- 2 files changed, 29 insertions(+), 19 deletions(-)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 0b314bf4fbe65..81ed3744fa330 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -699,17 +699,24 @@ static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping, struct bnxt_rx_ring_info *rxr, + unsigned int *offset, gfp_t gfp) { struct device *dev = &bp->pdev->dev; struct page *page;
- page = page_pool_dev_alloc_pages(rxr->page_pool); + if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) { + page = page_pool_dev_alloc_frag(rxr->page_pool, offset, + BNXT_RX_PAGE_SIZE); + } else { + page = page_pool_dev_alloc_pages(rxr->page_pool); + *offset = 0; + } if (!page) return NULL;
- *mapping = dma_map_page_attrs(dev, page, 0, PAGE_SIZE, bp->rx_dir, - DMA_ATTR_WEAK_ORDERING); + *mapping = dma_map_page_attrs(dev, page, *offset, BNXT_RX_PAGE_SIZE, + bp->rx_dir, DMA_ATTR_WEAK_ORDERING); if (dma_mapping_error(dev, *mapping)) { page_pool_recycle_direct(rxr->page_pool, page); return NULL; @@ -749,15 +756,16 @@ int bnxt_alloc_rx_data(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, dma_addr_t mapping;
if (BNXT_RX_PAGE_MODE(bp)) { + unsigned int offset; struct page *page = - __bnxt_alloc_rx_page(bp, &mapping, rxr, gfp); + __bnxt_alloc_rx_page(bp, &mapping, rxr, &offset, gfp);
if (!page) return -ENOMEM;
mapping += bp->rx_dma_offset; rx_buf->data = page; - rx_buf->data_ptr = page_address(page) + bp->rx_offset; + rx_buf->data_ptr = page_address(page) + offset + bp->rx_offset; } else { u8 *data = __bnxt_alloc_rx_frag(bp, &mapping, gfp);
@@ -817,7 +825,7 @@ static inline int bnxt_alloc_rx_page(struct bnxt *bp, unsigned int offset = 0;
if (BNXT_RX_PAGE_MODE(bp)) { - page = __bnxt_alloc_rx_page(bp, &mapping, rxr, gfp); + page = __bnxt_alloc_rx_page(bp, &mapping, rxr, &offset, gfp);
if (!page) return -ENOMEM; @@ -964,15 +972,15 @@ static struct sk_buff *bnxt_rx_multi_page_skb(struct bnxt *bp, return NULL; } dma_addr -= bp->rx_dma_offset; - dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, PAGE_SIZE, bp->rx_dir, - DMA_ATTR_WEAK_ORDERING); - skb = build_skb(page_address(page), PAGE_SIZE); + dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE, + bp->rx_dir, DMA_ATTR_WEAK_ORDERING); + skb = build_skb(data_ptr - bp->rx_offset, BNXT_RX_PAGE_SIZE); if (!skb) { page_pool_recycle_direct(rxr->page_pool, page); return NULL; } skb_mark_for_recycle(skb); - skb_reserve(skb, bp->rx_dma_offset); + skb_reserve(skb, bp->rx_offset); __skb_put(skb, len);
return skb; @@ -998,8 +1006,8 @@ static struct sk_buff *bnxt_rx_page_skb(struct bnxt *bp, return NULL; } dma_addr -= bp->rx_dma_offset; - dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, PAGE_SIZE, bp->rx_dir, - DMA_ATTR_WEAK_ORDERING); + dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE, + bp->rx_dir, DMA_ATTR_WEAK_ORDERING);
if (unlikely(!payload)) payload = eth_get_headlen(bp->dev, data_ptr, len); @@ -1012,7 +1020,7 @@ static struct sk_buff *bnxt_rx_page_skb(struct bnxt *bp,
skb_mark_for_recycle(skb); off = (void *)data_ptr - page_address(page); - skb_add_rx_frag(skb, 0, page, off, len, PAGE_SIZE); + skb_add_rx_frag(skb, 0, page, off, len, BNXT_RX_PAGE_SIZE); memcpy(skb->data - NET_IP_ALIGN, data_ptr - NET_IP_ALIGN, payload + NET_IP_ALIGN);
@@ -1147,7 +1155,7 @@ static struct sk_buff *bnxt_rx_agg_pages_skb(struct bnxt *bp,
skb->data_len += total_frag_len; skb->len += total_frag_len; - skb->truesize += PAGE_SIZE * agg_bufs; + skb->truesize += BNXT_RX_PAGE_SIZE * agg_bufs; return skb; }
@@ -2949,8 +2957,8 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr) rx_buf->data = NULL; if (BNXT_RX_PAGE_MODE(bp)) { mapping -= bp->rx_dma_offset; - dma_unmap_page_attrs(&pdev->dev, mapping, PAGE_SIZE, - bp->rx_dir, + dma_unmap_page_attrs(&pdev->dev, mapping, + BNXT_RX_PAGE_SIZE, bp->rx_dir, DMA_ATTR_WEAK_ORDERING); page_pool_recycle_direct(rxr->page_pool, data); } else { @@ -3219,6 +3227,8 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp, pp.napi = &rxr->bnapi->napi; pp.dev = &bp->pdev->dev; pp.dma_dir = DMA_BIDIRECTIONAL; + if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) + pp.flags |= PP_FLAG_PAGE_FRAG;
rxr->page_pool = page_pool_create(&pp); if (IS_ERR(rxr->page_pool)) { diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c index 7f2f9a317d473..fb43232310b2d 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c @@ -186,8 +186,8 @@ void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, u8 *data_ptr, unsigned int len, struct xdp_buff *xdp) { + u32 buflen = BNXT_RX_PAGE_SIZE; struct bnxt_sw_rx_bd *rx_buf; - u32 buflen = PAGE_SIZE; struct pci_dev *pdev; dma_addr_t mapping; u32 offset; @@ -303,7 +303,7 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, rx_buf = &rxr->rx_buf_ring[cons]; mapping = rx_buf->mapping - bp->rx_dma_offset; dma_unmap_page_attrs(&pdev->dev, mapping, - PAGE_SIZE, bp->rx_dir, + BNXT_RX_PAGE_SIZE, bp->rx_dir, DMA_ATTR_WEAK_ORDERING);
/* if we are unable to allocate a new buffer, abort and reuse */ @@ -486,7 +486,7 @@ bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb, u8 num_frags, } xdp_update_skb_shared_info(skb, num_frags, sinfo->xdp_frags_size, - PAGE_SIZE * sinfo->nr_frags, + BNXT_RX_PAGE_SIZE * sinfo->nr_frags, xdp_buff_is_frag_pfmemalloc(xdp)); return skb; }
From: Michael Chan michael.chan@broadcom.com
[ Upstream commit 08450ea98ae98d5a35145b675b76db616046ea11 ]
The existing code does not allow the MTU to be set to the maximum even after an XDP program supporting multiple buffers is attached. Fix it to set netdev->max_mtu to the maximum value if the attached XDP program supports multiple buffers, regardless of the current MTU value.
Also use a local variable dev instead of repeatedly using bp->dev.
Fixes: 1dc4c557bfed ("bnxt: adding bnxt_xdp_build_skb to build skb from multibuffer xdp_buff") Reviewed-by: Somnath Kotur somnath.kotur@broadcom.com Reviewed-by: Ajit Khaparde ajit.khaparde@broadcom.com Reviewed-by: Andy Gospodarek andrew.gospodarek@broadcom.com Signed-off-by: Michael Chan michael.chan@broadcom.com Link: https://lore.kernel.org/r/20230731142043.58855-3-michael.chan@broadcom.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 17 ++++++++++------- 1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 81ed3744fa330..e481960cb6c7a 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -4005,26 +4005,29 @@ void bnxt_set_ring_params(struct bnxt *bp) */ int bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode) { + struct net_device *dev = bp->dev; + if (page_mode) { bp->flags &= ~BNXT_FLAG_AGG_RINGS; bp->flags |= BNXT_FLAG_RX_PAGE_MODE;
- if (bp->dev->mtu > BNXT_MAX_PAGE_MODE_MTU) { + if (bp->xdp_prog->aux->xdp_has_frags) + dev->max_mtu = min_t(u16, bp->max_mtu, BNXT_MAX_MTU); + else + dev->max_mtu = + min_t(u16, bp->max_mtu, BNXT_MAX_PAGE_MODE_MTU); + if (dev->mtu > BNXT_MAX_PAGE_MODE_MTU) { bp->flags |= BNXT_FLAG_JUMBO; bp->rx_skb_func = bnxt_rx_multi_page_skb; - bp->dev->max_mtu = - min_t(u16, bp->max_mtu, BNXT_MAX_MTU); } else { bp->flags |= BNXT_FLAG_NO_AGG_RINGS; bp->rx_skb_func = bnxt_rx_page_skb; - bp->dev->max_mtu = - min_t(u16, bp->max_mtu, BNXT_MAX_PAGE_MODE_MTU); } bp->rx_dir = DMA_BIDIRECTIONAL; /* Disable LRO or GRO_HW */ - netdev_update_features(bp->dev); + netdev_update_features(dev); } else { - bp->dev->max_mtu = bp->max_mtu; + dev->max_mtu = bp->max_mtu; bp->flags &= ~BNXT_FLAG_RX_PAGE_MODE; bp->rx_dir = DMA_FROM_DEVICE; bp->rx_skb_func = bnxt_rx_skb;
From: Lin Ma linma@zju.edu.cn
[ Upstream commit 31d49ba033095f6e8158c60f69714a500922e0c3 ]
dcbnl_bcn_setcfg() uses an erroneous policy to parse tb[DCB_ATTR_BCN], which was introduced in commit 859ee3c43812 ("DCB: Add support for DCB BCN"). Please see the comments in the code below:
static int dcbnl_bcn_setcfg(...)
{
	...
	ret = nla_parse_nested_deprecated(..., dcbnl_pfc_up_nest, ..)
	// !!! dcbnl_pfc_up_nest for attributes
	// DCB_PFC_UP_ATTR_0 to DCB_PFC_UP_ATTR_ALL in enum dcbnl_pfc_up_attrs
	...
	for (i = DCB_BCN_ATTR_RP_0; i <= DCB_BCN_ATTR_RP_7; i++) {
		// !!! DCB_BCN_ATTR_RP_0 to DCB_BCN_ATTR_RP_7 in enum dcbnl_bcn_attrs
		...
		value_byte = nla_get_u8(data[i]);
		...
	}
	...
	for (i = DCB_BCN_ATTR_BCNA_0; i <= DCB_BCN_ATTR_RI; i++) {
		// !!! DCB_BCN_ATTR_BCNA_0 to DCB_BCN_ATTR_RI in enum dcbnl_bcn_attrs
		...
		value_int = nla_get_u32(data[i]);
		...
	}
	...
}
That is, nla_parse_nested_deprecated() uses the dcbnl_pfc_up_nest policy to parse nlattrs defined in dcbnl_pfc_up_attrs. But the following access code fetches each nlattr as a dcbnl_bcn_attrs attribute. By looking up the associated nla_policy for dcbnl_bcn_attrs, we can see that the beginning parts of these two policies are the "same".
static const struct nla_policy dcbnl_pfc_up_nest[...] = {
	[DCB_PFC_UP_ATTR_0]   = {.type = NLA_U8},
	[DCB_PFC_UP_ATTR_1]   = {.type = NLA_U8},
	[DCB_PFC_UP_ATTR_2]   = {.type = NLA_U8},
	[DCB_PFC_UP_ATTR_3]   = {.type = NLA_U8},
	[DCB_PFC_UP_ATTR_4]   = {.type = NLA_U8},
	[DCB_PFC_UP_ATTR_5]   = {.type = NLA_U8},
	[DCB_PFC_UP_ATTR_6]   = {.type = NLA_U8},
	[DCB_PFC_UP_ATTR_7]   = {.type = NLA_U8},
	[DCB_PFC_UP_ATTR_ALL] = {.type = NLA_FLAG},
};

static const struct nla_policy dcbnl_bcn_nest[...] = {
	[DCB_BCN_ATTR_RP_0]   = {.type = NLA_U8},
	[DCB_BCN_ATTR_RP_1]   = {.type = NLA_U8},
	[DCB_BCN_ATTR_RP_2]   = {.type = NLA_U8},
	[DCB_BCN_ATTR_RP_3]   = {.type = NLA_U8},
	[DCB_BCN_ATTR_RP_4]   = {.type = NLA_U8},
	[DCB_BCN_ATTR_RP_5]   = {.type = NLA_U8},
	[DCB_BCN_ATTR_RP_6]   = {.type = NLA_U8},
	[DCB_BCN_ATTR_RP_7]   = {.type = NLA_U8},
	[DCB_BCN_ATTR_RP_ALL] = {.type = NLA_FLAG},
	// from here is somewhat different
	[DCB_BCN_ATTR_BCNA_0] = {.type = NLA_U32},
	...
	[DCB_BCN_ATTR_ALL]    = {.type = NLA_FLAG},
};
Therefore, the current code is buggy: this nla_parse_nested_deprecated() call can read past the end of dcbnl_pfc_up_nest and use the adjacent nla_policy to parse attributes starting from DCB_BCN_ATTR_BCNA_0.
Hence use the correct policy dcbnl_bcn_nest to parse the nested tb[DCB_ATTR_BCN] TLV.
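As an illustrative sketch of the rule, not the dcbnl code: the maxtype, the tb[] array, and the policy must all be indexed by the same attribute enum (the policy entries here are elided):

#include <linux/dcbnl.h>
#include <net/netlink.h>

static int demo_parse_bcn(struct nlattr *tb[], const struct nlattr *nest,
			  struct netlink_ext_ack *extack)
{
	static const struct nla_policy demo_bcn_policy[DCB_BCN_ATTR_MAX + 1] = {
		[DCB_BCN_ATTR_RP_0] = { .type = NLA_U8 },
		/* ... one entry per DCB_BCN_ATTR_* ... */
	};

	/* maxtype, tb[] and the policy all follow enum dcbnl_bcn_attrs */
	return nla_parse_nested_deprecated(tb, DCB_BCN_ATTR_MAX, nest,
					   demo_bcn_policy, extack);
}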
Fixes: 859ee3c43812 ("DCB: Add support for DCB BCN") Signed-off-by: Lin Ma linma@zju.edu.cn Reviewed-by: Simon Horman horms@kernel.org Link: https://lore.kernel.org/r/20230801013248.87240-1-linma@zju.edu.cn Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/dcb/dcbnl.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/dcb/dcbnl.c b/net/dcb/dcbnl.c index c0c4381285759..2e6b8c8fd2ded 100644 --- a/net/dcb/dcbnl.c +++ b/net/dcb/dcbnl.c @@ -980,7 +980,7 @@ static int dcbnl_bcn_setcfg(struct net_device *netdev, struct nlmsghdr *nlh, return -EOPNOTSUPP;
ret = nla_parse_nested_deprecated(data, DCB_BCN_ATTR_MAX, - tb[DCB_ATTR_BCN], dcbnl_pfc_up_nest, + tb[DCB_ATTR_BCN], dcbnl_bcn_nest, NULL); if (ret) return ret;
From: Alexandra Winter wintera@linux.ibm.com
[ Upstream commit 1cfef80d4c2b2c599189f36f36320b205d9447d9 ]
dev_close() and dev_open() are issued to change the interface state to DOWN or UP (dev->flags IFF_UP). When the netdev is set DOWN it loses e.g. its IPv6 addresses and routes. We don't want this in cases of device recovery (triggered by hardware or software) or when the qeth device is set offline.
Setting a qeth device offline or online and device recovery actions call netif_device_detach() and/or netif_device_attach(). That will reset or set the LOWER_UP indication, i.e. change the dev->state bit __LINK_STATE_PRESENT. That is enough to e.g. cause bond failovers, and it still preserves the interface settings that are handled by the network stack.
Don't call dev_open() nor dev_close() from the qeth device driver. Let the network stack handle this.
Fixes: d4560150cb47 ("s390/qeth: call dev_close() during recovery") Signed-off-by: Alexandra Winter wintera@linux.ibm.com Reviewed-by: Wenjia Zhang wenjia@linux.ibm.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/s390/net/qeth_core.h | 1 - drivers/s390/net/qeth_core_main.c | 2 -- drivers/s390/net/qeth_l2_main.c | 9 ++++++--- drivers/s390/net/qeth_l3_main.c | 8 +++++--- 4 files changed, 11 insertions(+), 9 deletions(-)
diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h index 1d195429753dd..613eab7297046 100644 --- a/drivers/s390/net/qeth_core.h +++ b/drivers/s390/net/qeth_core.h @@ -716,7 +716,6 @@ struct qeth_card_info { u16 chid; u8 ids_valid:1; /* cssid,iid,chid */ u8 dev_addr_is_registered:1; - u8 open_when_online:1; u8 promisc_mode:1; u8 use_v1_blkt:1; u8 is_vm_nic:1; diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c index 1d5b207c2b9e9..cd783290bde5e 100644 --- a/drivers/s390/net/qeth_core_main.c +++ b/drivers/s390/net/qeth_core_main.c @@ -5373,8 +5373,6 @@ int qeth_set_offline(struct qeth_card *card, const struct qeth_discipline *disc, qeth_clear_ipacmd_list(card);
rtnl_lock(); - card->info.open_when_online = card->dev->flags & IFF_UP; - dev_close(card->dev); netif_device_detach(card->dev); netif_carrier_off(card->dev); rtnl_unlock(); diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c index 9f13ed170a437..75910c0bcc2bc 100644 --- a/drivers/s390/net/qeth_l2_main.c +++ b/drivers/s390/net/qeth_l2_main.c @@ -2388,9 +2388,12 @@ static int qeth_l2_set_online(struct qeth_card *card, bool carrier_ok) qeth_enable_hw_features(dev); qeth_l2_enable_brport_features(card);
- if (card->info.open_when_online) { - card->info.open_when_online = 0; - dev_open(dev, NULL); + if (netif_running(dev)) { + local_bh_disable(); + napi_schedule(&card->napi); + /* kick-start the NAPI softirq: */ + local_bh_enable(); + qeth_l2_set_rx_mode(dev); } rtnl_unlock(); } diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c index af4e60d2917e9..b92a32b4b1141 100644 --- a/drivers/s390/net/qeth_l3_main.c +++ b/drivers/s390/net/qeth_l3_main.c @@ -2018,9 +2018,11 @@ static int qeth_l3_set_online(struct qeth_card *card, bool carrier_ok) netif_device_attach(dev); qeth_enable_hw_features(dev);
- if (card->info.open_when_online) { - card->info.open_when_online = 0; - dev_open(dev, NULL); + if (netif_running(dev)) { + local_bh_disable(); + napi_schedule(&card->napi); + /* kick-start the NAPI softirq: */ + local_bh_enable(); } rtnl_unlock(); }
From: Yue Haibing yuehaibing@huawei.com
[ Upstream commit 30e0191b16e8a58e4620fa3e2839ddc7b9d4281c ]
skbuff: skb_under_panic: text:ffffffff88771f69 len:56 put:-4 head:ffff88805f86a800 data:ffff887f5f86a850 tail:0x88 end:0x2c0 dev:pim6reg ------------[ cut here ]------------ kernel BUG at net/core/skbuff.c:192! invalid opcode: 0000 [#1] PREEMPT SMP KASAN CPU: 2 PID: 22968 Comm: kworker/2:11 Not tainted 6.5.0-rc3-00044-g0a8db05b571a #236 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014 Workqueue: ipv6_addrconf addrconf_dad_work RIP: 0010:skb_panic+0x152/0x1d0 Call Trace: <TASK> skb_push+0xc4/0xe0 ip6mr_cache_report+0xd69/0x19b0 reg_vif_xmit+0x406/0x690 dev_hard_start_xmit+0x17e/0x6e0 __dev_queue_xmit+0x2d6a/0x3d20 vlan_dev_hard_start_xmit+0x3ab/0x5c0 dev_hard_start_xmit+0x17e/0x6e0 __dev_queue_xmit+0x2d6a/0x3d20 neigh_connected_output+0x3ed/0x570 ip6_finish_output2+0x5b5/0x1950 ip6_finish_output+0x693/0x11c0 ip6_output+0x24b/0x880 NF_HOOK.constprop.0+0xfd/0x530 ndisc_send_skb+0x9db/0x1400 ndisc_send_rs+0x12a/0x6c0 addrconf_dad_completed+0x3c9/0xea0 addrconf_dad_work+0x849/0x1420 process_one_work+0xa22/0x16e0 worker_thread+0x679/0x10c0 ret_from_fork+0x28/0x60 ret_from_fork_asm+0x11/0x20
When a VLAN device is set up on top of the pim6reg device, a DAD NS packet may be sent via reg_vif_xmit():

reg_vif_xmit()
  ip6mr_cache_report()
    skb_push(skb, -skb_network_offset(pkt)); // skb_network_offset(pkt) is 4

And skb_push() is declared as:

void *skb_push(struct sk_buff *skb, unsigned int len);
	skb->data -= len; // 0xffff88805f86a84c - 0xfffffffc = 0xffff887f5f86a850

skb->data is set to 0xffff887f5f86a850, which is an invalid memory address, causing skb_push() to fail with skb_under_panic().
Fixes: 14fb64e1f449 ("[IPV6] MROUTE: Support PIM-SM (SSM).") Signed-off-by: Yue Haibing yuehaibing@huawei.com Reviewed-by: Eric Dumazet edumazet@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv6/ip6mr.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c index 51cf37abd142d..b4152b5d68ffb 100644 --- a/net/ipv6/ip6mr.c +++ b/net/ipv6/ip6mr.c @@ -1073,7 +1073,7 @@ static int ip6mr_cache_report(const struct mr_table *mrt, struct sk_buff *pkt, And all this only to mangle msg->im6_msgtype and to set msg->im6_mbz to "mbz" :-) */ - skb_push(skb, -skb_network_offset(pkt)); + __skb_pull(skb, skb_network_offset(pkt));
skb_push(skb, sizeof(*msg)); skb_reset_transport_header(skb);
From: Benjamin Poirier bpoirier@nvidia.com
[ Upstream commit 0756384fb1bd38adb2ebcfd1307422f433a1d772 ]
The nexthop code expects a 31 bit hash, such as what is returned by fib_multipath_hash() and rt6_multipath_hash(). Passing the 32 bit hash returned by skb_get_hash() can lead to problems related to the fact that 'int hash' is a negative number when the MSB is set.
In the case of hash threshold nexthop groups, nexthop_select_path_hthr() will disproportionately select the first nexthop group entry. In the case of resilient nexthop groups, nexthop_select_path_res() may do an out-of-bounds access in nh_buckets[], for example:

hash = -912054133
num_nh_buckets = 2
bucket_index = 65535
which leads to the following panic:
BUG: unable to handle page fault for address: ffffc900025910c8 PGD 100000067 P4D 100000067 PUD 10026b067 PMD 0 Oops: 0002 [#1] PREEMPT SMP KASAN NOPTI CPU: 4 PID: 856 Comm: kworker/4:3 Not tainted 6.5.0-rc2+ #34 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Workqueue: ipv6_addrconf addrconf_dad_work RIP: 0010:nexthop_select_path+0x197/0xbf0 Code: c1 e4 05 be 08 00 00 00 4c 8b 35 a4 14 7e 01 4e 8d 6c 25 00 4a 8d 7c 25 08 48 01 dd e8 c2 25 15 ff 49 8d 7d 08 e8 39 13 15 ff <4d> 89 75 08 48 89 ef e8 7d 12 15 ff 48 8b 5d 00 e8 14 55 2f 00 85 RSP: 0018:ffff88810c36f260 EFLAGS: 00010246 RAX: 0000000000000000 RBX: 00000000002000c0 RCX: ffffffffaf02dd77 RDX: dffffc0000000000 RSI: 0000000000000008 RDI: ffffc900025910c8 RBP: ffffc900025910c0 R08: 0000000000000001 R09: fffff520004b2219 R10: ffffc900025910cf R11: 31392d2068736168 R12: 00000000002000c0 R13: ffffc900025910c0 R14: 00000000fffef608 R15: ffff88811840e900 FS: 0000000000000000(0000) GS:ffff8881f7000000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: ffffc900025910c8 CR3: 0000000129d00000 CR4: 0000000000750ee0 PKRU: 55555554 Call Trace: <TASK> ? __die+0x23/0x70 ? page_fault_oops+0x1ee/0x5c0 ? __pfx_is_prefetch.constprop.0+0x10/0x10 ? __pfx_page_fault_oops+0x10/0x10 ? search_bpf_extables+0xfe/0x1c0 ? fixup_exception+0x3b/0x470 ? exc_page_fault+0xf6/0x110 ? asm_exc_page_fault+0x26/0x30 ? nexthop_select_path+0x197/0xbf0 ? nexthop_select_path+0x197/0xbf0 ? lock_is_held_type+0xe7/0x140 vxlan_xmit+0x5b2/0x2340 ? __lock_acquire+0x92b/0x3370 ? __pfx_vxlan_xmit+0x10/0x10 ? __pfx___lock_acquire+0x10/0x10 ? __pfx_register_lock_class+0x10/0x10 ? skb_network_protocol+0xce/0x2d0 ? dev_hard_start_xmit+0xca/0x350 ? __pfx_vxlan_xmit+0x10/0x10 dev_hard_start_xmit+0xca/0x350 __dev_queue_xmit+0x513/0x1e20 ? __pfx___dev_queue_xmit+0x10/0x10 ? __pfx_lock_release+0x10/0x10 ? mark_held_locks+0x44/0x90 ? skb_push+0x4c/0x80 ? eth_header+0x81/0xe0 ? __pfx_eth_header+0x10/0x10 ? neigh_resolve_output+0x215/0x310 ? ip6_finish_output2+0x2ba/0xc90 ip6_finish_output2+0x2ba/0xc90 ? lock_release+0x236/0x3e0 ? ip6_mtu+0xbb/0x240 ? __pfx_ip6_finish_output2+0x10/0x10 ? find_held_lock+0x83/0xa0 ? lock_is_held_type+0xe7/0x140 ip6_finish_output+0x1ee/0x780 ip6_output+0x138/0x460 ? __pfx_ip6_output+0x10/0x10 ? __pfx___lock_acquire+0x10/0x10 ? __pfx_ip6_finish_output+0x10/0x10 NF_HOOK.constprop.0+0xc0/0x420 ? __pfx_NF_HOOK.constprop.0+0x10/0x10 ? ndisc_send_skb+0x2c0/0x960 ? __pfx_lock_release+0x10/0x10 ? __local_bh_enable_ip+0x93/0x110 ? lock_is_held_type+0xe7/0x140 ndisc_send_skb+0x4be/0x960 ? __pfx_ndisc_send_skb+0x10/0x10 ? mark_held_locks+0x65/0x90 ? find_held_lock+0x83/0xa0 ndisc_send_ns+0xb0/0x110 ? __pfx_ndisc_send_ns+0x10/0x10 addrconf_dad_work+0x631/0x8e0 ? lock_acquire+0x180/0x3f0 ? __pfx_addrconf_dad_work+0x10/0x10 ? mark_held_locks+0x24/0x90 process_one_work+0x582/0x9c0 ? __pfx_process_one_work+0x10/0x10 ? __pfx_do_raw_spin_lock+0x10/0x10 ? mark_held_locks+0x24/0x90 worker_thread+0x93/0x630 ? __kthread_parkme+0xdc/0x100 ? __pfx_worker_thread+0x10/0x10 kthread+0x1a5/0x1e0 ? __pfx_kthread+0x10/0x10 ret_from_fork+0x34/0x60 ? __pfx_kthread+0x10/0x10 ret_from_fork_asm+0x1b/0x30 RIP: 0000:0x0 Code: Unable to access opcode bytes at 0xffffffffffffffd6. 
RSP: 0000:0000000000000000 EFLAGS: 00000000 ORIG_RAX: 0000000000000000 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000 RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000 R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 </TASK> Modules linked in: CR2: ffffc900025910c8 ---[ end trace 0000000000000000 ]--- RIP: 0010:nexthop_select_path+0x197/0xbf0 Code: c1 e4 05 be 08 00 00 00 4c 8b 35 a4 14 7e 01 4e 8d 6c 25 00 4a 8d 7c 25 08 48 01 dd e8 c2 25 15 ff 49 8d 7d 08 e8 39 13 15 ff <4d> 89 75 08 48 89 ef e8 7d 12 15 ff 48 8b 5d 00 e8 14 55 2f 00 85 RSP: 0018:ffff88810c36f260 EFLAGS: 00010246 RAX: 0000000000000000 RBX: 00000000002000c0 RCX: ffffffffaf02dd77 RDX: dffffc0000000000 RSI: 0000000000000008 RDI: ffffc900025910c8 RBP: ffffc900025910c0 R08: 0000000000000001 R09: fffff520004b2219 R10: ffffc900025910cf R11: 31392d2068736168 R12: 00000000002000c0 R13: ffffc900025910c0 R14: 00000000fffef608 R15: ffff88811840e900 FS: 0000000000000000(0000) GS:ffff8881f7000000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: ffffffffffffffd6 CR3: 0000000129d00000 CR4: 0000000000750ee0 PKRU: 55555554 Kernel panic - not syncing: Fatal exception in interrupt Kernel Offset: 0x2ca00000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff) ---[ end Kernel panic - not syncing: Fatal exception in interrupt ]---
Fix this problem by ensuring the MSB of hash is 0 using a right shift - the same approach used in fib_multipath_hash() and rt6_multipath_hash().
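A minimal sketch of the shift, with example_fdb_hash() as a hypothetical helper: skb_get_hash() returns a full 32-bit value, and shifting it right by one keeps 31 bits of entropy while guaranteeing the value stays non-negative once stored in an int.

	/* Hypothetical helper, illustration only: */
	static u32 example_fdb_hash(struct sk_buff *skb)
	{
		return skb_get_hash(skb) >> 1;	/* MSB cleared, 31 bits of hash left */
	}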
Fixes: 1274e1cc4226 ("vxlan: ecmp support for mac fdb entries") Signed-off-by: Benjamin Poirier bpoirier@nvidia.com Reviewed-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Simon Horman horms@kernel.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- include/net/vxlan.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/net/vxlan.h b/include/net/vxlan.h index b57567296bc67..fae2893613aa2 100644 --- a/include/net/vxlan.h +++ b/include/net/vxlan.h @@ -554,12 +554,12 @@ static inline void vxlan_flag_attr_error(int attrtype, }
static inline bool vxlan_fdb_nh_path_select(struct nexthop *nh, - int hash, + u32 hash, struct vxlan_rdst *rdst) { struct fib_nh_common *nhc;
- nhc = nexthop_path_fdb_result(nh, hash); + nhc = nexthop_path_fdb_result(nh, hash >> 1); if (unlikely(!nhc)) return false;
From: Jianbo Liu jianbol@nvidia.com
[ Upstream commit 618d28a535a0582617465d14e05f3881736a2962 ]
Since find_closest_ft_recursive() is what actually locates the closest FT, the first parameter of find_closest_ft() can be changed from fs_prio to fs_node. This extends the function to find the closest FT for nodes of any type, not only prios but also sub namespaces.
Signed-off-by: Jianbo Liu jianbol@nvidia.com Signed-off-by: Leon Romanovsky leonro@nvidia.com Link: https://lore.kernel.org/r/d3962c2b443ec8dde7a740dc742a1f052d5e256c.169080394... Signed-off-by: Jakub Kicinski kuba@kernel.org Stable-dep-of: c635ca45a7a2 ("net/mlx5: fs_core: Skip the FTs in the same FS_TYPE_PRIO_CHAINS fs_prio") Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/mellanox/mlx5/core/fs_core.c | 29 +++++++++---------- 1 file changed, 14 insertions(+), 15 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c index 19da02c416161..852e265541d19 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c @@ -905,18 +905,17 @@ static struct mlx5_flow_table *find_closest_ft_recursive(struct fs_node *root, return ft; }
-/* If reverse is false then return the first flow table in next priority of - * prio in the tree, else return the last flow table in the previous priority - * of prio in the tree. +/* If reverse is false then return the first flow table next to the passed node + * in the tree, else return the last flow table before the node in the tree. */ -static struct mlx5_flow_table *find_closest_ft(struct fs_prio *prio, bool reverse) +static struct mlx5_flow_table *find_closest_ft(struct fs_node *node, bool reverse) { struct mlx5_flow_table *ft = NULL; struct fs_node *curr_node; struct fs_node *parent;
- parent = prio->node.parent; - curr_node = &prio->node; + parent = node->parent; + curr_node = node; while (!ft && parent) { ft = find_closest_ft_recursive(parent, &curr_node->list, reverse); curr_node = parent; @@ -926,15 +925,15 @@ static struct mlx5_flow_table *find_closest_ft(struct fs_prio *prio, bool revers }
/* Assuming all the tree is locked by mutex chain lock */ -static struct mlx5_flow_table *find_next_chained_ft(struct fs_prio *prio) +static struct mlx5_flow_table *find_next_chained_ft(struct fs_node *node) { - return find_closest_ft(prio, false); + return find_closest_ft(node, false); }
/* Assuming all the tree is locked by mutex chain lock */ -static struct mlx5_flow_table *find_prev_chained_ft(struct fs_prio *prio) +static struct mlx5_flow_table *find_prev_chained_ft(struct fs_node *node) { - return find_closest_ft(prio, true); + return find_closest_ft(node, true); }
static struct mlx5_flow_table *find_next_fwd_ft(struct mlx5_flow_table *ft, @@ -946,7 +945,7 @@ static struct mlx5_flow_table *find_next_fwd_ft(struct mlx5_flow_table *ft, next_ns = flow_act->action & MLX5_FLOW_CONTEXT_ACTION_FWD_NEXT_NS; fs_get_obj(prio, next_ns ? ft->ns->node.parent : ft->node.parent);
- return find_next_chained_ft(prio); + return find_next_chained_ft(&prio->node); }
static int connect_fts_in_prio(struct mlx5_core_dev *dev, @@ -977,7 +976,7 @@ static int connect_prev_fts(struct mlx5_core_dev *dev, { struct mlx5_flow_table *prev_ft;
- prev_ft = find_prev_chained_ft(prio); + prev_ft = find_prev_chained_ft(&prio->node); if (prev_ft) { struct fs_prio *prev_prio;
@@ -1123,7 +1122,7 @@ static int connect_flow_table(struct mlx5_core_dev *dev, struct mlx5_flow_table if (err) return err;
- next_ft = first_ft ? first_ft : find_next_chained_ft(prio); + next_ft = first_ft ? first_ft : find_next_chained_ft(&prio->node); err = connect_fwd_rules(dev, ft, next_ft); if (err) return err; @@ -1198,7 +1197,7 @@ static struct mlx5_flow_table *__mlx5_create_flow_table(struct mlx5_flow_namespa
tree_init_node(&ft->node, del_hw_flow_table, del_sw_flow_table); next_ft = unmanaged ? ft_attr->next_ft : - find_next_chained_ft(fs_prio); + find_next_chained_ft(&fs_prio->node); ft->def_miss_action = ns->def_miss_action; ft->ns = ns; err = root->cmds->create_flow_table(root, ft, ft_attr, next_ft); @@ -2201,7 +2200,7 @@ static struct mlx5_flow_table *find_next_ft(struct mlx5_flow_table *ft)
if (!list_is_last(&ft->node.list, &prio->node.children)) return list_next_entry(ft, node.list); - return find_next_chained_ft(prio); + return find_next_chained_ft(&prio->node); }
static int update_root_ft_destroy(struct mlx5_flow_table *ft)
From: Jianbo Liu jianbol@nvidia.com
[ Upstream commit c635ca45a7a2023904a1f851e99319af7b87017d ]
The cited commit added the new FS_TYPE_PRIO_CHAINS fs_prio type to support multiple parallel namespaces for multi-chains, and all flow tables under an fs_node of this type are skipped unconditionally when searching for the next or previous flow table to connect a new table to.
Because this search function is also used to find a new root table when the old one is being deleted, it skips the entire FS_TYPE_PRIO_CHAINS fs_node next to the old root, even though the new root table should be chosen from it if it contains any table. Fix this by skipping only the flow tables in the same FS_TYPE_PRIO_CHAINS fs_node when finding the closest FT for an fs_node.
In addition, complete the connection from the FTs of the previous priority of prio, because there can be multiple previous FTs once this fs_prio type is introduced. Likewise, if the prio is the first child, the next FT should be chosen from the first flow table following the prio within the same FS_TYPE_PRIO_CHAINS fs_prio.
Fixes: 328edb499f99 ("net/mlx5: Split FDB fast path prio to multiple namespaces") Signed-off-by: Jianbo Liu jianbol@nvidia.com Reviewed-by: Paul Blakey paulb@nvidia.com Signed-off-by: Leon Romanovsky leonro@nvidia.com Link: https://lore.kernel.org/r/7a95754df479e722038996c97c97b062b372591f.169080394... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/mellanox/mlx5/core/fs_core.c | 80 +++++++++++++++++-- 1 file changed, 72 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c index 852e265541d19..5f87c446d3d97 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c @@ -889,7 +889,7 @@ static struct mlx5_flow_table *find_closest_ft_recursive(struct fs_node *root, struct fs_node *iter = list_entry(start, struct fs_node, list); struct mlx5_flow_table *ft = NULL;
- if (!root || root->type == FS_TYPE_PRIO_CHAINS) + if (!root) return NULL;
list_for_each_advance_continue(iter, &root->children, reverse) { @@ -905,19 +905,42 @@ static struct mlx5_flow_table *find_closest_ft_recursive(struct fs_node *root, return ft; }
+static struct fs_node *find_prio_chains_parent(struct fs_node *parent, + struct fs_node **child) +{ + struct fs_node *node = NULL; + + while (parent && parent->type != FS_TYPE_PRIO_CHAINS) { + node = parent; + parent = parent->parent; + } + + if (child) + *child = node; + + return parent; +} + /* If reverse is false then return the first flow table next to the passed node * in the tree, else return the last flow table before the node in the tree. + * If skip is true, skip the flow tables in the same prio_chains prio. */ -static struct mlx5_flow_table *find_closest_ft(struct fs_node *node, bool reverse) +static struct mlx5_flow_table *find_closest_ft(struct fs_node *node, bool reverse, + bool skip) { + struct fs_node *prio_chains_parent = NULL; struct mlx5_flow_table *ft = NULL; struct fs_node *curr_node; struct fs_node *parent;
+ if (skip) + prio_chains_parent = find_prio_chains_parent(node, NULL); parent = node->parent; curr_node = node; while (!ft && parent) { - ft = find_closest_ft_recursive(parent, &curr_node->list, reverse); + if (parent != prio_chains_parent) + ft = find_closest_ft_recursive(parent, &curr_node->list, + reverse); curr_node = parent; parent = curr_node->parent; } @@ -927,13 +950,13 @@ static struct mlx5_flow_table *find_closest_ft(struct fs_node *node, bool revers /* Assuming all the tree is locked by mutex chain lock */ static struct mlx5_flow_table *find_next_chained_ft(struct fs_node *node) { - return find_closest_ft(node, false); + return find_closest_ft(node, false, true); }
/* Assuming all the tree is locked by mutex chain lock */ static struct mlx5_flow_table *find_prev_chained_ft(struct fs_node *node) { - return find_closest_ft(node, true); + return find_closest_ft(node, true, true); }
static struct mlx5_flow_table *find_next_fwd_ft(struct mlx5_flow_table *ft, @@ -969,21 +992,55 @@ static int connect_fts_in_prio(struct mlx5_core_dev *dev, return 0; }
+static struct mlx5_flow_table *find_closet_ft_prio_chains(struct fs_node *node, + struct fs_node *parent, + struct fs_node **child, + bool reverse) +{ + struct mlx5_flow_table *ft; + + ft = find_closest_ft(node, reverse, false); + + if (ft && parent == find_prio_chains_parent(&ft->node, child)) + return ft; + + return NULL; +} + /* Connect flow tables from previous priority of prio to ft */ static int connect_prev_fts(struct mlx5_core_dev *dev, struct mlx5_flow_table *ft, struct fs_prio *prio) { + struct fs_node *prio_parent, *parent = NULL, *child, *node; struct mlx5_flow_table *prev_ft; + int err = 0; + + prio_parent = find_prio_chains_parent(&prio->node, &child); + + /* return directly if not under the first sub ns of prio_chains prio */ + if (prio_parent && !list_is_first(&child->list, &prio_parent->children)) + return 0;
prev_ft = find_prev_chained_ft(&prio->node); - if (prev_ft) { + while (prev_ft) { struct fs_prio *prev_prio;
fs_get_obj(prev_prio, prev_ft->node.parent); - return connect_fts_in_prio(dev, prev_prio, ft); + err = connect_fts_in_prio(dev, prev_prio, ft); + if (err) + break; + + if (!parent) { + parent = find_prio_chains_parent(&prev_prio->node, &child); + if (!parent) + break; + } + + node = child; + prev_ft = find_closet_ft_prio_chains(node, parent, &child, true); } - return 0; + return err; }
static int update_root_ft_create(struct mlx5_flow_table *ft, struct fs_prio @@ -2194,12 +2251,19 @@ EXPORT_SYMBOL(mlx5_del_flow_rules); /* Assuming prio->node.children(flow tables) is sorted by level */ static struct mlx5_flow_table *find_next_ft(struct mlx5_flow_table *ft) { + struct fs_node *prio_parent, *child; struct fs_prio *prio;
fs_get_obj(prio, ft->node.parent);
if (!list_is_last(&ft->node.list, &prio->node.children)) return list_next_entry(ft, node.list); + + prio_parent = find_prio_chains_parent(&prio->node, &child); + + if (prio_parent && list_is_first(&child->list, &prio_parent->children)) + return find_closest_ft(&prio->node, false, false); + return find_next_chained_ft(&prio->node); }
From: Leon Romanovsky leonro@nvidia.com
[ Upstream commit 62da08331f1a2bef9d0148613133ce8e640a2f8d ]
Fix typo in setup_fte_upper_proto_match() where destination UDP port was used instead of source port.
Fixes: a7385187a386 ("net/mlx5e: IPsec, support upper protocol selector field offload") Signed-off-by: Leon Romanovsky leonro@nvidia.com Link: https://lore.kernel.org/r/ffc024a4d192113103f392b0502688366ca88c1f.169080394... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c index dbe87bf89c0dd..832d36be4a17b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c @@ -808,9 +808,9 @@ static void setup_fte_upper_proto_match(struct mlx5_flow_spec *spec, struct upsp }
if (upspec->sport) { - MLX5_SET(fte_match_set_lyr_2_4, spec->match_criteria, udp_dport, + MLX5_SET(fte_match_set_lyr_2_4, spec->match_criteria, udp_sport, upspec->sport_mask); - MLX5_SET(fte_match_set_lyr_2_4, spec->match_value, udp_dport, upspec->sport); + MLX5_SET(fte_match_set_lyr_2_4, spec->match_value, udp_sport, upspec->sport); } }
From: Jonas Gorski jonas.gorski@bisdn.de
[ Upstream commit b755c25fbcd568821a3bb0e0d5c2daa5fcb00bba ]
When the supported and the previous version share the same major version and the firmware files are missing, the driver ends up in a loop requesting the same (previous) version over and over again:
[ 76.327413] Prestera DX 0000:01:00.0: missing latest mrvl/prestera/mvsw_prestera_fw-v4.1.img firmware, fall-back to previous 4.0 version [ 76.339802] Prestera DX 0000:01:00.0: missing latest mrvl/prestera/mvsw_prestera_fw-v4.0.img firmware, fall-back to previous 4.0 version [ 76.352162] Prestera DX 0000:01:00.0: missing latest mrvl/prestera/mvsw_prestera_fw-v4.0.img firmware, fall-back to previous 4.0 version [ 76.364502] Prestera DX 0000:01:00.0: missing latest mrvl/prestera/mvsw_prestera_fw-v4.0.img firmware, fall-back to previous 4.0 version [ 76.376848] Prestera DX 0000:01:00.0: missing latest mrvl/prestera/mvsw_prestera_fw-v4.0.img firmware, fall-back to previous 4.0 version [ 76.389183] Prestera DX 0000:01:00.0: missing latest mrvl/prestera/mvsw_prestera_fw-v4.0.img firmware, fall-back to previous 4.0 version [ 76.401522] Prestera DX 0000:01:00.0: missing latest mrvl/prestera/mvsw_prestera_fw-v4.0.img firmware, fall-back to previous 4.0 version [ 76.413860] Prestera DX 0000:01:00.0: missing latest mrvl/prestera/mvsw_prestera_fw-v4.0.img firmware, fall-back to previous 4.0 version [ 76.426199] Prestera DX 0000:01:00.0: missing latest mrvl/prestera/mvsw_prestera_fw-v4.0.img firmware, fall-back to previous 4.0 version ...
Fix this by inverting the check so that we fall back only if we are not yet at the previous version, and also check the minor version.
This also catches the case where both versions are the same, as it was after commit bb5dbf2cc64d ("net: marvell: prestera: add firmware v4.0 support").
With this fix applied:
[ 88.499622] Prestera DX 0000:01:00.0: missing latest mrvl/prestera/mvsw_prestera_fw-v4.1.img firmware, fall-back to previous 4.0 version [ 88.511995] Prestera DX 0000:01:00.0: failed to request previous firmware: mrvl/prestera/mvsw_prestera_fw-v4.0.img [ 88.522403] Prestera DX: probe of 0000:01:00.0 failed with error -2
Fixes: 47f26018a414 ("net: marvell: prestera: try to load previous fw version") Signed-off-by: Jonas Gorski jonas.gorski@bisdn.de Acked-by: Elad Nachman enachman@marvell.com Reviewed-by: Jesse Brandeburg jesse.brandeburg@intel.com Acked-by: Taras Chornyi taras.chornyi@plvision.eu Link: https://lore.kernel.org/r/20230802092357.163944-1-jonas.gorski@bisdn.de Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/prestera/prestera_pci.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/marvell/prestera/prestera_pci.c b/drivers/net/ethernet/marvell/prestera/prestera_pci.c index f328d957b2db7..35857dc19542f 100644 --- a/drivers/net/ethernet/marvell/prestera/prestera_pci.c +++ b/drivers/net/ethernet/marvell/prestera/prestera_pci.c @@ -727,7 +727,8 @@ static int prestera_fw_get(struct prestera_fw *fw)
err = request_firmware_direct(&fw->bin, fw_path, fw->dev.dev); if (err) { - if (ver_maj == PRESTERA_SUPP_FW_MAJ_VER) { + if (ver_maj != PRESTERA_PREV_FW_MAJ_VER || + ver_min != PRESTERA_PREV_FW_MIN_VER) { ver_maj = PRESTERA_PREV_FW_MAJ_VER; ver_min = PRESTERA_PREV_FW_MIN_VER;
From: Eric Dumazet edumazet@google.com
[ Upstream commit e6638094d7af6c7b9dcca05ad009e79e31b4f670 ]
Because v4 and v6 families use separate inetpeer trees (respectively net->ipv4.peers and net->ipv6.peers), inetpeer_addr_cmp(a, b) assumes a & b share the same family.
tcp_metrics uses a common hash table, where entries can have different families.
We must therefore make sure to not call inetpeer_addr_cmp() if the families do not match.
Fixes: d39d14ffa24c ("net: Add helper function to compare inetpeer addresses") Signed-off-by: Eric Dumazet edumazet@google.com Reviewed-by: David Ahern dsahern@kernel.org Reviewed-by: Kuniyuki Iwashima kuniyu@amazon.com Link: https://lore.kernel.org/r/20230802131500.1478140-2-edumazet@google.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_metrics.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c index 82f4575f9cd90..c4daf0aa2d4d9 100644 --- a/net/ipv4/tcp_metrics.c +++ b/net/ipv4/tcp_metrics.c @@ -78,7 +78,7 @@ static void tcp_metric_set(struct tcp_metrics_block *tm, static bool addr_same(const struct inetpeer_addr *a, const struct inetpeer_addr *b) { - return inetpeer_addr_cmp(a, b) == 0; + return (a->family == b->family) && !inetpeer_addr_cmp(a, b); }
struct tcpm_hash_bucket {
From: Eric Dumazet edumazet@google.com
[ Upstream commit 949ad62a5d5311d36fce2e14fe5fed3f936da51c ]
tm->tcpm_stamp can be read or written locklessly.
Add needed READ_ONCE()/WRITE_ONCE() to document this.
Also constify tcpm_check_stamp() dst argument.
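The general annotation pattern being applied here, sketched with a hypothetical struct rather than the tcp_metrics code: the locked writer uses WRITE_ONCE() and every lockless reader uses READ_ONCE(), which documents the data race and prevents the compiler from tearing or fusing the accesses.

	/* Hypothetical example, illustration only: */
	struct example_entry {
		unsigned long stamp;
	};

	static void example_touch(struct example_entry *e)
	{
		WRITE_ONCE(e->stamp, jiffies);		/* writer side */
	}

	static bool example_expired(struct example_entry *e, unsigned long timeout)
	{
		/* lockless reader */
		return time_after(jiffies, READ_ONCE(e->stamp) + timeout);
	}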
Fixes: 51c5d0c4b169 ("tcp: Maintain dynamic metrics in local cache.") Signed-off-by: Eric Dumazet edumazet@google.com Reviewed-by: David Ahern dsahern@kernel.org Reviewed-by: Kuniyuki Iwashima kuniyu@amazon.com Link: https://lore.kernel.org/r/20230802131500.1478140-3-edumazet@google.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_metrics.c | 19 +++++++++++++------ 1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c index c4daf0aa2d4d9..8386165887963 100644 --- a/net/ipv4/tcp_metrics.c +++ b/net/ipv4/tcp_metrics.c @@ -97,7 +97,7 @@ static void tcpm_suck_dst(struct tcp_metrics_block *tm, u32 msval; u32 val;
- tm->tcpm_stamp = jiffies; + WRITE_ONCE(tm->tcpm_stamp, jiffies);
val = 0; if (dst_metric_locked(dst, RTAX_RTT)) @@ -131,9 +131,15 @@ static void tcpm_suck_dst(struct tcp_metrics_block *tm,
#define TCP_METRICS_TIMEOUT (60 * 60 * HZ)
-static void tcpm_check_stamp(struct tcp_metrics_block *tm, struct dst_entry *dst) +static void tcpm_check_stamp(struct tcp_metrics_block *tm, + const struct dst_entry *dst) { - if (tm && unlikely(time_after(jiffies, tm->tcpm_stamp + TCP_METRICS_TIMEOUT))) + unsigned long limit; + + if (!tm) + return; + limit = READ_ONCE(tm->tcpm_stamp) + TCP_METRICS_TIMEOUT; + if (unlikely(time_after(jiffies, limit))) tcpm_suck_dst(tm, dst, false); }
@@ -174,7 +180,8 @@ static struct tcp_metrics_block *tcpm_new(struct dst_entry *dst, oldest = deref_locked(tcp_metrics_hash[hash].chain); for (tm = deref_locked(oldest->tcpm_next); tm; tm = deref_locked(tm->tcpm_next)) { - if (time_before(tm->tcpm_stamp, oldest->tcpm_stamp)) + if (time_before(READ_ONCE(tm->tcpm_stamp), + READ_ONCE(oldest->tcpm_stamp))) oldest = tm; } tm = oldest; @@ -434,7 +441,7 @@ void tcp_update_metrics(struct sock *sk) tp->reordering); } } - tm->tcpm_stamp = jiffies; + WRITE_ONCE(tm->tcpm_stamp, jiffies); out_unlock: rcu_read_unlock(); } @@ -647,7 +654,7 @@ static int tcp_metrics_fill_info(struct sk_buff *msg, }
if (nla_put_msecs(msg, TCP_METRICS_ATTR_AGE, - jiffies - tm->tcpm_stamp, + jiffies - READ_ONCE(tm->tcpm_stamp), TCP_METRICS_ATTR_PAD) < 0) goto nla_put_failure;
From: Eric Dumazet edumazet@google.com
[ Upstream commit 285ce119a3c6c4502585936650143e54c8692788 ]
tm->tcpm_lock can be read or written locklessly.
Add needed READ_ONCE()/WRITE_ONCE() to document this.
Fixes: 51c5d0c4b169 ("tcp: Maintain dynamic metrics in local cache.") Signed-off-by: Eric Dumazet edumazet@google.com Reviewed-by: David Ahern dsahern@kernel.org Reviewed-by: Kuniyuki Iwashima kuniyu@amazon.com Link: https://lore.kernel.org/r/20230802131500.1478140-4-edumazet@google.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_metrics.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c index 8386165887963..131fa30049691 100644 --- a/net/ipv4/tcp_metrics.c +++ b/net/ipv4/tcp_metrics.c @@ -59,7 +59,8 @@ static inline struct net *tm_net(struct tcp_metrics_block *tm) static bool tcp_metric_locked(struct tcp_metrics_block *tm, enum tcp_metric_index idx) { - return tm->tcpm_lock & (1 << idx); + /* Paired with WRITE_ONCE() in tcpm_suck_dst() */ + return READ_ONCE(tm->tcpm_lock) & (1 << idx); }
static u32 tcp_metric_get(struct tcp_metrics_block *tm, @@ -110,7 +111,8 @@ static void tcpm_suck_dst(struct tcp_metrics_block *tm, val |= 1 << TCP_METRIC_CWND; if (dst_metric_locked(dst, RTAX_REORDERING)) val |= 1 << TCP_METRIC_REORDERING; - tm->tcpm_lock = val; + /* Paired with READ_ONCE() in tcp_metric_locked() */ + WRITE_ONCE(tm->tcpm_lock, val);
msval = dst_metric_raw(dst, RTAX_RTT); tm->tcpm_vals[TCP_METRIC_RTT] = msval * USEC_PER_MSEC;
From: Eric Dumazet edumazet@google.com
[ Upstream commit 8c4d04f6b443869d25e59822f7cec88d647028a9 ]
tm->tcpm_vals[] values can be read or written locklessly.
Add the needed READ_ONCE()/WRITE_ONCE() to document this, and force use of tcp_metric_get() and tcp_metric_set().
Fixes: 51c5d0c4b169 ("tcp: Maintain dynamic metrics in local cache.") Signed-off-by: Eric Dumazet edumazet@google.com Reviewed-by: David Ahern dsahern@kernel.org Reviewed-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_metrics.c | 23 ++++++++++++++--------- 1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c index 131fa30049691..fd4ab7a51cef2 100644 --- a/net/ipv4/tcp_metrics.c +++ b/net/ipv4/tcp_metrics.c @@ -63,17 +63,19 @@ static bool tcp_metric_locked(struct tcp_metrics_block *tm, return READ_ONCE(tm->tcpm_lock) & (1 << idx); }
-static u32 tcp_metric_get(struct tcp_metrics_block *tm, +static u32 tcp_metric_get(const struct tcp_metrics_block *tm, enum tcp_metric_index idx) { - return tm->tcpm_vals[idx]; + /* Paired with WRITE_ONCE() in tcp_metric_set() */ + return READ_ONCE(tm->tcpm_vals[idx]); }
static void tcp_metric_set(struct tcp_metrics_block *tm, enum tcp_metric_index idx, u32 val) { - tm->tcpm_vals[idx] = val; + /* Paired with READ_ONCE() in tcp_metric_get() */ + WRITE_ONCE(tm->tcpm_vals[idx], val); }
static bool addr_same(const struct inetpeer_addr *a, @@ -115,13 +117,16 @@ static void tcpm_suck_dst(struct tcp_metrics_block *tm, WRITE_ONCE(tm->tcpm_lock, val);
msval = dst_metric_raw(dst, RTAX_RTT); - tm->tcpm_vals[TCP_METRIC_RTT] = msval * USEC_PER_MSEC; + tcp_metric_set(tm, TCP_METRIC_RTT, msval * USEC_PER_MSEC);
msval = dst_metric_raw(dst, RTAX_RTTVAR); - tm->tcpm_vals[TCP_METRIC_RTTVAR] = msval * USEC_PER_MSEC; - tm->tcpm_vals[TCP_METRIC_SSTHRESH] = dst_metric_raw(dst, RTAX_SSTHRESH); - tm->tcpm_vals[TCP_METRIC_CWND] = dst_metric_raw(dst, RTAX_CWND); - tm->tcpm_vals[TCP_METRIC_REORDERING] = dst_metric_raw(dst, RTAX_REORDERING); + tcp_metric_set(tm, TCP_METRIC_RTTVAR, msval * USEC_PER_MSEC); + tcp_metric_set(tm, TCP_METRIC_SSTHRESH, + dst_metric_raw(dst, RTAX_SSTHRESH)); + tcp_metric_set(tm, TCP_METRIC_CWND, + dst_metric_raw(dst, RTAX_CWND)); + tcp_metric_set(tm, TCP_METRIC_REORDERING, + dst_metric_raw(dst, RTAX_REORDERING)); if (fastopen_clear) { tm->tcpm_fastopen.mss = 0; tm->tcpm_fastopen.syn_loss = 0; @@ -667,7 +672,7 @@ static int tcp_metrics_fill_info(struct sk_buff *msg, if (!nest) goto nla_put_failure; for (i = 0; i < TCP_METRIC_MAX_KERNEL + 1; i++) { - u32 val = tm->tcpm_vals[i]; + u32 val = tcp_metric_get(tm, i);
if (!val) continue;
From: Eric Dumazet edumazet@google.com
[ Upstream commit d5d986ce42c71a7562d32c4e21e026b0f87befec ]
tm->tcpm_net can be read or written locklessly.
Instead of changing write_pnet() and read_pnet() and potentially hurt performance, add the needed READ_ONCE()/WRITE_ONCE() in tm_net() and tcpm_new().
Fixes: 849e8a0ca8d5 ("tcp_metrics: Add a field tcpm_net and verify it matches on lookup") Signed-off-by: Eric Dumazet edumazet@google.com Reviewed-by: David Ahern dsahern@kernel.org Reviewed-by: Kuniyuki Iwashima kuniyu@amazon.com Link: https://lore.kernel.org/r/20230802131500.1478140-6-edumazet@google.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_metrics.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c index fd4ab7a51cef2..4fd274836a48f 100644 --- a/net/ipv4/tcp_metrics.c +++ b/net/ipv4/tcp_metrics.c @@ -40,7 +40,7 @@ struct tcp_fastopen_metrics {
struct tcp_metrics_block { struct tcp_metrics_block __rcu *tcpm_next; - possible_net_t tcpm_net; + struct net *tcpm_net; struct inetpeer_addr tcpm_saddr; struct inetpeer_addr tcpm_daddr; unsigned long tcpm_stamp; @@ -51,9 +51,10 @@ struct tcp_metrics_block { struct rcu_head rcu_head; };
-static inline struct net *tm_net(struct tcp_metrics_block *tm) +static inline struct net *tm_net(const struct tcp_metrics_block *tm) { - return read_pnet(&tm->tcpm_net); + /* Paired with the WRITE_ONCE() in tcpm_new() */ + return READ_ONCE(tm->tcpm_net); }
static bool tcp_metric_locked(struct tcp_metrics_block *tm, @@ -197,7 +198,9 @@ static struct tcp_metrics_block *tcpm_new(struct dst_entry *dst, if (!tm) goto out_unlock; } - write_pnet(&tm->tcpm_net, net); + /* Paired with the READ_ONCE() in tm_net() */ + WRITE_ONCE(tm->tcpm_net, net); + tm->tcpm_saddr = *saddr; tm->tcpm_daddr = *daddr;
From: Eric Dumazet edumazet@google.com
[ Upstream commit ddf251fa2bc1d3699eec0bae6ed0bc373b8fda79 ]
Whenever tcpm_new() reclaims an old entry, tcpm_suck_dst() would overwrite data that could be read from tcp_fastopen_cache_get() or tcp_metrics_fill_info().
We need to acquire fastopen_seqlock to maintain consistency.
For newly allocated objects, tcpm_new() can switch to kzalloc() to avoid an extra fastopen_seqlock acquisition.
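A sketch of the seqlock pairing this depends on, using hypothetical names rather than the tcp_metrics structures: the writer updates the fastopen fields inside write_seqlock(), and readers loop until they observe a consistent snapshot.

	/* Hypothetical example, illustration only: */
	static DEFINE_SEQLOCK(example_seqlock);

	struct example_fastopen {
		u16 mss;
		u8  cookie_len;
	};

	static void example_clear(struct example_fastopen *fo)
	{
		write_seqlock(&example_seqlock);
		fo->mss = 0;
		fo->cookie_len = 0;
		write_sequnlock(&example_seqlock);
	}

	static u16 example_get_mss(const struct example_fastopen *fo)
	{
		unsigned int seq;
		u16 mss;

		do {
			seq = read_seqbegin(&example_seqlock);
			mss = fo->mss;
		} while (read_seqretry(&example_seqlock, seq));

		return mss;
	}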
Fixes: 1fe4c481ba63 ("net-tcp: Fast Open client - cookie cache") Signed-off-by: Eric Dumazet edumazet@google.com Cc: Yuchung Cheng ycheng@google.com Reviewed-by: Kuniyuki Iwashima kuniyu@amazon.com Link: https://lore.kernel.org/r/20230802131500.1478140-7-edumazet@google.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_metrics.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c index 4fd274836a48f..99ac5efe244d3 100644 --- a/net/ipv4/tcp_metrics.c +++ b/net/ipv4/tcp_metrics.c @@ -93,6 +93,7 @@ static struct tcpm_hash_bucket *tcp_metrics_hash __read_mostly; static unsigned int tcp_metrics_hash_log __read_mostly;
static DEFINE_SPINLOCK(tcp_metrics_lock); +static DEFINE_SEQLOCK(fastopen_seqlock);
static void tcpm_suck_dst(struct tcp_metrics_block *tm, const struct dst_entry *dst, @@ -129,11 +130,13 @@ static void tcpm_suck_dst(struct tcp_metrics_block *tm, tcp_metric_set(tm, TCP_METRIC_REORDERING, dst_metric_raw(dst, RTAX_REORDERING)); if (fastopen_clear) { + write_seqlock(&fastopen_seqlock); tm->tcpm_fastopen.mss = 0; tm->tcpm_fastopen.syn_loss = 0; tm->tcpm_fastopen.try_exp = 0; tm->tcpm_fastopen.cookie.exp = false; tm->tcpm_fastopen.cookie.len = 0; + write_sequnlock(&fastopen_seqlock); } }
@@ -194,7 +197,7 @@ static struct tcp_metrics_block *tcpm_new(struct dst_entry *dst, } tm = oldest; } else { - tm = kmalloc(sizeof(*tm), GFP_ATOMIC); + tm = kzalloc(sizeof(*tm), GFP_ATOMIC); if (!tm) goto out_unlock; } @@ -204,7 +207,7 @@ static struct tcp_metrics_block *tcpm_new(struct dst_entry *dst, tm->tcpm_saddr = *saddr; tm->tcpm_daddr = *daddr;
- tcpm_suck_dst(tm, dst, true); + tcpm_suck_dst(tm, dst, reclaim);
if (likely(!reclaim)) { tm->tcpm_next = tcp_metrics_hash[hash].chain; @@ -556,8 +559,6 @@ bool tcp_peer_is_proven(struct request_sock *req, struct dst_entry *dst) return ret; }
-static DEFINE_SEQLOCK(fastopen_seqlock); - void tcp_fastopen_cache_get(struct sock *sk, u16 *mss, struct tcp_fastopen_cookie *cookie) {
From: Stefano Garzarella sgarzare@redhat.com
[ Upstream commit 3c50c8b240390907c9a33c86d25d850520db6dfa ]
We forgot to add vsock_perf to the rm command in the `clean` target, so a leftover binary remains in tools/testing/vsock after `make clean`.
Fixes: 8abbffd27ced ("test/vsock: vsock_perf utility") Cc: AVKrasnov@sberdevices.ru Signed-off-by: Stefano Garzarella sgarzare@redhat.com Reviewed-by: Simon Horman horms@kernel.org Tested-by: Simon Horman horms@kernel.org # build-tested Link: https://lore.kernel.org/r/20230803085454.30897-1-sgarzare@redhat.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/vsock/Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/vsock/Makefile b/tools/testing/vsock/Makefile index 43a254f0e14dd..21a98ba565ab5 100644 --- a/tools/testing/vsock/Makefile +++ b/tools/testing/vsock/Makefile @@ -8,5 +8,5 @@ vsock_perf: vsock_perf.o CFLAGS += -g -O2 -Werror -Wall -I. -I../../include -I../../../usr/include -Wno-pointer-sign -fno-strict-overflow -fno-strict-aliasing -fno-common -MMD -U_FORTIFY_SOURCE -D_GNU_SOURCE .PHONY: all test clean clean: - ${RM} *.o *.d vsock_test vsock_diag_test + ${RM} *.o *.d vsock_test vsock_diag_test vsock_perf -include *.d
From: Boqun Feng boqun.feng@gmail.com
commit b3d8aa84bbfe9b58ccc5332cacf8ea17200af310 upstream.
Currently the Rust allocator simply passes the size of the type Layout to krealloc(), and in theory the alignment requirement from the type Layout may be larger than the guarantee provided by SLAB, which means the allocated object could be misaligned.
Fix this by adjusting the allocation size to the next power of two, for which SLAB always guarantees a size-aligned allocation. Because Rust guarantees that the original size is a multiple of the alignment and that the alignment is a power of two, the alignment requirement is then satisfied.
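The same size adjustment expressed as a C sketch, assuming a non-zero size (the real fix lives in rust/kernel/allocator.rs; aligned_alloc_size() is a hypothetical helper): when the requested alignment exceeds the slab allocator's minimum guarantee, rounding the size up to the next power of two is enough, because kmalloc() aligns power-of-two sized allocations to their size.

	/* Hypothetical helper, illustration only: */
	static size_t aligned_alloc_size(size_t size, size_t align)
	{
		if (align > ARCH_SLAB_MINALIGN)
			size = roundup_pow_of_two(size); /* power-of-two sizes are size-aligned */
		return size;
	}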
Suggested-by: Vlastimil Babka vbabka@suse.cz Co-developed-by: "Andreas Hindborg (Samsung)" nmi@metaspace.dk Signed-off-by: "Andreas Hindborg (Samsung)" nmi@metaspace.dk Signed-off-by: Boqun Feng boqun.feng@gmail.com Cc: stable@vger.kernel.org # v6.1+ Acked-by: Vlastimil Babka vbabka@suse.cz Fixes: 247b365dc8dc ("rust: add `kernel` crate") Link: https://github.com/Rust-for-Linux/linux/issues/974 Link: https://lore.kernel.org/r/20230730012905.643822-2-boqun.feng@gmail.com [ Applied rewording of comment as discussed in the mailing list. ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/bindings/bindings_helper.h | 1 rust/kernel/allocator.rs | 74 +++++++++++++++++++++++++++++++--------- 2 files changed, 60 insertions(+), 15 deletions(-)
--- a/rust/bindings/bindings_helper.h +++ b/rust/bindings/bindings_helper.h @@ -12,5 +12,6 @@ #include <linux/sched.h>
/* `bindgen` gets confused at certain things. */ +const size_t BINDINGS_ARCH_SLAB_MINALIGN = ARCH_SLAB_MINALIGN; const gfp_t BINDINGS_GFP_KERNEL = GFP_KERNEL; const gfp_t BINDINGS___GFP_ZERO = __GFP_ZERO; --- a/rust/kernel/allocator.rs +++ b/rust/kernel/allocator.rs @@ -9,6 +9,36 @@ use crate::bindings;
struct KernelAllocator;
+/// Calls `krealloc` with a proper size to alloc a new object aligned to `new_layout`'s alignment. +/// +/// # Safety +/// +/// - `ptr` can be either null or a pointer which has been allocated by this allocator. +/// - `new_layout` must have a non-zero size. +unsafe fn krealloc_aligned(ptr: *mut u8, new_layout: Layout, flags: bindings::gfp_t) -> *mut u8 { + // Customized layouts from `Layout::from_size_align()` can have size < align, so pad first. + let layout = new_layout.pad_to_align(); + + let mut size = layout.size(); + + if layout.align() > bindings::BINDINGS_ARCH_SLAB_MINALIGN { + // The alignment requirement exceeds the slab guarantee, thus try to enlarge the size + // to use the "power-of-two" size/alignment guarantee (see comments in `kmalloc()` for + // more information). + // + // Note that `layout.size()` (after padding) is guaranteed to be a multiple of + // `layout.align()`, so `next_power_of_two` gives enough alignment guarantee. + size = size.next_power_of_two(); + } + + // SAFETY: + // - `ptr` is either null or a pointer returned from a previous `k{re}alloc()` by the + // function safety requirement. + // - `size` is greater than 0 since it's either a `layout.size()` (which cannot be zero + // according to the function safety requirement) or a result from `next_power_of_two()`. + unsafe { bindings::krealloc(ptr as *const core::ffi::c_void, size, flags) as *mut u8 } +} + unsafe impl GlobalAlloc for KernelAllocator { unsafe fn alloc(&self, layout: Layout) -> *mut u8 { // `krealloc()` is used instead of `kmalloc()` because the latter is @@ -30,10 +60,20 @@ static ALLOCATOR: KernelAllocator = Kern // to extract the object file that has them from the archive. For the moment, // let's generate them ourselves instead. // +// Note: Although these are *safe* functions, they are called by the compiler +// with parameters that obey the same `GlobalAlloc` function safety +// requirements: size and align should form a valid layout, and size is +// greater than 0. +// // Note that `#[no_mangle]` implies exported too, nowadays. #[no_mangle] -fn __rust_alloc(size: usize, _align: usize) -> *mut u8 { - unsafe { bindings::krealloc(core::ptr::null(), size, bindings::GFP_KERNEL) as *mut u8 } +fn __rust_alloc(size: usize, align: usize) -> *mut u8 { + // SAFETY: See assumption above. + let layout = unsafe { Layout::from_size_align_unchecked(size, align) }; + + // SAFETY: `ptr::null_mut()` is null, per assumption above the size of `layout` is greater + // than 0. + unsafe { krealloc_aligned(ptr::null_mut(), layout, bindings::GFP_KERNEL) } }
#[no_mangle] @@ -42,23 +82,27 @@ fn __rust_dealloc(ptr: *mut u8, _size: u }
#[no_mangle] -fn __rust_realloc(ptr: *mut u8, _old_size: usize, _align: usize, new_size: usize) -> *mut u8 { - unsafe { - bindings::krealloc( - ptr as *const core::ffi::c_void, - new_size, - bindings::GFP_KERNEL, - ) as *mut u8 - } +fn __rust_realloc(ptr: *mut u8, _old_size: usize, align: usize, new_size: usize) -> *mut u8 { + // SAFETY: See assumption above. + let new_layout = unsafe { Layout::from_size_align_unchecked(new_size, align) }; + + // SAFETY: Per assumption above, `ptr` is allocated by `__rust_*` before, and the size of + // `new_layout` is greater than 0. + unsafe { krealloc_aligned(ptr, new_layout, bindings::GFP_KERNEL) } }
#[no_mangle] -fn __rust_alloc_zeroed(size: usize, _align: usize) -> *mut u8 { +fn __rust_alloc_zeroed(size: usize, align: usize) -> *mut u8 { + // SAFETY: See assumption above. + let layout = unsafe { Layout::from_size_align_unchecked(size, align) }; + + // SAFETY: `ptr::null_mut()` is null, per assumption above the size of `layout` is greater + // than 0. unsafe { - bindings::krealloc( - core::ptr::null(), - size, + krealloc_aligned( + ptr::null_mut(), + layout, bindings::GFP_KERNEL | bindings::__GFP_ZERO, - ) as *mut u8 + ) } }
From: Steffen Maier maier@linux.ibm.com
commit e65851989001c0c9ba9177564b13b38201c0854c upstream.
Storage devices are free to send RSCNs, e.g. for internal state changes. If this happens on all connected paths, zfcp risks temporarily losing all paths at the same time, which places strong requirements on the multipath configuration, such as "no_path_retry queue".
Avoid such situations by deferring fc_rport blocking until after the ADISC response, when any actual state change of the remote port has become clear. The already existing port recovery triggers explicitly block the fc_rport: on ADISC reject or timeout (the typical cable-pull case), and on an ADISC response indicating that the remote port has changed its WWPN or is meanwhile no longer open.
As a side effect, this also removes a confusing direct call to the other work item function zfcp_scsi_rport_work() instead of scheduling that work item. It was probably done that way to make the rport-blocking side effect immediate and synchronous for the caller.
Fixes: a2fa0aede07c ("[SCSI] zfcp: Block FC transport rports early on errors") Cc: stable@vger.kernel.org #v2.6.30+ Reviewed-by: Benjamin Block bblock@linux.ibm.com Reviewed-by: Fedor Loshakov loshakov@linux.ibm.com Signed-off-by: Steffen Maier maier@linux.ibm.com Link: https://lore.kernel.org/r/20230724145156.3920244-1-maier@linux.ibm.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/s390/scsi/zfcp_fc.c | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-)
--- a/drivers/s390/scsi/zfcp_fc.c +++ b/drivers/s390/scsi/zfcp_fc.c @@ -534,8 +534,7 @@ static void zfcp_fc_adisc_handler(void *
/* re-init to undo drop from zfcp_fc_adisc() */ port->d_id = ntoh24(adisc_resp->adisc_port_id); - /* port is good, unblock rport without going through erp */ - zfcp_scsi_schedule_rport_register(port); + /* port is still good, nothing to do */ out: atomic_andnot(ZFCP_STATUS_PORT_LINK_TEST, &port->status); put_device(&port->dev); @@ -595,9 +594,6 @@ void zfcp_fc_link_test_work(struct work_ int retval;
set_worker_desc("zadisc%16llx", port->wwpn); /* < WORKER_DESC_LEN=24 */ - get_device(&port->dev); - port->rport_task = RPORT_DEL; - zfcp_scsi_rport_work(&port->rport_work);
/* only issue one test command at one time per port */ if (atomic_read(&port->status) & ZFCP_STATUS_PORT_LINK_TEST)
From: Michael Kelley mikelley@microsoft.com
commit 010c1e1c5741365dbbf44a5a5bb9f30192875c4c upstream.
The Hyper-V host is queried to get the max transfer size that it supports, and this value is used to set max_sectors for the synthetic SCSI controller. However, this max transfer size may be too large for virtual Fibre Channel devices, which are limited to 512 Kbytes. If a larger transfer size is used with a vFC device, Hyper-V always returns an error, and storvsc logs a message like this where the SRB status and SCSI status are both zero:
hv_storvsc <GUID>: tag#197 cmd 0x8a status: scsi 0x0 srb 0x0 hv 0xc0000001
Add logic to limit the max transfer size to 512 Kbytes for vFC devices.
Fixes: 1d3e0980782f ("scsi: storvsc: Correct reporting of Hyper-V I/O size limits") Cc: stable@vger.kernel.org Signed-off-by: Michael Kelley mikelley@microsoft.com Link: https://lore.kernel.org/r/1689887102-32806-1-git-send-email-mikelley@microso... Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/scsi/storvsc_drv.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/drivers/scsi/storvsc_drv.c +++ b/drivers/scsi/storvsc_drv.c @@ -365,6 +365,7 @@ static void storvsc_on_channel_callback( #define STORVSC_FC_MAX_LUNS_PER_TARGET 255 #define STORVSC_FC_MAX_TARGETS 128 #define STORVSC_FC_MAX_CHANNELS 8 +#define STORVSC_FC_MAX_XFER_SIZE ((u32)(512 * 1024))
#define STORVSC_IDE_MAX_LUNS_PER_TARGET 64 #define STORVSC_IDE_MAX_TARGETS 1 @@ -2004,6 +2005,9 @@ static int storvsc_probe(struct hv_devic * protecting it from any weird value. */ max_xfer_bytes = round_down(stor_device->max_transfer_bytes, HV_HYP_PAGE_SIZE); + if (is_fc) + max_xfer_bytes = min(max_xfer_bytes, STORVSC_FC_MAX_XFER_SIZE); + /* max_hw_sectors_kb */ host->max_sectors = max_xfer_bytes >> 9; /*
From: Song Shuai suagrfillet@gmail.com
commit 640c503d7dbd7d34a62099c933f4db0ed77ccbec upstream.
RISC-V Linux exports "va_kernel_pa_offset" in vmcoreinfo to help the Crash utility translate kernel virtual addresses correctly.
This adds the definition of "va_kernel_pa_offset" to the vmcoreinfo documentation.
Fixes: 3335068f8721 ("riscv: Use PUD/P4D/PGD pages for the linear mapping") Link: https://lore.kernel.org/linux-riscv/20230724040649.220279-1-suagrfillet@gmai... Signed-off-by: Song Shuai suagrfillet@gmail.com Reviewed-by: Alexandre Ghiti alexghiti@rivosinc.com Link: https://lore.kernel.org/r/20230724100917.309061-2-suagrfillet@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- Documentation/admin-guide/kdump/vmcoreinfo.rst | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/Documentation/admin-guide/kdump/vmcoreinfo.rst b/Documentation/admin-guide/kdump/vmcoreinfo.rst index c18d94fa6470..f8ebb63b6c5d 100644 --- a/Documentation/admin-guide/kdump/vmcoreinfo.rst +++ b/Documentation/admin-guide/kdump/vmcoreinfo.rst @@ -624,3 +624,9 @@ Used to get the correct ranges: * VMALLOC_START ~ VMALLOC_END : vmalloc() / ioremap() space. * VMEMMAP_START ~ VMEMMAP_END : vmemmap space, used for struct page array. * KERNEL_LINK_ADDR : start address of Kernel link and BPF + +va_kernel_pa_offset +------------------- + +Indicates the offset between the kernel virtual and physical mappings. +Used to translate virtual to physical addresses.
From: Ilya Dryomov idryomov@gmail.com
commit e6e2843230799230fc5deb8279728a7218b0d63c upstream.
If the cluster becomes unavailable, ceph_osdc_notify() may hang even with osd_request_timeout option set because linger_notify_finish_wait() waits for MWatchNotify NOTIFY_COMPLETE message with no associated OSD request in flight -- it's completely asynchronous.
Introduce an additional timeout, derived from the specified notify timeout. While at it, switch both waits to killable, which is more correct.
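A sketch of the return-value handling for such a bounded killable wait, using a hypothetical wrapper rather than the osd_client code: wait_for_completion_killable_timeout() returns the remaining jiffies on completion, 0 on timeout, and a negative errno if interrupted by a fatal signal.

	/* Hypothetical wrapper, illustration only: */
	static int example_wait(struct completion *done, unsigned long timeout_jiffies)
	{
		long left;

		left = wait_for_completion_killable_timeout(done, timeout_jiffies);
		if (left == 0)
			return -ETIMEDOUT;	/* timed out */
		if (left < 0)
			return left;		/* fatal signal, e.g. -ERESTARTSYS */
		return 0;			/* completed in time */
	}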
Cc: stable@vger.kernel.org Signed-off-by: Ilya Dryomov idryomov@gmail.com Reviewed-by: Dongsheng Yang dongsheng.yang@easystack.cn Reviewed-by: Xiubo Li xiubli@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/ceph/osd_client.c | 20 ++++++++++++++------ 1 file changed, 14 insertions(+), 6 deletions(-)
--- a/net/ceph/osd_client.c +++ b/net/ceph/osd_client.c @@ -3334,17 +3334,24 @@ static int linger_reg_commit_wait(struct int ret;
dout("%s lreq %p linger_id %llu\n", __func__, lreq, lreq->linger_id); - ret = wait_for_completion_interruptible(&lreq->reg_commit_wait); + ret = wait_for_completion_killable(&lreq->reg_commit_wait); return ret ?: lreq->reg_commit_error; }
-static int linger_notify_finish_wait(struct ceph_osd_linger_request *lreq) +static int linger_notify_finish_wait(struct ceph_osd_linger_request *lreq, + unsigned long timeout) { - int ret; + long left;
dout("%s lreq %p linger_id %llu\n", __func__, lreq, lreq->linger_id); - ret = wait_for_completion_interruptible(&lreq->notify_finish_wait); - return ret ?: lreq->notify_finish_error; + left = wait_for_completion_killable_timeout(&lreq->notify_finish_wait, + ceph_timeout_jiffies(timeout)); + if (left <= 0) + left = left ?: -ETIMEDOUT; + else + left = lreq->notify_finish_error; /* completed */ + + return left; }
/* @@ -4896,7 +4903,8 @@ int ceph_osdc_notify(struct ceph_osd_cli linger_submit(lreq); ret = linger_reg_commit_wait(lreq); if (!ret) - ret = linger_notify_finish_wait(lreq); + ret = linger_notify_finish_wait(lreq, + msecs_to_jiffies(2 * timeout * MSEC_PER_SEC)); else dout("lreq %p failed to initiate notify %d\n", lreq, ret);
From: Ross Maynard bids.7405@bigpond.com
commit b99225b4fe297d07400f9e2332ecd7347b224f8d upstream.
The SL-A300, B500/5600, and C700 devices no longer auto-load because of "usbnet: Remove over-broad module alias from zaurus." This patch adds IDs for those 3 devices.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=217632 Fixes: 16adf5d07987 ("usbnet: Remove over-broad module alias from zaurus.") Signed-off-by: Ross Maynard bids.7405@bigpond.com Cc: stable@vger.kernel.org Acked-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Reviewed-by: Andrew Lunn andrew@lunn.ch Link: https://lore.kernel.org/r/69b5423b-2013-9fc9-9569-58e707d9bafb@bigpond.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/usb/cdc_ether.c | 21 +++++++++++++++++++++ drivers/net/usb/zaurus.c | 21 +++++++++++++++++++++ 2 files changed, 42 insertions(+)
--- a/drivers/net/usb/cdc_ether.c +++ b/drivers/net/usb/cdc_ether.c @@ -618,6 +618,13 @@ static const struct usb_device_id produc .match_flags = USB_DEVICE_ID_MATCH_INT_INFO | USB_DEVICE_ID_MATCH_DEVICE, .idVendor = 0x04DD, + .idProduct = 0x8005, /* A-300 */ + ZAURUS_FAKE_INTERFACE, + .driver_info = 0, +}, { + .match_flags = USB_DEVICE_ID_MATCH_INT_INFO + | USB_DEVICE_ID_MATCH_DEVICE, + .idVendor = 0x04DD, .idProduct = 0x8006, /* B-500/SL-5600 */ ZAURUS_MASTER_INTERFACE, .driver_info = 0, @@ -625,11 +632,25 @@ static const struct usb_device_id produc .match_flags = USB_DEVICE_ID_MATCH_INT_INFO | USB_DEVICE_ID_MATCH_DEVICE, .idVendor = 0x04DD, + .idProduct = 0x8006, /* B-500/SL-5600 */ + ZAURUS_FAKE_INTERFACE, + .driver_info = 0, +}, { + .match_flags = USB_DEVICE_ID_MATCH_INT_INFO + | USB_DEVICE_ID_MATCH_DEVICE, + .idVendor = 0x04DD, .idProduct = 0x8007, /* C-700 */ ZAURUS_MASTER_INTERFACE, .driver_info = 0, }, { .match_flags = USB_DEVICE_ID_MATCH_INT_INFO + | USB_DEVICE_ID_MATCH_DEVICE, + .idVendor = 0x04DD, + .idProduct = 0x8007, /* C-700 */ + ZAURUS_FAKE_INTERFACE, + .driver_info = 0, +}, { + .match_flags = USB_DEVICE_ID_MATCH_INT_INFO | USB_DEVICE_ID_MATCH_DEVICE, .idVendor = 0x04DD, .idProduct = 0x9031, /* C-750 C-760 */ --- a/drivers/net/usb/zaurus.c +++ b/drivers/net/usb/zaurus.c @@ -289,11 +289,25 @@ static const struct usb_device_id produc .match_flags = USB_DEVICE_ID_MATCH_INT_INFO | USB_DEVICE_ID_MATCH_DEVICE, .idVendor = 0x04DD, + .idProduct = 0x8005, /* A-300 */ + ZAURUS_FAKE_INTERFACE, + .driver_info = (unsigned long)&bogus_mdlm_info, +}, { + .match_flags = USB_DEVICE_ID_MATCH_INT_INFO + | USB_DEVICE_ID_MATCH_DEVICE, + .idVendor = 0x04DD, .idProduct = 0x8006, /* B-500/SL-5600 */ ZAURUS_MASTER_INTERFACE, .driver_info = ZAURUS_PXA_INFO, }, { .match_flags = USB_DEVICE_ID_MATCH_INT_INFO + | USB_DEVICE_ID_MATCH_DEVICE, + .idVendor = 0x04DD, + .idProduct = 0x8006, /* B-500/SL-5600 */ + ZAURUS_FAKE_INTERFACE, + .driver_info = (unsigned long)&bogus_mdlm_info, +}, { + .match_flags = USB_DEVICE_ID_MATCH_INT_INFO | USB_DEVICE_ID_MATCH_DEVICE, .idVendor = 0x04DD, .idProduct = 0x8007, /* C-700 */ @@ -301,6 +315,13 @@ static const struct usb_device_id produc .driver_info = ZAURUS_PXA_INFO, }, { .match_flags = USB_DEVICE_ID_MATCH_INT_INFO + | USB_DEVICE_ID_MATCH_DEVICE, + .idVendor = 0x04DD, + .idProduct = 0x8007, /* C-700 */ + ZAURUS_FAKE_INTERFACE, + .driver_info = (unsigned long)&bogus_mdlm_info, +}, { + .match_flags = USB_DEVICE_ID_MATCH_INT_INFO | USB_DEVICE_ID_MATCH_DEVICE, .idVendor = 0x04DD, .idProduct = 0x9031, /* C-750 C-760 */
From: Xiubo Li xiubli@redhat.com
commit e7e607bd00481745550389a29ecabe33e13d67cf upstream.
Flushing the dirty buffer may take a long time if the cluster is overloaded or if there is a network issue. So we should ping the MDSs periodically to keep the sessions alive, else the MDS will blocklist the kclient.
Cc: stable@vger.kernel.org Link: https://tracker.ceph.com/issues/61843 Signed-off-by: Xiubo Li xiubli@redhat.com Reviewed-by: Milind Changire mchangir@redhat.com Signed-off-by: Ilya Dryomov idryomov@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ceph/mds_client.c | 4 ++-- fs/ceph/mds_client.h | 5 +++++ fs/ceph/super.c | 10 ++++++++++ 3 files changed, 17 insertions(+), 2 deletions(-)
--- a/fs/ceph/mds_client.c +++ b/fs/ceph/mds_client.c @@ -4762,7 +4762,7 @@ static void delayed_work(struct work_str
dout("mdsc delayed_work\n");
- if (mdsc->stopping) + if (mdsc->stopping >= CEPH_MDSC_STOPPING_FLUSHED) return;
mutex_lock(&mdsc->mutex); @@ -4941,7 +4941,7 @@ void send_flush_mdlog(struct ceph_mds_se void ceph_mdsc_pre_umount(struct ceph_mds_client *mdsc) { dout("pre_umount\n"); - mdsc->stopping = 1; + mdsc->stopping = CEPH_MDSC_STOPPING_BEGIN;
ceph_mdsc_iterate_sessions(mdsc, send_flush_mdlog, true); ceph_mdsc_iterate_sessions(mdsc, lock_unlock_session, false); --- a/fs/ceph/mds_client.h +++ b/fs/ceph/mds_client.h @@ -380,6 +380,11 @@ struct cap_wait { int want; };
+enum { + CEPH_MDSC_STOPPING_BEGIN = 1, + CEPH_MDSC_STOPPING_FLUSHED = 2, +}; + /* * mds client state */ --- a/fs/ceph/super.c +++ b/fs/ceph/super.c @@ -1374,6 +1374,16 @@ static void ceph_kill_sb(struct super_bl ceph_mdsc_pre_umount(fsc->mdsc); flush_fs_workqueues(fsc);
+ /* + * Though the kill_anon_super() will finally trigger the + * sync_filesystem() anyway, we still need to do it here + * and then bump the stage of shutdown to stop the work + * queue as earlier as possible. + */ + sync_filesystem(s); + + fsc->mdsc->stopping = CEPH_MDSC_STOPPING_FLUSHED; + kill_anon_super(s);
fsc->client->extra_mon_dispatch = NULL;
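A minimal sketch (illustrative names, not ceph code) of the staged shutdown this patch introduces: the delayed keepalive work keeps running through the "begin" stage so the MDSs are still pinged while dirty data is flushed, and only stops once the stage is bumped to "flushed" after sync_filesystem().

enum {
	EXAMPLE_STOPPING_BEGIN   = 1,	/* umount started, still flushing dirty data */
	EXAMPLE_STOPPING_FLUSHED = 2,	/* sync_filesystem() done, stop the work */
};

static void example_delayed_work(int stopping_stage)
{
	if (stopping_stage >= EXAMPLE_STOPPING_FLUSHED)
		return;			/* no more keepalives after the flush */

	/* ... renew caps and send keepalives to the MDSs here ... */
}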
From: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org
commit da042eb4f061a0b54aedadcaa15391490c48e1ad upstream.
The OF node reference obtained from of_parse_phandle() should be dropped if the node is not compatible with arm,scmi-shmem.
Fixes: 507cd4d2c5eb ("firmware: arm_scmi: Add compatibility checks for shmem node") Cc: stable@vger.kernel.org Signed-off-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org Reviewed-by: Cristian Marussi cristian.marussi@arm.com Link: https://lore.kernel.org/r/20230719061652.8850-1-krzysztof.kozlowski@linaro.o... Signed-off-by: Sudeep Holla sudeep.holla@arm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/firmware/arm_scmi/mailbox.c | 4 +++- drivers/firmware/arm_scmi/smc.c | 4 +++- 2 files changed, 6 insertions(+), 2 deletions(-)
--- a/drivers/firmware/arm_scmi/mailbox.c +++ b/drivers/firmware/arm_scmi/mailbox.c @@ -166,8 +166,10 @@ static int mailbox_chan_setup(struct scm return -ENOMEM;
shmem = of_parse_phandle(cdev->of_node, "shmem", idx); - if (!of_device_is_compatible(shmem, "arm,scmi-shmem")) + if (!of_device_is_compatible(shmem, "arm,scmi-shmem")) { + of_node_put(shmem); return -ENXIO; + }
ret = of_address_to_resource(shmem, 0, &res); of_node_put(shmem); --- a/drivers/firmware/arm_scmi/smc.c +++ b/drivers/firmware/arm_scmi/smc.c @@ -118,8 +118,10 @@ static int smc_chan_setup(struct scmi_ch return -ENOMEM;
np = of_parse_phandle(cdev->of_node, "shmem", 0); - if (!of_device_is_compatible(np, "arm,scmi-shmem")) + if (!of_device_is_compatible(np, "arm,scmi-shmem")) { + of_node_put(np); return -ENXIO; + }
ret = of_address_to_resource(np, 0, &res); of_node_put(np);
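As a general illustration, here is a hedged sketch (hypothetical function name) of the reference-counting rule the fix enforces: every exit path taken after of_parse_phandle() succeeds must drop the node reference with of_node_put(), including the new -ENXIO error path.

#include <linux/of.h>
#include <linux/of_address.h>

static int example_shmem_to_resource(struct device_node *parent,
				     struct resource *res)
{
	struct device_node *shmem;
	int ret;

	shmem = of_parse_phandle(parent, "shmem", 0);
	if (!of_device_is_compatible(shmem, "arm,scmi-shmem")) {
		of_node_put(shmem);	/* drop the reference on the error path too */
		return -ENXIO;
	}

	ret = of_address_to_resource(shmem, 0, res);
	of_node_put(shmem);		/* the success path already dropped it */
	return ret;
}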
From: gaoming gaoming20@hihonor.com
commit daf60d6cca26e50d65dac374db92e58de745ad26 upstream.
The call stack shown below is a scenario from the Linux 4.19 kernel. A memory allocation via kmalloc_array() in the exfat fs failed due to system memory fragmentation, so the inserted U-disk was not recognized. Devices such as U-disks using the exfat file system are pluggable and may be inserted into the system at any time, but long-running systems cannot guarantee the availability of physically contiguous memory. Therefore, it's necessary to address this issue.
Binder:2632_6: page allocation failure: order:4, mode:0x6040c0(GFP_KERNEL|__GFP_COMP), nodemask=(null) Call trace: [242178.097582] dump_backtrace+0x0/0x4 [242178.097589] dump_stack+0xf4/0x134 [242178.097598] warn_alloc+0xd8/0x144 [242178.097603] __alloc_pages_nodemask+0x1364/0x1384 [242178.097608] kmalloc_order+0x2c/0x510 [242178.097612] kmalloc_order_trace+0x40/0x16c [242178.097618] __kmalloc+0x360/0x408 [242178.097624] load_alloc_bitmap+0x160/0x284 [242178.097628] exfat_fill_super+0xa3c/0xe7c [242178.097635] mount_bdev+0x2e8/0x3a0 [242178.097638] exfat_fs_mount+0x40/0x50 [242178.097643] mount_fs+0x138/0x2e8 [242178.097649] vfs_kern_mount+0x90/0x270 [242178.097655] do_mount+0x798/0x173c [242178.097659] ksys_mount+0x114/0x1ac [242178.097665] __arm64_sys_mount+0x24/0x34 [242178.097671] el0_svc_common+0xb8/0x1b8 [242178.097676] el0_svc_handler+0x74/0x90 [242178.097681] el0_svc+0x8/0x340
By analyzing the exfat code, we found that physically contiguous memory is not required here, so switching to kvmalloc_array() solves this problem.
Cc: stable@vger.kernel.org Signed-off-by: gaoming gaoming20@hihonor.com Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/exfat/balloc.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/fs/exfat/balloc.c +++ b/fs/exfat/balloc.c @@ -69,7 +69,7 @@ static int exfat_allocate_bitmap(struct } sbi->map_sectors = ((need_map_size - 1) >> (sb->s_blocksize_bits)) + 1; - sbi->vol_amap = kmalloc_array(sbi->map_sectors, + sbi->vol_amap = kvmalloc_array(sbi->map_sectors, sizeof(struct buffer_head *), GFP_KERNEL); if (!sbi->vol_amap) return -ENOMEM; @@ -84,7 +84,7 @@ static int exfat_allocate_bitmap(struct while (j < i) brelse(sbi->vol_amap[j++]);
- kfree(sbi->vol_amap); + kvfree(sbi->vol_amap); sbi->vol_amap = NULL; return -EIO; } @@ -138,7 +138,7 @@ void exfat_free_bitmap(struct exfat_sb_i for (i = 0; i < sbi->map_sectors; i++) __brelse(sbi->vol_amap[i]);
- kfree(sbi->vol_amap); + kvfree(sbi->vol_amap); }
int exfat_set_bitmap(struct inode *inode, unsigned int clu, bool sync)
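A hedged sketch (illustrative names, not the exfat implementation) of the allocation pattern the patch switches to: kvmalloc_array() first tries a physically contiguous kmalloc and transparently falls back to vmalloc when high-order pages are unavailable, so the matching free must be kvfree().

#include <linux/buffer_head.h>
#include <linux/mm.h>
#include <linux/slab.h>

static struct buffer_head **example_alloc_bh_table(unsigned long nr_sectors)
{
	/* only virtual contiguity is needed for this table of pointers */
	return kvmalloc_array(nr_sectors, sizeof(struct buffer_head *),
			      GFP_KERNEL);
}

static void example_free_bh_table(struct buffer_head **table)
{
	kvfree(table);	/* correct for both the kmalloc and the vmalloc case */
}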
From: Namjae Jeon linkinjeon@kernel.org
commit d42334578eba1390859012ebb91e1e556d51db49 upstream.
exfat_extract_uni_name copies characters from a given file name entry into the 'uniname' variable. This variable is actually defined on the stack of the exfat_readdir() function. According to the definition of the 'exfat_uni_name' type, the file name should be limited to 255 characters (plus space for the null terminator), but the exfat_get_uniname_from_ext_entry() function can write more characters because there is no check for filename entries exceeding the maximum filename length. This patch adds a check so that filename characters are no longer copied once the maximum filename length is exceeded.
Cc: stable@vger.kernel.org Cc: Yuezhang Mo Yuezhang.Mo@sony.com Reported-by: Maxim Suhanov dfirblog@gmail.com Reviewed-by: Sungjong Seo sj1557.seo@samsung.com Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/exfat/dir.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
--- a/fs/exfat/dir.c +++ b/fs/exfat/dir.c @@ -34,6 +34,7 @@ static int exfat_get_uniname_from_ext_en { int i, err; struct exfat_entry_set_cache es; + unsigned int uni_len = 0, len;
err = exfat_get_dentry_set(&es, sb, p_dir, entry, ES_ALL_ENTRIES); if (err) @@ -52,7 +53,10 @@ static int exfat_get_uniname_from_ext_en if (exfat_get_entry_type(ep) != TYPE_EXTEND) break;
- exfat_extract_uni_name(ep, uniname); + len = exfat_extract_uni_name(ep, uniname); + uni_len += len; + if (len != EXFAT_FILE_NAME_LEN || uni_len >= MAX_NAME_LENGTH) + break; uniname += EXFAT_FILE_NAME_LEN; }
@@ -1079,7 +1083,8 @@ rewind: if (entry_type == TYPE_EXTEND) { unsigned short entry_uniname[16], unichar;
- if (step != DIRENT_STEP_NAME) { + if (step != DIRENT_STEP_NAME || + name_len >= MAX_NAME_LENGTH) { step = DIRENT_STEP_FILE; continue; }
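A minimal plain-C sketch of the bound being added (constants and names are illustrative, not the exfat implementation): stop consuming 15-character name entries once the accumulated length reaches the maximum name length, so later entries can never run past the on-stack name buffer.

#include <stddef.h>

#define ENTRY_NAME_LEN 15	/* characters per extension entry */
#define MAX_NAME_LEN   255	/* maximum file name length */

static size_t copy_name_entries(unsigned short *dst, size_t nr_entries,
				const unsigned short (*entries)[ENTRY_NAME_LEN])
{
	size_t total = 0, i, j;

	for (i = 0; i < nr_entries; i++) {
		for (j = 0; j < ENTRY_NAME_LEN && total < MAX_NAME_LEN; j++)
			dst[total++] = entries[i][j];
		if (total >= MAX_NAME_LEN)
			break;	/* refuse to write past the name buffer */
	}
	return total;
}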
From: Sungjong Seo sj1557.seo@samsung.com
commit ff84772fd45d486e4fc78c82e2f70ce5333543e6 upstream.
There is a potential deadlock reported by syzbot as below:
====================================================== WARNING: possible circular locking dependency detected 6.4.0-next-20230707-syzkaller #0 Not tainted ------------------------------------------------------ syz-executor330/5073 is trying to acquire lock: ffff8880218527a0 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_lock_killable include/linux/mmap_lock.h:151 [inline] ffff8880218527a0 (&mm->mmap_lock){++++}-{3:3}, at: get_mmap_lock_carefully mm/memory.c:5293 [inline] ffff8880218527a0 (&mm->mmap_lock){++++}-{3:3}, at: lock_mm_and_find_vma+0x369/0x510 mm/memory.c:5344 but task is already holding lock: ffff888019f760e0 (&sbi->s_lock){+.+.}-{3:3}, at: exfat_iterate+0x117/0xb50 fs/exfat/dir.c:232
which lock already depends on the new lock.
Chain exists of: &mm->mmap_lock --> mapping.invalidate_lock#3 --> &sbi->s_lock
Possible unsafe locking scenario:
CPU0 CPU1 ---- ---- lock(&sbi->s_lock); lock(mapping.invalidate_lock#3); lock(&sbi->s_lock); rlock(&mm->mmap_lock);
Let's avoid the above potential deadlock condition by moving the dir_emit*() calls out of sbi->s_lock coverage.
Fixes: ca06197382bd ("exfat: add directory operations") Cc: stable@vger.kernel.org #v5.7+ Reported-by: syzbot+1741a5d9b79989c10bdc@syzkaller.appspotmail.com Link: https://lore.kernel.org/lkml/00000000000078ee7e060066270b@google.com/T/#u Tested-by: syzbot+1741a5d9b79989c10bdc@syzkaller.appspotmail.com Signed-off-by: Sungjong Seo sj1557.seo@samsung.com Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/exfat/dir.c | 27 ++++++++++++--------------- 1 file changed, 12 insertions(+), 15 deletions(-)
--- a/fs/exfat/dir.c +++ b/fs/exfat/dir.c @@ -218,7 +218,10 @@ static void exfat_free_namebuf(struct ex exfat_init_namebuf(nb); }
-/* skip iterating emit_dots when dir is empty */ +/* + * Before calling dir_emit*(), sbi->s_lock should be released + * because page fault can occur in dir_emit*(). + */ #define ITER_POS_FILLED_DOTS (2) static int exfat_iterate(struct file *file, struct dir_context *ctx) { @@ -233,11 +236,10 @@ static int exfat_iterate(struct file *fi int err = 0, fake_offset = 0;
exfat_init_namebuf(nb); - mutex_lock(&EXFAT_SB(sb)->s_lock);
cpos = ctx->pos; if (!dir_emit_dots(file, ctx)) - goto unlock; + goto out;
if (ctx->pos == ITER_POS_FILLED_DOTS) { cpos = 0; @@ -249,16 +251,18 @@ static int exfat_iterate(struct file *fi /* name buffer should be allocated before use */ err = exfat_alloc_namebuf(nb); if (err) - goto unlock; + goto out; get_new: + mutex_lock(&EXFAT_SB(sb)->s_lock); + if (ei->flags == ALLOC_NO_FAT_CHAIN && cpos >= i_size_read(inode)) goto end_of_dir;
err = exfat_readdir(inode, &cpos, &de); if (err) { /* - * At least we tried to read a sector. Move cpos to next sector - * position (should be aligned). + * At least we tried to read a sector. + * Move cpos to next sector position (should be aligned). */ if (err == -EIO) { cpos += 1 << (sb->s_blocksize_bits); @@ -281,16 +285,10 @@ get_new: inum = iunique(sb, EXFAT_ROOT_INO); }
- /* - * Before calling dir_emit(), sb_lock should be released. - * Because page fault can occur in dir_emit() when the size - * of buffer given from user is larger than one page size. - */ mutex_unlock(&EXFAT_SB(sb)->s_lock); if (!dir_emit(ctx, nb->lfn, strlen(nb->lfn), inum, (de.attr & ATTR_SUBDIR) ? DT_DIR : DT_REG)) - goto out_unlocked; - mutex_lock(&EXFAT_SB(sb)->s_lock); + goto out; ctx->pos = cpos; goto get_new;
@@ -298,9 +296,8 @@ end_of_dir: if (!cpos && fake_offset) cpos = ITER_POS_FILLED_DOTS; ctx->pos = cpos; -unlock: mutex_unlock(&EXFAT_SB(sb)->s_lock); -out_unlocked: +out: /* * To improve performance, free namebuf after unlock sb_lock. * If namebuf is not allocated, this function do nothing
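A hedged sketch of the resulting lock ordering (example_read_dirent() is a hypothetical stand-in for the real readdir helper): sbi->s_lock is dropped before every dir_emit() call, because dir_emit() copies to a user buffer and may take mm->mmap_lock in the page-fault path, which can in turn wait on filesystem locks.

#include <linux/fs.h>
#include <linux/mutex.h>

struct example_dirent {
	u64 ino;
	char name[256];
	int len;
	unsigned int type;
};

/* hypothetical helper; must be called with s_lock held */
int example_read_dirent(struct file *file, loff_t *pos, struct example_dirent *de);

static int example_emit_all(struct file *file, struct dir_context *ctx,
			    struct mutex *s_lock)
{
	struct example_dirent de;
	int err;

	for (;;) {
		mutex_lock(s_lock);
		err = example_read_dirent(file, &ctx->pos, &de);
		mutex_unlock(s_lock);	/* never held across dir_emit() */
		if (err)
			return err;

		if (!dir_emit(ctx, de.name, de.len, de.ino, de.type))
			return 0;	/* user buffer full; no lock held here */
	}
}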
From: Olivier Maignial olivier.maignial@hotmail.fr
commit 8544cda94dae6be3f1359539079c68bb731428b1 upstream.
Reading ECC status is failing.
tx58cxgxsxraix_ecc_get_status() is using on-stack buffer for SPINAND_GET_FEATURE_OP() output. It is not suitable for DMA needs of spi-mem.
Fix this by using the spi-mem operations dedicated buffer spinand->scratchbuf.
See spinand->scratchbuf: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/incl... spi_mem_check_op(): https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/driv...
Fixes: 10949af1681d ("mtd: spinand: Add initial support for Toshiba TC58CVG2S0H") Cc: stable@vger.kernel.org Signed-off-by: Olivier Maignial olivier.maignial@hotmail.fr Signed-off-by: Miquel Raynal miquel.raynal@bootlin.com Link: https://lore.kernel.org/linux-mtd/DB4P250MB1032553D05FBE36DEE0D311EFE23A@DB4... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/mtd/nand/spi/toshiba.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/mtd/nand/spi/toshiba.c +++ b/drivers/mtd/nand/spi/toshiba.c @@ -73,7 +73,7 @@ static int tx58cxgxsxraix_ecc_get_status { struct nand_device *nand = spinand_to_nand(spinand); u8 mbf = 0; - struct spi_mem_op op = SPINAND_GET_FEATURE_OP(0x30, &mbf); + struct spi_mem_op op = SPINAND_GET_FEATURE_OP(0x30, spinand->scratchbuf);
switch (status & STATUS_ECC_MASK) { case STATUS_ECC_NO_BITFLIPS: @@ -92,7 +92,7 @@ static int tx58cxgxsxraix_ecc_get_status if (spi_mem_exec_op(spinand->spimem, &op)) return nanddev_get_ecc_conf(nand)->strength;
- mbf >>= 4; + mbf = *(spinand->scratchbuf) >> 4;
if (WARN_ON(mbf > nanddev_get_ecc_conf(nand)->strength || !mbf)) return nanddev_get_ecc_conf(nand)->strength;
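A hedged sketch of the rule behind this fix (hypothetical helper name): buffers handed to spi-mem operations may be mapped for DMA, so they must not live on the stack; the per-device spinand->scratchbuf exists for exactly this purpose, and the result is copied out of it after spi_mem_exec_op(). The Winbond fix that follows applies the same pattern.

#include <linux/mtd/spinand.h>

static int example_read_feature_30(struct spinand_device *spinand, u8 *out)
{
	/* DMA-safe buffer instead of an on-stack u8 */
	struct spi_mem_op op = SPINAND_GET_FEATURE_OP(0x30, spinand->scratchbuf);
	int ret;

	ret = spi_mem_exec_op(spinand->spimem, &op);
	if (ret)
		return ret;

	*out = *spinand->scratchbuf;	/* copy the result out of the scratch buffer */
	return 0;
}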
From: Olivier Maignial olivier.maignial@hotmail.fr
commit f5a05060670a4d8d6523afc7963eb559c2e3615f upstream.
Reading ECC status is failing.
w25n02kv_ecc_get_status() is using on-stack buffer for SPINAND_GET_FEATURE_OP() output. It is not suitable for DMA needs of spi-mem.
Fix this by using the spi-mem operations dedicated buffer spinand->scratchbuf.
See spinand->scratchbuf: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/incl... spi_mem_check_op(): https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/driv...
Fixes: 6154c7a58348 ("mtd: spinand: winbond: add Winbond W25N02KV flash support") Cc: stable@vger.kernel.org Signed-off-by: Olivier Maignial olivier.maignial@hotmail.fr Signed-off-by: Miquel Raynal miquel.raynal@bootlin.com Link: https://lore.kernel.org/linux-mtd/DB4P250MB1032EDB9E36B764A33769039FE23A@DB4... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/mtd/nand/spi/winbond.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/mtd/nand/spi/winbond.c +++ b/drivers/mtd/nand/spi/winbond.c @@ -108,7 +108,7 @@ static int w25n02kv_ecc_get_status(struc { struct nand_device *nand = spinand_to_nand(spinand); u8 mbf = 0; - struct spi_mem_op op = SPINAND_GET_FEATURE_OP(0x30, &mbf); + struct spi_mem_op op = SPINAND_GET_FEATURE_OP(0x30, spinand->scratchbuf);
switch (status & STATUS_ECC_MASK) { case STATUS_ECC_NO_BITFLIPS: @@ -126,7 +126,7 @@ static int w25n02kv_ecc_get_status(struc if (spi_mem_exec_op(spinand->spimem, &op)) return nanddev_get_ecc_conf(nand)->strength;
- mbf >>= 4; + mbf = *(spinand->scratchbuf) >> 4;
if (WARN_ON(mbf > nanddev_get_ecc_conf(nand)->strength || !mbf)) return nanddev_get_ecc_conf(nand)->strength;
From: Arseniy Krasnov AVKrasnov@sberdevices.ru
commit 7e6b04f9238eab0f684fafd158c1f32ea65b9eaa upstream.
It is incorrect to calculate the number of OOB bytes available to the ECC engine using an "already known" ECC step size (1024 bytes here). The number of such bytes must be the whole OOB area except the 2 bytes reserved for the bad block marker, while the proper ECC step size and strength will be selected by the ECC logic.
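For example, on a hypothetical chip with a 4096-byte page and a 256-byte OOB area, the old code passed 256 - 2 * (4096 / 1024) = 248 bytes to nand_ecc_choose_conf(), while the full 254 bytes (256 minus the 2-byte bad block marker) are actually available for ECC selection.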
Fixes: 8fae856c5350 ("mtd: rawnand: meson: add support for Amlogic NAND flash controller") Cc: Stable@vger.kernel.org Signed-off-by: Arseniy Krasnov AVKrasnov@sberdevices.ru Signed-off-by: Miquel Raynal miquel.raynal@bootlin.com Link: https://lore.kernel.org/linux-mtd/20230705065211.293500-1-AVKrasnov@sberdevi... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/mtd/nand/raw/meson_nand.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
--- a/drivers/mtd/nand/raw/meson_nand.c +++ b/drivers/mtd/nand/raw/meson_nand.c @@ -1184,7 +1184,6 @@ static int meson_nand_attach_chip(struct struct meson_nfc *nfc = nand_get_controller_data(nand); struct meson_nfc_nand_chip *meson_chip = to_meson_nand(nand); struct mtd_info *mtd = nand_to_mtd(nand); - int nsectors = mtd->writesize / 1024; int ret;
if (!mtd->name) { @@ -1202,7 +1201,7 @@ static int meson_nand_attach_chip(struct nand->options |= NAND_NO_SUBPAGE_WRITE;
ret = nand_ecc_choose_conf(nand, nfc->data->ecc_caps, - mtd->oobsize - 2 * nsectors); + mtd->oobsize - 2); if (ret) { dev_err(nfc->dev, "failed to ECC init\n"); return -EINVAL;
From: Song Shuai suagrfillet@gmail.com
commit fbe7d19d2b7fcbd38905ba9f691be8f245c6faa6 upstream.
Since RISC-V Linux v6.4, the commit 3335068f8721 ("riscv: Use PUD/P4D/PGD pages for the linear mapping") changes phys_ram_base from the physical start of the kernel to the actual start of the DRAM.
The Crash utility's VTOP() still uses phys_ram_base and kernel_map.virt_addr to translate kernel virtual addresses, which made Crash fail with Linux v6.4 [1].
Export kernel_map.va_kernel_pa_offset in vmcoreinfo to help Crash translate the kernel virtual address correctly.
Fixes: 3335068f8721 ("riscv: Use PUD/P4D/PGD pages for the linear mapping") Link: https://lore.kernel.org/linux-riscv/20230724040649.220279-1-suagrfillet@gmai... [1] Signed-off-by: Song Shuai suagrfillet@gmail.com Reviewed-by: Xianting Tian xianting.tian@linux.alibaba.com Reviewed-by: Alexandre Ghiti alexghiti@rivosinc.com Link: https://lore.kernel.org/r/20230724100917.309061-1-suagrfillet@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/riscv/kernel/crash_core.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/arch/riscv/kernel/crash_core.c b/arch/riscv/kernel/crash_core.c index b351a3c01355..55f1d7856b54 100644 --- a/arch/riscv/kernel/crash_core.c +++ b/arch/riscv/kernel/crash_core.c @@ -18,4 +18,6 @@ void arch_crash_save_vmcoreinfo(void) vmcoreinfo_append_str("NUMBER(MODULES_END)=0x%lx\n", MODULES_END); #endif vmcoreinfo_append_str("NUMBER(KERNEL_LINK_ADDR)=0x%lx\n", KERNEL_LINK_ADDR); + vmcoreinfo_append_str("NUMBER(va_kernel_pa_offset)=0x%lx\n", + kernel_map.va_kernel_pa_offset); }
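A hedged sketch of how a post-mortem tool can use the exported number (plain C, illustrative function name): for addresses inside the kernel image, the physical address is the virtual address minus va_kernel_pa_offset, which equals kernel_map.virt_addr - kernel_map.phys_addr; linear-map addresses are still translated via phys_ram_base/PAGE_OFFSET.

#include <stdint.h>

/* translate a kernel-image virtual address to a physical address */
static uint64_t example_kimage_vtop(uint64_t vaddr, uint64_t va_kernel_pa_offset)
{
	return vaddr - va_kernel_pa_offset;
}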
From: Jiri Olsa jolsa@kernel.org
commit f2c67a3e60d1071b65848efaa8c3b66c363dd025 upstream.
The nesting protection in bpf_perf_event_output relies on disabled preemption, which is guaranteed for kprobes and tracepoints.
However, bpf_perf_event_output can also be called from uprobe context through the bpf_prog_run_array_sleepable function, which disables migration but keeps preemption enabled.
This can cause the task to be preempted by another one inside the nesting protection and eventually lead to two tasks using the same perf_sample_data buffer, causing crashes like:
kernel tried to execute NX-protected page - exploit attempt? (uid: 0) BUG: unable to handle page fault for address: ffffffff82be3eea ... Call Trace: ? __die+0x1f/0x70 ? page_fault_oops+0x176/0x4d0 ? exc_page_fault+0x132/0x230 ? asm_exc_page_fault+0x22/0x30 ? perf_output_sample+0x12b/0x910 ? perf_event_output+0xd0/0x1d0 ? bpf_perf_event_output+0x162/0x1d0 ? bpf_prog_c6271286d9a4c938_krava1+0x76/0x87 ? __uprobe_perf_func+0x12b/0x540 ? uprobe_dispatcher+0x2c4/0x430 ? uprobe_notify_resume+0x2da/0xce0 ? atomic_notifier_call_chain+0x7b/0x110 ? exit_to_user_mode_prepare+0x13e/0x290 ? irqentry_exit_to_user_mode+0x5/0x30 ? asm_exc_int3+0x35/0x40
Fix this by disabling preemption in bpf_perf_event_output.
Cc: stable@vger.kernel.org Fixes: 8c7dcb84e3b7 ("bpf: implement sleepable uprobes by chaining gps") Acked-by: Hou Tao houtao1@huawei.com Signed-off-by: Jiri Olsa jolsa@kernel.org Link: https://lore.kernel.org/r/20230725084206.580930-2-jolsa@kernel.org Signed-off-by: Alexei Starovoitov ast@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/trace/bpf_trace.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-)
--- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -661,8 +661,7 @@ static DEFINE_PER_CPU(int, bpf_trace_nes BPF_CALL_5(bpf_perf_event_output, struct pt_regs *, regs, struct bpf_map *, map, u64, flags, void *, data, u64, size) { - struct bpf_trace_sample_data *sds = this_cpu_ptr(&bpf_trace_sds); - int nest_level = this_cpu_inc_return(bpf_trace_nest_level); + struct bpf_trace_sample_data *sds; struct perf_raw_record raw = { .frag = { .size = size, @@ -670,7 +669,11 @@ BPF_CALL_5(bpf_perf_event_output, struct }, }; struct perf_sample_data *sd; - int err; + int nest_level, err; + + preempt_disable(); + sds = this_cpu_ptr(&bpf_trace_sds); + nest_level = this_cpu_inc_return(bpf_trace_nest_level);
if (WARN_ON_ONCE(nest_level > ARRAY_SIZE(sds->sds))) { err = -EBUSY; @@ -688,9 +691,9 @@ BPF_CALL_5(bpf_perf_event_output, struct perf_sample_save_raw_data(sd, &raw);
err = __bpf_perf_event_output(regs, map, flags, sd); - out: this_cpu_dec(bpf_trace_nest_level); + preempt_enable(); return err; }
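A hedged sketch (illustrative names, not the bpf code) of the pattern the fix restores: a per-CPU nesting counter only protects per-CPU buffers if the task can neither migrate nor be preempted between the increment and the decrement, so the whole window is wrapped in preempt_disable()/preempt_enable().

#include <linux/percpu.h>
#include <linux/preempt.h>

#define NR_SLOTS 3

struct example_slots {
	char buf[NR_SLOTS][256];
};

static DEFINE_PER_CPU(struct example_slots, example_slots);
static DEFINE_PER_CPU(int, example_nest_level);

static char *example_get_slot(void)
{
	int level;

	preempt_disable();	/* keep inc/use/dec on one CPU, with no preemption */
	level = this_cpu_inc_return(example_nest_level);
	if (level > NR_SLOTS) {	/* nested deeper than expected, e.g. from irq */
		this_cpu_dec(example_nest_level);
		preempt_enable();
		return NULL;
	}
	return this_cpu_ptr(&example_slots)->buf[level - 1];
}

static void example_put_slot(void)
{
	this_cpu_dec(example_nest_level);
	preempt_enable();
}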
From: Dinh Nguyen dinguyen@kernel.org
commit db66795f61354c373ecdadbdae1ed253a96c47cb upstream.
The correct dts property for the SCL falling time is "i2c-scl-falling-time-ns".
Fixes: c8da1d15b8a4 ("arm64: dts: stratix10: i2c clock running out of spec") Cc: stable@vger.kernel.org Signed-off-by: Dinh Nguyen dinguyen@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts | 2 +- arch/arm64/boot/dts/altera/socfpga_stratix10_socdk_nand.dts | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-)
--- a/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts +++ b/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts @@ -145,7 +145,7 @@ status = "okay"; clock-frequency = <100000>; i2c-sda-falling-time-ns = <890>; /* hcnt */ - i2c-sdl-falling-time-ns = <890>; /* lcnt */ + i2c-scl-falling-time-ns = <890>; /* lcnt */
pinctrl-names = "default", "gpio"; pinctrl-0 = <&i2c1_pmx_func>; --- a/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk_nand.dts +++ b/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk_nand.dts @@ -141,7 +141,7 @@ status = "okay"; clock-frequency = <100000>; i2c-sda-falling-time-ns = <890>; /* hcnt */ - i2c-sdl-falling-time-ns = <890>; /* lcnt */ + i2c-scl-falling-time-ns = <890>; /* lcnt */
adc@14 { compatible = "lltc,ltc2497";
From: Laszlo Ersek lersek@redhat.com
commit 9bc3047374d5bec163e83e743709e23753376f0c upstream.
Commit a096ccca6e50 initializes the "sk_uid" field in the protocol socket (struct sock) from the "/dev/net/tun" device node's owner UID. Per original commit 86741ec25462 ("net: core: Add a UID field to struct sock.", 2016-11-04), that's wrong: the idea is to cache the UID of the userspace process that creates the socket. Commit 86741ec25462 mentions socket() and accept(); with "tun", the action that creates the socket is open("/dev/net/tun").
Therefore the device node's owner UID is irrelevant. In most cases, "/dev/net/tun" will be owned by root, so in practice, commit a096ccca6e50 has no observable effect:
- before, "sk_uid" would be zero, due to undefined behavior (CVE-2023-1076),
- after, "sk_uid" would be zero, due to "/dev/net/tun" being owned by root.
What matters is the (fs)UID of the process performing the open(), so cache that in "sk_uid".
Cc: Eric Dumazet edumazet@google.com Cc: Lorenzo Colitti lorenzo@google.com Cc: Paolo Abeni pabeni@redhat.com Cc: Pietro Borrello borrello@diag.uniroma1.it Cc: netdev@vger.kernel.org Cc: stable@vger.kernel.org Fixes: a096ccca6e50 ("tun: tun_chr_open(): correctly initialize socket uid") Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2173435 Signed-off-by: Laszlo Ersek lersek@redhat.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/tun.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -3469,7 +3469,7 @@ static int tun_chr_open(struct inode *in tfile->socket.file = file; tfile->socket.ops = &tun_socket_ops;
- sock_init_data_uid(&tfile->socket, &tfile->sk, inode->i_uid); + sock_init_data_uid(&tfile->socket, &tfile->sk, current_fsuid());
tfile->sk.sk_write_space = tun_sock_write_space; tfile->sk.sk_sndbuf = INT_MAX;
From: Laszlo Ersek lersek@redhat.com
commit 5c9241f3ceab3257abe2923a59950db0dc8bb737 upstream.
Commit 66b2c338adce initializes the "sk_uid" field in the protocol socket (struct sock) from the "/dev/tapX" device node's owner UID. Per original commit 86741ec25462 ("net: core: Add a UID field to struct sock.", 2016-11-04), that's wrong: the idea is to cache the UID of the userspace process that creates the socket. Commit 86741ec25462 mentions socket() and accept(); with "tap", the action that creates the socket is open("/dev/tapX").
Therefore the device node's owner UID is irrelevant. In most cases, "/dev/tapX" will be owned by root, so in practice, commit 66b2c338adce has no observable effect:
- before, "sk_uid" would be zero, due to undefined behavior (CVE-2023-1076),
- after, "sk_uid" would be zero, due to "/dev/tapX" being owned by root.
What matters is the (fs)UID of the process performing the open(), so cache that in "sk_uid".
Cc: Eric Dumazet edumazet@google.com Cc: Lorenzo Colitti lorenzo@google.com Cc: Paolo Abeni pabeni@redhat.com Cc: Pietro Borrello borrello@diag.uniroma1.it Cc: netdev@vger.kernel.org Cc: stable@vger.kernel.org Fixes: 66b2c338adce ("tap: tap_open(): correctly initialize socket uid") Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2173435 Signed-off-by: Laszlo Ersek lersek@redhat.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/tap.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/tap.c +++ b/drivers/net/tap.c @@ -534,7 +534,7 @@ static int tap_open(struct inode *inode, q->sock.state = SS_CONNECTED; q->sock.file = file; q->sock.ops = &tap_socket_ops; - sock_init_data_uid(&q->sock, &q->sk, inode->i_uid); + sock_init_data_uid(&q->sock, &q->sk, current_fsuid()); q->sk.sk_write_space = tap_sock_write_space; q->sk.sk_destruct = tap_sock_destruct; q->flags = IFF_VNET_HDR | IFF_NO_PI | IFF_TAP;
From: Paul Fertser fercerpav@gmail.com
commit 421033deb91521aa6a9255e495cb106741a52275 upstream.
On DBDC devices the first (internal) phy is only capable of using the 2.4 GHz band, and the 5 GHz band is exposed via a separate phy object, so avoid the false advertising.
Reported-by: Rani Hod rani.hod@gmail.com Closes: https://github.com/openwrt/openwrt/pull/12361 Fixes: 7660a1bd0c22 ("mt76: mt7615: register ext_phy if DBDC is detected") Cc: stable@vger.kernel.org Signed-off-by: Paul Fertser fercerpav@gmail.com Reviewed-by: Simon Horman simon.horman@corigine.com Acked-by: Felix Fietkau nbd@nbd.name Signed-off-by: Kalle Valo kvalo@kernel.org Link: https://lore.kernel.org/r/20230605073408.8699-1-fercerpav@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c +++ b/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c @@ -128,12 +128,12 @@ mt7615_eeprom_parse_hw_band_cap(struct m case MT_EE_5GHZ: dev->mphy.cap.has_5ghz = true; break; - case MT_EE_2GHZ: - dev->mphy.cap.has_2ghz = true; - break; case MT_EE_DBDC: dev->dbdc_support = true; fallthrough; + case MT_EE_2GHZ: + dev->mphy.cap.has_2ghz = true; + break; default: dev->mphy.cap.has_2ghz = true; dev->mphy.cap.has_5ghz = true;
From: Michael Kelley mikelley@microsoft.com
commit d5ace2a776442d80674eff9ed42e737f7dd95056 upstream.
On hardware that supports Indirect Branch Tracking (IBT), Hyper-V VMs with ConfigVersion 9.3 or later support IBT in the guest. However, current versions of Hyper-V have a bug in that there's not an ENDBR64 instruction at the beginning of the hypercall page. Since hypercalls are made with an indirect call to the hypercall page, all hypercall attempts fail with an exception and Linux panics.
A Hyper-V fix is in progress to add ENDBR64. But guard against the Linux panic by clearing X86_FEATURE_IBT if the hypercall page doesn't start with ENDBR. The VM will boot and run without IBT.
If future Linux 32-bit kernels were to support IBT, additional hypercall page hackery would be needed to make IBT work for such kernels in a Hyper-V VM.
Cc: stable@vger.kernel.org Signed-off-by: Michael Kelley mikelley@microsoft.com Link: https://lore.kernel.org/r/1690001476-98594-1-git-send-email-mikelley@microso... Signed-off-by: Wei Liu wei.liu@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/hyperv/hv_init.c | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+)
--- a/arch/x86/hyperv/hv_init.c +++ b/arch/x86/hyperv/hv_init.c @@ -14,6 +14,7 @@ #include <asm/apic.h> #include <asm/desc.h> #include <asm/sev.h> +#include <asm/ibt.h> #include <asm/hypervisor.h> #include <asm/hyperv-tlfs.h> #include <asm/mshyperv.h> @@ -472,6 +473,26 @@ void __init hyperv_init(void) }
/* + * Some versions of Hyper-V that provide IBT in guest VMs have a bug + * in that there's no ENDBR64 instruction at the entry to the + * hypercall page. Because hypercalls are invoked via an indirect call + * to the hypercall page, all hypercall attempts fail when IBT is + * enabled, and Linux panics. For such buggy versions, disable IBT. + * + * Fixed versions of Hyper-V always provide ENDBR64 on the hypercall + * page, so if future Linux kernel versions enable IBT for 32-bit + * builds, additional hypercall page hackery will be required here + * to provide an ENDBR32. + */ +#ifdef CONFIG_X86_KERNEL_IBT + if (cpu_feature_enabled(X86_FEATURE_IBT) && + *(u32 *)hv_hypercall_pg != gen_endbr()) { + setup_clear_cpu_cap(X86_FEATURE_IBT); + pr_warn("Hyper-V: Disabling IBT because of Hyper-V bug\n"); + } +#endif + + /* * hyperv_init() is called before LAPIC is initialized: see * apic_intr_mode_init() -> x86_platform.apic_post_init() and * apic_bsp_setup() -> setup_local_APIC(). The direct-mode STIMER
From: Ilya Dryomov idryomov@gmail.com
commit 9d01e07fd1bfb4daae156ab528aa196f5ac2b2bc upstream.
Due to rbd_try_acquire_lock() effectively swallowing all but EBLOCKLISTED error from rbd_try_lock() ("request lock anyway") and rbd_request_lock() returning ETIMEDOUT error not only for an actual notify timeout but also when the lock owner doesn't respond, a busy loop inside of rbd_acquire_lock() between rbd_try_acquire_lock() and rbd_request_lock() is possible.
Requesting the lock on EBUSY error (returned by get_lock_owner_info() if an incompatible lock or invalid lock owner is detected) makes very little sense. The same goes for ETIMEDOUT error (might pop up pretty much anywhere if osd_request_timeout option is set) and many others.
Just fail I/O requests on rbd_dev->acquiring_list immediately on any error from rbd_try_lock().
Cc: stable@vger.kernel.org # 588159009d5b: rbd: retrieve and check lock owner twice before blocklisting Cc: stable@vger.kernel.org Signed-off-by: Ilya Dryomov idryomov@gmail.com Reviewed-by: Dongsheng Yang dongsheng.yang@easystack.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/block/rbd.c | 28 +++++++++++++++------------- 1 file changed, 15 insertions(+), 13 deletions(-)
--- a/drivers/block/rbd.c +++ b/drivers/block/rbd.c @@ -3675,7 +3675,7 @@ static int rbd_lock(struct rbd_device *r ret = ceph_cls_lock(osdc, &rbd_dev->header_oid, &rbd_dev->header_oloc, RBD_LOCK_NAME, CEPH_CLS_LOCK_EXCLUSIVE, cookie, RBD_LOCK_TAG, "", 0); - if (ret) + if (ret && ret != -EEXIST) return ret;
__rbd_lock(rbd_dev, cookie); @@ -3878,7 +3878,7 @@ static struct ceph_locker *get_lock_owne &rbd_dev->header_oloc, RBD_LOCK_NAME, &lock_type, &lock_tag, &lockers, &num_lockers); if (ret) { - rbd_warn(rbd_dev, "failed to retrieve lockers: %d", ret); + rbd_warn(rbd_dev, "failed to get header lockers: %d", ret); return ERR_PTR(ret); }
@@ -3940,8 +3940,10 @@ static int find_watcher(struct rbd_devic ret = ceph_osdc_list_watchers(osdc, &rbd_dev->header_oid, &rbd_dev->header_oloc, &watchers, &num_watchers); - if (ret) + if (ret) { + rbd_warn(rbd_dev, "failed to get watchers: %d", ret); return ret; + }
sscanf(locker->id.cookie, RBD_LOCK_COOKIE_PREFIX " %llu", &cookie); for (i = 0; i < num_watchers; i++) { @@ -3985,8 +3987,12 @@ static int rbd_try_lock(struct rbd_devic locker = refreshed_locker = NULL;
ret = rbd_lock(rbd_dev); - if (ret != -EBUSY) + if (!ret) + goto out; + if (ret != -EBUSY) { + rbd_warn(rbd_dev, "failed to lock header: %d", ret); goto out; + }
/* determine if the current lock holder is still alive */ locker = get_lock_owner_info(rbd_dev); @@ -4089,11 +4095,8 @@ static int rbd_try_acquire_lock(struct r
ret = rbd_try_lock(rbd_dev); if (ret < 0) { - rbd_warn(rbd_dev, "failed to lock header: %d", ret); - if (ret == -EBLOCKLISTED) - goto out; - - ret = 1; /* request lock anyway */ + rbd_warn(rbd_dev, "failed to acquire lock: %d", ret); + goto out; } if (ret > 0) { up_write(&rbd_dev->lock_rwsem); @@ -6627,12 +6630,11 @@ static int rbd_add_acquire_lock(struct r cancel_delayed_work_sync(&rbd_dev->lock_dwork); if (!ret) ret = -ETIMEDOUT; - }
- if (ret) { - rbd_warn(rbd_dev, "failed to acquire exclusive lock: %ld", ret); - return ret; + rbd_warn(rbd_dev, "failed to acquire lock: %ld", ret); } + if (ret) + return ret;
/* * The lock may have been released by now, unless automatic lock
From: Jiri Olsa jolsa@kernel.org
commit d62cc390c2e99ae267ffe4b8d7e2e08b6c758c32 upstream.
We received a report [1] of a kernel crash caused by using the nesting protection without preemption disabled.
bpf_event_output can be called by programs executed by the bpf_prog_run_array_cg function, which disables migration but keeps preemption enabled.
This can cause the task to be preempted by another one inside the nesting protection and eventually lead to two tasks using the same perf_sample_data buffer, causing crashes like:
BUG: kernel NULL pointer dereference, address: 0000000000000001 #PF: supervisor instruction fetch in kernel mode #PF: error_code(0x0010) - not-present page ... ? perf_output_sample+0x12a/0x9a0 ? finish_task_switch.isra.0+0x81/0x280 ? perf_event_output+0x66/0xa0 ? bpf_event_output+0x13a/0x190 ? bpf_event_output_data+0x22/0x40 ? bpf_prog_dfc84bbde731b257_cil_sock4_connect+0x40a/0xacb ? xa_load+0x87/0xe0 ? __cgroup_bpf_run_filter_sock_addr+0xc1/0x1a0 ? release_sock+0x3e/0x90 ? sk_setsockopt+0x1a1/0x12f0 ? udp_pre_connect+0x36/0x50 ? inet_dgram_connect+0x93/0xa0 ? __sys_connect+0xb4/0xe0 ? udp_setsockopt+0x27/0x40 ? __pfx_udp_push_pending_frames+0x10/0x10 ? __sys_setsockopt+0xdf/0x1a0 ? __x64_sys_connect+0xf/0x20 ? do_syscall_64+0x3a/0x90 ? entry_SYSCALL_64_after_hwframe+0x72/0xdc
Fix this by disabling preemption in bpf_event_output.
[1] https://github.com/cilium/cilium/issues/26756 Cc: stable@vger.kernel.org Reported-by: Oleg "livelace" Popov o.popov@livelace.ru Closes: https://github.com/cilium/cilium/issues/26756 Fixes: 2a916f2f546c ("bpf: Use migrate_disable/enable in array macros and cgroup/lirc code.") Acked-by: Hou Tao houtao1@huawei.com Signed-off-by: Jiri Olsa jolsa@kernel.org Link: https://lore.kernel.org/r/20230725084206.580930-3-jolsa@kernel.org Signed-off-by: Alexei Starovoitov ast@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/trace/bpf_trace.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -718,7 +718,6 @@ static DEFINE_PER_CPU(struct bpf_trace_s u64 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size, void *ctx, u64 ctx_size, bpf_ctx_copy_t ctx_copy) { - int nest_level = this_cpu_inc_return(bpf_event_output_nest_level); struct perf_raw_frag frag = { .copy = ctx_copy, .size = ctx_size, @@ -735,8 +734,12 @@ u64 bpf_event_output(struct bpf_map *map }; struct perf_sample_data *sd; struct pt_regs *regs; + int nest_level; u64 ret;
+ preempt_disable(); + nest_level = this_cpu_inc_return(bpf_event_output_nest_level); + if (WARN_ON_ONCE(nest_level > ARRAY_SIZE(bpf_misc_sds.sds))) { ret = -EBUSY; goto out; @@ -751,6 +754,7 @@ u64 bpf_event_output(struct bpf_map *map ret = __bpf_perf_event_output(regs, map, flags, sd); out: this_cpu_dec(bpf_event_output_nest_level); + preempt_enable(); return ret; }
From: Paulo Alcantara pc@manguebit.com
commit 11260c3d608b59231f4c228147a795ab21a10b33 upstream.
A customer reported that they couldn't mount their DFS link that was seen by the client as a DFS interlink -- a special form of DFS link whose single target may point to a different DFS namespace -- and it turned out that it was just a regular DFS link whose referral header flags missed the StorageServers bit, thus making the client think it couldn't tree connect to the target directly without requiring further referrals.
When the DFS link referral header flags miss the StorageServers bit and its target doesn't respond to any referrals, then tree connect to it directly.
Fixes: a1c0d00572fc ("cifs: share dfs connections and supers") Cc: stable@vger.kernel.org Signed-off-by: Paulo Alcantara (SUSE) pc@manguebit.com Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/smb/client/dfs.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/fs/smb/client/dfs.c +++ b/fs/smb/client/dfs.c @@ -178,8 +178,12 @@ static int __dfs_mount_share(struct cifs struct dfs_cache_tgt_list tl = DFS_CACHE_TGT_LIST_INIT(tl);
rc = dfs_get_referral(mnt_ctx, ref_path + 1, NULL, &tl); - if (rc) + if (rc) { + rc = cifs_mount_get_tcon(mnt_ctx); + if (!rc) + rc = cifs_is_path_remote(mnt_ctx); break; + }
tit = dfs_cache_get_tgt_iterator(&tl); if (!tit) {
From: Naveen N Rao naveen@kernel.org
commit 41a506ef71eb38d94fe133f565c87c3e06ccc072 upstream.
With ppc64 -mprofile-kernel and ppc32 -pg, profiling instructions to call into ftrace are emitted right at function entry. The instruction sequence used is minimal to reduce overhead. Crucially, a stackframe is not created for the function being traced. This breaks stack unwinding since the function being traced does not have a stackframe for itself. As such, it never shows up in the backtrace:
/sys/kernel/debug/tracing # echo 1 > /proc/sys/kernel/stack_tracer_enabled /sys/kernel/debug/tracing # cat stack_trace Depth Size Location (17 entries) ----- ---- -------- 0) 4144 32 ftrace_call+0x4/0x44 1) 4112 432 get_page_from_freelist+0x26c/0x1ad0 2) 3680 496 __alloc_pages+0x290/0x1280 3) 3184 336 __folio_alloc+0x34/0x90 4) 2848 176 vma_alloc_folio+0xd8/0x540 5) 2672 272 __handle_mm_fault+0x700/0x1cc0 6) 2400 208 handle_mm_fault+0xf0/0x3f0 7) 2192 80 ___do_page_fault+0x3e4/0xbe0 8) 2112 160 do_page_fault+0x30/0xc0 9) 1952 256 data_access_common_virt+0x210/0x220 10) 1696 400 0xc00000000f16b100 11) 1296 384 load_elf_binary+0x804/0x1b80 12) 912 208 bprm_execve+0x2d8/0x7e0 13) 704 64 do_execveat_common+0x1d0/0x2f0 14) 640 160 sys_execve+0x54/0x70 15) 480 64 system_call_exception+0x138/0x350 16) 416 416 system_call_common+0x160/0x2c4
Fix this by having ftrace create a dummy stackframe for the function being traced. With this, backtraces now capture the function being traced:
/sys/kernel/debug/tracing # cat stack_trace Depth Size Location (17 entries) ----- ---- -------- 0) 3888 32 _raw_spin_trylock+0x8/0x70 1) 3856 576 get_page_from_freelist+0x26c/0x1ad0 2) 3280 64 __alloc_pages+0x290/0x1280 3) 3216 336 __folio_alloc+0x34/0x90 4) 2880 176 vma_alloc_folio+0xd8/0x540 5) 2704 416 __handle_mm_fault+0x700/0x1cc0 6) 2288 96 handle_mm_fault+0xf0/0x3f0 7) 2192 48 ___do_page_fault+0x3e4/0xbe0 8) 2144 192 do_page_fault+0x30/0xc0 9) 1952 608 data_access_common_virt+0x210/0x220 10) 1344 16 0xc0000000334bbb50 11) 1328 416 load_elf_binary+0x804/0x1b80 12) 912 64 bprm_execve+0x2d8/0x7e0 13) 848 176 do_execveat_common+0x1d0/0x2f0 14) 672 192 sys_execve+0x54/0x70 15) 480 64 system_call_exception+0x138/0x350 16) 416 416 system_call_common+0x160/0x2c4
This results in two additional stores in the ftrace entry code, but produces reliable backtraces.
Fixes: 153086644fd1 ("powerpc/ftrace: Add support for -mprofile-kernel ftrace ABI") Cc: stable@vger.kernel.org Signed-off-by: Naveen N Rao naveen@kernel.org Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://msgid.link/20230621051349.759567-1-naveen@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/powerpc/kernel/trace/ftrace_mprofile.S | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
--- a/arch/powerpc/kernel/trace/ftrace_mprofile.S +++ b/arch/powerpc/kernel/trace/ftrace_mprofile.S @@ -33,6 +33,9 @@ * and then arrange for the ftrace function to be called. */ .macro ftrace_regs_entry allregs + /* Create a minimal stack frame for representing B */ + PPC_STLU r1, -STACK_FRAME_MIN_SIZE(r1) + /* Create our stack frame + pt_regs */ PPC_STLU r1,-SWITCH_FRAME_SIZE(r1)
@@ -42,7 +45,7 @@
#ifdef CONFIG_PPC64 /* Save the original return address in A's stack frame */ - std r0, LRSAVE+SWITCH_FRAME_SIZE(r1) + std r0, LRSAVE+SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE(r1) /* Ok to continue? */ lbz r3, PACA_FTRACE_ENABLED(r13) cmpdi r3, 0 @@ -77,6 +80,8 @@ mflr r7 /* Save it as pt_regs->nip */ PPC_STL r7, _NIP(r1) + /* Also save it in B's stackframe header for proper unwind */ + PPC_STL r7, LRSAVE+SWITCH_FRAME_SIZE(r1) /* Save the read LR in pt_regs->link */ PPC_STL r0, _LINK(r1)
@@ -142,7 +147,7 @@ #endif
/* Pop our stack frame */ - addi r1, r1, SWITCH_FRAME_SIZE + addi r1, r1, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE
#ifdef CONFIG_LIVEPATCH_64 /* Based on the cmpd above, if the NIP was altered handle livepatch */
From: Mike Rapoport (IBM) rppt@kernel.org
commit c2ff2b736c41cc63bb0aaec85cccfead9fbcfe92 upstream.
Christoph Biedl reported early OOM on recent kernels:
swapper: page allocation failure: order:0, mode:0x100(__GFP_ZERO), nodemask=(null) CPU: 0 PID: 0 Comm: swapper Not tainted 6.3.0-rc4+ #16 Hardware name: 9000/785/C3600 Backtrace: [<10408594>] show_stack+0x48/0x5c [<10e152d8>] dump_stack_lvl+0x48/0x64 [<10e15318>] dump_stack+0x24/0x34 [<105cf7f8>] warn_alloc+0x10c/0x1c8 [<105d068c>] __alloc_pages+0xbbc/0xcf8 [<105d0e4c>] __get_free_pages+0x28/0x78 [<105ad10c>] __pte_alloc_kernel+0x30/0x98 [<10406934>] set_fixmap+0xec/0xf4 [<10411ad4>] patch_map.constprop.0+0xa8/0xdc [<10411bb0>] __patch_text_multiple+0xa8/0x208 [<10411d78>] patch_text+0x30/0x48 [<1041246c>] arch_jump_label_transform+0x90/0xcc [<1056f734>] jump_label_update+0xd4/0x184 [<1056fc9c>] static_key_enable_cpuslocked+0xc0/0x110 [<1056fd08>] static_key_enable+0x1c/0x2c [<1011362c>] init_mem_debugging_and_hardening+0xdc/0xf8 [<1010141c>] start_kernel+0x5f0/0xa98 [<10105da8>] start_parisc+0xb8/0xe4
Mem-Info: active_anon:0 inactive_anon:0 isolated_anon:0 active_file:0 inactive_file:0 isolated_file:0 unevictable:0 dirty:0 writeback:0 slab_reclaimable:0 slab_unreclaimable:0 mapped:0 shmem:0 pagetables:0 sec_pagetables:0 bounce:0 kernel_misc_reclaimable:0 free:0 free_pcp:0 free_cma:0 Node 0 active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:0kB dirty:0kB writeback:0kB shmem:0kB +writeback_tmp:0kB kernel_stack:0kB pagetables:0kB sec_pagetables:0kB all_unreclaimable? no Normal free:0kB boost:0kB min:0kB low:0kB high:0kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB +present:1048576kB managed:1039360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB lowmem_reserve[]: 0 0 Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 0kB 0 total pagecache pages 0 pages in swap cache Free swap = 0kB Total swap = 0kB 262144 pages RAM 0 pages HighMem/MovableOnly 2304 pages reserved Backtrace: [<10411d78>] patch_text+0x30/0x48 [<1041246c>] arch_jump_label_transform+0x90/0xcc [<1056f734>] jump_label_update+0xd4/0x184 [<1056fc9c>] static_key_enable_cpuslocked+0xc0/0x110 [<1056fd08>] static_key_enable+0x1c/0x2c [<1011362c>] init_mem_debugging_and_hardening+0xdc/0xf8 [<1010141c>] start_kernel+0x5f0/0xa98 [<10105da8>] start_parisc+0xb8/0xe4
Kernel Fault: Code=15 (Data TLB miss fault) at addr 0f7fe3c0 CPU: 0 PID: 0 Comm: swapper Not tainted 6.3.0-rc4+ #16 Hardware name: 9000/785/C3600
This happens because patching static key code temporarily maps it via fixmap, and if this happens before the page allocator is initialized, set_fixmap() cannot allocate memory using pte_alloc_kernel().
Make sure that fixmap page tables are preallocated early so that pte_offset_kernel() in set_fixmap() never resorts to pte allocation.
Signed-off-by: Mike Rapoport (IBM) rppt@kernel.org Acked-by: Vlastimil Babka vbabka@suse.cz Signed-off-by: Helge Deller deller@gmx.de Tested-by: Christoph Biedl linux-kernel.bfrz@manchmal.in-ulm.de Tested-by: John David Anglin dave.anglin@bell.net Cc: stable@vger.kernel.org # v6.4+ Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/parisc/mm/fixmap.c | 3 --- arch/parisc/mm/init.c | 34 ++++++++++++++++++++++++++++++++++ 2 files changed, 34 insertions(+), 3 deletions(-)
--- a/arch/parisc/mm/fixmap.c +++ b/arch/parisc/mm/fixmap.c @@ -19,9 +19,6 @@ void notrace set_fixmap(enum fixed_addre pmd_t *pmd = pmd_offset(pud, vaddr); pte_t *pte;
- if (pmd_none(*pmd)) - pte = pte_alloc_kernel(pmd, vaddr); - pte = pte_offset_kernel(pmd, vaddr); set_pte_at(&init_mm, vaddr, pte, __mk_pte(phys, PAGE_KERNEL_RWX)); flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE); --- a/arch/parisc/mm/init.c +++ b/arch/parisc/mm/init.c @@ -671,6 +671,39 @@ static void __init gateway_init(void) PAGE_SIZE, PAGE_GATEWAY, 1); }
+static void __init fixmap_init(void) +{ + unsigned long addr = FIXMAP_START; + unsigned long end = FIXMAP_START + FIXMAP_SIZE; + pgd_t *pgd = pgd_offset_k(addr); + p4d_t *p4d = p4d_offset(pgd, addr); + pud_t *pud = pud_offset(p4d, addr); + pmd_t *pmd; + + BUILD_BUG_ON(FIXMAP_SIZE > PMD_SIZE); + +#if CONFIG_PGTABLE_LEVELS == 3 + if (pud_none(*pud)) { + pmd = memblock_alloc(PAGE_SIZE << PMD_TABLE_ORDER, + PAGE_SIZE << PMD_TABLE_ORDER); + if (!pmd) + panic("fixmap: pmd allocation failed.\n"); + pud_populate(NULL, pud, pmd); + } +#endif + + pmd = pmd_offset(pud, addr); + do { + pte_t *pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE); + if (!pte) + panic("fixmap: pte allocation failed.\n"); + + pmd_populate_kernel(&init_mm, pmd, pte); + + addr += PAGE_SIZE; + } while (addr < end); +} + static void __init parisc_bootmem_free(void) { unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, }; @@ -685,6 +718,7 @@ void __init paging_init(void) setup_bootmem(); pagetable_init(); gateway_init(); + fixmap_init(); flush_cache_all_local(); /* start with known state */ flush_tlb_all_local(NULL);
From: Mark Brown broonie@kernel.org
commit 69af56ae56a48a2522aad906c4461c6c7c092737 upstream.
We have a function sve_sync_from_fpsimd_zeropad() which is used by the ptrace code to update the SVE state when the user writes to the FPSIMD register set. Currently this checks that the task has SVE enabled, but this will miss updates for tasks which have streaming SVE enabled if SVE has not been enabled for the thread, so also do the conversion if the task has streaming SVE enabled.
Fixes: e12310a0d30f ("arm64/sme: Implement ptrace support for streaming mode SVE registers") Signed-off-by: Mark Brown broonie@kernel.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230803-arm64-fix-ptrace-ssve-no-sve-v1-3-49df214... Signed-off-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm64/kernel/fpsimd.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -835,7 +835,8 @@ void sve_sync_from_fpsimd_zeropad(struct void *sst = task->thread.sve_state; struct user_fpsimd_state const *fst = &task->thread.uw.fpsimd_state;
- if (!test_tsk_thread_flag(task, TIF_SVE)) + if (!test_tsk_thread_flag(task, TIF_SVE) && + !thread_sm_enabled(&task->thread)) return;
vq = sve_vq_from_vl(thread_get_cur_vl(&task->thread));
From: Mark Brown broonie@kernel.org
commit c9bb40b7f786662e33d71afe236442b0b61f0446 upstream.
When setting SME vector lengths we clear TIF_SME to reenable SME traps, doing a reallocation of the backing storage on next use. We do this using clear_thread_flag() which operates on the current thread, meaning that when setting the vector length via ptrace we may both not force traps for the target task and force a spurious flush of any SME state that the tracing task may have.
Clear the flag in the target task.
Fixes: e12310a0d30f ("arm64/sme: Implement ptrace support for streaming mode SVE registers") Reported-by: David Spickett David.Spickett@arm.com Signed-off-by: Mark Brown broonie@kernel.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230803-arm64-fix-ptrace-tif-sme-v1-1-88312fd6fbf... Signed-off-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm64/kernel/fpsimd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -910,7 +910,7 @@ int vec_set_vector_length(struct task_st */ task->thread.svcr &= ~(SVCR_SM_MASK | SVCR_ZA_MASK); - clear_thread_flag(TIF_SME); + clear_tsk_thread_flag(task, TIF_SME); free_sme = true; } }
From: Mark Brown broonie@kernel.org
commit 507ea5dd92d23fcf10e4d1a68a443c86a49753ed upstream.
Currently we guard FPSIMD/SVE state conversions with a check for the system supporting SVE, but SME-only systems may need to sync streaming mode SVE state, so add a check for SME support too. These functions are only used by the ptrace code.
Fixes: e12310a0d30f ("arm64/sme: Implement ptrace support for streaming mode SVE registers") Signed-off-by: Mark Brown broonie@kernel.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230803-arm64-fix-ptrace-ssve-no-sve-v1-2-49df214... Signed-off-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm64/kernel/fpsimd.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -679,7 +679,7 @@ static void fpsimd_to_sve(struct task_st void *sst = task->thread.sve_state; struct user_fpsimd_state const *fst = &task->thread.uw.fpsimd_state;
- if (!system_supports_sve()) + if (!system_supports_sve() && !system_supports_sme()) return;
vq = sve_vq_from_vl(thread_get_cur_vl(&task->thread)); @@ -705,7 +705,7 @@ static void sve_to_fpsimd(struct task_st unsigned int i; __uint128_t const *p;
- if (!system_supports_sve()) + if (!system_supports_sve() && !system_supports_sme()) return;
vl = thread_get_cur_vl(&task->thread);
From: Mark Brown broonie@kernel.org
commit 89a65c3f170e5c3b05a626046c68354e2afd7912 upstream.
When setting ZT0 via ptrace we do not currently force a reload of the floating point register state from memory; do that to ensure that the newly set value gets loaded into the registers on the next task execution.
The function was templated off the function for FPSIMD, which does not directly include the flush because we provide the option of embedding an FPSIMD regset within the SVE regset.
Fixes: f90b529bcbe5 ("arm64/sme: Implement ZT0 ptrace support") Signed-off-by: Mark Brown broonie@kernel.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230803-arm64-fix-ptrace-zt0-flush-v1-1-72e854eaf... Signed-off-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm64/kernel/ptrace.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c index d7f4f0d1ae12..740e81e9db04 100644 --- a/arch/arm64/kernel/ptrace.c +++ b/arch/arm64/kernel/ptrace.c @@ -1180,6 +1180,8 @@ static int zt_set(struct task_struct *target, if (ret == 0) target->thread.svcr |= SVCR_ZA_MASK;
+ fpsimd_flush_task_state(target); + return ret; }
From: Mark Brown broonie@kernel.org
commit 045aecdfcb2e060db142d83a0f4082380c465d2c upstream.
Systems which implement SME without also implementing SVE are architecturally valid but were not initially supported by the kernel; unfortunately we missed one issue in the ptrace code.
The SVE register setting code is shared between SVE and streaming mode SVE. When we set the full SVE register state we currently enable TIF_SVE unconditionally. In the case where streaming SVE is being configured on a system that supports vanilla SVE this is not an issue, since we always initialise enough state for both vector lengths, but on a system which only supports SME it will result in us attempting to restore the SVE vector length after having set streaming SVE registers.
Fix this by making the enabling of SVE conditional on setting SVE vector state. If we set streaming SVE state and SVE was not already enabled, this will result in an SVE access trap on the next use of normal SVE. That will cause us to flush our register state, but this is fine since the only way to trigger an SVE access trap would be to exit streaming mode, which would cause the in-register state to be flushed anyway.
Fixes: e12310a0d30f ("arm64/sme: Implement ptrace support for streaming mode SVE registers") Signed-off-by: Mark Brown broonie@kernel.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230803-arm64-fix-ptrace-ssve-no-sve-v1-1-49df214... Signed-off-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm64/kernel/ptrace.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
--- a/arch/arm64/kernel/ptrace.c +++ b/arch/arm64/kernel/ptrace.c @@ -932,11 +932,13 @@ static int sve_set_common(struct task_st /* * Ensure target->thread.sve_state is up to date with target's * FPSIMD regs, so that a short copyin leaves trailing - * registers unmodified. Always enable SVE even if going into - * streaming mode. + * registers unmodified. Only enable SVE if we are + * configuring normal SVE, a system with streaming SVE may not + * have normal SVE. */ fpsimd_sync_to_sve(target); - set_tsk_thread_flag(target, TIF_SVE); + if (type == ARM64_VEC_SVE) + set_tsk_thread_flag(target, TIF_SVE); target->thread.fp_type = FP_STATE_SVE;
BUILD_BUG_ON(SVE_PT_SVE_OFFSET != sizeof(header));
From: Aleksa Sarai cyphar@cyphar.com
commit a0fc452a5d7fed986205539259df1d60546f536c upstream.
O_TMPFILE is actually __O_TMPFILE|O_DIRECTORY. This means that the old fast-path check for RESOLVE_CACHED would reject all users passing O_DIRECTORY with -EAGAIN, when in fact the intended test was to check for __O_TMPFILE.
Cc: stable@vger.kernel.org # v5.12+ Fixes: 99668f618062 ("fs: expose LOOKUP_CACHED through openat2() RESOLVE_CACHED") Signed-off-by: Aleksa Sarai cyphar@cyphar.com Message-Id: 20230806-resolve_cached-o_tmpfile-v1-1-7ba16308465e@cyphar.com Signed-off-by: Christian Brauner brauner@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/open.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/open.c +++ b/fs/open.c @@ -1271,7 +1271,7 @@ inline int build_open_flags(const struct lookup_flags |= LOOKUP_IN_ROOT; if (how->resolve & RESOLVE_CACHED) { /* Don't bother even trying for create/truncate/tmpfile open */ - if (flags & (O_TRUNC | O_CREAT | O_TMPFILE)) + if (flags & (O_TRUNC | O_CREAT | __O_TMPFILE)) return -EAGAIN; lookup_flags |= LOOKUP_CACHED; }
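A tiny standalone illustration (flag values below are illustrative; the point is only the relationship O_TMPFILE == __O_TMPFILE | O_DIRECTORY) of why the old test was too broad: any plain O_DIRECTORY open shares bits with O_TMPFILE, so "flags & O_TMPFILE" matched it.

#include <stdio.h>

#define EX_O_DIRECTORY  0x010000u	/* illustrative values */
#define EX___O_TMPFILE  0x400000u
#define EX_O_TMPFILE    (EX___O_TMPFILE | EX_O_DIRECTORY)

int main(void)
{
	unsigned int flags = EX_O_DIRECTORY;	/* no tmpfile semantics at all */

	printf("flags & O_TMPFILE:   %s\n",
	       (flags & EX_O_TMPFILE) ? "hit (old check rejected this open)" : "miss");
	printf("flags & __O_TMPFILE: %s\n",
	       (flags & EX___O_TMPFILE) ? "hit" : "miss (new check lets it through)");
	return 0;
}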
From: Guchun Chen guchun.chen@amd.com
commit 2dedcf414bb01b8d966eb445db1d181d92304fb2 upstream.
Add a check to avoid null pointer dereference as below:
[ 90.002283] general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] PREEMPT SMP KASAN NOPTI [ 90.002292] KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007] [ 90.002346] ? exc_general_protection+0x159/0x240 [ 90.002352] ? asm_exc_general_protection+0x26/0x30 [ 90.002357] ? ttm_bo_evict_swapout_allowable+0x322/0x5e0 [ttm] [ 90.002365] ? ttm_bo_evict_swapout_allowable+0x42e/0x5e0 [ttm] [ 90.002373] ttm_bo_swapout+0x134/0x7f0 [ttm] [ 90.002383] ? __pfx_ttm_bo_swapout+0x10/0x10 [ttm] [ 90.002391] ? lock_acquire+0x44d/0x4f0 [ 90.002398] ? ttm_device_swapout+0xa5/0x260 [ttm] [ 90.002412] ? lock_acquired+0x355/0xa00 [ 90.002416] ? do_raw_spin_trylock+0xb6/0x190 [ 90.002421] ? __pfx_lock_acquired+0x10/0x10 [ 90.002426] ? ttm_global_swapout+0x25/0x210 [ttm] [ 90.002442] ttm_device_swapout+0x198/0x260 [ttm] [ 90.002456] ? __pfx_ttm_device_swapout+0x10/0x10 [ttm] [ 90.002472] ttm_global_swapout+0x75/0x210 [ttm] [ 90.002486] ttm_tt_populate+0x187/0x3f0 [ttm] [ 90.002501] ttm_bo_handle_move_mem+0x437/0x590 [ttm] [ 90.002517] ttm_bo_validate+0x275/0x430 [ttm] [ 90.002530] ? __pfx_ttm_bo_validate+0x10/0x10 [ttm] [ 90.002544] ? kasan_save_stack+0x33/0x60 [ 90.002550] ? kasan_set_track+0x25/0x30 [ 90.002554] ? __kasan_kmalloc+0x8f/0xa0 [ 90.002558] ? amdgpu_gtt_mgr_new+0x81/0x420 [amdgpu] [ 90.003023] ? ttm_resource_alloc+0xf6/0x220 [ttm] [ 90.003038] amdgpu_bo_pin_restricted+0x2dd/0x8b0 [amdgpu] [ 90.003210] ? __x64_sys_ioctl+0x131/0x1a0 [ 90.003210] ? do_syscall_64+0x60/0x90
Fixes: a2848d08742c ("drm/ttm: never consider pinned BOs for eviction&swap") Tested-by: Mikhail Gavrilov mikhail.v.gavrilov@gmail.com Signed-off-by: Guchun Chen guchun.chen@amd.com Reviewed-by: Alex Deucher alexander.deucher@amd.com Reviewed-by: Christian König christian.koenig@amd.com Cc: stable@vger.kernel.org Link: https://patchwork.freedesktop.org/patch/msgid/20230724024229.1118444-1-guchu... Signed-off-by: Christian König christian.koenig@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/ttm/ttm_bo.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -519,7 +519,8 @@ static bool ttm_bo_evict_swapout_allowab
if (bo->pin_count) { *locked = false; - *busy = false; + if (busy) + *busy = false; return false; }
From: Janusz Krzysztofik janusz.krzysztofik@linux.intel.com
commit a337b64f0d5717248a0c894e2618e658e6a9de9f upstream.
Infinite waits for completion of GPU activity have been observed in CI, mostly inside __i915_active_wait(), triggered by igt@gem_barrier_race or igt@perf@stress-open-close. Root cause analysis, based on ftrace dumps generated with a lot of extra trace_printk() calls added to the code, revealed loops of request dependencies being accidentally built, preventing the requests from being processed, each waiting for completion of another one's activity.
After we substitute a new request for a last active one tracked on a timeline, we set up a dependency of our new request to wait on completion of current activity of that previous one. While doing that, we must take care of keeping the old request still in memory until we use its attributes for setting up that await dependency, or we can happen to set up the await dependency on an unrelated request that already reuses the memory previously allocated to the old one, already released. Combined with perf adding consecutive kernel context remote requests to different user context timelines, unresolvable loops of await dependencies can be built, leading to infinite waits.
We obtain a pointer to the previous request to wait upon when we substitute it with a pointer to our new request in an active tracker, e.g. in intel_timeline.last_request. In some processing paths we protect that old request from being freed before we use it by getting a reference to it under RCU protection, but in others, e.g. __i915_request_commit() -> __i915_request_add_to_timeline() -> __i915_request_ensure_ordering(), we don't. But anyway, since the requests' memory is SLAB_TYPESAFE_BY_RCU, that RCU protection is not sufficient against reuse of memory.
We could protect i915_request's memory from being prematurely reused by calling its release function via call_rcu() and using rcu_read_lock() consequently, as proposed in v1. However, that approach leads to significant (up to 10 times) increase of SLAB utilization by i915_request SLAB cache. Another potential approach is to take a reference to the previous active fence.
When updating an active fence tracker, we first lock the new fence, substitute a pointer of the current active fence with the new one, then we lock the substituted fence. With this approach, there is a time window after the substitution and before the lock when the request can be concurrently released by an interrupt handler and its memory reused, then we may happen to lock and return a new, unrelated request.
Always get a reference to the current active fence first, before replacing it with a new one. Having it protected from premature release and reuse, lock it and then replace with the new one but only if not yet signalled via a potential concurrent interrupt nor replaced with another one by a potential concurrent thread, otherwise retry, starting from getting a reference to the new current one. Adjust users to not get a reference to the previous active fence themselves and always put the reference got by __i915_active_fence_set() when no longer needed.
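A minimal userspace sketch of that "get a reference first, then compare-and-swap, retry if the tracker changed under us" scheme, using C11 atomics; obj, obj_get(), obj_put() and slot_replace() are illustrative names rather than i915 API, and the fence locking and signalling checks described above are omitted:

#include <stdatomic.h>
#include <stdio.h>

struct obj {
	atomic_int refcount;
	int id;
};

static struct obj *obj_get(struct obj *o)
{
	if (o)
		atomic_fetch_add(&o->refcount, 1);
	return o;
}

static void obj_put(struct obj *o)
{
	if (o)
		atomic_fetch_sub(&o->refcount, 1);
}

/*
 * Replace *slot with @fence and hand the caller a referenced pointer to
 * whatever was tracked before (or NULL). If a concurrent update swaps the
 * slot between taking the reference and the exchange, drop the stale
 * reference and retry from the top.
 */
static struct obj *slot_replace(_Atomic(struct obj *) *slot, struct obj *fence)
{
	for (;;) {
		struct obj *prev = obj_get(atomic_load(slot));
		struct obj *expected = prev;

		if (atomic_compare_exchange_strong(slot, &expected, fence))
			return prev;	/* caller must obj_put() this */
		obj_put(prev);		/* lost the race; try again */
	}
}

int main(void)
{
	struct obj a = { .refcount = 1, .id = 1 }, b = { .refcount = 1, .id = 2 };
	_Atomic(struct obj *) slot = &a;
	struct obj *prev = slot_replace(&slot, &b);

	printf("previous id %d, refcount now %d\n", prev->id,
	       atomic_load(&prev->refcount));
	obj_put(prev);	/* put the reference returned by slot_replace() */
	return 0;
}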
v3: Fix lockdep splat reports and other issues caused by incorrect use of try_cmpxchg() (use (cmpxchg() != prev) instead) v2: Protect request's memory by getting a reference to it in favor of delegating its release to call_rcu() (Chris)
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/8211 Fixes: df9f85d8582e ("drm/i915: Serialise i915_active_fence_set() with itself") Suggested-by: Chris Wilson chris@chris-wilson.co.uk Signed-off-by: Janusz Krzysztofik janusz.krzysztofik@linux.intel.com Cc: stable@vger.kernel.org # v5.6+ Reviewed-by: Andi Shyti andi.shyti@linux.intel.com Signed-off-by: Andi Shyti andi.shyti@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230720093543.832147-2-janusz... (cherry picked from commit 946e047a3d88d46d15b5c5af0414098e12b243f7) Signed-off-by: Tvrtko Ursulin tvrtko.ursulin@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/i915/i915_active.c | 99 +++++++++++++++++++++++++----------- drivers/gpu/drm/i915/i915_request.c | 11 ++++ 2 files changed, 81 insertions(+), 29 deletions(-)
--- a/drivers/gpu/drm/i915/i915_active.c +++ b/drivers/gpu/drm/i915/i915_active.c @@ -449,8 +449,11 @@ int i915_active_add_request(struct i915_ } } while (unlikely(is_barrier(active)));
- if (!__i915_active_fence_set(active, fence)) + fence = __i915_active_fence_set(active, fence); + if (!fence) __i915_active_acquire(ref); + else + dma_fence_put(fence);
out: i915_active_release(ref); @@ -469,13 +472,9 @@ __i915_active_set_fence(struct i915_acti return NULL; }
- rcu_read_lock(); prev = __i915_active_fence_set(active, fence); - if (prev) - prev = dma_fence_get_rcu(prev); - else + if (!prev) __i915_active_acquire(ref); - rcu_read_unlock();
return prev; } @@ -1019,10 +1018,11 @@ void i915_request_add_active_barriers(st * * Records the new @fence as the last active fence along its timeline in * this active tracker, moving the tracking callbacks from the previous - * fence onto this one. Returns the previous fence (if not already completed), - * which the caller must ensure is executed before the new fence. To ensure - * that the order of fences within the timeline of the i915_active_fence is - * understood, it should be locked by the caller. + * fence onto this one. Gets and returns a reference to the previous fence + * (if not already completed), which the caller must put after making sure + * that it is executed before the new fence. To ensure that the order of + * fences within the timeline of the i915_active_fence is understood, it + * should be locked by the caller. */ struct dma_fence * __i915_active_fence_set(struct i915_active_fence *active, @@ -1031,7 +1031,23 @@ __i915_active_fence_set(struct i915_acti struct dma_fence *prev; unsigned long flags;
- if (fence == rcu_access_pointer(active->fence)) + /* + * In case of fences embedded in i915_requests, their memory is + * SLAB_FAILSAFE_BY_RCU, then it can be reused right after release + * by new requests. Then, there is a risk of passing back a pointer + * to a new, completely unrelated fence that reuses the same memory + * while tracked under a different active tracker. Combined with i915 + * perf open/close operations that build await dependencies between + * engine kernel context requests and user requests from different + * timelines, this can lead to dependency loops and infinite waits. + * + * As a countermeasure, we try to get a reference to the active->fence + * first, so if we succeed and pass it back to our user then it is not + * released and potentially reused by an unrelated request before the + * user has a chance to set up an await dependency on it. + */ + prev = i915_active_fence_get(active); + if (fence == prev) return fence;
GEM_BUG_ON(test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)); @@ -1040,27 +1056,56 @@ __i915_active_fence_set(struct i915_acti * Consider that we have two threads arriving (A and B), with * C already resident as the active->fence. * - * A does the xchg first, and so it sees C or NULL depending - * on the timing of the interrupt handler. If it is NULL, the - * previous fence must have been signaled and we know that - * we are first on the timeline. If it is still present, - * we acquire the lock on that fence and serialise with the interrupt - * handler, in the process removing it from any future interrupt - * callback. A will then wait on C before executing (if present). - * - * As B is second, it sees A as the previous fence and so waits for - * it to complete its transition and takes over the occupancy for - * itself -- remembering that it needs to wait on A before executing. + * Both A and B have got a reference to C or NULL, depending on the + * timing of the interrupt handler. Let's assume that if A has got C + * then it has locked C first (before B). * * Note the strong ordering of the timeline also provides consistent * nesting rules for the fence->lock; the inner lock is always the * older lock. */ spin_lock_irqsave(fence->lock, flags); - prev = xchg(__active_fence_slot(active), fence); - if (prev) { - GEM_BUG_ON(prev == fence); + if (prev) spin_lock_nested(prev->lock, SINGLE_DEPTH_NESTING); + + /* + * A does the cmpxchg first, and so it sees C or NULL, as before, or + * something else, depending on the timing of other threads and/or + * interrupt handler. If not the same as before then A unlocks C if + * applicable and retries, starting from an attempt to get a new + * active->fence. Meanwhile, B follows the same path as A. + * Once A succeeds with cmpxch, B fails again, retires, gets A from + * active->fence, locks it as soon as A completes, and possibly + * succeeds with cmpxchg. + */ + while (cmpxchg(__active_fence_slot(active), prev, fence) != prev) { + if (prev) { + spin_unlock(prev->lock); + dma_fence_put(prev); + } + spin_unlock_irqrestore(fence->lock, flags); + + prev = i915_active_fence_get(active); + GEM_BUG_ON(prev == fence); + + spin_lock_irqsave(fence->lock, flags); + if (prev) + spin_lock_nested(prev->lock, SINGLE_DEPTH_NESTING); + } + + /* + * If prev is NULL then the previous fence must have been signaled + * and we know that we are first on the timeline. If it is still + * present then, having the lock on that fence already acquired, we + * serialise with the interrupt handler, in the process of removing it + * from any future interrupt callback. A will then wait on C before + * executing (if present). + * + * As B is second, it sees A as the previous fence and so waits for + * it to complete its transition and takes over the occupancy for + * itself -- remembering that it needs to wait on A before executing. + */ + if (prev) { __list_del_entry(&active->cb.node); spin_unlock(prev->lock); /* serialise with prev->cb_list */ } @@ -1077,11 +1122,7 @@ int i915_active_fence_set(struct i915_ac int err = 0;
/* Must maintain timeline ordering wrt previous active requests */ - rcu_read_lock(); fence = __i915_active_fence_set(active, &rq->fence); - if (fence) /* but the previous fence may not belong to that timeline! */ - fence = dma_fence_get_rcu(fence); - rcu_read_unlock(); if (fence) { err = i915_request_await_dma_fence(rq, fence); dma_fence_put(fence); --- a/drivers/gpu/drm/i915/i915_request.c +++ b/drivers/gpu/drm/i915/i915_request.c @@ -1661,6 +1661,11 @@ __i915_request_ensure_parallel_ordering(
request_to_parent(rq)->parallel.last_rq = i915_request_get(rq);
+ /* + * Users have to put a reference potentially got by + * __i915_active_fence_set() to the returned request + * when no longer needed + */ return to_request(__i915_active_fence_set(&timeline->last_request, &rq->fence)); } @@ -1707,6 +1712,10 @@ __i915_request_ensure_ordering(struct i9 0); }
+ /* + * Users have to put the reference to prev potentially got + * by __i915_active_fence_set() when no longer needed + */ return prev; }
@@ -1760,6 +1769,8 @@ __i915_request_add_to_timeline(struct i9 prev = __i915_request_ensure_ordering(rq, timeline); else prev = __i915_request_ensure_parallel_ordering(rq, timeline); + if (prev) + i915_request_put(prev);
/* * Make sure that no request gazumped us - if it was allocated after
From: Andi Shyti andi.shyti@linux.intel.com
commit d14560ac1b595aa2e792365e91fea6aeaee66c2b upstream.
Fix the 'NV' definition postfix that is supposed to be INV.
Take the chance to also order the registers properly based on their address and to rename GEN12_GFX_CCS_AUX_NV to GEN12_CCS_AUX_INV, like all the other similar registers.

Also remove the VD1, VD3 and VE1 registers, which don't exist, and add BCS0 and CCS0.
Signed-off-by: Andi Shyti andi.shyti@linux.intel.com Cc: stable@vger.kernel.org # v5.8+ Reviewed-by: Nirmoy Das nirmoy.das@intel.com Reviewed-by: Andrzej Hajda andrzej.hajda@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230725001950.1014671-2-andi.... (cherry picked from commit 2f0b927d3ca3440445975ebde27f3df1c3ed6f76) Signed-off-by: Tvrtko Ursulin tvrtko.ursulin@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/i915/gt/gen8_engine_cs.c | 8 ++++---- drivers/gpu/drm/i915/gt/intel_gt_regs.h | 16 ++++++++-------- drivers/gpu/drm/i915/gt/intel_lrc.c | 6 +++--- 3 files changed, 15 insertions(+), 15 deletions(-)
--- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c @@ -256,8 +256,8 @@ int gen12_emit_flush_rcs(struct i915_req
if (!HAS_FLAT_CCS(rq->engine->i915)) { /* hsdes: 1809175790 */ - cs = gen12_emit_aux_table_inv(rq->engine->gt, - cs, GEN12_GFX_CCS_AUX_NV); + cs = gen12_emit_aux_table_inv(rq->engine->gt, cs, + GEN12_CCS_AUX_INV); }
*cs++ = preparser_disable(false); @@ -317,10 +317,10 @@ int gen12_emit_flush_xcs(struct i915_req if (aux_inv) { /* hsdes: 1809175790 */ if (rq->engine->class == VIDEO_DECODE_CLASS) cs = gen12_emit_aux_table_inv(rq->engine->gt, - cs, GEN12_VD0_AUX_NV); + cs, GEN12_VD0_AUX_INV); else cs = gen12_emit_aux_table_inv(rq->engine->gt, - cs, GEN12_VE0_AUX_NV); + cs, GEN12_VE0_AUX_INV); }
if (mode & EMIT_INVALIDATE) --- a/drivers/gpu/drm/i915/gt/intel_gt_regs.h +++ b/drivers/gpu/drm/i915/gt/intel_gt_regs.h @@ -331,9 +331,11 @@ #define GEN8_PRIVATE_PAT_HI _MMIO(0x40e0 + 4) #define GEN10_PAT_INDEX(index) _MMIO(0x40e0 + (index) * 4) #define BSD_HWS_PGA_GEN7 _MMIO(0x4180) -#define GEN12_GFX_CCS_AUX_NV _MMIO(0x4208) -#define GEN12_VD0_AUX_NV _MMIO(0x4218) -#define GEN12_VD1_AUX_NV _MMIO(0x4228) + +#define GEN12_CCS_AUX_INV _MMIO(0x4208) +#define GEN12_VD0_AUX_INV _MMIO(0x4218) +#define GEN12_VE0_AUX_INV _MMIO(0x4238) +#define GEN12_BCS0_AUX_INV _MMIO(0x4248)
#define GEN8_RTCR _MMIO(0x4260) #define GEN8_M1TCR _MMIO(0x4264) @@ -341,14 +343,12 @@ #define GEN8_BTCR _MMIO(0x426c) #define GEN8_VTCR _MMIO(0x4270)
-#define GEN12_VD2_AUX_NV _MMIO(0x4298) -#define GEN12_VD3_AUX_NV _MMIO(0x42a8) -#define GEN12_VE0_AUX_NV _MMIO(0x4238) - #define BLT_HWS_PGA_GEN7 _MMIO(0x4280)
-#define GEN12_VE1_AUX_NV _MMIO(0x42b8) +#define GEN12_VD2_AUX_INV _MMIO(0x4298) +#define GEN12_CCS0_AUX_INV _MMIO(0x42c8) #define AUX_INV REG_BIT(0) + #define VEBOX_HWS_PGA_GEN7 _MMIO(0x4380)
#define GEN12_AUX_ERR_DBG _MMIO(0x43f4) --- a/drivers/gpu/drm/i915/gt/intel_lrc.c +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c @@ -1367,7 +1367,7 @@ gen12_emit_indirect_ctx_rcs(const struct /* hsdes: 1809175790 */ if (!HAS_FLAT_CCS(ce->engine->i915)) cs = gen12_emit_aux_table_inv(ce->engine->gt, - cs, GEN12_GFX_CCS_AUX_NV); + cs, GEN12_CCS_AUX_INV);
/* Wa_16014892111 */ if (IS_DG2(ce->engine->i915)) @@ -1394,10 +1394,10 @@ gen12_emit_indirect_ctx_xcs(const struct if (!HAS_FLAT_CCS(ce->engine->i915)) { if (ce->engine->class == VIDEO_DECODE_CLASS) cs = gen12_emit_aux_table_inv(ce->engine->gt, - cs, GEN12_VD0_AUX_NV); + cs, GEN12_VD0_AUX_INV); else if (ce->engine->class == VIDEO_ENHANCEMENT_CLASS) cs = gen12_emit_aux_table_inv(ce->engine->gt, - cs, GEN12_VE0_AUX_NV); + cs, GEN12_VE0_AUX_INV); }
return cs;
From: Mike Kravetz mike.kravetz@oracle.com
commit 16f8eb3eea9eb2a1568279d64ca4dc977e7aa538 upstream.
This reverts commit 9425c591e06a9ab27a145ba655fb50532cf0bcc9
The reverted commit fixed up routines primarily used by readahead code such that they could also be used by hugetlb. Unfortunately, this caused a performance regression as pointed out by the Closes: tag.
The hugetlb code which uses page_cache_next_miss will be addressed in a subsequent patch.
Link: https://lkml.kernel.org/r/20230621212403.174710-1-mike.kravetz@oracle.com Fixes: 9425c591e06a ("page cache: fix page_cache_next/prev_miss off by one") Signed-off-by: Mike Kravetz mike.kravetz@oracle.com Reported-by: kernel test robot oliver.sang@intel.com Closes: https://lore.kernel.org/oe-lkp/202306211346.1e9ff03e-oliver.sang@intel.com Reviewed-by: Sidhartha Kumar sidhartha.kumar@oracle.com Cc: Ackerley Tng ackerleytng@google.com Cc: Erdem Aktas erdemaktas@google.com Cc: Greg Kroah-Hartman gregkh@linuxfoundation.org Cc: Matthew Wilcox willy@infradead.org Cc: Muchun Song songmuchun@bytedance.com Cc: Vishal Annapurve vannapurve@google.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/filemap.c | 26 ++++++++++---------------- 1 file changed, 10 insertions(+), 16 deletions(-)
--- a/mm/filemap.c +++ b/mm/filemap.c @@ -1760,9 +1760,7 @@ bool __folio_lock_or_retry(struct folio * * Return: The index of the gap if found, otherwise an index outside the * range specified (in which case 'return - index >= max_scan' will be true). - * In the rare case of index wrap-around, 0 will be returned. 0 will also - * be returned if index == 0 and there is a gap at the index. We can not - * wrap-around if passed index == 0. + * In the rare case of index wrap-around, 0 will be returned. */ pgoff_t page_cache_next_miss(struct address_space *mapping, pgoff_t index, unsigned long max_scan) @@ -1772,13 +1770,12 @@ pgoff_t page_cache_next_miss(struct addr while (max_scan--) { void *entry = xas_next(&xas); if (!entry || xa_is_value(entry)) - return xas.xa_index; - if (xas.xa_index == 0 && index != 0) - return xas.xa_index; + break; + if (xas.xa_index == 0) + break; }
- /* No gaps in range and no wrap-around, return index beyond range */ - return xas.xa_index + 1; + return xas.xa_index; } EXPORT_SYMBOL(page_cache_next_miss);
@@ -1799,9 +1796,7 @@ EXPORT_SYMBOL(page_cache_next_miss); * * Return: The index of the gap if found, otherwise an index outside the * range specified (in which case 'index - return >= max_scan' will be true). - * In the rare case of wrap-around, ULONG_MAX will be returned. ULONG_MAX - * will also be returned if index == ULONG_MAX and there is a gap at the - * index. We can not wrap-around if passed index == ULONG_MAX. + * In the rare case of wrap-around, ULONG_MAX will be returned. */ pgoff_t page_cache_prev_miss(struct address_space *mapping, pgoff_t index, unsigned long max_scan) @@ -1811,13 +1806,12 @@ pgoff_t page_cache_prev_miss(struct addr while (max_scan--) { void *entry = xas_prev(&xas); if (!entry || xa_is_value(entry)) - return xas.xa_index; - if (xas.xa_index == ULONG_MAX && index != ULONG_MAX) - return xas.xa_index; + break; + if (xas.xa_index == ULONG_MAX) + break; }
- /* No gaps in range and no wrap-around, return index beyond range */ - return xas.xa_index - 1; + return xas.xa_index; } EXPORT_SYMBOL(page_cache_prev_miss);
From: Stephen Rothwell sfr@canb.auug.org.au
commit d9ffa069e006fa2873b94fbf2387546942d4f85b upstream.
After merging the net-next tree, today's linux-next build (sparc64 defconfig) failed like this:
drivers/net/ethernet/sun/sunvnet_common.c: In function 'vnet_handle_offloads': drivers/net/ethernet/sun/sunvnet_common.c:1277:16: error: implicit declaration of function 'skb_gso_segment'; did you mean 'skb_gso_reset'? [-Werror=implicit-function-declaration] 1277 | segs = skb_gso_segment(skb, dev->features & ~NETIF_F_TSO); | ^~~~~~~~~~~~~~~ | skb_gso_reset drivers/net/ethernet/sun/sunvnet_common.c:1277:14: warning: assignment to 'struct sk_buff *' from 'int' makes pointer from integer without a cast [-Wint-conversion] 1277 | segs = skb_gso_segment(skb, dev->features & ~NETIF_F_TSO); | ^
Fixes: d457a0e329b0 ("net: move gso declarations and functions to their own files") Signed-off-by: Stephen Rothwell sfr@canb.auug.org.au Reviewed-by: Simon Horman simon.horman@corigine.com Link: https://lore.kernel.org/r/20230613164639.164b2991@canb.auug.org.au Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/sun/sunvnet_common.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/net/ethernet/sun/sunvnet_common.c +++ b/drivers/net/ethernet/sun/sunvnet_common.c @@ -25,6 +25,7 @@ #endif
#include <net/ip.h> +#include <net/gso.h> #include <net/icmp.h> #include <net/route.h>
From: Geert Uytterhoeven geert+renesas@glider.be
commit a29b2fccf5f2689a9637be85ff1f51c834c6fb33 upstream.
smatch reports:
drivers/clk/imx/clk-imx93.c:294 imx93_clocks_probe() error: uninitialized symbol 'base'.
Indeed, in case of an error, the wrong (yet uninitialized) variable is converted to an error code and returned. Fix this by propagating the error code in the correct variable.
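For context, the bug class is decoding a pointer other than the one that was checked; a simplified userspace re-creation of the ERR_PTR()/PTR_ERR()/IS_ERR() helpers (hypothetical, for illustration only) shows the shape of the fix:

#include <errno.h>
#include <stdio.h>

#define MAX_ERRNO	4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

int main(void)
{
	void *anatop_base = ERR_PTR(-ENOMEM);	/* pretend the ioremap failed */
	long ret = 0;

	if (IS_ERR(anatop_base)) {
		/* buggy: ret = PTR_ERR(base); -- 'base' was never assigned here */
		ret = PTR_ERR(anatop_base);	/* fixed: decode the pointer we checked */
	}
	printf("ret = %ld\n", ret);		/* ret = -12 (-ENOMEM) */
	return 0;
}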
Fixes: e02ba11b45764705 ("clk: imx93: fix memory leak and missing unwind goto in imx93_clocks_probe") Reported-by: Dan Carpenter dan.carpenter@linaro.org Closes: https://lore.kernel.org/all/9c2acd81-3ad8-485d-819e-9e4201277831@kadam.mount... Reported-by: kernel test robot lkp@intel.com Closes: https://lore.kernel.org/all/202306161533.4YDmL22b-lkp@intel.com/ Signed-off-by: Geert Uytterhoeven geert+renesas@glider.be Link: https://lore.kernel.org/r/20230711150812.3562221-1-geert+renesas@glider.be Reviewed-by: Peng Fan peng.fan@nxp.com Signed-off-by: Stephen Boyd sboyd@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/clk/imx/clk-imx93.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/clk/imx/clk-imx93.c +++ b/drivers/clk/imx/clk-imx93.c @@ -291,7 +291,7 @@ static int imx93_clocks_probe(struct pla anatop_base = devm_of_iomap(dev, np, 0, NULL); of_node_put(np); if (WARN_ON(IS_ERR(anatop_base))) { - ret = PTR_ERR(base); + ret = PTR_ERR(anatop_base); goto unregister_hws; }
From: Linus Torvalds torvalds@linux-foundation.org
commit 797964253d358cf8d705614dda394dbe30120223 upstream.
In commit 20ea1e7d13c1 ("file: always lock position for FMODE_ATOMIC_POS") we ended up always taking the file pos lock, because pidfd_getfd() could get a reference to the file even when it didn't have an elevated file count due to threading or other sharing cases.
But Mateusz Guzik reports that the extra locking is actually measurable, so let's re-introduce the optimization, and only force the locking for directory traversal.
Directories need the lock for correctness reasons, while regular files only need it for "POSIX semantics". Since pidfd_getfd() is about debuggers etc special things that are _way_ outside of POSIX, we can relax the rules for that case.
Reported-by: Mateusz Guzik mjguzik@gmail.com Cc: Christian Brauner brauner@kernel.org Link: https://lore.kernel.org/linux-fsdevel/20230803095311.ijpvhx3fyrbkasul@f/ Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/file.c | 18 +++++++++++++++++- 1 file changed, 17 insertions(+), 1 deletion(-)
--- a/fs/file.c +++ b/fs/file.c @@ -1036,12 +1036,28 @@ unsigned long __fdget_raw(unsigned int f return __fget_light(fd, 0); }
+/* + * Try to avoid f_pos locking. We only need it if the + * file is marked for FMODE_ATOMIC_POS, and it can be + * accessed multiple ways. + * + * Always do it for directories, because pidfd_getfd() + * can make a file accessible even if it otherwise would + * not be, and for directories this is a correctness + * issue, not a "POSIX requirement". + */ +static inline bool file_needs_f_pos_lock(struct file *file) +{ + return (file->f_mode & FMODE_ATOMIC_POS) && + (file_count(file) > 1 || S_ISDIR(file_inode(file)->i_mode)); +} + unsigned long __fdget_pos(unsigned int fd) { unsigned long v = __fdget(fd); struct file *file = (struct file *)(v & ~3);
- if (file && (file->f_mode & FMODE_ATOMIC_POS)) { + if (file && file_needs_f_pos_lock(file)) { v |= FDPUT_POS_UNLOCK; mutex_lock(&file->f_pos_lock); }
From: Roman Gushchin roman.gushchin@linux.dev
commit 3b8abb3239530c423c0b97e42af7f7e856e1ee96 upstream.
KCSAN found an issue in obj_stock_flush_required(): stock->cached_objcg can be reset between the check and dereference:
================================================================== BUG: KCSAN: data-race in drain_all_stock / drain_obj_stock
write to 0xffff888237c2a2f8 of 8 bytes by task 19625 on cpu 0: drain_obj_stock+0x408/0x4e0 mm/memcontrol.c:3306 refill_obj_stock+0x9c/0x1e0 mm/memcontrol.c:3340 obj_cgroup_uncharge+0xe/0x10 mm/memcontrol.c:3408 memcg_slab_free_hook mm/slab.h:587 [inline] __cache_free mm/slab.c:3373 [inline] __do_kmem_cache_free mm/slab.c:3577 [inline] kmem_cache_free+0x105/0x280 mm/slab.c:3602 __d_free fs/dcache.c:298 [inline] dentry_free fs/dcache.c:375 [inline] __dentry_kill+0x422/0x4a0 fs/dcache.c:621 dentry_kill+0x8d/0x1e0 dput+0x118/0x1f0 fs/dcache.c:913 __fput+0x3bf/0x570 fs/file_table.c:329 ____fput+0x15/0x20 fs/file_table.c:349 task_work_run+0x123/0x160 kernel/task_work.c:179 resume_user_mode_work include/linux/resume_user_mode.h:49 [inline] exit_to_user_mode_loop+0xcf/0xe0 kernel/entry/common.c:171 exit_to_user_mode_prepare+0x6a/0xa0 kernel/entry/common.c:203 __syscall_exit_to_user_mode_work kernel/entry/common.c:285 [inline] syscall_exit_to_user_mode+0x26/0x140 kernel/entry/common.c:296 do_syscall_64+0x4d/0xc0 arch/x86/entry/common.c:86 entry_SYSCALL_64_after_hwframe+0x63/0xcd
read to 0xffff888237c2a2f8 of 8 bytes by task 19632 on cpu 1: obj_stock_flush_required mm/memcontrol.c:3319 [inline] drain_all_stock+0x174/0x2a0 mm/memcontrol.c:2361 try_charge_memcg+0x6d0/0xd10 mm/memcontrol.c:2703 try_charge mm/memcontrol.c:2837 [inline] mem_cgroup_charge_skmem+0x51/0x140 mm/memcontrol.c:7290 sock_reserve_memory+0xb1/0x390 net/core/sock.c:1025 sk_setsockopt+0x800/0x1e70 net/core/sock.c:1525 udp_lib_setsockopt+0x99/0x6c0 net/ipv4/udp.c:2692 udp_setsockopt+0x73/0xa0 net/ipv4/udp.c:2817 sock_common_setsockopt+0x61/0x70 net/core/sock.c:3668 __sys_setsockopt+0x1c3/0x230 net/socket.c:2271 __do_sys_setsockopt net/socket.c:2282 [inline] __se_sys_setsockopt net/socket.c:2279 [inline] __x64_sys_setsockopt+0x66/0x80 net/socket.c:2279 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x63/0xcd
value changed: 0xffff8881382d52c0 -> 0xffff888138893740
Reported by Kernel Concurrency Sanitizer on: CPU: 1 PID: 19632 Comm: syz-executor.0 Not tainted 6.3.0-rc2-syzkaller-00387-g534293368afa #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Fix it by using READ_ONCE()/WRITE_ONCE() for all accesses to stock->cached_objcg.
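The fix pattern is to take one snapshot and use it for both the check and the dereference; a minimal illustration with a simplified stand-in for the kernel's READ_ONCE() (the struct names here are illustrative, not the memcg code):

#include <stdio.h>

#define READ_ONCE(x)	(*(const volatile __typeof__(x) *)&(x))

struct objcg { int id; };

struct stock {
	struct objcg *cached_objcg;
};

/* Buggy shape: two reads of stock->cached_objcg, so a concurrent
 * drain_obj_stock() can clear the pointer between check and use. */
static int flush_required_buggy(struct stock *stock)
{
	if (stock->cached_objcg)
		return stock->cached_objcg->id != 0;
	return 0;
}

/* Fixed shape: one snapshot used for both the check and the use. */
static int flush_required_fixed(struct stock *stock)
{
	struct objcg *objcg = READ_ONCE(stock->cached_objcg);

	return objcg ? objcg->id != 0 : 0;
}

int main(void)
{
	struct objcg cg = { .id = 1 };
	struct stock s = { .cached_objcg = &cg };

	printf("%d %d\n", flush_required_buggy(&s), flush_required_fixed(&s));
	return 0;
}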
Link: https://lkml.kernel.org/r/20230502160839.361544-1-roman.gushchin@linux.dev Fixes: bf4f059954dc ("mm: memcg/slab: obj_cgroup API") Signed-off-by: Roman Gushchin roman.gushchin@linux.dev Reported-by: syzbot+774c29891415ab0fd29d@syzkaller.appspotmail.com Reported-by: Dmitry Vyukov dvyukov@google.com Link: https://lore.kernel.org/linux-mm/CACT4Y+ZfucZhM60YPphWiCLJr6+SGFhT+jjm8k1P-a... Reviewed-by: Yosry Ahmed yosryahmed@google.com Acked-by: Shakeel Butt shakeelb@google.com Reviewed-by: Dmitry Vyukov dvyukov@google.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/memcontrol.c | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-)
--- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -3208,12 +3208,12 @@ void mod_objcg_state(struct obj_cgroup * * accumulating over a page of vmstat data or when pgdat or idx * changes. */ - if (stock->cached_objcg != objcg) { + if (READ_ONCE(stock->cached_objcg) != objcg) { old = drain_obj_stock(stock); obj_cgroup_get(objcg); stock->nr_bytes = atomic_read(&objcg->nr_charged_bytes) ? atomic_xchg(&objcg->nr_charged_bytes, 0) : 0; - stock->cached_objcg = objcg; + WRITE_ONCE(stock->cached_objcg, objcg); stock->cached_pgdat = pgdat; } else if (stock->cached_pgdat != pgdat) { /* Flush the existing cached vmstat data */ @@ -3267,7 +3267,7 @@ static bool consume_obj_stock(struct obj local_lock_irqsave(&memcg_stock.stock_lock, flags);
stock = this_cpu_ptr(&memcg_stock); - if (objcg == stock->cached_objcg && stock->nr_bytes >= nr_bytes) { + if (objcg == READ_ONCE(stock->cached_objcg) && stock->nr_bytes >= nr_bytes) { stock->nr_bytes -= nr_bytes; ret = true; } @@ -3279,7 +3279,7 @@ static bool consume_obj_stock(struct obj
static struct obj_cgroup *drain_obj_stock(struct memcg_stock_pcp *stock) { - struct obj_cgroup *old = stock->cached_objcg; + struct obj_cgroup *old = READ_ONCE(stock->cached_objcg);
if (!old) return NULL; @@ -3332,7 +3332,7 @@ static struct obj_cgroup *drain_obj_stoc stock->cached_pgdat = NULL; }
- stock->cached_objcg = NULL; + WRITE_ONCE(stock->cached_objcg, NULL); /* * The `old' objects needs to be released by the caller via * obj_cgroup_put() outside of memcg_stock_pcp::stock_lock. @@ -3343,10 +3343,11 @@ static struct obj_cgroup *drain_obj_stoc static bool obj_stock_flush_required(struct memcg_stock_pcp *stock, struct mem_cgroup *root_memcg) { + struct obj_cgroup *objcg = READ_ONCE(stock->cached_objcg); struct mem_cgroup *memcg;
- if (stock->cached_objcg) { - memcg = obj_cgroup_memcg(stock->cached_objcg); + if (objcg) { + memcg = obj_cgroup_memcg(objcg); if (memcg && mem_cgroup_is_descendant(memcg, root_memcg)) return true; } @@ -3365,10 +3366,10 @@ static void refill_obj_stock(struct obj_ local_lock_irqsave(&memcg_stock.stock_lock, flags);
stock = this_cpu_ptr(&memcg_stock); - if (stock->cached_objcg != objcg) { /* reset if necessary */ + if (READ_ONCE(stock->cached_objcg) != objcg) { /* reset if necessary */ old = drain_obj_stock(stock); obj_cgroup_get(objcg); - stock->cached_objcg = objcg; + WRITE_ONCE(stock->cached_objcg, objcg); stock->nr_bytes = atomic_read(&objcg->nr_charged_bytes) ? atomic_xchg(&objcg->nr_charged_bytes, 0) : 0; allow_uncharge = true; /* Allow uncharge when objcg changes */
From: Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp
commit ea303f72d70ce2f0b0aa94ab127085289768c5a6 upstream.
syzbot is reporting a too-large allocation at ntfs_load_attr_list(), since a crafted filesystem can have a huge data_size.
Reported-by: syzbot syzbot+89dbb3a789a5b9711793@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?extid=89dbb3a789a5b9711793 Signed-off-by: Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp Signed-off-by: Konstantin Komarov almaz.alexandrovich@paragon-software.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ntfs3/attrlist.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/fs/ntfs3/attrlist.c +++ b/fs/ntfs3/attrlist.c @@ -52,7 +52,7 @@ int ntfs_load_attr_list(struct ntfs_inod
if (!attr->non_res) { lsize = le32_to_cpu(attr->res.data_size); - le = kmalloc(al_aligned(lsize), GFP_NOFS); + le = kmalloc(al_aligned(lsize), GFP_NOFS | __GFP_NOWARN); if (!le) { err = -ENOMEM; goto out; @@ -80,7 +80,7 @@ int ntfs_load_attr_list(struct ntfs_inod if (err < 0) goto out;
- le = kmalloc(al_aligned(lsize), GFP_NOFS); + le = kmalloc(al_aligned(lsize), GFP_NOFS | __GFP_NOWARN); if (!le) { err = -ENOMEM; goto out;
From: Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp
commit 726ccdba1521007fab4b2b7565d255fa0f2b770c upstream.
syzbot is reporting a lockdep warning in __stack_depot_save(), because the caller of __stack_depot_save() (i.e. __kasan_record_aux_stack() in this report) is responsible for masking the __GFP_KSWAPD_RECLAIM flag in order not to wake kswapd, which in turn wakes kcompactd.
Since kasan/kmsan functions might be called with arbitrary locks held, mask __GFP_KSWAPD_RECLAIM flag from all GFP_NOWAIT/GFP_ATOMIC allocations in kasan/kmsan.
Note that kmsan_save_stack_with_flags() is changed to mask both the __GFP_DIRECT_RECLAIM flag and the __GFP_KSWAPD_RECLAIM flag, because wakeup_kswapd(), called from wake_all_kswapds() in __alloc_pages_slowpath(), calls wakeup_kcompactd() if the __GFP_KSWAPD_RECLAIM flag is set and the __GFP_DIRECT_RECLAIM flag is not set.
Link: https://lkml.kernel.org/r/656cb4f5-998b-c8d7-3c61-c2d37aa90f9a@I-love.SAKURA... Signed-off-by: Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp Reported-by: syzbot syzbot+ece2915262061d6e0ac1@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=ece2915262061d6e0ac1 Reviewed-by: "Huang, Ying" ying.huang@intel.com Reviewed-by: Alexander Potapenko glider@google.com Cc: Andrey Konovalov andreyknvl@gmail.com Cc: Andrey Ryabinin ryabinin.a.a@gmail.com Cc: Dmitry Vyukov dvyukov@google.com Cc: Marco Elver elver@google.com Cc: Mel Gorman mgorman@techsingularity.net Cc: Vincenzo Frascino vincenzo.frascino@arm.com Cc: Vlastimil Babka vbabka@suse.cz Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/kasan/generic.c | 4 ++-- mm/kasan/tags.c | 2 +- mm/kmsan/core.c | 6 +++--- mm/kmsan/instrumentation.c | 2 +- 4 files changed, 7 insertions(+), 7 deletions(-)
--- a/mm/kasan/generic.c +++ b/mm/kasan/generic.c @@ -489,7 +489,7 @@ static void __kasan_record_aux_stack(voi return;
alloc_meta->aux_stack[1] = alloc_meta->aux_stack[0]; - alloc_meta->aux_stack[0] = kasan_save_stack(GFP_NOWAIT, can_alloc); + alloc_meta->aux_stack[0] = kasan_save_stack(0, can_alloc); }
void kasan_record_aux_stack(void *addr) @@ -519,7 +519,7 @@ void kasan_save_free_info(struct kmem_ca if (!free_meta) return;
- kasan_set_track(&free_meta->free_track, GFP_NOWAIT); + kasan_set_track(&free_meta->free_track, 0); /* The object was freed and has free track set. */ *(u8 *)kasan_mem_to_shadow(object) = KASAN_SLAB_FREETRACK; } --- a/mm/kasan/tags.c +++ b/mm/kasan/tags.c @@ -140,5 +140,5 @@ void kasan_save_alloc_info(struct kmem_c
void kasan_save_free_info(struct kmem_cache *cache, void *object) { - save_stack_info(cache, object, GFP_NOWAIT, true); + save_stack_info(cache, object, 0, true); } --- a/mm/kmsan/core.c +++ b/mm/kmsan/core.c @@ -74,7 +74,7 @@ depot_stack_handle_t kmsan_save_stack_wi nr_entries = stack_trace_save(entries, KMSAN_STACK_DEPTH, 0);
/* Don't sleep. */ - flags &= ~__GFP_DIRECT_RECLAIM; + flags &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);
handle = __stack_depot_save(entries, nr_entries, flags, true); return stack_depot_set_extra_bits(handle, extra); @@ -245,7 +245,7 @@ depot_stack_handle_t kmsan_internal_chai extra_bits = kmsan_extra_bits(depth, uaf);
entries[0] = KMSAN_CHAIN_MAGIC_ORIGIN; - entries[1] = kmsan_save_stack_with_flags(GFP_ATOMIC, 0); + entries[1] = kmsan_save_stack_with_flags(__GFP_HIGH, 0); entries[2] = id; /* * @entries is a local var in non-instrumented code, so KMSAN does not @@ -253,7 +253,7 @@ depot_stack_handle_t kmsan_internal_chai * positives when __stack_depot_save() passes it to instrumented code. */ kmsan_internal_unpoison_memory(entries, sizeof(entries), false); - handle = __stack_depot_save(entries, ARRAY_SIZE(entries), GFP_ATOMIC, + handle = __stack_depot_save(entries, ARRAY_SIZE(entries), __GFP_HIGH, true); return stack_depot_set_extra_bits(handle, extra_bits); } --- a/mm/kmsan/instrumentation.c +++ b/mm/kmsan/instrumentation.c @@ -282,7 +282,7 @@ void __msan_poison_alloca(void *address,
/* stack_depot_save() may allocate memory. */ kmsan_enter_runtime(); - handle = stack_depot_save(entries, ARRAY_SIZE(entries), GFP_ATOMIC); + handle = stack_depot_save(entries, ARRAY_SIZE(entries), __GFP_HIGH); kmsan_leave_runtime();
kmsan_internal_set_shadow_origin(address, size, -1, handle,
From: Prince Kumar Maurya princekumarmaurya06@gmail.com
commit ea2b62f305893992156a798f665847e0663c9f41 upstream.
sb_getblk(inode->i_sb, parent) can return a NULL pointer, and taking a lock on it leads to the null-ptr-deref bug.
Reported-by: syzbot+aad58150cbc64ba41bdc@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=aad58150cbc64ba41bdc Signed-off-by: Prince Kumar Maurya princekumarmaurya06@gmail.com Message-Id: 20230531013141.19487-1-princekumarmaurya06@gmail.com Signed-off-by: Christian Brauner brauner@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/sysv/itree.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/fs/sysv/itree.c +++ b/fs/sysv/itree.c @@ -145,6 +145,10 @@ static int alloc_branch(struct inode *in */ parent = block_to_cpu(SYSV_SB(inode->i_sb), branch[n-1].key); bh = sb_getblk(inode->i_sb, parent); + if (!bh) { + sysv_free_block(inode->i_sb, branch[n].key); + break; + } lock_buffer(bh); memset(bh->b_data, 0, blocksize); branch[n].bh = bh;
From: Sungwoo Kim iam@sung-woo.kim
commit 1728137b33c00d5a2b5110ed7aafb42e7c32e4a1 upstream.
l2cap_sock_release(sk) frees sk. However, sk's children are still alive and point to the already freed sk's address. To fix this, l2cap_sock_release(sk) also cleans up sk's children.
================================================================== BUG: KASAN: use-after-free in l2cap_sock_ready_cb+0xb7/0x100 net/bluetooth/l2cap_sock.c:1650 Read of size 8 at addr ffff888104617aa8 by task kworker/u3:0/276
CPU: 0 PID: 276 Comm: kworker/u3:0 Not tainted 6.2.0-00001-gef397bd4d5fb-dirty #59 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014 Workqueue: hci2 hci_rx_work Call Trace: <TASK> __dump_stack lib/dump_stack.c:88 [inline] dump_stack_lvl+0x72/0x95 lib/dump_stack.c:106 print_address_description mm/kasan/report.c:306 [inline] print_report+0x175/0x478 mm/kasan/report.c:417 kasan_report+0xb1/0x130 mm/kasan/report.c:517 l2cap_sock_ready_cb+0xb7/0x100 net/bluetooth/l2cap_sock.c:1650 l2cap_chan_ready+0x10e/0x1e0 net/bluetooth/l2cap_core.c:1386 l2cap_config_req+0x753/0x9f0 net/bluetooth/l2cap_core.c:4480 l2cap_bredr_sig_cmd net/bluetooth/l2cap_core.c:5739 [inline] l2cap_sig_channel net/bluetooth/l2cap_core.c:6509 [inline] l2cap_recv_frame+0xe2e/0x43c0 net/bluetooth/l2cap_core.c:7788 l2cap_recv_acldata+0x6ed/0x7e0 net/bluetooth/l2cap_core.c:8506 hci_acldata_packet net/bluetooth/hci_core.c:3813 [inline] hci_rx_work+0x66e/0xbc0 net/bluetooth/hci_core.c:4048 process_one_work+0x4ea/0x8e0 kernel/workqueue.c:2289 worker_thread+0x364/0x8e0 kernel/workqueue.c:2436 kthread+0x1b9/0x200 kernel/kthread.c:376 ret_from_fork+0x2c/0x50 arch/x86/entry/entry_64.S:308 </TASK>
Allocated by task 288: kasan_save_stack+0x22/0x50 mm/kasan/common.c:45 kasan_set_track+0x25/0x30 mm/kasan/common.c:52 ____kasan_kmalloc mm/kasan/common.c:374 [inline] __kasan_kmalloc+0x82/0x90 mm/kasan/common.c:383 kasan_kmalloc include/linux/kasan.h:211 [inline] __do_kmalloc_node mm/slab_common.c:968 [inline] __kmalloc+0x5a/0x140 mm/slab_common.c:981 kmalloc include/linux/slab.h:584 [inline] sk_prot_alloc+0x113/0x1f0 net/core/sock.c:2040 sk_alloc+0x36/0x3c0 net/core/sock.c:2093 l2cap_sock_alloc.constprop.0+0x39/0x1c0 net/bluetooth/l2cap_sock.c:1852 l2cap_sock_create+0x10d/0x220 net/bluetooth/l2cap_sock.c:1898 bt_sock_create+0x183/0x290 net/bluetooth/af_bluetooth.c:132 __sock_create+0x226/0x380 net/socket.c:1518 sock_create net/socket.c:1569 [inline] __sys_socket_create net/socket.c:1606 [inline] __sys_socket_create net/socket.c:1591 [inline] __sys_socket+0x112/0x200 net/socket.c:1639 __do_sys_socket net/socket.c:1652 [inline] __se_sys_socket net/socket.c:1650 [inline] __x64_sys_socket+0x40/0x50 net/socket.c:1650 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x3f/0x90 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x72/0xdc
Freed by task 288: kasan_save_stack+0x22/0x50 mm/kasan/common.c:45 kasan_set_track+0x25/0x30 mm/kasan/common.c:52 kasan_save_free_info+0x2e/0x50 mm/kasan/generic.c:523 ____kasan_slab_free mm/kasan/common.c:236 [inline] ____kasan_slab_free mm/kasan/common.c:200 [inline] __kasan_slab_free+0x10a/0x190 mm/kasan/common.c:244 kasan_slab_free include/linux/kasan.h:177 [inline] slab_free_hook mm/slub.c:1781 [inline] slab_free_freelist_hook mm/slub.c:1807 [inline] slab_free mm/slub.c:3787 [inline] __kmem_cache_free+0x88/0x1f0 mm/slub.c:3800 sk_prot_free net/core/sock.c:2076 [inline] __sk_destruct+0x347/0x430 net/core/sock.c:2168 sk_destruct+0x9c/0xb0 net/core/sock.c:2183 __sk_free+0x82/0x220 net/core/sock.c:2194 sk_free+0x7c/0xa0 net/core/sock.c:2205 sock_put include/net/sock.h:1991 [inline] l2cap_sock_kill+0x256/0x2b0 net/bluetooth/l2cap_sock.c:1257 l2cap_sock_release+0x1a7/0x220 net/bluetooth/l2cap_sock.c:1428 __sock_release+0x80/0x150 net/socket.c:650 sock_close+0x19/0x30 net/socket.c:1368 __fput+0x17a/0x5c0 fs/file_table.c:320 task_work_run+0x132/0x1c0 kernel/task_work.c:179 resume_user_mode_work include/linux/resume_user_mode.h:49 [inline] exit_to_user_mode_loop kernel/entry/common.c:171 [inline] exit_to_user_mode_prepare+0x113/0x120 kernel/entry/common.c:203 __syscall_exit_to_user_mode_work kernel/entry/common.c:285 [inline] syscall_exit_to_user_mode+0x21/0x50 kernel/entry/common.c:296 do_syscall_64+0x4c/0x90 arch/x86/entry/common.c:86 entry_SYSCALL_64_after_hwframe+0x72/0xdc
The buggy address belongs to the object at ffff888104617800 which belongs to the cache kmalloc-1k of size 1024 The buggy address is located 680 bytes inside of 1024-byte region [ffff888104617800, ffff888104617c00)
The buggy address belongs to the physical page: page:00000000dbca6a80 refcount:1 mapcount:0 mapping:0000000000000000 index:0xffff888104614000 pfn:0x104614 head:00000000dbca6a80 order:2 compound_mapcount:0 subpages_mapcount:0 compound_pincount:0 flags: 0x200000000010200(slab|head|node=0|zone=2) raw: 0200000000010200 ffff888100041dc0 ffffea0004212c10 ffffea0004234b10 raw: ffff888104614000 0000000000080002 00000001ffffffff 0000000000000000 page dumped because: kasan: bad access detected
Memory state around the buggy address: ffff888104617980: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff888104617a00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff888104617a80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^ ffff888104617b00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff888104617b80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ==================================================================
Ack: This bug was found by FuzzBT with a modified Syzkaller. Other contributors are Ruoyu Wu and Hui Peng. Signed-off-by: Sungwoo Kim iam@sung-woo.kim Signed-off-by: Luiz Augusto von Dentz luiz.von.dentz@intel.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/bluetooth/l2cap_sock.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/net/bluetooth/l2cap_sock.c +++ b/net/bluetooth/l2cap_sock.c @@ -46,6 +46,7 @@ static const struct proto_ops l2cap_sock static void l2cap_sock_init(struct sock *sk, struct sock *parent); static struct sock *l2cap_sock_alloc(struct net *net, struct socket *sock, int proto, gfp_t prio, int kern); +static void l2cap_sock_cleanup_listen(struct sock *parent);
bool l2cap_is_socket(struct socket *sock) { @@ -1415,6 +1416,7 @@ static int l2cap_sock_release(struct soc if (!sk) return 0;
+ l2cap_sock_cleanup_listen(sk); bt_sock_unlink(&l2cap_sk_list, sk);
err = l2cap_sock_shutdown(sock, SHUT_RDWR);
From: Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp
commit 8b64d420fe2450f82848178506d3e3a0bd195539 upstream.
syzbot is reporting a false positive ODEBUG message immediately after ODEBUG was disabled due to OOM.
[ 1062.309646][T22911] ODEBUG: Out of memory. ODEBUG disabled [ 1062.886755][ T5171] ------------[ cut here ]------------ [ 1062.892770][ T5171] ODEBUG: assert_init not available (active state 0) object: ffffc900056afb20 object type: timer_list hint: process_timeout+0x0/0x40
CPU 0 [ T5171]                          CPU 1 [T22911]
--------------                          --------------
debug_object_assert_init() {
  if (!debug_objects_enabled)
    return;
  db = get_bucket(addr);
                                        lookup_object_or_alloc() {
                                          debug_objects_enabled = 0;
                                          return NULL;
                                        }
                                        debug_objects_oom() {
                                          pr_warn("Out of memory. ODEBUG disabled\n");
                                          // all buckets get emptied here, and
                                        }
  lookup_object_or_alloc(addr, db, descr, false, true) {
    // this bucket is already empty.
    return ERR_PTR(-ENOENT);
  }
  // Emits false positive warning.
  debug_print_object(&o, "assert_init");
}
Recheck debug_objects_enabled in debug_print_object() to avoid that.
Reported-by: syzbot syzbot+7937ba6a50bdd00fffdf@syzkaller.appspotmail.com Suggested-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp Signed-off-by: Thomas Gleixner tglx@linutronix.de Link: https://lore.kernel.org/r/492fe2ae-5141-d548-ebd5-62f5fe2e57f7@I-love.SAKURA... Closes: https://syzkaller.appspot.com/bug?extid=7937ba6a50bdd00fffdf Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- lib/debugobjects.c | 9 +++++++++ 1 file changed, 9 insertions(+)
--- a/lib/debugobjects.c +++ b/lib/debugobjects.c @@ -498,6 +498,15 @@ static void debug_print_object(struct de const struct debug_obj_descr *descr = obj->descr; static int limit;
+ /* + * Don't report if lookup_object_or_alloc() by the current thread + * failed because lookup_object_or_alloc()/debug_objects_oom() by a + * concurrent thread turned off debug_objects_enabled and cleared + * the hash buckets. + */ + if (!debug_objects_enabled) + return; + if (limit < 5 && descr != descr_test) { void *hint = descr->debug_hint ? descr->debug_hint(obj->object) : NULL;
From: Alan Stern stern@rowland.harvard.edu
commit 5e1627cb43ddf1b24b92eb26f8d958a3f5676ccb upstream.
The syzbot fuzzer identified a problem in the usbnet driver:
usb 1-1: BOGUS urb xfer, pipe 3 != type 1 WARNING: CPU: 0 PID: 754 at drivers/usb/core/urb.c:504 usb_submit_urb+0xed6/0x1880 drivers/usb/core/urb.c:504 Modules linked in: CPU: 0 PID: 754 Comm: kworker/0:2 Not tainted 6.4.0-rc7-syzkaller-00014-g692b7dc87ca6 #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/27/2023 Workqueue: mld mld_ifc_work RIP: 0010:usb_submit_urb+0xed6/0x1880 drivers/usb/core/urb.c:504 Code: 7c 24 18 e8 2c b4 5b fb 48 8b 7c 24 18 e8 42 07 f0 fe 41 89 d8 44 89 e1 4c 89 ea 48 89 c6 48 c7 c7 a0 c9 fc 8a e8 5a 6f 23 fb <0f> 0b e9 58 f8 ff ff e8 fe b3 5b fb 48 81 c5 c0 05 00 00 e9 84 f7 RSP: 0018:ffffc9000463f568 EFLAGS: 00010086 RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000000000 RDX: ffff88801eb28000 RSI: ffffffff814c03b7 RDI: 0000000000000001 RBP: ffff8881443b7190 R08: 0000000000000001 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000001 R12: 0000000000000003 R13: ffff88802a77cb18 R14: 0000000000000003 R15: ffff888018262500 FS: 0000000000000000(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000556a99c15a18 CR3: 0000000028c71000 CR4: 0000000000350ef0 Call Trace: <TASK> usbnet_start_xmit+0xfe5/0x2190 drivers/net/usb/usbnet.c:1453 __netdev_start_xmit include/linux/netdevice.h:4918 [inline] netdev_start_xmit include/linux/netdevice.h:4932 [inline] xmit_one net/core/dev.c:3578 [inline] dev_hard_start_xmit+0x187/0x700 net/core/dev.c:3594 ...
This bug is caused by the fact that usbnet trusts the bulk endpoint addresses its probe routine receives in the driver_info structure, and it does not check to see that these endpoints actually exist and have the expected type and directions.
The fix is simply to add such a check.
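To sketch what such a check amounts to (hypothetical, simplified structures and helper, not the USB core API):

#include <stdio.h>

struct ep_desc {
	unsigned char addr;	/* endpoint address, bit 7 set = IN direction */
	unsigned char is_bulk;
};

/* Verify that every expected bulk endpoint address (zero-terminated list)
 * is actually present on the interface with the right type and direction,
 * instead of blindly trusting a static driver table. */
static int check_bulk_endpoints(const struct ep_desc *eps, int nr,
				const unsigned char *expected)
{
	for (; *expected; expected++) {
		int found = 0;

		for (int i = 0; i < nr; i++)
			if (eps[i].addr == *expected && eps[i].is_bulk)
				found = 1;
		if (!found)
			return 0;	/* probe should then fail with -EINVAL */
	}
	return 1;
}

int main(void)
{
	const struct ep_desc eps[] = { { 0x81, 1 }, { 0x02, 1 }, { 0x83, 0 } };
	const unsigned char ok[]    = { 0x81, 0x02, 0 };	/* bulk IN 1, bulk OUT 2 */
	const unsigned char bogus[] = { 0x81, 0x03, 0 };	/* OUT 3 does not exist */

	printf("%d %d\n", check_bulk_endpoints(eps, 3, ok),
	       check_bulk_endpoints(eps, 3, bogus));		/* 1 0 */
	return 0;
}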
Reported-and-tested-by: syzbot+63ee658b9a100ffadbe2@syzkaller.appspotmail.com Closes: https://lore.kernel.org/linux-usb/000000000000a56e9105d0cec021@google.com/ Signed-off-by: Alan Stern stern@rowland.harvard.edu CC: Oliver Neukum oneukum@suse.com Link: https://lore.kernel.org/r/ea152b6d-44df-4f8a-95c6-4db51143dcc1@rowland.harva... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/usb/usbnet.c | 6 ++++++ 1 file changed, 6 insertions(+)
--- a/drivers/net/usb/usbnet.c +++ b/drivers/net/usb/usbnet.c @@ -1775,6 +1775,10 @@ usbnet_probe (struct usb_interface *udev } else if (!info->in || !info->out) status = usbnet_get_endpoints (dev, udev); else { + u8 ep_addrs[3] = { + info->in + USB_DIR_IN, info->out + USB_DIR_OUT, 0 + }; + dev->in = usb_rcvbulkpipe (xdev, info->in); dev->out = usb_sndbulkpipe (xdev, info->out); if (!(info->flags & FLAG_NO_SETINT)) @@ -1784,6 +1788,8 @@ usbnet_probe (struct usb_interface *udev else status = 0;
+ if (status == 0 && !usb_check_bulk_endpoints(udev, ep_addrs)) + status = -EINVAL; } if (status >= 0 && dev->status) status = init_status (dev, udev);
From: Jan Kara jack@suse.cz
commit c541dce86c537714b6761a79a969c1623dfa222b upstream.
The reconfigure / remount code takes a lot of effort to protect a filesystem's reconfiguration code from racing writes when remounting read-only. However, when remounting a read-only filesystem to read-write mode, userspace writes can start immediately once we clear the SB_RDONLY flag. This is inconvenient, for example, for ext4 because we need to do some writes to the filesystem (such as preparation of quota files) before we can take userspace writes, so we are clearing the SB_RDONLY flag before we are fully ready to accept userspace writes, and syzbot has found a way to exploit this [1]. Also, as far as I'm reading the code, the filesystem remount code was protected from racing writes in the legacy mount path by the mount's MNT_READONLY flag, so this is a relatively new problem. It is actually fairly easy to protect remount read-write from racing writes using the sb->s_readonly_remount flag, so let's just do that instead of having to work around these races in the filesystem code.
[1] https://lore.kernel.org/all/00000000000006a0df05f6667499@google.com/T/
Signed-off-by: Jan Kara jack@suse.cz Message-Id: 20230615113848.8439-1-jack@suse.cz Signed-off-by: Christian Brauner brauner@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/super.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-)
--- a/fs/super.c +++ b/fs/super.c @@ -903,6 +903,7 @@ int reconfigure_super(struct fs_context struct super_block *sb = fc->root->d_sb; int retval; bool remount_ro = false; + bool remount_rw = false; bool force = fc->sb_flags & SB_FORCE;
if (fc->sb_flags_mask & ~MS_RMT_MASK) @@ -920,7 +921,7 @@ int reconfigure_super(struct fs_context bdev_read_only(sb->s_bdev)) return -EACCES; #endif - + remount_rw = !(fc->sb_flags & SB_RDONLY) && sb_rdonly(sb); remount_ro = (fc->sb_flags & SB_RDONLY) && !sb_rdonly(sb); }
@@ -950,6 +951,14 @@ int reconfigure_super(struct fs_context if (retval) return retval; } + } else if (remount_rw) { + /* + * We set s_readonly_remount here to protect filesystem's + * reconfigure code from writes from userspace until + * reconfigure finishes. + */ + sb->s_readonly_remount = 1; + smp_wmb(); }
if (fc->ops->reconfigure) {
From: Jason Gunthorpe jgg@nvidia.com
commit 9883c7f84053cec2826ca3c56254601b5ce9cdbe upstream.
These routines are not intended to return zero; the callers cannot do anything sane with a 0 return. They should return an error, which means future calls to GUP will not succeed, or they should return some non-zero number of pinned pages, which means GUP should be called again.
If start + nr_pages overflows it should return -EOVERFLOW to signal the arguments are invalid.
Syzkaller keeps tripping on this when fuzzing GUP arguments.
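The shape of the argument check, as a hypothetical userspace sketch built on the GCC/Clang builtin that backs the kernel's check_add_overflow():

#include <errno.h>
#include <stdio.h>

/* Return 0 if [start, start + (nr_pages << page_shift)) is representable,
 * otherwise a negative errno -- never "0 pages pinned" for bad arguments. */
static int validate_range(unsigned long start, unsigned long nr_pages,
			  unsigned int page_shift)
{
	unsigned long len = nr_pages << page_shift;
	unsigned long end;

	if (__builtin_add_overflow(start, len, &end))
		return -EOVERFLOW;
	return 0;
}

int main(void)
{
	printf("%d\n", validate_range(~0UL - 4096, 2, 12));	/* -75 (-EOVERFLOW) */
	return 0;
}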
Link: https://lkml.kernel.org/r/0-v1-3d5ed1f20d50+104-gup_overflow_jgg@nvidia.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Reported-by: syzbot+353c7be4964c6253f24a@syzkaller.appspotmail.com Closes: https://lore.kernel.org/all/000000000000094fdd05faa4d3a4@google.com Reviewed-by: John Hubbard jhubbard@nvidia.com Reviewed-by: Lorenzo Stoakes lstoakes@gmail.com Reviewed-by: David Hildenbrand david@redhat.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/gup.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/mm/gup.c +++ b/mm/gup.c @@ -2977,7 +2977,7 @@ static int internal_get_user_pages_fast( start = untagged_addr(start) & PAGE_MASK; len = nr_pages << PAGE_SHIFT; if (check_add_overflow(start, len, &end)) - return 0; + return -EOVERFLOW; if (end > TASK_SIZE_MAX) return -EFAULT; if (unlikely(!access_ok((void __user *)start, len)))
From: Jan Kara jack@suse.cz
commit 404615d7f1dcd4cca200e9a7a9df3a1dcae1dd62 upstream.
Ext2 has fields in the superblock reserved for subblock allocation support. However, that never landed. Drop the code, which has been dead for many years.
Reported-by: syzbot+af5e10f73dbff48f70af@syzkaller.appspotmail.com Signed-off-by: Jan Kara jack@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext2/ext2.h | 12 ------------ fs/ext2/super.c | 23 ++++------------------- 2 files changed, 4 insertions(+), 31 deletions(-)
--- a/fs/ext2/ext2.h +++ b/fs/ext2/ext2.h @@ -70,10 +70,7 @@ struct mb_cache; * second extended-fs super-block data in memory */ struct ext2_sb_info { - unsigned long s_frag_size; /* Size of a fragment in bytes */ - unsigned long s_frags_per_block;/* Number of fragments per block */ unsigned long s_inodes_per_block;/* Number of inodes per block */ - unsigned long s_frags_per_group;/* Number of fragments in a group */ unsigned long s_blocks_per_group;/* Number of blocks in a group */ unsigned long s_inodes_per_group;/* Number of inodes in a group */ unsigned long s_itb_per_group; /* Number of inode table blocks per group */ @@ -189,15 +186,6 @@ static inline struct ext2_sb_info *EXT2_ #define EXT2_FIRST_INO(s) (EXT2_SB(s)->s_first_ino)
/* - * Macro-instructions used to manage fragments - */ -#define EXT2_MIN_FRAG_SIZE 1024 -#define EXT2_MAX_FRAG_SIZE 4096 -#define EXT2_MIN_FRAG_LOG_SIZE 10 -#define EXT2_FRAG_SIZE(s) (EXT2_SB(s)->s_frag_size) -#define EXT2_FRAGS_PER_BLOCK(s) (EXT2_SB(s)->s_frags_per_block) - -/* * Structure of a blocks group descriptor */ struct ext2_group_desc --- a/fs/ext2/super.c +++ b/fs/ext2/super.c @@ -668,10 +668,9 @@ static int ext2_setup_super (struct supe es->s_max_mnt_count = cpu_to_le16(EXT2_DFL_MAX_MNT_COUNT); le16_add_cpu(&es->s_mnt_count, 1); if (test_opt (sb, DEBUG)) - ext2_msg(sb, KERN_INFO, "%s, %s, bs=%lu, fs=%lu, gc=%lu, " + ext2_msg(sb, KERN_INFO, "%s, %s, bs=%lu, gc=%lu, " "bpg=%lu, ipg=%lu, mo=%04lx]", EXT2FS_VERSION, EXT2FS_DATE, sb->s_blocksize, - sbi->s_frag_size, sbi->s_groups_count, EXT2_BLOCKS_PER_GROUP(sb), EXT2_INODES_PER_GROUP(sb), @@ -1012,14 +1011,7 @@ static int ext2_fill_super(struct super_ } }
- sbi->s_frag_size = EXT2_MIN_FRAG_SIZE << - le32_to_cpu(es->s_log_frag_size); - if (sbi->s_frag_size == 0) - goto cantfind_ext2; - sbi->s_frags_per_block = sb->s_blocksize / sbi->s_frag_size; - sbi->s_blocks_per_group = le32_to_cpu(es->s_blocks_per_group); - sbi->s_frags_per_group = le32_to_cpu(es->s_frags_per_group); sbi->s_inodes_per_group = le32_to_cpu(es->s_inodes_per_group);
sbi->s_inodes_per_block = sb->s_blocksize / EXT2_INODE_SIZE(sb); @@ -1045,11 +1037,10 @@ static int ext2_fill_super(struct super_ goto failed_mount; }
- if (sb->s_blocksize != sbi->s_frag_size) { + if (es->s_log_frag_size != es->s_log_block_size) { ext2_msg(sb, KERN_ERR, - "error: fragsize %lu != blocksize %lu" - "(not supported yet)", - sbi->s_frag_size, sb->s_blocksize); + "error: fragsize log %u != blocksize log %u", + le32_to_cpu(es->s_log_frag_size), sb->s_blocksize_bits); goto failed_mount; }
@@ -1066,12 +1057,6 @@ static int ext2_fill_super(struct super_ sbi->s_blocks_per_group, sbi->s_inodes_per_group + 3); goto failed_mount; } - if (sbi->s_frags_per_group > sb->s_blocksize * 8) { - ext2_msg(sb, KERN_ERR, - "error: #fragments per group too big: %lu", - sbi->s_frags_per_group); - goto failed_mount; - } if (sbi->s_inodes_per_group < sbi->s_inodes_per_block || sbi->s_inodes_per_group > sb->s_blocksize * 8) { ext2_msg(sb, KERN_ERR,
From: Filipe Manana fdmanana@suse.com
commit d8ccbd21918fd7fa6ce3226cffc22c444228e8ad upstream.
At add_new_free_space() we have these BUG_ON()'s that are there to deal with any failure to add free space to the in-memory free space cache. Such failures are mostly -ENOMEM and should be very rare. However, there's no need to have these BUG_ON()'s; we can just return any error to the caller, and all callers and their upper call chain are already dealing with errors.
So just make add_new_free_space() return any errors, while removing the BUG_ON()'s, and report the total amount of added free space through an optional u64 pointer argument.
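The resulting calling convention, sketched with illustrative names rather than the btrfs API (int error return plus an optional out pointer, so callers that don't need the size can pass NULL):

#include <errno.h>
#include <stdio.h>

static int add_range(unsigned long start, unsigned long end,
		     unsigned long *total_added_ret)
{
	if (total_added_ret)
		*total_added_ret = 0;

	if (end < start)
		return -EINVAL;		/* propagated to the caller, no BUG_ON() */

	if (total_added_ret)
		*total_added_ret += end - start;
	return 0;
}

int main(void)
{
	unsigned long added;
	int ret;

	ret = add_range(4096, 12288, &added);
	printf("ret=%d added=%lu\n", ret, added);	/* ret=0 added=8192 */

	ret = add_range(8192, 4096, NULL);	/* caller not interested in the size */
	printf("ret=%d\n", ret);		/* ret=-22 (-EINVAL) */
	return 0;
}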
Reported-by: syzbot+3ba856e07b7127889d8c@syzkaller.appspotmail.com Link: https://lore.kernel.org/linux-btrfs/000000000000e9cb8305ff4e8327@google.com/ Signed-off-by: Filipe Manana fdmanana@suse.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/block-group.c | 51 ++++++++++++++++++++++++++++++--------------- fs/btrfs/block-group.h | 4 +-- fs/btrfs/free-space-tree.c | 24 +++++++++++++++------ 3 files changed, 53 insertions(+), 26 deletions(-)
--- a/fs/btrfs/block-group.c +++ b/fs/btrfs/block-group.c @@ -499,12 +499,16 @@ static void fragment_free_space(struct b * used yet since their free space will be released as soon as the transaction * commits. */ -u64 add_new_free_space(struct btrfs_block_group *block_group, u64 start, u64 end) +int add_new_free_space(struct btrfs_block_group *block_group, u64 start, u64 end, + u64 *total_added_ret) { struct btrfs_fs_info *info = block_group->fs_info; - u64 extent_start, extent_end, size, total_added = 0; + u64 extent_start, extent_end, size; int ret;
+ if (total_added_ret) + *total_added_ret = 0; + while (start < end) { ret = find_first_extent_bit(&info->excluded_extents, start, &extent_start, &extent_end, @@ -517,10 +521,12 @@ u64 add_new_free_space(struct btrfs_bloc start = extent_end + 1; } else if (extent_start > start && extent_start < end) { size = extent_start - start; - total_added += size; ret = btrfs_add_free_space_async_trimmed(block_group, start, size); - BUG_ON(ret); /* -ENOMEM or logic error */ + if (ret) + return ret; + if (total_added_ret) + *total_added_ret += size; start = extent_end + 1; } else { break; @@ -529,13 +535,15 @@ u64 add_new_free_space(struct btrfs_bloc
if (start < end) { size = end - start; - total_added += size; ret = btrfs_add_free_space_async_trimmed(block_group, start, size); - BUG_ON(ret); /* -ENOMEM or logic error */ + if (ret) + return ret; + if (total_added_ret) + *total_added_ret += size; }
- return total_added; + return 0; }
/* @@ -779,8 +787,13 @@ next:
if (key.type == BTRFS_EXTENT_ITEM_KEY || key.type == BTRFS_METADATA_ITEM_KEY) { - total_found += add_new_free_space(block_group, last, - key.objectid); + u64 space_added; + + ret = add_new_free_space(block_group, last, key.objectid, + &space_added); + if (ret) + goto out; + total_found += space_added; if (key.type == BTRFS_METADATA_ITEM_KEY) last = key.objectid + fs_info->nodesize; @@ -795,11 +808,10 @@ next: } path->slots[0]++; } - ret = 0; - - total_found += add_new_free_space(block_group, last, - block_group->start + block_group->length);
+ ret = add_new_free_space(block_group, last, + block_group->start + block_group->length, + NULL); out: btrfs_free_path(path); return ret; @@ -2290,9 +2302,11 @@ static int read_one_block_group(struct b btrfs_free_excluded_extents(cache); } else if (cache->used == 0) { cache->cached = BTRFS_CACHE_FINISHED; - add_new_free_space(cache, cache->start, - cache->start + cache->length); + ret = add_new_free_space(cache, cache->start, + cache->start + cache->length, NULL); btrfs_free_excluded_extents(cache); + if (ret) + goto error; }
ret = btrfs_add_block_group_cache(info, cache); @@ -2728,9 +2742,12 @@ struct btrfs_block_group *btrfs_make_blo return ERR_PTR(ret); }
- add_new_free_space(cache, chunk_offset, chunk_offset + size); - + ret = add_new_free_space(cache, chunk_offset, chunk_offset + size, NULL); btrfs_free_excluded_extents(cache); + if (ret) { + btrfs_put_block_group(cache); + return ERR_PTR(ret); + }
/* * Ensure the corresponding space_info object is created and --- a/fs/btrfs/block-group.h +++ b/fs/btrfs/block-group.h @@ -277,8 +277,8 @@ int btrfs_cache_block_group(struct btrfs void btrfs_put_caching_control(struct btrfs_caching_control *ctl); struct btrfs_caching_control *btrfs_get_caching_control( struct btrfs_block_group *cache); -u64 add_new_free_space(struct btrfs_block_group *block_group, - u64 start, u64 end); +int add_new_free_space(struct btrfs_block_group *block_group, + u64 start, u64 end, u64 *total_added_ret); struct btrfs_trans_handle *btrfs_start_trans_remove_block_group( struct btrfs_fs_info *fs_info, const u64 chunk_offset); --- a/fs/btrfs/free-space-tree.c +++ b/fs/btrfs/free-space-tree.c @@ -1515,9 +1515,13 @@ static int load_free_space_bitmaps(struc if (prev_bit == 0 && bit == 1) { extent_start = offset; } else if (prev_bit == 1 && bit == 0) { - total_found += add_new_free_space(block_group, - extent_start, - offset); + u64 space_added; + + ret = add_new_free_space(block_group, extent_start, + offset, &space_added); + if (ret) + goto out; + total_found += space_added; if (total_found > CACHING_CTL_WAKE_UP) { total_found = 0; wake_up(&caching_ctl->wait); @@ -1529,8 +1533,9 @@ static int load_free_space_bitmaps(struc } } if (prev_bit == 1) { - total_found += add_new_free_space(block_group, extent_start, - end); + ret = add_new_free_space(block_group, extent_start, end, NULL); + if (ret) + goto out; extent_count++; }
@@ -1569,6 +1574,8 @@ static int load_free_space_extents(struc end = block_group->start + block_group->length;
while (1) { + u64 space_added; + ret = btrfs_next_item(root, path); if (ret < 0) goto out; @@ -1583,8 +1590,11 @@ static int load_free_space_extents(struc ASSERT(key.type == BTRFS_FREE_SPACE_EXTENT_KEY); ASSERT(key.objectid < end && key.objectid + key.offset <= end);
- total_found += add_new_free_space(block_group, key.objectid, - key.objectid + key.offset); + ret = add_new_free_space(block_group, key.objectid, + key.objectid + key.offset, &space_added); + if (ret) + goto out; + total_found += space_added; if (total_found > CACHING_CTL_WAKE_UP) { total_found = 0; wake_up(&caching_ctl->wait);
From: Chao Yu chao@kernel.org
commit a6ec83786ab9f13f25fb18166dee908845713a95 upstream.
syzbot reports below bug:
BUG: KASAN: slab-use-after-free in f2fs_truncate_data_blocks_range+0x122a/0x14c0 fs/f2fs/file.c:574 Read of size 4 at addr ffff88802a25c000 by task syz-executor148/5000
CPU: 1 PID: 5000 Comm: syz-executor148 Not tainted 6.4.0-rc7-syzkaller-00041-ge660abd551f1 #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/27/2023 Call Trace: <TASK> __dump_stack lib/dump_stack.c:88 [inline] dump_stack_lvl+0xd9/0x150 lib/dump_stack.c:106 print_address_description.constprop.0+0x2c/0x3c0 mm/kasan/report.c:351 print_report mm/kasan/report.c:462 [inline] kasan_report+0x11c/0x130 mm/kasan/report.c:572 f2fs_truncate_data_blocks_range+0x122a/0x14c0 fs/f2fs/file.c:574 truncate_dnode+0x229/0x2e0 fs/f2fs/node.c:944 f2fs_truncate_inode_blocks+0x64b/0xde0 fs/f2fs/node.c:1154 f2fs_do_truncate_blocks+0x4ac/0xf30 fs/f2fs/file.c:721 f2fs_truncate_blocks+0x7b/0x300 fs/f2fs/file.c:749 f2fs_truncate.part.0+0x4a5/0x630 fs/f2fs/file.c:799 f2fs_truncate include/linux/fs.h:825 [inline] f2fs_setattr+0x1738/0x2090 fs/f2fs/file.c:1006 notify_change+0xb2c/0x1180 fs/attr.c:483 do_truncate+0x143/0x200 fs/open.c:66 handle_truncate fs/namei.c:3295 [inline] do_open fs/namei.c:3640 [inline] path_openat+0x2083/0x2750 fs/namei.c:3791 do_filp_open+0x1ba/0x410 fs/namei.c:3818 do_sys_openat2+0x16d/0x4c0 fs/open.c:1356 do_sys_open fs/open.c:1372 [inline] __do_sys_creat fs/open.c:1448 [inline] __se_sys_creat fs/open.c:1442 [inline] __x64_sys_creat+0xcd/0x120 fs/open.c:1442 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x63/0xcd
The root cause is that inodeA references inodeB via inodeB's ino. Once inodeA is truncated, it calls truncate_dnode() to truncate data blocks in inodeB's node page; it traverses mapping data from node->i.i_addr[0] to node->i.i_addr[ADDRS_PER_BLOCK() - 1], resulting in an out-of-bounds access.
This patch adds a sanity check on the dnode page in truncate_dnode() to avoid triggering such an issue. Once the inconsistency is encountered, the newly introduced ERROR_INVALID_NODE_REFERENCE error is recorded in the superblock, so that a later fsck can detect the issue and try to repair it.
Also, as a cleanup, it removes f2fs_truncate_data_blocks(), since the function has only one caller, and uses f2fs_truncate_data_blocks_range() instead.
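Purely as an illustration of the kind of ownership check being added, a simplified user-space sketch with made-up structures (the real code also flags the superblock for fsck and records the error):

#include <errno.h>
#include <stdio.h>

#define EFSCORRUPTED EUCLEAN    /* kernel convention for on-disk corruption */

struct node_page { unsigned long owner_ino; int is_inode; };
struct inode     { unsigned long ino; };

/* Refuse to truncate through a node page that is an inode block or that
 * belongs to a different inode than the one being truncated. */
static int check_direct_node(const struct inode *inode, const struct node_page *page)
{
        if (page->is_inode || page->owner_ino != inode->ino)
                return -EFSCORRUPTED;
        return 0;
}

int main(void)
{
        struct inode ino = { .ino = 42 };
        struct node_page good = { .owner_ino = 42, .is_inode = 0 };
        struct node_page bad  = { .owner_ino = 7,  .is_inode = 0 };

        printf("good: %d, bad: %d\n",
               check_direct_node(&ino, &good), check_direct_node(&ino, &bad));
        return 0;
}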
Reported-and-tested-by: syzbot+12cb4425b22169b52036@syzkaller.appspotmail.com Closes: https://lore.kernel.org/linux-f2fs-devel/000000000000f3038a05fef867f8@google... Signed-off-by: Chao Yu chao@kernel.org Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/f2fs/f2fs.h | 1 - fs/f2fs/file.c | 5 ----- fs/f2fs/node.c | 14 ++++++++++++-- include/linux/f2fs_fs.h | 1 + 4 files changed, 13 insertions(+), 8 deletions(-)
--- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -3445,7 +3445,6 @@ static inline bool __is_valid_data_blkad * file.c */ int f2fs_sync_file(struct file *file, loff_t start, loff_t end, int datasync); -void f2fs_truncate_data_blocks(struct dnode_of_data *dn); int f2fs_do_truncate_blocks(struct inode *inode, u64 from, bool lock); int f2fs_truncate_blocks(struct inode *inode, u64 from, bool lock); int f2fs_truncate(struct inode *inode); --- a/fs/f2fs/file.c +++ b/fs/f2fs/file.c @@ -627,11 +627,6 @@ void f2fs_truncate_data_blocks_range(str dn->ofs_in_node, nr_free); }
-void f2fs_truncate_data_blocks(struct dnode_of_data *dn) -{ - f2fs_truncate_data_blocks_range(dn, ADDRS_PER_BLOCK(dn->inode)); -} - static int truncate_partial_data_page(struct inode *inode, u64 from, bool cache_only) { --- a/fs/f2fs/node.c +++ b/fs/f2fs/node.c @@ -925,6 +925,7 @@ static int truncate_node(struct dnode_of
static int truncate_dnode(struct dnode_of_data *dn) { + struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode); struct page *page; int err;
@@ -932,16 +933,25 @@ static int truncate_dnode(struct dnode_o return 1;
/* get direct node */ - page = f2fs_get_node_page(F2FS_I_SB(dn->inode), dn->nid); + page = f2fs_get_node_page(sbi, dn->nid); if (PTR_ERR(page) == -ENOENT) return 1; else if (IS_ERR(page)) return PTR_ERR(page);
+ if (IS_INODE(page) || ino_of_node(page) != dn->inode->i_ino) { + f2fs_err(sbi, "incorrect node reference, ino: %lu, nid: %u, ino_of_node: %u", + dn->inode->i_ino, dn->nid, ino_of_node(page)); + set_sbi_flag(sbi, SBI_NEED_FSCK); + f2fs_handle_error(sbi, ERROR_INVALID_NODE_REFERENCE); + f2fs_put_page(page, 1); + return -EFSCORRUPTED; + } + /* Make dnode_of_data for parameter */ dn->node_page = page; dn->ofs_in_node = 0; - f2fs_truncate_data_blocks(dn); + f2fs_truncate_data_blocks_range(dn, ADDRS_PER_BLOCK(dn->inode)); err = truncate_node(dn); if (err) { f2fs_put_page(page, 1); --- a/include/linux/f2fs_fs.h +++ b/include/linux/f2fs_fs.h @@ -103,6 +103,7 @@ enum f2fs_error { ERROR_INCONSISTENT_SIT, ERROR_CORRUPTED_VERITY_XATTR, ERROR_CORRUPTED_XATTR, + ERROR_INVALID_NODE_REFERENCE, ERROR_MAX, };
From: Pavel Begunkov asml.silence@gmail.com
commit 5498bf28d8f2bd63a46ad40f4427518615fb793f upstream.
It's racy to read ->cached_cq_tail without taking proper measures (usually grabbing ->completion_lock), as timeout requests with CQE offsets do; however, they have never had well-defined semantics for when they start counting. Annotate the racy reads with data_race().
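For readers unfamiliar with the annotation, a compilable sketch of the pattern follows, with data_race() reduced to a pass-through stand-in macro; in the kernel the real macro additionally tells KCSAN that the racy access is intentional:

#include <stdio.h>

/* User-space stand-in: in the kernel, data_race(expr) marks the access as an
 * intentional, tolerated data race so KCSAN does not report it. */
#define data_race(expr) (expr)

struct ring_ctx {
        unsigned int cached_cq_tail;    /* written under a lock elsewhere */
        unsigned int cq_timeouts;
};

static unsigned int timeout_target_seq(const struct ring_ctx *ctx, unsigned int off)
{
        /* The read is racy, but it only affects where the timeout starts
         * counting, which has never been precisely defined, so it is
         * annotated rather than locked. */
        unsigned int tail = data_race(ctx->cached_cq_tail) - ctx->cq_timeouts;

        return tail + off;
}

int main(void)
{
        struct ring_ctx ctx = { .cached_cq_tail = 100, .cq_timeouts = 3 };

        printf("target_seq = %u\n", timeout_target_seq(&ctx, 8));
        return 0;
}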
Reported-by: syzbot+cb265db2f3f3468ef436@syzkaller.appspotmail.com Signed-off-by: Pavel Begunkov asml.silence@gmail.com Link: https://lore.kernel.org/r/4de3685e185832a92a572df2be2c735d2e21a83d.168450605... Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- io_uring/timeout.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/io_uring/timeout.c +++ b/io_uring/timeout.c @@ -594,7 +594,7 @@ int io_timeout(struct io_kiocb *req, uns goto add; }
- tail = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts); + tail = data_race(ctx->cached_cq_tail) - atomic_read(&ctx->cq_timeouts); timeout->target_seq = tail + off;
/* Update the last seq here in case io_flush_timeouts() hasn't.
From: Roger Quadros rogerq@kernel.org
[ Upstream commit d8403b9eeee66d5dd81ecb9445800b108c267ce3 ]
Once the ECC word endianness is converted to BE32, we force cast it to u32 so we can use elm_write_reg() which in turn uses writel().
Fixes below sparse warnings:
drivers/mtd/nand/raw/omap_elm.c:180:37: sparse: expected unsigned int [usertype] val drivers/mtd/nand/raw/omap_elm.c:180:37: sparse: got restricted __be32 [usertype] drivers/mtd/nand/raw/omap_elm.c:185:37: sparse: expected unsigned int [usertype] val drivers/mtd/nand/raw/omap_elm.c:185:37: sparse: got restricted __be32 [usertype] drivers/mtd/nand/raw/omap_elm.c:190:37: sparse: expected unsigned int [usertype] val drivers/mtd/nand/raw/omap_elm.c:190:37: sparse: got restricted __be32 [usertype]
drivers/mtd/nand/raw/omap_elm.c:200:40: sparse: sparse: restricted __be32 degrades to integer
drivers/mtd/nand/raw/omap_elm.c:206:39: sparse: sparse: restricted __be32 degrades to integer drivers/mtd/nand/raw/omap_elm.c:210:37: sparse: expected unsigned int [assigned] [usertype] val drivers/mtd/nand/raw/omap_elm.c:210:37: sparse: got restricted __be32 [usertype] drivers/mtd/nand/raw/omap_elm.c:213:37: sparse: expected unsigned int [assigned] [usertype] val drivers/mtd/nand/raw/omap_elm.c:213:37: sparse: got restricted __be32 [usertype] drivers/mtd/nand/raw/omap_elm.c:216:37: sparse: expected unsigned int [assigned] [usertype] val drivers/mtd/nand/raw/omap_elm.c:216:37: sparse: got restricted __be32 [usertype] drivers/mtd/nand/raw/omap_elm.c:219:37: sparse: expected unsigned int [assigned] [usertype] val drivers/mtd/nand/raw/omap_elm.c:219:37: sparse: got restricted __be32 [usertype] drivers/mtd/nand/raw/omap_elm.c:222:37: sparse: expected unsigned int [assigned] [usertype] val drivers/mtd/nand/raw/omap_elm.c:222:37: sparse: got restricted __be32 [usertype] drivers/mtd/nand/raw/omap_elm.c:225:37: sparse: expected unsigned int [assigned] [usertype] val drivers/mtd/nand/raw/omap_elm.c:225:37: sparse: got restricted __be32 [usertype] drivers/mtd/nand/raw/omap_elm.c:228:39: sparse: sparse: restricted __be32 degrades to integer
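In user-space terms, the cast pattern looks roughly like the sketch below, with htobe32() standing in for cpu_to_be32() and printf() standing in for the writel()-based register helper; this is only an analogue of the change, not the driver code.

#include <endian.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the writel()-based elm_write_reg() helper, which takes a
 * plain native integer rather than a __be32. */
static void write_reg(uint32_t val)
{
        printf("reg <= 0x%08x\n", val);
}

int main(void)
{
        uint8_t ecc[4] = { 0x11, 0x22, 0x33, 0x44 };
        uint32_t raw, val;

        /* Load 4 ECC bytes as a native u32, convert to big-endian byte order,
         * then carry the result in a plain u32.  The explicit cast mirrors
         * the (__force u32) cast that keeps sparse from complaining about
         * mixing __be32 with plain integer types. */
        memcpy(&raw, ecc, sizeof(raw));
        val = (uint32_t)htobe32(raw);

        write_reg(val);
        return 0;
}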
Fixes: bf22433575ef ("mtd: devices: elm: Add support for ELM error correction") Reported-by: kernel test robot lkp@intel.com Closes: https://lore.kernel.org/oe-kbuild-all/202306212211.WDXokuWh-lkp@intel.com/ Signed-off-by: Roger Quadros rogerq@kernel.org Signed-off-by: Miquel Raynal miquel.raynal@bootlin.com Link: https://lore.kernel.org/linux-mtd/20230624184021.7740-1-rogerq@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mtd/nand/raw/omap_elm.c | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/drivers/mtd/nand/raw/omap_elm.c b/drivers/mtd/nand/raw/omap_elm.c index 6e1eac6644a66..4a97d4a76454a 100644 --- a/drivers/mtd/nand/raw/omap_elm.c +++ b/drivers/mtd/nand/raw/omap_elm.c @@ -177,17 +177,17 @@ static void elm_load_syndrome(struct elm_info *info, switch (info->bch_type) { case BCH8_ECC: /* syndrome fragment 0 = ecc[9-12B] */ - val = cpu_to_be32(*(u32 *) &ecc[9]); + val = (__force u32)cpu_to_be32(*(u32 *)&ecc[9]); elm_write_reg(info, offset, val);
/* syndrome fragment 1 = ecc[5-8B] */ offset += 4; - val = cpu_to_be32(*(u32 *) &ecc[5]); + val = (__force u32)cpu_to_be32(*(u32 *)&ecc[5]); elm_write_reg(info, offset, val);
/* syndrome fragment 2 = ecc[1-4B] */ offset += 4; - val = cpu_to_be32(*(u32 *) &ecc[1]); + val = (__force u32)cpu_to_be32(*(u32 *)&ecc[1]); elm_write_reg(info, offset, val);
/* syndrome fragment 3 = ecc[0B] */ @@ -197,35 +197,35 @@ static void elm_load_syndrome(struct elm_info *info, break; case BCH4_ECC: /* syndrome fragment 0 = ecc[20-52b] bits */ - val = (cpu_to_be32(*(u32 *) &ecc[3]) >> 4) | + val = ((__force u32)cpu_to_be32(*(u32 *)&ecc[3]) >> 4) | ((ecc[2] & 0xf) << 28); elm_write_reg(info, offset, val);
/* syndrome fragment 1 = ecc[0-20b] bits */ offset += 4; - val = cpu_to_be32(*(u32 *) &ecc[0]) >> 12; + val = (__force u32)cpu_to_be32(*(u32 *)&ecc[0]) >> 12; elm_write_reg(info, offset, val); break; case BCH16_ECC: - val = cpu_to_be32(*(u32 *) &ecc[22]); + val = (__force u32)cpu_to_be32(*(u32 *)&ecc[22]); elm_write_reg(info, offset, val); offset += 4; - val = cpu_to_be32(*(u32 *) &ecc[18]); + val = (__force u32)cpu_to_be32(*(u32 *)&ecc[18]); elm_write_reg(info, offset, val); offset += 4; - val = cpu_to_be32(*(u32 *) &ecc[14]); + val = (__force u32)cpu_to_be32(*(u32 *)&ecc[14]); elm_write_reg(info, offset, val); offset += 4; - val = cpu_to_be32(*(u32 *) &ecc[10]); + val = (__force u32)cpu_to_be32(*(u32 *)&ecc[10]); elm_write_reg(info, offset, val); offset += 4; - val = cpu_to_be32(*(u32 *) &ecc[6]); + val = (__force u32)cpu_to_be32(*(u32 *)&ecc[6]); elm_write_reg(info, offset, val); offset += 4; - val = cpu_to_be32(*(u32 *) &ecc[2]); + val = (__force u32)cpu_to_be32(*(u32 *)&ecc[2]); elm_write_reg(info, offset, val); offset += 4; - val = cpu_to_be32(*(u32 *) &ecc[0]) >> 16; + val = (__force u32)cpu_to_be32(*(u32 *)&ecc[0]) >> 16; elm_write_reg(info, offset, val); break; default:
From: Johan Jonker jbx6244@gmail.com
[ Upstream commit d0ca3b92b7a6f42841ea9da8492aaf649db79780 ]
Rockchip boot blocks are written per 4 x 512 byte sectors per page. Each page with boot blocks must have a page address (PA) pointer in OOB to the next page.
The currently advertised free OOB area starts at offset 6, as if 4 PA bytes were located right after the BBM. This is wrong, as the PA bytes are located right before the ECC bytes.
Fix the layout by allowing access to all bytes between the BBM and the PA bytes instead of reserving 4 bytes right after the BBM.
This change breaks existing jffs2 users.
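For reference, the resulting free-area computation in sketch form (plain C; NFC_SYS_DATA_SIZE and the 32-byte metadata size are just example values):

#include <stdio.h>

#define NFC_SYS_DATA_SIZE 4     /* bytes reserved per sector for the NFC */

struct oob_region { int offset; int length; };

/* Free OOB bytes now run from right after the 2-byte BBM up to the 4 PA
 * bytes that sit just before the ECC bytes. */
static void oob_free_region(int metadata_size, struct oob_region *r)
{
        r->offset = 2;                                          /* skip the BBM only */
        r->length = metadata_size - NFC_SYS_DATA_SIZE - 2;      /* stop before PA bytes */
}

int main(void)
{
        struct oob_region r;

        oob_free_region(32, &r);        /* 32 bytes of metadata is just an example */
        printf("free OOB: offset=%d length=%d\n", r.offset, r.length);
        return 0;
}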
Fixes: 058e0e847d54 ("mtd: rawnand: rockchip: NFC driver for RK3308, RK2928 and others") Signed-off-by: Johan Jonker jbx6244@gmail.com Signed-off-by: Miquel Raynal miquel.raynal@bootlin.com Link: https://lore.kernel.org/linux-mtd/d202f12d-188c-20e8-f2c2-9cc874ad4d22@gmail... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mtd/nand/raw/rockchip-nand-controller.c | 11 ++++------- 1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/drivers/mtd/nand/raw/rockchip-nand-controller.c b/drivers/mtd/nand/raw/rockchip-nand-controller.c index 2312e27362cbe..37fc07ba57aab 100644 --- a/drivers/mtd/nand/raw/rockchip-nand-controller.c +++ b/drivers/mtd/nand/raw/rockchip-nand-controller.c @@ -562,9 +562,10 @@ static int rk_nfc_write_page_raw(struct nand_chip *chip, const u8 *buf, * BBM OOB1 OOB2 OOB3 |......| PA0 PA1 PA2 PA3 * * The rk_nfc_ooblayout_free() function already has reserved - * these 4 bytes with: + * these 4 bytes together with 2 bytes for BBM + * by reducing it's length: * - * oob_region->offset = NFC_SYS_DATA_SIZE + 2; + * oob_region->length = rknand->metadata_size - NFC_SYS_DATA_SIZE - 2; */ if (!i) memcpy(rk_nfc_oob_ptr(chip, i), @@ -933,12 +934,8 @@ static int rk_nfc_ooblayout_free(struct mtd_info *mtd, int section, if (section) return -ERANGE;
- /* - * The beginning of the OOB area stores the reserved data for the NFC, - * the size of the reserved data is NFC_SYS_DATA_SIZE bytes. - */ oob_region->length = rknand->metadata_size - NFC_SYS_DATA_SIZE - 2; - oob_region->offset = NFC_SYS_DATA_SIZE + 2; + oob_region->offset = 2;
return 0; }
From: Johan Jonker jbx6244@gmail.com
[ Upstream commit ea690ad78dd611e3906df5b948a516000b05c1cb ]
Currently, read/write_page_hwecc() and read/write_page_raw() are not aligned: there is a mismatch in the OOB bytes which are not read/written at the same offset in both cases (raw vs. hwecc).
This is a real problem when relying on the presence of the Page Addresses (PA) when using the NAND chip as a boot device, as the BootROM expects additional data in the OOB area at specific locations.
Rockchip boot blocks are written per 4 x 512 byte sectors per page. Each page with boot blocks must have a page address (PA) pointer in OOB to the next page. Pages are written in a pattern depending on the NAND chip ID.
Generate boot block page address and pattern for hwecc in user space and copy PA data to/from the already reserved last 4 bytes before ECC in the chip->oob_poi data layout.
Align the different helpers. This change breaks existing jffs2 users.
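A sketch of the index mapping this describes (not the driver code; NFC_SYS_DATA_SIZE is assumed to be 4 here): controller OOB chunk 0 is fed from the trailing PA bytes of chip->oob_poi, and every other chunk i comes from chunk i - 1.

#include <stdio.h>

#define NFC_SYS_DATA_SIZE 4

/* Pick the oob_poi offset feeding controller OOB chunk 'i' out of 'steps'
 * ECC steps: chunk 0 maps to the trailing PA bytes, chunk i to chunk i - 1. */
static int oob_src_offset(int i, int steps)
{
        if (i == 0)
                return (steps - 1) * NFC_SYS_DATA_SIZE;
        return (i - 1) * NFC_SYS_DATA_SIZE;
}

int main(void)
{
        int steps = 4, i;

        for (i = 0; i < steps; i++)
                printf("chunk %d <- oob_poi + %d\n", i, oob_src_offset(i, steps));
        return 0;
}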
Fixes: 058e0e847d54 ("mtd: rawnand: rockchip: NFC driver for RK3308, RK2928 and others") Signed-off-by: Johan Jonker jbx6244@gmail.com Signed-off-by: Miquel Raynal miquel.raynal@bootlin.com Link: https://lore.kernel.org/linux-mtd/5e782c08-862b-51ae-47ff-3299940928ca@gmail... Signed-off-by: Sasha Levin sashal@kernel.org --- .../mtd/nand/raw/rockchip-nand-controller.c | 34 ++++++++++++------- 1 file changed, 21 insertions(+), 13 deletions(-)
diff --git a/drivers/mtd/nand/raw/rockchip-nand-controller.c b/drivers/mtd/nand/raw/rockchip-nand-controller.c index 37fc07ba57aab..5a04680342c32 100644 --- a/drivers/mtd/nand/raw/rockchip-nand-controller.c +++ b/drivers/mtd/nand/raw/rockchip-nand-controller.c @@ -598,7 +598,7 @@ static int rk_nfc_write_page_hwecc(struct nand_chip *chip, const u8 *buf, int pages_per_blk = mtd->erasesize / mtd->writesize; int ret = 0, i, boot_rom_mode = 0; dma_addr_t dma_data, dma_oob; - u32 reg; + u32 tmp; u8 *oob;
nand_prog_page_begin_op(chip, page, 0, NULL, 0); @@ -625,6 +625,13 @@ static int rk_nfc_write_page_hwecc(struct nand_chip *chip, const u8 *buf, * * 0xFF 0xFF 0xFF 0xFF | BBM OOB1 OOB2 OOB3 | ... * + * The code here just swaps the first 4 bytes with the last + * 4 bytes without losing any data. + * + * The chip->oob_poi data layout: + * + * BBM OOB1 OOB2 OOB3 |......| PA0 PA1 PA2 PA3 + * * Configure the ECC algorithm supported by the boot ROM. */ if ((page < (pages_per_blk * rknand->boot_blks)) && @@ -635,21 +642,17 @@ static int rk_nfc_write_page_hwecc(struct nand_chip *chip, const u8 *buf, }
for (i = 0; i < ecc->steps; i++) { - if (!i) { - reg = 0xFFFFFFFF; - } else { + if (!i) + oob = chip->oob_poi + (ecc->steps - 1) * NFC_SYS_DATA_SIZE; + else oob = chip->oob_poi + (i - 1) * NFC_SYS_DATA_SIZE; - reg = oob[0] | oob[1] << 8 | oob[2] << 16 | - oob[3] << 24; - }
- if (!i && boot_rom_mode) - reg = (page & (pages_per_blk - 1)) * 4; + tmp = oob[0] | oob[1] << 8 | oob[2] << 16 | oob[3] << 24;
if (nfc->cfg->type == NFC_V9) - nfc->oob_buf[i] = reg; + nfc->oob_buf[i] = tmp; else - nfc->oob_buf[i * (oob_step / 4)] = reg; + nfc->oob_buf[i * (oob_step / 4)] = tmp; }
dma_data = dma_map_single(nfc->dev, (void *)nfc->page_buf, @@ -812,12 +815,17 @@ static int rk_nfc_read_page_hwecc(struct nand_chip *chip, u8 *buf, int oob_on, goto timeout_err; }
- for (i = 1; i < ecc->steps; i++) { - oob = chip->oob_poi + (i - 1) * NFC_SYS_DATA_SIZE; + for (i = 0; i < ecc->steps; i++) { + if (!i) + oob = chip->oob_poi + (ecc->steps - 1) * NFC_SYS_DATA_SIZE; + else + oob = chip->oob_poi + (i - 1) * NFC_SYS_DATA_SIZE; + if (nfc->cfg->type == NFC_V9) tmp = nfc->oob_buf[i]; else tmp = nfc->oob_buf[i * (oob_step / 4)]; + *oob++ = (u8)tmp; *oob++ = (u8)(tmp >> 8); *oob++ = (u8)(tmp >> 16);
From: Chen-Yu Tsai wenst@chromium.org
[ Upstream commit 1eb8d61ac5c9c7ec56bb96d433532807509b9288 ]
This reverts commit 860690a93ef23b567f781c1b631623e27190f101.
On the MT8183, the SSPM related clocks were removed claiming a lack of usage. This however caused issues when the driver was converted to the new simple-probe mechanism. This mechanism allocates enough space for all the clocks defined in the clock driver, not for the highest index in the DT binding. This leads to out-of-bounds writes if there are holes in the DT binding or the driver (due to deprecated or unimplemented clocks). These errors can go unnoticed and cause memory corruption, leading to crashes in unrelated areas, or to nothing at all. KASAN will detect them.
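As a toy illustration of that failure mode (not the clk framework itself, and the IDs are invented): storage sized by the number of implemented clocks but indexed by DT-binding ID overflows as soon as the binding has a hole.

#include <stdio.h>
#include <stdlib.h>

struct clk_slot { int populated; };

int main(void)
{
        /* Driver implements 3 clocks, but the DT binding defines IDs 0, 1
         * and 4 (IDs 2 and 3 were dropped from the driver). */
        int implemented_ids[] = { 0, 1, 4 };
        int nr_implemented = 3;

        /* Bug: allocate for the number of implemented clocks... */
        struct clk_slot *slots = calloc(nr_implemented, sizeof(*slots));
        if (!slots)
                return 1;

        /* ...but index by binding ID: writing slots[4] would be out of
         * bounds, so we only print here.  Sizing by the highest binding ID,
         * or implementing every ID as the patch does, avoids this. */
        for (int i = 0; i < nr_implemented; i++)
                printf("would write slot %d (array has %d entries)\n",
                       implemented_ids[i], nr_implemented);

        free(slots);
        return 0;
}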
Add the SSPM related clocks back to the MT8183 clock driver to fully implement the DT binding. The SSPM clocks are for the power management co-processor, and should never be turned off. They are marked as such.
Fixes: 3f37ba7cc385 ("clk: mediatek: mt8183: Convert all remaining clocks to common probe") Signed-off-by: Chen-Yu Tsai wenst@chromium.org Link: https://lore.kernel.org/r/20230719074251.1219089-1-wenst@chromium.org Reviewed-by: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com Signed-off-by: Stephen Boyd sboyd@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/clk/mediatek/clk-mt8183.c | 27 +++++++++++++++++++++++++++ 1 file changed, 27 insertions(+)
diff --git a/drivers/clk/mediatek/clk-mt8183.c b/drivers/clk/mediatek/clk-mt8183.c index 2336a1b69c093..3b605c30e8494 100644 --- a/drivers/clk/mediatek/clk-mt8183.c +++ b/drivers/clk/mediatek/clk-mt8183.c @@ -328,6 +328,14 @@ static const char * const atb_parents[] = { "syspll_d5" };
+static const char * const sspm_parents[] = { + "clk26m", + "univpll_d2_d4", + "syspll_d2_d2", + "univpll_d2_d2", + "syspll_d3" +}; + static const char * const dpi0_parents[] = { "clk26m", "tvdpll_d2", @@ -506,6 +514,9 @@ static const struct mtk_mux top_muxes[] = { /* CLK_CFG_6 */ MUX_GATE_CLR_SET_UPD(CLK_TOP_MUX_ATB, "atb_sel", atb_parents, 0xa0, 0xa4, 0xa8, 0, 2, 7, 0x004, 24), + MUX_GATE_CLR_SET_UPD_FLAGS(CLK_TOP_MUX_SSPM, "sspm_sel", + sspm_parents, 0xa0, 0xa4, 0xa8, 8, 3, 15, 0x004, 25, + CLK_IS_CRITICAL | CLK_SET_RATE_PARENT), MUX_GATE_CLR_SET_UPD(CLK_TOP_MUX_DPI0, "dpi0_sel", dpi0_parents, 0xa0, 0xa4, 0xa8, 16, 4, 23, 0x004, 26), MUX_GATE_CLR_SET_UPD(CLK_TOP_MUX_SCAM, "scam_sel", @@ -671,10 +682,18 @@ static const struct mtk_gate_regs infra3_cg_regs = { GATE_MTK(_id, _name, _parent, &infra2_cg_regs, _shift, \ &mtk_clk_gate_ops_setclr)
+#define GATE_INFRA2_FLAGS(_id, _name, _parent, _shift, _flag) \ + GATE_MTK_FLAGS(_id, _name, _parent, &infra2_cg_regs, \ + _shift, &mtk_clk_gate_ops_setclr, _flag) + #define GATE_INFRA3(_id, _name, _parent, _shift) \ GATE_MTK(_id, _name, _parent, &infra3_cg_regs, _shift, \ &mtk_clk_gate_ops_setclr)
+#define GATE_INFRA3_FLAGS(_id, _name, _parent, _shift, _flag) \ + GATE_MTK_FLAGS(_id, _name, _parent, &infra3_cg_regs, \ + _shift, &mtk_clk_gate_ops_setclr, _flag) + static const struct mtk_gate infra_clks[] = { /* INFRA0 */ GATE_INFRA0(CLK_INFRA_PMIC_TMR, "infra_pmic_tmr", "axi_sel", 0), @@ -746,7 +765,11 @@ static const struct mtk_gate infra_clks[] = { GATE_INFRA2(CLK_INFRA_UNIPRO_TICK, "infra_unipro_tick", "fufs_sel", 12), GATE_INFRA2(CLK_INFRA_UFS_MP_SAP_BCLK, "infra_ufs_mp_sap_bck", "fufs_sel", 13), GATE_INFRA2(CLK_INFRA_MD32_BCLK, "infra_md32_bclk", "axi_sel", 14), + /* infra_sspm is main clock in co-processor, should not be closed in Linux. */ + GATE_INFRA2_FLAGS(CLK_INFRA_SSPM, "infra_sspm", "sspm_sel", 15, CLK_IS_CRITICAL), GATE_INFRA2(CLK_INFRA_UNIPRO_MBIST, "infra_unipro_mbist", "axi_sel", 16), + /* infra_sspm_bus_hclk is main clock in co-processor, should not be closed in Linux. */ + GATE_INFRA2_FLAGS(CLK_INFRA_SSPM_BUS_HCLK, "infra_sspm_bus_hclk", "axi_sel", 17, CLK_IS_CRITICAL), GATE_INFRA2(CLK_INFRA_I2C5, "infra_i2c5", "i2c_sel", 18), GATE_INFRA2(CLK_INFRA_I2C5_ARBITER, "infra_i2c5_arbiter", "i2c_sel", 19), GATE_INFRA2(CLK_INFRA_I2C5_IMM, "infra_i2c5_imm", "i2c_sel", 20), @@ -764,6 +787,10 @@ static const struct mtk_gate infra_clks[] = { GATE_INFRA3(CLK_INFRA_MSDC0_SELF, "infra_msdc0_self", "msdc50_0_sel", 0), GATE_INFRA3(CLK_INFRA_MSDC1_SELF, "infra_msdc1_self", "msdc50_0_sel", 1), GATE_INFRA3(CLK_INFRA_MSDC2_SELF, "infra_msdc2_self", "msdc50_0_sel", 2), + /* infra_sspm_26m_self is main clock in co-processor, should not be closed in Linux. */ + GATE_INFRA3_FLAGS(CLK_INFRA_SSPM_26M_SELF, "infra_sspm_26m_self", "f_f26m_ck", 3, CLK_IS_CRITICAL), + /* infra_sspm_32k_self is main clock in co-processor, should not be closed in Linux. */ + GATE_INFRA3_FLAGS(CLK_INFRA_SSPM_32K_SELF, "infra_sspm_32k_self", "f_f26m_ck", 4, CLK_IS_CRITICAL), GATE_INFRA3(CLK_INFRA_UFS_AXI, "infra_ufs_axi", "axi_sel", 5), GATE_INFRA3(CLK_INFRA_I2C6, "infra_i2c6", "i2c_sel", 6), GATE_INFRA3(CLK_INFRA_AP_MSDC0, "infra_ap_msdc0", "msdc50_hclk_sel", 7),
From: Arnd Bergmann arnd@arndb.de
[ Upstream commit 71c8f9cf2623d0db79665f876b95afcdd8214aec ]
gcc gets confused when -ftrivial-auto-var-init=pattern is used on sparse bit fields such as 'struct spi_mem_op', which caused the previous false positive warning about an uninitialized variable:
drivers/mtd/spi-nor/spansion.c: error: 'op' is used uninitialized [-Werror=uninitialized]
In fact, the variable is fully initialized and gcc does not see it being used, so the warning is entirely bogus. The problem appears to be a misoptimization in the initialization of single bit fields when the rest of the bytes are not initialized.
A previous workaround added another initialization, which ended up shutting up the warning in spansion.c, though it apparently still happens in other files as reported by Peter Foley in the gcc bugzilla. The workaround of adding a fake initialization seems particularly bad because it would set values that can never be correct but prevent the compiler from warning about actually missing initializations.
Revert the broken workaround and instead pad the structure to only have bitfields that add up to full bytes, which should avoid this behavior in all drivers.
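The padding idea in a compile-and-run sketch (field names chosen to echo the spi-mem struct, but this is only an illustration):

#include <stdio.h>
#include <stdint.h>

/* Before: one named bit, seven unnamed padding bits in the byte. */
struct op_sparse {
        uint8_t nbytes;
        uint8_t buswidth;
        uint8_t dtr : 1;        /* rest of the byte is compiler padding */
};

/* After: the remaining bits are an explicit, named pad, so pattern-style
 * auto-initialization has no hidden bits left to confuse the compiler. */
struct op_padded {
        uint8_t nbytes;
        uint8_t buswidth;
        uint8_t dtr : 1;
        uint8_t __pad : 7;
};

int main(void)
{
        printf("sparse: %zu bytes, padded: %zu bytes\n",
               sizeof(struct op_sparse), sizeof(struct op_padded));
        return 0;
}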
I also filed a new bug against gcc with what I found, so this can hopefully be addressed in future gcc releases. At the moment, only gcc-12 and gcc-13 are affected.
Cc: Peter Foley pefoley2@pefoley.com Cc: Pedro Falcato pedro.falcato@gmail.com Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110743 Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108402 Link: https://godbolt.org/z/efMMsG1Kx Fixes: 420c4495b5e56 ("mtd: spi-nor: spansion: make sure local struct does not contain garbage") Signed-off-by: Arnd Bergmann arnd@arndb.de Acked-by: Mark Brown broonie@kernel.org Acked-by: Tudor Ambarus tudor.ambarus@linaro.org Signed-off-by: Miquel Raynal miquel.raynal@bootlin.com Link: https://lore.kernel.org/linux-mtd/20230719190045.4007391-1-arnd@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mtd/spi-nor/spansion.c | 4 ++-- include/linux/spi/spi-mem.h | 4 ++++ 2 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/mtd/spi-nor/spansion.c b/drivers/mtd/spi-nor/spansion.c index 36876aa849ede..15f9a80c10b9b 100644 --- a/drivers/mtd/spi-nor/spansion.c +++ b/drivers/mtd/spi-nor/spansion.c @@ -361,7 +361,7 @@ static int cypress_nor_determine_addr_mode_by_sr1(struct spi_nor *nor, */ static int cypress_nor_set_addr_mode_nbytes(struct spi_nor *nor) { - struct spi_mem_op op = {}; + struct spi_mem_op op; u8 addr_mode; int ret;
@@ -492,7 +492,7 @@ s25fs256t_post_bfpt_fixup(struct spi_nor *nor, const struct sfdp_parameter_header *bfpt_header, const struct sfdp_bfpt *bfpt) { - struct spi_mem_op op = {}; + struct spi_mem_op op; int ret;
ret = cypress_nor_set_addr_mode_nbytes(nor); diff --git a/include/linux/spi/spi-mem.h b/include/linux/spi/spi-mem.h index 8e984d75f5b6c..6b0a7dc48a4b7 100644 --- a/include/linux/spi/spi-mem.h +++ b/include/linux/spi/spi-mem.h @@ -101,6 +101,7 @@ struct spi_mem_op { u8 nbytes; u8 buswidth; u8 dtr : 1; + u8 __pad : 7; u16 opcode; } cmd;
@@ -108,6 +109,7 @@ struct spi_mem_op { u8 nbytes; u8 buswidth; u8 dtr : 1; + u8 __pad : 7; u64 val; } addr;
@@ -115,12 +117,14 @@ struct spi_mem_op { u8 nbytes; u8 buswidth; u8 dtr : 1; + u8 __pad : 7; } dummy;
struct { u8 buswidth; u8 dtr : 1; u8 ecc : 1; + u8 __pad : 6; enum spi_mem_data_dir dir; unsigned int nbytes; union {
From: Christophe JAILLET christophe.jaillet@wanadoo.fr
[ Upstream commit c6abce60338aa2080973cd95be0aedad528bb41f ]
'op->cs' is copied into 'fun->mchip_number', which is used to access the 'mchip_offsets' and 'rnb_gpio' arrays. These arrays have NAND_MAX_CHIPS elements, so the index must be below this limit.
Fix the sanity check so that the NAND_MAX_CHIPS value itself is rejected; otherwise it would lead to out-of-bounds accesses.
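The off-by-one in miniature, as a generic sketch rather than the driver code:

#include <errno.h>
#include <stdio.h>

#define NAND_MAX_CHIPS 8

static int mchip_offsets[NAND_MAX_CHIPS];

static int select_chip(unsigned int cs)
{
        /* '>' would let cs == NAND_MAX_CHIPS through and index one past the
         * end of mchip_offsets[]; '>=' rejects it. */
        if (cs >= NAND_MAX_CHIPS)
                return -EINVAL;

        return mchip_offsets[cs];
}

int main(void)
{
        printf("cs=7 -> %d, cs=8 -> %d\n", select_chip(7), select_chip(8));
        return 0;
}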
Fixes: 54309d657767 ("mtd: rawnand: fsl_upm: Implement exec_op()") Signed-off-by: Christophe JAILLET christophe.jaillet@wanadoo.fr Reviewed-by: Dan Carpenter dan.carpenter@linaro.org Signed-off-by: Miquel Raynal miquel.raynal@bootlin.com Link: https://lore.kernel.org/linux-mtd/cd01cba1c7eda58bdabaae174c78c067325803d2.1... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mtd/nand/raw/fsl_upm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/mtd/nand/raw/fsl_upm.c b/drivers/mtd/nand/raw/fsl_upm.c index 086426139173f..7366e85c09fd9 100644 --- a/drivers/mtd/nand/raw/fsl_upm.c +++ b/drivers/mtd/nand/raw/fsl_upm.c @@ -135,7 +135,7 @@ static int fun_exec_op(struct nand_chip *chip, const struct nand_operation *op, unsigned int i; int ret;
- if (op->cs > NAND_MAX_CHIPS) + if (op->cs >= NAND_MAX_CHIPS) return -EINVAL;
if (check_only)
From: Aneesh Kumar K.V aneesh.kumar@linux.ibm.com
[ Upstream commit 6722b25712054c0f903b839b8f5088438dd04df3 ]
altmap->free includes the entire free space from which altmap blocks can be allocated. So when checking whether the kernel is doing altmap block free, compute the boundary correctly, otherwise memory hotunplug can fail.
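In sketch form, the boundary computation before and after the fix (plain C; the field names follow struct vmem_altmap, the numbers are arbitrary):

#include <stdio.h>

struct vmem_altmap_like {
        unsigned long base_pfn;
        unsigned long reserve;
        unsigned long free;     /* entire space blocks can be allocated from */
        unsigned long alloc;    /* already allocated out of 'free' */
        unsigned long align;    /* lost to alignment, also out of 'free' */
};

int main(void)
{
        struct vmem_altmap_like a = {
                .base_pfn = 0x1000, .reserve = 16, .free = 1024,
                .alloc = 128, .align = 8,
        };

        /* Wrong: alloc and align are carved out of 'free', so adding them
         * double-counts and pushes the boundary too far. */
        unsigned long wrong = a.base_pfn + a.reserve + a.free + a.alloc + a.align;

        /* Right: the altmap region ends at base + reserve + free. */
        unsigned long right = a.base_pfn + a.reserve + a.free;

        printf("wrong end: %#lx, right end: %#lx\n", wrong, right);
        return 0;
}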
Fixes: 9ef34630a461 ("powerpc/mm: Fallback to RAM if the altmap is unusable") Signed-off-by: "Aneesh Kumar K.V" aneesh.kumar@linux.ibm.com Reviewed-by: David Hildenbrand david@redhat.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://msgid.link/20230724181320.471386-1-aneesh.kumar@linux.ibm.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/powerpc/mm/init_64.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c index fe1b83020e0df..0ec5b45b1e86a 100644 --- a/arch/powerpc/mm/init_64.c +++ b/arch/powerpc/mm/init_64.c @@ -314,8 +314,7 @@ void __ref vmemmap_free(unsigned long start, unsigned long end, start = ALIGN_DOWN(start, page_size); if (altmap) { alt_start = altmap->base_pfn; - alt_end = altmap->base_pfn + altmap->reserve + - altmap->free + altmap->alloc + altmap->align; + alt_end = altmap->base_pfn + altmap->reserve + altmap->free; }
pr_debug("vmemmap_free %lx...%lx\n", start, end);
From: Alexander Stein alexander.stein@ew.tq-group.com
[ Upstream commit ee31742bf17636da1304af77b2cb1c29b5dda642 ]
When hactive is not aligned to 8 pixels, it is aligned accordingly and the horizontal front porch needs to be reduced by the same amount. Unfortunately, the front porch was set to the difference rather than being reduced by it. Some Samsung TVs can't cope with the resulting front porch instead of the expected 70.
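A small sketch of the corrected adjustment (plain C with arbitrary numbers, not the ipuv3 code): when hactive grows to the next multiple of 8, the front porch shrinks by the same delta instead of being replaced by it.

#include <stdio.h>

struct videomode { int hactive; int hfront_porch; };

static void align_hactive(struct videomode *m)
{
        int new_hactive = (m->hactive + 7) & ~7;        /* round up to 8 pixels */
        int delta = new_hactive - m->hactive;

        m->hfront_porch -= delta;       /* fix: reduce, don't overwrite */
        m->hactive = new_hactive;
}

int main(void)
{
        struct videomode m = { .hactive = 1366, .hfront_porch = 70 };

        align_hactive(&m);
        printf("hactive=%d hfront_porch=%d\n", m.hactive, m.hfront_porch);
        return 0;
}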
Fixes: 94dfec48fca7 ("drm/imx: Add 8 pixel alignment fix") Signed-off-by: Alexander Stein alexander.stein@ew.tq-group.com Reviewed-by: Philipp Zabel p.zabel@pengutronix.de Link: https://lore.kernel.org/r/20230515072137.116211-1-alexander.stein@ew.tq-grou... [p.zabel@pengutronix.de: Fixed subject] Signed-off-by: Philipp Zabel p.zabel@pengutronix.de Link: https://patchwork.freedesktop.org/patch/msgid/20230515072137.116211-1-alexan... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c b/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c index 5f26090b0c985..89585b31b985e 100644 --- a/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c +++ b/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c @@ -310,7 +310,7 @@ static void ipu_crtc_mode_set_nofb(struct drm_crtc *crtc) dev_warn(ipu_crtc->dev, "8-pixel align hactive %d -> %d\n", sig_cfg.mode.hactive, new_hactive);
- sig_cfg.mode.hfront_porch = new_hactive - sig_cfg.mode.hactive; + sig_cfg.mode.hfront_porch -= new_hactive - sig_cfg.mode.hactive; sig_cfg.mode.hactive = new_hactive; }
From: Lijo Lazar lijo.lazar@amd.com
commit db3b5cb64a9ca301d14ed027e470834316720e42 upstream.
Use the generic term fw_reserved_memory for the FW reserved region. This region may also hold the discovery TMR in addition to other reserved regions. Its size could be larger than the discovery TMR size, so don't derive the discovery TMR size from it.
Signed-off-by: Lijo Lazar lijo.lazar@amd.com Reviewed-by: Le Ma le.ma@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com This change fixes reading IP discovery from debugfs. It needed to be hand modified because GC 9.4.3 support isn't introduced in older kernels until 228ce176434b ("drm/amdgpu: Handle VRAM dependencies on GFXIP9.4.3") Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2748 Signed-off-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 35 +++++++++++++++++--------------- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h | 3 +- 2 files changed, 21 insertions(+), 17 deletions(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c @@ -1623,14 +1623,15 @@ static int amdgpu_ttm_training_reserve_v return 0; }
-static void amdgpu_ttm_training_data_block_init(struct amdgpu_device *adev) +static void amdgpu_ttm_training_data_block_init(struct amdgpu_device *adev, + uint32_t reserve_size) { struct psp_memory_training_context *ctx = &adev->psp.mem_train_ctx;
memset(ctx, 0, sizeof(*ctx));
ctx->c2p_train_data_offset = - ALIGN((adev->gmc.mc_vram_size - adev->mman.discovery_tmr_size - SZ_1M), SZ_1M); + ALIGN((adev->gmc.mc_vram_size - reserve_size - SZ_1M), SZ_1M); ctx->p2c_train_data_offset = (adev->gmc.mc_vram_size - GDDR6_MEM_TRAINING_OFFSET); ctx->train_data_size = @@ -1648,9 +1649,10 @@ static void amdgpu_ttm_training_data_blo */ static int amdgpu_ttm_reserve_tmr(struct amdgpu_device *adev) { - int ret; struct psp_memory_training_context *ctx = &adev->psp.mem_train_ctx; bool mem_train_support = false; + uint32_t reserve_size = 0; + int ret;
if (!amdgpu_sriov_vf(adev)) { if (amdgpu_atomfirmware_mem_training_supported(adev)) @@ -1666,14 +1668,15 @@ static int amdgpu_ttm_reserve_tmr(struct * Otherwise, fallback to legacy approach to check and reserve tmr block for ip * discovery data and G6 memory training data respectively */ - adev->mman.discovery_tmr_size = - amdgpu_atomfirmware_get_fw_reserved_fb_size(adev); - if (!adev->mman.discovery_tmr_size) - adev->mman.discovery_tmr_size = DISCOVERY_TMR_OFFSET; + if (adev->bios) + reserve_size = + amdgpu_atomfirmware_get_fw_reserved_fb_size(adev); + if (!reserve_size) + reserve_size = DISCOVERY_TMR_OFFSET;
if (mem_train_support) { /* reserve vram for mem train according to TMR location */ - amdgpu_ttm_training_data_block_init(adev); + amdgpu_ttm_training_data_block_init(adev, reserve_size); ret = amdgpu_bo_create_kernel_at(adev, ctx->c2p_train_data_offset, ctx->train_data_size, @@ -1687,14 +1690,13 @@ static int amdgpu_ttm_reserve_tmr(struct ctx->init = PSP_MEM_TRAIN_RESERVE_SUCCESS; }
- ret = amdgpu_bo_create_kernel_at(adev, - adev->gmc.real_vram_size - adev->mman.discovery_tmr_size, - adev->mman.discovery_tmr_size, - &adev->mman.discovery_memory, - NULL); + ret = amdgpu_bo_create_kernel_at( + adev, adev->gmc.real_vram_size - reserve_size, + reserve_size, &adev->mman.fw_reserved_memory, NULL); if (ret) { DRM_ERROR("alloc tmr failed(%d)!\n", ret); - amdgpu_bo_free_kernel(&adev->mman.discovery_memory, NULL, NULL); + amdgpu_bo_free_kernel(&adev->mman.fw_reserved_memory, + NULL, NULL); return ret; }
@@ -1881,8 +1883,9 @@ void amdgpu_ttm_fini(struct amdgpu_devic /* return the stolen vga memory back to VRAM */ amdgpu_bo_free_kernel(&adev->mman.stolen_vga_memory, NULL, NULL); amdgpu_bo_free_kernel(&adev->mman.stolen_extended_memory, NULL, NULL); - /* return the IP Discovery TMR memory back to VRAM */ - amdgpu_bo_free_kernel(&adev->mman.discovery_memory, NULL, NULL); + /* return the FW reserved memory back to VRAM */ + amdgpu_bo_free_kernel(&adev->mman.fw_reserved_memory, NULL, + NULL); if (adev->mman.stolen_reserved_size) amdgpu_bo_free_kernel(&adev->mman.stolen_reserved_memory, NULL, NULL); --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h @@ -78,7 +78,8 @@ struct amdgpu_mman { /* discovery */ uint8_t *discovery_bin; uint32_t discovery_tmr_size; - struct amdgpu_bo *discovery_memory; + /* fw reserved memory */ + struct amdgpu_bo *fw_reserved_memory;
/* firmware VRAM reservation */ u64 fw_vram_usage_start_offset;
From: Sean Christopherson seanjc@google.com
[ Upstream commit 3bcbc20942db5d738221cca31a928efc09827069 ]
To allow running rseq and KVM's rseq selftests as statically linked binaries, initialize the various "trampoline" pointers to point directly at the expected glibc symbols, and skip the dlsym() lookups if the rseq size is non-zero, i.e. the binary is statically linked *and* the libc registered its own rseq.
Define weak versions of the symbols so as not to break linking against libc versions that don't support rseq in any capacity.
The KVM selftests in particular are often statically linked so that they can be run on targets with very limited runtime environments, i.e. test machines.
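A condensed user-space sketch of the pattern (it assumes a glibc that exports __rseq_offset/__rseq_size; link with -ldl on older toolchains): weak fallback definitions keep linking working against a libc without rseq support, and the dlsym() lookups only happen when the statically resolved size is zero.

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>
#include <stdio.h>

/* Weak definitions so linking succeeds even against a libc that does not
 * provide these symbols; a libc's strong definitions override them when
 * statically linked. */
__attribute__((weak)) ptrdiff_t __rseq_offset;
__attribute__((weak)) unsigned int __rseq_size;

static const ptrdiff_t *rseq_offset_p = &__rseq_offset;
static const unsigned int *rseq_size_p = &__rseq_size;

static void rseq_init_sketch(void)
{
        /* A zero size means either libc did not register rseq, or we are
         * dynamically linked and must look the symbols up at runtime. */
        if (!*rseq_size_p) {
                const void *off = dlsym(RTLD_NEXT, "__rseq_offset");
                const void *siz = dlsym(RTLD_NEXT, "__rseq_size");

                if (off && siz) {
                        rseq_offset_p = off;
                        rseq_size_p = siz;
                }
        }
}

int main(void)
{
        rseq_init_sketch();
        printf("rseq size: %u, offset: %td\n", *rseq_size_p, *rseq_offset_p);
        return 0;
}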
Fixes: 233e667e1ae3 ("selftests/rseq: Uplift rseq selftests for compatibility with glibc-2.35") Cc: Aaron Lewis aaronlewis@google.com Cc: kvm@vger.kernel.org Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson seanjc@google.com Message-Id: 20230721223352.2333911-1-seanjc@google.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/selftests/rseq/rseq.c | 28 ++++++++++++++++++++++------ 1 file changed, 22 insertions(+), 6 deletions(-)
diff --git a/tools/testing/selftests/rseq/rseq.c b/tools/testing/selftests/rseq/rseq.c index 4e4aa006004c8..a723da2532441 100644 --- a/tools/testing/selftests/rseq/rseq.c +++ b/tools/testing/selftests/rseq/rseq.c @@ -34,9 +34,17 @@ #include "../kselftest.h" #include "rseq.h"
-static const ptrdiff_t *libc_rseq_offset_p; -static const unsigned int *libc_rseq_size_p; -static const unsigned int *libc_rseq_flags_p; +/* + * Define weak versions to play nice with binaries that are statically linked + * against a libc that doesn't support registering its own rseq. + */ +__weak ptrdiff_t __rseq_offset; +__weak unsigned int __rseq_size; +__weak unsigned int __rseq_flags; + +static const ptrdiff_t *libc_rseq_offset_p = &__rseq_offset; +static const unsigned int *libc_rseq_size_p = &__rseq_size; +static const unsigned int *libc_rseq_flags_p = &__rseq_flags;
/* Offset from the thread pointer to the rseq area. */ ptrdiff_t rseq_offset; @@ -155,9 +163,17 @@ unsigned int get_rseq_feature_size(void) static __attribute__((constructor)) void rseq_init(void) { - libc_rseq_offset_p = dlsym(RTLD_NEXT, "__rseq_offset"); - libc_rseq_size_p = dlsym(RTLD_NEXT, "__rseq_size"); - libc_rseq_flags_p = dlsym(RTLD_NEXT, "__rseq_flags"); + /* + * If the libc's registered rseq size isn't already valid, it may be + * because the binary is dynamically linked and not necessarily due to + * libc not having registered a restartable sequence. Try to find the + * symbols if that's the case. + */ + if (!*libc_rseq_size_p) { + libc_rseq_offset_p = dlsym(RTLD_NEXT, "__rseq_offset"); + libc_rseq_size_p = dlsym(RTLD_NEXT, "__rseq_size"); + libc_rseq_flags_p = dlsym(RTLD_NEXT, "__rseq_flags"); + } if (libc_rseq_size_p && libc_rseq_offset_p && libc_rseq_flags_p && *libc_rseq_size_p != 0) { /* rseq registration owned by glibc */
From: Xu Yang xu.yang_2@nxp.com
[ Upstream commit ee70b908f77a9d8f689dea986f09e6d7dc481934 ]
Property name "phy-3p0-supply" is used instead of "phy-reg_3p0-supply".
Fixes: 9f30b6b1a957 ("ARM: dts: imx: Add basic dtsi file for imx6sll") cc: stable@vger.kernel.org Signed-off-by: Xu Yang xu.yang_2@nxp.com Signed-off-by: Shawn Guo shawnguo@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/boot/dts/imx6sll.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/boot/dts/imx6sll.dtsi b/arch/arm/boot/dts/imx6sll.dtsi index 2873369a57c02..3659fd5ecfa62 100644 --- a/arch/arm/boot/dts/imx6sll.dtsi +++ b/arch/arm/boot/dts/imx6sll.dtsi @@ -552,7 +552,7 @@ usbphy2: usb-phy@20ca000 { reg = <0x020ca000 0x1000>; interrupts = <GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>; clocks = <&clks IMX6SLL_CLK_USBPHY2>; - phy-reg_3p0-supply = <®_3p0>; + phy-3p0-supply = <®_3p0>; fsl,anatop = <&anatop>; };
From: Andi Shyti andi.shyti@linux.intel.com
[ Upstream commit b2f59e9026038a5bbcbc0019fa58f963138211ee ]
We have always assumed that a device has either AUX or flat CCS, but this approximation is not always true; PVC, for example, is an exception.
Set the basis for future finer selection by implementing a boolean gen12_needs_ccs_aux_inv() function that tells whether aux invalidation is needed or not.
Currently PVC is the only exception to the above mentioned rule.
Requires: 059ae7ae2a1c ("drm/i915/gt: Cleanup aux invalidation registers") Signed-off-by: Andi Shyti andi.shyti@linux.intel.com Cc: Matt Roper matthew.d.roper@intel.com Cc: Jonathan Cavitt jonathan.cavitt@intel.com Cc: stable@vger.kernel.org # v5.8+ Reviewed-by: Matt Roper matthew.d.roper@intel.com Reviewed-by: Andrzej Hajda andrzej.hajda@intel.com Reviewed-by: Nirmoy Das nirmoy.das@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230725001950.1014671-3-andi.... (cherry picked from commit c827655b87ad201ebe36f2e28d16b5491c8f7801) Signed-off-by: Tvrtko Ursulin tvrtko.ursulin@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/gt/gen8_engine_cs.c | 18 +++++++++++++++--- 1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c index f4ed89646fd26..c5b926e3f20d7 100644 --- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c @@ -165,6 +165,18 @@ static u32 preparser_disable(bool state) return MI_ARB_CHECK | 1 << 8 | state; }
+static bool gen12_needs_ccs_aux_inv(struct intel_engine_cs *engine) +{ + if (IS_PONTEVECCHIO(engine->i915)) + return false; + + /* + * so far platforms supported by i915 having + * flat ccs do not require AUX invalidation + */ + return !HAS_FLAT_CCS(engine->i915); +} + u32 *gen12_emit_aux_table_inv(struct intel_gt *gt, u32 *cs, const i915_reg_t inv_reg) { u32 gsi_offset = gt->uncore->gsi_offset; @@ -236,7 +248,7 @@ int gen12_emit_flush_rcs(struct i915_request *rq, u32 mode) else if (engine->class == COMPUTE_CLASS) flags &= ~PIPE_CONTROL_3D_ENGINE_FLAGS;
- if (!HAS_FLAT_CCS(rq->engine->i915)) + if (gen12_needs_ccs_aux_inv(rq->engine)) count = 8 + 4; else count = 8; @@ -254,7 +266,7 @@ int gen12_emit_flush_rcs(struct i915_request *rq, u32 mode)
cs = gen8_emit_pipe_control(cs, flags, LRC_PPHWSP_SCRATCH_ADDR);
- if (!HAS_FLAT_CCS(rq->engine->i915)) { + if (gen12_needs_ccs_aux_inv(rq->engine)) { /* hsdes: 1809175790 */ cs = gen12_emit_aux_table_inv(rq->engine->gt, cs, GEN12_CCS_AUX_INV); @@ -276,7 +288,7 @@ int gen12_emit_flush_xcs(struct i915_request *rq, u32 mode) if (mode & EMIT_INVALIDATE) { cmd += 2;
- if (!HAS_FLAT_CCS(rq->engine->i915) && + if (gen12_needs_ccs_aux_inv(rq->engine) && (rq->engine->class == VIDEO_DECODE_CLASS || rq->engine->class == VIDEO_ENHANCEMENT_CLASS)) { aux_inv = rq->engine->mask &
From: Jonathan Cavitt jonathan.cavitt@intel.com
[ Upstream commit 78a6ccd65fa3a7cc697810db079cc4b84dff03d5 ]
All memory traffic must be quiesced before requesting an aux invalidation on platforms that use Aux CCS.
Fixes: 972282c4cf24 ("drm/i915/gen12: Add aux table invalidate for all engines") Requires: a2a4aa0eef3b ("drm/i915: Add the gen12_needs_ccs_aux_inv helper") Signed-off-by: Jonathan Cavitt jonathan.cavitt@intel.com Signed-off-by: Andi Shyti andi.shyti@linux.intel.com Cc: stable@vger.kernel.org # v5.8+ Reviewed-by: Nirmoy Das nirmoy.das@intel.com Reviewed-by: Andrzej Hajda andrzej.hajda@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230725001950.1014671-4-andi.... (cherry picked from commit ad8ebf12217e451cd19804b1c3e97ad56491c74a) Signed-off-by: Tvrtko Ursulin tvrtko.ursulin@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/gt/gen8_engine_cs.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c index c5b926e3f20d7..6e914c3f5019a 100644 --- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c @@ -193,7 +193,11 @@ int gen12_emit_flush_rcs(struct i915_request *rq, u32 mode) { struct intel_engine_cs *engine = rq->engine;
- if (mode & EMIT_FLUSH) { + /* + * On Aux CCS platforms the invalidation of the Aux + * table requires quiescing memory traffic beforehand + */ + if (mode & EMIT_FLUSH || gen12_needs_ccs_aux_inv(engine)) { u32 flags = 0; u32 *cs;
From: Tejas Upadhyay tejas.upadhyay@intel.com
[ Upstream commit d922b80b1010cd6164fa7d3c197b4fbf94b47beb ]
For MTL, the workaround suggests that SW insert a dummy PIPE_CONTROL prior to a PIPE_CONTROL which contains a post-sync operation: Timestamp or Write Immediate.
Bspec: 72197
V5: - Remove ret variable - Andi
V4: - Update commit message, avoid returning cs - Andi/Matt
V3: - Wrap dummy pipe control stuff in API - Andi
V2: - Fix kernel test robot warnings
Closes: https://lore.kernel.org/oe-kbuild-all/202305121525.3EWdGoBY-lkp@intel.com/ Signed-off-by: Tejas Upadhyay tejas.upadhyay@intel.com Reviewed-by: Andi Shyti andi.shyti@linux.intel.com Reviewed-by: Andrzej Hajda andrzej.hajda@intel.com Signed-off-by: Andi Shyti andi.shyti@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230601110959.1715927-1-tejas... Stable-dep-of: 592b228f12e1 ("drm/i915/gt: Rename flags with bit_group_X according to the datasheet") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/gt/gen8_engine_cs.c | 38 ++++++++++++++++++++++++ 1 file changed, 38 insertions(+)
diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c index 6e914c3f5019a..6210b38a2d382 100644 --- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c @@ -189,6 +189,27 @@ u32 *gen12_emit_aux_table_inv(struct intel_gt *gt, u32 *cs, const i915_reg_t inv return cs; }
+static int mtl_dummy_pipe_control(struct i915_request *rq) +{ + /* Wa_14016712196 */ + if (IS_MTL_GRAPHICS_STEP(rq->engine->i915, M, STEP_A0, STEP_B0) || + IS_MTL_GRAPHICS_STEP(rq->engine->i915, P, STEP_A0, STEP_B0)) { + u32 *cs; + + /* dummy PIPE_CONTROL + depth flush */ + cs = intel_ring_begin(rq, 6); + if (IS_ERR(cs)) + return PTR_ERR(cs); + cs = gen12_emit_pipe_control(cs, + 0, + PIPE_CONTROL_DEPTH_CACHE_FLUSH, + LRC_PPHWSP_SCRATCH_ADDR); + intel_ring_advance(rq, cs); + } + + return 0; +} + int gen12_emit_flush_rcs(struct i915_request *rq, u32 mode) { struct intel_engine_cs *engine = rq->engine; @@ -199,8 +220,13 @@ int gen12_emit_flush_rcs(struct i915_request *rq, u32 mode) */ if (mode & EMIT_FLUSH || gen12_needs_ccs_aux_inv(engine)) { u32 flags = 0; + int err; u32 *cs;
+ err = mtl_dummy_pipe_control(rq); + if (err) + return err; + flags |= PIPE_CONTROL_TILE_CACHE_FLUSH; flags |= PIPE_CONTROL_FLUSH_L3; flags |= PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH; @@ -233,6 +259,11 @@ int gen12_emit_flush_rcs(struct i915_request *rq, u32 mode) if (mode & EMIT_INVALIDATE) { u32 flags = 0; u32 *cs, count; + int err; + + err = mtl_dummy_pipe_control(rq); + if (err) + return err;
flags |= PIPE_CONTROL_COMMAND_CACHE_INVALIDATE; flags |= PIPE_CONTROL_TLB_INVALIDATE; @@ -749,6 +780,13 @@ u32 *gen12_emit_fini_breadcrumb_rcs(struct i915_request *rq, u32 *cs) PIPE_CONTROL_DC_FLUSH_ENABLE | PIPE_CONTROL_FLUSH_ENABLE);
+ /* Wa_14016712196 */ + if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) || + IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0)) + /* dummy PIPE_CONTROL + depth flush */ + cs = gen12_emit_pipe_control(cs, 0, + PIPE_CONTROL_DEPTH_CACHE_FLUSH, 0); + if (GRAPHICS_VER(i915) == 12 && GRAPHICS_VER_FULL(i915) < IP_VER(12, 50)) /* Wa_1409600907 */ flags |= PIPE_CONTROL_DEPTH_STALL;
From: Andi Shyti andi.shyti@linux.intel.com
[ Upstream commit 592b228f12e15867a63e3a6eeeb54c5c12662a62 ]
In preparation for the next patch, align the naming of the pipe control flag values with the datasheet (BSPEC 47112). The variable "flags" in gen12_emit_flush_rcs() is applied as a set of flags called Bit Group 1.
Define also the Bit Group 0 as bit_group_0 where currently only PIPE_CONTROL0_HDC_PIPELINE_FLUSH bit is set.
Signed-off-by: Andi Shyti andi.shyti@linux.intel.com Cc: stable@vger.kernel.org # v5.8+ Reviewed-by: Matt Roper matthew.d.roper@intel.com Reviewed-by: Andrzej Hajda andrzej.hajda@intel.com Reviewed-by: Nirmoy Das nirmoy.das@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230725001950.1014671-5-andi.... (cherry picked from commit f2dcd21d5a22e13f2fbfe7ab65149038b93cf2ff) Signed-off-by: Tvrtko Ursulin tvrtko.ursulin@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/gt/gen8_engine_cs.c | 34 +++++++++++++----------- drivers/gpu/drm/i915/gt/gen8_engine_cs.h | 18 ++++++++----- 2 files changed, 29 insertions(+), 23 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c index 6210b38a2d382..5d2175e918dd2 100644 --- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c @@ -219,7 +219,8 @@ int gen12_emit_flush_rcs(struct i915_request *rq, u32 mode) * table requires quiescing memory traffic beforehand */ if (mode & EMIT_FLUSH || gen12_needs_ccs_aux_inv(engine)) { - u32 flags = 0; + u32 bit_group_0 = 0; + u32 bit_group_1 = 0; int err; u32 *cs;
@@ -227,32 +228,33 @@ int gen12_emit_flush_rcs(struct i915_request *rq, u32 mode) if (err) return err;
- flags |= PIPE_CONTROL_TILE_CACHE_FLUSH; - flags |= PIPE_CONTROL_FLUSH_L3; - flags |= PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH; - flags |= PIPE_CONTROL_DEPTH_CACHE_FLUSH; + bit_group_0 |= PIPE_CONTROL0_HDC_PIPELINE_FLUSH; + + bit_group_1 |= PIPE_CONTROL_TILE_CACHE_FLUSH; + bit_group_1 |= PIPE_CONTROL_FLUSH_L3; + bit_group_1 |= PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH; + bit_group_1 |= PIPE_CONTROL_DEPTH_CACHE_FLUSH; /* Wa_1409600907:tgl,adl-p */ - flags |= PIPE_CONTROL_DEPTH_STALL; - flags |= PIPE_CONTROL_DC_FLUSH_ENABLE; - flags |= PIPE_CONTROL_FLUSH_ENABLE; + bit_group_1 |= PIPE_CONTROL_DEPTH_STALL; + bit_group_1 |= PIPE_CONTROL_DC_FLUSH_ENABLE; + bit_group_1 |= PIPE_CONTROL_FLUSH_ENABLE;
- flags |= PIPE_CONTROL_STORE_DATA_INDEX; - flags |= PIPE_CONTROL_QW_WRITE; + bit_group_1 |= PIPE_CONTROL_STORE_DATA_INDEX; + bit_group_1 |= PIPE_CONTROL_QW_WRITE;
- flags |= PIPE_CONTROL_CS_STALL; + bit_group_1 |= PIPE_CONTROL_CS_STALL;
if (!HAS_3D_PIPELINE(engine->i915)) - flags &= ~PIPE_CONTROL_3D_ARCH_FLAGS; + bit_group_1 &= ~PIPE_CONTROL_3D_ARCH_FLAGS; else if (engine->class == COMPUTE_CLASS) - flags &= ~PIPE_CONTROL_3D_ENGINE_FLAGS; + bit_group_1 &= ~PIPE_CONTROL_3D_ENGINE_FLAGS;
cs = intel_ring_begin(rq, 6); if (IS_ERR(cs)) return PTR_ERR(cs);
- cs = gen12_emit_pipe_control(cs, - PIPE_CONTROL0_HDC_PIPELINE_FLUSH, - flags, LRC_PPHWSP_SCRATCH_ADDR); + cs = gen12_emit_pipe_control(cs, bit_group_0, bit_group_1, + LRC_PPHWSP_SCRATCH_ADDR); intel_ring_advance(rq, cs); }
diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.h b/drivers/gpu/drm/i915/gt/gen8_engine_cs.h index 655e5c00ddc27..a44eda096557c 100644 --- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.h +++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.h @@ -49,25 +49,29 @@ u32 *gen12_emit_fini_breadcrumb_rcs(struct i915_request *rq, u32 *cs); u32 *gen12_emit_aux_table_inv(struct intel_gt *gt, u32 *cs, const i915_reg_t inv_reg);
static inline u32 * -__gen8_emit_pipe_control(u32 *batch, u32 flags0, u32 flags1, u32 offset) +__gen8_emit_pipe_control(u32 *batch, u32 bit_group_0, + u32 bit_group_1, u32 offset) { memset(batch, 0, 6 * sizeof(u32));
- batch[0] = GFX_OP_PIPE_CONTROL(6) | flags0; - batch[1] = flags1; + batch[0] = GFX_OP_PIPE_CONTROL(6) | bit_group_0; + batch[1] = bit_group_1; batch[2] = offset;
return batch + 6; }
-static inline u32 *gen8_emit_pipe_control(u32 *batch, u32 flags, u32 offset) +static inline u32 *gen8_emit_pipe_control(u32 *batch, + u32 bit_group_1, u32 offset) { - return __gen8_emit_pipe_control(batch, 0, flags, offset); + return __gen8_emit_pipe_control(batch, 0, bit_group_1, offset); }
-static inline u32 *gen12_emit_pipe_control(u32 *batch, u32 flags0, u32 flags1, u32 offset) +static inline u32 *gen12_emit_pipe_control(u32 *batch, u32 bit_group_0, + u32 bit_group_1, u32 offset) { - return __gen8_emit_pipe_control(batch, flags0, flags1, offset); + return __gen8_emit_pipe_control(batch, bit_group_0, + bit_group_1, offset); }
static inline u32 *
From: Jonathan Cavitt jonathan.cavitt@intel.com
[ Upstream commit 0fde2f23516a00fd90dfb980b66b4665fcbfa659 ]
For platforms that use Aux CCS, wait for aux invalidation to complete by checking that the aux invalidation register bit is cleared.
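The general shape of such a wait, as a standalone sketch with a fake register accessor (read_reg() and the poll limit are assumptions for the example, not i915 code; the actual patch performs the poll from the command streamer with a semaphore-wait instruction rather than from the CPU):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define AUX_INV_BIT 0x1u

/* Fake register backing store standing in for an MMIO read. */
static uint32_t fake_reg = AUX_INV_BIT;

static uint32_t read_reg(void)
{
        /* Pretend the hardware clears the bit after a few polls. */
        static int polls;

        if (++polls >= 3)
                fake_reg &= ~AUX_INV_BIT;
        return fake_reg;
}

/* Poll until the invalidation bit is cleared or we give up. */
static bool wait_for_aux_inv(int max_polls)
{
        for (int i = 0; i < max_polls; i++)
                if (!(read_reg() & AUX_INV_BIT))
                        return true;
        return false;
}

int main(void)
{
        printf("aux invalidation %s\n",
               wait_for_aux_inv(10) ? "completed" : "timed out");
        return 0;
}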
Fixes: 972282c4cf24 ("drm/i915/gen12: Add aux table invalidate for all engines")
Signed-off-by: Jonathan Cavitt jonathan.cavitt@intel.com
Signed-off-by: Andi Shyti andi.shyti@linux.intel.com
Cc: stable@vger.kernel.org # v5.8+
Reviewed-by: Nirmoy Das nirmoy.das@intel.com
Reviewed-by: Andrzej Hajda andrzej.hajda@intel.com
Reviewed-by: Matt Roper matthew.d.roper@intel.com
Link: https://patchwork.freedesktop.org/patch/msgid/20230725001950.1014671-7-andi....
(cherry picked from commit d459c86f00aa98028d155a012c65dc42f7c37e76)
Signed-off-by: Tvrtko Ursulin tvrtko.ursulin@intel.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/gpu/drm/i915/gt/gen8_engine_cs.c     | 17 ++++++++++++-----
 drivers/gpu/drm/i915/gt/intel_gpu_commands.h |  1 +
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
index 5d2175e918dd2..b2d69ce4a749b 100644
--- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
@@ -184,7 +184,15 @@ u32 *gen12_emit_aux_table_inv(struct intel_gt *gt, u32 *cs, const i915_reg_t inv
 	*cs++ = MI_LOAD_REGISTER_IMM(1) | MI_LRI_MMIO_REMAP_EN;
 	*cs++ = i915_mmio_reg_offset(inv_reg) + gsi_offset;
 	*cs++ = AUX_INV;
-	*cs++ = MI_NOOP;
+
+	*cs++ = MI_SEMAPHORE_WAIT_TOKEN |
+		MI_SEMAPHORE_REGISTER_POLL |
+		MI_SEMAPHORE_POLL |
+		MI_SEMAPHORE_SAD_EQ_SDD;
+	*cs++ = 0;
+	*cs++ = i915_mmio_reg_offset(inv_reg) + gsi_offset;
+	*cs++ = 0;
+	*cs++ = 0;

 	return cs;
 }
@@ -285,10 +293,9 @@ int gen12_emit_flush_rcs(struct i915_request *rq, u32 mode)
 		else if (engine->class == COMPUTE_CLASS)
 			flags &= ~PIPE_CONTROL_3D_ENGINE_FLAGS;

+		count = 8;
 		if (gen12_needs_ccs_aux_inv(rq->engine))
-			count = 8 + 4;
-		else
-			count = 8;
+			count += 8;

 		cs = intel_ring_begin(rq, count);
 		if (IS_ERR(cs))
@@ -331,7 +338,7 @@ int gen12_emit_flush_xcs(struct i915_request *rq, u32 mode)
 			aux_inv = rq->engine->mask &
 				~GENMASK(_BCS(I915_MAX_BCS - 1), BCS0);
 			if (aux_inv)
-				cmd += 4;
+				cmd += 8;
 		}
 	}

diff --git a/drivers/gpu/drm/i915/gt/intel_gpu_commands.h b/drivers/gpu/drm/i915/gt/intel_gpu_commands.h
index 5d143e2a8db03..02125a1db2796 100644
--- a/drivers/gpu/drm/i915/gt/intel_gpu_commands.h
+++ b/drivers/gpu/drm/i915/gt/intel_gpu_commands.h
@@ -121,6 +121,7 @@
 #define MI_SEMAPHORE_TARGET(engine) ((engine)<<15)
 #define MI_SEMAPHORE_WAIT MI_INSTR(0x1c, 2) /* GEN8+ */
 #define MI_SEMAPHORE_WAIT_TOKEN MI_INSTR(0x1c, 3) /* GEN12+ */
+#define MI_SEMAPHORE_REGISTER_POLL (1 << 16)
 #define MI_SEMAPHORE_POLL (1 << 15)
 #define MI_SEMAPHORE_SAD_GT_SDD (0 << 12)
 #define MI_SEMAPHORE_SAD_GTE_SDD (1 << 12)
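[Editorial note, not part of the patch: a standalone sketch of the 8-dword sequence gen12_emit_aux_table_inv() emits after this change -- an LRI that writes AUX_INV into the engine's invalidation register, followed by an MI_SEMAPHORE_WAIT in register-poll mode that waits until the register reads back as zero. The MI_* encodings are approximated after i915's intel_gpu_commands.h; the MI_LRI_MMIO_REMAP_EN, MI_SEMAPHORE_SAD_EQ_SDD and AUX_INV values here are assumptions.]

/* aux_inv_poll_sketch.c -- illustration only, not kernel code. */
#include <stdint.h>
#include <stdio.h>

#define MI_INSTR(opcode, flags)		(((uint32_t)(opcode) << 23) | (flags))
#define MI_LOAD_REGISTER_IMM(x)		MI_INSTR(0x22, 2 * (x) - 1)
#define MI_LRI_MMIO_REMAP_EN		(1u << 17)	/* assumption */
#define MI_SEMAPHORE_WAIT_TOKEN		MI_INSTR(0x1c, 3)
#define MI_SEMAPHORE_REGISTER_POLL	(1u << 16)
#define MI_SEMAPHORE_POLL		(1u << 15)
#define MI_SEMAPHORE_SAD_EQ_SDD		(4u << 12)	/* assumption */
#define AUX_INV				(1u << 0)	/* assumption */

static uint32_t *emit_aux_table_inv(uint32_t *cs, uint32_t inv_reg_offset)
{
	/* 1) Trigger the invalidation by writing AUX_INV to the register. */
	*cs++ = MI_LOAD_REGISTER_IMM(1) | MI_LRI_MMIO_REMAP_EN;
	*cs++ = inv_reg_offset;
	*cs++ = AUX_INV;

	/* 2) Poll the same register until the hardware clears it back to 0. */
	*cs++ = MI_SEMAPHORE_WAIT_TOKEN |
		MI_SEMAPHORE_REGISTER_POLL |
		MI_SEMAPHORE_POLL |
		MI_SEMAPHORE_SAD_EQ_SDD;
	*cs++ = 0;			/* semaphore data: wait for == 0 */
	*cs++ = inv_reg_offset;		/* register to poll */
	*cs++ = 0;
	*cs++ = 0;

	return cs;			/* 8 dwords total, hence the count/cmd += 8 above */
}

int main(void)
{
	uint32_t ring[8];
	uint32_t *end = emit_aux_table_inv(ring, 0x4208 /* made-up offset */);
	uint32_t *p;

	for (p = ring; p < end; p++)
		printf("0x%08x\n", (unsigned int)*p);
	return 0;
}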
From: Andi Shyti andi.shyti@linux.intel.com
[ Upstream commit 6a35f22d222528e1b157c6978c9424d2f8cbe0a1 ]
Refactor the code so that all the operations around aux table invalidation are kept in a single place.

With this refactoring, also add more engines on which the invalidation should be performed.
Fixes: 972282c4cf24 ("drm/i915/gen12: Add aux table invalidate for all engines")
Signed-off-by: Andi Shyti andi.shyti@linux.intel.com
Cc: Jonathan Cavitt jonathan.cavitt@intel.com
Cc: Matt Roper matthew.d.roper@intel.com
Cc: stable@vger.kernel.org # v5.8+
Reviewed-by: Andrzej Hajda andrzej.hajda@intel.com
Link: https://patchwork.freedesktop.org/patch/msgid/20230725001950.1014671-8-andi....
(cherry picked from commit 76ff7789d6e63d1a10b3b58f5c70b2e640c7a880)
Signed-off-by: Tvrtko Ursulin tvrtko.ursulin@intel.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/gpu/drm/i915/gt/gen8_engine_cs.c | 66 +++++++++++++-----------
 drivers/gpu/drm/i915/gt/gen8_engine_cs.h |  3 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c      | 17 +-----
 3 files changed, 41 insertions(+), 45 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
index b2d69ce4a749b..024e212b5f80d 100644
--- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
@@ -165,21 +165,47 @@ static u32 preparser_disable(bool state)
 	return MI_ARB_CHECK | 1 << 8 | state;
 }

+static i915_reg_t gen12_get_aux_inv_reg(struct intel_engine_cs *engine)
+{
+	switch (engine->id) {
+	case RCS0:
+		return GEN12_CCS_AUX_INV;
+	case BCS0:
+		return GEN12_BCS0_AUX_INV;
+	case VCS0:
+		return GEN12_VD0_AUX_INV;
+	case VCS2:
+		return GEN12_VD2_AUX_INV;
+	case VECS0:
+		return GEN12_VE0_AUX_INV;
+	case CCS0:
+		return GEN12_CCS0_AUX_INV;
+	default:
+		return INVALID_MMIO_REG;
+	}
+}
+
 static bool gen12_needs_ccs_aux_inv(struct intel_engine_cs *engine)
 {
+	i915_reg_t reg = gen12_get_aux_inv_reg(engine);
+
 	if (IS_PONTEVECCHIO(engine->i915))
 		return false;

 	/*
-	 * so far platforms supported by i915 having
-	 * flat ccs do not require AUX invalidation
+	 * So far platforms supported by i915 having flat ccs do not require
+	 * AUX invalidation. Check also whether the engine requires it.
 	 */
-	return !HAS_FLAT_CCS(engine->i915);
+	return i915_mmio_reg_valid(reg) && !HAS_FLAT_CCS(engine->i915);
 }

-u32 *gen12_emit_aux_table_inv(struct intel_gt *gt, u32 *cs, const i915_reg_t inv_reg)
+u32 *gen12_emit_aux_table_inv(struct intel_engine_cs *engine, u32 *cs)
 {
-	u32 gsi_offset = gt->uncore->gsi_offset;
+	i915_reg_t inv_reg = gen12_get_aux_inv_reg(engine);
+	u32 gsi_offset = engine->gt->uncore->gsi_offset;
+
+	if (!gen12_needs_ccs_aux_inv(engine))
+		return cs;

 	*cs++ = MI_LOAD_REGISTER_IMM(1) | MI_LRI_MMIO_REMAP_EN;
 	*cs++ = i915_mmio_reg_offset(inv_reg) + gsi_offset;
@@ -310,11 +336,7 @@ int gen12_emit_flush_rcs(struct i915_request *rq, u32 mode)

 		cs = gen8_emit_pipe_control(cs, flags, LRC_PPHWSP_SCRATCH_ADDR);

-		if (gen12_needs_ccs_aux_inv(rq->engine)) {
-			/* hsdes: 1809175790 */
-			cs = gen12_emit_aux_table_inv(rq->engine->gt, cs,
-						      GEN12_CCS_AUX_INV);
-		}
+		cs = gen12_emit_aux_table_inv(engine, cs);

 		*cs++ = preparser_disable(false);
 		intel_ring_advance(rq, cs);
@@ -325,21 +347,14 @@ int gen12_emit_flush_rcs(struct i915_request *rq, u32 mode)

 int gen12_emit_flush_xcs(struct i915_request *rq, u32 mode)
 {
-	intel_engine_mask_t aux_inv = 0;
-	u32 cmd, *cs;
+	u32 cmd = 4;
+	u32 *cs;

-	cmd = 4;
 	if (mode & EMIT_INVALIDATE) {
 		cmd += 2;

-		if (gen12_needs_ccs_aux_inv(rq->engine) &&
-		    (rq->engine->class == VIDEO_DECODE_CLASS ||
-		     rq->engine->class == VIDEO_ENHANCEMENT_CLASS)) {
-			aux_inv = rq->engine->mask &
-				~GENMASK(_BCS(I915_MAX_BCS - 1), BCS0);
-			if (aux_inv)
-				cmd += 8;
-		}
+		if (gen12_needs_ccs_aux_inv(rq->engine))
+			cmd += 8;
 	}

 	cs = intel_ring_begin(rq, cmd);
@@ -370,14 +385,7 @@ int gen12_emit_flush_xcs(struct i915_request *rq, u32 mode)
 	*cs++ = 0; /* upper addr */
 	*cs++ = 0; /* value */

-	if (aux_inv) { /* hsdes: 1809175790 */
-		if (rq->engine->class == VIDEO_DECODE_CLASS)
-			cs = gen12_emit_aux_table_inv(rq->engine->gt,
-						      cs, GEN12_VD0_AUX_INV);
-		else
-			cs = gen12_emit_aux_table_inv(rq->engine->gt,
-						      cs, GEN12_VE0_AUX_INV);
-	}
+	cs = gen12_emit_aux_table_inv(rq->engine, cs);

 	if (mode & EMIT_INVALIDATE)
 		*cs++ = preparser_disable(false);

diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.h b/drivers/gpu/drm/i915/gt/gen8_engine_cs.h
index a44eda096557c..867ba697aceb8 100644
--- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.h
+++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.h
@@ -13,6 +13,7 @@
 #include "intel_gt_regs.h"
 #include "intel_gpu_commands.h"

+struct intel_engine_cs;
 struct intel_gt;
 struct i915_request;

@@ -46,7 +47,7 @@ u32 *gen8_emit_fini_breadcrumb_rcs(struct i915_request *rq, u32 *cs);
 u32 *gen11_emit_fini_breadcrumb_rcs(struct i915_request *rq, u32 *cs);
 u32 *gen12_emit_fini_breadcrumb_rcs(struct i915_request *rq, u32 *cs);

-u32 *gen12_emit_aux_table_inv(struct intel_gt *gt, u32 *cs, const i915_reg_t inv_reg);
+u32 *gen12_emit_aux_table_inv(struct intel_engine_cs *engine, u32 *cs);

 static inline u32 *
 __gen8_emit_pipe_control(u32 *batch, u32 bit_group_0,

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index c0c16c06eb29e..502a1c0093aab 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1364,10 +1364,7 @@ gen12_emit_indirect_ctx_rcs(const struct intel_context *ce, u32 *cs)
 	    IS_DG2_G11(ce->engine->i915))
 		cs = gen8_emit_pipe_control(cs, PIPE_CONTROL_INSTRUCTION_CACHE_INVALIDATE, 0);

-	/* hsdes: 1809175790 */
-	if (!HAS_FLAT_CCS(ce->engine->i915))
-		cs = gen12_emit_aux_table_inv(ce->engine->gt,
-					      cs, GEN12_CCS_AUX_INV);
+	cs = gen12_emit_aux_table_inv(ce->engine, cs);

 	/* Wa_16014892111 */
 	if (IS_DG2(ce->engine->i915))
@@ -1390,17 +1387,7 @@ gen12_emit_indirect_ctx_xcs(const struct intel_context *ce, u32 *cs)
 				    PIPE_CONTROL_INSTRUCTION_CACHE_INVALIDATE,
 				    0);

-	/* hsdes: 1809175790 */
-	if (!HAS_FLAT_CCS(ce->engine->i915)) {
-		if (ce->engine->class == VIDEO_DECODE_CLASS)
-			cs = gen12_emit_aux_table_inv(ce->engine->gt,
-						      cs, GEN12_VD0_AUX_INV);
-		else if (ce->engine->class == VIDEO_ENHANCEMENT_CLASS)
-			cs = gen12_emit_aux_table_inv(ce->engine->gt,
-						      cs, GEN12_VE0_AUX_INV);
-	}
-
-	return cs;
+	return gen12_emit_aux_table_inv(ce->engine, cs);
 }
static void
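[Editorial note, not part of the patch: a rough userspace model of the refactor, showing the control flow only -- engine-to-register selection and the "does this engine need it at all" check now live inside one helper, so every caller can unconditionally do cs = gen12_emit_aux_table_inv(engine, cs). Types, register values and the single-dword placeholder emission are simplified stand-ins, not the real i915 definitions.]

/* aux_inv_dispatch_sketch.c -- illustration only, not kernel code. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum engine_id { RCS0, BCS0, VCS0, VCS2, VECS0, CCS0, OTHER_ENGINE };

struct engine {
	enum engine_id id;
	bool has_flat_ccs;		/* stand-in for HAS_FLAT_CCS(i915) */
};

/* 0 plays the role of INVALID_MMIO_REG; values are made-up stand-ins. */
static uint32_t get_aux_inv_reg(const struct engine *e)
{
	switch (e->id) {
	case RCS0:  return 0x1;		/* stand-in for GEN12_CCS_AUX_INV */
	case BCS0:  return 0x2;		/* stand-in for GEN12_BCS0_AUX_INV */
	case VCS0:  return 0x3;		/* stand-in for GEN12_VD0_AUX_INV */
	case VCS2:  return 0x4;		/* stand-in for GEN12_VD2_AUX_INV */
	case VECS0: return 0x5;		/* stand-in for GEN12_VE0_AUX_INV */
	case CCS0:  return 0x6;		/* stand-in for GEN12_CCS0_AUX_INV */
	default:    return 0;
	}
}

static bool needs_ccs_aux_inv(const struct engine *e)
{
	/* Engines without an invalidation register, or flat-CCS parts, skip it. */
	return get_aux_inv_reg(e) != 0 && !e->has_flat_ccs;
}

static uint32_t *emit_aux_table_inv(const struct engine *e, uint32_t *cs)
{
	if (!needs_ccs_aux_inv(e))
		return cs;		/* nothing emitted; callers stay unconditional */

	*cs++ = get_aux_inv_reg(e);	/* placeholder for the real 8-dword sequence */
	return cs;
}

int main(void)
{
	struct engine vd0 = { .id = VCS0, .has_flat_ccs = false };
	uint32_t ring[8], *cs = ring;

	cs = emit_aux_table_inv(&vd0, cs);
	printf("emitted %ld dword(s)\n", (long)(cs - ring));
	return 0;
}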
From: Andi Shyti andi.shyti@linux.intel.com
[ Upstream commit 824df77ab2107d8d4740b834b276681a41ae1ac8 ]
Enable the CCS_FLUSH bit (bit 13) in the pipe control for render and compute engines on platforms starting from Meteor Lake (BSPEC 43904 and 47112).

For the copy engine, add MI_FLUSH_DW_CCS (bit 16) in the command streamer.
Fixes: 972282c4cf24 ("drm/i915/gen12: Add aux table invalidate for all engines")
Requires: 8da173db894a ("drm/i915/gt: Rename flags with bit_group_X according to the datasheet")
Signed-off-by: Andi Shyti andi.shyti@linux.intel.com
Cc: Jonathan Cavitt jonathan.cavitt@intel.com
Cc: Nirmoy Das nirmoy.das@intel.com
Cc: stable@vger.kernel.org # v5.8+
Reviewed-by: Matt Roper matthew.d.roper@intel.com
Reviewed-by: Andrzej Hajda andrzej.hajda@intel.com
Reviewed-by: Nirmoy Das nirmoy.das@intel.com
Link: https://patchwork.freedesktop.org/patch/msgid/20230725001950.1014671-6-andi....
(cherry picked from commit b70df82b428774875c7c56d3808102165891547c)
Signed-off-by: Tvrtko Ursulin tvrtko.ursulin@intel.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/gpu/drm/i915/gt/gen8_engine_cs.c     | 11 +++++++++++
 drivers/gpu/drm/i915/gt/intel_gpu_commands.h |  1 +
 2 files changed, 12 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
index 024e212b5f80d..2702ad4c26c88 100644
--- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
@@ -264,6 +264,13 @@ int gen12_emit_flush_rcs(struct i915_request *rq, u32 mode)

 		bit_group_0 |= PIPE_CONTROL0_HDC_PIPELINE_FLUSH;

+		/*
+		 * When required, in MTL and beyond platforms we
+		 * need to set the CCS_FLUSH bit in the pipe control
+		 */
+		if (GRAPHICS_VER_FULL(rq->i915) >= IP_VER(12, 70))
+			bit_group_0 |= PIPE_CONTROL_CCS_FLUSH;
+
 		bit_group_1 |= PIPE_CONTROL_TILE_CACHE_FLUSH;
 		bit_group_1 |= PIPE_CONTROL_FLUSH_L3;
 		bit_group_1 |= PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH;
@@ -378,6 +385,10 @@ int gen12_emit_flush_xcs(struct i915_request *rq, u32 mode)
 		cmd |= MI_INVALIDATE_TLB;
 		if (rq->engine->class == VIDEO_DECODE_CLASS)
 			cmd |= MI_INVALIDATE_BSD;
+
+		if (gen12_needs_ccs_aux_inv(rq->engine) &&
+		    rq->engine->class == COPY_ENGINE_CLASS)
+			cmd |= MI_FLUSH_DW_CCS;
 	}

 	*cs++ = cmd;

diff --git a/drivers/gpu/drm/i915/gt/intel_gpu_commands.h b/drivers/gpu/drm/i915/gt/intel_gpu_commands.h
index 02125a1db2796..2bd8d98d21102 100644
--- a/drivers/gpu/drm/i915/gt/intel_gpu_commands.h
+++ b/drivers/gpu/drm/i915/gt/intel_gpu_commands.h
@@ -300,6 +300,7 @@
 #define PIPE_CONTROL_QW_WRITE (1<<14)
 #define PIPE_CONTROL_POST_SYNC_OP_MASK (3<<14)
 #define PIPE_CONTROL_DEPTH_STALL (1<<13)
+#define PIPE_CONTROL_CCS_FLUSH (1<<13) /* MTL+ */
 #define PIPE_CONTROL_WRITE_FLUSH (1<<12)
 #define PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH (1<<12) /* gen6+ */
 #define PIPE_CONTROL_INSTRUCTION_CACHE_INVALIDATE (1<<11) /* MBZ on ILK */
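[Editorial note, not part of the patch: a minimal sketch of the flag selection added above, assuming a simplified IP-version check in place of GRAPHICS_VER_FULL()/IP_VER(). Bit positions follow the hunks and commit message above: CCS_FLUSH is PIPE_CONTROL bit 13, MI_FLUSH_DW_CCS is bit 16 of the copy engine's flush command dword.]

/* mtl_ccs_flush_sketch.c -- illustration only, not kernel code. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PIPE_CONTROL_CCS_FLUSH	(1u << 13)	/* MTL+, from the hunk above */
#define MI_FLUSH_DW_CCS		(1u << 16)	/* bit 16, per the commit message */

/* Stand-in for GRAPHICS_VER_FULL(i915) >= IP_VER(12, 70). */
static bool graphics_ip_is_at_least(unsigned int major, unsigned int minor,
				    unsigned int want_major, unsigned int want_minor)
{
	return major > want_major || (major == want_major && minor >= want_minor);
}

static uint32_t rcs_extra_bit_group_0(unsigned int ip_major, unsigned int ip_minor)
{
	uint32_t bit_group_0 = 0;

	/* MTL and beyond also need the CCS flush in the PIPE_CONTROL. */
	if (graphics_ip_is_at_least(ip_major, ip_minor, 12, 70))
		bit_group_0 |= PIPE_CONTROL_CCS_FLUSH;

	return bit_group_0;
}

static uint32_t xcs_extra_flush_bits(bool is_copy_engine, bool needs_ccs_aux_inv)
{
	uint32_t cmd = 0;

	/* The copy engine gets MI_FLUSH_DW_CCS in its flush command instead. */
	if (needs_ccs_aux_inv && is_copy_engine)
		cmd |= MI_FLUSH_DW_CCS;

	return cmd;
}

int main(void)
{
	printf("MTL RCS bit_group_0 extra: 0x%08x\n",
	       (unsigned int)rcs_extra_bit_group_0(12, 70));
	printf("BCS flush cmd extra:       0x%08x\n",
	       (unsigned int)xcs_extra_flush_bits(true, true));
	return 0;
}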
On Wed, Aug 09, 2023 at 12:38:51PM +0200, Greg Kroah-Hartman wrote:
For RCU, Tested-by: Joel Fernandes (Google) joel@joelfernandes.org
thanks,
- Joel
Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 6.4.10-rc1
Andi Shyti andi.shyti@linux.intel.com drm/i915/gt: Enable the CCS_FLUSH bit in the pipe control and in the CS
Andi Shyti andi.shyti@linux.intel.com drm/i915/gt: Support aux invalidation on all engines
Jonathan Cavitt jonathan.cavitt@intel.com drm/i915/gt: Poll aux invalidation register bit on invalidation
Andi Shyti andi.shyti@linux.intel.com drm/i915/gt: Rename flags with bit_group_X according to the datasheet
Tejas Upadhyay tejas.upadhyay@intel.com drm/i915/gt: Add workaround 14016712196
Jonathan Cavitt jonathan.cavitt@intel.com drm/i915/gt: Ensure memory quiesced before invalidation
Andi Shyti andi.shyti@linux.intel.com drm/i915: Add the gen12_needs_ccs_aux_inv helper
Xu Yang xu.yang_2@nxp.com ARM: dts: nxp/imx6sll: fix wrong property name in usbphy node
Sean Christopherson seanjc@google.com selftests/rseq: Play nice with binaries statically linked against glibc 2.35+
Lijo Lazar lijo.lazar@amd.com drm/amdgpu: Use apt name for FW reserved region
Alexander Stein alexander.stein@ew.tq-group.com drm/imx/ipuv3: Fix front porch adjustment upon hactive aligning
Aneesh Kumar K.V aneesh.kumar@linux.ibm.com powerpc/mm/altmap: Fix altmap boundary check
Christophe JAILLET christophe.jaillet@wanadoo.fr mtd: rawnand: fsl_upm: Fix an off-by one test in fun_exec_op()
Arnd Bergmann arnd@arndb.de mtd: spi-nor: avoid holes in struct spi_mem_op
Chen-Yu Tsai wenst@chromium.org clk: mediatek: mt8183: Add back SSPM related clocks
Johan Jonker jbx6244@gmail.com mtd: rawnand: rockchip: Align hwecc vs. raw page helper layouts
Johan Jonker jbx6244@gmail.com mtd: rawnand: rockchip: fix oobfree offset and description
Roger Quadros rogerq@kernel.org mtd: rawnand: omap_elm: Fix incorrect type in assignment
Pavel Begunkov asml.silence@gmail.com io_uring: annotate offset timeout races
Chao Yu chao@kernel.org f2fs: fix to do sanity check on direct node in truncate_dnode()
Filipe Manana fdmanana@suse.com btrfs: remove BUG_ON()'s in add_new_free_space()
Jan Kara jack@suse.cz ext2: Drop fragment support
Jason Gunthorpe jgg@ziepe.ca mm/gup: do not return 0 from pin_user_pages_fast() for bad args
Jan Kara jack@suse.cz fs: Protect reconfiguration of sb read-write from racing writes
Alan Stern stern@rowland.harvard.edu net: usbnet: Fix WARNING in usbnet_start_xmit/usb_submit_urb
Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp debugobjects: Recheck debug_objects_enabled before reporting
Sungwoo Kim iam@sung-woo.kim Bluetooth: L2CAP: Fix use-after-free in l2cap_sock_ready_cb
Prince Kumar Maurya princekumarmaurya06@gmail.com fs/sysv: Null check to prevent null-ptr-deref bug
Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp kasan,kmsan: remove __GFP_KSWAPD_RECLAIM usage from kasan/kmsan
Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp fs/ntfs3: Use __GFP_NOWARN allocation at ntfs_load_attr_list()
Roman Gushchin roman.gushchin@linux.dev mm: kmem: fix a NULL pointer dereference in obj_stock_flush_required()
Linus Torvalds torvalds@linux-foundation.org file: reinstate f_pos locking optimization for regular files
Geert Uytterhoeven geert+renesas@glider.be clk: imx93: Propagate correct error in imx93_clocks_probe()
Stephen Rothwell sfr@canb.auug.org.au sunvnet: fix sparc64 build error after gso code split
Mike Kravetz mike.kravetz@oracle.com Revert "page cache: fix page_cache_next/prev_miss off by one"
Andi Shyti andi.shyti@linux.intel.com drm/i915/gt: Cleanup aux invalidation registers
Janusz Krzysztofik janusz.krzysztofik@linux.intel.com drm/i915: Fix premature release of request's reusable memory
Guchun Chen guchun.chen@amd.com drm/ttm: check null pointer before accessing when swapping
Aleksa Sarai cyphar@cyphar.com open: make RESOLVE_CACHED correctly test for O_TMPFILE
Mark Brown broonie@kernel.org arm64/ptrace: Don't enable SVE when setting streaming SVE
Mark Brown broonie@kernel.org arm64/ptrace: Flush FP state when setting ZT0
Mark Brown broonie@kernel.org arm64/fpsimd: Sync FPSIMD state with SVE for SME only systems
Mark Brown broonie@kernel.org arm64/fpsimd: Clear SME state in the target task when setting the VL
Mark Brown broonie@kernel.org arm64/fpsimd: Sync and zero pad FPSIMD state for streaming SVE
Mike Rapoport (IBM) rppt@kernel.org parisc/mm: preallocate fixmap page tables at init
Naveen N Rao naveen@kernel.org powerpc/ftrace: Create a dummy stackframe to fix stack unwind
Paulo Alcantara pc@manguebit.com smb: client: fix dfs link mount against w2k8
Jiri Olsa jolsa@kernel.org bpf: Disable preemption in bpf_event_output
Ilya Dryomov idryomov@gmail.com rbd: prevent busy loop when requesting exclusive lock
Michael Kelley mikelley@microsoft.com x86/hyperv: Disable IBT when hypercall page lacks ENDBR instruction
Paul Fertser fercerpav@gmail.com wifi: mt76: mt7615: do not advertise 5 GHz on first phy of MT7615D (DBDC)
Laszlo Ersek lersek@redhat.com net: tap_open(): set sk_uid from current_fsuid()
Laszlo Ersek lersek@redhat.com net: tun_chr_open(): set sk_uid from current_fsuid()
Dinh Nguyen dinguyen@kernel.org arm64: dts: stratix10: fix incorrect I2C property for SCL signal
Jiri Olsa jolsa@kernel.org bpf: Disable preemption in bpf_perf_event_output
Song Shuai suagrfillet@gmail.com riscv: Export va_kernel_pa_offset in vmcoreinfo
Arseniy Krasnov AVKrasnov@sberdevices.ru mtd: rawnand: meson: fix OOB available bytes for ECC
Olivier Maignial olivier.maignial@hotmail.fr mtd: spinand: winbond: Fix ecc_get_status
Olivier Maignial olivier.maignial@hotmail.fr mtd: spinand: toshiba: Fix ecc_get_status
Sungjong Seo sj1557.seo@samsung.com exfat: release s_lock before calling dir_emit()
Namjae Jeon linkinjeon@kernel.org exfat: check if filename entries exceeds max filename length
gaoming gaoming20@hihonor.com exfat: use kvmalloc_array/kvfree instead of kmalloc_array/kfree
Krzysztof Kozlowski krzysztof.kozlowski@linaro.org firmware: arm_scmi: Drop OF node reference in the transport channel setup
Xiubo Li xiubli@redhat.com ceph: defer stopping mdsc delayed_work
Ross Maynard bids.7405@bigpond.com USB: zaurus: Add ID for A-300/B-500/C-700
Ilya Dryomov idryomov@gmail.com libceph: fix potential hang in ceph_osdc_notify()
Song Shuai suagrfillet@gmail.com Documentation: kdump: Add va_kernel_pa_offset for RISCV64
Michael Kelley mikelley@microsoft.com scsi: storvsc: Limit max_sectors for virtual Fibre Channel devices
Steffen Maier maier@linux.ibm.com scsi: zfcp: Defer fc_rport blocking until after ADISC response
Boqun Feng boqun.feng@gmail.com rust: allocator: Prevent mis-aligned allocation
Stefano Garzarella sgarzare@redhat.com test/vsock: remove vsock_perf executable on `make clean`
Eric Dumazet edumazet@google.com tcp_metrics: fix data-race in tcpm_suck_dst() vs fastopen
Eric Dumazet edumazet@google.com tcp_metrics: annotate data-races around tm->tcpm_net
Eric Dumazet edumazet@google.com tcp_metrics: annotate data-races around tm->tcpm_vals[]
Eric Dumazet edumazet@google.com tcp_metrics: annotate data-races around tm->tcpm_lock
Eric Dumazet edumazet@google.com tcp_metrics: annotate data-races around tm->tcpm_stamp
Eric Dumazet edumazet@google.com tcp_metrics: fix addr_same() helper
Jonas Gorski jonas.gorski@bisdn.de prestera: fix fallback to previous version on same major version
Leon Romanovsky leon@kernel.org net/mlx5e: Set proper IPsec source port in L4 selector
Jianbo Liu jianbol@nvidia.com net/mlx5: fs_core: Skip the FTs in the same FS_TYPE_PRIO_CHAINS fs_prio
Jianbo Liu jianbol@nvidia.com net/mlx5: fs_core: Make find_closest_ft more generic
Benjamin Poirier bpoirier@nvidia.com vxlan: Fix nexthop hash size
Yue Haibing yuehaibing@huawei.com ip6mr: Fix skb_under_panic in ip6mr_cache_report()
Alexandra Winter wintera@linux.ibm.com s390/qeth: Don't call dev_close/dev_open (DOWN/UP)
Lin Ma linma@zju.edu.cn net: dcb: choose correct policy to parse DCB_ATTR_BCN
Michael Chan michael.chan@broadcom.com bnxt_en: Fix max_mtu setting for multi-buf XDP
Somnath Kotur somnath.kotur@broadcom.com bnxt_en: Fix page pool logic for page size >= 64K
Kuniyuki Iwashima kuniyu@amazon.com selftest: net: Assert on a proper value in so_incoming_cpu.c.
Mark Brown broonie@kernel.org net: netsec: Ignore 'phy-mode' on SynQuacer in DT mode
Yuanjun Gong ruc_gongyuanjun@163.com net: korina: handle clk prepare error in korina_probe()
Dan Carpenter dan.carpenter@linaro.org net: ll_temac: fix error checking of irq_of_parse_and_map()
Tomas Glozar tglozar@redhat.com bpf: sockmap: Remove preempt_disable in sock_map_sk_acquire
valis sec@valis.email net/sched: cls_route: No longer copy tcf_result on update to avoid use-after-free
valis sec@valis.email net/sched: cls_fw: No longer copy tcf_result on update to avoid use-after-free
valis sec@valis.email net/sched: cls_u32: No longer copy tcf_result on update to avoid use-after-free
Hou Tao houtao1@huawei.com bpf, cpumap: Handle skb as well when clean up ptr_ring
Hou Tao houtao1@huawei.com bpf, cpumap: Make sure kthread is running before map update returns
Andrii Nakryiko andrii@kernel.org bpf: Centralize permissions checks for all BPF map types
Andrii Nakryiko andrii@kernel.org bpf: Inline map creation logic in map_create() function
Andrii Nakryiko andrii@kernel.org bpf: Move unprivileged checks into map_create() and bpf_prog_load()
Michal Schmidt mschmidt@redhat.com octeon_ep: initialize mbox mutexes
Jakub Kicinski kuba@kernel.org bnxt: don't handle XDP in netpoll
Rafal Rogalski rafalx.rogalski@intel.com ice: Fix RDMA VSI removal during queue rebuild
Duoming Zhou duoming@zju.edu.cn net: usb: lan78xx: reorder cleanup operations to avoid UAF bugs
Kuniyuki Iwashima kuniyu@amazon.com net/sched: taprio: Limit TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME to INT_MAX.
Eric Dumazet edumazet@google.com net: annotate data-races around sk->sk_priority
Eric Dumazet edumazet@google.com net: add missing data-race annotation for sk_ll_usec
Eric Dumazet edumazet@google.com net: add missing data-race annotations around sk->sk_peek_off
Eric Dumazet edumazet@google.com net: annotate data-races around sk->sk_mark
Eric Dumazet edumazet@google.com net: add missing READ_ONCE(sk->sk_rcvbuf) annotation
Eric Dumazet edumazet@google.com net: add missing READ_ONCE(sk->sk_sndbuf) annotation
Eric Dumazet edumazet@google.com net: add missing READ_ONCE(sk->sk_rcvlowat) annotation
Eric Dumazet edumazet@google.com net: annotate data-races around sk->sk_max_pacing_rate
Eric Dumazet edumazet@google.com net: annotate data-race around sk->sk_txrehash
Eric Dumazet edumazet@google.com net: annotate data-races around sk->sk_reserved_mem
Richard Gobert richardbgobert@gmail.com net: gro: fix misuse of CB in udp socket lookup
Eric Dumazet edumazet@google.com net: move gso declarations and functions to their own files
Konstantin Khorenko khorenko@virtuozzo.com qed: Fix scheduling in a tasklet while getting stats
Thierry Reding treding@nvidia.com net: stmmac: tegra: Properly allocate clock bulk data
Chengfeng Ye dg573847474@gmail.com mISDN: hfcpci: Fix potential deadlock on &hc->lock
Jamal Hadi Salim jhs@mojatatu.com net: sched: cls_u32: Fix match key mis-addressing
Georg Müller georgmueller@gmx.net perf test uprobe_from_different_cu: Skip if there is no gcc
Yuanjun Gong ruc_gongyuanjun@163.com net: dsa: fix value check in bcm_sf2_sw_probe()
Lin Ma linma@zju.edu.cn rtnetlink: let rtnl_bridge_setlink checks IFLA_BRIDGE_MODE length
Lin Ma linma@zju.edu.cn bpf: Add length check for SK_DIAG_BPF_STORAGE_REQ_MAP_FD parsing
Shay Drory shayd@nvidia.com net/mlx5: Unregister devlink params in case interface is down
Chris Mi cmi@nvidia.com net/mlx5: fs_chains: Fix ft prio if ignore_flow_level is not supported
Jianbo Liu jianbol@nvidia.com net/mlx5e: kTLS, Fix protection domain in use syndrome when devlink reload
Dragos Tatulea dtatulea@nvidia.com net/mlx5e: xsk: Fix crash on regular rq reactivation
Dragos Tatulea dtatulea@nvidia.com net/mlx5e: xsk: Fix invalid buffer access for legacy rq
Jianbo Liu jianbol@nvidia.com net/mlx5e: Move representor neigh cleanup to profile cleanup_tx
Amir Tzin amirtz@nvidia.com net/mlx5e: Fix crash moving to switchdev mode when ntuple offload is set
Chris Mi cmi@nvidia.com net/mlx5e: Don't hold encap tbl lock if there is no encap action
Shay Drory shayd@nvidia.com net/mlx5: Honor user input for migratable port fn attr
Yuanjun Gong ruc_gongyuanjun@163.com net/mlx5e: fix return value check in mlx5e_ipsec_remove_trailer()
Zhengchao Shao shaozhengchao@huawei.com net/mlx5: fix potential memory leak in mlx5e_init_rep_rx
Zhengchao Shao shaozhengchao@huawei.com net/mlx5: DR, fix memory leak in mlx5dr_cmd_create_reformat_ctx
Zhengchao Shao shaozhengchao@huawei.com net/mlx5e: fix double free in macsec_fs_tx_create_crypto_table_groups
Ilan Peer ilan.peer@intel.com wifi: cfg80211: Fix return value in scan logic
Haixin Yu yuhaixin.yhx@linux.alibaba.com perf pmu arm64: Fix reading the PMU cpu slots in sysfs
Gao Xiang xiang@kernel.org erofs: fix wrong primary bvec selection on deduplicated extents
Heiko Carstens hca@linux.ibm.com KVM: s390: fix sthyi error handling
Sven Schnelle svens@linux.ibm.com s390/vmem: split pages when debug pagealloc is enabled
ndesaulniers@google.com ndesaulniers@google.com word-at-a-time: use the same return type for has_zero regardless of endianness
Durai Manickam KR durai.manickamkr@microchip.com ARM: dts: at91: sam9x60: fix the SOC detection
Claudiu Beznea claudiu.beznea@microchip.com ARM: dts: at91: use generic name for shutdown controller
Claudiu Beznea claudiu.beznea@microchip.com ARM: dts: at91: use clock-controller name for sckc nodes
Claudiu Beznea claudiu.beznea@microchip.com ARM: dts: at91: use clock-controller name for PMC nodes
Cristian Marussi cristian.marussi@arm.com firmware: arm_scmi: Fix chan_free cleanup on SMC
Lucas Stach l.stach@pengutronix.de soc: imx: imx8mp-blk-ctrl: register HSIO PLL clock as bus_power_dev child
Dmitry Baryshkov dmitry.baryshkov@linaro.org ARM: dts: nxp/imx: limit sk-imx53 supported frequencies
Yury Norov yury.norov@gmail.com lib/bitmap: workaround const_eval test build failure
Sukrut Bellary sukrut.bellary@linux.com firmware: arm_scmi: Fix signed error return values handling
Punit Agrawal punit.agrawal@bytedance.com firmware: smccc: Fix use of uninitialised results structure
Benjamin Gaignard benjamin.gaignard@collabora.com arm64: dts: freescale: Fix VPU G2 clock
Hugo Villeneuve hvilleneuve@dimonoff.com arm64: dts: imx8mn-var-som: add missing pull-up for onboard PHY reset pinmux
Yashwanth Varakala y.varakala@phytec.de arm64: dts: phycore-imx8mm: Correction in gpio-line-names
Yashwanth Varakala y.varakala@phytec.de arm64: dts: phycore-imx8mm: Label typo-fix of VPU
Tim Harvey tharvey@gateworks.com arm64: dts: imx8mm-venice-gw7904: disable disp_blk_ctrl
Tim Harvey tharvey@gateworks.com arm64: dts: imx8mm-venice-gw7903: disable disp_blk_ctrl
Robin Murphy robin.murphy@arm.com iommu/arm-smmu-v3: Document nesting-related errata
Robin Murphy robin.murphy@arm.com iommu/arm-smmu-v3: Add explicit feature for nesting
Robin Murphy robin.murphy@arm.com iommu/arm-smmu-v3: Document MMU-700 erratum 2812531
Robin Murphy robin.murphy@arm.com iommu/arm-smmu-v3: Work around MMU-600 erratum 1076982
Jann Horn jannh@google.com mm: lock_vma_under_rcu() must check vma->anon_vma under vma lock
Diffstat:
Documentation/admin-guide/kdump/vmcoreinfo.rst | 6 + Documentation/arm64/silicon-errata.rst | 4 + Makefile | 4 +- arch/arm/boot/dts/at91-qil_a9260.dts | 2 +- arch/arm/boot/dts/at91-sama5d27_som1_ek.dts | 2 +- arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts | 2 +- arch/arm/boot/dts/at91-sama5d2_xplained.dts | 2 +- arch/arm/boot/dts/at91rm9200.dtsi | 2 +- arch/arm/boot/dts/at91sam9260.dtsi | 4 +- arch/arm/boot/dts/at91sam9260ek.dts | 2 +- arch/arm/boot/dts/at91sam9261.dtsi | 4 +- arch/arm/boot/dts/at91sam9263.dtsi | 4 +- arch/arm/boot/dts/at91sam9g20.dtsi | 2 +- arch/arm/boot/dts/at91sam9g20ek_common.dtsi | 2 +- arch/arm/boot/dts/at91sam9g25.dtsi | 2 +- arch/arm/boot/dts/at91sam9g35.dtsi | 2 +- arch/arm/boot/dts/at91sam9g45.dtsi | 6 +- arch/arm/boot/dts/at91sam9n12.dtsi | 4 +- arch/arm/boot/dts/at91sam9rl.dtsi | 6 +- arch/arm/boot/dts/at91sam9x25.dtsi | 2 +- arch/arm/boot/dts/at91sam9x35.dtsi | 2 +- arch/arm/boot/dts/at91sam9x5.dtsi | 6 +- arch/arm/boot/dts/imx53-sk-imx53.dts | 10 + arch/arm/boot/dts/imx6sll.dtsi | 2 +- arch/arm/boot/dts/sam9x60.dtsi | 32 +-- arch/arm/boot/dts/sama5d2.dtsi | 6 +- arch/arm/boot/dts/sama5d3.dtsi | 6 +- arch/arm/boot/dts/sama5d3_emac.dtsi | 2 +- arch/arm/boot/dts/sama5d4.dtsi | 6 +- arch/arm/boot/dts/sama7g5.dtsi | 4 +- arch/arm/boot/dts/usb_a9260.dts | 2 +- arch/arm/boot/dts/usb_a9263.dts | 2 +- .../boot/dts/altera/socfpga_stratix10_socdk.dts | 2 +- .../dts/altera/socfpga_stratix10_socdk_nand.dts | 2 +- .../dts/freescale/imx8mm-phyboard-polis-rdk.dts | 2 +- .../boot/dts/freescale/imx8mm-phycore-som.dtsi | 4 +- .../boot/dts/freescale/imx8mm-venice-gw7903.dts | 4 + .../boot/dts/freescale/imx8mm-venice-gw7904.dts | 4 + arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi | 2 +- arch/arm64/boot/dts/freescale/imx8mq.dtsi | 2 +- arch/arm64/kernel/fpsimd.c | 9 +- arch/arm64/kernel/ptrace.c | 10 +- arch/parisc/mm/fixmap.c | 3 - arch/parisc/mm/init.c | 34 +++ arch/powerpc/include/asm/word-at-a-time.h | 2 +- arch/powerpc/kernel/trace/ftrace_mprofile.S | 9 +- arch/powerpc/mm/init_64.c | 3 +- arch/riscv/kernel/crash_core.c | 2 + arch/s390/kernel/sthyi.c | 6 +- arch/s390/kvm/intercept.c | 9 +- arch/s390/mm/vmem.c | 2 + arch/x86/hyperv/hv_init.c | 21 ++ drivers/block/rbd.c | 28 ++- drivers/clk/imx/clk-imx93.c | 2 +- drivers/clk/mediatek/clk-mt8183.c | 27 ++ drivers/firmware/arm_scmi/mailbox.c | 4 +- drivers/firmware/arm_scmi/raw_mode.c | 5 +- drivers/firmware/arm_scmi/smc.c | 21 +- drivers/firmware/smccc/soc_id.c | 5 +- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 35 +-- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h | 3 +- drivers/gpu/drm/i915/gt/gen8_engine_cs.c | 178 ++++++++++---- drivers/gpu/drm/i915/gt/gen8_engine_cs.h | 21 +- drivers/gpu/drm/i915/gt/intel_gpu_commands.h | 2 + drivers/gpu/drm/i915/gt/intel_gt_regs.h | 16 +- drivers/gpu/drm/i915/gt/intel_lrc.c | 17 +- drivers/gpu/drm/i915/i915_active.c | 99 +++++--- drivers/gpu/drm/i915/i915_request.c | 11 + drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c | 2 +- drivers/gpu/drm/ttm/ttm_bo.c | 3 +- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 50 ++++ drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 8 + drivers/isdn/hardware/mISDN/hfcpci.c | 10 +- drivers/mtd/nand/raw/fsl_upm.c | 2 +- drivers/mtd/nand/raw/meson_nand.c | 3 +- drivers/mtd/nand/raw/omap_elm.c | 24 +- drivers/mtd/nand/raw/rockchip-nand-controller.c | 45 ++-- drivers/mtd/nand/spi/toshiba.c | 4 +- drivers/mtd/nand/spi/winbond.c | 4 +- drivers/mtd/spi-nor/spansion.c | 4 +- drivers/net/dsa/bcm_sf2.c | 8 +- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 85 ++++--- 
drivers/net/ethernet/broadcom/bnxt/bnxt.h | 2 +- drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c | 14 +- drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h | 2 +- drivers/net/ethernet/broadcom/tg3.c | 1 + drivers/net/ethernet/intel/ice/ice_main.c | 18 ++ drivers/net/ethernet/korina.c | 3 +- .../ethernet/marvell/octeon_ep/octep_ctrl_mbox.c | 3 + .../net/ethernet/marvell/prestera/prestera_pci.c | 3 +- .../ethernet/mellanox/mlx5/core/en/tc_tun_encap.c | 3 - .../net/ethernet/mellanox/mlx5/core/en/xsk/rx.c | 5 +- .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 4 +- .../mellanox/mlx5/core/en_accel/ipsec_rxtx.c | 4 +- .../ethernet/mellanox/mlx5/core/en_accel/ktls.c | 8 - .../ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c | 29 ++- .../mellanox/mlx5/core/en_accel/macsec_fs.c | 1 + drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c | 10 + drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 29 ++- drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | 20 +- drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 21 +- .../ethernet/mellanox/mlx5/core/eswitch_offloads.c | 3 +- drivers/net/ethernet/mellanox/mlx5/core/fs_core.c | 105 ++++++-- .../ethernet/mellanox/mlx5/core/lib/fs_chains.c | 2 +- drivers/net/ethernet/mellanox/mlx5/core/main.c | 1 + .../ethernet/mellanox/mlx5/core/steering/dr_cmd.c | 5 +- drivers/net/ethernet/myricom/myri10ge/myri10ge.c | 1 + drivers/net/ethernet/qlogic/qed/qed_dev_api.h | 16 ++ drivers/net/ethernet/qlogic/qed/qed_fcoe.c | 19 +- drivers/net/ethernet/qlogic/qed/qed_fcoe.h | 17 +- drivers/net/ethernet/qlogic/qed/qed_hw.c | 26 +- drivers/net/ethernet/qlogic/qed/qed_iscsi.c | 19 +- drivers/net/ethernet/qlogic/qed/qed_iscsi.h | 8 +- drivers/net/ethernet/qlogic/qed/qed_l2.c | 19 +- drivers/net/ethernet/qlogic/qed/qed_l2.h | 24 ++ drivers/net/ethernet/qlogic/qed/qed_main.c | 6 +- drivers/net/ethernet/sfc/siena/tx_common.c | 1 + drivers/net/ethernet/sfc/tx_common.c | 1 + drivers/net/ethernet/socionext/netsec.c | 11 + drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c | 3 +- drivers/net/ethernet/sun/sunvnet_common.c | 1 + drivers/net/ethernet/xilinx/ll_temac_main.c | 12 +- drivers/net/tap.c | 3 +- drivers/net/tun.c | 2 +- drivers/net/usb/cdc_ether.c | 21 ++ drivers/net/usb/lan78xx.c | 7 +- drivers/net/usb/r8152.c | 1 + drivers/net/usb/usbnet.c | 6 + drivers/net/usb/zaurus.c | 21 ++ drivers/net/wireguard/device.c | 1 + drivers/net/wireless/intel/iwlwifi/mvm/tx.c | 1 + drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c | 6 +- drivers/s390/net/qeth_core.h | 1 - drivers/s390/net/qeth_core_main.c | 2 - drivers/s390/net/qeth_l2_main.c | 9 +- drivers/s390/net/qeth_l3_main.c | 8 +- drivers/s390/scsi/zfcp_fc.c | 6 +- drivers/scsi/storvsc_drv.c | 4 + drivers/soc/imx/imx8mp-blk-ctrl.c | 2 +- fs/btrfs/block-group.c | 51 ++-- fs/btrfs/block-group.h | 4 +- fs/btrfs/free-space-tree.c | 24 +- fs/ceph/mds_client.c | 4 +- fs/ceph/mds_client.h | 5 + fs/ceph/super.c | 10 + fs/erofs/zdata.c | 7 +- fs/exfat/balloc.c | 6 +- fs/exfat/dir.c | 36 +-- fs/ext2/ext2.h | 12 - fs/ext2/super.c | 23 +- fs/f2fs/f2fs.h | 1 - fs/f2fs/file.c | 5 - fs/f2fs/node.c | 14 +- fs/file.c | 18 +- fs/ntfs3/attrlist.c | 4 +- fs/open.c | 2 +- fs/smb/client/dfs.c | 6 +- fs/super.c | 11 +- fs/sysv/itree.c | 4 + include/asm-generic/word-at-a-time.h | 2 +- include/linux/f2fs_fs.h | 1 + include/linux/netdevice.h | 26 +- include/linux/skbuff.h | 71 ------ include/linux/spi/spi-mem.h | 4 + include/net/gro.h | 44 ++++ include/net/gso.h | 109 ++++++++ include/net/inet_sock.h | 7 +- include/net/ip.h | 2 +- include/net/route.h | 4 +- include/net/udp.h | 1 + 
include/net/vxlan.h | 4 +- io_uring/timeout.c | 2 +- kernel/bpf/bloom_filter.c | 3 - kernel/bpf/bpf_local_storage.c | 3 - kernel/bpf/bpf_struct_ops.c | 3 - kernel/bpf/cpumap.c | 39 +-- kernel/bpf/devmap.c | 3 - kernel/bpf/hashtab.c | 6 - kernel/bpf/lpm_trie.c | 3 - kernel/bpf/queue_stack_maps.c | 4 - kernel/bpf/reuseport_array.c | 3 - kernel/bpf/stackmap.c | 3 - kernel/bpf/syscall.c | 136 ++++++---- kernel/trace/bpf_trace.c | 17 +- lib/Makefile | 6 + lib/debugobjects.c | 9 + lib/test_bitmap.c | 8 +- mm/filemap.c | 26 +- mm/gup.c | 2 +- mm/kasan/generic.c | 4 +- mm/kasan/tags.c | 2 +- mm/kmsan/core.c | 6 +- mm/kmsan/instrumentation.c | 2 +- mm/memcontrol.c | 19 +- mm/memory.c | 28 ++- net/bluetooth/l2cap_sock.c | 2 + net/can/raw.c | 2 +- net/ceph/osd_client.c | 20 +- net/core/Makefile | 2 +- net/core/bpf_sk_storage.c | 5 +- net/core/dev.c | 70 +----- net/core/gro.c | 59 +---- net/core/gso.c | 273 +++++++++++++++++++++ net/core/rtnetlink.c | 8 +- net/core/skbuff.c | 142 +---------- net/core/sock.c | 45 ++-- net/core/sock_map.c | 6 - net/dcb/dcbnl.c | 2 +- net/dccp/ipv6.c | 4 +- net/ipv4/af_inet.c | 1 + net/ipv4/esp4_offload.c | 1 + net/ipv4/gre_offload.c | 1 + net/ipv4/inet_diag.c | 4 +- net/ipv4/ip_output.c | 9 +- net/ipv4/ip_sockglue.c | 2 +- net/ipv4/raw.c | 2 +- net/ipv4/route.c | 4 +- net/ipv4/tcp_ipv4.c | 4 +- net/ipv4/tcp_metrics.c | 70 ++++-- net/ipv4/tcp_offload.c | 1 + net/ipv4/udp.c | 9 +- net/ipv4/udp_offload.c | 8 +- net/ipv6/esp6_offload.c | 1 + net/ipv6/ip6_offload.c | 1 + net/ipv6/ip6_output.c | 1 + net/ipv6/ip6mr.c | 2 +- net/ipv6/ping.c | 2 +- net/ipv6/raw.c | 6 +- net/ipv6/route.c | 7 +- net/ipv6/tcp_ipv6.c | 9 +- net/ipv6/udp.c | 12 +- net/ipv6/udp_offload.c | 8 +- net/l2tp/l2tp_ip6.c | 2 +- net/mac80211/tx.c | 1 + net/mpls/af_mpls.c | 1 + net/mpls/mpls_gso.c | 1 + net/mptcp/sockopt.c | 2 +- net/netfilter/nf_flow_table_ip.c | 1 + net/netfilter/nfnetlink_queue.c | 1 + net/netfilter/nft_socket.c | 2 +- net/netfilter/xt_socket.c | 4 +- net/nsh/nsh.c | 1 + net/openvswitch/actions.c | 1 + net/openvswitch/datapath.c | 1 + net/packet/af_packet.c | 12 +- net/sched/act_police.c | 1 + net/sched/cls_fw.c | 1 - net/sched/cls_route.c | 1 - net/sched/cls_u32.c | 57 ++++- net/sched/sch_cake.c | 1 + net/sched/sch_netem.c | 1 + net/sched/sch_taprio.c | 16 +- net/sched/sch_tbf.c | 1 + net/sctp/offload.c | 1 + net/smc/af_smc.c | 2 +- net/unix/af_unix.c | 2 +- net/wireless/scan.c | 2 +- net/xdp/xsk.c | 2 +- net/xdp/xskmap.c | 4 - net/xfrm/xfrm_device.c | 1 + net/xfrm/xfrm_interface_core.c | 1 + net/xfrm/xfrm_output.c | 1 + net/xfrm/xfrm_policy.c | 2 +- rust/bindings/bindings_helper.h | 1 + rust/kernel/allocator.rs | 74 ++++-- tools/perf/arch/arm64/util/pmu.c | 7 +- .../tests/shell/test_uprobe_from_different_cu.sh | 8 +- .../selftests/bpf/prog_tests/unpriv_bpf_disabled.c | 6 +- tools/testing/selftests/net/so_incoming_cpu.c | 2 +- tools/testing/selftests/rseq/rseq.c | 28 ++- .../tc-testing/tc-tests/qdiscs/taprio.json | 25 ++ tools/testing/vsock/Makefile | 2 +- 272 files changed, 2293 insertions(+), 1234 deletions(-)
On Wed, Aug 09, 2023 at 12:38:51PM +0200, Greg Kroah-Hartman wrote:
Tested rc1 against the Fedora build system (aarch64, ppc64le, s390x, x86_64), and boot tested x86_64. No regressions noted.
Tested-by: Justin M. Forbes jforbes@fedoraproject.org
Hello,
On 2023-08-09T12:38:51+02:00 Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This rc kernel passes DAMON functionality test[1] on my test machine. Attaching the test results summary below. Please note that I retrieved the kernel from linux-stable-rc tree[2].
Tested-by: SeongJae Park sj@kernel.org
[1] https://github.com/awslabs/damon-tests/tree/next/corr [2] 80226b49c77f ("Linux 6.4.10-rc1")
Thanks, SJ
[...]
---
ok 1 selftests: damon: debugfs_attrs.sh
ok 2 selftests: damon: debugfs_schemes.sh
ok 3 selftests: damon: debugfs_target_ids.sh
ok 4 selftests: damon: debugfs_empty_targets.sh
ok 5 selftests: damon: debugfs_huge_count_read_write.sh
ok 6 selftests: damon: debugfs_duplicate_context_creation.sh
ok 7 selftests: damon: debugfs_rm_non_contexts.sh
ok 8 selftests: damon: sysfs.sh
ok 9 selftests: damon: sysfs_update_removed_scheme_dir.sh
ok 10 selftests: damon: reclaim.sh
ok 11 selftests: damon: lru_sort.sh
ok 1 selftests: damon-tests: kunit.sh
ok 2 selftests: damon-tests: huge_count_read_write.sh
ok 3 selftests: damon-tests: buffer_overflow.sh
ok 4 selftests: damon-tests: rm_contexts.sh
ok 5 selftests: damon-tests: record_null_deref.sh
ok 6 selftests: damon-tests: dbgfs_target_ids_read_before_terminate_race.sh
ok 7 selftests: damon-tests: dbgfs_target_ids_pid_leak.sh
ok 8 selftests: damon-tests: damo_tests.sh
ok 9 selftests: damon-tests: masim-record.sh
ok 10 selftests: damon-tests: build_i386.sh
ok 11 selftests: damon-tests: build_m68k.sh
ok 12 selftests: damon-tests: build_arm64.sh
ok 13 selftests: damon-tests: build_i386_idle_flag.sh
ok 14 selftests: damon-tests: build_i386_highpte.sh
ok 15 selftests: damon-tests: build_nomemcg.sh
PASS
On 8/9/23 3:38 AM, Greg Kroah-Hartman wrote:
Built and booted successfully on RISC-V RV64 (HiFive Unmatched).
Tested-by: Ron Economos re@w6rz.net
On 8/9/23 03:38, Greg Kroah-Hartman wrote:
Tested on ARCH_BRCMSTB using 32-bit and 64-bit ARM kernels, and build tested on BMIPS_GENERIC:
Tested-by: Florian Fainelli florian.fainelli@broadcom.com
On Wed, Aug 09, 2023 at 12:38:51PM +0200, Greg Kroah-Hartman wrote:
Successfully compiled and installed bindeb-pkgs on my computer (Acer Aspire E15, Intel Core i3 Haswell). No noticeable regressions.
Tested-by: Bagas Sanjaya bagasdotme@gmail.com
On Wed, Aug 09, 2023 at 12:38:51PM +0200, Greg Kroah-Hartman wrote:
Tested-by: Conor Dooley conor.dooley@microchip.com
Thanks, Conor.
On Wed, Aug 09, 2023 at 12:38:51PM +0200, Greg Kroah-Hartman wrote:
Build results:
	total: 157 pass: 157 fail: 0
Qemu test results:
	total: 522 pass: 522 fail: 0
Tested-by: Guenter Roeck linux@roeck-us.net
Guenter
On Wed, Aug 09, 2023 at 12:38:51PM +0200, Greg Kroah-Hartman wrote:
For Rust, build tested and booted to see the minimal and printing Rust samples output in the kernel log:
Tested-by: Miguel Ojeda ojeda@kernel.org # Rust
Cheers, Miguel
On Wed, 9 Aug 2023 at 16:13, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
While building Linux stable-rc 6.4 for x86_64 with clang-17, the build failed due to the following warnings / errors.
Regressions found on x86_64:
- build/clang-nightly-lkftconfig-kselftest
- build/clang-nightly-x86_64_defconfig
- build/clang-nightly-lkftconfig
- build/clang-lkftconfig
- build/clang-nightly-allmodconfig

Build errors:
-----
ld.lld: error: ./arch/x86/kernel/vmlinux.lds:193: at least one side of the expression must be absolute
make[2]: *** [scripts/Makefile.vmlinux:34: vmlinux] Error 1
Reported-by: Linux Kernel Functional Testing lkft@linaro.org
upstream report:
-----
- https://lore.kernel.org/llvm/CA+G9fYsdUeNu-gwbs0+T6XHi4hYYk=Y9725-wFhZ7gJMsp...

Proposed fix patch:
-----
[PATCH] x86/srso: fix build breakage for LD=ld.lld
- https://lore.kernel.org/lkml/20230809-gds-v1-1-eaac90b0cbcc@google.com/T/
-- Linaro LKFT https://lkft.linaro.org
On Wed, 09 Aug 2023 12:38:51 +0200, Greg Kroah-Hartman wrote:
All tests passing for Tegra ...
Test results for stable-v6.4:
     11 builds: 11 pass, 0 fail
     28 boots:  28 pass, 0 fail
    130 tests:  130 pass, 0 fail

Linux version: 6.4.10-rc1-g80226b49c77f
Boards tested: tegra124-jetson-tk1, tegra186-p2771-0000, tegra194-p2972-0000,
               tegra194-p3509-0000+p3668-0000, tegra20-ventana, tegra210-p2371-2180,
               tegra210-p3450-0000, tegra30-cardhu-a04
Tested-by: Thierry Reding treding@nvidia.com