This is the start of the stable review cycle for the 5.16.12 release. There are 164 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 02 Mar 2022 17:20:16 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.16.12-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.16.y and the diffstat can be found below.
thanks,
greg k-h
------------- Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 5.16.12-rc1
Miaohe Lin linmiaohe@huawei.com memblock: use kfree() to release kmalloced memblock regions
Marc Zyngier maz@kernel.org gpio: tegra186: Fix chip_data type confusion
Sean Anderson seanga2@gmail.com pinctrl: k210: Fix bias-pull-up
Dan Carpenter dan.carpenter@oracle.com pinctrl: fix loop in k210_pinconf_get_drive()
daniel.starke@siemens.com daniel.starke@siemens.com tty: n_gsm: fix deadlock in gsmtty_open()
daniel.starke@siemens.com daniel.starke@siemens.com tty: n_gsm: fix wrong modem processing in convergence layer type 2
daniel.starke@siemens.com daniel.starke@siemens.com tty: n_gsm: fix wrong tty control line for flow control
daniel.starke@siemens.com daniel.starke@siemens.com tty: n_gsm: fix NULL pointer access due to DLCI release
daniel.starke@siemens.com daniel.starke@siemens.com tty: n_gsm: fix proper link termination after failed open
daniel.starke@siemens.com daniel.starke@siemens.com tty: n_gsm: fix encoding of command/response bit
daniel.starke@siemens.com daniel.starke@siemens.com tty: n_gsm: fix encoding of control signal octet bit DV
Liu Yuntao liuyuntao10@huawei.com hugetlbfs: fix a truncation issue in hugepages parameter
Aneesh Kumar K.V aneesh.kumar@linux.ibm.com mm/hugetlb: fix kernel crash with hugetlb mremap
Changbin Du changbin.du@intel.com riscv: fix oops caused by irqsoff latency tracer
Damien Le Moal damien.lemoal@opensource.wdc.com riscv: fix nommu_k210_sdcard_defconfig
Mike Marciniszyn mike.marciniszyn@cornelisnetworks.com IB/qib: Fix duplicate sysfs directory name
Jens Axboe axboe@kernel.dk tps6598x: clear int mask on probe failure
Oliver Graute oliver.graute@kococonnector.com staging: fbtft: fb_st7789v: reset display before initialization
Chuansheng Liu chuansheng.liu@intel.com thermal: int340x: fix memory leak in int3400_notify()
Jason Gunthorpe jgg@ziepe.ca RDMA/cma: Do not change route.addr.src_addr outside state checks
Qu Wenruo wqu@suse.com btrfs: reduce extent threshold for autodefrag
Qu Wenruo wqu@suse.com btrfs: autodefrag: only scan one inode once
Qu Wenruo wqu@suse.com btrfs: defrag: allow defrag_one_cluster() to skip large extent which is not a target
Dāvis Mosāns davispuh@gmail.com btrfs: prevent copying too big compressed lzo segment
Qu Wenruo wqu@suse.com btrfs: defrag: remove an ambiguous condition for rejection
Qu Wenruo wqu@suse.com btrfs: defrag: don't defrag extents which are already at max capacity
Qu Wenruo wqu@suse.com btrfs: defrag: don't try to merge regular extents with preallocated extents
Mårten Lindahl marten.lindahl@axis.com driver core: Free DMA range map when device is released
Christophe Kerello christophe.kerello@foss.st.com mtd: core: Fix a conflict between MTD and NVMEM on wp-gpios property
Christophe Kerello christophe.kerello@foss.st.com nvmem: core: Fix a conflict between MTD and NVMEM on wp-gpios property
Hongyu Xie xiehongyu1@kylinos.cn xhci: Prevent futile URB re-submissions due to incorrect return value.
Puma Hsu pumahsu@google.com xhci: re-initialize the HC during resume if HCE was set
Sebastian Andrzej Siewior bigeasy@linutronix.de usb: dwc3: gadget: Let the interrupt handler disable bottom halves.
Hans de Goede hdegoede@redhat.com usb: dwc3: pci: Fix Bay Trail phy GPIO mappings
Hans de Goede hdegoede@redhat.com usb: dwc3: pci: Add "snps,dis_u2_susphy_quirk" for Intel Bay Trail
Fabrice Gasnier fabrice.gasnier@foss.st.com usb: dwc2: drd: fix soft connect when gadget is unconfigured
Daniele Palmas dnlplm@gmail.com USB: serial: option: add Telit LE910R1 compositions
Slark Xiao slark_xiao@163.com USB: serial: option: add support for DW5829e
Steven Rostedt (Google) rostedt@goodmis.org tracefs: Set the group ownership in apply_options() not parse_options()
Szymon Heidrich szymon.heidrich@gmail.com USB: gadget: validate endpoint index for xilinx udc
Daehwan Jung dh10.jung@samsung.com usb: gadget: rndis: add spinlock for rndis response list
Dmytro Bagrii dimich.dmb@gmail.com Revert "USB: serial: ch341: add new Product ID for CH341A"
Sergey Shtylyov s.shtylyov@omp.ru ata: pata_hpt37x: disable primary channel on HPT371
Phil Elwell phil@raspberrypi.com sc16is7xx: Fix for incorrect data being transmitted
Miaoqian Lin linmq006@gmail.com iio: Fix error handling for PM
Lorenzo Bianconi lorenzo@kernel.org iio: imu: st_lsm6dsx: wait for settling time in st_lsm6dsx_read_oneshot
Sean Nyekjaer sean@geanix.com iio: accel: fxls8962af: add padding to regmap for SPI
Cosmin Tanislav demonsingur@gmail.com iio: adc: ad7124: fix mask used for setting AIN_BUFP & AIN_BUFM bits
Oleksij Rempel linux@rempel-privat.de iio: adc: tsc2046: fix memory corruption by preventing array overflow
Christophe JAILLET christophe.jaillet@wanadoo.fr iio: adc: men_z188_adc: Fix a resource leak in an error handling path
Nuno Sá nuno.sa@analog.com iio:imu:adis16480: fix buffering for devices with no burst mode
Steven Rostedt (Google) rostedt@goodmis.org tracing: Have traceon and traceoff trigger honor the instance
Daniel Bristot de Oliveira bristot@kernel.org tracing: Dump stacktrace trigger to the corresponding instance
Kumar Kartikeya Dwivedi memxor@gmail.com bpf: Fix crash due to out of bounds access into reg2btf_ids.
Kumar Kartikeya Dwivedi memxor@gmail.com bpf: Extend kfunc with PTR_TO_CTX, PTR_TO_MEM argument support
Bart Van Assche bvanassche@acm.org RDMA/ib_srp: Fix a deadlock
ChenXiaoSong chenxiaosong2@huawei.com configfs: fix a race in configfs_{,un}register_subsystem()
Michael Chan michael.chan@broadcom.com bnxt_en: Increase firmware message response DMA wait time
Md Haris Iqbal haris.iqbal@ionos.com RDMA/rtrs-clt: Move free_permit from free_clt to rtrs_clt_close
Md Haris Iqbal haris.iqbal@ionos.com RDMA/rtrs-clt: Fix possible double free in error case
Eric Dumazet edumazet@google.com net-timestamp: convert sk->sk_tskey to atomic_t
Eric Dumazet edumazet@google.com net: use sk_is_tcp() in more places
Prasad Kumpatla quic_pkumpatl@quicinc.com regmap-irq: Update interrupt clear register for proper reset
Samuel Holland samuel@sholland.org gpio: rockchip: Reset int_bothedge when changing trigger
Pali Rohár pali@kernel.org PCI: mvebu: Fix device enumeration regression
Zhou Qingyang zhou1615@umn.edu spi: spi-zynq-qspi: Fix a NULL pointer dereference in zynq_qspi_exec_mem_op()
Lama Kayal lkayal@nvidia.com net/mlx5e: Add missing increment of count
Maher Sanalla msanalla@nvidia.com net/mlx5: Update log_max_qp value to be 17 at most
Yevgeny Kliteynik kliteyn@nvidia.com net/mlx5: DR, Fix slab-out-of-bounds in mlx5_cmd_dr_create_fte
Tariq Toukan tariqt@nvidia.com net/mlx5e: kTLS, Use CHECKSUM_UNNECESSARY for device-offloaded packets
Maor Dickman maord@nvidia.com net/mlx5e: MPLSoUDP decap, fix check for unsupported matches
Yevgeny Kliteynik kliteyn@nvidia.com net/mlx5: DR, Fix the threshold that defines when pool sync is initiated
Ariel Levkovich lariel@nvidia.com net/mlx5: Fix wrong limitation of metadata match on ecpf
Maor Gottlieb maorg@nvidia.com net/mlx5: Fix possible deadlock on rule deletion
Yevgeny Kliteynik kliteyn@nvidia.com net/mlx5: DR, Don't allow match on IP w/o matching on full ethertype/ip_version
Sukadev Bhattiprolu sukadev@linux.ibm.com ibmvnic: schedule failover only if vioctl fails
Yevgeny Kliteynik kliteyn@nvidia.com net/mlx5: DR, Cache STE shadow memory
Dan Carpenter dan.carpenter@oracle.com udp_tunnel: Fix end of loop test in udp_tunnel_nic_unregister()
Hans de Goede hdegoede@redhat.com surface: surface3_power: Fix battery readings on batteries without a serial number
Fabio M. De Francesco fmdefrancesco@gmail.com net/smc: Use a mutex for locking "struct smc_pnettable"
Florian Westphal fw@strlen.de netfilter: nf_tables: fix memory leak during stateful obj update
Baruch Siach baruch.siach@siklu.com net: mdio-ipq4019: add delay after clock enable
Christophe JAILLET christophe.jaillet@wanadoo.fr nfp: flower: Fix a potential leak in nfp_tunnel_add_shared_mac()
Vladimir Oltean vladimir.oltean@nxp.com net: dsa: avoid call to __dev_set_promiscuity() while rtnl_mutex isn't held
Pablo Neira Ayuso pablo@netfilter.org netfilter: nf_tables: unregister flowtable hooks on netns exit
Christophe Leroy christophe.leroy@csgroup.eu net: Force inlining of checksum functions in net/checksum.h
Xiaoke Wang xkernel.wang@foxmail.com net: ll_temac: check the return value of devm_kmalloc()
Paul Blakey paulb@nvidia.com net/sched: act_ct: Fix flow table lookup after ct clear or switching zones
Michel Dänzer mdaenzer@redhat.com drm/amd/display: For vblank_disable_immediate, check PSR is really used
Matt Roper matthew.d.roper@intel.com drm/i915/dg2: Print PHY name properly on calibration error
Maxime Ripard maxime@cerno.tech drm/vc4: crtc: Fix runtime_pm reference counting
Stefano Garzarella sgarzare@redhat.com block: clear iocb->private in blkdev_bio_end_io_async()
Roi Dayan roid@nvidia.com net/mlx5e: TC, Reject rules with drop and modify hdr action
Roi Dayan roid@nvidia.com net/mlx5e: TC, Reject rules with forward and drop actions
Gal Pressman gal@nvidia.com net/mlx5e: Fix wrong return value on ioctl EEPROM query failure
Maxime Ripard maxime@cerno.tech drm/edid: Always set RGB444
Paul Blakey paulb@nvidia.com openvswitch: Fix setting ipv6 fields causing hw csum failure
Mauri Sandberg maukka@ext.kapsi.fi net: mv643xx_eth: process retval from of_get_mac_address
Tao Liu thomas.liu@ucloud.cn gso: do not skip outer ip header in case of ipip and net_failover
Konrad Dybcio konrad.dybcio@somainline.org clk: qcom: gcc-msm8994: Remove NoC clocks
Dan Carpenter dan.carpenter@oracle.com tipc: Fix end of loop tests for list_for_each_entry()
Christoph Hellwig hch@lst.de nvme: also mark passthrough-only namespaces ready in nvme_update_ns_info
Eric Dumazet edumazet@google.com net: __pskb_pull_tail() & pskb_carve_frag_list() drop_monitor friends
Eric Dumazet edumazet@google.com io_uring: add a schedule point in io_add_buffers()
Eric Dumazet edumazet@google.com bpf: Add schedule points in batch ops
Yonghong Song yhs@fb.com bpf: Fix a bpf_timer initialization issue
Felix Maurer fmaurer@redhat.com selftests: bpf: Check bpf_msg_push_data return value
Felix Maurer fmaurer@redhat.com bpf: Do not try bpf_msg_push_data with len 0
Kumar Kartikeya Dwivedi memxor@gmail.com bpf: Fix crash due to incorrect copy_map_value
Meir Lichtinger meirl@nvidia.com net/mlx5: Update the list of the PCI supported devices
Tom Rix trix@redhat.com ice: initialize local variable 'tlv'
Tom Rix trix@redhat.com ice: check the return of ice_ptp_gettimex64
Jacob Keller jacob.e.keller@intel.com ice: fix concurrent reset and removal of VFs
Michal Swiatkowski michal.swiatkowski@linux.intel.com ice: fix setting l4 port flag when adding filter
Chris Mi cmi@nvidia.com net/mlx5: Fix tc max supported prio for nic mode
Guenter Roeck linux@roeck-us.net hwmon: Handle failure to register sensor with thermal zone correctly
Kalesh AP kalesh-anakkur.purayil@broadcom.com bnxt_en: Restore the resets_reliable flag in bnxt_open()
Pavan Chebbi pavan.chebbi@broadcom.com bnxt_en: Fix incorrect multicast rx mask setting when not requested
Michael Chan michael.chan@broadcom.com bnxt_en: Fix occasional ethtool -t loopback test failures
Michael Chan michael.chan@broadcom.com bnxt_en: Fix offline ethtool selftest with RDMA enabled
Somnath Kotur somnath.kotur@broadcom.com bnxt_en: Fix active FEC reporting to ethtool
Kalesh AP kalesh-anakkur.purayil@broadcom.com bnxt_en: Fix devlink fw_activate
Manish Chopra manishc@marvell.com bnx2x: fix driver load from initrd
Paolo Abeni pabeni@redhat.com selftests: mptcp: be more conservative with cookie MPJ limits
Paolo Abeni pabeni@redhat.com selftests: mptcp: fix diag instability
Paolo Abeni pabeni@redhat.com mptcp: add mibs counter for ignored incoming options
Paolo Abeni pabeni@redhat.com mptcp: fix race in incoming ADD_ADDR option processing
Alexey Bayduraev alexey.v.bayduraev@linux.intel.com perf data: Fix double free in perf_session__delete()
Zhengjun Xing zhengjun.xing@linux.intel.com perf evlist: Fix failed to use cpu list for uncore events
Mikko Perttunen mperttunen@nvidia.com gpu: host1x: Always return syncpoint value when waiting
Mateusz Palczewski mateusz.palczewski@intel.com Revert "i40e: Fix reset bw limit when DCB enabled with 1 TC"
Xin Long lucien.xin@gmail.com ping: remove pr_err from ping_lookup
Pablo Neira Ayuso pablo@netfilter.org netfilter: nf_tables_offload: incorrect flow offload action array size
Pablo Neira Ayuso pablo@netfilter.org netfilter: xt_socket: missing ifdef CONFIG_IP6_NF_IPTABLES dependency
Eric Dumazet edumazet@google.com netfilter: xt_socket: fix a typo in socket_mt_destroy()
Oliver Neukum oneukum@suse.com CDC-NCM: avoid overflow in sanity checking
Oliver Neukum oneukum@suse.com USB: zaurus: support another broken Zaurus
Oliver Neukum oneukum@suse.com sr9700: sanity check for packet length
Ville Syrjälä ville.syrjala@linux.intel.com drm/i915: Fix bw atomic check when switching between SAGV vs. no SAGV
Ville Syrjälä ville.syrjala@linux.intel.com drm/i915: Correctly populate use_sagv_wm for all pipes
Imre Deak imre.deak@intel.com drm/i915: Disconnect PHYs left connected by BIOS on disabled ports
Ville Syrjälä ville.syrjala@linux.intel.com drm/i915: Widen the QGV point mask
Qiang Yu qiang.yu@amd.com drm/amdgpu: check vm ready by amdgpu_vm->evicting flag
Chen Gong curry.gong@amd.com drm/amdgpu: do not enable asic reset for raven2
Evan Quan evan.quan@amd.com drm/amdgpu: disable MMHUB PG for Picasso
Mario Limonciello mario.limonciello@amd.com drm/amd: Check if ASPM is enabled from PCIe subsystem
Evan Quan evan.quan@amd.com drm/amd/pm: fix some OEM SKU specific stability issues
Bas Nieuwenhuizen bas@basnieuwenhuizen.nl drm/amd/display: Protect update_bw_bounding_box FPU code.
Nicholas Kazlauskas nicholas.kazlauskas@amd.com drm/amd/display: Fix stream->link_enc unassigned during stream removal
Maxim Levitsky mlevitsk@redhat.com KVM: x86: nSVM: disallow userspace setting of MSR_AMD64_TSC_RATIO to non default value when tsc scaling disabled
Liang Zhang zhangliang5@huawei.com KVM: x86/mmu: make apf token non-zero to fix bug
Helge Deller deller@gmx.de parisc/unaligned: Fix ldw() and stw() unalignment handlers
Helge Deller deller@gmx.de parisc/unaligned: Fix fldd and fstd unaligned handlers on 32-bit kernel
Stefano Garzarella sgarzare@redhat.com vhost/vsock: don't check owner in vhost_vsock_stop() while releasing
Ondrej Mosnacek omosnace@redhat.com selinux: fix misuse of mutex_is_locked()
Dylan Yudaken dylany@fb.com io_uring: disallow modification of rsrc_data during quiesce
Jens Axboe axboe@kernel.dk io_uring: don't convert to jiffies for waiting on timeouts
Siarhei Volkau lis8215@gmail.com clk: jz4725b: fix mmc0 clock gating
Greg Kroah-Hartman gregkh@linuxfoundation.org slab: remove __alloc_size attribute from __kmalloc_track_caller
Su Yue l@damenly.su btrfs: tree-checker: check item_size for dev_item
Su Yue l@damenly.su btrfs: tree-checker: check item_size for inode_item
Michal Koutný mkoutny@suse.com cgroup-v1: Correct privileges check in release_agent writes
Zhang Qiao zhangqiao22@huawei.com cgroup/cpuset: Fix a race between cpuset_attach() and cpu hotplug
Matthew Wilcox (Oracle) willy@infradead.org mm/filemap: Fix handling of THPs in generic_file_buffered_read()
-------------
Diffstat:
 Makefile | 4 +-
 arch/parisc/kernel/unaligned.c | 14 +--
 arch/riscv/configs/nommu_k210_sdcard_defconfig | 2 +-
 arch/riscv/kernel/Makefile | 2 +
 arch/riscv/kernel/entry.S | 10 +-
 arch/riscv/kernel/trace_irq.c | 27 +++++
 arch/riscv/kernel/trace_irq.h | 11 ++
 arch/x86/kvm/mmu/mmu.c | 13 ++-
 arch/x86/kvm/svm/svm.c | 19 +++-
 block/fops.c | 2 +
 drivers/ata/pata_hpt37x.c | 14 +++
 drivers/base/dd.c | 5 +
 drivers/base/regmap/regmap-irq.c | 20 ++--
 drivers/clk/ingenic/jz4725b-cgu.c | 3 +-
 drivers/clk/qcom/gcc-msm8994.c | 106 ++----------------
 drivers/gpio/gpio-rockchip.c | 56 +++++-----
 drivers/gpio/gpio-tegra186.c | 14 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 3 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 9 +-
 drivers/gpu/drm/amd/amdgpu/soc15.c | 9 +-
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 17 +--
 .../amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c | 2 +
 drivers/gpu/drm/amd/display/dc/core/dc.c | 7 +-
 drivers/gpu/drm/amd/display/dc/core/dc_resource.c | 4 -
 .../drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c | 32 +++++-
 drivers/gpu/drm/drm_edid.c | 2 +-
 drivers/gpu/drm/i915/display/intel_bw.c | 18 +++-
 drivers/gpu/drm/i915/display/intel_bw.h | 8 +-
 drivers/gpu/drm/i915/display/intel_snps_phy.c | 2 +-
 drivers/gpu/drm/i915/display/intel_tc.c | 26 +++--
 drivers/gpu/drm/i915/intel_pm.c | 22 ++--
 drivers/gpu/drm/vc4/vc4_crtc.c | 8 +-
 drivers/gpu/host1x/syncpt.c | 19 +---
 drivers/hwmon/hwmon.c | 14 +--
 drivers/iio/accel/bmc150-accel-core.c | 5 +-
 drivers/iio/accel/fxls8962af-core.c | 12 ++-
 drivers/iio/accel/fxls8962af-i2c.c | 2 +-
 drivers/iio/accel/fxls8962af-spi.c | 2 +-
 drivers/iio/accel/fxls8962af.h | 3 +-
 drivers/iio/accel/kxcjk-1013.c | 5 +-
 drivers/iio/accel/mma9551.c | 5 +-
 drivers/iio/accel/mma9553.c | 5 +-
 drivers/iio/adc/ad7124.c | 2 +-
 drivers/iio/adc/men_z188_adc.c | 9 +-
 drivers/iio/adc/ti-tsc2046.c | 4 +-
 drivers/iio/gyro/bmg160_core.c | 5 +-
 drivers/iio/imu/adis16480.c | 7 +-
 drivers/iio/imu/kmx61.c | 5 +-
 drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c | 6 +-
 drivers/iio/magnetometer/bmc150_magn.c | 5 +-
 drivers/infiniband/core/cma.c | 40 ++++---
 drivers/infiniband/hw/qib/qib_sysfs.c | 2 +-
 drivers/infiniband/ulp/rtrs/rtrs-clt.c | 39 +++----
 drivers/infiniband/ulp/srp/ib_srp.c | 6 +-
 drivers/mtd/mtdcore.c | 2 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c | 3 +
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 47 +++++---
 drivers/net/ethernet/broadcom/bnxt/bnxt.h | 1 +
 drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c | 39 +++++--
 drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 17 ++-
 drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c | 12 ++-
 drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h | 2 +-
 drivers/net/ethernet/ibm/ibmvnic.c | 6 +-
 drivers/net/ethernet/intel/i40e/i40e_main.c | 12 +--
 drivers/net/ethernet/intel/ice/ice.h | 1 -
 drivers/net/ethernet/intel/ice/ice_common.c | 2 +-
 drivers/net/ethernet/intel/ice/ice_main.c | 2 +
 drivers/net/ethernet/intel/ice/ice_ptp.c | 5 +-
 drivers/net/ethernet/intel/ice/ice_tc_lib.c | 4 +-
 drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 42 +++++---
 drivers/net/ethernet/marvell/mv643xx_eth.c | 24 +++--
 .../mellanox/mlx5/core/en/tc_tun_mplsoudp.c | 28 ++---
 .../net/ethernet/mellanox/mlx5/core/en_ethtool.c | 2 +-
 drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 3 +-
 .../net/ethernet/mellanox/mlx5/core/en_selftest.c | 1 +
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 12 +++
 .../ethernet/mellanox/mlx5/core/eswitch_offloads.c | 4 -
 drivers/net/ethernet/mellanox/mlx5/core/fs_core.c | 2 +
 .../ethernet/mellanox/mlx5/core/lib/fs_chains.c | 3 +
 drivers/net/ethernet/mellanox/mlx5/core/main.c | 4 +-
 .../mellanox/mlx5/core/steering/dr_icm_pool.c | 120 ++++++++++++++-------
 .../mellanox/mlx5/core/steering/dr_matcher.c | 20 +---
 .../ethernet/mellanox/mlx5/core/steering/dr_ste.c | 32 +++++-
 .../mellanox/mlx5/core/steering/dr_types.h | 10 ++
 .../ethernet/mellanox/mlx5/core/steering/fs_dr.c | 33 ++++--
 .../ethernet/mellanox/mlx5/core/steering/mlx5dr.h | 5 +
 .../ethernet/netronome/nfp/flower/tunnel_conf.c | 4 +-
 drivers/net/ethernet/xilinx/ll_temac_main.c | 2 +
 drivers/net/mdio/mdio-ipq4019.c | 6 +-
 drivers/net/usb/cdc_ether.c | 12 +++
 drivers/net/usb/cdc_ncm.c | 8 +-
 drivers/net/usb/sr9700.c | 2 +-
 drivers/net/usb/zaurus.c | 12 +++
 drivers/nvme/host/core.c | 6 +-
 drivers/nvmem/core.c | 2 +-
 drivers/pci/controller/pci-mvebu.c | 3 +-
 drivers/pinctrl/pinctrl-k210.c | 4 +-
 drivers/platform/surface/surface3_power.c | 13 ++-
 drivers/spi/spi-zynq-qspi.c | 3 +
 drivers/staging/fbtft/fb_st7789v.c | 2 +
 .../intel/int340x_thermal/int3400_thermal.c | 4 +
 drivers/tty/n_gsm.c | 61 +++++++----
 drivers/tty/serial/sc16is7xx.c | 3 +
 drivers/usb/dwc2/core.h | 2 +
 drivers/usb/dwc2/drd.c | 6 +-
 drivers/usb/dwc3/dwc3-pci.c | 17 ++-
 drivers/usb/dwc3/gadget.c | 2 +
 drivers/usb/gadget/function/rndis.c | 8 ++
 drivers/usb/gadget/function/rndis.h | 1 +
 drivers/usb/gadget/udc/udc-xilinx.c | 6 ++
 drivers/usb/host/xhci.c | 28 +++--
 drivers/usb/serial/ch341.c | 1 -
 drivers/usb/serial/option.c | 12 +++
 drivers/usb/typec/tipd/core.c | 7 +-
 drivers/vhost/vsock.c | 21 ++--
 fs/btrfs/ctree.h | 2 +-
 fs/btrfs/file.c | 97 ++++++----------
 fs/btrfs/inode.c | 4 +-
 fs/btrfs/ioctl.c | 81 +++++++++++---
 fs/btrfs/lzo.c | 11 ++
 fs/btrfs/tree-checker.c | 15 +++
 fs/configfs/dir.c | 14 +++
 fs/io_uring.c | 24 +++--
 fs/tracefs/inode.c | 5 +-
 include/linux/bpf.h | 9 +-
 include/linux/nvmem-provider.h | 4 +-
 include/linux/skmsg.h | 6 --
 include/linux/slab.h | 3 +-
 include/net/checksum.h | 50 +++++----
 include/net/netfilter/nf_tables.h | 2 +-
 include/net/netfilter/nf_tables_offload.h | 2 -
 include/net/sock.h | 9 +-
 kernel/bpf/btf.c | 97 +++++++++++++----
 kernel/bpf/syscall.c | 3 +
 kernel/cgroup/cgroup-v1.c | 6 +-
 kernel/cgroup/cpuset.c | 2 +
 kernel/trace/trace_events_trigger.c | 59 ++++++++--
 mm/filemap.c | 8 +-
 mm/hugetlb.c | 11 +-
 mm/memblock.c | 10 +-
 net/can/j1939/transport.c | 2 +-
 net/core/filter.c | 3 +
 net/core/skbuff.c | 12 +--
 net/core/sock.c | 10 +-
 net/dsa/master.c | 7 +-
 net/dsa/port.c | 20 ++--
 net/ipv4/af_inet.c | 5 +-
 net/ipv4/ip_output.c | 2 +-
 net/ipv4/ping.c | 1 -
 net/ipv4/udp_tunnel_nic.c | 2 +-
 net/ipv6/ip6_offload.c | 2 +
 net/ipv6/ip6_output.c | 2 +-
 net/mptcp/mib.c | 2 +
 net/mptcp/mib.h | 2 +
 net/mptcp/pm.c | 8 +-
 net/mptcp/pm_netlink.c | 19 +++-
 net/netfilter/nf_tables_api.c | 16 ++-
 net/netfilter/nf_tables_offload.c | 3 +-
 net/netfilter/nft_dup_netdev.c | 6 ++
 net/netfilter/nft_fwd_netdev.c | 6 ++
 net/netfilter/nft_immediate.c | 12 ++-
 net/netfilter/xt_socket.c | 4 +-
 net/openvswitch/actions.c | 46 ++++++--
 net/sched/act_ct.c | 5 -
 net/smc/smc_pnet.c | 42 ++++----
 net/smc/smc_pnet.h | 2 +-
 net/tipc/name_table.c | 2 +-
 net/tipc/socket.c | 2 +-
 security/selinux/ima.c | 4 +-
 tools/perf/util/data.c | 7 +-
 tools/perf/util/evlist-hybrid.c | 4 +-
 .../selftests/bpf/progs/test_sockmap_kern.h | 26 +++--
 tools/testing/selftests/net/mptcp/diag.sh | 44 ++++++--
 tools/testing/selftests/net/mptcp/mptcp_join.sh | 15 ++-
 174 files changed, 1543 insertions(+), 787 deletions(-)
From: "Matthew Wilcox (Oracle)" willy@infradead.org
When a THP is present in the page cache, we can return it several times, leading to userspace seeing the same data repeatedly if doing a read() that crosses a 64-page boundary. This is probably not a security issue (since the data all comes from the same file), but it can be interpreted as a transient data corruption issue. Fortunately, it is very rare as it can only occur when CONFIG_READ_ONLY_THP_FOR_FS is enabled, and it can only happen to executables. We don't often call read() on executables.
This bug is fixed differently in v5.17 by commit 6b24ca4a1a8d ("mm: Use multi-index entries in the page cache"). That commit is unsuitable for backporting, so fix this in the clearest way. It sacrifices a little performance for clarity, but this should never be a performance path in these kernel versions.
Fixes: cbd59c48ae2b ("mm/filemap: use head pages in generic_file_buffered_read") Cc: stable@vger.kernel.org # v5.15, v5.16 Link: https://lore.kernel.org/r/df3b5d1c-a36b-2c73-3e27-99e74983de3a@suse.cz/ Analyzed-by: Adam Majer amajer@suse.com Analyzed-by: Dirk Mueller dmueller@suse.com Bisected-by: Takashi Iwai tiwai@suse.de Reported-by: Vlastimil Babka vbabka@suse.cz Tested-by: Vlastimil Babka vbabka@suse.cz Signed-off-by: Matthew Wilcox (Oracle) willy@infradead.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/filemap.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-)
--- a/mm/filemap.c +++ b/mm/filemap.c @@ -2365,8 +2365,12 @@ static void filemap_get_read_batch(struc break; if (PageReadahead(head)) break; - xas.xa_index = head->index + thp_nr_pages(head) - 1; - xas.xa_offset = (xas.xa_index >> xas.xa_shift) & XA_CHUNK_MASK; + if (PageHead(head)) { + xas_set(&xas, head->index + thp_nr_pages(head)); + /* Handle wrap correctly */ + if (xas.xa_index - 1 >= max) + break; + } continue; put_page: put_page(head);
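As an illustration of the corrected index handling, here is a minimal userspace sketch of the same advance-and-wrap check (all types and names below are invented for illustration; this is not the kernel code):

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the head page of a compound page plus the lookup cursor. */
struct fake_page {
    unsigned long index;    /* index of the head page */
    unsigned long nr_pages; /* number of subpages in the compound page */
};

/* Advance the lookup index past the compound page; stop the batch if the
 * new index wrapped around or already passed 'max', mirroring the
 * "if (xas.xa_index - 1 >= max)" test in the hunk above. */
static bool advance_past(const struct fake_page *head,
                         unsigned long *index, unsigned long max)
{
    *index = head->index + head->nr_pages;
    if (*index - 1 >= max)
        return false;
    return true;
}

int main(void)
{
    struct fake_page thp = { .index = 512, .nr_pages = 16 };
    unsigned long index = 520, max = 600;

    if (advance_past(&thp, &index, max))
        printf("continue lookup at index %lu\n", index);
    else
        printf("stop: batch reached max %lu\n", max);
    return 0;
}

The "- 1" makes the comparison also catch the case where head->index + nr_pages wrapped around to 0, since 0 - 1 becomes ULONG_MAX and is therefore >= max.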
From: Zhang Qiao zhangqiao22@huawei.com
commit 05c7b7a92cc87ff8d7fde189d0fade250697573c upstream.
As previously discussed (https://lkml.org/lkml/2022/1/20/51), cpuset_attach() is affected by a similar cpu hotplug race, as in the following scenario:

 cpuset_attach()                              cpu hotplug
 ---------------------------                  ----------------------
 down_write(cpuset_rwsem)
 guarantee_online_cpus() // (load cpus_attach)
                                              sched_cpu_deactivate
                                                set_cpu_active()
                                                // will change cpu_active_mask
 set_cpus_allowed_ptr(cpus_attach)
   __set_cpus_allowed_ptr_locked()
     // (if the intersection of cpus_attach and
     // cpu_active_mask is empty, will return -EINVAL)
 up_write(cpuset_rwsem)
To avoid races such as described above, protect cpuset_attach() call with cpu_hotplug_lock.
Fixes: be367d099270 ("cgroups: let ss->can_attach and ss->attach do whole threadgroups at a time") Cc: stable@vger.kernel.org # v2.6.32+ Reported-by: Zhao Gongyi zhaogongyi@huawei.com Signed-off-by: Zhang Qiao zhangqiao22@huawei.com Acked-by: Waiman Long longman@redhat.com Reviewed-by: Michal Koutný mkoutny@suse.com Signed-off-by: Tejun Heo tj@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/cgroup/cpuset.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/kernel/cgroup/cpuset.c +++ b/kernel/cgroup/cpuset.c @@ -2269,6 +2269,7 @@ static void cpuset_attach(struct cgroup_ cgroup_taskset_first(tset, &css); cs = css_cs(css);
+ cpus_read_lock(); percpu_down_write(&cpuset_rwsem);
guarantee_online_mems(cs, &cpuset_attach_nodemask_to); @@ -2322,6 +2323,7 @@ static void cpuset_attach(struct cgroup_ wake_up(&cpuset_attach_wq);
percpu_up_write(&cpuset_rwsem); + cpus_read_unlock(); }
/* The various types of files and directories in a cpuset file system */
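The shape of the fix is the classic check-then-use rule: the check (guarantee_online_cpus()) and the use (set_cpus_allowed_ptr()) must both run under the hotplug read lock. Purely as an illustration of that pattern, here is a hedged userspace sketch that uses a pthread rwlock in place of cpus_read_lock()/cpus_read_unlock() (names invented; not kernel code):

#include <pthread.h>
#include <stdio.h>

/* 'active' plays the role of cpu_active_mask; the rwlock plays the role
 * of the cpu hotplug lock (readers vs. the hotplug write side). */
static pthread_rwlock_t hotplug_lock = PTHREAD_RWLOCK_INITIALIZER;
static int active = 1;

static int attach(void)
{
    int ret;

    /* Hold the read lock across both the check and the use, so a
     * concurrent "hotplug" writer cannot clear 'active' in between. */
    pthread_rwlock_rdlock(&hotplug_lock);
    if (active)
        ret = 0;   /* proceed with the now-guaranteed state */
    else
        ret = -1;  /* the kernel would have returned -EINVAL */
    pthread_rwlock_unlock(&hotplug_lock);
    return ret;
}

static void *hotplug_writer(void *arg)
{
    (void)arg;
    pthread_rwlock_wrlock(&hotplug_lock);
    active = 0;    /* analogous to sched_cpu_deactivate() */
    pthread_rwlock_unlock(&hotplug_lock);
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, hotplug_writer, NULL);
    printf("attach %s\n", attach() ? "rejected" : "succeeded");
    pthread_join(t, NULL);
    return 0;
}

Without the read lock held across both steps, the writer can flip 'active' between the check and the use, which is exactly the -EINVAL failure described in the scenario above.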
From: Michal Koutný mkoutny@suse.com
commit 467a726b754f474936980da793b4ff2ec3e382a7 upstream.
The idea is to check: a) the owning user_ns of cgroup_ns, b) capabilities in init_user_ns.
Commit 24f600856418 ("cgroup-v1: Require capabilities to set release_agent") got this wrong in the write handler of release_agent, since it checked the user_ns of the opener (which may be different from the owning user_ns of the cgroup_ns). Secondly, to avoid a possible confused-deputy problem, the capability of the opener must be checked.
Fixes: 24f600856418 ("cgroup-v1: Require capabilities to set release_agent") Cc: stable@vger.kernel.org Link: https://lore.kernel.org/stable/20220216121142.GB30035@blackbody.suse.cz/ Signed-off-by: Michal Koutný mkoutny@suse.com Reviewed-by: Masami Ichikawa(CIP) masami.ichikawa@cybertrust.co.jp Signed-off-by: Tejun Heo tj@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/cgroup/cgroup-v1.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/kernel/cgroup/cgroup-v1.c +++ b/kernel/cgroup/cgroup-v1.c @@ -546,6 +546,7 @@ static ssize_t cgroup_release_agent_writ char *buf, size_t nbytes, loff_t off) { struct cgroup *cgrp; + struct cgroup_file_ctx *ctx;
BUILD_BUG_ON(sizeof(cgrp->root->release_agent_path) < PATH_MAX);
@@ -553,8 +554,9 @@ static ssize_t cgroup_release_agent_writ * Release agent gets called with all capabilities, * require capabilities to set release agent. */ - if ((of->file->f_cred->user_ns != &init_user_ns) || - !capable(CAP_SYS_ADMIN)) + ctx = of->priv; + if ((ctx->ns->user_ns != &init_user_ns) || + !file_ns_capable(of->file, &init_user_ns, CAP_SYS_ADMIN)) return -EPERM;
cgrp = cgroup_kn_lock_live(of->kn, false);
From: Su Yue l@damenly.su
commit 0c982944af27d131d3b74242f3528169f66950ad upstream.
While mounting a crafted image, an out-of-bounds access happens:
[350.429619] UBSAN: array-index-out-of-bounds in fs/btrfs/struct-funcs.c:161:1
[350.429636] index 1048096 is out of range for type 'page *[16]'
[350.429650] CPU: 0 PID: 9 Comm: kworker/u8:1 Not tainted 5.16.0-rc4 #1
[350.429652] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.13.0-1ubuntu1.1 04/01/2014
[350.429653] Workqueue: btrfs-endio-meta btrfs_work_helper [btrfs]
[350.429772] Call Trace:
[350.429774]  <TASK>
[350.429776]  dump_stack_lvl+0x47/0x5c
[350.429780]  ubsan_epilogue+0x5/0x50
[350.429786]  __ubsan_handle_out_of_bounds+0x66/0x70
[350.429791]  btrfs_get_16+0xfd/0x120 [btrfs]
[350.429832]  check_leaf+0x754/0x1a40 [btrfs]
[350.429874]  ? filemap_read+0x34a/0x390
[350.429878]  ? load_balance+0x175/0xfc0
[350.429881]  validate_extent_buffer+0x244/0x310 [btrfs]
[350.429911]  btrfs_validate_metadata_buffer+0xf8/0x100 [btrfs]
[350.429935]  end_bio_extent_readpage+0x3af/0x850 [btrfs]
[350.429969]  ? newidle_balance+0x259/0x480
[350.429972]  end_workqueue_fn+0x29/0x40 [btrfs]
[350.429995]  btrfs_work_helper+0x71/0x330 [btrfs]
[350.430030]  ? __schedule+0x2fb/0xa40
[350.430033]  process_one_work+0x1f6/0x400
[350.430035]  ? process_one_work+0x400/0x400
[350.430036]  worker_thread+0x2d/0x3d0
[350.430037]  ? process_one_work+0x400/0x400
[350.430038]  kthread+0x165/0x190
[350.430041]  ? set_kthread_struct+0x40/0x40
[350.430043]  ret_from_fork+0x1f/0x30
[350.430047]  </TASK>
[350.430077] BTRFS warning (device loop0): bad eb member start: ptr 0xffe20f4e start 20975616 member offset 4293005178 size 2
check_leaf() is checking the leaf:
corrupt leaf: root=4 block=29396992 slot=1, bad key order, prev (16140901064495857664 1 0) current (1 204 12582912)
leaf 29396992 items 6 free space 3565 generation 6 owner DEV_TREE
leaf 29396992 flags 0x1(WRITTEN) backref revision 1
fs uuid a62e00e8-e94e-4200-8217-12444de93c2e
chunk uuid cecbd0f7-9ca0-441e-ae9f-f782f9732bd8
        item 0 key (16140901064495857664 INODE_ITEM 0) itemoff 3955 itemsize 40
                generation 0 transid 0 size 0 nbytes 17592186044416
                block group 0 mode 52667 links 33 uid 0 gid 2104132511
                rdev 94223634821136 sequence 100305 flags 0x2409000(none)
                atime 0.0 (1970-01-01 08:00:00)
                ctime 2973280098083405823.4294967295 (-269783007-01-01 21:37:03)
                mtime 18446744071572723616.4026825121 (1902-04-16 12:40:00)
                otime 9249929404488876031.4294967295 (622322949-04-16 04:25:58)
        item 1 key (1 DEV_EXTENT 12582912) itemoff 3907 itemsize 48
                dev extent chunk_tree 3 chunk_objectid 256 chunk_offset 12582912 length 8388608
                chunk_tree_uuid cecbd0f7-9ca0-441e-ae9f-f782f9732bd8
The corrupted leaf of the device tree has an inode item. The leaf passed the checksum and other checks in validate_extent_buffer() up until check_leaf_item(). Because the key type is BTRFS_INODE_ITEM, check_inode_item() is called even though we are in the device tree. Since item offset + sizeof(struct btrfs_inode_item) > eb->len, an out-of-bounds access is triggered.
The item end vs leaf boundary check has already been done before check_leaf_item(), so fix this by checking the item size in check_inode_item() before accessing the inode item in the extent buffer.
The other check functions called from check_leaf_item(), except check_dev_item(), already have their own item size checks. The corresponding fix for check_dev_item() follows in the next patch.
No regression was observed while running fstests.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=215299 CC: stable@vger.kernel.org # 5.10+ CC: Wenqing Liu wenqingliu0120@gmail.com Signed-off-by: Su Yue l@damenly.su Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/tree-checker.c | 7 +++++++ 1 file changed, 7 insertions(+)
--- a/fs/btrfs/tree-checker.c +++ b/fs/btrfs/tree-checker.c @@ -1007,6 +1007,7 @@ static int check_inode_item(struct exten struct btrfs_inode_item *iitem; u64 super_gen = btrfs_super_generation(fs_info->super_copy); u32 valid_mask = (S_IFMT | S_ISUID | S_ISGID | S_ISVTX | 0777); + const u32 item_size = btrfs_item_size_nr(leaf, slot); u32 mode; int ret; u32 flags; @@ -1016,6 +1017,12 @@ static int check_inode_item(struct exten if (unlikely(ret < 0)) return ret;
+ if (unlikely(item_size != sizeof(*iitem))) { + generic_err(leaf, slot, "invalid item size: has %u expect %zu", + item_size, sizeof(*iitem)); + return -EUCLEAN; + } + iitem = btrfs_item_ptr(leaf, slot, struct btrfs_inode_item);
/* Here we use super block generation + 1 to handle log tree */
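The pattern the tree-checker gains here is generic: before interpreting a slice of the leaf as a fixed-size on-disk structure, verify that the item really is that large. A small userspace sketch of the same idea (structure layout and names invented for illustration; the kernel returns -EUCLEAN where this returns -1):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct fake_inode_item {
    uint64_t generation;
    uint64_t size;
    uint32_t mode;
} __attribute__((packed));

static int check_item(const uint8_t *leaf, size_t leaf_len,
                      uint32_t item_off, uint32_t item_size)
{
    struct fake_inode_item item;

    if (item_size != sizeof(item)) {
        fprintf(stderr, "invalid item size: has %u expect %zu\n",
                item_size, sizeof(item));
        return -1;
    }
    if (item_off + sizeof(item) > leaf_len) {
        fprintf(stderr, "item runs past the end of the leaf\n");
        return -1;
    }
    memcpy(&item, leaf + item_off, sizeof(item));
    printf("mode 0%o size %llu\n", item.mode,
           (unsigned long long)item.size);
    return 0;
}

int main(void)
{
    uint8_t leaf[4096] = { 0 };

    /* A crafted leaf can claim an inode item with a size that does not
     * match the structure; that is exactly what the new check rejects. */
    return check_item(leaf, sizeof(leaf), 128, 8) ? 1 : 0;
}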
From: Su Yue l@damenly.su
commit ea1d1ca4025ac6c075709f549f9aa036b5b6597d upstream.
Check the item size before accessing the device item to avoid an out-of-bounds access, similar to the inode_item check.
Signed-off-by: Su Yue l@damenly.su Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/tree-checker.c | 8 ++++++++ 1 file changed, 8 insertions(+)
--- a/fs/btrfs/tree-checker.c +++ b/fs/btrfs/tree-checker.c @@ -965,6 +965,7 @@ static int check_dev_item(struct extent_ struct btrfs_key *key, int slot) { struct btrfs_dev_item *ditem; + const u32 item_size = btrfs_item_size_nr(leaf, slot);
if (unlikely(key->objectid != BTRFS_DEV_ITEMS_OBJECTID)) { dev_item_err(leaf, slot, @@ -972,6 +973,13 @@ static int check_dev_item(struct extent_ key->objectid, BTRFS_DEV_ITEMS_OBJECTID); return -EUCLEAN; } + + if (unlikely(item_size != sizeof(*ditem))) { + dev_item_err(leaf, slot, "invalid item size: has %u expect %zu", + item_size, sizeof(*ditem)); + return -EUCLEAN; + } + ditem = btrfs_item_ptr(leaf, slot, struct btrfs_dev_item); if (unlikely(btrfs_device_id(leaf, ditem) != key->offset)) { dev_item_err(leaf, slot,
From: Greg Kroah-Hartman gregkh@linuxfoundation.org
commit 93dd04ab0b2b32ae6e70284afc764c577156658e upstream.
Commit c37495d6254c ("slab: add __alloc_size attributes for better bounds checking") added __alloc_size attributes to a bunch of kmalloc function prototypes. Unfortunately the change to __kmalloc_track_caller seems to cause clang to generate broken code, and the first time it is called during boot the box crashes.
While the compiler problem is being worked on [1], let's just drop the attribute for now to solve the issue. Once the problem is resolved, the attribute can be added back.
[1] https://github.com/ClangBuiltLinux/linux/issues/1599
Fixes: c37495d6254c ("slab: add __alloc_size attributes for better bounds checking") Cc: stable stable@vger.kernel.org Cc: Kees Cook keescook@chromium.org Cc: Daniel Micay danielmicay@gmail.com Cc: Nick Desaulniers ndesaulniers@google.com Cc: Christoph Lameter cl@linux.com Cc: Pekka Enberg penberg@kernel.org Cc: Joonsoo Kim iamjoonsoo.kim@lge.com Cc: Andrew Morton akpm@linux-foundation.org Cc: Vlastimil Babka vbabka@suse.cz Cc: Nathan Chancellor nathan@kernel.org Cc: linux-mm@kvack.org Cc: linux-kernel@vger.kernel.org Cc: llvm@lists.linux.dev Acked-by: Nick Desaulniers ndesaulniers@google.com Acked-by: David Rientjes rientjes@google.com Acked-by: Kees Cook keescook@chromium.org Signed-off-by: Vlastimil Babka vbabka@suse.cz Link: https://lore.kernel.org/r/20220218131358.3032912-1-gregkh@linuxfoundation.or... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/linux/slab.h | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
--- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -669,8 +669,7 @@ static inline __alloc_size(1, 2) void *k * allocator where we care about the real place the memory allocation * request comes from. */ -extern void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller) - __alloc_size(1); +extern void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller); #define kmalloc_track_caller(size, flags) \ __kmalloc_track_caller(size, flags, _RET_IP_)
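For background, this is roughly what the attribute buys when it does work: with alloc_size() the compiler can see the allocation size through a wrapper, so __builtin_object_size() and fortify-style diagnostics have something to work with. A hedged userspace sketch using the plain GCC/clang attribute (not the kernel's __alloc_size macro):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Tell the compiler that the return value points to 'size' bytes. */
__attribute__((alloc_size(1)))
static void *my_alloc(size_t size)
{
    return malloc(size);
}

int main(void)
{
    char *p = my_alloc(8);

    if (!p)
        return 1;
    /* With optimization enabled, the compiler can now determine the
     * object size through the wrapper; an overflowing write such as
     * memset(p, 0, 16) may be flagged at build time. */
    printf("object size seen by the compiler: %zu\n",
           __builtin_object_size(p, 0));
    memset(p, 0, 8);
    free(p);
    return 0;
}

Dropping the attribute from __kmalloc_track_caller therefore only reduces what the compiler can prove about the returned buffer; it does not change what the allocator does at runtime.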
From: Siarhei Volkau lis8215@gmail.com
commit 2f0754f27a230fee6e6d753f07585cee03bedfe3 upstream.
The mmc0 clock gate bit was mistakenly assigned to the "i2s" clock; you can see that the same bit is also assigned to "mmc0". This leads to mmc0 hanging for a long time after any sound activity, and it also prevented PM_SLEEP from working properly. The bug was most likely introduced by copy-paste from the jz4740 driver, where this bit really does control the I2S clock gate.
Fixes: 226dfa4726eb ("clk: Add Ingenic jz4725b CGU driver") Signed-off-by: Siarhei Volkau lis8215@gmail.com Tested-by: Siarhei Volkau lis8215@gmail.com Reviewed-by: Paul Cercueil paul@crapouillou.net Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20220205171849.687805-2-lis8215@gmail.com Signed-off-by: Stephen Boyd sboyd@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/clk/ingenic/jz4725b-cgu.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
--- a/drivers/clk/ingenic/jz4725b-cgu.c +++ b/drivers/clk/ingenic/jz4725b-cgu.c @@ -139,11 +139,10 @@ static const struct ingenic_cgu_clk_info },
[JZ4725B_CLK_I2S] = { - "i2s", CGU_CLK_MUX | CGU_CLK_DIV | CGU_CLK_GATE, + "i2s", CGU_CLK_MUX | CGU_CLK_DIV, .parents = { JZ4725B_CLK_EXT, JZ4725B_CLK_PLL_HALF, -1, -1 }, .mux = { CGU_REG_CPCCR, 31, 1 }, .div = { CGU_REG_I2SCDR, 0, 1, 9, -1, -1, -1 }, - .gate = { CGU_REG_CLKGR, 6 }, },
[JZ4725B_CLK_SPI] = {
From: Jens Axboe axboe@kernel.dk
commit 228339662b398a59b3560cd571deb8b25b253c7e upstream.
If an application calls io_uring_enter(2) with a timespec passed in, convert that timespec to ktime_t rather than jiffies. The latter does not provide the granularity the application may expect, and may in fact provide different granularity on different systems, depending on the configured HZ value.
Turn the timespec into an absolute ktime_t, and use that with schedule_hrtimeout() instead.
Link: https://github.com/axboe/liburing/issues/531 Cc: stable@vger.kernel.org Reported-by: Bob Chen chenbo.chen@alibaba-inc.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/io_uring.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-)
--- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -7633,7 +7633,7 @@ static int io_run_task_work_sig(void) /* when returns >0, the caller should retry */ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx, struct io_wait_queue *iowq, - signed long *timeout) + ktime_t timeout) { int ret;
@@ -7645,8 +7645,9 @@ static inline int io_cqring_wait_schedul if (test_bit(0, &ctx->check_cq_overflow)) return 1;
- *timeout = schedule_timeout(*timeout); - return !*timeout ? -ETIME : 1; + if (!schedule_hrtimeout(&timeout, HRTIMER_MODE_ABS)) + return -ETIME; + return 1; }
/* @@ -7659,7 +7660,7 @@ static int io_cqring_wait(struct io_ring { struct io_wait_queue iowq; struct io_rings *rings = ctx->rings; - signed long timeout = MAX_SCHEDULE_TIMEOUT; + ktime_t timeout = KTIME_MAX; int ret;
do { @@ -7675,7 +7676,7 @@ static int io_cqring_wait(struct io_ring
if (get_timespec64(&ts, uts)) return -EFAULT; - timeout = timespec64_to_jiffies(&ts); + timeout = ktime_add_ns(timespec64_to_ktime(ts), ktime_get_ns()); }
if (sig) { @@ -7707,7 +7708,7 @@ static int io_cqring_wait(struct io_ring } prepare_to_wait_exclusive(&ctx->cq_wait, &iowq.wq, TASK_INTERRUPTIBLE); - ret = io_cqring_wait_schedule(ctx, &iowq, &timeout); + ret = io_cqring_wait_schedule(ctx, &iowq, timeout); finish_wait(&ctx->cq_wait, &iowq.wq); cond_resched(); } while (ret > 0);
From: Dylan Yudaken dylany@fb.com
commit 80912cef18f16f8fe59d1fb9548d4364342be360 upstream.
io_rsrc_ref_quiesce() will unlock the uring while it waits for references to the io_rsrc_data to be killed. Other paths may add references to the data via calls to io_rsrc_node_switch(). There is a race condition where such a reference can be added after the completion has been signalled. At this point the io_rsrc_ref_quiesce() call will wake up and relock the uring, assuming the data is unused and can be freed, although it is actually being used.
To fix this, check in io_rsrc_ref_quiesce() whether the resource has been revived.
Reported-by: syzbot+ca8bf833622a1662745b@syzkaller.appspotmail.com Cc: stable@vger.kernel.org Signed-off-by: Dylan Yudaken dylany@fb.com Link: https://lore.kernel.org/r/20220222161751.995746-1-dylany@fb.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/io_uring.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-)
--- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -7865,7 +7865,15 @@ static __cold int io_rsrc_ref_quiesce(st ret = wait_for_completion_interruptible(&data->done); if (!ret) { mutex_lock(&ctx->uring_lock); - break; + if (atomic_read(&data->refs) > 0) { + /* + * it has been revived by another thread while + * we were unlocked + */ + mutex_unlock(&ctx->uring_lock); + } else { + break; + } }
atomic_inc(&data->refs);
From: Ondrej Mosnacek omosnace@redhat.com
commit ce2fc710c9d2b25afc710f49bb2065b4439a62bc upstream.
mutex_is_locked() tests whether the mutex is locked *by any task*, while here we want to test whether it is held *by the current task*. To avoid false/missed WARNINGs, use lockdep_assert_held() and lockdep_assert_not_held() instead, which do the right thing (though they are a no-op if CONFIG_LOCKDEP=n).
Cc: stable@vger.kernel.org Fixes: 2554a48f4437 ("selinux: measure state and policy capabilities") Signed-off-by: Ondrej Mosnacek omosnace@redhat.com Signed-off-by: Paul Moore paul@paul-moore.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- security/selinux/ima.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/security/selinux/ima.c +++ b/security/selinux/ima.c @@ -77,7 +77,7 @@ void selinux_ima_measure_state_locked(st size_t policy_len; int rc = 0;
- WARN_ON(!mutex_is_locked(&state->policy_mutex)); + lockdep_assert_held(&state->policy_mutex);
state_str = selinux_ima_collect_state(state); if (!state_str) { @@ -117,7 +117,7 @@ void selinux_ima_measure_state_locked(st */ void selinux_ima_measure_state(struct selinux_state *state) { - WARN_ON(mutex_is_locked(&state->policy_mutex)); + lockdep_assert_not_held(&state->policy_mutex);
mutex_lock(&state->policy_mutex); selinux_ima_measure_state_locked(state);
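The distinction is easy to reproduce in userspace: "this mutex is locked by someone" is not the same statement as "this mutex is held by the current task". A purely illustrative pthread sketch (not kernel code):

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

static void *helper(void *arg)
{
    (void)arg;
    /* trylock failing with EBUSY only proves the mutex is locked by
     * *someone* (the analogue of mutex_is_locked()); it says nothing
     * about whether this thread holds it, which is what an assertion
     * like lockdep_assert_held() actually needs to know. */
    if (pthread_mutex_trylock(&m) == EBUSY)
        printf("helper: mutex is locked, but not by me\n");
    else
        pthread_mutex_unlock(&m);
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_mutex_lock(&m);    /* the main thread is the real holder */
    pthread_create(&t, NULL, helper, NULL);
    pthread_join(t, NULL);
    pthread_mutex_unlock(&m);
    return 0;
}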
From: Stefano Garzarella sgarzare@redhat.com
commit a58da53ffd70294ebea8ecd0eb45fd0d74add9f9 upstream.
vhost_vsock_stop() calls vhost_dev_check_owner() to check the device ownership. It expects current->mm to be valid.
vhost_vsock_stop() is also called by vhost_vsock_dev_release() when the user has not done close(), so when we are in do_exit(). In this case current->mm is invalid and we're releasing the device, so we should clean it anyway.
Let's check the owner only when vhost_vsock_stop() is called by an ioctl.
When invoked from release we cannot fail, so we don't check the return code of vhost_vsock_stop(). We need to stop vsock even if the caller is not the owner.
Fixes: 433fc58e6bf2 ("VSOCK: Introduce vhost_vsock.ko") Cc: stable@vger.kernel.org Reported-by: syzbot+1e3ea63db39f2b4440e0@syzkaller.appspotmail.com Reported-and-tested-by: syzbot+3140b17cb44a7b174008@syzkaller.appspotmail.com Signed-off-by: Stefano Garzarella sgarzare@redhat.com Acked-by: Jason Wang jasowang@redhat.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/vhost/vsock.c | 21 ++++++++++++++------- 1 file changed, 14 insertions(+), 7 deletions(-)
--- a/drivers/vhost/vsock.c +++ b/drivers/vhost/vsock.c @@ -629,16 +629,18 @@ err: return ret; }
-static int vhost_vsock_stop(struct vhost_vsock *vsock) +static int vhost_vsock_stop(struct vhost_vsock *vsock, bool check_owner) { size_t i; - int ret; + int ret = 0;
mutex_lock(&vsock->dev.mutex);
- ret = vhost_dev_check_owner(&vsock->dev); - if (ret) - goto err; + if (check_owner) { + ret = vhost_dev_check_owner(&vsock->dev); + if (ret) + goto err; + }
for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) { struct vhost_virtqueue *vq = &vsock->vqs[i]; @@ -753,7 +755,12 @@ static int vhost_vsock_dev_release(struc * inefficient. Room for improvement here. */ vsock_for_each_connected_socket(vhost_vsock_reset_orphans);
- vhost_vsock_stop(vsock); + /* Don't check the owner, because we are in the release path, so we + * need to stop the vsock device in any case. + * vhost_vsock_stop() can not fail in this case, so we don't need to + * check the return code. + */ + vhost_vsock_stop(vsock, false); vhost_vsock_flush(vsock); vhost_dev_stop(&vsock->dev);
@@ -868,7 +875,7 @@ static long vhost_vsock_dev_ioctl(struct if (start) return vhost_vsock_start(vsock); else - return vhost_vsock_stop(vsock); + return vhost_vsock_stop(vsock, true); case VHOST_GET_FEATURES: features = VHOST_VSOCK_FEATURES; if (copy_to_user(argp, &features, sizeof(features)))
From: Helge Deller deller@gmx.de
commit dd2288f4a020d693360e3e8d72f8b9d9c25f5ef6 upstream.
Usually the kernel provides fixup routines to emulate the fldd and fstd floating-point instructions if they load or store 8 bytes from/to a not naturally aligned memory location.
On a 32-bit kernel I noticed that those unaligned handlers didn't work; instead the application got a SEGV. While checking the code I found two problems:
First, the OPCODE_FLDD_L and OPCODE_FSTD_L cases were ifdef'ed out by the CONFIG_PA20 option, and as such they weren't built on a pure 32-bit kernel. This is now fixed by moving the CONFIG_PA20 #ifdef so that it only excludes the OPCODE_LDD_L and OPCODE_STD_L cases, while the fldd and fstd instructions are always handled.
The second problem is two bugs in the 32-bit inline assembly code, where the wrong registers were used. The calculation of the natural alignment used %2 (vall) instead of %3 (ior), and the first word was stored back to address %1 (valh) instead of %3 (ior).
Signed-off-by: Helge Deller deller@gmx.de Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/parisc/kernel/unaligned.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/arch/parisc/kernel/unaligned.c +++ b/arch/parisc/kernel/unaligned.c @@ -397,7 +397,7 @@ static int emulate_std(struct pt_regs *r __asm__ __volatile__ ( " mtsp %4, %%sr1\n" " zdep %2, 29, 2, %%r19\n" -" dep %%r0, 31, 2, %2\n" +" dep %%r0, 31, 2, %3\n" " mtsar %%r19\n" " zvdepi -2, 32, %%r19\n" "1: ldw 0(%%sr1,%3),%%r20\n" @@ -409,7 +409,7 @@ static int emulate_std(struct pt_regs *r " andcm %%r21, %%r19, %%r21\n" " or %1, %%r20, %1\n" " or %2, %%r21, %2\n" -"3: stw %1,0(%%sr1,%1)\n" +"3: stw %1,0(%%sr1,%3)\n" "4: stw %%r1,4(%%sr1,%3)\n" "5: stw %2,8(%%sr1,%3)\n" " copy %%r0, %0\n" @@ -596,7 +596,6 @@ void handle_unaligned(struct pt_regs *re ret = ERR_NOTHANDLED; /* "undefined", but lets kill them. */ break; } -#ifdef CONFIG_PA20 switch (regs->iir & OPCODE2_MASK) { case OPCODE_FLDD_L: @@ -607,14 +606,15 @@ void handle_unaligned(struct pt_regs *re flop=1; ret = emulate_std(regs, R2(regs->iir),1); break; +#ifdef CONFIG_PA20 case OPCODE_LDD_L: ret = emulate_ldd(regs, R2(regs->iir),0); break; case OPCODE_STD_L: ret = emulate_std(regs, R2(regs->iir),0); break; - } #endif + } switch (regs->iir & OPCODE3_MASK) { case OPCODE_FLDW_L:
From: Helge Deller deller@gmx.de
commit a97279836867b1cb50a3d4f0b1bf60e0abe6d46c upstream.
Fix 3 bugs:
a) emulate_stw() doesn't return the error code value, so faulting instructions are not reported and aborted.
b) Tell emulate_ldw() to handle fldw_l as floating point instruction
c) Tell emulate_ldw() to handle ldw_m as integer instruction
Signed-off-by: Helge Deller deller@gmx.de Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/parisc/kernel/unaligned.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/arch/parisc/kernel/unaligned.c +++ b/arch/parisc/kernel/unaligned.c @@ -340,7 +340,7 @@ static int emulate_stw(struct pt_regs *r : "r" (val), "r" (regs->ior), "r" (regs->isr) : "r19", "r20", "r21", "r22", "r1", FIXUP_BRANCH_CLOBBER );
- return 0; + return ret; } static int emulate_std(struct pt_regs *regs, int frreg, int flop) { @@ -619,10 +619,10 @@ void handle_unaligned(struct pt_regs *re { case OPCODE_FLDW_L: flop=1; - ret = emulate_ldw(regs, R2(regs->iir),0); + ret = emulate_ldw(regs, R2(regs->iir), 1); break; case OPCODE_LDW_M: - ret = emulate_ldw(regs, R2(regs->iir),1); + ret = emulate_ldw(regs, R2(regs->iir), 0); break;
case OPCODE_FSTW_L:
From: Liang Zhang zhangliang5@huawei.com
commit 6f3c1fc53d86d580d8d6d749c4af23705e4f6f79 upstream.
In the current async pagefault logic, when a page is ready, KVM relies on kvm_arch_can_dequeue_async_page_present() to determine whether to deliver a READY event to the Guest. This function tests the token value in struct kvm_vcpu_pv_apf_data, which must be reset to zero by the Guest kernel when a READY event is finished by the Guest. A zero value means the previous READY event is done, so KVM can deliver another one. But kvm_arch_setup_async_pf() may produce a valid token whose value is zero, which conflicts with the convention above and may lead to the loss of this READY event.
This bug may cause a task to be blocked forever in the Guest:

 INFO: task stress:7532 blocked for more than 1254 seconds.
 Not tainted 5.10.0 #16
 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
 task:stress state:D stack: 0 pid: 7532 ppid: 1409 flags:0x00000080
 Call Trace:
  __schedule+0x1e7/0x650
  schedule+0x46/0xb0
  kvm_async_pf_task_wait_schedule+0xad/0xe0
  ? exit_to_user_mode_prepare+0x60/0x70
  __kvm_handle_async_pf+0x4f/0xb0
  ? asm_exc_page_fault+0x8/0x30
  exc_page_fault+0x6f/0x110
  ? asm_exc_page_fault+0x8/0x30
  asm_exc_page_fault+0x1e/0x30
 RIP: 0033:0x402d00
 RSP: 002b:00007ffd31912500 EFLAGS: 00010206
 RAX: 0000000000071000 RBX: ffffffffffffffff RCX: 00000000021a32b0
 RDX: 000000000007d011 RSI: 000000000007d000 RDI: 00000000021262b0
 RBP: 00000000021262b0 R08: 0000000000000003 R09: 0000000000000086
 R10: 00000000000000eb R11: 00007fefbdf2baa0 R12: 0000000000000000
 R13: 0000000000000002 R14: 000000000007d000 R15: 0000000000001000
Signed-off-by: Liang Zhang zhangliang5@huawei.com Message-Id: 20220222031239.1076682-1-zhangliang5@huawei.com Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kvm/mmu/mmu.c | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-)
--- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3905,12 +3905,23 @@ static void shadow_page_table_clear_floo walk_shadow_page_lockless_end(vcpu); }
+static u32 alloc_apf_token(struct kvm_vcpu *vcpu) +{ + /* make sure the token value is not 0 */ + u32 id = vcpu->arch.apf.id; + + if (id << 12 == 0) + vcpu->arch.apf.id = 1; + + return (vcpu->arch.apf.id++ << 12) | vcpu->vcpu_id; +} + static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, gfn_t gfn) { struct kvm_arch_async_pf arch;
- arch.token = (vcpu->arch.apf.id++ << 12) | vcpu->vcpu_id; + arch.token = alloc_apf_token(vcpu); arch.gfn = gfn; arch.direct_map = vcpu->arch.mmu->direct_map; arch.cr3 = vcpu->arch.mmu->get_guest_pgd(vcpu);
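The arithmetic is easy to check standalone. Here is a hedged userspace sketch of the token scheme (names invented): the token is (id++ << 12) | vcpu_id, so on a vCPU with vcpu_id == 0 the token becomes 0 whenever the shifted id wraps to 0, which the guest cannot tell apart from "no pending READY event". The fix simply skips those ids:

#include <stdint.h>
#include <stdio.h>

static uint32_t apf_id;    /* per-vCPU counter, like vcpu->arch.apf.id */

static uint32_t alloc_apf_token(uint32_t vcpu_id)
{
    /* make sure the token value is not 0 */
    if ((uint32_t)(apf_id << 12) == 0)
        apf_id = 1;

    return (apf_id++ << 12) | vcpu_id;
}

int main(void)
{
    uint32_t vcpu_id = 0;

    apf_id = 0;    /* freshly created vCPU */
    printf("old scheme would have produced token %u\n",
           (uint32_t)((0u << 12) | vcpu_id));
    printf("fixed scheme produces token %u\n", alloc_apf_token(vcpu_id));
    return 0;
}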
From: Maxim Levitsky mlevitsk@redhat.com
commit e910a53fb4f20aa012e46371ffb4c32c8da259b4 upstream.
If nested TSC scaling is disabled, MSR_AMD64_TSC_RATIO should never have a non-default value.
Due to the way nested TSC scaling support was implemented in QEMU, it would set this MSR to 0 when nested TSC scaling was disabled. Ignore that value for now, as it causes no harm.
Fixes: 5228eb96a487 ("KVM: x86: nSVM: implement nested TSC scaling") Cc: stable@vger.kernel.org
Signed-off-by: Maxim Levitsky mlevitsk@redhat.com Message-Id: 20220223115649.319134-1-mlevitsk@redhat.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kvm/svm/svm.c | 19 +++++++++++++++++-- 1 file changed, 17 insertions(+), 2 deletions(-)
--- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -2903,8 +2903,23 @@ static int svm_set_msr(struct kvm_vcpu * u64 data = msr->data; switch (ecx) { case MSR_AMD64_TSC_RATIO: - if (!msr->host_initiated && !svm->tsc_scaling_enabled) - return 1; + + if (!svm->tsc_scaling_enabled) { + + if (!msr->host_initiated) + return 1; + /* + * In case TSC scaling is not enabled, always + * leave this MSR at the default value. + * + * Due to bug in qemu 6.2.0, it would try to set + * this msr to 0 if tsc scaling is not enabled. + * Ignore this value as well. + */ + if (data != 0 && data != svm->tsc_ratio_msr) + return 1; + break; + }
if (data & TSC_RATIO_RSVD) return 1;
From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
commit 3743e7f6fcb938b7d8b7967e6a9442805e269b3d upstream.
[Why] Found when running igt@kms_atomic.
Userspace attempts a TEST_COMMIT with 0 streams, which calls dc_remove_stream_from_ctx. This in turn calls link_enc_unassign, which ends up setting stream->link = NULL directly, causing the global link_enc to be removed, preventing further link activity and future link validation from passing.
[How] We take care of link_enc unassignment at the start of link_enc_cfg_link_encs_assign so this call is no longer necessary.
This fixes global state being modified while unlocked.
Reviewed-by: Jimmy Kizito Jimmy.Kizito@amd.com Acked-by: Jasdeep Dhillon jdhillon@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/dc/core/dc_resource.c | 4 ---- 1 file changed, 4 deletions(-)
--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c +++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c @@ -1876,10 +1876,6 @@ enum dc_status dc_remove_stream_from_ctx dc->res_pool, del_pipe->stream_res.stream_enc, false); - /* Release link encoder from stream in new dc_state. */ - if (dc->res_pool->funcs->link_enc_unassign) - dc->res_pool->funcs->link_enc_unassign(new_ctx, del_pipe->stream); - #if defined(CONFIG_DRM_AMD_DC_DCN) if (is_dp_128b_132b_signal(del_pipe)) { update_hpo_dp_stream_engine_usage(
From: Bas Nieuwenhuizen bas@basnieuwenhuizen.nl
commit 1432108d00e42ffa383240bcac8d58f89ae19104 upstream.
At least for DCN3/3.01/3.02, these code paths use the FPU.
v2: squash in build fix for when DCN is not enabled (Leo)
Signed-off-by: Bas Nieuwenhuizen bas@basnieuwenhuizen.nl Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c | 2 ++ drivers/gpu/drm/amd/display/dc/core/dc.c | 7 +++++-- 2 files changed, 7 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c @@ -437,8 +437,10 @@ static void dcn3_get_memclk_states_from_ clk_mgr_base->bw_params->clk_table.num_entries = num_levels ? num_levels : 1;
/* Refresh bounding box */ + DC_FP_START(); clk_mgr_base->ctx->dc->res_pool->funcs->update_bw_bounding_box( clk_mgr->base.ctx->dc, clk_mgr_base->bw_params); + DC_FP_END(); }
static bool dcn3_is_smu_present(struct clk_mgr *clk_mgr_base) --- a/drivers/gpu/drm/amd/display/dc/core/dc.c +++ b/drivers/gpu/drm/amd/display/dc/core/dc.c @@ -999,10 +999,13 @@ static bool dc_construct(struct dc *dc, goto fail; #ifdef CONFIG_DRM_AMD_DC_DCN dc->clk_mgr->force_smu_not_present = init_params->force_smu_not_present; -#endif
- if (dc->res_pool->funcs->update_bw_bounding_box) + if (dc->res_pool->funcs->update_bw_bounding_box) { + DC_FP_START(); dc->res_pool->funcs->update_bw_bounding_box(dc, dc->clk_mgr->bw_params); + DC_FP_END(); + } +#endif
/* Creation of current_state must occur after dc->dml * is initialized in dc_create_resource_pool because
From: Evan Quan evan.quan@amd.com
commit e3f3824874da78db5775a5cb9c0970cd1c6978bc upstream.
Add a quirk in sienna_cichlid_ppt.c to fix some OEM SKU specific stability issues.
Signed-off-by: Evan Quan evan.quan@amd.com Reviewed-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c | 32 +++++++++++++++- 1 file changed, 31 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c @@ -418,6 +418,36 @@ static int sienna_cichlid_store_powerpla return 0; }
+static int sienna_cichlid_patch_pptable_quirk(struct smu_context *smu) +{ + struct amdgpu_device *adev = smu->adev; + uint32_t *board_reserved; + uint16_t *freq_table_gfx; + uint32_t i; + + /* Fix some OEM SKU specific stability issues */ + GET_PPTABLE_MEMBER(BoardReserved, &board_reserved); + if ((adev->pdev->device == 0x73DF) && + (adev->pdev->revision == 0XC3) && + (adev->pdev->subsystem_device == 0x16C2) && + (adev->pdev->subsystem_vendor == 0x1043)) + board_reserved[0] = 1387; + + GET_PPTABLE_MEMBER(FreqTableGfx, &freq_table_gfx); + if ((adev->pdev->device == 0x73DF) && + (adev->pdev->revision == 0XC3) && + ((adev->pdev->subsystem_device == 0x16C2) || + (adev->pdev->subsystem_device == 0x133C)) && + (adev->pdev->subsystem_vendor == 0x1043)) { + for (i = 0; i < NUM_GFXCLK_DPM_LEVELS; i++) { + if (freq_table_gfx[i] > 2500) + freq_table_gfx[i] = 2500; + } + } + + return 0; +} + static int sienna_cichlid_setup_pptable(struct smu_context *smu) { int ret = 0; @@ -438,7 +468,7 @@ static int sienna_cichlid_setup_pptable( if (ret) return ret;
- return ret; + return sienna_cichlid_patch_pptable_quirk(smu); }
static int sienna_cichlid_tables_init(struct smu_context *smu)
From: Mario Limonciello mario.limonciello@amd.com
commit 7294863a6f01248d72b61d38478978d638641bee upstream.
commit 0064b0ce85bb ("drm/amd/pm: enable ASPM by default") enabled ASPM by default, but it turns out that on a variety of hardware configurations this caused a regression:

* PPC64LE hardware does not support ASPM at a hardware level. CONFIG_PCIEASPM is often disabled on these architectures.
* Some dGPUs on ALD platforms don't work with ASPM enabled, and the PCIe subsystem disables it.

Check with the PCIe subsystem to see whether ASPM has actually been enabled or not.
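A minimal sketch of the resulting probe-time logic (mirroring the hunk below; amdgpu_aspm is the driver's module parameter, where -1 means "auto", and pcie_aspm_enabled() is the PCIe core helper):

	static void amdgpu_resolve_aspm_sketch(struct pci_dev *pdev, int *aspm)
	{
		/* -1 means "auto": only keep ASPM if the PCIe core enabled it */
		if (*aspm == -1 && !pcie_aspm_enabled(pdev))
			*aspm = 0;
	}

Platforms like the ones listed above, where the PCIe subsystem never enabled ASPM, then automatically end up with ASPM left off.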
Fixes: 0064b0ce85bb ("drm/amd/pm: enable ASPM by default") Link: https://wiki.raptorcs.com/w/images/a/ad/P9_PHB_version1.0_27July2018_pub.pdf Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1723 Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1739 Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1885 Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1907 Tested-by: koba.ko@canonical.com Reviewed-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c @@ -2014,6 +2014,9 @@ static int amdgpu_pci_probe(struct pci_d return -ENODEV; }
+ if (amdgpu_aspm == -1 && !pcie_aspm_enabled(pdev)) + amdgpu_aspm = 0; + if (amdgpu_virtual_display || amdgpu_device_asic_has_dc_support(flags & AMD_ASIC_MASK)) supports_atomic = true;
From: Evan Quan evan.quan@amd.com
commit f626dd0ff05043e5a7154770cc7cda66acee33a3 upstream.
MMHUB PG needs to be disabled for Picasso for stability reasons.
Signed-off-by: Evan Quan evan.quan@amd.com Reviewed-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/soc15.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c +++ b/drivers/gpu/drm/amd/amdgpu/soc15.c @@ -1114,8 +1114,11 @@ static int soc15_common_early_init(void AMD_CG_SUPPORT_SDMA_LS | AMD_CG_SUPPORT_VCN_MGCG;
+ /* + * MMHUB PG needs to be disabled for Picasso for + * stability reasons. + */ adev->pg_flags = AMD_PG_SUPPORT_SDMA | - AMD_PG_SUPPORT_MMHUB | AMD_PG_SUPPORT_VCN; } else { adev->cg_flags = AMD_CG_SUPPORT_GFX_MGCG |
From: Chen Gong curry.gong@amd.com
commit 1e2be869c8a7247a7253ef4f461f85e2f5931b95 upstream.
The GPU reset function of raven2 is not maintained or tested, so it is likely to be very unstable.
Now the amdgpu_asic_reset function is added to amdgpu_pmops_suspend, which causes the S3 test of raven2 to fail, so the asic_reset of raven2 is ignored here.
Fixes: daf8de0874ab5b ("drm/amdgpu: always reset the asic in suspend (v2)") Signed-off-by: Chen Gong curry.gong@amd.com Acked-by: Alex Deucher alexander.deucher@amd.com Reviewed-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/soc15.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c +++ b/drivers/gpu/drm/amd/amdgpu/soc15.c @@ -619,8 +619,8 @@ soc15_asic_reset_method(struct amdgpu_de static int soc15_asic_reset(struct amdgpu_device *adev) { /* original raven doesn't have full asic reset */ - if ((adev->apu_flags & AMD_APU_IS_RAVEN) && - !(adev->apu_flags & AMD_APU_IS_RAVEN2)) + if ((adev->apu_flags & AMD_APU_IS_RAVEN) || + (adev->apu_flags & AMD_APU_IS_RAVEN2)) return 0;
switch (soc15_asic_reset_method(adev)) {
From: Qiang Yu qiang.yu@amd.com
commit c1a66c3bc425ff93774fb2f6eefa67b83170dd7e upstream.
Workstation application ANSA/META v21.1.4 gets this error in dmesg when running the CI test suite provided by ANSA/META: [drm:amdgpu_gem_va_ioctl [amdgpu]] *ERROR* Couldn't update BO_VA (-16)

This is caused by:
1. create a 256MB buffer in invisible VRAM
2. CPU map the buffer and access it, causing a vm_fault that tries to move it to visible VRAM
3. force visible VRAM space and traverse all VRAM BOs to check if evicting this BO is valuable
4. when checking a VM BO (in invisible VRAM), amdgpu_vm_evictable() will set amdgpu_vm->evicting, but later, since the BO is not in visible VRAM, it won't really evict it and so won't add it to amdgpu_vm->evicted
5. before the next CS clears amdgpu_vm->evicting, a user VM ops ioctl will pass amdgpu_vm_ready() (which checks amdgpu_vm->evicted) but fail in amdgpu_vm_bo_update_mapping() (which checks amdgpu_vm->evicting) and emit this error log
This error won't affect functionality, as the next CS will finish the waiting VM ops. But we'd better avoid the error log by checking the amdgpu_vm->evicting flag in amdgpu_vm_ready(), so amdgpu_vm_bo_update_mapping() is not called in the first place.

Another reason is that the amdgpu_vm->evicted list holds all BOs (both user buffers and page tables), but only the eviction of page table BOs prevents VM ops. The amdgpu_vm->evicting flag is set only for page table BOs, so we should use the evicting flag instead of the evicted list in amdgpu_vm_ready().
The side effect of this change is: previously blocked VM op (user buffer in "evicted" list but no page table in it) gets done immediately.
v2: update commit comments.
Acked-by: Paul Menzel pmenzel@molgen.mpg.de Reviewed-by: Christian König christian.koenig@amd.com Signed-off-by: Qiang Yu qiang.yu@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c @@ -768,11 +768,16 @@ int amdgpu_vm_validate_pt_bos(struct amd * Check if all VM PDs/PTs are ready for updates * * Returns: - * True if eviction list is empty. + * True if VM is not evicting. */ bool amdgpu_vm_ready(struct amdgpu_vm *vm) { - return list_empty(&vm->evicted); + bool ret; + + amdgpu_vm_eviction_lock(vm); + ret = !vm->evicting; + amdgpu_vm_eviction_unlock(vm); + return ret; }
/**
[Public]
-----Original Message----- From: Greg Kroah-Hartman gregkh@linuxfoundation.org Sent: Monday, February 28, 2022 12:23 PM To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman gregkh@linuxfoundation.org; stable@vger.kernel.org; Paul Menzel pmenzel@molgen.mpg.de; Koenig, Christian Christian.Koenig@amd.com; Yu, Qiang Qiang.Yu@amd.com; Deucher, Alexander Alexander.Deucher@amd.com Subject: [PATCH 5.16 022/164] drm/amdgpu: check vm ready by amdgpu_vm->evicting flag
From: Qiang Yu qiang.yu@amd.com
commit c1a66c3bc425ff93774fb2f6eefa67b83170dd7e upstream.
Workstation application ANSA/META v21.1.4 get this error dmesg when running CI test suite provided by ANSA/META: [drm:amdgpu_gem_va_ioctl [amdgpu]] *ERROR* Couldn't update BO_VA (-16)
This is caused by:
1. create a 256MB buffer in invisible VRAM
2. CPU map the buffer and access it causes vm_fault and try to move it to visible VRAM
3. force visible VRAM space and traverse all VRAM bos to check if evicting this bo is valuable
4. when checking a VM bo (in invisible VRAM), amdgpu_vm_evictable() will set amdgpu_vm->evicting, but latter due to not in visible VRAM, won't really evict it so not add it to amdgpu_vm->evicted
5. before next CS to clear the amdgpu_vm->evicting, user VM ops ioctl will pass amdgpu_vm_ready() (check amdgpu_vm->evicted) but fail in amdgpu_vm_bo_update_mapping() (check amdgpu_vm->evicting) and get this error log
This error won't affect functionality as next CS will finish the waiting VM ops. But we'd better clear the error log by checking the amdgpu_vm->evicting flag in amdgpu_vm_ready() to stop calling amdgpu_vm_bo_update_mapping() later.
Another reason is amdgpu_vm->evicted list holds all BOs (both user buffer and page table), but only page table BOs' eviction prevent VM ops. amdgpu_vm->evicting flag is set only for page table BOs, so we should use evicting flag instead of evicted list in amdgpu_vm_ready().
The side effect of this change is: previously blocked VM op (user buffer in "evicted" list but no page table in it) gets done immediately.
v2: update commit comments.
Acked-by: Paul Menzel pmenzel@molgen.mpg.de Reviewed-by: Christian König christian.koenig@amd.com Signed-off-by: Qiang Yu qiang.yu@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
A regression was reported against this patch in 5.17. Please drop for now.
Thanks,
Alex
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c @@ -768,11 +768,16 @@ int amdgpu_vm_validate_pt_bos(struct amd
 * Check if all VM PDs/PTs are ready for updates
 *
 * Returns:
- * True if eviction list is empty.
+ * True if VM is not evicting.
 */
 bool amdgpu_vm_ready(struct amdgpu_vm *vm)
 {
- return list_empty(&vm->evicted);
+ bool ret;
+
+ amdgpu_vm_eviction_lock(vm);
+ ret = !vm->evicting;
+ amdgpu_vm_eviction_unlock(vm);
+ return ret;
 }

 /**
On Mon, Feb 28, 2022 at 06:24:53PM +0000, Deucher, Alexander wrote:
[Public]
-----Original Message----- From: Greg Kroah-Hartman gregkh@linuxfoundation.org Sent: Monday, February 28, 2022 12:23 PM To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman gregkh@linuxfoundation.org; stable@vger.kernel.org; Paul Menzel pmenzel@molgen.mpg.de; Koenig, Christian Christian.Koenig@amd.com; Yu, Qiang Qiang.Yu@amd.com; Deucher, Alexander Alexander.Deucher@amd.com Subject: [PATCH 5.16 022/164] drm/amdgpu: check vm ready by amdgpu_vm->evicting flag
From: Qiang Yu qiang.yu@amd.com
commit c1a66c3bc425ff93774fb2f6eefa67b83170dd7e upstream.
Workstation application ANSA/META v21.1.4 get this error dmesg when running CI test suite provided by ANSA/META: [drm:amdgpu_gem_va_ioctl [amdgpu]] *ERROR* Couldn't update BO_VA (-16)
This is caused by:
1. create a 256MB buffer in invisible VRAM
2. CPU map the buffer and access it causes vm_fault and try to move it to visible VRAM
3. force visible VRAM space and traverse all VRAM bos to check if evicting this bo is valuable
4. when checking a VM bo (in invisible VRAM), amdgpu_vm_evictable() will set amdgpu_vm->evicting, but latter due to not in visible VRAM, won't really evict it so not add it to amdgpu_vm->evicted
5. before next CS to clear the amdgpu_vm->evicting, user VM ops ioctl will pass amdgpu_vm_ready() (check amdgpu_vm->evicted) but fail in amdgpu_vm_bo_update_mapping() (check amdgpu_vm->evicting) and get this error log
This error won't affect functionality as next CS will finish the waiting VM ops. But we'd better clear the error log by checking the amdgpu_vm->evicting flag in amdgpu_vm_ready() to stop calling amdgpu_vm_bo_update_mapping() later.
Another reason is amdgpu_vm->evicted list holds all BOs (both user buffer and page table), but only page table BOs' eviction prevent VM ops. amdgpu_vm->evicting flag is set only for page table BOs, so we should use evicting flag instead of evicted list in amdgpu_vm_ready().
The side effect of this change is: previously blocked VM op (user buffer in "evicted" list but no page table in it) gets done immediately.
v2: update commit comments.
Acked-by: Paul Menzel pmenzel@molgen.mpg.de Reviewed-by: Christian König christian.koenig@amd.com Signed-off-by: Qiang Yu qiang.yu@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
A regression was reported against this patch in 5.17. Please drop for now.
Dropped from 5.10.y, 5.15.y, and 5.16.y. Please feel free to send it to stable@vger.kernel.org once it is working correctly.
thanks,
greg k-h
From: Ville Syrjälä ville.syrjala@linux.intel.com
commit 3f33364836aacc28cd430d22cf22379e3b5ecd77 upstream.
adlp+ adds some extra bits to the QGV point mask. The code attempts to handle that but forgot to actually make sure we can store those bits in the bw state. Fix it.
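To illustrate why u8 is too narrow (the exact bit layout below is an assumption for the example, not necessarily the i915 encoding): the PSF GV point bits are packed above the regular QGV point bits, so once all QGV bits are in use the PSF bits land at bit 8 and beyond, and a u8 mask silently drops them.

	/* Hypothetical packing: low bits = QGV points, PSF GV points above.
	 * With num_qgv_points == 8 the PSF bits occupy bits 8+, which would
	 * be truncated by a u8 qgv_points_mask. */
	static u16 pack_gv_points(u8 qgv_mask, u8 psf_mask, unsigned int num_qgv_points)
	{
		return qgv_mask | ((u16)psf_mask << num_qgv_points);
	}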
Cc: stable@vger.kernel.org Cc: Stanislav Lisovskiy stanislav.lisovskiy@intel.com Fixes: 192fbfb76744 ("drm/i915: Implement PSF GV point support") Signed-off-by: Ville Syrjälä ville.syrjala@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20220214091811.13725-4-ville.s... Reviewed-by: Stanislav Lisovskiy stanislav.lisovskiy@intel.com (cherry picked from commit c0299cc9840b3805205173cc77782f317b78ea0e) Signed-off-by: Tvrtko Ursulin tvrtko.ursulin@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/i915/display/intel_bw.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/drivers/gpu/drm/i915/display/intel_bw.h +++ b/drivers/gpu/drm/i915/display/intel_bw.h @@ -30,19 +30,19 @@ struct intel_bw_state { */ u8 pipe_sagv_reject;
+ /* bitmask of active pipes */ + u8 active_pipes; + /* * Current QGV points mask, which restricts * some particular SAGV states, not to confuse * with pipe_sagv_mask. */ - u8 qgv_points_mask; + u16 qgv_points_mask;
unsigned int data_rate[I915_MAX_PIPES]; u8 num_active_planes[I915_MAX_PIPES];
- /* bitmask of active pipes */ - u8 active_pipes; - int min_cdclk; };
From: Imre Deak imre.deak@intel.com
commit a40ee54e9a0958406469d46def03eec62aea0b69 upstream.
BIOS may leave a TypeC PHY in a connected state even though the corresponding port is disabled. This will prevent any hotplug events from being signalled (after the monitor deasserts and then reasserts its HPD) until the PHY is disconnected and so the driver will not detect a connected sink. Rebooting with the PHY in the connected state also results in a system hang.
Fix the above by disconnecting TypeC PHYs on disabled ports.
Before commit 64851a32c463e5 the PHY connected state was read out even for disabled ports and later the PHY got disconnected as a side effect of a tc_port_lock/unlock() sequence (during connector probing), hence recovering the port's hotplug functionality.
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5014 Fixes: 64851a32c463 ("drm/i915/tc: Add a mode for the TypeC PHY's disconnected state") Cc: stable@vger.kernel.org # v5.16+ Cc: José Roberto de Souza jose.souza@intel.com Signed-off-by: Imre Deak imre.deak@intel.com Reviewed-by: José Roberto de Souza jose.souza@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20220217152237.670220-1-imre.d... (cherry picked from commit ed0ccf349ffd9c80e7376d4d8c608643de990e86) Signed-off-by: Tvrtko Ursulin tvrtko.ursulin@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/i915/display/intel_tc.c | 26 ++++++++++++++++++++------ 1 file changed, 20 insertions(+), 6 deletions(-)
--- a/drivers/gpu/drm/i915/display/intel_tc.c +++ b/drivers/gpu/drm/i915/display/intel_tc.c @@ -691,6 +691,8 @@ void intel_tc_port_sanitize(struct intel { struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); struct intel_encoder *encoder = &dig_port->base; + intel_wakeref_t tc_cold_wref; + enum intel_display_power_domain domain; int active_links = 0;
mutex_lock(&dig_port->tc_lock); @@ -702,12 +704,11 @@ void intel_tc_port_sanitize(struct intel
drm_WARN_ON(&i915->drm, dig_port->tc_mode != TC_PORT_DISCONNECTED); drm_WARN_ON(&i915->drm, dig_port->tc_lock_wakeref); - if (active_links) { - enum intel_display_power_domain domain; - intel_wakeref_t tc_cold_wref = tc_cold_block(dig_port, &domain);
- dig_port->tc_mode = intel_tc_port_get_current_mode(dig_port); + tc_cold_wref = tc_cold_block(dig_port, &domain);
+ dig_port->tc_mode = intel_tc_port_get_current_mode(dig_port); + if (active_links) { if (!icl_tc_phy_is_connected(dig_port)) drm_dbg_kms(&i915->drm, "Port %s: PHY disconnected with %d active link(s)\n", @@ -716,10 +717,23 @@ void intel_tc_port_sanitize(struct intel
dig_port->tc_lock_wakeref = tc_cold_block(dig_port, &dig_port->tc_lock_power_domain); - - tc_cold_unblock(dig_port, domain, tc_cold_wref); + } else { + /* + * TBT-alt is the default mode in any case the PHY ownership is not + * held (regardless of the sink's connected live state), so + * we'll just switch to disconnected mode from it here without + * a note. + */ + if (dig_port->tc_mode != TC_PORT_TBT_ALT) + drm_dbg_kms(&i915->drm, + "Port %s: PHY left in %s mode on disabled port, disconnecting it\n", + dig_port->tc_port_name, + tc_port_mode_name(dig_port->tc_mode)); + icl_tc_phy_disconnect(dig_port); }
+ tc_cold_unblock(dig_port, domain, tc_cold_wref); + drm_dbg_kms(&i915->drm, "Port %s: sanitize mode (%s)\n", dig_port->tc_port_name, tc_port_mode_name(dig_port->tc_mode));
From: Ville Syrjälä ville.syrjala@linux.intel.com
commit afc189df6bcc6be65961deb54e15ec60e7f85337 upstream.
When changing between SAGV vs. no SAGV on tgl+ we have to update the use_sagv_wm flag for all the crtcs or else an active pipe not already in the state will end up using the wrong watermarks. That is especially bad when we end up with the tighter non-SAGV watermarks with SAGV enabled. Usually ends up in underruns.
Cc: stable@vger.kernel.org Reviewed-by: Stanislav Lisovskiy stanislav.lisovskiy@intel.com Fixes: 7241c57d3140 ("drm/i915: Add TGL+ SAGV support") Signed-off-by: Ville Syrjälä ville.syrjala@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20220218064039.12834-2-ville.s... (cherry picked from commit 8dd8ffb824ca7b897ce9f2082ffa7e64831c22dc) Signed-off-by: Tvrtko Ursulin tvrtko.ursulin@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/i915/intel_pm.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-)
--- a/drivers/gpu/drm/i915/intel_pm.c +++ b/drivers/gpu/drm/i915/intel_pm.c @@ -4019,6 +4019,17 @@ static int intel_compute_sagv_mask(struc return ret; }
+ if (intel_can_enable_sagv(dev_priv, new_bw_state) != + intel_can_enable_sagv(dev_priv, old_bw_state)) { + ret = intel_atomic_serialize_global_state(&new_bw_state->base); + if (ret) + return ret; + } else if (new_bw_state->pipe_sagv_reject != old_bw_state->pipe_sagv_reject) { + ret = intel_atomic_lock_global_state(&new_bw_state->base); + if (ret) + return ret; + } + for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) { struct skl_pipe_wm *pipe_wm = &new_crtc_state->wm.skl.optimal; @@ -4034,17 +4045,6 @@ static int intel_compute_sagv_mask(struc intel_can_enable_sagv(dev_priv, new_bw_state); }
- if (intel_can_enable_sagv(dev_priv, new_bw_state) != - intel_can_enable_sagv(dev_priv, old_bw_state)) { - ret = intel_atomic_serialize_global_state(&new_bw_state->base); - if (ret) - return ret; - } else if (new_bw_state->pipe_sagv_reject != old_bw_state->pipe_sagv_reject) { - ret = intel_atomic_lock_global_state(&new_bw_state->base); - if (ret) - return ret; - } - return 0; }
From: Ville Syrjälä ville.syrjala@linux.intel.com
commit ec663bca9128f13eada25cd0446e7fcb5fcdc088 upstream.
If the only thing that is changing is SAGV vs. no SAGV, but the number of active planes and the total data rates end up unchanged, we currently bail out of intel_bw_atomic_check() early and forget to actually compute the new QGV point mask, and thus won't actually enable/disable SAGV as requested. This ends up poorly if we end up running with SAGV enabled when we shouldn't. Usually ends up in underruns.
To fix this let's go through the QGV point mask computation if either the data rates/number of planes, or the state of SAGV is changing.
v2: Check more carefully if things are changing to avoid the extra calculations/debugs from introducing unwanted overhead
Cc: stable@vger.kernel.org Reviewed-by: Stanislav Lisovskiy stanislav.lisovskiy@intel.com #v1 Fixes: 20f505f22531 ("drm/i915: Restrict qgv points which don't have enough bandwidth.") Signed-off-by: Ville Syrjälä ville.syrjala@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20220218064039.12834-3-ville.s... (cherry picked from commit 6b728595ffa51c087343c716bccbfc260f120e72) Signed-off-by: Tvrtko Ursulin tvrtko.ursulin@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/i915/display/intel_bw.c | 18 ++++++++++++++++-- 1 file changed, 16 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/i915/display/intel_bw.c +++ b/drivers/gpu/drm/i915/display/intel_bw.c @@ -681,6 +681,7 @@ int intel_bw_atomic_check(struct intel_a unsigned int max_bw_point = 0, max_bw = 0; unsigned int num_qgv_points = dev_priv->max_bw[0].num_qgv_points; unsigned int num_psf_gv_points = dev_priv->max_bw[0].num_psf_gv_points; + bool changed = false; u32 mask = 0;
/* FIXME earlier gens need some checks too */ @@ -724,6 +725,8 @@ int intel_bw_atomic_check(struct intel_a new_bw_state->data_rate[crtc->pipe] = new_data_rate; new_bw_state->num_active_planes[crtc->pipe] = new_active_planes;
+ changed = true; + drm_dbg_kms(&dev_priv->drm, "pipe %c data rate %u num active planes %u\n", pipe_name(crtc->pipe), @@ -731,7 +734,19 @@ int intel_bw_atomic_check(struct intel_a new_bw_state->num_active_planes[crtc->pipe]); }
- if (!new_bw_state) + old_bw_state = intel_atomic_get_old_bw_state(state); + new_bw_state = intel_atomic_get_new_bw_state(state); + + if (new_bw_state && + intel_can_enable_sagv(dev_priv, old_bw_state) != + intel_can_enable_sagv(dev_priv, new_bw_state)) + changed = true; + + /* + * If none of our inputs (data rates, number of active + * planes, SAGV yes/no) changed then nothing to do here. + */ + if (!changed) return 0;
ret = intel_atomic_lock_global_state(&new_bw_state->base); @@ -814,7 +829,6 @@ int intel_bw_atomic_check(struct intel_a */ new_bw_state->qgv_points_mask = ~allowed_points & mask;
- old_bw_state = intel_atomic_get_old_bw_state(state); /* * If the actual mask had changed we need to make sure that * the commits are serialized(in case this is a nomodeset, nonblocking)
From: Oliver Neukum oneukum@suse.com
commit e9da0b56fe27206b49f39805f7dcda8a89379062 upstream.
A malicious device can leak heap data to user space providing bogus frame lengths. Introduce a sanity check.
Signed-off-by: Oliver Neukum oneukum@suse.com Reviewed-by: Grant Grundler grundler@chromium.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/usb/sr9700.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/usb/sr9700.c +++ b/drivers/net/usb/sr9700.c @@ -413,7 +413,7 @@ static int sr9700_rx_fixup(struct usbnet /* ignore the CRC length */ len = (skb->data[1] | (skb->data[2] << 8)) - 4;
- if (len > ETH_FRAME_LEN) + if (len > ETH_FRAME_LEN || len > skb->len) return 0;
/* the last packet of current skb */
From: Oliver Neukum oneukum@suse.com
commit 6605cc67ca18b9d583eb96e18a20f5f4e726103c upstream.
This SL-6000 reports Direct Line, not Ethernet.
v2: added Reporter and Link
Signed-off-by: Oliver Neukum oneukum@suse.com Reported-by: Ross Maynard bids.7405@bigpond.com Link: https://bugzilla.kernel.org/show_bug.cgi?id=215361 Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/usb/cdc_ether.c | 12 ++++++++++++ drivers/net/usb/zaurus.c | 12 ++++++++++++ 2 files changed, 24 insertions(+)
--- a/drivers/net/usb/cdc_ether.c +++ b/drivers/net/usb/cdc_ether.c @@ -583,6 +583,11 @@ static const struct usb_device_id produc .bInterfaceSubClass = USB_CDC_SUBCLASS_ETHERNET, \ .bInterfaceProtocol = USB_CDC_PROTO_NONE
+#define ZAURUS_FAKE_INTERFACE \ + .bInterfaceClass = USB_CLASS_COMM, \ + .bInterfaceSubClass = USB_CDC_SUBCLASS_MDLM, \ + .bInterfaceProtocol = USB_CDC_PROTO_NONE + /* SA-1100 based Sharp Zaurus ("collie"), or compatible; * wire-incompatible with true CDC Ethernet implementations. * (And, it seems, needlessly so...) @@ -638,6 +643,13 @@ static const struct usb_device_id produc .driver_info = 0, }, { .match_flags = USB_DEVICE_ID_MATCH_INT_INFO + | USB_DEVICE_ID_MATCH_DEVICE, + .idVendor = 0x04DD, + .idProduct = 0x9032, /* SL-6000 */ + ZAURUS_FAKE_INTERFACE, + .driver_info = 0, +}, { + .match_flags = USB_DEVICE_ID_MATCH_INT_INFO | USB_DEVICE_ID_MATCH_DEVICE, .idVendor = 0x04DD, /* reported with some C860 units */ --- a/drivers/net/usb/zaurus.c +++ b/drivers/net/usb/zaurus.c @@ -256,6 +256,11 @@ static const struct usb_device_id produc .bInterfaceSubClass = USB_CDC_SUBCLASS_ETHERNET, \ .bInterfaceProtocol = USB_CDC_PROTO_NONE
+#define ZAURUS_FAKE_INTERFACE \ + .bInterfaceClass = USB_CLASS_COMM, \ + .bInterfaceSubClass = USB_CDC_SUBCLASS_MDLM, \ + .bInterfaceProtocol = USB_CDC_PROTO_NONE + /* SA-1100 based Sharp Zaurus ("collie"), or compatible. */ { .match_flags = USB_DEVICE_ID_MATCH_INT_INFO @@ -315,6 +320,13 @@ static const struct usb_device_id produc .driver_info = ZAURUS_PXA_INFO, }, { .match_flags = USB_DEVICE_ID_MATCH_INT_INFO + | USB_DEVICE_ID_MATCH_DEVICE, + .idVendor = 0x04DD, + .idProduct = 0x9032, /* SL-6000 */ + ZAURUS_FAKE_INTERFACE, + .driver_info = (unsigned long)&bogus_mdlm_info, +}, { + .match_flags = USB_DEVICE_ID_MATCH_INT_INFO | USB_DEVICE_ID_MATCH_DEVICE, .idVendor = 0x04DD, /* reported with some C860 units */
From: Oliver Neukum oneukum@suse.com
commit 8d2b1a1ec9f559d30b724877da4ce592edc41fdc upstream.
A broken device may give an extreme offset like 0xFFF0 and a reasonable length for a fragment. In the sanity check as formulated now, this will create an integer overflow, defeating the sanity check. Both offset and offset + len need to be checked in such a manner that no overflow can occur. And those quantities should be unsigned.
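As a worked example of the problem (values invented for illustration): with 32-bit NDP entries a device can report offset = 0xFFFFFFF0 and len = 0x100; computed as unsigned 32-bit, offset + len wraps to 0xF0, so the old "(offset + len) > skb_in->len" test passes even though the fragment lies far outside the buffer. Checking the two quantities separately, with unsigned types, cannot wrap:

	/* Sketch of an overflow-proof bounds check for a device-supplied fragment. */
	static bool frag_in_bounds(unsigned int offset, unsigned int len,
				   unsigned int skb_len)
	{
		if (offset > skb_len)		/* rejects the huge offset itself */
			return false;
		if (len > skb_len - offset)	/* cannot wrap: offset <= skb_len here */
			return false;
		return true;
	}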
Signed-off-by: Oliver Neukum oneukum@suse.com Reviewed-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/usb/cdc_ncm.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/drivers/net/usb/cdc_ncm.c +++ b/drivers/net/usb/cdc_ncm.c @@ -1715,10 +1715,10 @@ int cdc_ncm_rx_fixup(struct usbnet *dev, { struct sk_buff *skb; struct cdc_ncm_ctx *ctx = (struct cdc_ncm_ctx *)dev->data[0]; - int len; + unsigned int len; int nframes; int x; - int offset; + unsigned int offset; union { struct usb_cdc_ncm_ndp16 *ndp16; struct usb_cdc_ncm_ndp32 *ndp32; @@ -1790,8 +1790,8 @@ next_ndp: break; }
- /* sanity checking */ - if (((offset + len) > skb_in->len) || + /* sanity checking - watch out for integer wrap*/ + if ((offset > skb_in->len) || (len > skb_in->len - offset) || (len > ctx->rx_max) || (len < ETH_HLEN)) { netif_dbg(dev, rx_err, dev->net, "invalid frame detected (ignored) offset[%u]=%u, length=%u, skb=%p\n",
From: Eric Dumazet edumazet@google.com
commit 75063c9294fb239bbe64eb72141b6871fe526d29 upstream.
Calling nf_defrag_ipv4_disable() instead of nf_defrag_ipv6_disable() was probably not the intent.
I found this by code inspection, while chasing a possible issue in TPROXY.
Fixes: de8c12110a13 ("netfilter: disable defrag once its no longer needed") Signed-off-by: Eric Dumazet edumazet@google.com Reviewed-by: Florian Westphal fw@strlen.de Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/netfilter/xt_socket.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/net/netfilter/xt_socket.c +++ b/net/netfilter/xt_socket.c @@ -221,7 +221,7 @@ static void socket_mt_destroy(const stru if (par->family == NFPROTO_IPV4) nf_defrag_ipv4_disable(par->net); else if (par->family == NFPROTO_IPV6) - nf_defrag_ipv4_disable(par->net); + nf_defrag_ipv6_disable(par->net); }
static struct xt_match socket_mt_reg[] __read_mostly = {
From: Pablo Neira Ayuso pablo@netfilter.org
commit 2874b7911132f6975e668f6849c8ac93bc4e1f35 upstream.
nf_defrag_ipv6_disable() requires CONFIG_IP6_NF_IPTABLES.
Fixes: 75063c9294fb ("netfilter: xt_socket: fix a typo in socket_mt_destroy()") Reported-by: kernel test robot lkp@intel.com Reviewed-by: Eric Dumazet edumazet@google.com Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/netfilter/xt_socket.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/net/netfilter/xt_socket.c +++ b/net/netfilter/xt_socket.c @@ -220,8 +220,10 @@ static void socket_mt_destroy(const stru { if (par->family == NFPROTO_IPV4) nf_defrag_ipv4_disable(par->net); +#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES) else if (par->family == NFPROTO_IPV6) nf_defrag_ipv6_disable(par->net); +#endif }
static struct xt_match socket_mt_reg[] __read_mostly = {
From: Pablo Neira Ayuso pablo@netfilter.org
commit b1a5983f56e371046dcf164f90bfaf704d2b89f6 upstream.
The immediate verdict expression needs to allocate one slot in the flow offload action array; however, the immediate data expression does not need to do so.

The fwd and dup expressions also need to allocate one slot; this is currently missing.
Add a new offload_action interface to report if this expression needs to allocate one slot in the flow offload action array.
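A sketch of the new per-expression hook (mirroring the hunks below): instead of a static NFT_OFFLOAD_F_ACTION flag on the ops, each expression reports at runtime whether this particular instance consumes an action slot, which lets the immediate expression answer differently for verdicts and plain data:

	/* Illustration: only a verdict occupies an entry in the flow
	 * offload action array; immediate data loads do not. */
	static bool nft_immediate_offload_action_sketch(const struct nft_expr *expr)
	{
		const struct nft_immediate_expr *priv = nft_expr_priv(expr);

		return priv->dreg == NFT_REG_VERDICT;
	}

The core then counts slots with "if (expr->ops->offload_action && expr->ops->offload_action(expr)) num_actions++;", and the fwd/dup callbacks simply return true.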
Fixes: be2861dc36d7 ("netfilter: nft_{fwd,dup}_netdev: add offload support") Reported-and-tested-by: Nick Gregory Nick.Gregory@Sophos.com Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/netfilter/nf_tables.h | 2 +- include/net/netfilter/nf_tables_offload.h | 2 -- net/netfilter/nf_tables_offload.c | 3 ++- net/netfilter/nft_dup_netdev.c | 6 ++++++ net/netfilter/nft_fwd_netdev.c | 6 ++++++ net/netfilter/nft_immediate.c | 12 +++++++++++- 6 files changed, 26 insertions(+), 5 deletions(-)
--- a/include/net/netfilter/nf_tables.h +++ b/include/net/netfilter/nf_tables.h @@ -889,9 +889,9 @@ struct nft_expr_ops { int (*offload)(struct nft_offload_ctx *ctx, struct nft_flow_rule *flow, const struct nft_expr *expr); + bool (*offload_action)(const struct nft_expr *expr); void (*offload_stats)(struct nft_expr *expr, const struct flow_stats *stats); - u32 offload_flags; const struct nft_expr_type *type; void *data; }; --- a/include/net/netfilter/nf_tables_offload.h +++ b/include/net/netfilter/nf_tables_offload.h @@ -67,8 +67,6 @@ struct nft_flow_rule { struct flow_rule *rule; };
-#define NFT_OFFLOAD_F_ACTION (1 << 0) - void nft_flow_rule_set_addr_type(struct nft_flow_rule *flow, enum flow_dissector_key_id addr_type);
--- a/net/netfilter/nf_tables_offload.c +++ b/net/netfilter/nf_tables_offload.c @@ -94,7 +94,8 @@ struct nft_flow_rule *nft_flow_rule_crea
expr = nft_expr_first(rule); while (nft_expr_more(rule, expr)) { - if (expr->ops->offload_flags & NFT_OFFLOAD_F_ACTION) + if (expr->ops->offload_action && + expr->ops->offload_action(expr)) num_actions++;
expr = nft_expr_next(expr); --- a/net/netfilter/nft_dup_netdev.c +++ b/net/netfilter/nft_dup_netdev.c @@ -67,6 +67,11 @@ static int nft_dup_netdev_offload(struct return nft_fwd_dup_netdev_offload(ctx, flow, FLOW_ACTION_MIRRED, oif); }
+static bool nft_dup_netdev_offload_action(const struct nft_expr *expr) +{ + return true; +} + static struct nft_expr_type nft_dup_netdev_type; static const struct nft_expr_ops nft_dup_netdev_ops = { .type = &nft_dup_netdev_type, @@ -75,6 +80,7 @@ static const struct nft_expr_ops nft_dup .init = nft_dup_netdev_init, .dump = nft_dup_netdev_dump, .offload = nft_dup_netdev_offload, + .offload_action = nft_dup_netdev_offload_action, };
static struct nft_expr_type nft_dup_netdev_type __read_mostly = { --- a/net/netfilter/nft_fwd_netdev.c +++ b/net/netfilter/nft_fwd_netdev.c @@ -77,6 +77,11 @@ static int nft_fwd_netdev_offload(struct return nft_fwd_dup_netdev_offload(ctx, flow, FLOW_ACTION_REDIRECT, oif); }
+static bool nft_fwd_netdev_offload_action(const struct nft_expr *expr) +{ + return true; +} + struct nft_fwd_neigh { u8 sreg_dev; u8 sreg_addr; @@ -219,6 +224,7 @@ static const struct nft_expr_ops nft_fwd .dump = nft_fwd_netdev_dump, .validate = nft_fwd_validate, .offload = nft_fwd_netdev_offload, + .offload_action = nft_fwd_netdev_offload_action, };
static const struct nft_expr_ops * --- a/net/netfilter/nft_immediate.c +++ b/net/netfilter/nft_immediate.c @@ -213,6 +213,16 @@ static int nft_immediate_offload(struct return 0; }
+static bool nft_immediate_offload_action(const struct nft_expr *expr) +{ + const struct nft_immediate_expr *priv = nft_expr_priv(expr); + + if (priv->dreg == NFT_REG_VERDICT) + return true; + + return false; +} + static const struct nft_expr_ops nft_imm_ops = { .type = &nft_imm_type, .size = NFT_EXPR_SIZE(sizeof(struct nft_immediate_expr)), @@ -224,7 +234,7 @@ static const struct nft_expr_ops nft_imm .dump = nft_immediate_dump, .validate = nft_immediate_validate, .offload = nft_immediate_offload, - .offload_flags = NFT_OFFLOAD_F_ACTION, + .offload_action = nft_immediate_offload_action, };
struct nft_expr_type nft_imm_type __read_mostly = {
From: Xin Long lucien.xin@gmail.com
commit cd33bdcbead882c2e58fdb4a54a7bd75b610a452 upstream.
As Jakub noticed, prints should be avoided on the datapath. Also, as packets would never come to the else branch in ping_lookup(), remove pr_err() from ping_lookup().
Fixes: 35a79e64de29 ("ping: fix the dif and sdif check in ping_lookup") Reported-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Xin Long lucien.xin@gmail.com Link: https://lore.kernel.org/r/1ef3f2fcd31bd681a193b1fcf235eee1603819bd.164567406... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/ipv4/ping.c | 1 - 1 file changed, 1 deletion(-)
--- a/net/ipv4/ping.c +++ b/net/ipv4/ping.c @@ -187,7 +187,6 @@ static struct sock *ping_lookup(struct n (int)ident, &ipv6_hdr(skb)->daddr, dif); #endif } else { - pr_err("ping: protocol(%x) is not supported\n", ntohs(skb->protocol)); return NULL; }
From: Mateusz Palczewski mateusz.palczewski@intel.com
commit fe20371578ef640069e6ae9fa8038f60e7908565 upstream.
Revert a patch that, instead of fixing an AQ error when trying to reset the BW limit, introduced several regressions related to creating and managing TCs. Currently there are errors when creating a TC on both PF and VF.
Error log: [17428.783095] i40e 0000:3b:00.1: AQ command Config VSI BW allocation per TC failed = 14 [17428.783107] i40e 0000:3b:00.1: Failed configuring TC map 0 for VSI 391 [17428.783254] i40e 0000:3b:00.1: AQ command Config VSI BW allocation per TC failed = 14 [17428.783259] i40e 0000:3b:00.1: Unable to configure TC map 0 for VSI 391
This reverts commit 3d2504663c41104b4359a15f35670cfa82de1bbf.
Fixes: 3d2504663c41 (i40e: Fix reset bw limit when DCB enabled with 1 TC) Signed-off-by: Mateusz Palczewski mateusz.palczewski@intel.com Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Link: https://lore.kernel.org/r/20220223175347.1690692-1-anthony.l.nguyen@intel.co... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/intel/i40e/i40e_main.c | 12 +----------- 1 file changed, 1 insertion(+), 11 deletions(-)
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -5372,15 +5372,7 @@ static int i40e_vsi_configure_bw_alloc(s /* There is no need to reset BW when mqprio mode is on. */ if (pf->flags & I40E_FLAG_TC_MQPRIO) return 0; - - if (!vsi->mqprio_qopt.qopt.hw) { - if (pf->flags & I40E_FLAG_DCB_ENABLED) - goto skip_reset; - - if (IS_ENABLED(CONFIG_I40E_DCB) && - i40e_dcb_hw_get_num_tc(&pf->hw) == 1) - goto skip_reset; - + if (!vsi->mqprio_qopt.qopt.hw && !(pf->flags & I40E_FLAG_DCB_ENABLED)) { ret = i40e_set_bw_limit(vsi, vsi->seid, 0); if (ret) dev_info(&pf->pdev->dev, @@ -5388,8 +5380,6 @@ static int i40e_vsi_configure_bw_alloc(s vsi->seid); return ret; } - -skip_reset: memset(&bw_data, 0, sizeof(bw_data)); bw_data.tc_valid_bits = enabled_tc; for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
From: Mikko Perttunen mperttunen@nvidia.com
commit 184b58fa816fb5ee1854daf0d430766422bf2a77 upstream.
The new TegraDRM UAPI uses syncpoint waiting with the timeout set to zero to indicate reading the syncpoint value. To support that, we need to always return the syncpoint value when waiting.
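A hedged caller-side sketch of the resulting semantics (the helper name is hypothetical; host1x_syncpt_wait() is the real API):

	/* With timeout == 0 the call never blocks: a -EAGAIN return only means
	 * the threshold has not been reached yet, and after this patch *value
	 * is filled with the current syncpoint value either way. */
	static u32 read_syncpt_value(struct host1x_syncpt *sp, u32 thresh)
	{
		u32 val = 0;

		host1x_syncpt_wait(sp, thresh, 0, &val);

		return val;
	}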
Fixes: 44e961381354 ("drm/tegra: Implement syncpoint wait UAPI") Signed-off-by: Mikko Perttunen mperttunen@nvidia.com Signed-off-by: Thierry Reding treding@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/host1x/syncpt.c | 19 ++----------------- 1 file changed, 2 insertions(+), 17 deletions(-)
--- a/drivers/gpu/host1x/syncpt.c +++ b/drivers/gpu/host1x/syncpt.c @@ -225,27 +225,12 @@ int host1x_syncpt_wait(struct host1x_syn void *ref; struct host1x_waitlist *waiter; int err = 0, check_count = 0; - u32 val;
if (value) - *value = 0; - - /* first check cache */ - if (host1x_syncpt_is_expired(sp, thresh)) { - if (value) - *value = host1x_syncpt_load(sp); + *value = host1x_syncpt_load(sp);
+ if (host1x_syncpt_is_expired(sp, thresh)) return 0; - } - - /* try to read from register */ - val = host1x_hw_syncpt_load(sp->host, sp); - if (host1x_syncpt_is_expired(sp, thresh)) { - if (value) - *value = val; - - goto done; - }
if (!timeout) { err = -EAGAIN;
From: Zhengjun Xing zhengjun.xing@linux.intel.com
commit 8a3d2ee0de3828e0d01f9682d35ee53704659bd0 upstream.
The 'perf record' and 'perf stat' commands have supported the option '-C/--cpus' to count or collect only on the list of CPUs provided.
Commit 1d3351e631fc34d7 ("perf tools: Enable on a list of CPUs for hybrid") added support for this option on hybrid systems. For hybrid support, it checks that the CPU list is available on the hybrid PMUs. But when we test only uncore events (or events not in cpu_core and cpu_atom), there is a bug:
Before:
# perf stat -C0 -e uncore_clock/clockticks/ sleep 1 failed to use cpu list 0
In this case, for an uncore event, the pmu_name is not cpu_core or cpu_atom, so in evlist__fix_hybrid_cpus(), perf_pmu__find_hybrid_pmu() returns NULL and both events_nr and unmatched_count stay 0. The CPU list check in evlist__fix_hybrid_cpus() then returns -1 and the error "failed to use cpu list 0" is reported. Bypassing the "events_nr=0" case fixes the issue.
After:
# perf stat -C0 -e uncore_clock/clockticks/ sleep 1
Performance counter stats for 'CPU(s) 0':
195,476,873 uncore_clock/clockticks/
1.004518677 seconds time elapsed
When testing with at least one core event and uncore events, it has no issue.
# perf stat -C0 -e cpu_core/cpu-cycles/,uncore_clock/clockticks/ sleep 1
Performance counter stats for 'CPU(s) 0':
5,993,774 cpu_core/cpu-cycles/ 301,025,912 uncore_clock/clockticks/
1.003964934 seconds time elapsed
Fixes: 1d3351e631fc34d7 ("perf tools: Enable on a list of CPUs for hybrid") Reviewed-by: Kan Liang kan.liang@linux.intel.com Signed-off-by: Zhengjun Xing zhengjun.xing@linux.intel.com Cc: Adrian Hunter adrian.hunter@intel.com Cc: alexander.shishkin@intel.com Cc: Andi Kleen ak@linux.intel.com Cc: Ian Rogers irogers@google.com Cc: Jin Yao yao.jin@linux.intel.com Cc: Jiri Olsa jolsa@redhat.com Cc: Peter Zijlstra peterz@infradead.org Link: http://lore.kernel.org/lkml/20220218093127.1844241-1-zhengjun.xing@linux.int... Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- tools/perf/util/evlist-hybrid.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/tools/perf/util/evlist-hybrid.c +++ b/tools/perf/util/evlist-hybrid.c @@ -153,8 +153,8 @@ int evlist__fix_hybrid_cpus(struct evlis perf_cpu_map__put(matched_cpus); perf_cpu_map__put(unmatched_cpus); } - - ret = (unmatched_count == events_nr) ? -1 : 0; + if (events_nr) + ret = (unmatched_count == events_nr) ? -1 : 0; out: perf_cpu_map__put(cpus); return ret;
From: Alexey Bayduraev alexey.v.bayduraev@linux.intel.com
commit 69560e366fc4d5fca7bebb0e44edbfafc8bcaf05 upstream.
When perf_data__create_dir() fails, it calls close_dir(), but perf_session__delete() also calls close_dir() and since dir.version and dir.nr were initialized by perf_data__create_dir(), a double free occurs.
This patch moves the initialization of dir.version and dir.nr after successful initialization of dir.files, that prevents double freeing. This behavior is already implemented in perf_data__open_dir().
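The underlying pattern is generic: only publish resources into the owning object after every step has succeeded, so the owner's destructor and the local error path can never both free them. A standalone userspace sketch of that pattern (not the perf code itself; paths and the struct are invented for the example):

	#include <stdlib.h>
	#include <unistd.h>
	#include <fcntl.h>

	struct dir_data { int *fds; int nr; };

	/* Open nr files; publish into *out only if every open succeeds. */
	static int create_dir_sketch(struct dir_data *out, const char *const *paths, int nr)
	{
		int *fds = calloc(nr, sizeof(*fds));
		int i;

		if (!fds)
			return -1;

		for (i = 0; i < nr; i++) {
			fds[i] = open(paths[i], O_RDWR | O_CREAT, 0644);
			if (fds[i] < 0) {
				while (i--)
					close(fds[i]);
				free(fds);	/* safe: never published to *out */
				return -1;
			}
		}

		out->fds = fds;		/* publish only after full success */
		out->nr = nr;
		return 0;
	}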
Fixes: 145520631130bd64 ("perf data: Add perf_data__(create_dir|close_dir) functions") Signed-off-by: Alexey Bayduraev alexey.v.bayduraev@linux.intel.com Acked-by: Jiri Olsa jolsa@kernel.org Cc: Adrian Hunter adrian.hunter@intel.com Cc: Alexander Antonov alexander.antonov@linux.intel.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Alexei Budankov abudankov@huawei.com Cc: Andi Kleen ak@linux.intel.com Cc: Ingo Molnar mingo@redhat.com Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Link: https://lore.kernel.org/r/20220218152341.5197-2-alexey.v.bayduraev@linux.int... Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- tools/perf/util/data.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
--- a/tools/perf/util/data.c +++ b/tools/perf/util/data.c @@ -44,10 +44,6 @@ int perf_data__create_dir(struct perf_da if (!files) return -ENOMEM;
- data->dir.version = PERF_DIR_VERSION; - data->dir.files = files; - data->dir.nr = nr; - for (i = 0; i < nr; i++) { struct perf_data_file *file = &files[i];
@@ -62,6 +58,9 @@ int perf_data__create_dir(struct perf_da file->fd = ret; }
+ data->dir.version = PERF_DIR_VERSION; + data->dir.files = files; + data->dir.nr = nr; return 0;
out_err:
From: Paolo Abeni pabeni@redhat.com
commit 837cf45df163a3780bc04b555700231e95b31dc9 upstream.
If an MPTCP endpoint receives multiple consecutive incoming ADD_ADDR options, mptcp_pm_add_addr_received() can overwrite the current remote address value after the PM lock is released in mptcp_pm_nl_add_addr_received() and before such address is echoed.

Fix the issue by caching the remote address value a little earlier and always using the cached value after releasing the PM lock.
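The fix follows a common locking pattern: snapshot the shared field into a local variable while still holding the lock, then operate only on the snapshot once the lock has been dropped. A minimal sketch (connect_new_subflows() and announce_addr() are hypothetical stand-ins, not the mptcp functions):

	static void add_addr_received_sketch(struct mptcp_sock *msk)
	{
		struct mptcp_pm_data *pm = &msk->pm;
		struct mptcp_addr_info remote;

		spin_lock_bh(&pm->lock);
		remote = pm->remote;			/* snapshot while protected */
		spin_unlock_bh(&pm->lock);

		connect_new_subflows(msk, &remote);	/* lock dropped, pm->remote may change */

		spin_lock_bh(&pm->lock);
		announce_addr(msk, &remote);		/* echo exactly the snapshot */
		spin_unlock_bh(&pm->lock);
	}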
Fixes: f7efc7771eac ("mptcp: drop argument port from mptcp_pm_announce_addr") Reviewed-by: Matthieu Baerts matthieu.baerts@tessares.net Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Mat Martineau mathew.j.martineau@linux.intel.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/mptcp/pm_netlink.c | 19 ++++++++++++++----- 1 file changed, 14 insertions(+), 5 deletions(-)
--- a/net/mptcp/pm_netlink.c +++ b/net/mptcp/pm_netlink.c @@ -606,6 +606,7 @@ static void mptcp_pm_nl_add_addr_receive unsigned int add_addr_accept_max; struct mptcp_addr_info remote; unsigned int subflows_max; + bool reset_port = false; int i, nr;
add_addr_accept_max = mptcp_pm_get_add_addr_accept_max(msk); @@ -615,15 +616,19 @@ static void mptcp_pm_nl_add_addr_receive msk->pm.add_addr_accepted, add_addr_accept_max, msk->pm.remote.family);
- if (lookup_subflow_by_daddr(&msk->conn_list, &msk->pm.remote)) + remote = msk->pm.remote; + if (lookup_subflow_by_daddr(&msk->conn_list, &remote)) goto add_addr_echo;
+ /* pick id 0 port, if none is provided the remote address */ + if (!remote.port) { + reset_port = true; + remote.port = sk->sk_dport; + } + /* connect to the specified remote address, using whatever * local address the routing configuration will pick. */ - remote = msk->pm.remote; - if (!remote.port) - remote.port = sk->sk_dport; nr = fill_local_addresses_vec(msk, addrs);
msk->pm.add_addr_accepted++; @@ -636,8 +641,12 @@ static void mptcp_pm_nl_add_addr_receive __mptcp_subflow_connect(sk, &addrs[i], &remote); spin_lock_bh(&msk->pm.lock);
+ /* be sure to echo exactly the received address */ + if (reset_port) + remote.port = 0; + add_addr_echo: - mptcp_pm_announce_addr(msk, &msk->pm.remote, true); + mptcp_pm_announce_addr(msk, &remote, true); mptcp_pm_nl_addr_send_ack(msk); }
From: Paolo Abeni pabeni@redhat.com
commit f73c1194634506ab60af0debef04671fc431a435 upstream.
The MPTCP in-kernel path manager has some constraints on the processing of incoming address announces, so that in edge scenarios it can end up dropping (ignoring) some such announces.
The above is not very limiting in practice since such scenarios are very uncommon and MPTCP will recover due to ADD_ADDR retransmissions.
This patch adds a few MIB counters to account for such drop events to allow easier introspection of the critical scenarios.
Fixes: f7efc7771eac ("mptcp: drop argument port from mptcp_pm_announce_addr") Reviewed-by: Matthieu Baerts matthieu.baerts@tessares.net Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Mat Martineau mathew.j.martineau@linux.intel.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/mptcp/mib.c | 2 ++ net/mptcp/mib.h | 2 ++ net/mptcp/pm.c | 8 ++++++-- 3 files changed, 10 insertions(+), 2 deletions(-)
--- a/net/mptcp/mib.c +++ b/net/mptcp/mib.c @@ -35,12 +35,14 @@ static const struct snmp_mib mptcp_snmp_ SNMP_MIB_ITEM("AddAddr", MPTCP_MIB_ADDADDR), SNMP_MIB_ITEM("EchoAdd", MPTCP_MIB_ECHOADD), SNMP_MIB_ITEM("PortAdd", MPTCP_MIB_PORTADD), + SNMP_MIB_ITEM("AddAddrDrop", MPTCP_MIB_ADDADDRDROP), SNMP_MIB_ITEM("MPJoinPortSynRx", MPTCP_MIB_JOINPORTSYNRX), SNMP_MIB_ITEM("MPJoinPortSynAckRx", MPTCP_MIB_JOINPORTSYNACKRX), SNMP_MIB_ITEM("MPJoinPortAckRx", MPTCP_MIB_JOINPORTACKRX), SNMP_MIB_ITEM("MismatchPortSynRx", MPTCP_MIB_MISMATCHPORTSYNRX), SNMP_MIB_ITEM("MismatchPortAckRx", MPTCP_MIB_MISMATCHPORTACKRX), SNMP_MIB_ITEM("RmAddr", MPTCP_MIB_RMADDR), + SNMP_MIB_ITEM("RmAddrDrop", MPTCP_MIB_RMADDRDROP), SNMP_MIB_ITEM("RmSubflow", MPTCP_MIB_RMSUBFLOW), SNMP_MIB_ITEM("MPPrioTx", MPTCP_MIB_MPPRIOTX), SNMP_MIB_ITEM("MPPrioRx", MPTCP_MIB_MPPRIORX), --- a/net/mptcp/mib.h +++ b/net/mptcp/mib.h @@ -28,12 +28,14 @@ enum linux_mptcp_mib_field { MPTCP_MIB_ADDADDR, /* Received ADD_ADDR with echo-flag=0 */ MPTCP_MIB_ECHOADD, /* Received ADD_ADDR with echo-flag=1 */ MPTCP_MIB_PORTADD, /* Received ADD_ADDR with a port-number */ + MPTCP_MIB_ADDADDRDROP, /* Dropped incoming ADD_ADDR */ MPTCP_MIB_JOINPORTSYNRX, /* Received a SYN MP_JOIN with a different port-number */ MPTCP_MIB_JOINPORTSYNACKRX, /* Received a SYNACK MP_JOIN with a different port-number */ MPTCP_MIB_JOINPORTACKRX, /* Received an ACK MP_JOIN with a different port-number */ MPTCP_MIB_MISMATCHPORTSYNRX, /* Received a SYN MP_JOIN with a mismatched port-number */ MPTCP_MIB_MISMATCHPORTACKRX, /* Received an ACK MP_JOIN with a mismatched port-number */ MPTCP_MIB_RMADDR, /* Received RM_ADDR */ + MPTCP_MIB_RMADDRDROP, /* Dropped incoming RM_ADDR */ MPTCP_MIB_RMSUBFLOW, /* Remove a subflow */ MPTCP_MIB_MPPRIOTX, /* Transmit a MP_PRIO */ MPTCP_MIB_MPPRIORX, /* Received a MP_PRIO */ --- a/net/mptcp/pm.c +++ b/net/mptcp/pm.c @@ -194,6 +194,8 @@ void mptcp_pm_add_addr_received(struct m mptcp_pm_add_addr_send_ack(msk); } else if (mptcp_pm_schedule_work(msk, MPTCP_PM_ADD_ADDR_RECEIVED)) { pm->remote = *addr; + } else { + __MPTCP_INC_STATS(sock_net((struct sock *)msk), MPTCP_MIB_ADDADDRDROP); }
spin_unlock_bh(&pm->lock); @@ -234,8 +236,10 @@ void mptcp_pm_rm_addr_received(struct mp mptcp_event_addr_removed(msk, rm_list->ids[i]);
spin_lock_bh(&pm->lock); - mptcp_pm_schedule_work(msk, MPTCP_PM_RM_ADDR_RECEIVED); - pm->rm_list_rx = *rm_list; + if (mptcp_pm_schedule_work(msk, MPTCP_PM_RM_ADDR_RECEIVED)) + pm->rm_list_rx = *rm_list; + else + __MPTCP_INC_STATS(sock_net((struct sock *)msk), MPTCP_MIB_RMADDRDROP); spin_unlock_bh(&pm->lock); }
From: Paolo Abeni pabeni@redhat.com
commit 0cd33c5ffec12bd77a1c02db2469fac08f840939 upstream.
Instead of waiting for an arbitrary amount of time for the MPTCP MP_CAPABLE handshake to complete, explicitly wait for the relevant socket to enter into the established status.
Additionally let the data transfer application use the slowest transfer mode available (-r), to cope with very slow host, or high jitter caused by hosting VMs.
Fixes: df62f2ec3df6 ("selftests/mptcp: add diag interface tests") Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/258 Reported-and-tested-by: Matthieu Baerts matthieu.baerts@tessares.net Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Mat Martineau mathew.j.martineau@linux.intel.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- tools/testing/selftests/net/mptcp/diag.sh | 44 +++++++++++++++++++++++++----- 1 file changed, 37 insertions(+), 7 deletions(-)
--- a/tools/testing/selftests/net/mptcp/diag.sh +++ b/tools/testing/selftests/net/mptcp/diag.sh @@ -71,6 +71,36 @@ chk_msk_remote_key_nr() __chk_nr "grep -c remote_key" $* }
+# $1: ns, $2: port +wait_local_port_listen() +{ + local listener_ns="${1}" + local port="${2}" + + local port_hex i + + port_hex="$(printf "%04X" "${port}")" + for i in $(seq 10); do + ip netns exec "${listener_ns}" cat /proc/net/tcp | \ + awk "BEGIN {rc=1} {if ($2 ~ /:${port_hex}$/ && $4 ~ /0A/) {rc=0; exit}} END {exit rc}" && + break + sleep 0.1 + done +} + +wait_connected() +{ + local listener_ns="${1}" + local port="${2}" + + local port_hex i + + port_hex="$(printf "%04X" "${port}")" + for i in $(seq 10); do + ip netns exec ${listener_ns} grep -q " 0100007F:${port_hex} " /proc/net/tcp && break + sleep 0.1 + done +}
trap cleanup EXIT ip netns add $ns @@ -81,15 +111,15 @@ echo "a" | \ ip netns exec $ns \ ./mptcp_connect -p 10000 -l -t ${timeout_poll} \ 0.0.0.0 >/dev/null & -sleep 0.1 +wait_local_port_listen $ns 10000 chk_msk_nr 0 "no msk on netns creation"
echo "b" | \ timeout ${timeout_test} \ ip netns exec $ns \ - ./mptcp_connect -p 10000 -j -t ${timeout_poll} \ + ./mptcp_connect -p 10000 -r 0 -t ${timeout_poll} \ 127.0.0.1 >/dev/null & -sleep 0.1 +wait_connected $ns 10000 chk_msk_nr 2 "after MPC handshake " chk_msk_remote_key_nr 2 "....chk remote_key" chk_msk_fallback_nr 0 "....chk no fallback" @@ -101,13 +131,13 @@ echo "a" | \ ip netns exec $ns \ ./mptcp_connect -p 10001 -l -s TCP -t ${timeout_poll} \ 0.0.0.0 >/dev/null & -sleep 0.1 +wait_local_port_listen $ns 10001 echo "b" | \ timeout ${timeout_test} \ ip netns exec $ns \ - ./mptcp_connect -p 10001 -j -t ${timeout_poll} \ + ./mptcp_connect -p 10001 -r 0 -t ${timeout_poll} \ 127.0.0.1 >/dev/null & -sleep 0.1 +wait_connected $ns 10001 chk_msk_fallback_nr 1 "check fallback" flush_pids
@@ -119,7 +149,7 @@ for I in `seq 1 $NR_CLIENTS`; do ./mptcp_connect -p $((I+10001)) -l -w 10 \ -t ${timeout_poll} 0.0.0.0 >/dev/null & done -sleep 0.1 +wait_local_port_listen $ns $((NR_CLIENTS + 10001))
for I in `seq 1 $NR_CLIENTS`; do echo "b" | \
From: Paolo Abeni pabeni@redhat.com
commit e35f885b357d47e04380a2056d1b2cc3e6f4f24b upstream.
Since commit 2843ff6f36db ("mptcp: remote addresses fullmesh"), an MPTCP client can attempt creating multiple MPJ subflows simultaneously.

In such a scenario the server, when syncookies are enabled, could end up accepting incoming MPJ SYNs even above the configured subflow limit, as such a limit can be enforced in a reliable way only after the subflow creation; in the syncookie case, only after the 3rd ACK reception.

As a consequence the related self-test case sporadically fails, as it verifies that the server always accepts the expected number of MPJ SYNs.

Address the issue by relaxing the constraint on the number of MPJ SYNs. Note that the check on the accepted number of MPJ 3rd ACKs remains intact.
Fixes: 2843ff6f36db ("mptcp: remote addresses fullmesh") Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Mat Martineau mathew.j.martineau@linux.intel.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- tools/testing/selftests/net/mptcp/mptcp_join.sh | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-)
--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh +++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh @@ -624,6 +624,7 @@ chk_join_nr() local ack_nr=$4 local count local dump_stats + local with_cookie
printf "%02u %-36s %s" "$TEST_COUNT" "$msg" "syn" count=`ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinSynRx | awk '{print $2}'` @@ -637,12 +638,20 @@ chk_join_nr() fi
echo -n " - synack" + with_cookie=`ip netns exec $ns2 sysctl -n net.ipv4.tcp_syncookies` count=`ip netns exec $ns2 nstat -as | grep MPTcpExtMPJoinSynAckRx | awk '{print $2}'` [ -z "$count" ] && count=0 if [ "$count" != "$syn_ack_nr" ]; then - echo "[fail] got $count JOIN[s] synack expected $syn_ack_nr" - ret=1 - dump_stats=1 + # simult connections exceeding the limit with cookie enabled could go up to + # synack validation as the conn limit can be enforced reliably only after + # the subflow creation + if [ "$with_cookie" = 2 ] && [ "$count" -gt "$syn_ack_nr" ] && [ "$count" -le "$syn_nr" ]; then + echo -n "[ ok ]" + else + echo "[fail] got $count JOIN[s] synack expected $syn_ack_nr" + ret=1 + dump_stats=1 + fi else echo -n "[ ok ]" fi
From: Manish Chopra manishc@marvell.com
commit e13ad1443684f7afaff24cf207e85e97885256bd upstream.
Commit b7a49f73059f ("bnx2x: Utilize firmware 7.13.21.0") added new firmware support in the driver while maintaining compatibility with older firmware. However, the older firmware was not added to MODULE_FIRMWARE(), which caused the firmware files to be missing from the initrd image, leading to driver load failures when booting from initrd. This patch adds MODULE_FIRMWARE() for the older firmware version so the files are included in the initrd.
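For context, MODULE_FIRMWARE() only records a firmware file name in the module's metadata (visible via "modinfo -F firmware bnx2x"); initramfs generators read that list to decide which blobs to copy, so a name missing there is a name missing from the initrd. A hedged sketch of the declarations this patch completes (the *_V15 macro names refer to the older firmware files kept for compatibility):

	MODULE_FIRMWARE(FW_FILE_NAME_E2);	/* current firmware (7.13.21.0)        */
	MODULE_FIRMWARE(FW_FILE_NAME_E2_V15);	/* older firmware, now also declared   */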
Fixes: b7a49f73059f ("bnx2x: Utilize firmware 7.13.21.0") Link: https://bugzilla.kernel.org/show_bug.cgi?id=215627 Signed-off-by: Manish Chopra manishc@marvell.com Signed-off-by: Alok Prasad palok@marvell.com Signed-off-by: Ariel Elior aelior@marvell.com Link: https://lore.kernel.org/r/20220223085720.12021-1-manishc@marvell.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c @@ -100,6 +100,9 @@ MODULE_LICENSE("GPL"); MODULE_FIRMWARE(FW_FILE_NAME_E1); MODULE_FIRMWARE(FW_FILE_NAME_E1H); MODULE_FIRMWARE(FW_FILE_NAME_E2); +MODULE_FIRMWARE(FW_FILE_NAME_E1_V15); +MODULE_FIRMWARE(FW_FILE_NAME_E1H_V15); +MODULE_FIRMWARE(FW_FILE_NAME_E2_V15);
int bnx2x_num_queues; module_param_named(num_queues, bnx2x_num_queues, int, 0444);
From: Kalesh AP kalesh-anakkur.purayil@broadcom.com
commit 1278d17a1fb860e7eab4bc3ff4b026a87cbf5105 upstream.
To install a livepatch, first flash the package to NVM, and then activate the patch through the "HWRM_FW_LIVEPATCH" fw command. To uninstall a patch from NVM, flash the removal package and then activate it through the "HWRM_FW_LIVEPATCH" fw command.
The "HWRM_FW_LIVEPATCH" fw command has to consider the following scenarios:

1. no patch in NVM and no patch active. Do nothing.
2. patch in NVM, but not active. Activate the patch currently in NVM.
3. patch is not in NVM, but active. Deactivate the patch.
4. patch in NVM and the patch active. Do nothing.
Fix the code to handle these scenarios during devlink "fw_activate".
To install and activate a live patch: devlink dev flash pci/0000:c1:00.0 file thor_patch.pkg devlink -f dev reload pci/0000:c1:00.0 action fw_activate limit no_reset
To remove and deactivate a live patch: devlink dev flash pci/0000:c1:00.0 file thor_patch_rem.pkg devlink -f dev reload pci/0000:c1:00.0 action fw_activate limit no_reset
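A sketch of how the two status flags map onto the four scenarios above (mirroring the new BNXT_LIVEPATCH_* defines in the hunk below; the helper itself is illustrative):

	/* INSTALL = a patch is present in NVM, ACTIVE = a patch is running.
	 * Returns the HWRM opcode to send, or -1 when there is nothing to do. */
	static int pick_livepatch_action(u16 status_flags)
	{
		switch (status_flags & BNXT_LIVEPATCH_MASK) {
		case BNXT_LIVEPATCH_INSTALLED:	/* 2: in NVM, not running       */
			return FW_LIVEPATCH_REQ_OPCODE_ACTIVATE;
		case BNXT_LIVEPATCH_REMOVED:	/* 3: running, no longer in NVM */
			return FW_LIVEPATCH_REQ_OPCODE_DEACTIVATE;
		default:			/* 1 and 4: nothing to do       */
			return -1;
		}
	}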
Fixes: 3c4153394e2c ("bnxt_en: implement firmware live patching") Reviewed-by: Vikas Gupta vikas.gupta@broadcom.com Reviewed-by: Somnath Kotur somnath.kotur@broadcom.com Signed-off-by: Kalesh AP kalesh-anakkur.purayil@broadcom.com Signed-off-by: Michael Chan michael.chan@broadcom.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c | 39 +++++++++++++++++----- 1 file changed, 31 insertions(+), 8 deletions(-)
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c @@ -366,6 +366,16 @@ bnxt_dl_livepatch_report_err(struct bnxt } }
+/* Live patch status in NVM */ +#define BNXT_LIVEPATCH_NOT_INSTALLED 0 +#define BNXT_LIVEPATCH_INSTALLED FW_LIVEPATCH_QUERY_RESP_STATUS_FLAGS_INSTALL +#define BNXT_LIVEPATCH_REMOVED FW_LIVEPATCH_QUERY_RESP_STATUS_FLAGS_ACTIVE +#define BNXT_LIVEPATCH_MASK (FW_LIVEPATCH_QUERY_RESP_STATUS_FLAGS_INSTALL | \ + FW_LIVEPATCH_QUERY_RESP_STATUS_FLAGS_ACTIVE) +#define BNXT_LIVEPATCH_ACTIVATED BNXT_LIVEPATCH_MASK + +#define BNXT_LIVEPATCH_STATE(flags) ((flags) & BNXT_LIVEPATCH_MASK) + static int bnxt_dl_livepatch_activate(struct bnxt *bp, struct netlink_ext_ack *extack) { @@ -373,8 +383,9 @@ bnxt_dl_livepatch_activate(struct bnxt * struct hwrm_fw_livepatch_query_input *query_req; struct hwrm_fw_livepatch_output *patch_resp; struct hwrm_fw_livepatch_input *patch_req; + u16 flags, live_patch_state; + bool activated = false; u32 installed = 0; - u16 flags; u8 target; int rc;
@@ -393,7 +404,6 @@ bnxt_dl_livepatch_activate(struct bnxt * hwrm_req_drop(bp, query_req); return rc; } - patch_req->opcode = FW_LIVEPATCH_REQ_OPCODE_ACTIVATE; patch_req->loadtype = FW_LIVEPATCH_REQ_LOADTYPE_NVM_INSTALL; patch_resp = hwrm_req_hold(bp, patch_req);
@@ -406,12 +416,20 @@ bnxt_dl_livepatch_activate(struct bnxt * }
flags = le16_to_cpu(query_resp->status_flags); - if (~flags & FW_LIVEPATCH_QUERY_RESP_STATUS_FLAGS_INSTALL) + live_patch_state = BNXT_LIVEPATCH_STATE(flags); + + if (live_patch_state == BNXT_LIVEPATCH_NOT_INSTALLED) continue; - if ((flags & FW_LIVEPATCH_QUERY_RESP_STATUS_FLAGS_ACTIVE) && - !strncmp(query_resp->active_ver, query_resp->install_ver, - sizeof(query_resp->active_ver))) + + if (live_patch_state == BNXT_LIVEPATCH_ACTIVATED) { + activated = true; continue; + } + + if (live_patch_state == BNXT_LIVEPATCH_INSTALLED) + patch_req->opcode = FW_LIVEPATCH_REQ_OPCODE_ACTIVATE; + else if (live_patch_state == BNXT_LIVEPATCH_REMOVED) + patch_req->opcode = FW_LIVEPATCH_REQ_OPCODE_DEACTIVATE;
patch_req->fw_target = target; rc = hwrm_req_send(bp, patch_req); @@ -423,8 +441,13 @@ bnxt_dl_livepatch_activate(struct bnxt * }
if (!rc && !installed) { - NL_SET_ERR_MSG_MOD(extack, "No live patches found"); - rc = -ENOENT; + if (activated) { + NL_SET_ERR_MSG_MOD(extack, "Live patch already activated"); + rc = -EEXIST; + } else { + NL_SET_ERR_MSG_MOD(extack, "No live patches found"); + rc = -ENOENT; + } } hwrm_req_drop(bp, query_req); hwrm_req_drop(bp, patch_req);
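The four scenarios above boil down to a small decision table on the two firmware status bits. A minimal userspace sketch follows; the LP_* values are illustrative stand-ins, not the real BNXT_LIVEPATCH_* or firmware definitions.

#include <stdio.h>

/* Illustrative stand-ins for the two firmware status bits. */
#define LP_INSTALL 0x1	/* patch image present in NVM */
#define LP_ACTIVE  0x2	/* patch currently active in firmware */

static const char *livepatch_action(unsigned int flags)
{
	switch (flags & (LP_INSTALL | LP_ACTIVE)) {
	case 0:				/* scenario 1: nothing to do */
		return "skip";
	case LP_INSTALL:		/* scenario 2: activate patch from NVM */
		return "ACTIVATE";
	case LP_ACTIVE:			/* scenario 3: active but no longer in NVM */
		return "DEACTIVATE";
	case LP_INSTALL | LP_ACTIVE:	/* scenario 4: already activated */
		return "already activated";
	}
	return "unreachable";
}

int main(void)
{
	for (unsigned int f = 0; f < 4; f++)
		printf("flags=%u -> %s\n", f, livepatch_action(f));
	return 0;
}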
From: Somnath Kotur somnath.kotur@broadcom.com
commit 84d3c83e6ea7d46cf3de3a54578af73eb24a64f2 upstream.
ethtool --show-fec <interface> does not show anything when the Active FEC setting in the chip is set to None. Fix it to properly return ETHTOOL_FEC_OFF in that case.
Fixes: 8b2775890ad8 ("bnxt_en: Report FEC settings to ethtool.") Signed-off-by: Somnath Kotur somnath.kotur@broadcom.com Signed-off-by: Michael Chan michael.chan@broadcom.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c @@ -1944,6 +1944,9 @@ static int bnxt_get_fecparam(struct net_ case PORT_PHY_QCFG_RESP_ACTIVE_FEC_FEC_RS272_IEEE_ACTIVE: fec->active_fec |= ETHTOOL_FEC_LLRS; break; + case PORT_PHY_QCFG_RESP_ACTIVE_FEC_FEC_NONE_ACTIVE: + fec->active_fec |= ETHTOOL_FEC_OFF; + break; } return 0; }
From: Michael Chan michael.chan@broadcom.com
commit 6758f937669dba14c6aac7ca004edda42ec1b18d upstream.
For offline (destructive) self tests, we need to stop the RDMA driver first. Otherwise, the RDMA driver will run into unrecoverable errors when destructive firmware tests are being performed.
The irq_re_init parameter used in the half close and half open sequence when preparing the NIC for offline tests should be set to true because the RDMA driver will free all IRQs before the offline tests begin.
Fixes: 55fd0cf320c3 ("bnxt_en: Add external loopback test to ethtool selftest.") Reviewed-by: Edwin Peer edwin.peer@broadcom.com Reviewed-by: Ben Li ben.li@broadcom.com Signed-off-by: Michael Chan michael.chan@broadcom.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 10 +++++----- drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 12 +++++++++--- 2 files changed, 14 insertions(+), 8 deletions(-)
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -10310,12 +10310,12 @@ int bnxt_half_open_nic(struct bnxt *bp) goto half_open_err; }
- rc = bnxt_alloc_mem(bp, false); + rc = bnxt_alloc_mem(bp, true); if (rc) { netdev_err(bp->dev, "bnxt_alloc_mem err: %x\n", rc); goto half_open_err; } - rc = bnxt_init_nic(bp, false); + rc = bnxt_init_nic(bp, true); if (rc) { netdev_err(bp->dev, "bnxt_init_nic err: %x\n", rc); goto half_open_err; @@ -10324,7 +10324,7 @@ int bnxt_half_open_nic(struct bnxt *bp)
half_open_err: bnxt_free_skbs(bp); - bnxt_free_mem(bp, false); + bnxt_free_mem(bp, true); dev_close(bp->dev); return rc; } @@ -10334,9 +10334,9 @@ half_open_err: */ void bnxt_half_close_nic(struct bnxt *bp) { - bnxt_hwrm_resource_free(bp, false, false); + bnxt_hwrm_resource_free(bp, false, true); bnxt_free_skbs(bp); - bnxt_free_mem(bp, false); + bnxt_free_mem(bp, true); }
void bnxt_reenable_sriov(struct bnxt *bp) --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c @@ -25,6 +25,7 @@ #include "bnxt_hsi.h" #include "bnxt.h" #include "bnxt_hwrm.h" +#include "bnxt_ulp.h" #include "bnxt_xdp.h" #include "bnxt_ptp.h" #include "bnxt_ethtool.h" @@ -3526,9 +3527,12 @@ static void bnxt_self_test(struct net_de if (!offline) { bnxt_run_fw_tests(bp, test_mask, &test_results); } else { - rc = bnxt_close_nic(bp, false, false); - if (rc) + bnxt_ulp_stop(bp); + rc = bnxt_close_nic(bp, true, false); + if (rc) { + bnxt_ulp_start(bp, rc); return; + } bnxt_run_fw_tests(bp, test_mask, &test_results);
buf[BNXT_MACLPBK_TEST_IDX] = 1; @@ -3538,6 +3542,7 @@ static void bnxt_self_test(struct net_de if (rc) { bnxt_hwrm_mac_loopback(bp, false); etest->flags |= ETH_TEST_FL_FAILED; + bnxt_ulp_start(bp, rc); return; } if (bnxt_run_loopback(bp)) @@ -3563,7 +3568,8 @@ static void bnxt_self_test(struct net_de } bnxt_hwrm_phy_loopback(bp, false, false); bnxt_half_close_nic(bp); - rc = bnxt_open_nic(bp, false, true); + rc = bnxt_open_nic(bp, true, true); + bnxt_ulp_start(bp, rc); } if (rc || bnxt_test_irq(bp)) { buf[BNXT_IRQ_TEST_IDX] = 1;
From: Michael Chan michael.chan@broadcom.com
commit cfcab3b3b61584a02bb523ffa99564eafa761dfe upstream.
In the current code, we set up the port to PHY or MAC loopback mode and then transmit a test broadcast packet for the loopback test. This scheme sometimes fails if the port is shared with management firmware that can also send packets. The driver may receive the management firmware's packet and the test will fail when the contents don't match the test packet.
Change the test packet to use its own MAC address as the destination and set up the port to only receive its own MAC address. This should filter out other packets sent by the management firmware.
Fixes: 91725d89b97a ("bnxt_en: Add PHY loopback to ethtool self-test.") Reviewed-by: Pavan Chebbi pavan.chebbi@broadcom.com Reviewed-by: Edwin Peer edwin.peer@broadcom.com Reviewed-by: Andy Gospodarek gospo@broadcom.com Signed-off-by: Michael Chan michael.chan@broadcom.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 7 +++++++ drivers/net/ethernet/broadcom/bnxt/bnxt.h | 1 + drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 2 +- 3 files changed, 9 insertions(+), 1 deletion(-)
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -8623,6 +8623,9 @@ static int bnxt_init_chip(struct bnxt *b vnic->uc_filter_count = 1;
vnic->rx_mask = 0; + if (test_bit(BNXT_STATE_HALF_OPEN, &bp->state)) + goto skip_rx_mask; + if (bp->dev->flags & IFF_BROADCAST) vnic->rx_mask |= CFA_L2_SET_RX_MASK_REQ_MASK_BCAST;
@@ -8643,6 +8646,7 @@ static int bnxt_init_chip(struct bnxt *b if (rc) goto err_out;
+skip_rx_mask: rc = bnxt_hwrm_set_coal(bp); if (rc) netdev_warn(bp->dev, "HWRM set coalescing failure rc: %x\n", @@ -10315,8 +10319,10 @@ int bnxt_half_open_nic(struct bnxt *bp) netdev_err(bp->dev, "bnxt_alloc_mem err: %x\n", rc); goto half_open_err; } + set_bit(BNXT_STATE_HALF_OPEN, &bp->state); rc = bnxt_init_nic(bp, true); if (rc) { + clear_bit(BNXT_STATE_HALF_OPEN, &bp->state); netdev_err(bp->dev, "bnxt_init_nic err: %x\n", rc); goto half_open_err; } @@ -10337,6 +10343,7 @@ void bnxt_half_close_nic(struct bnxt *bp bnxt_hwrm_resource_free(bp, false, true); bnxt_free_skbs(bp); bnxt_free_mem(bp, true); + clear_bit(BNXT_STATE_HALF_OPEN, &bp->state); }
void bnxt_reenable_sriov(struct bnxt *bp) --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h @@ -1919,6 +1919,7 @@ struct bnxt { #define BNXT_STATE_RECOVER 12 #define BNXT_STATE_FW_NON_FATAL_COND 13 #define BNXT_STATE_FW_ACTIVATE_RESET 14 +#define BNXT_STATE_HALF_OPEN 15 /* For offline ethtool tests */
#define BNXT_NO_FW_ACCESS(bp) \ (test_bit(BNXT_STATE_FW_FATAL_COND, &(bp)->state) || \ --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c @@ -3433,7 +3433,7 @@ static int bnxt_run_loopback(struct bnxt if (!skb) return -ENOMEM; data = skb_put(skb, pkt_size); - eth_broadcast_addr(data); + ether_addr_copy(&data[i], bp->dev->dev_addr); i += ETH_ALEN; ether_addr_copy(&data[i], bp->dev->dev_addr); i += ETH_ALEN;
From: Pavan Chebbi pavan.chebbi@broadcom.com
commit 8cdb15924252e27af16c4a8fe0fc606ce5fd04dc upstream.
We should set up multicast only when the net_device flags explicitly have IFF_MULTICAST set. Otherwise we will incorrectly turn it on even when not asked. Fix it by only passing the multicast table to the firmware if IFF_MULTICAST is set.
Fixes: 7d2837dd7a32 ("bnxt_en: Setup multicast properly after resetting device.") Signed-off-by: Pavan Chebbi pavan.chebbi@broadcom.com Signed-off-by: Michael Chan michael.chan@broadcom.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-)
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -4719,8 +4719,10 @@ static int bnxt_hwrm_cfa_l2_set_rx_mask( return rc;
req->vnic_id = cpu_to_le32(vnic->fw_vnic_id); - req->num_mc_entries = cpu_to_le32(vnic->mc_list_count); - req->mc_tbl_addr = cpu_to_le64(vnic->mc_list_mapping); + if (vnic->rx_mask & CFA_L2_SET_RX_MASK_REQ_MASK_MCAST) { + req->num_mc_entries = cpu_to_le32(vnic->mc_list_count); + req->mc_tbl_addr = cpu_to_le64(vnic->mc_list_mapping); + } req->mask = cpu_to_le32(vnic->rx_mask); return hwrm_req_send_silent(bp, req); } @@ -8635,7 +8637,7 @@ static int bnxt_init_chip(struct bnxt *b if (bp->dev->flags & IFF_ALLMULTI) { vnic->rx_mask |= CFA_L2_SET_RX_MASK_REQ_MASK_ALL_MCAST; vnic->mc_list_count = 0; - } else { + } else if (bp->dev->flags & IFF_MULTICAST) { u32 mask = 0;
bnxt_mc_list_updated(bp, &mask); @@ -10759,7 +10761,7 @@ static void bnxt_set_rx_mode(struct net_ if (dev->flags & IFF_ALLMULTI) { mask |= CFA_L2_SET_RX_MASK_REQ_MASK_ALL_MCAST; vnic->mc_list_count = 0; - } else { + } else if (dev->flags & IFF_MULTICAST) { mc_update = bnxt_mc_list_updated(bp, &mask); }
@@ -10827,9 +10829,10 @@ skip_uc: !bnxt_promisc_ok(bp)) vnic->rx_mask &= ~CFA_L2_SET_RX_MASK_REQ_MASK_PROMISCUOUS; rc = bnxt_hwrm_cfa_l2_set_rx_mask(bp, 0); - if (rc && vnic->mc_list_count) { + if (rc && (vnic->rx_mask & CFA_L2_SET_RX_MASK_REQ_MASK_MCAST)) { netdev_info(bp->dev, "Failed setting MC filters rc: %d, turning on ALL_MCAST mode\n", rc); + vnic->rx_mask &= ~CFA_L2_SET_RX_MASK_REQ_MASK_MCAST; vnic->rx_mask |= CFA_L2_SET_RX_MASK_REQ_MASK_ALL_MCAST; vnic->mc_list_count = 0; rc = bnxt_hwrm_cfa_l2_set_rx_mask(bp, 0);
From: Kalesh AP kalesh-anakkur.purayil@broadcom.com
commit 0e0e3c5358470cbad10bd7ca29f84a44d179d286 upstream.
During ifdown, we call bnxt_inv_fw_health_reg() which will clear both the status_reliable and resets_reliable flags if these registers are mapped. This is correct because a FW reset during ifdown will clear these register mappings. If we detect that FW has gone through reset during the next ifup, we will remap these registers.
But during a normal ifup with no FW reset, we need to restore the resets_reliable flag; otherwise we will not show the reset counter during devlink diagnose.
Fixes: 8cc95ceb7087 ("bnxt_en: improve fw diagnose devlink health messages") Reviewed-by: Vikas Gupta vikas.gupta@broadcom.com Reviewed-by: Pavan Chebbi pavan.chebbi@broadcom.com Reviewed-by: Somnath Kotur somnath.kotur@broadcom.com Signed-off-by: Kalesh AP kalesh-anakkur.purayil@broadcom.com Signed-off-by: Michael Chan michael.chan@broadcom.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 17 +++++++++++++++-- 1 file changed, 15 insertions(+), 2 deletions(-)
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -7776,6 +7776,19 @@ static int bnxt_map_fw_health_regs(struc return 0; }
+static void bnxt_remap_fw_health_regs(struct bnxt *bp) +{ + if (!bp->fw_health) + return; + + if (bp->fw_cap & BNXT_FW_CAP_ERROR_RECOVERY) { + bp->fw_health->status_reliable = true; + bp->fw_health->resets_reliable = true; + } else { + bnxt_try_map_fw_health_reg(bp); + } +} + static int bnxt_hwrm_error_recovery_qcfg(struct bnxt *bp) { struct bnxt_fw_health *fw_health = bp->fw_health; @@ -9836,8 +9849,8 @@ static int bnxt_hwrm_if_change(struct bn resc_reinit = true; if (flags & FUNC_DRV_IF_CHANGE_RESP_FLAGS_HOT_FW_RESET_DONE) fw_reset = true; - else if (bp->fw_health && !bp->fw_health->status_reliable) - bnxt_try_map_fw_health_reg(bp); + else + bnxt_remap_fw_health_regs(bp);
if (test_bit(BNXT_STATE_IN_FW_RESET, &bp->state) && !fw_reset) { netdev_err(bp->dev, "RESET_DONE not set during FW reset.\n");
From: Guenter Roeck linux@roeck-us.net
commit 1b5f517cca36292076d9e38fa6e33a257703e62e upstream.
If an attempt is made to register a sensor with a thermal zone and it fails, the call to devm_thermal_zone_of_sensor_register() may return -ENODEV. This may result in crashes similar to the following.
Unable to handle kernel NULL pointer dereference at virtual address 00000000000003cd ... Internal error: Oops: 96000021 [#1] PREEMPT SMP ... pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--) pc : mutex_lock+0x18/0x60 lr : thermal_zone_device_update+0x40/0x2e0 sp : ffff800014c4fc60 x29: ffff800014c4fc60 x28: ffff365ee3f6e000 x27: ffffdde218426790 x26: ffff365ee3f6e000 x25: 0000000000000000 x24: ffff365ee3f6e000 x23: ffffdde218426870 x22: ffff365ee3f6e000 x21: 00000000000003cd x20: ffff365ee8bf3308 x19: ffffffffffffffed x18: 0000000000000000 x17: ffffdde21842689c x16: ffffdde1cb7a0b7c x15: 0000000000000040 x14: ffffdde21a4889a0 x13: 0000000000000228 x12: 0000000000000000 x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000 x8 : 0000000001120000 x7 : 0000000000000001 x6 : 0000000000000000 x5 : 0068000878e20f07 x4 : 0000000000000000 x3 : 00000000000003cd x2 : ffff365ee3f6e000 x1 : 0000000000000000 x0 : 00000000000003cd Call trace: mutex_lock+0x18/0x60 hwmon_notify_event+0xfc/0x110 0xffffdde1cb7a0a90 0xffffdde1cb7a0b7c irq_thread_fn+0x2c/0xa0 irq_thread+0x134/0x240 kthread+0x178/0x190 ret_from_fork+0x10/0x20 Code: d503201f d503201f d2800001 aa0103e4 (c8e47c02)
Jon Hunter reports that the exact call sequence is:
hwmon_notify_event() --> hwmon_thermal_notify() --> thermal_zone_device_update() --> update_temperature() --> mutex_lock()
The hwmon core needs to handle all errors returned from calls to devm_thermal_zone_of_sensor_register(). If the call fails with -ENODEV, report that the sensor was not attached to a thermal zone but continue to register the hwmon device.
Reported-by: Jon Hunter jonathanh@nvidia.com Cc: Dmitry Osipenko digetx@gmail.com Fixes: 1597b374af222 ("hwmon: Add notification support") Reviewed-by: Dmitry Osipenko dmitry.osipenko@collabora.com Tested-by: Jon Hunter jonathanh@nvidia.com Signed-off-by: Guenter Roeck linux@roeck-us.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/hwmon/hwmon.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-)
--- a/drivers/hwmon/hwmon.c +++ b/drivers/hwmon/hwmon.c @@ -214,12 +214,14 @@ static int hwmon_thermal_add_sensor(stru
tzd = devm_thermal_zone_of_sensor_register(dev, index, tdata, &hwmon_thermal_ops); - /* - * If CONFIG_THERMAL_OF is disabled, this returns -ENODEV, - * so ignore that error but forward any other error. - */ - if (IS_ERR(tzd) && (PTR_ERR(tzd) != -ENODEV)) - return PTR_ERR(tzd); + if (IS_ERR(tzd)) { + if (PTR_ERR(tzd) != -ENODEV) + return PTR_ERR(tzd); + dev_info(dev, "temp%d_input not attached to any thermal zone\n", + index + 1); + devm_kfree(dev, tdata); + return 0; + }
err = devm_add_action(dev, hwmon_thermal_remove_sensor, &tdata->node); if (err)
From: Chris Mi cmi@nvidia.com
commit be7f4b0ab149afd19514929fad824b2117d238c9 upstream.
Only prio 1 is supported if the firmware doesn't support ignore flow level for nic mode. The offending commit wrongly removed that check. Add it back.
Fixes: 9a99c8f1253a ("net/mlx5e: E-Switch, Offload all chain 0 priorities when modify header and forward action is not supported") Signed-off-by: Chris Mi cmi@nvidia.com Reviewed-by: Roi Dayan roid@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c @@ -121,6 +121,9 @@ u32 mlx5_chains_get_nf_ft_chain(struct m
u32 mlx5_chains_get_prio_range(struct mlx5_fs_chains *chains) { + if (!mlx5_chains_prios_supported(chains)) + return 1; + if (mlx5_chains_ignore_flow_level_supported(chains)) return UINT_MAX;
From: Michal Swiatkowski michal.swiatkowski@linux.intel.com
commit 932645c298c41aad64ef13016ff4c2034eef5aed upstream.
The filter flag for the non-encapsulated L4 port field is accidentally always set, even if the user wants to add the encapsulated L4 port field.
Remove this unnecessary flag setting.
Fixes: 9e300987d4a81 ("ice: VXLAN and Geneve TC support") Signed-off-by: Michal Swiatkowski michal.swiatkowski@linux.intel.com Tested-by: Sandeep Penigalapati sandeep.penigalapati@intel.com Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/intel/ice/ice_tc_lib.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/net/ethernet/intel/ice/ice_tc_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.c @@ -711,7 +711,7 @@ ice_tc_set_port(struct flow_match_ports fltr->flags |= ICE_TC_FLWR_FIELD_ENC_DEST_L4_PORT; else fltr->flags |= ICE_TC_FLWR_FIELD_DEST_L4_PORT; - fltr->flags |= ICE_TC_FLWR_FIELD_DEST_L4_PORT; + headers->l4_key.dst_port = match.key->dst; headers->l4_mask.dst_port = match.mask->dst; } @@ -720,7 +720,7 @@ ice_tc_set_port(struct flow_match_ports fltr->flags |= ICE_TC_FLWR_FIELD_ENC_SRC_L4_PORT; else fltr->flags |= ICE_TC_FLWR_FIELD_SRC_L4_PORT; - fltr->flags |= ICE_TC_FLWR_FIELD_SRC_L4_PORT; + headers->l4_key.src_port = match.key->src; headers->l4_mask.src_port = match.mask->src; }
From: Jacob Keller jacob.e.keller@intel.com
commit fadead80fe4c033b5e514fcbadd20b55c4494112 upstream.
Commit c503e63200c6 ("ice: Stop processing VF messages during teardown") introduced a driver state flag, ICE_VF_DEINIT_IN_PROGRESS, which is intended to prevent some issues with concurrently handling messages from VFs while tearing down the VFs.
This change was motivated by crashes caused while tearing down and bringing up VFs in rapid succession.
It turns out that the fix actually introduces issues with the VF driver caused because the PF no longer responds to any messages sent by the VF during its .remove routine. This results in the VF potentially removing its DMA memory before the PF has shut down the device queues.
Additionally, the fix doesn't actually resolve concurrency issues within the ice driver. It is possible for a VF to initiate a reset just prior to the ice driver removing VFs. This can result in the remove task concurrently operating while the VF is being reset. This results in similar memory corruption and panics purportedly fixed by that commit.
Fix this concurrency at its root by protecting both the reset and removal flows using the existing VF cfg_lock. This ensures that we cannot remove the VF while any outstanding critical tasks such as a virtchnl message or a reset are occurring.
This locking change also fixes the root cause originally fixed by commit c503e63200c6 ("ice: Stop processing VF messages during teardown"), so we can simply revert it.
Note that I kept these two changes together because simply reverting the original commit alone would leave the driver vulnerable to worse race conditions.
Fixes: c503e63200c6 ("ice: Stop processing VF messages during teardown") Signed-off-by: Jacob Keller jacob.e.keller@intel.com Tested-by: Konrad Jankowski konrad0.jankowski@intel.com Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/intel/ice/ice.h | 1 drivers/net/ethernet/intel/ice/ice_main.c | 2 + drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 42 +++++++++++++---------- 3 files changed, 27 insertions(+), 18 deletions(-)
--- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -280,7 +280,6 @@ enum ice_pf_state { ICE_VFLR_EVENT_PENDING, ICE_FLTR_OVERFLOW_PROMISC, ICE_VF_DIS, - ICE_VF_DEINIT_IN_PROGRESS, ICE_CFG_BUSY, ICE_SERVICE_SCHED, ICE_SERVICE_DIS, --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -1772,7 +1772,9 @@ static void ice_handle_mdd_event(struct * reset, so print the event prior to reset. */ ice_print_vf_rx_mdd_event(vf); + mutex_lock(&pf->vf[i].cfg_lock); ice_reset_vf(&pf->vf[i], false); + mutex_unlock(&pf->vf[i].cfg_lock); } } } --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -617,8 +617,6 @@ void ice_free_vfs(struct ice_pf *pf) struct ice_hw *hw = &pf->hw; unsigned int tmp, i;
- set_bit(ICE_VF_DEINIT_IN_PROGRESS, pf->state); - if (!pf->vf) return;
@@ -636,22 +634,26 @@ void ice_free_vfs(struct ice_pf *pf) else dev_warn(dev, "VFs are assigned - not disabling SR-IOV\n");
- /* Avoid wait time by stopping all VFs at the same time */ - ice_for_each_vf(pf, i) - ice_dis_vf_qs(&pf->vf[i]); - tmp = pf->num_alloc_vfs; pf->num_qps_per_vf = 0; pf->num_alloc_vfs = 0; for (i = 0; i < tmp; i++) { - if (test_bit(ICE_VF_STATE_INIT, pf->vf[i].vf_states)) { + struct ice_vf *vf = &pf->vf[i]; + + mutex_lock(&vf->cfg_lock); + + ice_dis_vf_qs(vf); + + if (test_bit(ICE_VF_STATE_INIT, vf->vf_states)) { /* disable VF qp mappings and set VF disable state */ - ice_dis_vf_mappings(&pf->vf[i]); - set_bit(ICE_VF_STATE_DIS, pf->vf[i].vf_states); - ice_free_vf_res(&pf->vf[i]); + ice_dis_vf_mappings(vf); + set_bit(ICE_VF_STATE_DIS, vf->vf_states); + ice_free_vf_res(vf); }
- mutex_destroy(&pf->vf[i].cfg_lock); + mutex_unlock(&vf->cfg_lock); + + mutex_destroy(&vf->cfg_lock); }
if (ice_sriov_free_msix_res(pf)) @@ -687,7 +689,6 @@ void ice_free_vfs(struct ice_pf *pf) i);
clear_bit(ICE_VF_DIS, pf->state); - clear_bit(ICE_VF_DEINIT_IN_PROGRESS, pf->state); clear_bit(ICE_FLAG_SRIOV_ENA, pf->flags); }
@@ -1613,6 +1614,8 @@ bool ice_reset_all_vfs(struct ice_pf *pf ice_for_each_vf(pf, v) { vf = &pf->vf[v];
+ mutex_lock(&vf->cfg_lock); + vf->driver_caps = 0; ice_vc_set_default_allowlist(vf);
@@ -1627,6 +1630,8 @@ bool ice_reset_all_vfs(struct ice_pf *pf ice_vf_pre_vsi_rebuild(vf); ice_vf_rebuild_vsi(vf); ice_vf_post_vsi_rebuild(vf); + + mutex_unlock(&vf->cfg_lock); }
if (ice_is_eswitch_mode_switchdev(pf)) @@ -1677,6 +1682,8 @@ bool ice_reset_vf(struct ice_vf *vf, boo u32 reg; int i;
+ lockdep_assert_held(&vf->cfg_lock); + dev = ice_pf_to_dev(pf);
if (test_bit(ICE_VF_RESETS_DISABLED, pf->state)) { @@ -2176,9 +2183,12 @@ void ice_process_vflr_event(struct ice_p bit_idx = (hw->func_caps.vf_base_id + vf_id) % 32; /* read GLGEN_VFLRSTAT register to find out the flr VFs */ reg = rd32(hw, GLGEN_VFLRSTAT(reg_idx)); - if (reg & BIT(bit_idx)) + if (reg & BIT(bit_idx)) { /* GLGEN_VFLRSTAT bit will be cleared in ice_reset_vf */ + mutex_lock(&vf->cfg_lock); ice_reset_vf(vf, true); + mutex_unlock(&vf->cfg_lock); + } } }
@@ -2255,7 +2265,9 @@ ice_vf_lan_overflow_event(struct ice_pf if (!vf) return;
+ mutex_lock(&vf->cfg_lock); ice_vc_reset_vf(vf); + mutex_unlock(&vf->cfg_lock); }
/** @@ -4651,10 +4663,6 @@ void ice_vc_process_vf_msg(struct ice_pf struct device *dev; int err = 0;
- /* if de-init is underway, don't process messages from VF */ - if (test_bit(ICE_VF_DEINIT_IN_PROGRESS, pf->state)) - return; - dev = ice_pf_to_dev(pf); if (ice_validate_vf_id(pf, vf_id)) { err = -EINVAL;
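The locking rule described above can be summarized in a small userspace sketch; the types and helpers are hypothetical (pthreads standing in for the kernel mutex), but the shape matches the fix: reset, message handling and teardown all serialize on the same per-VF lock.

#include <pthread.h>
#include <stdio.h>

struct vf {
	pthread_mutex_t cfg_lock;	/* stand-in for the kernel's vf->cfg_lock */
	int queues;			/* stand-in for per-VF resources */
};

/* Must be called with cfg_lock held (mirrors the lockdep_assert_held() above). */
static void vf_reset(struct vf *vf)
{
	vf->queues = 0;			/* tear down ... */
	vf->queues = 4;			/* ... and rebuild the VF's resources */
}

/* VF-initiated reset: serialized against teardown by the same lock. */
static void vf_handle_reset_request(struct vf *vf)
{
	pthread_mutex_lock(&vf->cfg_lock);
	vf_reset(vf);
	pthread_mutex_unlock(&vf->cfg_lock);
}

/* Teardown: frees resources only after any in-flight reset has finished. */
static void vf_remove(struct vf *vf)
{
	pthread_mutex_lock(&vf->cfg_lock);
	vf->queues = 0;
	pthread_mutex_unlock(&vf->cfg_lock);
	pthread_mutex_destroy(&vf->cfg_lock);
}

int main(void)
{
	struct vf vf = { .cfg_lock = PTHREAD_MUTEX_INITIALIZER, .queues = 4 };

	vf_handle_reset_request(&vf);
	vf_remove(&vf);
	printf("queues left after remove: %d\n", vf.queues);
	return 0;
}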
From: Tom Rix trix@redhat.com
commit ed22d9c8d128293fc7b0b086c7d3654bcb99a8dd upstream.
Clang static analysis reports this issue:

  time64.h:69:50: warning: The left operand of '+' is a garbage value
      set_normalized_timespec64(&ts_delta, lhs.tv_sec + rhs.tv_sec,
                                           ~~~~~~~~~~ ^

In ice_ptp_adjtime_nonatomic(), the timespec64 variable 'now' is set by ice_ptp_gettimex64(). This function can fail with -EBUSY, so 'now' can have a garbage value. So check the return value.
Fixes: 06c16d89d2cb ("ice: register 1588 PTP clock device object for E810 devices") Signed-off-by: Tom Rix trix@redhat.com Tested-by: Gurucharan G gurucharanx.g@intel.com (A Contingent worker at Intel) Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/intel/ice/ice_ptp.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
--- a/drivers/net/ethernet/intel/ice/ice_ptp.c +++ b/drivers/net/ethernet/intel/ice/ice_ptp.c @@ -1121,9 +1121,12 @@ exit: static int ice_ptp_adjtime_nonatomic(struct ptp_clock_info *info, s64 delta) { struct timespec64 now, then; + int ret;
then = ns_to_timespec64(delta); - ice_ptp_gettimex64(info, &now, NULL); + ret = ice_ptp_gettimex64(info, &now, NULL); + if (ret) + return ret; now = timespec64_add(now, then);
return ice_ptp_settime64(info, (const struct timespec64 *)&now);
From: Tom Rix trix@redhat.com
commit 5950bdc88dd1d158f2845fdff8fb1de86476806c upstream.
Clang static analysis reports this issue:

  ice_common.c:5008:21: warning: The left expression of the compound assignment
  is an uninitialized value. The computed value will also be garbage
      ldo->phy_type_low |= ((u64)buf << (i * 16));
      ~~~~~~~~~~~~~~~~~ ^

When called from ice_cfg_phy_fec(), ldo is the uninitialized local variable tlv. So initialize it.
Fixes: ea78ce4dab05 ("ice: add link lenient and default override support") Signed-off-by: Tom Rix trix@redhat.com Tested-by: Gurucharan G gurucharanx.g@intel.com (A Contingent worker at Intel) Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/intel/ice/ice_common.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -3319,7 +3319,7 @@ ice_cfg_phy_fec(struct ice_port_info *pi
if (fec == ICE_FEC_AUTO && ice_fw_supports_link_override(hw) && !ice_fw_supports_report_dflt_cfg(hw)) { - struct ice_link_default_override_tlv tlv; + struct ice_link_default_override_tlv tlv = { 0 };
status = ice_get_link_default_override(&tlv, pi); if (status)
From: Meir Lichtinger meirl@nvidia.com
commit f908a35b22180c4da64cf2647e4f5f0cd3054da7 upstream.
Add the upcoming BlueField-4 and ConnectX-8 device IDs.
Fixes: 2e9d3e83ab82 ("net/mlx5: Update the list of the PCI supported devices") Signed-off-by: Meir Lichtinger meirl@nvidia.com Reviewed-by: Gal Pressman gal@nvidia.com Reviewed-by: Tariq Toukan tariqt@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/mellanox/mlx5/core/main.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c @@ -1796,10 +1796,12 @@ static const struct pci_device_id mlx5_c { PCI_VDEVICE(MELLANOX, 0x101e), MLX5_PCI_DEV_IS_VF}, /* ConnectX Family mlx5Gen Virtual Function */ { PCI_VDEVICE(MELLANOX, 0x101f) }, /* ConnectX-6 LX */ { PCI_VDEVICE(MELLANOX, 0x1021) }, /* ConnectX-7 */ + { PCI_VDEVICE(MELLANOX, 0x1023) }, /* ConnectX-8 */ { PCI_VDEVICE(MELLANOX, 0xa2d2) }, /* BlueField integrated ConnectX-5 network controller */ { PCI_VDEVICE(MELLANOX, 0xa2d3), MLX5_PCI_DEV_IS_VF}, /* BlueField integrated ConnectX-5 network controller VF */ { PCI_VDEVICE(MELLANOX, 0xa2d6) }, /* BlueField-2 integrated ConnectX-6 Dx network controller */ { PCI_VDEVICE(MELLANOX, 0xa2dc) }, /* BlueField-3 integrated ConnectX-7 network controller */ + { PCI_VDEVICE(MELLANOX, 0xa2df) }, /* BlueField-4 integrated ConnectX-8 network controller */ { 0, } };
From: Kumar Kartikeya Dwivedi memxor@gmail.com
commit a8abb0c3dc1e28454851a00f8b7333d9695d566c upstream.
When both bpf_spin_lock and bpf_timer are present in a BPF map value, copy_map_value needs to skirt both objects when copying a value into and out of the map. However, the current code does not set both s_off and t_off in copy_map_value, which leads to a crash when e.g. bpf_spin_lock is placed in a map value together with bpf_timer, as a bpf_map_update_elem call will be able to overwrite the bpf_timer object.
Without the fix, such an overwrite can produce the following splat:
[root@(none) bpf]# ./test_progs -t timer_crash [ 15.930339] bpf_testmod: loading out-of-tree module taints kernel. [ 16.037849] ================================================================== [ 16.038458] BUG: KASAN: user-memory-access in __pv_queued_spin_lock_slowpath+0x32b/0x520 [ 16.038944] Write of size 8 at addr 0000000000043ec0 by task test_progs/325 [ 16.039399] [ 16.039514] CPU: 0 PID: 325 Comm: test_progs Tainted: G OE 5.16.0+ #278 [ 16.039983] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ArchLinux 1.15.0-1 04/01/2014 [ 16.040485] Call Trace: [ 16.040645] <TASK> [ 16.040805] dump_stack_lvl+0x59/0x73 [ 16.041069] ? __pv_queued_spin_lock_slowpath+0x32b/0x520 [ 16.041427] kasan_report.cold+0x116/0x11b [ 16.041673] ? __pv_queued_spin_lock_slowpath+0x32b/0x520 [ 16.042040] __pv_queued_spin_lock_slowpath+0x32b/0x520 [ 16.042328] ? memcpy+0x39/0x60 [ 16.042552] ? pv_hash+0xd0/0xd0 [ 16.042785] ? lockdep_hardirqs_off+0x95/0xd0 [ 16.043079] __bpf_spin_lock_irqsave+0xdf/0xf0 [ 16.043366] ? bpf_get_current_comm+0x50/0x50 [ 16.043608] ? jhash+0x11a/0x270 [ 16.043848] bpf_timer_cancel+0x34/0xe0 [ 16.044119] bpf_prog_c4ea1c0f7449940d_sys_enter+0x7c/0x81 [ 16.044500] bpf_trampoline_6442477838_0+0x36/0x1000 [ 16.044836] __x64_sys_nanosleep+0x5/0x140 [ 16.045119] do_syscall_64+0x59/0x80 [ 16.045377] ? lock_is_held_type+0xe4/0x140 [ 16.045670] ? irqentry_exit_to_user_mode+0xa/0x40 [ 16.046001] ? mark_held_locks+0x24/0x90 [ 16.046287] ? asm_exc_page_fault+0x1e/0x30 [ 16.046569] ? asm_exc_page_fault+0x8/0x30 [ 16.046851] ? lockdep_hardirqs_on+0x7e/0x100 [ 16.047137] entry_SYSCALL_64_after_hwframe+0x44/0xae [ 16.047405] RIP: 0033:0x7f9e4831718d [ 16.047602] Code: b4 0c 00 0f 05 eb a9 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d b3 6c 0c 00 f7 d8 64 89 01 48 [ 16.048764] RSP: 002b:00007fff488086b8 EFLAGS: 00000206 ORIG_RAX: 0000000000000023 [ 16.049275] RAX: ffffffffffffffda RBX: 00007f9e48683740 RCX: 00007f9e4831718d [ 16.049747] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007fff488086d0 [ 16.050225] RBP: 00007fff488086f0 R08: 00007fff488085d7 R09: 00007f9e4cb594a0 [ 16.050648] R10: 0000000000000000 R11: 0000000000000206 R12: 00007f9e484cde30 [ 16.051124] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 [ 16.051608] </TASK> [ 16.051762] ==================================================================
Fixes: 68134668c17f ("bpf: Add map side support for bpf timers.") Signed-off-by: Kumar Kartikeya Dwivedi memxor@gmail.com Signed-off-by: Alexei Starovoitov ast@kernel.org Link: https://lore.kernel.org/bpf/20220209070324.1093182-2-memxor@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/linux/bpf.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -224,7 +224,8 @@ static inline void copy_map_value(struct if (unlikely(map_value_has_spin_lock(map))) { s_off = map->spin_lock_off; s_sz = sizeof(struct bpf_spin_lock); - } else if (unlikely(map_value_has_timer(map))) { + } + if (unlikely(map_value_has_timer(map))) { t_off = map->timer_off; t_sz = sizeof(struct bpf_timer); }
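For reference, the copy-while-skipping logic can be sketched in isolation; copy_except_two() below is a hypothetical userspace helper, not the kernel code, and assumes the two reserved ranges are ordered and non-overlapping. If either offset were left unset, the corresponding object in dst would simply be overwritten, which is the failure mode described above.

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/*
 * Copy `size` bytes from src to dst while leaving two reserved sub-ranges
 * (e.g. a lock object and a timer object) untouched in dst.
 * Assumes off1 + len1 <= off2 and off2 + len2 <= size.
 */
static void copy_except_two(void *dst, const void *src, size_t size,
			    size_t off1, size_t len1,
			    size_t off2, size_t len2)
{
	memcpy(dst, src, off1);
	memcpy((char *)dst + off1 + len1,
	       (const char *)src + off1 + len1,
	       off2 - (off1 + len1));
	memcpy((char *)dst + off2 + len2,
	       (const char *)src + off2 + len2,
	       size - (off2 + len2));
}

int main(void)
{
	char src[16], dst[16];

	memset(src, 'S', sizeof(src));
	memset(dst, 'D', sizeof(dst));
	/* keep bytes [4,8) and [12,16) of dst untouched */
	copy_except_two(dst, src, sizeof(dst), 4, 4, 12, 4);
	printf("%.16s\n", dst);		/* SSSSDDDDSSSSDDDD */
	return 0;
}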
From: Felix Maurer fmaurer@redhat.com
commit 4a11678f683814df82fca9018d964771e02d7e6d upstream.
If bpf_msg_push_data() is called with len 0 (as it happens during selftests/bpf/test_sockmap), we do not need to do anything and can return early.
Calling bpf_msg_push_data() with len 0 previously led to a spurious ENOMEM error: we later called get_order(copy + len); if len was 0, copy + len was also often 0 and get_order() returned some undefined value (at the moment 52). alloc_pages() caught that and failed, but then bpf_msg_push_data() returned ENOMEM. This was wrong because we are most probably not out of memory and actually do not need any additional memory.
Fixes: 6fff607e2f14b ("bpf: sk_msg program helper bpf_msg_push_data") Signed-off-by: Felix Maurer fmaurer@redhat.com Signed-off-by: Daniel Borkmann daniel@iogearbox.net Acked-by: Yonghong Song yhs@fb.com Acked-by: John Fastabend john.fastabend@gmail.com Link: https://lore.kernel.org/bpf/df69012695c7094ccb1943ca02b4920db3537466.1644421... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/core/filter.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/net/core/filter.c +++ b/net/core/filter.c @@ -2711,6 +2711,9 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_ if (unlikely(flags)) return -EINVAL;
+ if (unlikely(len == 0)) + return 0; + /* First find the starting scatterlist element */ i = msg->sg.start; do {
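Why a zero length is pathological is easy to reproduce in userspace; the sketch below only approximates the kernel's get_order() arithmetic (a PAGE_SHIFT of 12 is an assumption) and is not the real implementation.

#include <limits.h>
#include <stdio.h>

#define PAGE_SHIFT 12	/* assumption: 4 KiB pages */

/*
 * Userspace approximation of get_order() for illustration only. For
 * size == 0 the "size--" underflows, so a huge order is computed; the
 * subsequent page allocation then fails and used to surface as -ENOMEM.
 */
static unsigned int get_order_sketch(unsigned long size)
{
	size--;
	size >>= PAGE_SHIFT;
	return size ? (unsigned int)(sizeof(unsigned long) * CHAR_BIT
				     - __builtin_clzl(size)) : 0;
}

int main(void)
{
	printf("order for len 0:    %u\n", get_order_sketch(0));	/* 52 on 64-bit */
	printf("order for len 4096: %u\n", get_order_sketch(4096));	/* 0 */
	return 0;
}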
From: Felix Maurer fmaurer@redhat.com
commit 61d06f01f9710b327a53492e5add9f972eb909b3 upstream.
bpf_msg_push_data may return a non-zero value to indicate an error. The return value should be checked to prevent undetected errors.
To indicate an error, the BPF programs now perform a different action than their intended one so that the userspace test program notices the error, i.e. the programs that are supposed to pass/redirect instead drop, and the program that is supposed to drop instead passes.
Fixes: 84fbfe026acaa ("bpf: test_sockmap add options to use msg_push_data") Signed-off-by: Felix Maurer fmaurer@redhat.com Signed-off-by: Alexei Starovoitov ast@kernel.org Acked-by: John Fastabend john.fastabend@gmail.com Link: https://lore.kernel.org/bpf/89f767bb44005d6b4dd1f42038c438f76b3ebfad.1644601... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- tools/testing/selftests/bpf/progs/test_sockmap_kern.h | 26 ++++++++++++------ 1 file changed, 18 insertions(+), 8 deletions(-)
--- a/tools/testing/selftests/bpf/progs/test_sockmap_kern.h +++ b/tools/testing/selftests/bpf/progs/test_sockmap_kern.h @@ -235,7 +235,7 @@ SEC("sk_msg1") int bpf_prog4(struct sk_msg_md *msg) { int *bytes, zero = 0, one = 1, two = 2, three = 3, four = 4, five = 5; - int *start, *end, *start_push, *end_push, *start_pop, *pop; + int *start, *end, *start_push, *end_push, *start_pop, *pop, err = 0;
bytes = bpf_map_lookup_elem(&sock_apply_bytes, &zero); if (bytes) @@ -249,8 +249,11 @@ int bpf_prog4(struct sk_msg_md *msg) bpf_msg_pull_data(msg, *start, *end, 0); start_push = bpf_map_lookup_elem(&sock_bytes, &two); end_push = bpf_map_lookup_elem(&sock_bytes, &three); - if (start_push && end_push) - bpf_msg_push_data(msg, *start_push, *end_push, 0); + if (start_push && end_push) { + err = bpf_msg_push_data(msg, *start_push, *end_push, 0); + if (err) + return SK_DROP; + } start_pop = bpf_map_lookup_elem(&sock_bytes, &four); pop = bpf_map_lookup_elem(&sock_bytes, &five); if (start_pop && pop) @@ -263,6 +266,7 @@ int bpf_prog6(struct sk_msg_md *msg) { int zero = 0, one = 1, two = 2, three = 3, four = 4, five = 5, key = 0; int *bytes, *start, *end, *start_push, *end_push, *start_pop, *pop, *f; + int err = 0; __u64 flags = 0;
bytes = bpf_map_lookup_elem(&sock_apply_bytes, &zero); @@ -279,8 +283,11 @@ int bpf_prog6(struct sk_msg_md *msg)
start_push = bpf_map_lookup_elem(&sock_bytes, &two); end_push = bpf_map_lookup_elem(&sock_bytes, &three); - if (start_push && end_push) - bpf_msg_push_data(msg, *start_push, *end_push, 0); + if (start_push && end_push) { + err = bpf_msg_push_data(msg, *start_push, *end_push, 0); + if (err) + return SK_DROP; + }
start_pop = bpf_map_lookup_elem(&sock_bytes, &four); pop = bpf_map_lookup_elem(&sock_bytes, &five); @@ -338,7 +345,7 @@ SEC("sk_msg5") int bpf_prog10(struct sk_msg_md *msg) { int *bytes, *start, *end, *start_push, *end_push, *start_pop, *pop; - int zero = 0, one = 1, two = 2, three = 3, four = 4, five = 5; + int zero = 0, one = 1, two = 2, three = 3, four = 4, five = 5, err = 0;
bytes = bpf_map_lookup_elem(&sock_apply_bytes, &zero); if (bytes) @@ -352,8 +359,11 @@ int bpf_prog10(struct sk_msg_md *msg) bpf_msg_pull_data(msg, *start, *end, 0); start_push = bpf_map_lookup_elem(&sock_bytes, &two); end_push = bpf_map_lookup_elem(&sock_bytes, &three); - if (start_push && end_push) - bpf_msg_push_data(msg, *start_push, *end_push, 0); + if (start_push && end_push) { + err = bpf_msg_push_data(msg, *start_push, *end_push, 0); + if (err) + return SK_PASS; + } start_pop = bpf_map_lookup_elem(&sock_bytes, &four); pop = bpf_map_lookup_elem(&sock_bytes, &five); if (start_pop && pop)
From: Yonghong Song yhs@fb.com
commit 5eaed6eedbe9612f642ad2b880f961d1c6c8ec2b upstream.
The patch in [1] intends to fix a bpf_timer related issue, but the fix caused the existing 'timer' selftest to fail with hangs or some random errors. After some debugging, I found an issue with check_and_init_map_value() in hashtab.c. More specifically, in hashtab.c we have the code

  l_new = bpf_map_kmalloc_node(&htab->map, ...)
  check_and_init_map_value(&htab->map, l_new...)

Note that bpf_map_kmalloc_node() does not do initialization, so l_new contains random values.
The function check_and_init_map_value() intends to zero the bpf_spin_lock and bpf_timer if they exist in the map. But I found that bpf_spin_lock is zeroed while bpf_timer is not. With [1], copy_map_value() later skips copying of bpf_spin_lock and bpf_timer. The non-zero bpf_timer caused random failures for the 'timer' selftest. Without [1], for both the bpf_spin_lock and bpf_timer cases, bpf_timer will be zeroed, so the 'timer' selftest is okay.
So why does check_and_init_map_value() zero bpf_spin_lock properly while it does not zero bpf_timer? In the bpf uapi header we have

  struct bpf_spin_lock {
          __u32   val;
  };

  struct bpf_timer {
          __u64 :64;
          __u64 :64;
  } __attribute__((aligned(8)));
The initialization code:

  *(struct bpf_spin_lock *)(dst + map->spin_lock_off) =
          (struct bpf_spin_lock){};
  *(struct bpf_timer *)(dst + map->timer_off) = (struct bpf_timer){};

It appears the compiler has no obligation to initialize anonymous fields. For example, let us use clang with the bpf target as below:

  $ cat t.c
  struct bpf_timer {
          unsigned long long :64;
  };
  struct bpf_timer2 {
          unsigned long long a;
  };
  void test(struct bpf_timer *t)
  {
          *t = (struct bpf_timer){};
  }
  void test2(struct bpf_timer2 *t)
  {
          *t = (struct bpf_timer2){};
  }
  $ clang -target bpf -O2 -c -g t.c
  $ llvm-objdump -d t.o
  ...
  0000000000000000 <test>:
         0:       95 00 00 00 00 00 00 00 exit

  0000000000000008 <test2>:
         1:       b7 02 00 00 00 00 00 00 r2 = 0
         2:       7b 21 00 00 00 00 00 00 *(u64 *)(r1 + 0) = r2
         3:       95 00 00 00 00 00 00 00 exit
gcc 11.2 does not have the above issue. But the C standard (ISO/IEC 9899:201x, Programming languages — C, http://www.open-std.org/Jtc1/sc22/wg14/www/docs/n1547.pdf, page 157) states:

  Except where explicitly stated otherwise, for the purposes of this
  subclause unnamed members of objects of structure and union type do
  not participate in initialization. Unnamed members of structure
  objects have indeterminate value even after initialization.
To fix the problem, let us use memset for the bpf_timer case in check_and_init_map_value(). For consistency, memset is also used for the bpf_spin_lock case.
[1] https://lore.kernel.org/bpf/20220209070324.1093182-2-memxor@gmail.com/
Fixes: 68134668c17f3 ("bpf: Add map side support for bpf timers.") Signed-off-by: Yonghong Song yhs@fb.com Signed-off-by: Alexei Starovoitov ast@kernel.org Link: https://lore.kernel.org/bpf/20220211194953.3142152-1-yhs@fb.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/linux/bpf.h | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-)
--- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -209,11 +209,9 @@ static inline bool map_value_has_timer(c static inline void check_and_init_map_value(struct bpf_map *map, void *dst) { if (unlikely(map_value_has_spin_lock(map))) - *(struct bpf_spin_lock *)(dst + map->spin_lock_off) = - (struct bpf_spin_lock){}; + memset(dst + map->spin_lock_off, 0, sizeof(struct bpf_spin_lock)); if (unlikely(map_value_has_timer(map))) - *(struct bpf_timer *)(dst + map->timer_off) = - (struct bpf_timer){}; + memset(dst + map->timer_off, 0, sizeof(struct bpf_timer)); }
/* copy everything but bpf_spin_lock and bpf_timer. There could be one of each. */
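The behaviour quoted from the standard is easy to check from userspace. Below is a minimal sketch with a stand-in struct (not the real bpf_timer definition) contrasting the compound-literal assignment with memset(); depending on the compiler, the first dump may still show the poison bytes, while the memset() variant is always fully zeroed.

#include <stdio.h>
#include <string.h>

/* Stand-in for struct bpf_timer: only unnamed bit-field members. */
struct timer_like {
	unsigned long long :64;
	unsigned long long :64;
} __attribute__((aligned(8)));

static void dump(const char *tag, const void *p, size_t n)
{
	const unsigned char *b = p;

	printf("%-16s:", tag);
	for (size_t i = 0; i < n; i++)
		printf(" %02x", b[i]);
	printf("\n");
}

int main(void)
{
	struct timer_like t;

	memset(&t, 0xff, sizeof(t));		/* poison */
	t = (struct timer_like){};		/* unnamed members: may stay 0xff */
	dump("compound literal", &t, sizeof(t));

	memset(&t, 0xff, sizeof(t));		/* poison again */
	memset(&t, 0, sizeof(t));		/* always zeroes every byte */
	dump("memset", &t, sizeof(t));
	return 0;
}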
From: Eric Dumazet edumazet@google.com
commit 75134f16e7dd0007aa474b281935c5f42e79f2c8 upstream.
syzbot reported various soft lockups caused by bpf batch operations.
INFO: task kworker/1:1:27 blocked for more than 140 seconds. INFO: task hung in rcu_barrier
Nothing prevents batch ops from processing huge amounts of data, so we need to add schedule points in them.
Note that maybe_wait_bpf_programs(map) calls from generic_map_delete_batch() can be factorized by moving the call after the loop.
This will be done later in -next tree once we get this fix merged, unless there is strong opinion doing this optimization sooner.
Fixes: aa2e93b8e58e ("bpf: Add generic support for update and delete batch ops") Fixes: cb4d03ab499d ("bpf: Add generic support for lookup batch op") Reported-by: syzbot syzkaller@googlegroups.com Signed-off-by: Eric Dumazet edumazet@google.com Signed-off-by: Alexei Starovoitov ast@kernel.org Reviewed-by: Stanislav Fomichev sdf@google.com Acked-by: Brian Vazquez brianvv@google.com Link: https://lore.kernel.org/bpf/20220217181902.808742-1-eric.dumazet@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/bpf/syscall.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -1354,6 +1354,7 @@ int generic_map_delete_batch(struct bpf_ maybe_wait_bpf_programs(map); if (err) break; + cond_resched(); } if (copy_to_user(&uattr->batch.count, &cp, sizeof(cp))) err = -EFAULT; @@ -1411,6 +1412,7 @@ int generic_map_update_batch(struct bpf_
if (err) break; + cond_resched(); }
if (copy_to_user(&uattr->batch.count, &cp, sizeof(cp))) @@ -1508,6 +1510,7 @@ int generic_map_lookup_batch(struct bpf_ swap(prev_key, key); retry = MAP_LOOKUP_RETRIES; cp++; + cond_resched(); }
if (err == -EFAULT)
From: Eric Dumazet edumazet@google.com
commit f240762f88b4b1b58561939ffd44837759756477 upstream.
Looping ~65535 times doing kmalloc() calls can trigger soft lockups, especially with DEBUG features (like KASAN).
[ 253.536212] watchdog: BUG: soft lockup - CPU#64 stuck for 26s! [b219417889:12575] [ 253.544433] Modules linked in: vfat fat i2c_mux_pca954x i2c_mux spidev cdc_acm xhci_pci xhci_hcd sha3_generic gq(O) [ 253.544451] CPU: 64 PID: 12575 Comm: b219417889 Tainted: G S O 5.17.0-smp-DEV #801 [ 253.544457] RIP: 0010:kernel_text_address (./include/asm-generic/sections.h:192 ./include/linux/kallsyms.h:29 kernel/extable.c:67 kernel/extable.c:98) [ 253.544464] Code: 0f 93 c0 48 c7 c1 e0 63 d7 a4 48 39 cb 0f 92 c1 20 c1 0f b6 c1 5b 5d c3 90 0f 1f 44 00 00 55 48 89 e5 41 57 41 56 53 48 89 fb <48> c7 c0 00 00 80 a0 41 be 01 00 00 00 48 39 c7 72 0c 48 c7 c0 40 [ 253.544468] RSP: 0018:ffff8882d8baf4c0 EFLAGS: 00000246 [ 253.544471] RAX: 1ffff1105b175e00 RBX: ffffffffa13ef09a RCX: 00000000a13ef001 [ 253.544474] RDX: ffffffffa13ef09a RSI: ffff8882d8baf558 RDI: ffffffffa13ef09a [ 253.544476] RBP: ffff8882d8baf4d8 R08: ffff8882d8baf5e0 R09: 0000000000000004 [ 253.544479] R10: ffff8882d8baf5e8 R11: ffffffffa0d59a50 R12: ffff8882eab20380 [ 253.544481] R13: ffffffffa0d59a50 R14: dffffc0000000000 R15: 1ffff1105b175eb0 [ 253.544483] FS: 00000000016d3380(0000) GS:ffff88af48c00000(0000) knlGS:0000000000000000 [ 253.544486] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 253.544488] CR2: 00000000004af0f0 CR3: 00000002eabfa004 CR4: 00000000003706e0 [ 253.544491] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 253.544492] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 253.544494] Call Trace: [ 253.544496] <TASK> [ 253.544498] ? io_queue_sqe (fs/io_uring.c:7143) [ 253.544505] __kernel_text_address (kernel/extable.c:78) [ 253.544508] unwind_get_return_address (arch/x86/kernel/unwind_frame.c:19) [ 253.544514] arch_stack_walk (arch/x86/kernel/stacktrace.c:27) [ 253.544517] ? io_queue_sqe (fs/io_uring.c:7143) [ 253.544521] stack_trace_save (kernel/stacktrace.c:123) [ 253.544527] ____kasan_kmalloc (mm/kasan/common.c:39 mm/kasan/common.c:45 mm/kasan/common.c:436 mm/kasan/common.c:515) [ 253.544531] ? ____kasan_kmalloc (mm/kasan/common.c:39 mm/kasan/common.c:45 mm/kasan/common.c:436 mm/kasan/common.c:515) [ 253.544533] ? __kasan_kmalloc (mm/kasan/common.c:524) [ 253.544535] ? kmem_cache_alloc_trace (./include/linux/kasan.h:270 mm/slab.c:3567) [ 253.544541] ? io_issue_sqe (fs/io_uring.c:4556 fs/io_uring.c:4589 fs/io_uring.c:6828) [ 253.544544] ? __io_queue_sqe (fs/io_uring.c:?) [ 253.544551] __kasan_kmalloc (mm/kasan/common.c:524) [ 253.544553] kmem_cache_alloc_trace (./include/linux/kasan.h:270 mm/slab.c:3567) [ 253.544556] ? io_issue_sqe (fs/io_uring.c:4556 fs/io_uring.c:4589 fs/io_uring.c:6828) [ 253.544560] io_issue_sqe (fs/io_uring.c:4556 fs/io_uring.c:4589 fs/io_uring.c:6828) [ 253.544564] ? __kasan_slab_alloc (mm/kasan/common.c:45 mm/kasan/common.c:436 mm/kasan/common.c:469) [ 253.544567] ? __kasan_slab_alloc (mm/kasan/common.c:39 mm/kasan/common.c:45 mm/kasan/common.c:436 mm/kasan/common.c:469) [ 253.544569] ? kmem_cache_alloc_bulk (mm/slab.h:732 mm/slab.c:3546) [ 253.544573] ? __io_alloc_req_refill (fs/io_uring.c:2078) [ 253.544578] ? io_submit_sqes (fs/io_uring.c:7441) [ 253.544581] ? __se_sys_io_uring_enter (fs/io_uring.c:10154 fs/io_uring.c:10096) [ 253.544584] ? __x64_sys_io_uring_enter (fs/io_uring.c:10096) [ 253.544587] ? do_syscall_64 (arch/x86/entry/common.c:50 arch/x86/entry/common.c:80) [ 253.544590] ? entry_SYSCALL_64_after_hwframe (??:?) [ 253.544596] __io_queue_sqe (fs/io_uring.c:?) 
[ 253.544600] io_queue_sqe (fs/io_uring.c:7143) [ 253.544603] io_submit_sqe (fs/io_uring.c:?) [ 253.544608] io_submit_sqes (fs/io_uring.c:?) [ 253.544612] __se_sys_io_uring_enter (fs/io_uring.c:10154 fs/io_uring.c:10096) [ 253.544616] __x64_sys_io_uring_enter (fs/io_uring.c:10096) [ 253.544619] do_syscall_64 (arch/x86/entry/common.c:50 arch/x86/entry/common.c:80) [ 253.544623] entry_SYSCALL_64_after_hwframe (??:?)
Fixes: ddf0322db79c ("io_uring: add IORING_OP_PROVIDE_BUFFERS") Signed-off-by: Eric Dumazet edumazet@google.com Cc: Jens Axboe axboe@kernel.dk Cc: Pavel Begunkov asml.silence@gmail.com Cc: io-uring io-uring@vger.kernel.org Reported-by: syzbot syzkaller@googlegroups.com Link: https://lore.kernel.org/r/20220215041003.2394784-1-eric.dumazet@gmail.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/io_uring.c | 1 + 1 file changed, 1 insertion(+)
--- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -4477,6 +4477,7 @@ static int io_add_buffers(struct io_prov } else { list_add_tail(&buf->list, &(*head)->list); } + cond_resched(); }
return i ? i : -ENOMEM;
From: Eric Dumazet edumazet@google.com
commit ef527f968ae05c6717c39f49c8709a7e2c19183a upstream.
Whenever one of these functions pulls all data from an skb in a frag_list, use consume_skb() instead of kfree_skb() to avoid polluting drop monitoring.
Fixes: 6fa01ccd8830 ("skbuff: Add pskb_extract() helper function") Signed-off-by: Eric Dumazet edumazet@google.com Link: https://lore.kernel.org/r/20220220154052.1308469-1-eric.dumazet@gmail.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/core/skbuff.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -2254,7 +2254,7 @@ void *__pskb_pull_tail(struct sk_buff *s /* Free pulled out fragments. */ while ((list = skb_shinfo(skb)->frag_list) != insp) { skb_shinfo(skb)->frag_list = list->next; - kfree_skb(list); + consume_skb(list); } /* And insert new clone at head. */ if (clone) { @@ -6227,7 +6227,7 @@ static int pskb_carve_frag_list(struct s /* Free pulled out fragments. */ while ((list = shinfo->frag_list) != insp) { shinfo->frag_list = list->next; - kfree_skb(list); + consume_skb(list); } /* And insert new clone at head. */ if (clone) {
From: Christoph Hellwig hch@lst.de
commit 602e57c9799c19f27e440639deed3ec45cfe1651 upstream.
Commit e7d65803e2bb ("nvme-multipath: revalidate paths during rescan") introduced the NVME_NS_READY flag, which nvme_path_is_disabled() uses to check if a path can be used or not. We also need to set this flag for devices that fail the ZNS feature validation and which are available through passthrough devices only to that they can be used in multipathing setups.
Fixes: e7d65803e2bb ("nvme-multipath: revalidate paths during rescan") Reported-by: Kanchan Joshi joshi.k@samsung.com Signed-off-by: Christoph Hellwig hch@lst.de Reviewed-by: Sagi Grimberg sagi@grimberg.me Reviewed-by: Daniel Wagner dwagner@suse.de Tested-by: Kanchan Joshi joshi.k@samsung.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/nvme/host/core.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -1936,7 +1936,7 @@ static int nvme_update_ns_info(struct nv if (blk_queue_is_zoned(ns->queue)) { ret = nvme_revalidate_zones(ns); if (ret && !nvme_first_scan(ns->disk)) - goto out; + return ret; }
if (nvme_ns_head_multipath(ns->head)) { @@ -1951,16 +1951,16 @@ static int nvme_update_ns_info(struct nv return 0;
out_unfreeze: - blk_mq_unfreeze_queue(ns->disk->queue); -out: /* * If probing fails due an unsupported feature, hide the block device, * but still allow other access. */ if (ret == -ENODEV) { ns->disk->flags |= GENHD_FL_HIDDEN; + set_bit(NVME_NS_READY, &ns->flags); ret = 0; } + blk_mq_unfreeze_queue(ns->disk->queue); return ret; }
From: Dan Carpenter dan.carpenter@oracle.com
commit a1f8fec4dac8bc7b172b2bdbd881e015261a6322 upstream.
These tests are supposed to check if the loop exited via a break or not. However the tests are wrong because if we did not exit via a break then "p" is not a valid pointer. In that case, it's the equivalent of "if (*(u32 *)sr == *last_key) {". That's going to work most of the time, but there is a potential for those to be equal.
Fixes: 1593123a6a49 ("tipc: add name table dump to new netlink api") Fixes: 1a1a143daf84 ("tipc: add publication dump to new netlink api") Signed-off-by: Dan Carpenter dan.carpenter@oracle.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/tipc/name_table.c | 2 +- net/tipc/socket.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-)
--- a/net/tipc/name_table.c +++ b/net/tipc/name_table.c @@ -967,7 +967,7 @@ static int __tipc_nl_add_nametable_publ( list_for_each_entry(p, &sr->all_publ, all_publ) if (p->key == *last_key) break; - if (p->key != *last_key) + if (list_entry_is_head(p, &sr->all_publ, all_publ)) return -EPIPE; } else { p = list_first_entry(&sr->all_publ, --- a/net/tipc/socket.c +++ b/net/tipc/socket.c @@ -3749,7 +3749,7 @@ static int __tipc_nl_list_sk_publ(struct if (p->key == *last_publ) break; } - if (p->key != *last_publ) { + if (list_entry_is_head(p, &tsk->publications, binding_sock)) { /* We never set seq or call nl_dump_check_consistent() * this means that setting prev_seq here will cause the * consistence check to fail in the netlink callback
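The underlying pitfall is easier to see in isolation. The sketch below uses simplified userspace versions of the list macros and a hypothetical entry type: when the loop runs to completion without hitting the break, the cursor aliases the list head rather than a real entry, so the old key comparison reads garbage, and checking for the head is the only safe test.

#include <stdio.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define list_entry_is_head(pos, head, member) (&(pos)->member == (head))

struct pub { int key; struct list_head node; };

int main(void)
{
	struct list_head head = { &head, &head };
	struct pub a = { .key = 1 };
	struct pub *p;
	int last_key = 42;		/* key we are resuming from; not present */

	/* link the single element into the list */
	a.node.next = &head;
	a.node.prev = &head;
	head.next = &a.node;
	head.prev = &a.node;

	/* open-coded list_for_each_entry() looking for last_key */
	for (p = container_of(head.next, struct pub, node);
	     !list_entry_is_head(p, &head, node);
	     p = container_of(p->node.next, struct pub, node))
		if (p->key == last_key)
			break;

	/*
	 * No break was taken, so p now aliases 'head' and is NOT a valid
	 * struct pub; reading p->key here (the old check) is garbage, while
	 * the head test below is well defined.
	 */
	printf("ran off the end: %d\n", list_entry_is_head(p, &head, node));
	return 0;
}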
From: Konrad Dybcio konrad.dybcio@somainline.org
commit 3494894afff4ad11f25d8342cc99699be496d082 upstream.
Just like in commit 05cf3ec00d46 ("clk: qcom: gcc-msm8996: Drop (again) gcc_aggre1_pnoc_ahb_clk") adding NoC clocks turned out to be a huge mistake, as they cause a lot of issues at little benefit (basically letting Linux know about their children's frequencies), especially when mishandled or misconfigured.
Adding these broke SDCC approximately 99 out of 100 times, but that somehow went unnoticed. To prevent further issues like this one, remove them.
This commit is effectively a revert of 74a33fac3aab ("clk: qcom: gcc-msm8994: Add missing NoC clocks") with ABI preservation.
Fixes: 74a33fac3aab ("clk: qcom: gcc-msm8994: Add missing NoC clocks") Signed-off-by: Konrad Dybcio konrad.dybcio@somainline.org Link: https://lore.kernel.org/r/20220217232408.78932-1-konrad.dybcio@somainline.or... Signed-off-by: Stephen Boyd sboyd@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/clk/qcom/gcc-msm8994.c | 106 +++-------------------------------------- 1 file changed, 9 insertions(+), 97 deletions(-)
--- a/drivers/clk/qcom/gcc-msm8994.c +++ b/drivers/clk/qcom/gcc-msm8994.c @@ -107,42 +107,6 @@ static const struct clk_parent_data gcc_ { .hw = &gpll4.clkr.hw }, };
-static struct clk_rcg2 system_noc_clk_src = { - .cmd_rcgr = 0x0120, - .hid_width = 5, - .parent_map = gcc_xo_gpll0_map, - .clkr.hw.init = &(struct clk_init_data){ - .name = "system_noc_clk_src", - .parent_data = gcc_xo_gpll0, - .num_parents = ARRAY_SIZE(gcc_xo_gpll0), - .ops = &clk_rcg2_ops, - }, -}; - -static struct clk_rcg2 config_noc_clk_src = { - .cmd_rcgr = 0x0150, - .hid_width = 5, - .parent_map = gcc_xo_gpll0_map, - .clkr.hw.init = &(struct clk_init_data){ - .name = "config_noc_clk_src", - .parent_data = gcc_xo_gpll0, - .num_parents = ARRAY_SIZE(gcc_xo_gpll0), - .ops = &clk_rcg2_ops, - }, -}; - -static struct clk_rcg2 periph_noc_clk_src = { - .cmd_rcgr = 0x0190, - .hid_width = 5, - .parent_map = gcc_xo_gpll0_map, - .clkr.hw.init = &(struct clk_init_data){ - .name = "periph_noc_clk_src", - .parent_data = gcc_xo_gpll0, - .num_parents = ARRAY_SIZE(gcc_xo_gpll0), - .ops = &clk_rcg2_ops, - }, -}; - static struct freq_tbl ftbl_ufs_axi_clk_src[] = { F(50000000, P_GPLL0, 12, 0, 0), F(100000000, P_GPLL0, 6, 0, 0), @@ -1149,8 +1113,6 @@ static struct clk_branch gcc_blsp1_ahb_c .enable_mask = BIT(17), .hw.init = &(struct clk_init_data){ .name = "gcc_blsp1_ahb_clk", - .parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw }, - .num_parents = 1, .ops = &clk_branch2_ops, }, }, @@ -1434,8 +1396,6 @@ static struct clk_branch gcc_blsp2_ahb_c .enable_mask = BIT(15), .hw.init = &(struct clk_init_data){ .name = "gcc_blsp2_ahb_clk", - .parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw }, - .num_parents = 1, .ops = &clk_branch2_ops, }, }, @@ -1763,8 +1723,6 @@ static struct clk_branch gcc_lpass_q6_ax .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_lpass_q6_axi_clk", - .parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw }, - .num_parents = 1, .ops = &clk_branch2_ops, }, }, @@ -1777,8 +1735,6 @@ static struct clk_branch gcc_mss_q6_bimc .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_mss_q6_bimc_axi_clk", - .parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw }, - .num_parents = 1, .ops = &clk_branch2_ops, }, }, @@ -1806,9 +1762,6 @@ static struct clk_branch gcc_pcie_0_cfg_ .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_pcie_0_cfg_ahb_clk", - .parent_hws = (const struct clk_hw *[]){ &config_noc_clk_src.clkr.hw }, - .num_parents = 1, - .flags = CLK_SET_RATE_PARENT, .ops = &clk_branch2_ops, }, }, @@ -1821,9 +1774,6 @@ static struct clk_branch gcc_pcie_0_mstr .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_pcie_0_mstr_axi_clk", - .parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw }, - .num_parents = 1, - .flags = CLK_SET_RATE_PARENT, .ops = &clk_branch2_ops, }, }, @@ -1853,9 +1803,6 @@ static struct clk_branch gcc_pcie_0_slv_ .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_pcie_0_slv_axi_clk", - .parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw }, - .num_parents = 1, - .flags = CLK_SET_RATE_PARENT, .ops = &clk_branch2_ops, }, }, @@ -1883,9 +1830,6 @@ static struct clk_branch gcc_pcie_1_cfg_ .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_pcie_1_cfg_ahb_clk", - .parent_hws = (const struct clk_hw *[]){ &config_noc_clk_src.clkr.hw }, - .num_parents = 1, - .flags = CLK_SET_RATE_PARENT, .ops = &clk_branch2_ops, }, }, @@ -1898,9 +1842,6 @@ static struct clk_branch gcc_pcie_1_mstr .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_pcie_1_mstr_axi_clk", 
- .parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw }, - .num_parents = 1, - .flags = CLK_SET_RATE_PARENT, .ops = &clk_branch2_ops, }, }, @@ -1929,9 +1870,6 @@ static struct clk_branch gcc_pcie_1_slv_ .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_pcie_1_slv_axi_clk", - .parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw }, - .num_parents = 1, - .flags = CLK_SET_RATE_PARENT, .ops = &clk_branch2_ops, }, }, @@ -1959,8 +1897,6 @@ static struct clk_branch gcc_pdm_ahb_clk .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_pdm_ahb_clk", - .parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw }, - .num_parents = 1, .ops = &clk_branch2_ops, }, }, @@ -1988,9 +1924,6 @@ static struct clk_branch gcc_sdcc1_ahb_c .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_sdcc1_ahb_clk", - .parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw }, - .num_parents = 1, - .flags = CLK_SET_RATE_PARENT, .ops = &clk_branch2_ops, }, }, @@ -2003,9 +1936,6 @@ static struct clk_branch gcc_sdcc2_ahb_c .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_sdcc2_ahb_clk", - .parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw }, - .num_parents = 1, - .flags = CLK_SET_RATE_PARENT, .ops = &clk_branch2_ops, }, }, @@ -2033,9 +1963,6 @@ static struct clk_branch gcc_sdcc3_ahb_c .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_sdcc3_ahb_clk", - .parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw }, - .num_parents = 1, - .flags = CLK_SET_RATE_PARENT, .ops = &clk_branch2_ops, }, }, @@ -2063,9 +1990,6 @@ static struct clk_branch gcc_sdcc4_ahb_c .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_sdcc4_ahb_clk", - .parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw }, - .num_parents = 1, - .flags = CLK_SET_RATE_PARENT, .ops = &clk_branch2_ops, }, }, @@ -2123,8 +2047,6 @@ static struct clk_branch gcc_tsif_ahb_cl .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_tsif_ahb_clk", - .parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw }, - .num_parents = 1, .ops = &clk_branch2_ops, }, }, @@ -2152,8 +2074,6 @@ static struct clk_branch gcc_ufs_ahb_clk .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_ufs_ahb_clk", - .parent_hws = (const struct clk_hw *[]){ &config_noc_clk_src.clkr.hw }, - .num_parents = 1, .ops = &clk_branch2_ops, }, }, @@ -2197,8 +2117,6 @@ static struct clk_branch gcc_ufs_rx_symb .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_ufs_rx_symbol_0_clk", - .parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw }, - .num_parents = 1, .ops = &clk_branch2_ops, }, }, @@ -2212,8 +2130,6 @@ static struct clk_branch gcc_ufs_rx_symb .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_ufs_rx_symbol_1_clk", - .parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw }, - .num_parents = 1, .ops = &clk_branch2_ops, }, }, @@ -2242,8 +2158,6 @@ static struct clk_branch gcc_ufs_tx_symb .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_ufs_tx_symbol_0_clk", - .parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw }, - .num_parents = 1, .ops = &clk_branch2_ops, }, }, @@ -2257,8 +2171,6 @@ static struct clk_branch gcc_ufs_tx_symb .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_ufs_tx_symbol_1_clk", - .parent_hws = (const 
struct clk_hw *[]){ &system_noc_clk_src.clkr.hw }, - .num_parents = 1, .ops = &clk_branch2_ops, }, }, @@ -2363,8 +2275,6 @@ static struct clk_branch gcc_usb_hs_ahb_ .enable_mask = BIT(0), .hw.init = &(struct clk_init_data){ .name = "gcc_usb_hs_ahb_clk", - .parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw }, - .num_parents = 1, .ops = &clk_branch2_ops, }, }, @@ -2487,8 +2397,6 @@ static struct clk_branch gcc_boot_rom_ah .enable_mask = BIT(10), .hw.init = &(struct clk_init_data){ .name = "gcc_boot_rom_ahb_clk", - .parent_hws = (const struct clk_hw *[]){ &config_noc_clk_src.clkr.hw }, - .num_parents = 1, .ops = &clk_branch2_ops, }, }, @@ -2502,8 +2410,6 @@ static struct clk_branch gcc_prng_ahb_cl .enable_mask = BIT(13), .hw.init = &(struct clk_init_data){ .name = "gcc_prng_ahb_clk", - .parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw }, - .num_parents = 1, .ops = &clk_branch2_ops, }, }, @@ -2546,9 +2452,6 @@ static struct clk_regmap *gcc_msm8994_cl [GPLL0] = &gpll0.clkr, [GPLL4_EARLY] = &gpll4_early.clkr, [GPLL4] = &gpll4.clkr, - [CONFIG_NOC_CLK_SRC] = &config_noc_clk_src.clkr, - [PERIPH_NOC_CLK_SRC] = &periph_noc_clk_src.clkr, - [SYSTEM_NOC_CLK_SRC] = &system_noc_clk_src.clkr, [UFS_AXI_CLK_SRC] = &ufs_axi_clk_src.clkr, [USB30_MASTER_CLK_SRC] = &usb30_master_clk_src.clkr, [BLSP1_QUP1_I2C_APPS_CLK_SRC] = &blsp1_qup1_i2c_apps_clk_src.clkr, @@ -2695,6 +2598,15 @@ static struct clk_regmap *gcc_msm8994_cl [USB_SS_PHY_LDO] = &usb_ss_phy_ldo.clkr, [GCC_BOOT_ROM_AHB_CLK] = &gcc_boot_rom_ahb_clk.clkr, [GCC_PRNG_AHB_CLK] = &gcc_prng_ahb_clk.clkr, + + /* + * The following clocks should NOT be managed by this driver, but they once were + * mistakengly added. Now they are only here to indicate that they are not defined + * on purpose, even though the names will stay in the header file (for ABI sanity). + */ + [CONFIG_NOC_CLK_SRC] = NULL, + [PERIPH_NOC_CLK_SRC] = NULL, + [SYSTEM_NOC_CLK_SRC] = NULL, };
static struct gdsc *gcc_msm8994_gdscs[] = {
From: Tao Liu thomas.liu@ucloud.cn
commit cc20cced0598d9a5ff91ae4ab147b3b5e99ee819 upstream.
We encountered a TCP drop issue in our cloud environment. A packet GROed on the host is forwarded to a VM virtio_net NIC with net_failover enabled. The VM acts as an IPVS LB with ipip encapsulation. The full path looks like: host gro -> vm virtio_net rx -> net_failover rx -> ipvs fullnat -> ipip encap -> net_failover tx -> virtio_net tx
When net_failover transmits an ipip packet (gso_type = 0x0103, which means SKB_GSO_TCPV4, SKB_GSO_DODGY and SKB_GSO_IPXIP4), no GSO is performed because the device supports TSO and GSO_IPXIP4, but network_header still points to the inner IP header.
Call Trace:
 tcp4_gso_segment        ------> return NULL
 inet_gso_segment        ------> inner iph, network_header points to
 ipip_gso_segment
 inet_gso_segment        ------> outer iph
 skb_mac_gso_segment
Afterwards virtio_net transmits the packet, but only the inner IP header has been modified; the outer one is left unchanged, so the packet is dropped at the remote host.
Call Trace:
 inet_gso_segment        ------> inner iph, outer iph is skipped
 skb_mac_gso_segment
 __skb_gso_segment
 validate_xmit_skb
 validate_xmit_skb_list
 sch_direct_xmit
 __qdisc_run
 __dev_queue_xmit        ------> virtio_net
 dev_hard_start_xmit
 __dev_queue_xmit        ------> net_failover
 ip_finish_output2
 ip_output
 iptunnel_xmit
 ip_tunnel_xmit
 ipip_tunnel_xmit        ------> ipip
 dev_hard_start_xmit
 __dev_queue_xmit
 ip_finish_output2
 ip_output
 ip_forward
 ip_rcv
 __netif_receive_skb_one_core
 netif_receive_skb_internal
 napi_gro_receive
 receive_buf
 virtnet_poll
 net_rx_action
The root cause of this issue is the rare combination of SKB_GSO_DODGY and a tunnel device that adds an SKB_GSO_ tunnel option. SKB_GSO_DODGY is set by the external virtio_net. We need to reset the network header when callbacks.gso_segment() returns NULL.
This patch also applies the same fix to ipv6_gso_segment(), to cover SIT, etc.
Fixes: cb32f511a70b ("ipip: add GSO/TSO support") Signed-off-by: Tao Liu thomas.liu@ucloud.cn Reviewed-by: Willem de Bruijn willemb@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/ipv4/af_inet.c | 5 ++++- net/ipv6/ip6_offload.c | 2 ++ 2 files changed, 6 insertions(+), 1 deletion(-)
--- a/net/ipv4/af_inet.c +++ b/net/ipv4/af_inet.c @@ -1376,8 +1376,11 @@ struct sk_buff *inet_gso_segment(struct }
ops = rcu_dereference(inet_offloads[proto]); - if (likely(ops && ops->callbacks.gso_segment)) + if (likely(ops && ops->callbacks.gso_segment)) { segs = ops->callbacks.gso_segment(skb, features); + if (!segs) + skb->network_header = skb_mac_header(skb) + nhoff - skb->head; + }
if (IS_ERR_OR_NULL(segs)) goto out; --- a/net/ipv6/ip6_offload.c +++ b/net/ipv6/ip6_offload.c @@ -114,6 +114,8 @@ static struct sk_buff *ipv6_gso_segment( if (likely(ops && ops->callbacks.gso_segment)) { skb_reset_transport_header(skb); segs = ops->callbacks.gso_segment(skb, features); + if (!segs) + skb->network_header = skb_mac_header(skb) + nhoff - skb->head; }
if (IS_ERR_OR_NULL(segs))
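For readers unfamiliar with the sk_buff layout this relies on: header positions are stored as byte offsets from skb->head, and skb_mac_header() is skb->head plus that offset, so the restore line recomputes the offset that was saved in nhoff at entry. A minimal userspace sketch of the arithmetic, with toy types rather than the kernel's struct sk_buff:

  #include <assert.h>
  #include <stdio.h>

  /* Toy model: headers are byte offsets relative to the start of the buffer. */
  struct toy_skb {
      unsigned char *head;
      unsigned int mac_header;      /* offset of the MAC header from head */
      unsigned int network_header;  /* offset of the network header from head */
  };

  static unsigned char *toy_mac_header(const struct toy_skb *skb)
  {
      return skb->head + skb->mac_header;
  }

  int main(void)
  {
      unsigned char buf[128];
      struct toy_skb skb = { buf, 0, 14 };
      /* offset of the network header relative to the MAC header, saved
       * before calling the inner gso_segment callback */
      unsigned int nhoff = skb.network_header - skb.mac_header;

      skb.network_header = 34;  /* the callback moved it to the inner header */
      /* the fix: MAC header pointer + nhoff, turned back into an offset */
      skb.network_header = toy_mac_header(&skb) + nhoff - skb.head;
      assert(skb.network_header == 14);
      printf("network_header restored to offset %u\n", skb.network_header);
      return 0;
  }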
From: Mauri Sandberg maukka@ext.kapsi.fi
commit 42404d8f1c01861b22ccfa1d70f950242720ae57 upstream.
Obtaining a MAC address may be deferred in cases when the MAC is stored in an NVMEM block, for example, and it may not be ready on the first retrieval attempt, returning EPROBE_DEFER.
It is also possible that a port that does not rely on NVMEM has already been created by the time the defer request arrives. Thus, the resources allocated previously must also be freed when rolling back.
Fixes: 76723bca2802 ("net: mv643xx_eth: add DT parsing support") Signed-off-by: Mauri Sandberg maukka@ext.kapsi.fi Reviewed-by: Andrew Lunn andrew@lunn.ch Link: https://lore.kernel.org/r/20220223142337.41757-1-maukka@ext.kapsi.fi Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/marvell/mv643xx_eth.c | 24 ++++++++++++++---------- 1 file changed, 14 insertions(+), 10 deletions(-)
--- a/drivers/net/ethernet/marvell/mv643xx_eth.c +++ b/drivers/net/ethernet/marvell/mv643xx_eth.c @@ -2700,6 +2700,16 @@ MODULE_DEVICE_TABLE(of, mv643xx_eth_shar
static struct platform_device *port_platdev[3];
+static void mv643xx_eth_shared_of_remove(void) +{ + int n; + + for (n = 0; n < 3; n++) { + platform_device_del(port_platdev[n]); + port_platdev[n] = NULL; + } +} + static int mv643xx_eth_shared_of_add_port(struct platform_device *pdev, struct device_node *pnp) { @@ -2736,7 +2746,9 @@ static int mv643xx_eth_shared_of_add_por return -EINVAL; }
- of_get_mac_address(pnp, ppd.mac_addr); + ret = of_get_mac_address(pnp, ppd.mac_addr); + if (ret) + return ret;
mv643xx_eth_property(pnp, "tx-queue-size", ppd.tx_queue_size); mv643xx_eth_property(pnp, "tx-sram-addr", ppd.tx_sram_addr); @@ -2800,21 +2812,13 @@ static int mv643xx_eth_shared_of_probe(s ret = mv643xx_eth_shared_of_add_port(pdev, pnp); if (ret) { of_node_put(pnp); + mv643xx_eth_shared_of_remove(); return ret; } } return 0; }
-static void mv643xx_eth_shared_of_remove(void) -{ - int n; - - for (n = 0; n < 3; n++) { - platform_device_del(port_platdev[n]); - port_platdev[n] = NULL; - } -} #else static inline int mv643xx_eth_shared_of_probe(struct platform_device *pdev) {
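The shape of the fix is the usual partial-rollback pattern: if creating port N fails (for instance with -EPROBE_DEFER while the MAC is not yet available from NVMEM), ports 0..N-1 that were already registered must be removed again. A minimal standalone sketch of that pattern, with invented toy names rather than the driver's real helpers:

  #include <stdbool.h>
  #include <stdio.h>

  #define NPORTS 3
  static bool port_created[NPORTS];

  static int add_port(int n, int fail_at)
  {
      if (n == fail_at)
          return -517;              /* stands in for -EPROBE_DEFER */
      port_created[n] = true;
      return 0;
  }

  static void remove_ports(void)    /* analogue of the _of_remove() helper */
  {
      for (int n = 0; n < NPORTS; n++)
          port_created[n] = false;
  }

  static int probe(int fail_at)
  {
      for (int n = 0; n < NPORTS; n++) {
          int ret = add_port(n, fail_at);

          if (ret) {
              remove_ports();       /* roll back ports 0..n-1 */
              return ret;
          }
      }
      return 0;
  }

  int main(void)
  {
      printf("probe() -> %d, port 0 still registered: %s\n", probe(1),
             port_created[0] ? "yes (leak)" : "no (rolled back)");
      return 0;
  }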
From: Paul Blakey paulb@nvidia.com
commit d9b5ae5c1b241b91480aa30408be12fe91af834a upstream.
IPv6 ttl, label and tos fields are modified without first pulling/pushing the IPv6 header, which would have updated the hw csum (if available). This might cause a csum validation failure when sending the packet to the stack, as can be seen in the trace below.
Fix this by updating skb->csum if available.
Trace resulting from an IPv6 ttl dec and then sending the packet to conntrack [actions: set(ipv6(hlimit=63)),ct(zone=99)]:

[295241.900063] s_pf0vf2: hw csum failure
[295241.923191] Call Trace:
[295241.925728]  <IRQ>
[295241.927836]  dump_stack+0x5c/0x80
[295241.931240]  __skb_checksum_complete+0xac/0xc0
[295241.935778]  nf_conntrack_tcp_packet+0x398/0xba0 [nf_conntrack]
[295241.953030]  nf_conntrack_in+0x498/0x5e0 [nf_conntrack]
[295241.958344]  __ovs_ct_lookup+0xac/0x860 [openvswitch]
[295241.968532]  ovs_ct_execute+0x4a7/0x7c0 [openvswitch]
[295241.979167]  do_execute_actions+0x54a/0xaa0 [openvswitch]
[295242.001482]  ovs_execute_actions+0x48/0x100 [openvswitch]
[295242.006966]  ovs_dp_process_packet+0x96/0x1d0 [openvswitch]
[295242.012626]  ovs_vport_receive+0x6c/0xc0 [openvswitch]
[295242.028763]  netdev_frame_hook+0xc0/0x180 [openvswitch]
[295242.034074]  __netif_receive_skb_core+0x2ca/0xcb0
[295242.047498]  netif_receive_skb_internal+0x3e/0xc0
[295242.052291]  napi_gro_receive+0xba/0xe0
[295242.056231]  mlx5e_handle_rx_cqe_mpwrq_rep+0x12b/0x250 [mlx5_core]
[295242.062513]  mlx5e_poll_rx_cq+0xa0f/0xa30 [mlx5_core]
[295242.067669]  mlx5e_napi_poll+0xe1/0x6b0 [mlx5_core]
[295242.077958]  net_rx_action+0x149/0x3b0
[295242.086762]  __do_softirq+0xd7/0x2d6
[295242.090427]  irq_exit+0xf7/0x100
[295242.093748]  do_IRQ+0x7f/0xd0
[295242.096806]  common_interrupt+0xf/0xf
[295242.100559]  </IRQ>
[295242.102750] RIP: 0033:0x7f9022e88cbd
[295242.125246] RSP: 002b:00007f9022282b20 EFLAGS: 00000246 ORIG_RAX: ffffffffffffffda
[295242.132900] RAX: 0000000000000005 RBX: 0000000000000010 RCX: 0000000000000000
[295242.140120] RDX: 00007f9022282ba8 RSI: 00007f9022282a30 RDI: 00007f9014005c30
[295242.147337] RBP: 00007f9014014d60 R08: 0000000000000020 R09: 00007f90254a8340
[295242.154557] R10: 00007f9022282a28 R11: 0000000000000246 R12: 0000000000000000
[295242.161775] R13: 00007f902308c000 R14: 000000000000002b R15: 00007f9022b71f40
Fixes: 3fdbd1ce11e5 ("openvswitch: add ipv6 'set' action") Signed-off-by: Paul Blakey paulb@nvidia.com Link: https://lore.kernel.org/r/20220223163416.24096-1-paulb@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/checksum.h | 5 +++++ net/openvswitch/actions.c | 46 ++++++++++++++++++++++++++++++++++++++-------- 2 files changed, 43 insertions(+), 8 deletions(-)
--- a/include/net/checksum.h +++ b/include/net/checksum.h @@ -141,6 +141,11 @@ static inline void csum_replace2(__sum16 *sum = ~csum16_add(csum16_sub(~(*sum), old), new); }
+static inline void csum_replace(__wsum *csum, __wsum old, __wsum new) +{ + *csum = csum_add(csum_sub(*csum, old), new); +} + struct sk_buff; void inet_proto_csum_replace4(__sum16 *sum, struct sk_buff *skb, __be32 from, __be32 to, bool pseudohdr); --- a/net/openvswitch/actions.c +++ b/net/openvswitch/actions.c @@ -423,12 +423,43 @@ static void set_ipv6_addr(struct sk_buff memcpy(addr, new_addr, sizeof(__be32[4])); }
-static void set_ipv6_fl(struct ipv6hdr *nh, u32 fl, u32 mask) +static void set_ipv6_dsfield(struct sk_buff *skb, struct ipv6hdr *nh, u8 ipv6_tclass, u8 mask) { + u8 old_ipv6_tclass = ipv6_get_dsfield(nh); + + ipv6_tclass = OVS_MASKED(old_ipv6_tclass, ipv6_tclass, mask); + + if (skb->ip_summed == CHECKSUM_COMPLETE) + csum_replace(&skb->csum, (__force __wsum)(old_ipv6_tclass << 12), + (__force __wsum)(ipv6_tclass << 12)); + + ipv6_change_dsfield(nh, ~mask, ipv6_tclass); +} + +static void set_ipv6_fl(struct sk_buff *skb, struct ipv6hdr *nh, u32 fl, u32 mask) +{ + u32 ofl; + + ofl = nh->flow_lbl[0] << 16 | nh->flow_lbl[1] << 8 | nh->flow_lbl[2]; + fl = OVS_MASKED(ofl, fl, mask); + /* Bits 21-24 are always unmasked, so this retains their values. */ - OVS_SET_MASKED(nh->flow_lbl[0], (u8)(fl >> 16), (u8)(mask >> 16)); - OVS_SET_MASKED(nh->flow_lbl[1], (u8)(fl >> 8), (u8)(mask >> 8)); - OVS_SET_MASKED(nh->flow_lbl[2], (u8)fl, (u8)mask); + nh->flow_lbl[0] = (u8)(fl >> 16); + nh->flow_lbl[1] = (u8)(fl >> 8); + nh->flow_lbl[2] = (u8)fl; + + if (skb->ip_summed == CHECKSUM_COMPLETE) + csum_replace(&skb->csum, (__force __wsum)htonl(ofl), (__force __wsum)htonl(fl)); +} + +static void set_ipv6_ttl(struct sk_buff *skb, struct ipv6hdr *nh, u8 new_ttl, u8 mask) +{ + new_ttl = OVS_MASKED(nh->hop_limit, new_ttl, mask); + + if (skb->ip_summed == CHECKSUM_COMPLETE) + csum_replace(&skb->csum, (__force __wsum)(nh->hop_limit << 8), + (__force __wsum)(new_ttl << 8)); + nh->hop_limit = new_ttl; }
static void set_ip_ttl(struct sk_buff *skb, struct iphdr *nh, u8 new_ttl, @@ -546,18 +577,17 @@ static int set_ipv6(struct sk_buff *skb, } } if (mask->ipv6_tclass) { - ipv6_change_dsfield(nh, ~mask->ipv6_tclass, key->ipv6_tclass); + set_ipv6_dsfield(skb, nh, key->ipv6_tclass, mask->ipv6_tclass); flow_key->ip.tos = ipv6_get_dsfield(nh); } if (mask->ipv6_label) { - set_ipv6_fl(nh, ntohl(key->ipv6_label), + set_ipv6_fl(skb, nh, ntohl(key->ipv6_label), ntohl(mask->ipv6_label)); flow_key->ipv6.label = *(__be32 *)nh & htonl(IPV6_FLOWINFO_FLOWLABEL); } if (mask->ipv6_hlimit) { - OVS_SET_MASKED(nh->hop_limit, key->ipv6_hlimit, - mask->ipv6_hlimit); + set_ipv6_ttl(skb, nh, key->ipv6_hlimit, mask->ipv6_hlimit); flow_key->ip.ttl = nh->hop_limit; } return 0;
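The new csum_replace() helper relies on the standard incremental-update property of the Internet checksum (RFC 1624): subtracting the old value and adding the new one in one's-complement arithmetic gives the same sum as recomputing from scratch. A small standalone demonstration of that property, with uint32_t standing in for __wsum and made-up packet words:

  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>

  static uint32_t csum_add(uint32_t csum, uint32_t addend)
  {
      csum += addend;
      return csum + (csum < addend);  /* fold the end-around carry back in */
  }

  static uint32_t csum_sub(uint32_t csum, uint32_t addend)
  {
      return csum_add(csum, ~addend);
  }

  static uint32_t csum_partial(const uint16_t *buf, int words)
  {
      uint32_t sum = 0;

      for (int i = 0; i < words; i++)
          sum = csum_add(sum, buf[i]);
      return sum;
  }

  int main(void)
  {
      uint16_t pkt[4] = { 0x4500, 0x0054, 0x1c46, 0x4000 };
      uint32_t sum = csum_partial(pkt, 4);
      uint16_t old_w = pkt[3], new_w = 0x3f00;

      /* change one 16-bit field and update the running sum incrementally,
       * as set_ipv6_ttl()/set_ipv6_dsfield() now do for skb->csum */
      pkt[3] = new_w;
      sum = csum_add(csum_sub(sum, old_w), new_w);

      assert(sum == csum_partial(pkt, 4));
      printf("incremental update matches full recompute\n");
      return 0;
  }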
From: Maxime Ripard maxime@cerno.tech
commit ecbd4912a693b862e25cba0a6990a8c95b00721e upstream.
In order to fill the drm_display_info structure each time an EDID is read, the code currently will call drm_add_display_info with the parsed EDID.
drm_add_display_info will then call drm_reset_display_info to reset all the fields to 0, and then set them to the proper value depending on the EDID.
In the color_formats case, we will thus report that we don't support any color format, and then fill it back with RGB444 plus the additional formats described in the EDID Feature Support byte.
However, since that byte only contains format-related bits as of the 1.4 specification, this doesn't happen if the EDID follows an earlier specification. In turn, it means that for such an EDID we end up with color_formats set to 0.
The EDID 1.3 specification never really specifies what it means by RGB exactly, but since both HDMI and DVI will use RGB444, it's fairly safe to assume it's supposed to be RGB444.
Let's move the addition of RGB444 to color_formats earlier in drm_add_display_info() so that it's always set for a digital display.
Fixes: da05a5a71ad8 ("drm: parse color format support for digital displays") Cc: Ville Syrjälä ville.syrjala@linux.intel.com Reported-by: Matthias Reichl hias@horus.com Signed-off-by: Maxime Ripard maxime@cerno.tech Reviewed-by: Ville Syrjälä ville.syrjala@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20220203115416.1137308-1-maxim... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/drm_edid.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/drm_edid.c +++ b/drivers/gpu/drm/drm_edid.c @@ -5345,6 +5345,7 @@ u32 drm_add_display_info(struct drm_conn if (!(edid->input & DRM_EDID_INPUT_DIGITAL)) return quirks;
+ info->color_formats |= DRM_COLOR_FORMAT_RGB444; drm_parse_cea_ext(connector, edid);
/* @@ -5393,7 +5394,6 @@ u32 drm_add_display_info(struct drm_conn DRM_DEBUG("%s: Assigning EDID-1.4 digital sink color depth as %d bpc.\n", connector->name, info->bpc);
- info->color_formats |= DRM_COLOR_FORMAT_RGB444; if (edid->features & DRM_EDID_FEATURE_RGB_YCRCB444) info->color_formats |= DRM_COLOR_FORMAT_YCRCB444; if (edid->features & DRM_EDID_FEATURE_RGB_YCRCB422)
From: Gal Pressman gal@nvidia.com
commit 0b89429722353d112f8b8b29ca397e95fa994d27 upstream.
The ioctl EEPROM query wrongly returns success on read failures; fix that by returning the appropriate error code.
Fixes: bb64143eee8c ("net/mlx5e: Add ethtool support for dump module EEPROM") Signed-off-by: Gal Pressman gal@nvidia.com Reviewed-by: Tariq Toukan tariqt@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c @@ -1752,7 +1752,7 @@ static int mlx5e_get_module_eeprom(struc if (size_read < 0) { netdev_err(priv->netdev, "%s: mlx5_query_eeprom failed:0x%x\n", __func__, size_read); - return 0; + return size_read; }
i += size_read;
From: Roi Dayan roid@nvidia.com
commit 3d65492a86d4e6675734646929759138a023d914 upstream.
Such rules are redundant but allowed and passed to the driver. The driver does not support offloading such rules so return an error.
Fixes: 03a9d11e6eeb ("net/mlx5e: Add TC drop and mirred/redirect action parsing for SRIOV offloads") Signed-off-by: Roi Dayan roid@nvidia.com Reviewed-by: Oz Shlomo ozsh@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 6 ++++++ 1 file changed, 6 insertions(+)
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -3427,6 +3427,12 @@ actions_match_supported(struct mlx5e_pri return false; }
+ if (!(~actions & + (MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_DROP))) { + NL_SET_ERR_MSG_MOD(extack, "Rule cannot support forward+drop action"); + return false; + } + if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR && actions & MLX5_FLOW_CONTEXT_ACTION_DROP) { NL_SET_ERR_MSG_MOD(extack, "Drop with modify header action is not supported");
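The added check uses !(~actions & (FWD | DROP)), which is a compact way of testing that both bits are set in 'actions' at the same time. A tiny standalone illustration of the bit trick (the flag values below are made up; the real MLX5_FLOW_CONTEXT_ACTION_* bits differ):

  #include <assert.h>
  #include <stdio.h>

  #define ACTION_FWD  (1u << 0)   /* illustrative values only */
  #define ACTION_DROP (1u << 1)

  /* true only when every bit in the FWD|DROP mask is already set */
  static int fwd_and_drop(unsigned int actions)
  {
      return !(~actions & (ACTION_FWD | ACTION_DROP));
  }

  int main(void)
  {
      assert(!fwd_and_drop(ACTION_FWD));                  /* forward only: allowed */
      assert(!fwd_and_drop(ACTION_DROP));                 /* drop only: allowed */
      assert(fwd_and_drop(ACTION_FWD | ACTION_DROP));     /* both: rejected */
      printf("forward+drop combination detected as expected\n");
      return 0;
  }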
From: Roi Dayan roid@nvidia.com
commit 23216d387c40b090b221ad457c95912fb47eb11e upstream.
This kind of action is not supported by firmware and generates a syndrome.
kernel: mlx5_core 0000:08:00.0: mlx5_cmd_check:777:(pid 102063): SET_FLOW_TABLE_ENTRY(0x936) op_mod(0x0) failed, status bad parameter(0x3), syndrome (0x8708c3)
Fixes: d7e75a325cb2 ("net/mlx5e: Add offloading of E-Switch TC pedit (header re-write) actions") Signed-off-by: Roi Dayan roid@nvidia.com Reviewed-by: Maor Dickman maord@nvidia.com Reviewed-by: Oz Shlomo ozsh@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 6 ++++++ 1 file changed, 6 insertions(+)
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -3440,6 +3440,12 @@ actions_match_supported(struct mlx5e_pri }
if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR && + actions & MLX5_FLOW_CONTEXT_ACTION_DROP) { + NL_SET_ERR_MSG_MOD(extack, "Drop with modify header action is not supported"); + return false; + } + + if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR && !modify_header_match_supported(priv, &parse_attr->spec, flow_action, actions, ct_flow, ct_clear, extack)) return false;
From: Stefano Garzarella sgarzare@redhat.com
commit bb49c6fa8b845591b317b0d7afea4ae60ec7f3aa upstream.
iocb_bio_iopoll() expects iocb->private to be cleared before releasing the bio.
We already do this in blkdev_bio_end_io(), but we forgot in the recently added blkdev_bio_end_io_async().
Fixes: 54a88eb838d3 ("block: add single bio async direct IO helper") Cc: asml.silence@gmail.com Signed-off-by: Stefano Garzarella sgarzare@redhat.com Reviewed-by: Ming Lei ming.lei@redhat.com Reviewed-by: Christoph Hellwig hch@lst.de Link: https://lore.kernel.org/r/20220211090136.44471-1-sgarzare@redhat.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- block/fops.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/block/fops.c +++ b/block/fops.c @@ -289,6 +289,8 @@ static void blkdev_bio_end_io_async(stru struct kiocb *iocb = dio->iocb; ssize_t ret;
+ WRITE_ONCE(iocb->private, NULL); + if (likely(!bio->bi_status)) { ret = dio->size; iocb->ki_pos += ret;
From: Maxime Ripard maxime@cerno.tech
commit 6764eb690e77ecded48587d6d4e346ba2e196546 upstream.
At boot on the BCM2711, if the HDMI controllers are running, the CRTC driver will disable itself and its associated HDMI controller to work around a hardware bug that would leave some pixels stuck in a FIFO.
In order to avoid that issue, we need to run some operations in lockstep between the CRTC and HDMI controller, and we need to make sure the HDMI controller will be powered properly.
However, since we haven't enabled it through KMS, the runtime_pm state is off at this point so we need to make sure the device is powered through pm_runtime_resume_and_get, and once the operations are complete, we call pm_runtime_put.
However, the HDMI controller will do that itself in its post_crtc_powerdown, which means we'll end up calling pm_runtime_put for a single pm_runtime_get, throwing the reference counting off. Let's remove the pm_runtime_put call in the CRTC code in order to have the proper counting.
Fixes: bca10db67bda ("drm/vc4: crtc: Make sure the HDMI controller is powered when disabling") Signed-off-by: Maxime Ripard maxime@cerno.tech Reviewed-by: Javier Martinez Canillas javierm@redhat.com Link: https://patchwork.freedesktop.org/patch/msgid/20220203102003.1114673-1-maxim... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/vc4/vc4_crtc.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
--- a/drivers/gpu/drm/vc4/vc4_crtc.c +++ b/drivers/gpu/drm/vc4/vc4_crtc.c @@ -538,9 +538,11 @@ int vc4_crtc_disable_at_boot(struct drm_ if (ret) return ret;
- ret = pm_runtime_put(&vc4_hdmi->pdev->dev); - if (ret) - return ret; + /* + * post_crtc_powerdown will have called pm_runtime_put, so we + * don't need it here otherwise we'll get the reference counting + * wrong. + */
return 0; }
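The underlying rule is simply that runtime-PM references must balance: one pm_runtime_resume_and_get() pairs with exactly one put, and here the put is already issued by post_crtc_powerdown(). A toy counter model of the imbalance the removed call caused (not the real runtime-PM API):

  #include <assert.h>
  #include <stdio.h>

  static int usage_count;            /* stand-in for the runtime-PM usage count */

  static void toy_resume_and_get(void) { usage_count++; }
  static void toy_put(void)            { usage_count--; }

  int main(void)
  {
      toy_resume_and_get();          /* vc4_crtc_disable_at_boot() powers up HDMI */
      toy_put();                     /* put issued from post_crtc_powerdown() */
      /* Before the fix the CRTC code called put a second time here,
       * dropping the count below zero:
       * toy_put();
       */
      assert(usage_count == 0);
      printf("usage count balanced: %d\n", usage_count);
      return 0;
  }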
From: Matt Roper matthew.d.roper@intel.com
commit 28adef861233c6fce47372ebd2070b55eaa8e899 upstream.
We need to use phy_name() to convert the PHY value into a human-readable character in the error message.
Fixes: a6a128116e55 ("drm/i915/dg2: Wait for SNPS PHY calibration during display init") Signed-off-by: Matt Roper matthew.d.roper@intel.com Reviewed-by: Swathi Dhanavanthri swathi.dhanavanthri@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20220215163545.2175730-1-matth... (cherry picked from commit 84073e568eec7b586b2f6fd5fb2fb08f59edec54) Signed-off-by: Tvrtko Ursulin tvrtko.ursulin@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/i915/display/intel_snps_phy.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/i915/display/intel_snps_phy.c +++ b/drivers/gpu/drm/i915/display/intel_snps_phy.c @@ -34,7 +34,7 @@ void intel_snps_phy_wait_for_calibration if (intel_de_wait_for_clear(dev_priv, ICL_PHY_MISC(phy), DG2_PHY_DP_TX_ACK_MASK, 25)) DRM_ERROR("SNPS PHY %c failed to calibrate after 25ms.\n", - phy); + phy_name(phy)); } }
From: Michel Dänzer mdaenzer@redhat.com
commit 4d22336f903930eb94588b939c310743a3640276 upstream.
Even if PSR is allowed for a present GPU, there might be no eDP link which supports PSR.
Fixes: 708978487304 ("drm/amdgpu/display: Only set vblank_disable_immediate when PSR is not enabled") Reviewed-by: Harry Wentland harry.wentland@amd.com Signed-off-by: Michel Dänzer mdaenzer@redhat.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-)
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -4232,6 +4232,9 @@ static int amdgpu_dm_initialize_drm_devi } #endif
+ /* Disable vblank IRQs aggressively for power-saving. */ + adev_to_drm(adev)->vblank_disable_immediate = true; + /* loops over all connectors on the board */ for (i = 0; i < link_cnt; i++) { struct dc_link *link = NULL; @@ -4277,19 +4280,17 @@ static int amdgpu_dm_initialize_drm_devi update_connector_ext_caps(aconnector); if (psr_feature_enabled) amdgpu_dm_set_psr_caps(link); + + /* TODO: Fix vblank control helpers to delay PSR entry to allow this when + * PSR is also supported. + */ + if (link->psr_settings.psr_feature_enabled) + adev_to_drm(adev)->vblank_disable_immediate = false; }
}
- /* - * Disable vblank IRQs aggressively for power-saving. - * - * TODO: Fix vblank control helpers to delay PSR entry to allow this when PSR - * is also supported. - */ - adev_to_drm(adev)->vblank_disable_immediate = !psr_feature_enabled; - /* Software is initialized. Now we can register interrupt handlers. */ switch (adev->asic_type) { #if defined(CONFIG_DRM_AMD_DC_SI)
From: Paul Blakey paulb@nvidia.com
commit 2f131de361f6d0eaff17db26efdb844c178432f8 upstream.
Flow table lookup is skipped if the packet either went through a ct clear action (which sets the IP_CT_UNTRACKED flag on the packet), or while switching zones when there is already a connection associated with the packet. This results in no SW offload of the connection, and in the connection not being removed from the flow table on TCP teardown (fin/rst packet).
To fix the above, remove these unnecessary checks from the flow table lookup.
Fixes: 46475bb20f4b ("net/sched: act_ct: Software offload of established flows") Signed-off-by: Paul Blakey paulb@nvidia.com Acked-by: Marcelo Ricardo Leitner marcelo.leitner@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/sched/act_ct.c | 5 ----- 1 file changed, 5 deletions(-)
--- a/net/sched/act_ct.c +++ b/net/sched/act_ct.c @@ -516,11 +516,6 @@ static bool tcf_ct_flow_table_lookup(str struct nf_conn *ct; u8 dir;
- /* Previously seen or loopback */ - ct = nf_ct_get(skb, &ctinfo); - if ((ct && !nf_ct_is_template(ct)) || ctinfo == IP_CT_UNTRACKED) - return false; - switch (family) { case NFPROTO_IPV4: if (!tcf_ct_flow_table_fill_tuple_ipv4(skb, &tuple, &tcph))
From: Xiaoke Wang xkernel.wang@foxmail.com
commit b352c3465bb808ab700d03f5bac2f7a6f37c5350 upstream.
devm_kmalloc() returns a pointer to allocated memory on success, or NULL on failure. However, lp->indirect_lock is allocated by devm_kmalloc() without any check. Check the returned value to prevent a potential NULL pointer dereference.
Fixes: f14f5c11f051 ("net: ll_temac: Support indirect_mutex share within TEMAC IP") Signed-off-by: Xiaoke Wang xkernel.wang@foxmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/xilinx/ll_temac_main.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/net/ethernet/xilinx/ll_temac_main.c +++ b/drivers/net/ethernet/xilinx/ll_temac_main.c @@ -1427,6 +1427,8 @@ static int temac_probe(struct platform_d lp->indirect_lock = devm_kmalloc(&pdev->dev, sizeof(*lp->indirect_lock), GFP_KERNEL); + if (!lp->indirect_lock) + return -ENOMEM; spin_lock_init(lp->indirect_lock); }
From: Christophe Leroy christophe.leroy@csgroup.eu
commit 5486f5bf790b5c664913076c3194b8f916a5c7ad upstream.
All functions defined as static inline in net/checksum.h are meant to be inlined for performance reasons.
But since commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING forcibly") the compiler is allowed to uninline functions when it wants.
Fair enough in the general case, but for tiny performance critical checksum helpers that's counter-productive.
The problem mainly arises when CONFIG_CC_OPTIMISE_FOR_SIZE is selected: because those helpers are 'static inline' in header files, you suddenly find them duplicated many times in the resulting vmlinux.
Here is a typical example when building powerpc pmac32_defconfig with CONFIG_CC_OPTIMISE_FOR_SIZE. csum_sub() appears 4 times:
c04a23cc <csum_sub>:
c04a23cc:  7c 84 20 f8   not     r4,r4
c04a23d0:  7c 63 20 14   addc    r3,r3,r4
c04a23d4:  7c 63 01 94   addze   r3,r3
c04a23d8:  4e 80 00 20   blr
  ...
c04a2ce8:  4b ff f6 e5   bl      c04a23cc <csum_sub>
  ...
c04a2d2c:  4b ff f6 a1   bl      c04a23cc <csum_sub>
  ...
c04a2d54:  4b ff f6 79   bl      c04a23cc <csum_sub>
  ...
c04a754c <csum_sub>:
c04a754c:  7c 84 20 f8   not     r4,r4
c04a7550:  7c 63 20 14   addc    r3,r3,r4
c04a7554:  7c 63 01 94   addze   r3,r3
c04a7558:  4e 80 00 20   blr
  ...
c04ac930:  4b ff ac 1d   bl      c04a754c <csum_sub>
  ...
c04ad264:  4b ff a2 e9   bl      c04a754c <csum_sub>
  ...
c04e3b08 <csum_sub>:
c04e3b08:  7c 84 20 f8   not     r4,r4
c04e3b0c:  7c 63 20 14   addc    r3,r3,r4
c04e3b10:  7c 63 01 94   addze   r3,r3
c04e3b14:  4e 80 00 20   blr
  ...
c04e5788:  4b ff e3 81   bl      c04e3b08 <csum_sub>
  ...
c04e65c8:  4b ff d5 41   bl      c04e3b08 <csum_sub>
  ...
c0512d34 <csum_sub>:
c0512d34:  7c 84 20 f8   not     r4,r4
c0512d38:  7c 63 20 14   addc    r3,r3,r4
c0512d3c:  7c 63 01 94   addze   r3,r3
c0512d40:  4e 80 00 20   blr
  ...
c0512dfc:  4b ff ff 39   bl      c0512d34 <csum_sub>
  ...
c05138bc:  4b ff f4 79   bl      c0512d34 <csum_sub>
  ...
Restore the expected behaviour by using __always_inline for all functions defined in net/checksum.h
vmlinux size is even reduced by 256 bytes with this patch:
   text    data    bss     dec    hex filename
6980022 2515362 194384 9689768 93daa8 vmlinux.before
6979862 2515266 194384 9689512 93d9a8 vmlinux.now
Fixes: ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING forcibly") Cc: Masahiro Yamada yamada.masahiro@socionext.com Cc: Nick Desaulniers ndesaulniers@google.com Cc: Andrew Morton akpm@linux-foundation.org Signed-off-by: Christophe Leroy christophe.leroy@csgroup.eu Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/checksum.h | 45 +++++++++++++++++++++++---------------------- 1 file changed, 23 insertions(+), 22 deletions(-)
--- a/include/net/checksum.h +++ b/include/net/checksum.h @@ -22,7 +22,7 @@ #include <asm/checksum.h>
#ifndef _HAVE_ARCH_COPY_AND_CSUM_FROM_USER -static inline +static __always_inline __wsum csum_and_copy_from_user (const void __user *src, void *dst, int len) { @@ -33,7 +33,7 @@ __wsum csum_and_copy_from_user (const vo #endif
#ifndef HAVE_CSUM_COPY_USER -static __inline__ __wsum csum_and_copy_to_user +static __always_inline __wsum csum_and_copy_to_user (const void *src, void __user *dst, int len) { __wsum sum = csum_partial(src, len, ~0U); @@ -45,7 +45,7 @@ static __inline__ __wsum csum_and_copy_t #endif
#ifndef _HAVE_ARCH_CSUM_AND_COPY -static inline __wsum +static __always_inline __wsum csum_partial_copy_nocheck(const void *src, void *dst, int len) { memcpy(dst, src, len); @@ -54,7 +54,7 @@ csum_partial_copy_nocheck(const void *sr #endif
#ifndef HAVE_ARCH_CSUM_ADD -static inline __wsum csum_add(__wsum csum, __wsum addend) +static __always_inline __wsum csum_add(__wsum csum, __wsum addend) { u32 res = (__force u32)csum; res += (__force u32)addend; @@ -62,12 +62,12 @@ static inline __wsum csum_add(__wsum csu } #endif
-static inline __wsum csum_sub(__wsum csum, __wsum addend) +static __always_inline __wsum csum_sub(__wsum csum, __wsum addend) { return csum_add(csum, ~addend); }
-static inline __sum16 csum16_add(__sum16 csum, __be16 addend) +static __always_inline __sum16 csum16_add(__sum16 csum, __be16 addend) { u16 res = (__force u16)csum;
@@ -75,12 +75,12 @@ static inline __sum16 csum16_add(__sum16 return (__force __sum16)(res + (res < (__force u16)addend)); }
-static inline __sum16 csum16_sub(__sum16 csum, __be16 addend) +static __always_inline __sum16 csum16_sub(__sum16 csum, __be16 addend) { return csum16_add(csum, ~addend); }
-static inline __wsum csum_shift(__wsum sum, int offset) +static __always_inline __wsum csum_shift(__wsum sum, int offset) { /* rotate sum to align it with a 16b boundary */ if (offset & 1) @@ -88,42 +88,43 @@ static inline __wsum csum_shift(__wsum s return sum; }
-static inline __wsum +static __always_inline __wsum csum_block_add(__wsum csum, __wsum csum2, int offset) { return csum_add(csum, csum_shift(csum2, offset)); }
-static inline __wsum +static __always_inline __wsum csum_block_add_ext(__wsum csum, __wsum csum2, int offset, int len) { return csum_block_add(csum, csum2, offset); }
-static inline __wsum +static __always_inline __wsum csum_block_sub(__wsum csum, __wsum csum2, int offset) { return csum_block_add(csum, ~csum2, offset); }
-static inline __wsum csum_unfold(__sum16 n) +static __always_inline __wsum csum_unfold(__sum16 n) { return (__force __wsum)n; }
-static inline __wsum csum_partial_ext(const void *buff, int len, __wsum sum) +static __always_inline +__wsum csum_partial_ext(const void *buff, int len, __wsum sum) { return csum_partial(buff, len, sum); }
#define CSUM_MANGLED_0 ((__force __sum16)0xffff)
-static inline void csum_replace_by_diff(__sum16 *sum, __wsum diff) +static __always_inline void csum_replace_by_diff(__sum16 *sum, __wsum diff) { *sum = csum_fold(csum_add(diff, ~csum_unfold(*sum))); }
-static inline void csum_replace4(__sum16 *sum, __be32 from, __be32 to) +static __always_inline void csum_replace4(__sum16 *sum, __be32 from, __be32 to) { __wsum tmp = csum_sub(~csum_unfold(*sum), (__force __wsum)from);
@@ -136,7 +137,7 @@ static inline void csum_replace4(__sum16 * m : old value of a 16bit field * m' : new value of a 16bit field */ -static inline void csum_replace2(__sum16 *sum, __be16 old, __be16 new) +static __always_inline void csum_replace2(__sum16 *sum, __be16 old, __be16 new) { *sum = ~csum16_add(csum16_sub(~(*sum), old), new); } @@ -155,16 +156,16 @@ void inet_proto_csum_replace16(__sum16 * void inet_proto_csum_replace_by_diff(__sum16 *sum, struct sk_buff *skb, __wsum diff, bool pseudohdr);
-static inline void inet_proto_csum_replace2(__sum16 *sum, struct sk_buff *skb, - __be16 from, __be16 to, - bool pseudohdr) +static __always_inline +void inet_proto_csum_replace2(__sum16 *sum, struct sk_buff *skb, + __be16 from, __be16 to, bool pseudohdr) { inet_proto_csum_replace4(sum, skb, (__force __be32)from, (__force __be32)to, pseudohdr); }
-static inline __wsum remcsum_adjust(void *ptr, __wsum csum, - int start, int offset) +static __always_inline __wsum remcsum_adjust(void *ptr, __wsum csum, + int start, int offset) { __sum16 *psum = (__sum16 *)(ptr + offset); __wsum delta; @@ -180,7 +181,7 @@ static inline __wsum remcsum_adjust(void return delta; }
-static inline void remcsum_unadjust(__sum16 *psum, __wsum delta) +static __always_inline void remcsum_unadjust(__sum16 *psum, __wsum delta) { *psum = csum_fold(csum_sub(delta, (__force __wsum)*psum)); }
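As a standalone illustration of the attribute the patch switches to: with plain static inline under -Os the compiler may emit out-of-line copies per translation unit (as in the objdump output above), while __always_inline forces inlining at every call site. The macro definition below mirrors the kernel's but is written out so the sketch builds on its own:

  #include <stdio.h>

  /* In the kernel this comes from <linux/compiler_attributes.h>. */
  #define __always_inline inline __attribute__((__always_inline__))

  static __always_inline unsigned int csum_add_demo(unsigned int csum,
                                                    unsigned int addend)
  {
      unsigned int res = csum + addend;

      return res + (res < addend);   /* fold the carry back in */
  }

  int main(void)
  {
      /* every call site gets the tiny body inlined instead of a call to a
       * duplicated static copy of the helper */
      printf("%x\n", csum_add_demo(0xffff0000u, 0x00020000u));
      return 0;
  }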
From: Pablo Neira Ayuso pablo@netfilter.org
commit 6069da443bf65f513bb507bb21e2f87cfb1ad0b6 upstream.
Unregister flowtable hooks before they are released via nf_tables_flowtable_destroy(), otherwise the hook core reports a UAF.
BUG: KASAN: use-after-free in nf_hook_entries_grow+0x5a7/0x700 net/netfilter/core.c:142 net/netfilter/core.c:142
Read of size 4 at addr ffff8880736f7438 by task syz-executor579/3666

CPU: 0 PID: 3666 Comm: syz-executor579 Not tainted 5.16.0-rc5-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 __dump_stack lib/dump_stack.c:88 [inline] lib/dump_stack.c:106
 dump_stack_lvl+0x1dc/0x2d8 lib/dump_stack.c:106 lib/dump_stack.c:106
 print_address_description+0x65/0x380 mm/kasan/report.c:247 mm/kasan/report.c:247
 __kasan_report mm/kasan/report.c:433 [inline]
 __kasan_report mm/kasan/report.c:433 [inline] mm/kasan/report.c:450
 kasan_report+0x19a/0x1f0 mm/kasan/report.c:450 mm/kasan/report.c:450
 nf_hook_entries_grow+0x5a7/0x700 net/netfilter/core.c:142 net/netfilter/core.c:142
 __nf_register_net_hook+0x27e/0x8d0 net/netfilter/core.c:429 net/netfilter/core.c:429
 nf_register_net_hook+0xaa/0x180 net/netfilter/core.c:571 net/netfilter/core.c:571
 nft_register_flowtable_net_hooks+0x3c5/0x730 net/netfilter/nf_tables_api.c:7232 net/netfilter/nf_tables_api.c:7232
 nf_tables_newflowtable+0x2022/0x2cf0 net/netfilter/nf_tables_api.c:7430 net/netfilter/nf_tables_api.c:7430
 nfnetlink_rcv_batch net/netfilter/nfnetlink.c:513 [inline]
 nfnetlink_rcv_skb_batch net/netfilter/nfnetlink.c:634 [inline]
 nfnetlink_rcv_batch net/netfilter/nfnetlink.c:513 [inline] net/netfilter/nfnetlink.c:652
 nfnetlink_rcv_skb_batch net/netfilter/nfnetlink.c:634 [inline] net/netfilter/nfnetlink.c:652
 nfnetlink_rcv+0x10e6/0x2550 net/netfilter/nfnetlink.c:652 net/netfilter/nfnetlink.c:652
__nft_release_hook() calls nft_unregister_flowtable_net_hooks() which only unregisters the hooks, then after RCU grace period, it is guaranteed that no packets add new entries to the flowtable (no flow offload rules and flowtable hooks are reachable from packet path), so it is safe to call nf_flow_table_free() which cleans up the remaining entries from the flowtable (both software and hardware) and it unbinds the flow_block.
Fixes: ff4bf2f42a40 ("netfilter: nf_tables: add nft_unregister_flowtable_hook()") Reported-by: syzbot+e918523f77e62790d6d9@syzkaller.appspotmail.com Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/netfilter/nf_tables_api.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -9574,10 +9574,13 @@ EXPORT_SYMBOL_GPL(__nft_release_basechai
static void __nft_release_hook(struct net *net, struct nft_table *table) { + struct nft_flowtable *flowtable; struct nft_chain *chain;
list_for_each_entry(chain, &table->chains, list) nf_tables_unregister_hook(net, table, chain); + list_for_each_entry(flowtable, &table->flowtables, list) + nft_unregister_flowtable_net_hooks(net, &flowtable->hook_list); }
static void __nft_release_hooks(struct net *net)
From: Vladimir Oltean vladimir.oltean@nxp.com
commit 8940e6b669ca1196ce0a0549c819078096390f76 upstream.
If the DSA master doesn't support IFF_UNICAST_FLT, then the following call path is possible:
dsa_slave_switchdev_event_work -> dsa_port_host_fdb_add -> dev_uc_add -> __dev_set_rx_mode -> __dev_set_promiscuity
Since the blamed commit, dsa_slave_switchdev_event_work() no longer holds rtnl_lock(), which triggers the ASSERT_RTNL() from __dev_set_promiscuity().
Taking rtnl_lock() around dev_uc_add() is impossible, because all the code paths that call dsa_flush_workqueue() do so from contexts where the rtnl_mutex is already held - so this would lead to an instant deadlock.
dev_uc_add() in itself doesn't require the rtnl_mutex for protection. There is this comment in __dev_set_rx_mode() which assumes so:
/* Unicast addresses changes may only happen under the rtnl, * therefore calling __dev_set_promiscuity here is safe. */
but it is from commit 4417da668c00 ("[NET]: dev: secondary unicast address support") dated June 2007, and in the meantime, commit f1f28aa3510d ("netdev: Add addr_list_lock to struct net_device."), dated July 2008, has added &dev->addr_list_lock to protect this instead of the global rtnl_mutex.
Nonetheless, __dev_set_promiscuity() does assume rtnl_mutex protection, but it is the uncommon path of what we typically expect dev_uc_add() to do. So since only the uncommon path requires rtnl_lock(), just check ahead of time whether dev_uc_add() would result into a call to __dev_set_promiscuity(), and handle that condition separately.
DSA already configures the master interface to be promiscuous if the tagger requires this. We can extend this to also cover the case where the master doesn't handle dev_uc_add() (doesn't support IFF_UNICAST_FLT), and on the premise that we'd end up making it promiscuous during operation anyway, either if a DSA slave has a non-inherited MAC address, or if the bridge notifies local FDB entries for its own MAC address, the address of a station learned on a foreign port, etc.
Fixes: 0faf890fc519 ("net: dsa: drop rtnl_lock from dsa_slave_switchdev_event_work") Reported-by: Oleksij Rempel o.rempel@pengutronix.de Signed-off-by: Vladimir Oltean vladimir.oltean@nxp.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/dsa/master.c | 7 ++++++- net/dsa/port.c | 20 ++++++++++++++------ 2 files changed, 20 insertions(+), 7 deletions(-)
--- a/net/dsa/master.c +++ b/net/dsa/master.c @@ -260,11 +260,16 @@ static void dsa_netdev_ops_set(struct ne dev->dsa_ptr->netdev_ops = ops; }
+/* Keep the master always promiscuous if the tagging protocol requires that + * (garbles MAC DA) or if it doesn't support unicast filtering, case in which + * it would revert to promiscuous mode as soon as we call dev_uc_add() on it + * anyway. + */ static void dsa_master_set_promiscuity(struct net_device *dev, int inc) { const struct dsa_device_ops *ops = dev->dsa_ptr->tag_ops;
- if (!ops->promisc_on_master) + if ((dev->priv_flags & IFF_UNICAST_FLT) && !ops->promisc_on_master) return;
rtnl_lock(); --- a/net/dsa/port.c +++ b/net/dsa/port.c @@ -777,9 +777,15 @@ int dsa_port_host_fdb_add(struct dsa_por struct dsa_port *cpu_dp = dp->cpu_dp; int err;
- err = dev_uc_add(cpu_dp->master, addr); - if (err) - return err; + /* Avoid a call to __dev_set_promiscuity() on the master, which + * requires rtnl_lock(), since we can't guarantee that is held here, + * and we can't take it either. + */ + if (cpu_dp->master->priv_flags & IFF_UNICAST_FLT) { + err = dev_uc_add(cpu_dp->master, addr); + if (err) + return err; + }
return dsa_port_notify(dp, DSA_NOTIFIER_HOST_FDB_ADD, &info); } @@ -796,9 +802,11 @@ int dsa_port_host_fdb_del(struct dsa_por struct dsa_port *cpu_dp = dp->cpu_dp; int err;
- err = dev_uc_del(cpu_dp->master, addr); - if (err) - return err; + if (cpu_dp->master->priv_flags & IFF_UNICAST_FLT) { + err = dev_uc_del(cpu_dp->master, addr); + if (err) + return err; + }
return dsa_port_notify(dp, DSA_NOTIFIER_HOST_FDB_DEL, &info); }
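For illustration, the decision the patch makes can be summarized in a few lines: masters without unicast filtering are made promiscuous up front, so host FDB entries skip dev_uc_add() (which could otherwise need rtnl for the promiscuity fallback). A minimal sketch; the flag value is made up, the real IFF_UNICAST_FLT lives in include/linux/netdevice.h:

  #include <stdio.h>

  #define TOY_IFF_UNICAST_FLT (1u << 0)   /* illustrative value only */

  static const char *host_fdb_strategy(unsigned int master_flags)
  {
      if (master_flags & TOY_IFF_UNICAST_FLT)
          return "dev_uc_add() on the master";
      return "rely on master promiscuity set at setup time";
  }

  int main(void)
  {
      printf("filtering master:     %s\n", host_fdb_strategy(TOY_IFF_UNICAST_FLT));
      printf("non-filtering master: %s\n", host_fdb_strategy(0));
      return 0;
  }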
From: Christophe JAILLET christophe.jaillet@wanadoo.fr
commit 3a14d0888eb4b0045884126acc69abfb7b87814d upstream.
ida_simple_get() returns an id between min (0) and max (NFP_MAX_MAC_INDEX) inclusive. So NFP_MAX_MAC_INDEX (0xff) is a valid id.
In order for the error handling path to work correctly, the 'invalid' value for 'ida_idx' should not be in the 0..NFP_MAX_MAC_INDEX range, inclusive.
So set it to -1.
Fixes: 20cce8865098 ("nfp: flower: enable MAC address sharing for offloadable devs") Signed-off-by: Christophe JAILLET christophe.jaillet@wanadoo.fr Signed-off-by: Simon Horman simon.horman@corigine.com Link: https://lore.kernel.org/r/20220218131535.100258-1-simon.horman@corigine.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c +++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c @@ -922,8 +922,8 @@ nfp_tunnel_add_shared_mac(struct nfp_app int port, bool mod) { struct nfp_flower_priv *priv = app->priv; - int ida_idx = NFP_MAX_MAC_INDEX, err; struct nfp_tun_offloaded_mac *entry; + int ida_idx = -1, err; u16 nfp_mac_idx = 0;
entry = nfp_tunnel_lookup_offloaded_macs(app, netdev->dev_addr); @@ -997,7 +997,7 @@ err_remove_hash: err_free_entry: kfree(entry); err_free_ida: - if (ida_idx != NFP_MAX_MAC_INDEX) + if (ida_idx != -1) ida_simple_remove(&priv->tun.mac_off_ids, ida_idx);
return err;
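The fix is about the choice of sentinel: when every id in 0..NFP_MAX_MAC_INDEX is a valid allocation, the maximum itself cannot also mark "nothing allocated". A minimal sketch of the error-path logic with a sentinel outside the valid range (toy code, not the driver's functions):

  #include <stdio.h>

  #define MAX_INDEX 0xff   /* every value in 0..MAX_INDEX is a valid id */

  static void cleanup(int ida_idx)
  {
      if (ida_idx != -1)
          printf("releasing id %d\n", ida_idx);
      else
          printf("nothing to release\n");
  }

  int main(void)
  {
      int ida_idx = -1;        /* not allocated yet */

      cleanup(ida_idx);        /* error before allocation: no release */

      ida_idx = MAX_INDEX;     /* a perfectly valid allocated id */
      cleanup(ida_idx);        /* error after allocation: must be released */
      return 0;
  }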
From: Baruch Siach baruch.siach@siklu.com
commit b6ad6261d27708567b309fdb3102b12c42a070cc upstream.
Experimentation shows that PHY detection might fail when the code attempts an MDIO bus read immediately after clock enable. Add a delay to stabilize the clock before bus access.
PHY detect failure started to show after commit 7590fc6f80ac ("net: mdio: Demote probed message to debug print") that removed coincidental delay between clock enable and bus access.
10ms is meant to match the time it takes to send the probed message over UART at 115200 bps. This might be a far overshoot.
Fixes: 23a890d493e3 ("net: mdio: Add the reset function for IPQ MDIO driver") Signed-off-by: Baruch Siach baruch.siach@siklu.com Reviewed-by: Andrew Lunn andrew@lunn.ch Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/mdio/mdio-ipq4019.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/drivers/net/mdio/mdio-ipq4019.c +++ b/drivers/net/mdio/mdio-ipq4019.c @@ -200,7 +200,11 @@ static int ipq_mdio_reset(struct mii_bus if (ret) return ret;
- return clk_prepare_enable(priv->mdio_clk); + ret = clk_prepare_enable(priv->mdio_clk); + if (ret == 0) + mdelay(10); + + return ret; }
static int ipq4019_mdio_probe(struct platform_device *pdev)
From: Florian Westphal fw@strlen.de
commit dad3bdeef45f81a6e90204bcc85360bb76eccec7 upstream.
Stateful objects can be updated from the control plane. The transaction logic allocates a temporary object for this purpose.

The ->init function was called for this object, so a plain kfree() leaks resources. We must call the ->destroy function of the object.

nft_obj_destroy() does this, but it also decrements the module refcount, which the update path doesn't increment.
To avoid special-casing the update object release, do module_get for the update case too and release it via nft_obj_destroy().
Fixes: d62d0ba97b58 ("netfilter: nf_tables: Introduce stateful object update operation") Cc: Fernando Fernandez Mancera ffmancera@riseup.net Signed-off-by: Florian Westphal fw@strlen.de Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/netfilter/nf_tables_api.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-)
--- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -6535,12 +6535,15 @@ static int nf_tables_updobj(const struct { struct nft_object *newobj; struct nft_trans *trans; - int err; + int err = -ENOMEM; + + if (!try_module_get(type->owner)) + return -ENOENT;
trans = nft_trans_alloc(ctx, NFT_MSG_NEWOBJ, sizeof(struct nft_trans_obj)); if (!trans) - return -ENOMEM; + goto err_trans;
newobj = nft_obj_init(ctx, type, attr); if (IS_ERR(newobj)) { @@ -6557,6 +6560,8 @@ static int nf_tables_updobj(const struct
err_free_trans: kfree(trans); +err_trans: + module_put(type->owner); return err; }
@@ -8169,7 +8174,7 @@ static void nft_obj_commit_update(struct if (obj->ops->update) obj->ops->update(obj, newobj);
- kfree(newobj); + nft_obj_destroy(&trans->ctx, newobj); }
static void nft_commit_release(struct nft_trans *trans) @@ -8914,7 +8919,7 @@ static int __nf_tables_abort(struct net break; case NFT_MSG_NEWOBJ: if (nft_trans_obj_update(trans)) { - kfree(nft_trans_obj_newobj(trans)); + nft_obj_destroy(&trans->ctx, nft_trans_obj_newobj(trans)); nft_trans_destroy(trans); } else { trans->ctx.table->use--;
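The invariant restored here is the usual get/put pairing: if the release path ends in nft_obj_destroy(), which does a module_put(), the path that created the temporary object must have taken a reference with try_module_get(). A toy counter model of that pairing (not the real module refcount API):

  #include <assert.h>
  #include <stdio.h>

  static int module_refcount;

  static int toy_try_module_get(void) { module_refcount++; return 1; }
  static void toy_module_put(void)    { module_refcount--; }

  static void toy_obj_destroy(void)
  {
      /* the object's ->destroy() callback would run here */
      toy_module_put();
  }

  int main(void)
  {
      if (!toy_try_module_get())   /* update path now takes a reference */
          return 1;
      toy_obj_destroy();           /* commit/abort path releases it */

      assert(module_refcount == 0);
      printf("module refcount balanced\n");
      return 0;
  }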
From: Fabio M. De Francesco fmdefrancesco@gmail.com
commit 7ff57e98fb78ad94edafbdc7435f2d745e9e6bb5 upstream.
smc_pnetid_by_table_ib() uses read_lock() and then it calls smc_pnet_apply_ib() which, in turn, calls mutex_lock(&smc_ib_devices.mutex).
read_lock() disables preemption. Therefore, the code acquires a mutex while in atomic context, leading to a sleep-in-atomic-context (SAC) bug.
Fix this bug by replacing the rwlock with a mutex.
Reported-and-tested-by: syzbot+4f322a6d84e991c38775@syzkaller.appspotmail.com Fixes: 64e28b52c7a6 ("net/smc: add pnet table namespace support") Confirmed-by: Tony Lu tonylu@linux.alibaba.com Signed-off-by: Fabio M. De Francesco fmdefrancesco@gmail.com Acked-by: Karsten Graul kgraul@linux.ibm.com Link: https://lore.kernel.org/r/20220223100252.22562-1-fmdefrancesco@gmail.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/smc/smc_pnet.c | 42 +++++++++++++++++++++--------------------- net/smc/smc_pnet.h | 2 +- 2 files changed, 22 insertions(+), 22 deletions(-)
--- a/net/smc/smc_pnet.c +++ b/net/smc/smc_pnet.c @@ -112,7 +112,7 @@ static int smc_pnet_remove_by_pnetid(str pnettable = &sn->pnettable;
/* remove table entry */ - write_lock(&pnettable->lock); + mutex_lock(&pnettable->lock); list_for_each_entry_safe(pnetelem, tmp_pe, &pnettable->pnetlist, list) { if (!pnet_name || @@ -130,7 +130,7 @@ static int smc_pnet_remove_by_pnetid(str rc = 0; } } - write_unlock(&pnettable->lock); + mutex_unlock(&pnettable->lock);
/* if this is not the initial namespace, stop here */ if (net != &init_net) @@ -191,7 +191,7 @@ static int smc_pnet_add_by_ndev(struct n sn = net_generic(net, smc_net_id); pnettable = &sn->pnettable;
- write_lock(&pnettable->lock); + mutex_lock(&pnettable->lock); list_for_each_entry_safe(pnetelem, tmp_pe, &pnettable->pnetlist, list) { if (pnetelem->type == SMC_PNET_ETH && !pnetelem->ndev && !strncmp(pnetelem->eth_name, ndev->name, IFNAMSIZ)) { @@ -205,7 +205,7 @@ static int smc_pnet_add_by_ndev(struct n break; } } - write_unlock(&pnettable->lock); + mutex_unlock(&pnettable->lock); return rc; }
@@ -223,7 +223,7 @@ static int smc_pnet_remove_by_ndev(struc sn = net_generic(net, smc_net_id); pnettable = &sn->pnettable;
- write_lock(&pnettable->lock); + mutex_lock(&pnettable->lock); list_for_each_entry_safe(pnetelem, tmp_pe, &pnettable->pnetlist, list) { if (pnetelem->type == SMC_PNET_ETH && pnetelem->ndev == ndev) { dev_put(pnetelem->ndev); @@ -236,7 +236,7 @@ static int smc_pnet_remove_by_ndev(struc break; } } - write_unlock(&pnettable->lock); + mutex_unlock(&pnettable->lock); return rc; }
@@ -371,7 +371,7 @@ static int smc_pnet_add_eth(struct smc_p
rc = -EEXIST; new_netdev = true; - write_lock(&pnettable->lock); + mutex_lock(&pnettable->lock); list_for_each_entry(tmp_pe, &pnettable->pnetlist, list) { if (tmp_pe->type == SMC_PNET_ETH && !strncmp(tmp_pe->eth_name, eth_name, IFNAMSIZ)) { @@ -381,9 +381,9 @@ static int smc_pnet_add_eth(struct smc_p } if (new_netdev) { list_add_tail(&new_pe->list, &pnettable->pnetlist); - write_unlock(&pnettable->lock); + mutex_unlock(&pnettable->lock); } else { - write_unlock(&pnettable->lock); + mutex_unlock(&pnettable->lock); kfree(new_pe); goto out_put; } @@ -444,7 +444,7 @@ static int smc_pnet_add_ib(struct smc_pn new_pe->ib_port = ib_port;
new_ibdev = true; - write_lock(&pnettable->lock); + mutex_lock(&pnettable->lock); list_for_each_entry(tmp_pe, &pnettable->pnetlist, list) { if (tmp_pe->type == SMC_PNET_IB && !strncmp(tmp_pe->ib_name, ib_name, IB_DEVICE_NAME_MAX)) { @@ -454,9 +454,9 @@ static int smc_pnet_add_ib(struct smc_pn } if (new_ibdev) { list_add_tail(&new_pe->list, &pnettable->pnetlist); - write_unlock(&pnettable->lock); + mutex_unlock(&pnettable->lock); } else { - write_unlock(&pnettable->lock); + mutex_unlock(&pnettable->lock); kfree(new_pe); } return (new_ibdev) ? 0 : -EEXIST; @@ -601,7 +601,7 @@ static int _smc_pnet_dump(struct net *ne pnettable = &sn->pnettable;
/* dump pnettable entries */ - read_lock(&pnettable->lock); + mutex_lock(&pnettable->lock); list_for_each_entry(pnetelem, &pnettable->pnetlist, list) { if (pnetid && !smc_pnet_match(pnetelem->pnet_name, pnetid)) continue; @@ -616,7 +616,7 @@ static int _smc_pnet_dump(struct net *ne break; } } - read_unlock(&pnettable->lock); + mutex_unlock(&pnettable->lock); return idx; }
@@ -860,7 +860,7 @@ int smc_pnet_net_init(struct net *net) struct smc_pnetids_ndev *pnetids_ndev = &sn->pnetids_ndev;
INIT_LIST_HEAD(&pnettable->pnetlist); - rwlock_init(&pnettable->lock); + mutex_init(&pnettable->lock); INIT_LIST_HEAD(&pnetids_ndev->list); rwlock_init(&pnetids_ndev->lock);
@@ -940,7 +940,7 @@ static int smc_pnet_find_ndev_pnetid_by_ sn = net_generic(net, smc_net_id); pnettable = &sn->pnettable;
- read_lock(&pnettable->lock); + mutex_lock(&pnettable->lock); list_for_each_entry(pnetelem, &pnettable->pnetlist, list) { if (pnetelem->type == SMC_PNET_ETH && ndev == pnetelem->ndev) { /* get pnetid of netdev device */ @@ -949,7 +949,7 @@ static int smc_pnet_find_ndev_pnetid_by_ break; } } - read_unlock(&pnettable->lock); + mutex_unlock(&pnettable->lock); return rc; }
@@ -1141,7 +1141,7 @@ int smc_pnetid_by_table_ib(struct smc_ib sn = net_generic(&init_net, smc_net_id); pnettable = &sn->pnettable;
- read_lock(&pnettable->lock); + mutex_lock(&pnettable->lock); list_for_each_entry(tmp_pe, &pnettable->pnetlist, list) { if (tmp_pe->type == SMC_PNET_IB && !strncmp(tmp_pe->ib_name, ib_name, IB_DEVICE_NAME_MAX) && @@ -1151,7 +1151,7 @@ int smc_pnetid_by_table_ib(struct smc_ib break; } } - read_unlock(&pnettable->lock); + mutex_unlock(&pnettable->lock);
return rc; } @@ -1170,7 +1170,7 @@ int smc_pnetid_by_table_smcd(struct smcd sn = net_generic(&init_net, smc_net_id); pnettable = &sn->pnettable;
- read_lock(&pnettable->lock); + mutex_lock(&pnettable->lock); list_for_each_entry(tmp_pe, &pnettable->pnetlist, list) { if (tmp_pe->type == SMC_PNET_IB && !strncmp(tmp_pe->ib_name, ib_name, IB_DEVICE_NAME_MAX)) { @@ -1179,7 +1179,7 @@ int smc_pnetid_by_table_smcd(struct smcd break; } } - read_unlock(&pnettable->lock); + mutex_unlock(&pnettable->lock);
return rc; } --- a/net/smc/smc_pnet.h +++ b/net/smc/smc_pnet.h @@ -29,7 +29,7 @@ struct smc_link_group; * @pnetlist: List of PNETIDs */ struct smc_pnettable { - rwlock_t lock; + struct mutex lock; struct list_head pnetlist; };
From: Hans de Goede hdegoede@redhat.com
commit 21d90aaee8d5c2a097ef41f1430d97661233ecc6 upstream.
The battery on the 2nd hand Surface 3 which I recently bought appears to not have a serial number programmed in. This results in any I2C reads from the registers containing the serial number failing with an I2C NACK.
This was causing mshw0011_bix() to fail, making the battery readings not work at all.
Ignore EREMOTEIO (I2C NACK) errors when retrieving the serial number and continue with an empty serial number to fix this.
Fixes: b1f81b496b0d ("platform/x86: surface3_power: MSHW0011 rev-eng implementation") BugLink: https://github.com/linux-surface/linux-surface/issues/608 Reviewed-by: Benjamin Tissoires benjamin.tissoires@redhat.com Reviewed-by: Maximilian Luz luzmaximilian@gmail.com Signed-off-by: Hans de Goede hdegoede@redhat.com Link: https://lore.kernel.org/r/20220224101848.7219-1-hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/platform/surface/surface3_power.c | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-)
--- a/drivers/platform/surface/surface3_power.c +++ b/drivers/platform/surface/surface3_power.c @@ -232,14 +232,21 @@ static int mshw0011_bix(struct mshw0011_ } bix->last_full_charg_capacity = ret;
- /* get serial number */ + /* + * Get serial number, on some devices (with unofficial replacement + * battery?) reading any of the serial number range addresses gets + * nacked in this case just leave the serial number empty. + */ ret = i2c_smbus_read_i2c_block_data(client, MSHW0011_BAT0_REG_SERIAL_NO, sizeof(buf), buf); - if (ret != sizeof(buf)) { + if (ret == -EREMOTEIO) { + /* no serial number available */ + } else if (ret != sizeof(buf)) { dev_err(&client->dev, "Error reading serial no: %d\n", ret); return ret; + } else { + snprintf(bix->serial, ARRAY_SIZE(bix->serial), "%3pE%6pE", buf + 7, buf); } - snprintf(bix->serial, ARRAY_SIZE(bix->serial), "%3pE%6pE", buf + 7, buf);
/* get cycle count */ ret = i2c_smbus_read_word_data(client, MSHW0011_BAT0_REG_CYCLE_CNT);
From: Dan Carpenter dan.carpenter@oracle.com
commit de7b2efacf4e83954aed3f029d347dfc0b7a4f49 upstream.
This test is checking whether we exited the list via break or not. However, if it did not exit via a break, then "node" does not point to a valid udp_tunnel_nic_shared_node struct. The check happens to work because of the way the structs are laid out: it is the equivalent of "if (info->shared->udp_tunnel_nic_info != dev)", which will always be true, but it's not the right way to test.
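As an aside, a minimal standalone sketch (plain C with toy intrusive-list macros, not the kernel's <linux/list.h>, and made-up names) illustrates why the loop cursor is not a real entry when the loop runs to completion, and why comparing it against the list head is the reliable test:

#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* simplified list_for_each_entry(): walk until the cursor wraps to head */
#define list_for_each_entry(pos, head, type, member)			\
	for (pos = container_of((head)->next, type, member);		\
	     &pos->member != (head);					\
	     pos = container_of(pos->member.next, type, member))

struct node { int dev; struct list_head list; };

int main(void)
{
	struct list_head devices = { &devices, &devices };	/* empty list */
	struct node *node;

	list_for_each_entry(node, &devices, struct node, list)
		if (node->dev == 42)
			break;

	/* No break happened, so "node" now aliases the list head minus
	 * offsetof(struct node, list); it is not a real struct node and
	 * node->dev must not be trusted.  The reliable "did we break?"
	 * test is whether the cursor wrapped back to the head.
	 */
	if (&node->list == &devices)
		printf("not found (cursor is the list head)\n");
	else
		printf("found dev %d\n", node->dev);
	return 0;
}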
Fixes: 74cc6d182d03 ("udp_tunnel: add the ability to share port tables") Signed-off-by: Dan Carpenter dan.carpenter@oracle.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/ipv4/udp_tunnel_nic.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/net/ipv4/udp_tunnel_nic.c +++ b/net/ipv4/udp_tunnel_nic.c @@ -846,7 +846,7 @@ udp_tunnel_nic_unregister(struct net_dev list_for_each_entry(node, &info->shared->devices, list) if (node->dev == dev) break; - if (node->dev != dev) + if (list_entry_is_head(node, &info->shared->devices, list)) return;
list_del(&node->list);
From: Yevgeny Kliteynik kliteyn@nvidia.com
commit e5b2bc30c21139ae10f0e56989389d0bc7b7b1d6 upstream.
During rule insertion, for each ICM memory chunk we also allocate shadow memory used for management. This includes the hw_ste, dr_ste and miss list per entry. Since the scale of these allocations is large, we noticed a performance hiccup once malloc and free are stressed. In extreme use cases, when ~1M chunks are freed at once, it might take up to 40 seconds to complete, up to the point where the kernel sees this as a self-detected stall on the CPU:
rcu: INFO: rcu_sched self-detected stall on CPU
To resolve this, increase the reuse of shadow memory. With this change, the time in the aforementioned use case dropped from ~40 seconds to ~8-10 seconds.
Fixes: 29cf8febd185 ("net/mlx5: DR, ICM pool memory allocator") Signed-off-by: Alex Vesker valex@nvidia.com Signed-off-by: Yevgeny Kliteynik kliteyn@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c | 109 ++++++---- drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h | 5 2 files changed, 79 insertions(+), 35 deletions(-)
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c @@ -136,37 +136,35 @@ static void dr_icm_pool_mr_destroy(struc kvfree(icm_mr); }
-static int dr_icm_chunk_ste_init(struct mlx5dr_icm_chunk *chunk) +static int dr_icm_buddy_get_ste_size(struct mlx5dr_icm_buddy_mem *buddy) { - chunk->ste_arr = kvzalloc(chunk->num_of_entries * - sizeof(chunk->ste_arr[0]), GFP_KERNEL); - if (!chunk->ste_arr) - return -ENOMEM; - - chunk->hw_ste_arr = kvzalloc(chunk->num_of_entries * - DR_STE_SIZE_REDUCED, GFP_KERNEL); - if (!chunk->hw_ste_arr) - goto out_free_ste_arr; - - chunk->miss_list = kvmalloc(chunk->num_of_entries * - sizeof(chunk->miss_list[0]), GFP_KERNEL); - if (!chunk->miss_list) - goto out_free_hw_ste_arr; + /* We support only one type of STE size, both for ConnectX-5 and later + * devices. Once the support for match STE which has a larger tag is + * added (32B instead of 16B), the STE size for devices later than + * ConnectX-5 needs to account for that. + */ + return DR_STE_SIZE_REDUCED; +}
- return 0; +static void dr_icm_chunk_ste_init(struct mlx5dr_icm_chunk *chunk, int offset) +{ + struct mlx5dr_icm_buddy_mem *buddy = chunk->buddy_mem; + int index = offset / DR_STE_SIZE;
-out_free_hw_ste_arr: - kvfree(chunk->hw_ste_arr); -out_free_ste_arr: - kvfree(chunk->ste_arr); - return -ENOMEM; + chunk->ste_arr = &buddy->ste_arr[index]; + chunk->miss_list = &buddy->miss_list[index]; + chunk->hw_ste_arr = buddy->hw_ste_arr + + index * dr_icm_buddy_get_ste_size(buddy); }
static void dr_icm_chunk_ste_cleanup(struct mlx5dr_icm_chunk *chunk) { - kvfree(chunk->miss_list); - kvfree(chunk->hw_ste_arr); - kvfree(chunk->ste_arr); + struct mlx5dr_icm_buddy_mem *buddy = chunk->buddy_mem; + + memset(chunk->hw_ste_arr, 0, + chunk->num_of_entries * dr_icm_buddy_get_ste_size(buddy)); + memset(chunk->ste_arr, 0, + chunk->num_of_entries * sizeof(chunk->ste_arr[0])); }
static enum mlx5dr_icm_type @@ -189,6 +187,44 @@ static void dr_icm_chunk_destroy(struct kvfree(chunk); }
+static int dr_icm_buddy_init_ste_cache(struct mlx5dr_icm_buddy_mem *buddy) +{ + int num_of_entries = + mlx5dr_icm_pool_chunk_size_to_entries(buddy->pool->max_log_chunk_sz); + + buddy->ste_arr = kvcalloc(num_of_entries, + sizeof(struct mlx5dr_ste), GFP_KERNEL); + if (!buddy->ste_arr) + return -ENOMEM; + + /* Preallocate full STE size on non-ConnectX-5 devices since + * we need to support both full and reduced with the same cache. + */ + buddy->hw_ste_arr = kvcalloc(num_of_entries, + dr_icm_buddy_get_ste_size(buddy), GFP_KERNEL); + if (!buddy->hw_ste_arr) + goto free_ste_arr; + + buddy->miss_list = kvmalloc(num_of_entries * sizeof(struct list_head), GFP_KERNEL); + if (!buddy->miss_list) + goto free_hw_ste_arr; + + return 0; + +free_hw_ste_arr: + kvfree(buddy->hw_ste_arr); +free_ste_arr: + kvfree(buddy->ste_arr); + return -ENOMEM; +} + +static void dr_icm_buddy_cleanup_ste_cache(struct mlx5dr_icm_buddy_mem *buddy) +{ + kvfree(buddy->ste_arr); + kvfree(buddy->hw_ste_arr); + kvfree(buddy->miss_list); +} + static int dr_icm_buddy_create(struct mlx5dr_icm_pool *pool) { struct mlx5dr_icm_buddy_mem *buddy; @@ -208,11 +244,19 @@ static int dr_icm_buddy_create(struct ml buddy->icm_mr = icm_mr; buddy->pool = pool;
+ if (pool->icm_type == DR_ICM_TYPE_STE) { + /* Reduce allocations by preallocating and reusing the STE structures */ + if (dr_icm_buddy_init_ste_cache(buddy)) + goto err_cleanup_buddy; + } + /* add it to the -start- of the list in order to search in it first */ list_add(&buddy->list_node, &pool->buddy_mem_list);
return 0;
+err_cleanup_buddy: + mlx5dr_buddy_cleanup(buddy); err_free_buddy: kvfree(buddy); free_mr: @@ -234,6 +278,9 @@ static void dr_icm_buddy_destroy(struct
mlx5dr_buddy_cleanup(buddy);
+ if (buddy->pool->icm_type == DR_ICM_TYPE_STE) + dr_icm_buddy_cleanup_ste_cache(buddy); + kvfree(buddy); }
@@ -261,26 +308,18 @@ dr_icm_chunk_create(struct mlx5dr_icm_po chunk->byte_size = mlx5dr_icm_pool_chunk_size_to_byte(chunk_size, pool->icm_type); chunk->seg = seg; + chunk->buddy_mem = buddy_mem_pool;
- if (pool->icm_type == DR_ICM_TYPE_STE && dr_icm_chunk_ste_init(chunk)) { - mlx5dr_err(pool->dmn, - "Failed to init ste arrays (order: %d)\n", - chunk_size); - goto out_free_chunk; - } + if (pool->icm_type == DR_ICM_TYPE_STE) + dr_icm_chunk_ste_init(chunk, offset);
buddy_mem_pool->used_memory += chunk->byte_size; - chunk->buddy_mem = buddy_mem_pool; INIT_LIST_HEAD(&chunk->chunk_list);
/* chunk now is part of the used_list */ list_add_tail(&chunk->chunk_list, &buddy_mem_pool->used_list);
return chunk; - -out_free_chunk: - kvfree(chunk); - return NULL; }
static bool dr_icm_pool_is_sync_required(struct mlx5dr_icm_pool *pool) --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h @@ -160,6 +160,11 @@ struct mlx5dr_icm_buddy_mem { * sync_ste command sets them free. */ struct list_head hot_list; + + /* Memory optimisation */ + struct mlx5dr_ste *ste_arr; + struct list_head *miss_list; + u8 *hw_ste_arr; };
int mlx5dr_buddy_init(struct mlx5dr_icm_buddy_mem *buddy,
From: Sukadev Bhattiprolu sukadev@linux.ibm.com
commit 277f2bb14361790a70e4b3c649e794b75a91a597 upstream.
If the client is unable to initiate a failover reset via the H_VIOCTL hcall, then it should schedule a failover reset as a last resort. Otherwise, there is no need for the last resort.
Fixes: 334c42414729 ("ibmvnic: improve failover sysfs entry") Reported-by: Cris Forno cforno12@outlook.com Signed-off-by: Sukadev Bhattiprolu sukadev@linux.ibm.com Signed-off-by: Dany Madden drt@linux.ibm.com Link: https://lore.kernel.org/r/20220221210545.115283-1-drt@linux.ibm.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/ibm/ibmvnic.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/drivers/net/ethernet/ibm/ibmvnic.c +++ b/drivers/net/ethernet/ibm/ibmvnic.c @@ -5919,10 +5919,14 @@ static ssize_t failover_store(struct dev be64_to_cpu(session_token)); rc = plpar_hcall_norets(H_VIOCTL, adapter->vdev->unit_address, H_SESSION_ERR_DETECTED, session_token, 0, 0); - if (rc) + if (rc) { netdev_err(netdev, "H_VIOCTL initiated failover failed, rc %ld\n", rc); + goto last_resort; + } + + return count;
last_resort: netdev_dbg(netdev, "Trying to send CRQ_CMD, the last resort\n");
From: Yevgeny Kliteynik kliteyn@nvidia.com
commit ffb0753b954763d94f52c901adfe58ed0d4005e6 upstream.
Currently SMFS allows adding a rule that matches on src/dst IP without matching on the full ethertype or ip_version, which is not supported by the HW. This patch fixes the issue by adding the same check that is done in DMFS.
Fixes: 26d688e33f88 ("net/mlx5: DR, Add Steering entry (STE) utilities") Signed-off-by: Yevgeny Kliteynik kliteyn@nvidia.com Reviewed-by: Alex Vesker valex@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c | 20 +----- drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c | 32 +++++++++- drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h | 10 +++ 3 files changed, 45 insertions(+), 17 deletions(-)
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c @@ -13,18 +13,6 @@ static bool dr_mask_is_dmac_set(struct m return (spec->dmac_47_16 || spec->dmac_15_0); }
-static bool dr_mask_is_src_addr_set(struct mlx5dr_match_spec *spec) -{ - return (spec->src_ip_127_96 || spec->src_ip_95_64 || - spec->src_ip_63_32 || spec->src_ip_31_0); -} - -static bool dr_mask_is_dst_addr_set(struct mlx5dr_match_spec *spec) -{ - return (spec->dst_ip_127_96 || spec->dst_ip_95_64 || - spec->dst_ip_63_32 || spec->dst_ip_31_0); -} - static bool dr_mask_is_l3_base_set(struct mlx5dr_match_spec *spec) { return (spec->ip_protocol || spec->frag || spec->tcp_flags || @@ -480,11 +468,11 @@ static int dr_matcher_set_ste_builders(s &mask, inner, rx);
if (outer_ipv == DR_RULE_IPV6) { - if (dr_mask_is_dst_addr_set(&mask.outer)) + if (DR_MASK_IS_DST_IP_SET(&mask.outer)) mlx5dr_ste_build_eth_l3_ipv6_dst(ste_ctx, &sb[idx++], &mask, inner, rx);
- if (dr_mask_is_src_addr_set(&mask.outer)) + if (DR_MASK_IS_SRC_IP_SET(&mask.outer)) mlx5dr_ste_build_eth_l3_ipv6_src(ste_ctx, &sb[idx++], &mask, inner, rx);
@@ -580,11 +568,11 @@ static int dr_matcher_set_ste_builders(s &mask, inner, rx);
if (inner_ipv == DR_RULE_IPV6) { - if (dr_mask_is_dst_addr_set(&mask.inner)) + if (DR_MASK_IS_DST_IP_SET(&mask.inner)) mlx5dr_ste_build_eth_l3_ipv6_dst(ste_ctx, &sb[idx++], &mask, inner, rx);
- if (dr_mask_is_src_addr_set(&mask.inner)) + if (DR_MASK_IS_SRC_IP_SET(&mask.inner)) mlx5dr_ste_build_eth_l3_ipv6_src(ste_ctx, &sb[idx++], &mask, inner, rx);
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c @@ -602,12 +602,34 @@ int mlx5dr_ste_set_action_decap_l3_list( used_hw_action_num); }
+static int dr_ste_build_pre_check_spec(struct mlx5dr_domain *dmn, + struct mlx5dr_match_spec *spec) +{ + if (spec->ip_version) { + if (spec->ip_version != 0xf) { + mlx5dr_err(dmn, + "Partial ip_version mask with src/dst IP is not supported\n"); + return -EINVAL; + } + } else if (spec->ethertype != 0xffff && + (DR_MASK_IS_SRC_IP_SET(spec) || DR_MASK_IS_DST_IP_SET(spec))) { + mlx5dr_err(dmn, + "Partial/no ethertype mask with src/dst IP is not supported\n"); + return -EINVAL; + } + + return 0; +} + int mlx5dr_ste_build_pre_check(struct mlx5dr_domain *dmn, u8 match_criteria, struct mlx5dr_match_param *mask, struct mlx5dr_match_param *value) { - if (!value && (match_criteria & DR_MATCHER_CRITERIA_MISC)) { + if (value) + return 0; + + if (match_criteria & DR_MATCHER_CRITERIA_MISC) { if (mask->misc.source_port && mask->misc.source_port != 0xffff) { mlx5dr_err(dmn, "Partial mask source_port is not supported\n"); @@ -621,6 +643,14 @@ int mlx5dr_ste_build_pre_check(struct ml } }
+ if ((match_criteria & DR_MATCHER_CRITERIA_OUTER) && + dr_ste_build_pre_check_spec(dmn, &mask->outer)) + return -EINVAL; + + if ((match_criteria & DR_MATCHER_CRITERIA_INNER) && + dr_ste_build_pre_check_spec(dmn, &mask->inner)) + return -EINVAL; + return 0; }
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h @@ -739,6 +739,16 @@ struct mlx5dr_match_param { (_misc3)->icmpv4_code || \ (_misc3)->icmpv4_header_data)
+#define DR_MASK_IS_SRC_IP_SET(_spec) ((_spec)->src_ip_127_96 || \ + (_spec)->src_ip_95_64 || \ + (_spec)->src_ip_63_32 || \ + (_spec)->src_ip_31_0) + +#define DR_MASK_IS_DST_IP_SET(_spec) ((_spec)->dst_ip_127_96 || \ + (_spec)->dst_ip_95_64 || \ + (_spec)->dst_ip_63_32 || \ + (_spec)->dst_ip_31_0) + struct mlx5dr_esw_caps { u64 drop_icm_address_rx; u64 drop_icm_address_tx;
From: Maor Gottlieb maorg@nvidia.com
commit b645e57debca846f51b3209907546ea857ddd3f5 upstream.
Add the missing call to up_write_ref_node(), which releases the semaphore in case the FTE doesn't have destinations, such as in the drop rule case.
Fixes: 465e7baab6d9 ("net/mlx5: Fix deletion of duplicate rules") Signed-off-by: Maor Gottlieb maorg@nvidia.com Reviewed-by: Mark Bloch mbloch@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/mellanox/mlx5/core/fs_core.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c @@ -2073,6 +2073,8 @@ void mlx5_del_flow_rules(struct mlx5_flo fte->node.del_hw_func = NULL; up_write_ref_node(&fte->node, false); tree_put_node(&fte->node, false); + } else { + up_write_ref_node(&fte->node, false); } kfree(handle); }
From: Ariel Levkovich lariel@nvidia.com
commit 07666c75ad17d7389b18ac0235c8cf41e1504ea8 upstream.
The match metadata support check returns false for the ecpf device. However, this support does exist for the ecpf, and therefore this limitation should be removed to allow features such as stacked devices and internal port offloads to be supported.
Fixes: 92ab1eb392c6 ("net/mlx5: E-Switch, Enable vport metadata matching if firmware supports it") Signed-off-by: Ariel Levkovich lariel@nvidia.com Reviewed-by: Maor Dickman maord@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 4 ---- 1 file changed, 4 deletions(-)
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c @@ -2838,10 +2838,6 @@ bool mlx5_esw_vport_match_metadata_suppo if (!MLX5_CAP_ESW_FLOWTABLE(esw->dev, flow_source)) return false;
- if (mlx5_core_is_ecpf_esw_manager(esw->dev) || - mlx5_ecpf_vport_exists(esw->dev)) - return false; - return true; }
From: Yevgeny Kliteynik kliteyn@nvidia.com
commit ecd9c5cd46e013659e2fad433057bad1ba66888e upstream.
When deciding whether to start syncing and actually free all the "hot" ICM chunks, we need to consider the type of the ICM chunks that we're dealing with. For instance, the amount of available ICM for MODIFY_ACTION is significantly lower than the usual STE ICM, so the threshold should account for that - otherwise we can deplete MODIFY_ACTION memory just by creating and deleting the same modify header action in a continuous loop.
This patch replaces the hard-coded threshold with a dynamic value.
Fixes: 1c58651412bb ("net/mlx5: DR, ICM memory pools sync optimization") Signed-off-by: Yevgeny Kliteynik kliteyn@nvidia.com Reviewed-by: Alex Vesker valex@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c | 11 ++++++---- 1 file changed, 7 insertions(+), 4 deletions(-)
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c @@ -4,7 +4,6 @@ #include "dr_types.h"
#define DR_ICM_MODIFY_HDR_ALIGN_BASE 64 -#define DR_ICM_SYNC_THRESHOLD_POOL (64 * 1024 * 1024)
struct mlx5dr_icm_pool { enum mlx5dr_icm_type icm_type; @@ -324,10 +323,14 @@ dr_icm_chunk_create(struct mlx5dr_icm_po
static bool dr_icm_pool_is_sync_required(struct mlx5dr_icm_pool *pool) { - if (pool->hot_memory_size > DR_ICM_SYNC_THRESHOLD_POOL) - return true; + int allow_hot_size;
- return false; + /* sync when hot memory reaches half of the pool size */ + allow_hot_size = + mlx5dr_icm_pool_chunk_size_to_byte(pool->max_log_chunk_sz, + pool->icm_type) / 2; + + return pool->hot_memory_size > allow_hot_size; }
static int dr_icm_pool_sync_all_buddy_pools(struct mlx5dr_icm_pool *pool)
From: Maor Dickman maord@nvidia.com
commit fdc18e4e4bded2a08638cdcd22dc087a64b9ddad upstream.
Currently, offloading a rule on a bareudp device requires a tunnel key in order to match on mpls fields, and without it the mpls fields are ignored. This is incorrect because a udp tunnel doesn't have a key to match on.
Fix this by returning an error in case the flow matches on the tunnel key.
Fixes: 72046a91d134 ("net/mlx5e: Allow to match on mpls parameters") Signed-off-by: Maor Dickman maord@nvidia.com Reviewed-by: Roi Dayan roid@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_mplsoudp.c | 28 ++++------- 1 file changed, 11 insertions(+), 17 deletions(-)
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_mplsoudp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_mplsoudp.c @@ -60,37 +60,31 @@ static int parse_tunnel(struct mlx5e_pri void *headers_v) { struct flow_rule *rule = flow_cls_offload_flow_rule(f); - struct flow_match_enc_keyid enc_keyid; struct flow_match_mpls match; void *misc2_c; void *misc2_v;
- misc2_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, - misc_parameters_2); - misc2_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, - misc_parameters_2); - - if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_MPLS)) - return 0; - - if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID)) - return 0; - - flow_rule_match_enc_keyid(rule, &enc_keyid); - - if (!enc_keyid.mask->keyid) - return 0; - if (!MLX5_CAP_ETH(priv->mdev, tunnel_stateless_mpls_over_udp) && !(MLX5_CAP_GEN(priv->mdev, flex_parser_protocols) & MLX5_FLEX_PROTO_CW_MPLS_UDP)) return -EOPNOTSUPP;
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID)) + return -EOPNOTSUPP; + + if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_MPLS)) + return 0; + flow_rule_match_mpls(rule, &match);
/* Only support matching the first LSE */ if (match.mask->used_lses != 1) return -EOPNOTSUPP;
+ misc2_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, + misc_parameters_2); + misc2_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, + misc_parameters_2); + MLX5_SET(fte_match_set_misc2, misc2_c, outer_first_mpls_over_udp.mpls_label, match.mask->ls[0].mpls_label);
From: Tariq Toukan tariqt@nvidia.com
commit 7eaf1f37b8817c608c4e959d69986ef459d345cd upstream.
For RX TLS device-offloaded packets, the HW spec guarantees checksum validation for the offloaded packets, but does not define whether the CQE.checksum field matches the original packet (ciphertext) or the decrypted one (plaintext). This latitude allows architectural improvements between generations of chips, resulting in different decisions regarding the value type of CQE.checksum.
Hence, for these packets, the device driver should not make use of this CQE field. Here we block CHECKSUM_COMPLETE usage for RX TLS device-offloaded packets, and use CHECKSUM_UNNECESSARY instead.
The value of the packet's tcp_hdr.csum is not modified by the HW, and it always matches the original ciphertext.
Fixes: 1182f3659357 ("net/mlx5e: kTLS, Add kTLS RX HW offload support") Signed-off-by: Tariq Toukan tariqt@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c @@ -1348,7 +1348,8 @@ static inline void mlx5e_handle_csum(str }
/* True when explicitly set via priv flag, or XDP prog is loaded */ - if (test_bit(MLX5E_RQ_STATE_NO_CSUM_COMPLETE, &rq->state)) + if (test_bit(MLX5E_RQ_STATE_NO_CSUM_COMPLETE, &rq->state) || + get_cqe_tls_offload(cqe)) goto csum_unnecessary;
/* CQE csum doesn't cover padding octets in short ethernet
From: Yevgeny Kliteynik kliteyn@nvidia.com
commit 0aec12d97b2036af0946e3d582144739860ac07b upstream.
When adding a rule with 32 destinations, we hit the following out-of-bounds access issue:
BUG: KASAN: slab-out-of-bounds in mlx5_cmd_dr_create_fte+0x18ee/0x1e70
This patch fixes the issue both by increasing the allocated buffers to accommodate the needed actions and by checking the number of actions, to prevent the issue when a rule with too many actions is provided.
Fixes: 1ffd498901c1 ("net/mlx5: DR, Increase supported num of actions to 32") Signed-off-by: Yevgeny Kliteynik kliteyn@nvidia.com Reviewed-by: Alex Vesker valex@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c | 33 +++++++++++---- 1 file changed, 26 insertions(+), 7 deletions(-)
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c @@ -222,7 +222,11 @@ static bool contain_vport_reformat_actio dst->dest_attr.vport.flags & MLX5_FLOW_DEST_VPORT_REFORMAT_ID; }
-#define MLX5_FLOW_CONTEXT_ACTION_MAX 32 +/* We want to support a rule with 32 destinations, which means we need to + * account for 32 destinations plus usually a counter plus one more action + * for a multi-destination flow table. + */ +#define MLX5_FLOW_CONTEXT_ACTION_MAX 34 static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns, struct mlx5_flow_table *ft, struct mlx5_flow_group *group, @@ -392,9 +396,9 @@ static int mlx5_cmd_dr_create_fte(struct enum mlx5_flow_destination_type type = dst->dest_attr.type; u32 id;
- if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX || - num_term_actions >= MLX5_FLOW_CONTEXT_ACTION_MAX) { - err = -ENOSPC; + if (fs_dr_num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX || + num_term_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) { + err = -EOPNOTSUPP; goto free_actions; }
@@ -464,8 +468,9 @@ static int mlx5_cmd_dr_create_fte(struct MLX5_FLOW_DESTINATION_TYPE_COUNTER) continue;
- if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) { - err = -ENOSPC; + if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX || + fs_dr_num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) { + err = -EOPNOTSUPP; goto free_actions; }
@@ -485,14 +490,28 @@ static int mlx5_cmd_dr_create_fte(struct params.match_sz = match_sz; params.match_buf = (u64 *)fte->val; if (num_term_actions == 1) { - if (term_actions->reformat) + if (term_actions->reformat) { + if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) { + err = -EOPNOTSUPP; + goto free_actions; + } actions[num_actions++] = term_actions->reformat; + }
+ if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) { + err = -EOPNOTSUPP; + goto free_actions; + } actions[num_actions++] = term_actions->dest; } else if (num_term_actions > 1) { bool ignore_flow_level = !!(fte->action.flags & FLOW_ACT_IGNORE_FLOW_LEVEL);
+ if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX || + fs_dr_num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) { + err = -EOPNOTSUPP; + goto free_actions; + } tmp_action = mlx5dr_action_create_mult_dest_tbl(domain, term_actions, num_term_actions,
From: Maher Sanalla msanalla@nvidia.com
commit 7f839965b2d77e1926ad08b23c51d60988f10a99 upstream.
Currently, the log_max_qp value depends on what the FW reports as its max capability. In reality, due to a bug, some FWs report a value greater than 17 even though they don't support log_max_qp > 17.
This FW issue led the driver to exhaust memory on startup. Thus, the log_max_qp value is capped at 17 regardless of what the FW reports, as it was before the cited commit.
Fixes: f79a609ea6bf ("net/mlx5: Update log_max_qp value to FW max capability") Signed-off-by: Maher Sanalla msanalla@nvidia.com Reviewed-by: Avihai Horon avihaih@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/mellanox/mlx5/core/main.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c @@ -510,7 +510,7 @@ static int handle_hca_cap(struct mlx5_co
/* Check log_max_qp from HCA caps to set in current profile */ if (prof->log_max_qp == LOG_MAX_SUPPORTED_QPS) { - prof->log_max_qp = MLX5_CAP_GEN_MAX(dev, log_max_qp); + prof->log_max_qp = min_t(u8, 17, MLX5_CAP_GEN_MAX(dev, log_max_qp)); } else if (MLX5_CAP_GEN_MAX(dev, log_max_qp) < prof->log_max_qp) { mlx5_core_warn(dev, "log_max_qp value in current profile is %d, changing it to HCA capability limit (%d)\n", prof->log_max_qp,
From: Lama Kayal lkayal@nvidia.com
commit 5ee02b7a800654ff9549807bcf0b4c9fd5cf25f9 upstream.
Add the mistakenly missing increment of the count variable when looping over the output buffer in mlx5e_self_test().
This resolves the issue of garbage values being output when querying the self test via ethtool.
before:
$ ethtool -t eth2
The test result is PASS
The test extra info:
 Link Test        0
 Speed Test       1768697188
 Health Test      758528120
 Loopback Test    3288687
after:
$ ethtool -t eth2
The test result is PASS
The test extra info:
 Link Test        0
 Speed Test       0
 Health Test      0
 Loopback Test    0
Fixes: 7990b1b5e8bd ("net/mlx5e: loopback test is not supported in switchdev mode") Signed-off-by: Lama Kayal lkayal@nvidia.com Reviewed-by: Gal Pressman gal@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c @@ -334,6 +334,7 @@ void mlx5e_self_test(struct net_device * netdev_info(ndev, "\t[%d] %s start..\n", i, st.name); buf[count] = st.st_func(priv); netdev_info(ndev, "\t[%d] %s end: result(%lld)\n", i, st.name, buf[count]); + count++; }
mutex_unlock(&priv->state_lock);
From: Zhou Qingyang zhou1615@umn.edu
[ Upstream commit ab3824427b848da10e9fe2727f035bbeecae6ff4 ]
In zynq_qspi_exec_mem_op(), the return value of kzalloc() is directly used in memset() without a NULL check, which could lead to a NULL pointer dereference if kzalloc() fails.
Fix this bug by adding a check of tmpbuf.
This bug was found by a static analyzer. The analysis employs differential checking to identify inconsistent security operations (e.g., checks or kfrees) between two code paths and confirms that the inconsistent operations are not recovered in the current function or the callers, so they constitute bugs.
Note that, as a bug found by static analysis, it can be a false positive or hard to trigger. Multiple researchers have cross-reviewed the bug.
Builds with CONFIG_SPI_ZYNQ_QSPI=m show no new warnings, and our static analyzer no longer warns about this code.
Fixes: 67dca5e580f1 ("spi: spi-mem: Add support for Zynq QSPI controller") Signed-off-by: Zhou Qingyang zhou1615@umn.edu Link: https://lore.kernel.org/r/20211130172253.203700-1-zhou1615@umn.edu Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/spi/spi-zynq-qspi.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/spi/spi-zynq-qspi.c b/drivers/spi/spi-zynq-qspi.c index cfa222c9bd5e7..78f31b61a2aac 100644 --- a/drivers/spi/spi-zynq-qspi.c +++ b/drivers/spi/spi-zynq-qspi.c @@ -570,6 +570,9 @@ static int zynq_qspi_exec_mem_op(struct spi_mem *mem,
if (op->dummy.nbytes) { tmpbuf = kzalloc(op->dummy.nbytes, GFP_KERNEL); + if (!tmpbuf) + return -ENOMEM; + memset(tmpbuf, 0xff, op->dummy.nbytes); reinit_completion(&xqspi->data_completion); xqspi->txbuf = tmpbuf;
From: Pali Rohár pali@kernel.org
[ Upstream commit c49ae619905eebd3f54598a84e4cd2bd58ba8fe9 ]
Jan reported that on Turris Omnia (Armada 385), no PCIe devices were detected after upgrading from v5.16.1 to v5.16.3 and identified the cause as the backport of 91a8d79fc797 ("PCI: mvebu: Fix configuring secondary bus of PCIe Root Port via emulated bridge"), which appeared in v5.17-rc1.
91a8d79fc797 was incorrectly applied from the mailing list patch [1] to the linux git repository [2], probably due to merge conflicts being resolved incorrectly. Fix it now.
[1] https://lore.kernel.org/r/20211125124605.25915-12-pali@kernel.org [2] https://git.kernel.org/linus/91a8d79fc797
[bhelgaas: commit log] BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=215540 Fixes: 91a8d79fc797 ("PCI: mvebu: Fix configuring secondary bus of PCIe Root Port via emulated bridge") Link: https://lore.kernel.org/r/20220214110228.25825-1-pali@kernel.org Link: https://lore.kernel.org/r/20220127234917.GA150851@bhelgaas Reported-by: Jan Palus jpalus@fastmail.com Signed-off-by: Pali Rohár pali@kernel.org Signed-off-by: Bjorn Helgaas bhelgaas@google.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pci/controller/pci-mvebu.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c index 357e9a293edf7..2a3bf82aa4e26 100644 --- a/drivers/pci/controller/pci-mvebu.c +++ b/drivers/pci/controller/pci-mvebu.c @@ -1288,7 +1288,8 @@ static int mvebu_pcie_probe(struct platform_device *pdev) * indirectly via kernel emulated PCI bridge driver. */ mvebu_pcie_setup_hw(port); - mvebu_pcie_set_local_dev_nr(port, 0); + mvebu_pcie_set_local_dev_nr(port, 1); + mvebu_pcie_set_local_bus_nr(port, 0); }
pcie->nports = i;
From: Samuel Holland samuel@sholland.org
[ Upstream commit 7920af5c826cb4a7ada1ae26fdd317642805adc2 ]
With v2 hardware, an IRQ can be configured to trigger on both edges via a bit in the int_bothedge register. Currently, the driver sets this bit when changing the trigger type to IRQ_TYPE_EDGE_BOTH, but fails to reset this bit if the trigger type is later changed to something else. This causes spurious IRQs, and when using gpio-keys with wakeup-event-action set to EV_ACT_(DE)ASSERTED, those IRQs translate into spurious wakeups.
Fixes: 3bcbd1a85b68 ("gpio/rockchip: support next version gpio controller") Reported-by: Guillaume Savaton guillaume@baierouge.fr Tested-by: Guillaume Savaton guillaume@baierouge.fr Signed-off-by: Samuel Holland samuel@sholland.org Signed-off-by: Bartosz Golaszewski brgl@bgdev.pl Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpio/gpio-rockchip.c | 56 +++++++++++++++++++----------------- 1 file changed, 29 insertions(+), 27 deletions(-)
diff --git a/drivers/gpio/gpio-rockchip.c b/drivers/gpio/gpio-rockchip.c index ce63cbd14d69a..24155c038f6d0 100644 --- a/drivers/gpio/gpio-rockchip.c +++ b/drivers/gpio/gpio-rockchip.c @@ -410,10 +410,8 @@ static int rockchip_irq_set_type(struct irq_data *d, unsigned int type) level = rockchip_gpio_readl(bank, bank->gpio_regs->int_type); polarity = rockchip_gpio_readl(bank, bank->gpio_regs->int_polarity);
- switch (type) { - case IRQ_TYPE_EDGE_BOTH: + if (type == IRQ_TYPE_EDGE_BOTH) { if (bank->gpio_type == GPIO_TYPE_V2) { - bank->toggle_edge_mode &= ~mask; rockchip_gpio_writel_bit(bank, d->hwirq, 1, bank->gpio_regs->int_bothedge); goto out; @@ -431,30 +429,34 @@ static int rockchip_irq_set_type(struct irq_data *d, unsigned int type) else polarity |= mask; } - break; - case IRQ_TYPE_EDGE_RISING: - bank->toggle_edge_mode &= ~mask; - level |= mask; - polarity |= mask; - break; - case IRQ_TYPE_EDGE_FALLING: - bank->toggle_edge_mode &= ~mask; - level |= mask; - polarity &= ~mask; - break; - case IRQ_TYPE_LEVEL_HIGH: - bank->toggle_edge_mode &= ~mask; - level &= ~mask; - polarity |= mask; - break; - case IRQ_TYPE_LEVEL_LOW: - bank->toggle_edge_mode &= ~mask; - level &= ~mask; - polarity &= ~mask; - break; - default: - ret = -EINVAL; - goto out; + } else { + if (bank->gpio_type == GPIO_TYPE_V2) { + rockchip_gpio_writel_bit(bank, d->hwirq, 0, + bank->gpio_regs->int_bothedge); + } else { + bank->toggle_edge_mode &= ~mask; + } + switch (type) { + case IRQ_TYPE_EDGE_RISING: + level |= mask; + polarity |= mask; + break; + case IRQ_TYPE_EDGE_FALLING: + level |= mask; + polarity &= ~mask; + break; + case IRQ_TYPE_LEVEL_HIGH: + level &= ~mask; + polarity |= mask; + break; + case IRQ_TYPE_LEVEL_LOW: + level &= ~mask; + polarity &= ~mask; + break; + default: + ret = -EINVAL; + goto out; + } }
rockchip_gpio_writel(bank, level, bank->gpio_regs->int_type);
From: Prasad Kumpatla quic_pkumpatl@quicinc.com
[ Upstream commit d04ad245d67a3991dfea5e108e4c452c2ab39bac ]
With the existing logic where clear_ack is true (the HW doesn't support auto clear for the ICR), the interrupt clear register reset is not handled properly. Due to this, only the first interrupts get processed properly and further interrupts are blocked because the interrupt clear register is not reset.
Example of the issue case, where ack_invert is false and clear_ack is true:
Say that by default ISR = 0x00 and ICR = 0x00, and the ISR is triggered with 2 interrupts, making ISR = 0x11.
Step 1: Say the ISR is set to 0x11 (store status_buf = ISR). The ISR needs to be cleared with the help of the ICR once the interrupt is processed.
Step 2: Write ICR = 0x11 (status_buf); this will clear the ISR to 0x00.
Step 3: Issue - In the existing code, the ICR is written with ICR = ~(status_buf), i.e. ICR = 0xEE. This blocks all interrupts from being raised except for interrupts 0 and 4. The expectation here is to reset the ICR, which will unblock all the interrupts.
if (chip->clear_ack) {
        if (chip->ack_invert && !ret)
                ........
        else if (!ret)
                ret = regmap_write(map, reg, ~data->status_buf[i]);
So writing 0 (or 0xff when ack_invert is true) should have no effect, other than clearing the ACKs just set.
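For illustration, a tiny standalone program (the register behaviour is only modelled here; this is not regmap code) walks through the same arithmetic and shows why the old reset value keeps most lines masked while writing 0 does not:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint8_t isr = 0x11;		/* step 1: two interrupt lines pending */
	uint8_t status_buf = isr;	/* latched status                      */

	uint8_t ack = status_buf;		  /* step 2: ICR = 0x11 clears the ISR  */
	uint8_t old_reset = (uint8_t)~status_buf; /* step 3 (old code): ICR = 0xEE      */
	uint8_t new_reset = 0x00;		  /* fixed code: ICR fully reset        */

	printf("ack write:   0x%02x\n", ack);
	printf("old reset:   0x%02x (every line except 0 and 4 stays masked)\n",
	       old_reset);
	printf("fixed reset: 0x%02x (no line stays masked)\n", new_reset);
	return 0;
}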
Fixes: 3a6f0fb7b8eb ("regmap: irq: Add support to clear ack registers") Signed-off-by: Prasad Kumpatla quic_pkumpatl@quicinc.com Reviewed-by: Charles Keepax ckeepax@opensource.cirrus.com Tested-by: Marek Szyprowski m.szyprowski@samsung.com Link: https://lore.kernel.org/r/20220217085007.30218-1-quic_pkumpatl@quicinc.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/base/regmap/regmap-irq.c | 20 ++++++-------------- 1 file changed, 6 insertions(+), 14 deletions(-)
diff --git a/drivers/base/regmap/regmap-irq.c b/drivers/base/regmap/regmap-irq.c index d2656581a6085..4a446259a184e 100644 --- a/drivers/base/regmap/regmap-irq.c +++ b/drivers/base/regmap/regmap-irq.c @@ -189,11 +189,9 @@ static void regmap_irq_sync_unlock(struct irq_data *data) ret = regmap_write(map, reg, d->mask_buf[i]); if (d->chip->clear_ack) { if (d->chip->ack_invert && !ret) - ret = regmap_write(map, reg, - d->mask_buf[i]); + ret = regmap_write(map, reg, UINT_MAX); else if (!ret) - ret = regmap_write(map, reg, - ~d->mask_buf[i]); + ret = regmap_write(map, reg, 0); } if (ret != 0) dev_err(d->map->dev, "Failed to ack 0x%x: %d\n", @@ -556,11 +554,9 @@ static irqreturn_t regmap_irq_thread(int irq, void *d) data->status_buf[i]); if (chip->clear_ack) { if (chip->ack_invert && !ret) - ret = regmap_write(map, reg, - data->status_buf[i]); + ret = regmap_write(map, reg, UINT_MAX); else if (!ret) - ret = regmap_write(map, reg, - ~data->status_buf[i]); + ret = regmap_write(map, reg, 0); } if (ret != 0) dev_err(map->dev, "Failed to ack 0x%x: %d\n", @@ -817,13 +813,9 @@ int regmap_add_irq_chip_fwnode(struct fwnode_handle *fwnode, d->status_buf[i] & d->mask_buf[i]); if (chip->clear_ack) { if (chip->ack_invert && !ret) - ret = regmap_write(map, reg, - (d->status_buf[i] & - d->mask_buf[i])); + ret = regmap_write(map, reg, UINT_MAX); else if (!ret) - ret = regmap_write(map, reg, - ~(d->status_buf[i] & - d->mask_buf[i])); + ret = regmap_write(map, reg, 0); } if (ret != 0) { dev_err(map->dev, "Failed to ack 0x%x: %d\n",
From: Eric Dumazet edumazet@google.com
[ Upstream commit 42f67eea3ba36cef2dce2e853de6ddcb2e89eb39 ]
Move sk_is_tcp() to include/net/sock.h and use it where we can.
Signed-off-by: Eric Dumazet edumazet@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/skmsg.h | 6 ------ include/net/sock.h | 5 +++++ net/core/skbuff.c | 6 ++---- net/core/sock.c | 6 ++---- 4 files changed, 9 insertions(+), 14 deletions(-)
diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h index 584d94be9c8b0..18a717fe62eb0 100644 --- a/include/linux/skmsg.h +++ b/include/linux/skmsg.h @@ -507,12 +507,6 @@ static inline bool sk_psock_strp_enabled(struct sk_psock *psock) return !!psock->saved_data_ready; }
-static inline bool sk_is_tcp(const struct sock *sk) -{ - return sk->sk_type == SOCK_STREAM && - sk->sk_protocol == IPPROTO_TCP; -} - static inline bool sk_is_udp(const struct sock *sk) { return sk->sk_type == SOCK_DGRAM && diff --git a/include/net/sock.h b/include/net/sock.h index d47e9658da285..4e575735563a7 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -2654,6 +2654,11 @@ static inline void skb_setup_tx_timestamp(struct sk_buff *skb, __u16 tsflags) &skb_shinfo(skb)->tskey); }
+static inline bool sk_is_tcp(const struct sock *sk) +{ + return sk->sk_type == SOCK_STREAM && sk->sk_protocol == IPPROTO_TCP; +} + /** * sk_eat_skb - Release a skb if it is no longer needed * @sk: socket to eat this skb from diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 92edd03c2cd1c..2b2e19afef158 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -4849,8 +4849,7 @@ static void __skb_complete_tx_timestamp(struct sk_buff *skb, serr->header.h4.iif = skb->dev ? skb->dev->ifindex : 0; if (sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID) { serr->ee.ee_data = skb_shinfo(skb)->tskey; - if (sk->sk_protocol == IPPROTO_TCP && - sk->sk_type == SOCK_STREAM) + if (sk_is_tcp(sk)) serr->ee.ee_data -= sk->sk_tskey; }
@@ -4919,8 +4918,7 @@ void __skb_tstamp_tx(struct sk_buff *orig_skb, if (tsonly) { #ifdef CONFIG_INET if ((sk->sk_tsflags & SOF_TIMESTAMPING_OPT_STATS) && - sk->sk_protocol == IPPROTO_TCP && - sk->sk_type == SOCK_STREAM) { + sk_is_tcp(sk)) { skb = tcp_get_timestamping_opt_stats(sk, orig_skb, ack_skb); opt_stats = true; diff --git a/net/core/sock.c b/net/core/sock.c index 7de234693a3bf..d18e4ffd84820 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -874,8 +874,7 @@ int sock_set_timestamping(struct sock *sk, int optname,
if (val & SOF_TIMESTAMPING_OPT_ID && !(sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID)) { - if (sk->sk_protocol == IPPROTO_TCP && - sk->sk_type == SOCK_STREAM) { + if (sk_is_tcp(sk)) { if ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)) return -EINVAL; @@ -1372,8 +1371,7 @@ int sock_setsockopt(struct socket *sock, int level, int optname,
case SO_ZEROCOPY: if (sk->sk_family == PF_INET || sk->sk_family == PF_INET6) { - if (!((sk->sk_type == SOCK_STREAM && - sk->sk_protocol == IPPROTO_TCP) || + if (!(sk_is_tcp(sk) || (sk->sk_type == SOCK_DGRAM && sk->sk_protocol == IPPROTO_UDP))) ret = -ENOTSUPP;
From: Eric Dumazet edumazet@google.com
[ Upstream commit a1cdec57e03a1352e92fbbe7974039dda4efcec0 ]
UDP sendmsg() can be lockless; this is causing all kinds of data races.
This patch converts sk->sk_tskey to an atomic_t to remove one of these races.
BUG: KCSAN: data-race in __ip_append_data / __ip_append_data
read to 0xffff8881035d4b6c of 4 bytes by task 8877 on cpu 1:
 __ip_append_data+0x1c1/0x1de0 net/ipv4/ip_output.c:994
 ip_make_skb+0x13f/0x2d0 net/ipv4/ip_output.c:1636
 udp_sendmsg+0x12bd/0x14c0 net/ipv4/udp.c:1249
 inet_sendmsg+0x5f/0x80 net/ipv4/af_inet.c:819
 sock_sendmsg_nosec net/socket.c:705 [inline]
 sock_sendmsg net/socket.c:725 [inline]
 ____sys_sendmsg+0x39a/0x510 net/socket.c:2413
 ___sys_sendmsg net/socket.c:2467 [inline]
 __sys_sendmmsg+0x267/0x4c0 net/socket.c:2553
 __do_sys_sendmmsg net/socket.c:2582 [inline]
 __se_sys_sendmmsg net/socket.c:2579 [inline]
 __x64_sys_sendmmsg+0x53/0x60 net/socket.c:2579
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x44/0xd0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
write to 0xffff8881035d4b6c of 4 bytes by task 8880 on cpu 0:
 __ip_append_data+0x1d8/0x1de0 net/ipv4/ip_output.c:994
 ip_make_skb+0x13f/0x2d0 net/ipv4/ip_output.c:1636
 udp_sendmsg+0x12bd/0x14c0 net/ipv4/udp.c:1249
 inet_sendmsg+0x5f/0x80 net/ipv4/af_inet.c:819
 sock_sendmsg_nosec net/socket.c:705 [inline]
 sock_sendmsg net/socket.c:725 [inline]
 ____sys_sendmsg+0x39a/0x510 net/socket.c:2413
 ___sys_sendmsg net/socket.c:2467 [inline]
 __sys_sendmmsg+0x267/0x4c0 net/socket.c:2553
 __do_sys_sendmmsg net/socket.c:2582 [inline]
 __se_sys_sendmmsg net/socket.c:2579 [inline]
 __x64_sys_sendmmsg+0x53/0x60 net/socket.c:2579
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x44/0xd0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
value changed: 0x0000054d -> 0x0000054e
Reported by Kernel Concurrency Sanitizer on:
CPU: 0 PID: 8880 Comm: syz-executor.5 Not tainted 5.17.0-rc2-syzkaller-00167-gdcb85f85fa6f-dirty #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
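For illustration only, here is a minimal userspace analogue of the race, with pthreads and stdatomic standing in for concurrent UDP senders (this is not kernel code): a plain post-increment can lose updates and hand two senders the same key, while an atomic fetch-and-add cannot.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static unsigned int plain_key;	/* like the old u32 sk_tskey: racy   */
static atomic_uint atomic_key;	/* like the new atomic_t sk_tskey    */

static void *sender(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++) {
		unsigned int t1 = plain_key++;	/* data race, as KCSAN would flag */
		unsigned int t2 = atomic_fetch_add(&atomic_key, 1); /* race-free  */
		(void)t1;
		(void)t2;
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, sender, NULL);
	pthread_create(&b, NULL, sender, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* plain_key can end up below 200000 (lost increments, duplicate
	 * keys); atomic_key is always exactly 200000.
	 */
	printf("plain: %u atomic: %u\n", plain_key, atomic_load(&atomic_key));
	return 0;
}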
Fixes: 09c2d251b707 ("net-timestamp: add key to disambiguate concurrent datagrams") Signed-off-by: Eric Dumazet edumazet@google.com Cc: Willem de Bruijn willemb@google.com Reported-by: syzbot syzkaller@googlegroups.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- include/net/sock.h | 4 ++-- net/can/j1939/transport.c | 2 +- net/core/skbuff.c | 2 +- net/core/sock.c | 4 ++-- net/ipv4/ip_output.c | 2 +- net/ipv6/ip6_output.c | 2 +- 6 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/include/net/sock.h b/include/net/sock.h index 4e575735563a7..cd69595949614 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -504,7 +504,7 @@ struct sock { u16 sk_tsflags; int sk_bind_phc; u8 sk_shutdown; - u32 sk_tskey; + atomic_t sk_tskey; atomic_t sk_zckey;
u8 sk_clockid; @@ -2636,7 +2636,7 @@ static inline void _sock_tx_timestamp(struct sock *sk, __u16 tsflags, __sock_tx_timestamp(tsflags, tx_flags); if (tsflags & SOF_TIMESTAMPING_OPT_ID && tskey && tsflags & SOF_TIMESTAMPING_TX_RECORD_MASK) - *tskey = sk->sk_tskey++; + *tskey = atomic_inc_return(&sk->sk_tskey) - 1; } if (unlikely(sock_flag(sk, SOCK_WIFI_STATUS))) *tx_flags |= SKBTX_WIFI_STATUS; diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c index a271688780a2c..307ee1174a6e2 100644 --- a/net/can/j1939/transport.c +++ b/net/can/j1939/transport.c @@ -2006,7 +2006,7 @@ struct j1939_session *j1939_tp_send(struct j1939_priv *priv, /* set the end-packet for broadcast */ session->pkt.last = session->pkt.total;
- skcb->tskey = session->sk->sk_tskey++; + skcb->tskey = atomic_inc_return(&session->sk->sk_tskey) - 1; session->tskey = skcb->tskey;
return session; diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 2b2e19afef158..f78969d8d8160 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -4850,7 +4850,7 @@ static void __skb_complete_tx_timestamp(struct sk_buff *skb, if (sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID) { serr->ee.ee_data = skb_shinfo(skb)->tskey; if (sk_is_tcp(sk)) - serr->ee.ee_data -= sk->sk_tskey; + serr->ee.ee_data -= atomic_read(&sk->sk_tskey); }
err = sock_queue_err_skb(sk, skb); diff --git a/net/core/sock.c b/net/core/sock.c index d18e4ffd84820..6613a864f7f5a 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -878,9 +878,9 @@ int sock_set_timestamping(struct sock *sk, int optname, if ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)) return -EINVAL; - sk->sk_tskey = tcp_sk(sk)->snd_una; + atomic_set(&sk->sk_tskey, tcp_sk(sk)->snd_una); } else { - sk->sk_tskey = 0; + atomic_set(&sk->sk_tskey, 0); } }
diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c index a4d2eb691cbc1..131066d0319a2 100644 --- a/net/ipv4/ip_output.c +++ b/net/ipv4/ip_output.c @@ -992,7 +992,7 @@ static int __ip_append_data(struct sock *sk,
if (cork->tx_flags & SKBTX_ANY_SW_TSTAMP && sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID) - tskey = sk->sk_tskey++; + tskey = atomic_inc_return(&sk->sk_tskey) - 1;
hh_len = LL_RESERVED_SPACE(rt->dst.dev);
diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c index ff4e83e2a5068..22bf8fb617165 100644 --- a/net/ipv6/ip6_output.c +++ b/net/ipv6/ip6_output.c @@ -1465,7 +1465,7 @@ static int __ip6_append_data(struct sock *sk,
if (cork->tx_flags & SKBTX_ANY_SW_TSTAMP && sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID) - tskey = sk->sk_tskey++; + tskey = atomic_inc_return(&sk->sk_tskey) - 1;
hh_len = LL_RESERVED_SPACE(rt->dst.dev);
From: Md Haris Iqbal haris.iqbal@ionos.com
[ Upstream commit 8700af2cc18c919b2a83e74e0479038fd113c15d ]
The callback function rtrs_clt_dev_release() for put_device() calls kfree(clt) to free the memory. We shouldn't call kfree(clt) again, and we can't use clt after the kfree either.
Replace device_register() with device_initialize() and device_add() so that dev_set_name() can be used appropriately.
Move mutex_destroy() to the release function so it can be called in the alloc_clt err path.
Fixes: eab098246625 ("RDMA/rtrs-clt: Refactor the failure cases in alloc_clt") Link: https://lore.kernel.org/r/20220217030929.323849-1-haris.iqbal@ionos.com Reported-by: Miaoqian Lin linmq006@gmail.com Signed-off-by: Md Haris Iqbal haris.iqbal@ionos.com Reviewed-by: Jack Wang jinpu.wang@ionos.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/ulp/rtrs/rtrs-clt.c | 37 ++++++++++++++------------ 1 file changed, 20 insertions(+), 17 deletions(-)
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c index e39709dee179d..7760cbc098add 100644 --- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c +++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c @@ -2664,6 +2664,8 @@ static void rtrs_clt_dev_release(struct device *dev) { struct rtrs_clt *clt = container_of(dev, struct rtrs_clt, dev);
+ mutex_destroy(&clt->paths_ev_mutex); + mutex_destroy(&clt->paths_mutex); kfree(clt); }
@@ -2693,6 +2695,8 @@ static struct rtrs_clt *alloc_clt(const char *sessname, size_t paths_num, return ERR_PTR(-ENOMEM); }
+ clt->dev.class = rtrs_clt_dev_class; + clt->dev.release = rtrs_clt_dev_release; uuid_gen(&clt->paths_uuid); INIT_LIST_HEAD_RCU(&clt->paths_list); clt->paths_num = paths_num; @@ -2709,43 +2713,41 @@ static struct rtrs_clt *alloc_clt(const char *sessname, size_t paths_num, init_waitqueue_head(&clt->permits_wait); mutex_init(&clt->paths_ev_mutex); mutex_init(&clt->paths_mutex); + device_initialize(&clt->dev);
- clt->dev.class = rtrs_clt_dev_class; - clt->dev.release = rtrs_clt_dev_release; err = dev_set_name(&clt->dev, "%s", sessname); if (err) - goto err; + goto err_put; + /* * Suppress user space notification until * sysfs files are created */ dev_set_uevent_suppress(&clt->dev, true); - err = device_register(&clt->dev); - if (err) { - put_device(&clt->dev); - goto err; - } + err = device_add(&clt->dev); + if (err) + goto err_put;
clt->kobj_paths = kobject_create_and_add("paths", &clt->dev.kobj); if (!clt->kobj_paths) { err = -ENOMEM; - goto err_dev; + goto err_del; } err = rtrs_clt_create_sysfs_root_files(clt); if (err) { kobject_del(clt->kobj_paths); kobject_put(clt->kobj_paths); - goto err_dev; + goto err_del; } dev_set_uevent_suppress(&clt->dev, false); kobject_uevent(&clt->dev.kobj, KOBJ_ADD);
return clt; -err_dev: - device_unregister(&clt->dev); -err: +err_del: + device_del(&clt->dev); +err_put: free_percpu(clt->pcpu_path); - kfree(clt); + put_device(&clt->dev); return ERR_PTR(err); }
@@ -2753,9 +2755,10 @@ static void free_clt(struct rtrs_clt *clt) { free_permits(clt); free_percpu(clt->pcpu_path); - mutex_destroy(&clt->paths_ev_mutex); - mutex_destroy(&clt->paths_mutex); - /* release callback will free clt in last put */ + + /* + * release callback will free clt and destroy mutexes in last put + */ device_unregister(&clt->dev); }
From: Md Haris Iqbal haris.iqbal@ionos.com
[ Upstream commit c46fa8911b17e3f808679061a8af8bee219f4602 ]
The error path of rtrs_clt_open() calls free_clt(), where free_permits() is called. This is wrong since the error path of rtrs_clt_open() does not need to call free_permits().
Also, moving the free_permits() call to rtrs_clt_close() makes it better aligned with the call to alloc_permit() in rtrs_clt_open().
Fixes: 6a98d71daea1 ("RDMA/rtrs: client: main functionality") Link: https://lore.kernel.org/r/20220217030929.323849-2-haris.iqbal@ionos.com Signed-off-by: Md Haris Iqbal haris.iqbal@ionos.com Reviewed-by: Jack Wang jinpu.wang@ionos.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/ulp/rtrs/rtrs-clt.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c index 7760cbc098add..be96701cf281e 100644 --- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c +++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c @@ -2753,7 +2753,6 @@ static struct rtrs_clt *alloc_clt(const char *sessname, size_t paths_num,
static void free_clt(struct rtrs_clt *clt) { - free_permits(clt); free_percpu(clt->pcpu_path);
/* @@ -2875,6 +2874,7 @@ void rtrs_clt_close(struct rtrs_clt *clt) rtrs_clt_destroy_sess_files(sess, NULL); kobject_put(&sess->kobj); } + free_permits(clt); free_clt(clt); } EXPORT_SYMBOL(rtrs_clt_close);
From: Michael Chan michael.chan@broadcom.com
[ Upstream commit b891106da52b2c12dbaf73400f6d225b06a38d80 ]
When polling for the firmware message response, we first poll for the response message header. Once the valid length is detected in the header, we poll for the valid bit at the end of the message which signals DMA completion. Normally, this poll time for DMA completion is extremely short (0 to a few usec). But on some devices under some rare conditions, it can be up to about 20 msec.
Increase this delay to 50 msec and use udelay() for the first 10 usec for the common case, and usleep_range() beyond that.
Also, change the error message to include the above delay time when printing the timeout value.
Fixes: 3c8c20db769c ("bnxt_en: move HWRM API implementation into separate file") Reviewed-by: Vladimir Olovyannikov vladimir.olovyannikov@broadcom.com Signed-off-by: Michael Chan michael.chan@broadcom.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c | 12 +++++++++--- drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h | 2 +- 2 files changed, 10 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c index 8171f4912fa01..3a0eeb3737767 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c @@ -595,18 +595,24 @@ static int __hwrm_send(struct bnxt *bp, struct bnxt_hwrm_ctx *ctx)
/* Last byte of resp contains valid bit */ valid = ((u8 *)ctx->resp) + len - 1; - for (j = 0; j < HWRM_VALID_BIT_DELAY_USEC; j++) { + for (j = 0; j < HWRM_VALID_BIT_DELAY_USEC; ) { /* make sure we read from updated DMA memory */ dma_rmb(); if (*valid) break; - usleep_range(1, 5); + if (j < 10) { + udelay(1); + j++; + } else { + usleep_range(20, 30); + j += 20; + } }
if (j >= HWRM_VALID_BIT_DELAY_USEC) { if (!(ctx->flags & BNXT_HWRM_CTX_SILENT)) netdev_err(bp->dev, "Error (timeout: %u) msg {0x%x 0x%x} len:%d v:%d\n", - hwrm_total_timeout(i), + hwrm_total_timeout(i) + j, le16_to_cpu(ctx->req->req_type), le16_to_cpu(ctx->req->seq_id), len, *valid); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h index 9a9fc4e8041b6..380ef69afb51b 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h @@ -94,7 +94,7 @@ static inline unsigned int hwrm_total_timeout(unsigned int n) }
-#define HWRM_VALID_BIT_DELAY_USEC 150 +#define HWRM_VALID_BIT_DELAY_USEC 50000
static inline bool bnxt_cfa_hwrm_message(u16 req_type) {
From: ChenXiaoSong chenxiaosong2@huawei.com
[ Upstream commit 84ec758fb2daa236026506868c8796b0500c047d ]
When configfs_register_subsystem() or configfs_unregister_subsystem() is executing link_group() or unlink_group(), it is possible for two processes to add or delete list entries concurrently. Some unfortunate interleavings of them can cause a kernel panic.
One of the cases is:
A --> B --> C --> D
A <-- B <-- C <-- D

     delete list_head *B        |       delete list_head *C
--------------------------------|-----------------------------------
configfs_unregister_subsystem   |  configfs_unregister_subsystem
  unlink_group                  |    unlink_group
    unlink_obj                  |      unlink_obj
      list_del_init             |        list_del_init
        __list_del_entry        |          __list_del_entry
          __list_del            |            __list_del
          // next == C          |
          next->prev = prev     |
                                |            next->prev = prev
          prev->next = next     |
                                |            // prev == B
                                |            prev->next = next
Fix this by adding a mutex when calling link_group() or unlink_group(). However, the parent configfs_subsystem is NULL when the config_item is the root, so create a new mutex, configfs_subsystem_mutex, for that case.
Fixes: 7063fbf22611 ("[PATCH] configfs: User-driven configuration filesystem") Signed-off-by: ChenXiaoSong chenxiaosong2@huawei.com Signed-off-by: Laibin Qiu qiulaibin@huawei.com Signed-off-by: Christoph Hellwig hch@lst.de Signed-off-by: Sasha Levin sashal@kernel.org --- fs/configfs/dir.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+)
diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c index d3cd2a94d1e8c..d1f9d26322027 100644 --- a/fs/configfs/dir.c +++ b/fs/configfs/dir.c @@ -34,6 +34,14 @@ */ DEFINE_SPINLOCK(configfs_dirent_lock);
+/* + * All of link_obj/unlink_obj/link_group/unlink_group require that + * subsys->su_mutex is held. + * But parent configfs_subsystem is NULL when config_item is root. + * Use this mutex when config_item is root. + */ +static DEFINE_MUTEX(configfs_subsystem_mutex); + static void configfs_d_iput(struct dentry * dentry, struct inode * inode) { @@ -1859,7 +1867,9 @@ int configfs_register_subsystem(struct configfs_subsystem *subsys) group->cg_item.ci_name = group->cg_item.ci_namebuf;
sd = root->d_fsdata; + mutex_lock(&configfs_subsystem_mutex); link_group(to_config_group(sd->s_element), group); + mutex_unlock(&configfs_subsystem_mutex);
inode_lock_nested(d_inode(root), I_MUTEX_PARENT);
@@ -1884,7 +1894,9 @@ int configfs_register_subsystem(struct configfs_subsystem *subsys) inode_unlock(d_inode(root));
if (err) { + mutex_lock(&configfs_subsystem_mutex); unlink_group(group); + mutex_unlock(&configfs_subsystem_mutex); configfs_release_fs(); } put_fragment(frag); @@ -1931,7 +1943,9 @@ void configfs_unregister_subsystem(struct configfs_subsystem *subsys)
dput(dentry);
+ mutex_lock(&configfs_subsystem_mutex); unlink_group(group); + mutex_unlock(&configfs_subsystem_mutex); configfs_release_fs(); }
From: Bart Van Assche bvanassche@acm.org
[ Upstream commit 081bdc9fe05bb23248f5effb6f811da3da4b8252 ]
Remove the flush_workqueue(system_long_wq) call since flushing system_long_wq is deadlock-prone and since that call is redundant with a preceding cancel_work_sync().
Link: https://lore.kernel.org/r/20220215210511.28303-3-bvanassche@acm.org Fixes: ef6c49d87c34 ("IB/srp: Eliminate state SRP_TARGET_DEAD") Reported-by: syzbot+831661966588c802aae9@syzkaller.appspotmail.com Signed-off-by: Bart Van Assche bvanassche@acm.org Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/ulp/srp/ib_srp.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c index e174e853f8a40..285b766e4e704 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.c +++ b/drivers/infiniband/ulp/srp/ib_srp.c @@ -4047,9 +4047,11 @@ static void srp_remove_one(struct ib_device *device, void *client_data) spin_unlock(&host->target_lock);
/* - * Wait for tl_err and target port removal tasks. + * srp_queue_remove_work() queues a call to + * srp_remove_target(). The latter function cancels + * target->tl_err_work so waiting for the remove works to + * finish is sufficient. */ - flush_workqueue(system_long_wq); flush_workqueue(srp_remove_wq);
kfree(host);
From: Kumar Kartikeya Dwivedi memxor@gmail.com
[ Upstream commit 3363bd0cfbb80dfcd25003cd3815b0ad8b68d0ff ]
Allow passing PTR_TO_CTX, if the kfunc expects a matching struct type, and punt to PTR_TO_MEM block if reg->type does not fall in one of PTR_TO_BTF_ID or PTR_TO_SOCK* types. This will be used by future commits to get access to XDP and TC PTR_TO_CTX, and pass various data (flags, l4proto, netns_id, etc.) encoded in opts struct passed as pointer to kfunc.
For PTR_TO_MEM support, arguments are currently limited to a pointer to scalar, or a pointer to a struct composed of scalars. This is done so that unsafe scenarios (like passing PTR_TO_MEM where a PTR_TO_BTF_ID of a valid in-kernel structure is expected, which may have pointers) are avoided. Since the argument checking happens based on the argument register type, it is not easy to ascertain what the expected type is. In the future, support for PTR_TO_MEM for kfuncs can be extended to serve other use cases. The struct type whose pointer is passed in may have a maximum nesting depth of 4, all recursively composed of scalars or structs of scalars.
Future commits will add negative tests that check whether these restrictions imposed for kfunc arguments are duly rejected by the BPF verifier or not.
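As a rough illustration of the "pointer to scalar, or struct composed (recursively) of scalars" rule, the sketch below lists example argument layouts and how the restrictions described above would treat them; all of the struct names here are made up for the example and do not exist in the kernel.

/* Hypothetical kfunc argument layouts, annotated with how the
 * scalar-struct rule described above would treat them.
 */
#include <stdint.h>

struct opts_ok {			/* accepted: only scalar members */
	uint32_t flags;
	uint16_t l4proto;
	uint8_t  pad[2];		/* arrays of scalars are also fine */
	uint64_t netns_id;
};

struct opts_nested_ok {			/* accepted: nested structs of scalars,
					 * within the 4-level limit */
	struct opts_ok inner;
	int32_t error;
};

struct opts_rejected_ptr {		/* rejected: contains a pointer member */
	struct opts_ok *inner;
	uint32_t flags;
};

struct l4 { uint32_t v; };
struct l3 { struct l4 l; };
struct l2 { struct l3 l; };
struct l1 { struct l2 l; };
struct opts_rejected_depth {		/* rejected: 5 levels of struct nesting
					 * exceeds the limit of 4 */
	struct l1 l;
};

int main(void) { return 0; }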
Signed-off-by: Kumar Kartikeya Dwivedi memxor@gmail.com Signed-off-by: Alexei Starovoitov ast@kernel.org Link: https://lore.kernel.org/bpf/20211217015031.1278167-4-memxor@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/bpf/btf.c | 94 +++++++++++++++++++++++++++++++++++++----------- 1 file changed, 73 insertions(+), 21 deletions(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index d2ff8ba7ae58f..1621f9d45fbda 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -5564,12 +5564,53 @@ static u32 *reg2btf_ids[__BPF_REG_TYPE_MAX] = { #endif };
+/* Returns true if struct is composed of scalars, 4 levels of nesting allowed */ +static bool __btf_type_is_scalar_struct(struct bpf_verifier_log *log, + const struct btf *btf, + const struct btf_type *t, int rec) +{ + const struct btf_type *member_type; + const struct btf_member *member; + u32 i; + + if (!btf_type_is_struct(t)) + return false; + + for_each_member(i, t, member) { + const struct btf_array *array; + + member_type = btf_type_skip_modifiers(btf, member->type, NULL); + if (btf_type_is_struct(member_type)) { + if (rec >= 3) { + bpf_log(log, "max struct nesting depth exceeded\n"); + return false; + } + if (!__btf_type_is_scalar_struct(log, btf, member_type, rec + 1)) + return false; + continue; + } + if (btf_type_is_array(member_type)) { + array = btf_type_array(member_type); + if (!array->nelems) + return false; + member_type = btf_type_skip_modifiers(btf, array->type, NULL); + if (!btf_type_is_scalar(member_type)) + return false; + continue; + } + if (!btf_type_is_scalar(member_type)) + return false; + } + return true; +} + static int btf_check_func_arg_match(struct bpf_verifier_env *env, const struct btf *btf, u32 func_id, struct bpf_reg_state *regs, bool ptr_to_mem_ok) { struct bpf_verifier_log *log = &env->log; + bool is_kfunc = btf_is_kernel(btf); const char *func_name, *ref_tname; const struct btf_type *t, *ref_t; const struct btf_param *args; @@ -5622,7 +5663,20 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
ref_t = btf_type_skip_modifiers(btf, t->type, &ref_id); ref_tname = btf_name_by_offset(btf, ref_t->name_off); - if (btf_is_kernel(btf)) { + if (btf_get_prog_ctx_type(log, btf, t, + env->prog->type, i)) { + /* If function expects ctx type in BTF check that caller + * is passing PTR_TO_CTX. + */ + if (reg->type != PTR_TO_CTX) { + bpf_log(log, + "arg#%d expected pointer to ctx, but got %s\n", + i, btf_type_str(t)); + return -EINVAL; + } + if (check_ctx_reg(env, reg, regno)) + return -EINVAL; + } else if (is_kfunc && (reg->type == PTR_TO_BTF_ID || reg2btf_ids[reg->type])) { const struct btf_type *reg_ref_t; const struct btf *reg_btf; const char *reg_ref_tname; @@ -5638,14 +5692,9 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, if (reg->type == PTR_TO_BTF_ID) { reg_btf = reg->btf; reg_ref_id = reg->btf_id; - } else if (reg2btf_ids[reg->type]) { + } else { reg_btf = btf_vmlinux; reg_ref_id = *reg2btf_ids[reg->type]; - } else { - bpf_log(log, "kernel function %s args#%d expected pointer to %s %s but R%d is not a pointer to btf_id\n", - func_name, i, - btf_type_str(ref_t), ref_tname, regno); - return -EINVAL; }
reg_ref_t = btf_type_skip_modifiers(reg_btf, reg_ref_id, @@ -5661,23 +5710,24 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, reg_ref_tname); return -EINVAL; } - } else if (btf_get_prog_ctx_type(log, btf, t, - env->prog->type, i)) { - /* If function expects ctx type in BTF check that caller - * is passing PTR_TO_CTX. - */ - if (reg->type != PTR_TO_CTX) { - bpf_log(log, - "arg#%d expected pointer to ctx, but got %s\n", - i, btf_type_str(t)); - return -EINVAL; - } - if (check_ctx_reg(env, reg, regno)) - return -EINVAL; } else if (ptr_to_mem_ok) { const struct btf_type *resolve_ret; u32 type_size;
+ if (is_kfunc) { + /* Permit pointer to mem, but only when argument + * type is pointer to scalar, or struct composed + * (recursively) of scalars. + */ + if (!btf_type_is_scalar(ref_t) && + !__btf_type_is_scalar_struct(log, btf, ref_t, 0)) { + bpf_log(log, + "arg#%d pointer type %s %s must point to scalar or struct with scalar\n", + i, btf_type_str(ref_t), ref_tname); + return -EINVAL; + } + } + resolve_ret = btf_resolve_size(btf, ref_t, &type_size); if (IS_ERR(resolve_ret)) { bpf_log(log, @@ -5690,6 +5740,8 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, if (check_mem_reg(env, reg, regno, type_size)) return -EINVAL; } else { + bpf_log(log, "reg type unsupported for arg#%d %sfunction %s#%d\n", i, + is_kfunc ? "kernel " : "", func_name, func_id); return -EINVAL; } } @@ -5739,7 +5791,7 @@ int btf_check_kfunc_arg_match(struct bpf_verifier_env *env, const struct btf *btf, u32 func_id, struct bpf_reg_state *regs) { - return btf_check_func_arg_match(env, btf, func_id, regs, false); + return btf_check_func_arg_match(env, btf, func_id, regs, true); }
/* Convert BTF of a function into bpf_reg_state if possible
From: Kumar Kartikeya Dwivedi memxor@gmail.com
[ Upstream commit 45ce4b4f9009102cd9f581196d480a59208690c1 ]
When commit e6ac2450d6de ("bpf: Support bpf program calling kernel function") added kfunc support, it defined reg2btf_ids as a cheap way to translate the verifier reg type to the appropriate btf_vmlinux BTF ID, however commit c25b2ae13603 ("bpf: Replace PTR_TO_XXX_OR_NULL with PTR_TO_XXX | PTR_MAYBE_NULL") moved the __BPF_REG_TYPE_MAX from the last member of bpf_reg_type enum to after the base register types, and defined other variants using type flag composition. However, now, the direct usage of reg->type to index into reg2btf_ids may no longer fall into __BPF_REG_TYPE_MAX range, and hence lead to out of bounds access and kernel crash on dereference of bad pointer.
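A minimal userspace sketch of the indexing hazard: once flag bits are OR-ed into the register type, the raw value no longer fits an array sized for the base types, and masking the flags off (what base_type() does in the kernel) is what makes the lookup safe again. The enum values and bit layout below are invented for the illustration.

/* Illustration of why indexing a base-type-sized array with a
 * flag-composed type goes out of bounds, and why masking to the base
 * type fixes it. Names and values are illustrative, not the kernel's.
 */
#include <stdio.h>

enum reg_type {
	PTR_TO_SOCKET,
	PTR_TO_SOCK_COMMON,
	PTR_TO_TCP_SOCK,
	__REG_TYPE_MAX,			/* the lookup array is sized to this */
};

#define TYPE_FLAG_SHIFT	8
#define PTR_MAYBE_NULL	(1 << TYPE_FLAG_SHIFT)	/* example flag bit */
#define BASE_TYPE_MASK	((1 << TYPE_FLAG_SHIFT) - 1)

static int base_type(int type) { return type & BASE_TYPE_MASK; }
static int type_flag(int type) { return type & ~BASE_TYPE_MASK; }

static const int reg2btf_ids[__REG_TYPE_MAX] = { 101, 102, 103 };

int main(void)
{
	int reg_type = PTR_TO_TCP_SOCK | PTR_MAYBE_NULL;

	/* Buggy pattern: reg2btf_ids[reg_type] would use index 258, far
	 * past __REG_TYPE_MAX - the out-of-bounds access described above. */
	printf("raw index: %d (array has %d entries)\n", reg_type, __REG_TYPE_MAX);

	/* Fixed pattern: only plain base types with no flags set may take
	 * the reg2btf_ids path. */
	if (!type_flag(reg_type))
		printf("btf id: %d\n", reg2btf_ids[base_type(reg_type)]);
	else
		printf("flagged type: not eligible for the reg2btf_ids path\n");
	return 0;
}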
Fixes: c25b2ae13603 ("bpf: Replace PTR_TO_XXX_OR_NULL with PTR_TO_XXX | PTR_MAYBE_NULL") Signed-off-by: Kumar Kartikeya Dwivedi memxor@gmail.com Signed-off-by: Alexei Starovoitov ast@kernel.org Link: https://lore.kernel.org/bpf/20220216201943.624869-1-memxor@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/bpf/btf.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 1621f9d45fbda..c9da250fee38c 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -5676,7 +5676,8 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, } if (check_ctx_reg(env, reg, regno)) return -EINVAL; - } else if (is_kfunc && (reg->type == PTR_TO_BTF_ID || reg2btf_ids[reg->type])) { + } else if (is_kfunc && (reg->type == PTR_TO_BTF_ID || + (reg2btf_ids[base_type(reg->type)] && !type_flag(reg->type)))) { const struct btf_type *reg_ref_t; const struct btf *reg_btf; const char *reg_ref_tname; @@ -5694,7 +5695,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, reg_ref_id = reg->btf_id; } else { reg_btf = btf_vmlinux; - reg_ref_id = *reg2btf_ids[reg->type]; + reg_ref_id = *reg2btf_ids[base_type(reg->type)]; }
reg_ref_t = btf_type_skip_modifiers(reg_btf, reg_ref_id,
From: Daniel Bristot de Oliveira bristot@kernel.org
commit ce33c845b030c9cf768370c951bc699470b09fa7 upstream.
The stacktrace event trigger is not dumping the stacktrace to the instance where it was enabled, but to the global "instance."
Use the private_data, pointing to the trigger file, to figure out the corresponding trace instance, and use it in the trigger action, like snapshot_trigger does.
Link: https://lkml.kernel.org/r/afbb0b4f18ba92c276865bc97204d438473f4ebc.164539623...
Cc: stable@vger.kernel.org Fixes: ae63b31e4d0e2 ("tracing: Separate out trace events from global variables") Reviewed-by: Tom Zanussi zanussi@kernel.org Tested-by: Tom Zanussi zanussi@kernel.org Signed-off-by: Daniel Bristot de Oliveira bristot@kernel.org Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/trace/trace_events_trigger.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
--- a/kernel/trace/trace_events_trigger.c +++ b/kernel/trace/trace_events_trigger.c @@ -1200,7 +1200,12 @@ stacktrace_trigger(struct event_trigger_ struct trace_buffer *buffer, void *rec, struct ring_buffer_event *event) { - trace_dump_stack(STACK_SKIP); + struct trace_event_file *file = data->private_data; + + if (file) + __trace_stack(file->tr, tracing_gen_ctx(), STACK_SKIP); + else + trace_dump_stack(STACK_SKIP); }
static void
From: Steven Rostedt (Google) rostedt@goodmis.org
commit 302e9edd54985f584cfc180098f3554774126969 upstream.
If a trigger is set on an event to disable or enable tracing within an instance, then tracing should be disabled or enabled in the instance and not at the top level, which is confusing to users.
Link: https://lkml.kernel.org/r/20220223223837.14f94ec3@rorschach.local.home
Cc: stable@vger.kernel.org Fixes: ae63b31e4d0e2 ("tracing: Separate out trace events from global variables") Tested-by: Daniel Bristot de Oliveira bristot@kernel.org Reviewed-by: Tom Zanussi zanussi@kernel.org Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/trace/trace_events_trigger.c | 52 +++++++++++++++++++++++++++++++----- 1 file changed, 46 insertions(+), 6 deletions(-)
--- a/kernel/trace/trace_events_trigger.c +++ b/kernel/trace/trace_events_trigger.c @@ -955,6 +955,16 @@ traceon_trigger(struct event_trigger_dat struct trace_buffer *buffer, void *rec, struct ring_buffer_event *event) { + struct trace_event_file *file = data->private_data; + + if (file) { + if (tracer_tracing_is_on(file->tr)) + return; + + tracer_tracing_on(file->tr); + return; + } + if (tracing_is_on()) return;
@@ -966,8 +976,15 @@ traceon_count_trigger(struct event_trigg struct trace_buffer *buffer, void *rec, struct ring_buffer_event *event) { - if (tracing_is_on()) - return; + struct trace_event_file *file = data->private_data; + + if (file) { + if (tracer_tracing_is_on(file->tr)) + return; + } else { + if (tracing_is_on()) + return; + }
if (!data->count) return; @@ -975,7 +992,10 @@ traceon_count_trigger(struct event_trigg if (data->count != -1) (data->count)--;
- tracing_on(); + if (file) + tracer_tracing_on(file->tr); + else + tracing_on(); }
static void @@ -983,6 +1003,16 @@ traceoff_trigger(struct event_trigger_da struct trace_buffer *buffer, void *rec, struct ring_buffer_event *event) { + struct trace_event_file *file = data->private_data; + + if (file) { + if (!tracer_tracing_is_on(file->tr)) + return; + + tracer_tracing_off(file->tr); + return; + } + if (!tracing_is_on()) return;
@@ -994,8 +1024,15 @@ traceoff_count_trigger(struct event_trig struct trace_buffer *buffer, void *rec, struct ring_buffer_event *event) { - if (!tracing_is_on()) - return; + struct trace_event_file *file = data->private_data; + + if (file) { + if (!tracer_tracing_is_on(file->tr)) + return; + } else { + if (!tracing_is_on()) + return; + }
if (!data->count) return; @@ -1003,7 +1040,10 @@ traceoff_count_trigger(struct event_trig if (data->count != -1) (data->count)--;
- tracing_off(); + if (file) + tracer_tracing_off(file->tr); + else + tracing_off(); }
static int
From: Nuno Sá nuno.sa@analog.com
commit b0e85f95e30d4d2dc22ea123a30dba36406879a1 upstream.
The trigger handler defined in the driver assumes that burst mode is being used. Hence, for devices that do not support it, we have to use the adis library default trigger implementation.
Tested-by: Julia Pineda julia.pineda@analog.com Fixes: 941f130881fa9 ("iio: adis16480: support burst read function") Signed-off-by: Nuno Sá nuno.sa@analog.com Link: https://lore.kernel.org/r/20220114132608.241-1-nuno.sa@analog.com Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/imu/adis16480.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
--- a/drivers/iio/imu/adis16480.c +++ b/drivers/iio/imu/adis16480.c @@ -1403,6 +1403,7 @@ static int adis16480_probe(struct spi_de { const struct spi_device_id *id = spi_get_device_id(spi); const struct adis_data *adis16480_data; + irq_handler_t trigger_handler = NULL; struct iio_dev *indio_dev; struct adis16480 *st; int ret; @@ -1474,8 +1475,12 @@ static int adis16480_probe(struct spi_de st->clk_freq = st->chip_info->int_clk; }
+ /* Only use our trigger handler if burst mode is supported */ + if (adis16480_data->burst_len) + trigger_handler = adis16480_trigger_handler; + ret = devm_adis_setup_buffer_and_trigger(&st->adis, indio_dev, - adis16480_trigger_handler); + trigger_handler); if (ret) return ret;
From: Christophe JAILLET christophe.jaillet@wanadoo.fr
commit e0a2e37f303828d030a83f33ffe14b36cb88d563 upstream.
If iio_device_register() fails, a previous ioremap() is left unbalanced.
Update the error handling path and add the missing iounmap() call, as already done in the remove function.
Fixes: 74aeac4da66f ("iio: adc: Add MEN 16z188 ADC driver") Signed-off-by: Christophe JAILLET christophe.jaillet@wanadoo.fr Link: https://lore.kernel.org/r/320fc777863880247c2aff4a9d1a54ba69abf080.164344514... Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/adc/men_z188_adc.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
--- a/drivers/iio/adc/men_z188_adc.c +++ b/drivers/iio/adc/men_z188_adc.c @@ -103,6 +103,7 @@ static int men_z188_probe(struct mcb_dev struct z188_adc *adc; struct iio_dev *indio_dev; struct resource *mem; + int ret;
indio_dev = devm_iio_device_alloc(&dev->dev, sizeof(struct z188_adc)); if (!indio_dev) @@ -128,8 +129,14 @@ static int men_z188_probe(struct mcb_dev adc->mem = mem; mcb_set_drvdata(dev, indio_dev);
- return iio_device_register(indio_dev); + ret = iio_device_register(indio_dev); + if (ret) + goto err_unmap;
+ return 0; + +err_unmap: + iounmap(adc->base); err: mcb_release_mem(mem); return -ENXIO;
From: Oleksij Rempel o.rempel@pengutronix.de
commit b7a78a8adaa8849c02f174d707aead0f85dca0da upstream.
On one side, indio_dev->num_channels includes all physical channels plus the timestamp channel. On the other side, the array is allocated only for the physical channels. So, fix the memory corruption by using ARRAY_SIZE() instead of the num_channels variable.
Note the first case is a cleanup rather than a fix as the software timestamp channel bit in active_scanmask is never set by the IIO core.
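The off-by-one pattern being fixed can be shown in a few lines: bounding a loop by a count that also includes the software timestamp channel walks one element past an array sized for the physical channels, while ARRAY_SIZE() of the array itself cannot. The channel count in this sketch is arbitrary.

/* Illustrates why looping over "num_channels" (physical + timestamp)
 * overruns an array sized for the physical channels only, and why
 * bounding the loop with ARRAY_SIZE() of that array is safe.
 */
#include <stdio.h>
#include <stddef.h>

#define ARRAY_SIZE(a)	(sizeof(a) / sizeof((a)[0]))

#define NUM_PHYS_CHANNELS	8	/* arbitrary, for illustration */

struct layout { size_t offset; };

int main(void)
{
	struct layout l[NUM_PHYS_CHANNELS];		/* per-physical-channel state */
	size_t num_channels = NUM_PHYS_CHANNELS + 1;	/* + software timestamp channel */

	/* Buggy bound: the last iteration would touch l[8], one past the end. */
	printf("loop bound with num_channels: %zu (array has %zu entries)\n",
	       num_channels, ARRAY_SIZE(l));

	/* Fixed bound: can never exceed the array it indexes. */
	for (size_t ch = 0; ch < ARRAY_SIZE(l); ch++)
		l[ch].offset = ch;

	return 0;
}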
Fixes: 9374e8f5a38d ("iio: adc: add ADC driver for the TI TSC2046 controller") Signed-off-by: Oleksij Rempel o.rempel@pengutronix.de Link: https://lore.kernel.org/r/20220107081401.2816357-1-o.rempel@pengutronix.de Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/adc/ti-tsc2046.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/iio/adc/ti-tsc2046.c +++ b/drivers/iio/adc/ti-tsc2046.c @@ -388,7 +388,7 @@ static int tsc2046_adc_update_scan_mode( mutex_lock(&priv->slock);
size = 0; - for_each_set_bit(ch_idx, active_scan_mask, indio_dev->num_channels) { + for_each_set_bit(ch_idx, active_scan_mask, ARRAY_SIZE(priv->l)) { size += tsc2046_adc_group_set_layout(priv, group, ch_idx); tsc2046_adc_group_set_cmd(priv, group, ch_idx); group++; @@ -548,7 +548,7 @@ static int tsc2046_adc_setup_spi_msg(str * enabled. */ size = 0; - for (ch_idx = 0; ch_idx < priv->dcfg->num_channels; ch_idx++) + for (ch_idx = 0; ch_idx < ARRAY_SIZE(priv->l); ch_idx++) size += tsc2046_adc_group_set_layout(priv, ch_idx, ch_idx);
priv->tx = devm_kzalloc(&priv->spi->dev, size, GFP_KERNEL);
From: Cosmin Tanislav demonsingur@gmail.com
commit 0e33d15f1dce9e3a80a970ea7f0b27837168aeca upstream.
According to page 90 of the datasheet [1], AIN_BUFP is bit 6 and AIN_BUFM is bit 5 of the CONFIG_0 -> CONFIG_7 registers.
Fix the mask used for setting these bits.
[1]: https://www.analog.com/media/en/technical-documentation/data-sheets/ad7124-8...
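To see the effect of the mask change in numbers, here is a tiny userspace sketch using simplified stand-ins for the kernel's GENMASK()/FIELD_PREP() macros; it only demonstrates which CONFIG_n bits each mask covers.

/* Simplified stand-ins for the kernel's GENMASK()/FIELD_PREP(), used
 * only to show which bits each mask touches: AIN_BUFP is bit 6 and
 * AIN_BUFM is bit 5, so the correct mask is GENMASK(6, 5).
 */
#include <stdio.h>

#define GENMASK(h, l)		(((1u << ((h) - (l) + 1)) - 1) << (l))
#define FIELD_PREP(mask, val)	(((val) << __builtin_ctz(mask)) & (mask))

int main(void)
{
	unsigned int wrong = GENMASK(7, 6);	/* 0xc0: bits 7..6 */
	unsigned int fixed = GENMASK(6, 5);	/* 0x60: bits 6..5 */

	/* Enable both analog input buffers (field value 0b11). */
	printf("old mask 0x%02x -> writes 0x%02x (touches bit 7, misses bit 5)\n",
	       wrong, FIELD_PREP(wrong, 0x3));
	printf("new mask 0x%02x -> writes 0x%02x (AIN_BUFP bit 6, AIN_BUFM bit 5)\n",
	       fixed, FIELD_PREP(fixed, 0x3));
	return 0;
}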
Fixes: 0eaecea6e487 ("iio: adc: ad7124: Add buffered input support") Signed-off-by: Cosmin Tanislav cosmin.tanislav@analog.com Link: https://lore.kernel.org/r/20220112200036.694490-1-cosmin.tanislav@analog.com Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/adc/ad7124.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/iio/adc/ad7124.c +++ b/drivers/iio/adc/ad7124.c @@ -76,7 +76,7 @@ #define AD7124_CONFIG_REF_SEL(x) FIELD_PREP(AD7124_CONFIG_REF_SEL_MSK, x) #define AD7124_CONFIG_PGA_MSK GENMASK(2, 0) #define AD7124_CONFIG_PGA(x) FIELD_PREP(AD7124_CONFIG_PGA_MSK, x) -#define AD7124_CONFIG_IN_BUFF_MSK GENMASK(7, 6) +#define AD7124_CONFIG_IN_BUFF_MSK GENMASK(6, 5) #define AD7124_CONFIG_IN_BUFF(x) FIELD_PREP(AD7124_CONFIG_IN_BUFF_MSK, x)
/* AD7124_FILTER_X */
From: Sean Nyekjaer sean@geanix.com
commit ccbed9d8d2a5351d8238f2d3f0741c9a3176f752 upstream.
Add the missing don't-care padding between address and data for SPI transfers.
Fixes: a3e0b51884ee ("iio: accel: add support for FXLS8962AF/FXLS8964AF accelerometers") Signed-off-by: Sean Nyekjaer sean@geanix.com Link: https://lore.kernel.org/r/20211220125144.3630539-1-sean@geanix.com Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/accel/fxls8962af-core.c | 12 ++++++++++-- drivers/iio/accel/fxls8962af-i2c.c | 2 +- drivers/iio/accel/fxls8962af-spi.c | 2 +- drivers/iio/accel/fxls8962af.h | 3 ++- 4 files changed, 14 insertions(+), 5 deletions(-)
--- a/drivers/iio/accel/fxls8962af-core.c +++ b/drivers/iio/accel/fxls8962af-core.c @@ -173,12 +173,20 @@ struct fxls8962af_data { u16 upper_thres; };
-const struct regmap_config fxls8962af_regmap_conf = { +const struct regmap_config fxls8962af_i2c_regmap_conf = { .reg_bits = 8, .val_bits = 8, .max_register = FXLS8962AF_MAX_REG, }; -EXPORT_SYMBOL_GPL(fxls8962af_regmap_conf); +EXPORT_SYMBOL_GPL(fxls8962af_i2c_regmap_conf); + +const struct regmap_config fxls8962af_spi_regmap_conf = { + .reg_bits = 8, + .pad_bits = 8, + .val_bits = 8, + .max_register = FXLS8962AF_MAX_REG, +}; +EXPORT_SYMBOL_GPL(fxls8962af_spi_regmap_conf);
enum { fxls8962af_idx_x, --- a/drivers/iio/accel/fxls8962af-i2c.c +++ b/drivers/iio/accel/fxls8962af-i2c.c @@ -18,7 +18,7 @@ static int fxls8962af_probe(struct i2c_c { struct regmap *regmap;
- regmap = devm_regmap_init_i2c(client, &fxls8962af_regmap_conf); + regmap = devm_regmap_init_i2c(client, &fxls8962af_i2c_regmap_conf); if (IS_ERR(regmap)) { dev_err(&client->dev, "Failed to initialize i2c regmap\n"); return PTR_ERR(regmap); --- a/drivers/iio/accel/fxls8962af-spi.c +++ b/drivers/iio/accel/fxls8962af-spi.c @@ -18,7 +18,7 @@ static int fxls8962af_probe(struct spi_d { struct regmap *regmap;
- regmap = devm_regmap_init_spi(spi, &fxls8962af_regmap_conf); + regmap = devm_regmap_init_spi(spi, &fxls8962af_spi_regmap_conf); if (IS_ERR(regmap)) { dev_err(&spi->dev, "Failed to initialize spi regmap\n"); return PTR_ERR(regmap); --- a/drivers/iio/accel/fxls8962af.h +++ b/drivers/iio/accel/fxls8962af.h @@ -17,6 +17,7 @@ int fxls8962af_core_probe(struct device int fxls8962af_core_remove(struct device *dev);
extern const struct dev_pm_ops fxls8962af_pm_ops; -extern const struct regmap_config fxls8962af_regmap_conf; +extern const struct regmap_config fxls8962af_i2c_regmap_conf; +extern const struct regmap_config fxls8962af_spi_regmap_conf;
#endif /* _FXLS8962AF_H_ */
From: Lorenzo Bianconi lorenzo@kernel.org
commit ea85bf906466191b58532bb19f4fbb4591f0a77e upstream.
We need to wait for sensor settling time (~ 3/ODR) before reading data in st_lsm6dsx_read_oneshot routine in order to avoid corrupted samples.
Fixes: 290a6ce11d93 ("iio: imu: add support to lsm6dsx driver") Reported-by: Mario Tesi mario.tesi@st.com Tested-by: Mario Tesi mario.tesi@st.com Signed-off-by: Lorenzo Bianconi lorenzo@kernel.org Link: https://lore.kernel.org/r/b41ebda5535895298716c76d939f9f165fcd2d13.164409812... Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c +++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c @@ -1374,8 +1374,12 @@ static int st_lsm6dsx_read_oneshot(struc if (err < 0) return err;
+ /* + * we need to wait for sensor settling time before + * reading data in order to avoid corrupted samples + */ delay = 1000000000 / sensor->odr; - usleep_range(delay, 2 * delay); + usleep_range(3 * delay, 4 * delay);
err = st_lsm6dsx_read_locked(hw, addr, &data, sizeof(data)); if (err < 0)
From: Miaoqian Lin linmq006@gmail.com
commit 632fe0bb8c5b9c06ec961f575ee42a6fff5eceeb upstream.
pm_runtime_enable() increases the power disable depth. If the probe fails, we should call pm_runtime_disable() to balance the pm_runtime_enable(). The PM runtime documentation states that drivers should undo in their ->remove() callback the runtime PM changes done in ->probe(); usually this means calling pm_runtime_disable(), pm_runtime_dont_use_autosuspend(), etc. We should do the same on the probe error paths.
Fix this problem for the following drivers: bmc150, bmg160, kmx61, kxcjk-1013, mma9551, mma9553.
Fixes: 7d0ead5c3f00 ("iio: Reconcile operation order between iio_register/unregister and pm functions") Signed-off-by: Miaoqian Lin linmq006@gmail.com Reviewed-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Link: https://lore.kernel.org/r/20220106112309.16879-1-linmq006@gmail.com Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/accel/bmc150-accel-core.c | 5 ++++- drivers/iio/accel/kxcjk-1013.c | 5 ++++- drivers/iio/accel/mma9551.c | 5 ++++- drivers/iio/accel/mma9553.c | 5 ++++- drivers/iio/gyro/bmg160_core.c | 5 ++++- drivers/iio/imu/kmx61.c | 5 ++++- drivers/iio/magnetometer/bmc150_magn.c | 5 +++-- 7 files changed, 27 insertions(+), 8 deletions(-)
--- a/drivers/iio/accel/bmc150-accel-core.c +++ b/drivers/iio/accel/bmc150-accel-core.c @@ -1783,11 +1783,14 @@ int bmc150_accel_core_probe(struct devic ret = iio_device_register(indio_dev); if (ret < 0) { dev_err(dev, "Unable to register iio device\n"); - goto err_trigger_unregister; + goto err_pm_cleanup; }
return 0;
+err_pm_cleanup: + pm_runtime_dont_use_autosuspend(dev); + pm_runtime_disable(dev); err_trigger_unregister: bmc150_accel_unregister_triggers(data, BMC150_ACCEL_TRIGGERS - 1); err_buffer_cleanup: --- a/drivers/iio/accel/kxcjk-1013.c +++ b/drivers/iio/accel/kxcjk-1013.c @@ -1589,11 +1589,14 @@ static int kxcjk1013_probe(struct i2c_cl ret = iio_device_register(indio_dev); if (ret < 0) { dev_err(&client->dev, "unable to register iio device\n"); - goto err_buffer_cleanup; + goto err_pm_cleanup; }
return 0;
+err_pm_cleanup: + pm_runtime_dont_use_autosuspend(&client->dev); + pm_runtime_disable(&client->dev); err_buffer_cleanup: iio_triggered_buffer_cleanup(indio_dev); err_trigger_unregister: --- a/drivers/iio/accel/mma9551.c +++ b/drivers/iio/accel/mma9551.c @@ -495,11 +495,14 @@ static int mma9551_probe(struct i2c_clie ret = iio_device_register(indio_dev); if (ret < 0) { dev_err(&client->dev, "unable to register iio device\n"); - goto out_poweroff; + goto err_pm_cleanup; }
return 0;
+err_pm_cleanup: + pm_runtime_dont_use_autosuspend(&client->dev); + pm_runtime_disable(&client->dev); out_poweroff: mma9551_set_device_state(client, false);
--- a/drivers/iio/accel/mma9553.c +++ b/drivers/iio/accel/mma9553.c @@ -1134,12 +1134,15 @@ static int mma9553_probe(struct i2c_clie ret = iio_device_register(indio_dev); if (ret < 0) { dev_err(&client->dev, "unable to register iio device\n"); - goto out_poweroff; + goto err_pm_cleanup; }
dev_dbg(&indio_dev->dev, "Registered device %s\n", name); return 0;
+err_pm_cleanup: + pm_runtime_dont_use_autosuspend(&client->dev); + pm_runtime_disable(&client->dev); out_poweroff: mma9551_set_device_state(client, false); return ret; --- a/drivers/iio/gyro/bmg160_core.c +++ b/drivers/iio/gyro/bmg160_core.c @@ -1188,11 +1188,14 @@ int bmg160_core_probe(struct device *dev ret = iio_device_register(indio_dev); if (ret < 0) { dev_err(dev, "unable to register iio device\n"); - goto err_buffer_cleanup; + goto err_pm_cleanup; }
return 0;
+err_pm_cleanup: + pm_runtime_dont_use_autosuspend(dev); + pm_runtime_disable(dev); err_buffer_cleanup: iio_triggered_buffer_cleanup(indio_dev); err_trigger_unregister: --- a/drivers/iio/imu/kmx61.c +++ b/drivers/iio/imu/kmx61.c @@ -1385,7 +1385,7 @@ static int kmx61_probe(struct i2c_client ret = iio_device_register(data->acc_indio_dev); if (ret < 0) { dev_err(&client->dev, "Failed to register acc iio device\n"); - goto err_buffer_cleanup_mag; + goto err_pm_cleanup; }
ret = iio_device_register(data->mag_indio_dev); @@ -1398,6 +1398,9 @@ static int kmx61_probe(struct i2c_client
err_iio_unregister_acc: iio_device_unregister(data->acc_indio_dev); +err_pm_cleanup: + pm_runtime_dont_use_autosuspend(&client->dev); + pm_runtime_disable(&client->dev); err_buffer_cleanup_mag: if (client->irq > 0) iio_triggered_buffer_cleanup(data->mag_indio_dev); --- a/drivers/iio/magnetometer/bmc150_magn.c +++ b/drivers/iio/magnetometer/bmc150_magn.c @@ -962,13 +962,14 @@ int bmc150_magn_probe(struct device *dev ret = iio_device_register(indio_dev); if (ret < 0) { dev_err(dev, "unable to register iio device\n"); - goto err_disable_runtime_pm; + goto err_pm_cleanup; }
dev_dbg(dev, "Registered device %s\n", name); return 0;
-err_disable_runtime_pm: +err_pm_cleanup: + pm_runtime_dont_use_autosuspend(dev); pm_runtime_disable(dev); err_buffer_cleanup: iio_triggered_buffer_cleanup(indio_dev);
From: Phil Elwell phil@raspberrypi.com
commit eebb0f4e894f1e9577a56b337693d1051dd6ebfd upstream.
UART drivers are meant to use the port spinlock within certain methods, to protect against reentrancy. The sc16is7xx driver does very little locking, presumably because adding it triggers "scheduling while atomic" errors. This is due to the use of mutexes within the regmap abstraction layer, and the mutex implementation's habit of sleeping the current thread while waiting for access. Unfortunately this lack of interlocking can lead to corruption of outbound data, which occurs when the buffer used for I2C transmission is used simultaneously by two threads - a work queue thread running sc16is7xx_tx_proc, and an IRQ thread in sc16is7xx_port_irq, both of which can call sc16is7xx_handle_tx.
An earlier patch added efr_lock, a mutex that controls access to the EFR register. This mutex is already claimed in the IRQ handler, and all that is required is to claim the same mutex in sc16is7xx_tx_proc.
See: https://github.com/raspberrypi/linux/issues/4885
Fixes: 6393ff1c4435 ("sc16is7xx: Use threaded IRQ") Cc: stable stable@vger.kernel.org Signed-off-by: Phil Elwell phil@raspberrypi.com Link: https://lore.kernel.org/r/20220216160802.1026013-1-phil@raspberrypi.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/serial/sc16is7xx.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/drivers/tty/serial/sc16is7xx.c +++ b/drivers/tty/serial/sc16is7xx.c @@ -734,12 +734,15 @@ static irqreturn_t sc16is7xx_irq(int irq static void sc16is7xx_tx_proc(struct kthread_work *ws) { struct uart_port *port = &(to_sc16is7xx_one(ws, tx_work)->port); + struct sc16is7xx_port *s = dev_get_drvdata(port->dev);
if ((port->rs485.flags & SER_RS485_ENABLED) && (port->rs485.delay_rts_before_send > 0)) msleep(port->rs485.delay_rts_before_send);
+ mutex_lock(&s->efr_lock); sc16is7xx_handle_tx(port); + mutex_unlock(&s->efr_lock); }
static void sc16is7xx_reconf_rs485(struct uart_port *port)
From: Sergey Shtylyov s.shtylyov@omp.ru
commit 8d093e02e898b24c58788b0289e3202317a96d2a upstream.
The HPT371 chip physically has only one channel, the secondary one; however, the primary channel registers do exist! Thus we have to manually disable the non-existent channel if the BIOS hasn't done this already. Similarly to the pata_hpt3x2n driver, always disable the primary channel.
Fixes: 669a5db411d8 ("[libata] Add a bunch of PATA drivers.") Cc: stable@vger.kernel.org Signed-off-by: Sergey Shtylyov s.shtylyov@omp.ru Signed-off-by: Damien Le Moal damien.lemoal@opensource.wdc.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/ata/pata_hpt37x.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+)
--- a/drivers/ata/pata_hpt37x.c +++ b/drivers/ata/pata_hpt37x.c @@ -920,6 +920,20 @@ static int hpt37x_init_one(struct pci_de pci_write_config_byte(dev, 0x5a, irqmask);
/* + * HPT371 chips physically have only one channel, the secondary one, + * but the primary channel registers do exist! Go figure... + * So, we manually disable the non-existing channel here + * (if the BIOS hasn't done this already). + */ + if (dev->device == PCI_DEVICE_ID_TTI_HPT371) { + u8 mcr1; + + pci_read_config_byte(dev, 0x50, &mcr1); + mcr1 &= ~0x04; + pci_write_config_byte(dev, 0x50, mcr1); + } + + /* * default to pci clock. make sure MA15/16 are set to output * to prevent drives having problems with 40-pin cables. Needed * for some drives such as IBM-DTLA which will not enter ready
From: Dmytro Bagrii dimich.dmb@gmail.com
commit 198a7ebd5fa17b4d0be8cb70240ee1be885175c0 upstream.
This reverts commit 46ee4abb10a07bd8f8ce910ee6b4ae6a947d7f63.
CH341 has Product ID 0x5512 in EPP/MEM mode which is used for I2C/SPI/GPIO interfaces. In asynchronous serial interface mode CH341 has PID 0x5523 which is already in the table.
Mode is selected by corresponding jumper setting.
Signed-off-by: Dmytro Bagrii dimich.dmb@gmail.com Link: https://lore.kernel.org/r/20220210164137.4376-1-dimich.dmb@gmail.com Link: https://lore.kernel.org/r/YJ0OCS/sh+1ifD/q@hovoldconsulting.com Cc: stable@vger.kernel.org Signed-off-by: Johan Hovold johan@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/serial/ch341.c | 1 - 1 file changed, 1 deletion(-)
--- a/drivers/usb/serial/ch341.c +++ b/drivers/usb/serial/ch341.c @@ -81,7 +81,6 @@ #define CH341_QUIRK_SIMULATE_BREAK BIT(1)
static const struct usb_device_id id_table[] = { - { USB_DEVICE(0x1a86, 0x5512) }, { USB_DEVICE(0x1a86, 0x5523) }, { USB_DEVICE(0x1a86, 0x7522) }, { USB_DEVICE(0x1a86, 0x7523) },
From: Daehwan Jung dh10.jung@samsung.com
commit aaaba1c86d04dac8e49bf508b492f81506257da3 upstream.
There's no lock for the rndis response list. This can cause list corruption if two list_add calls race, as in the trace below. It's better to add locking in rndis_add_response / rndis_free_response / rndis_get_next_response to prevent any race condition on the response list.
[ 361.894299] [1: irq/191-dwc3:16979] list_add corruption. next->prev should be prev (ffffff80651764d0), but was ffffff883dc36f80. (next=ffffff80651764d0).
[ 361.904380] [1: irq/191-dwc3:16979] Call trace:
[ 361.904391] [1: irq/191-dwc3:16979] __list_add_valid+0x74/0x90
[ 361.904401] [1: irq/191-dwc3:16979] rndis_msg_parser+0x168/0x8c0
[ 361.904409] [1: irq/191-dwc3:16979] rndis_command_complete+0x24/0x84
[ 361.904417] [1: irq/191-dwc3:16979] usb_gadget_giveback_request+0x20/0xe4
[ 361.904426] [1: irq/191-dwc3:16979] dwc3_gadget_giveback+0x44/0x60
[ 361.904434] [1: irq/191-dwc3:16979] dwc3_ep0_complete_data+0x1e8/0x3a0
[ 361.904442] [1: irq/191-dwc3:16979] dwc3_ep0_interrupt+0x29c/0x3dc
[ 361.904450] [1: irq/191-dwc3:16979] dwc3_process_event_entry+0x78/0x6cc
[ 361.904457] [1: irq/191-dwc3:16979] dwc3_process_event_buf+0xa0/0x1ec
[ 361.904465] [1: irq/191-dwc3:16979] dwc3_thread_interrupt+0x34/0x5c
Fixes: f6281af9d62e ("usb: gadget: rndis: use list_for_each_entry_safe") Cc: stable stable@kernel.org Signed-off-by: Daehwan Jung dh10.jung@samsung.com Link: https://lore.kernel.org/r/1645507768-77687-1-git-send-email-dh10.jung@samsun... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/gadget/function/rndis.c | 8 ++++++++ drivers/usb/gadget/function/rndis.h | 1 + 2 files changed, 9 insertions(+)
--- a/drivers/usb/gadget/function/rndis.c +++ b/drivers/usb/gadget/function/rndis.c @@ -922,6 +922,7 @@ struct rndis_params *rndis_register(void params->resp_avail = resp_avail; params->v = v; INIT_LIST_HEAD(¶ms->resp_queue); + spin_lock_init(¶ms->resp_lock); pr_debug("%s: configNr = %d\n", __func__, i);
return params; @@ -1015,12 +1016,14 @@ void rndis_free_response(struct rndis_pa { rndis_resp_t *r, *n;
+ spin_lock(¶ms->resp_lock); list_for_each_entry_safe(r, n, ¶ms->resp_queue, list) { if (r->buf == buf) { list_del(&r->list); kfree(r); } } + spin_unlock(¶ms->resp_lock); } EXPORT_SYMBOL_GPL(rndis_free_response);
@@ -1030,14 +1033,17 @@ u8 *rndis_get_next_response(struct rndis
if (!length) return NULL;
+ spin_lock(¶ms->resp_lock); list_for_each_entry_safe(r, n, ¶ms->resp_queue, list) { if (!r->send) { r->send = 1; *length = r->length; + spin_unlock(¶ms->resp_lock); return r->buf; } }
+ spin_unlock(¶ms->resp_lock); return NULL; } EXPORT_SYMBOL_GPL(rndis_get_next_response); @@ -1054,7 +1060,9 @@ static rndis_resp_t *rndis_add_response( r->length = length; r->send = 0;
+ spin_lock(¶ms->resp_lock); list_add_tail(&r->list, ¶ms->resp_queue); + spin_unlock(¶ms->resp_lock); return r; }
--- a/drivers/usb/gadget/function/rndis.h +++ b/drivers/usb/gadget/function/rndis.h @@ -174,6 +174,7 @@ typedef struct rndis_params { void (*resp_avail)(void *v); void *v; struct list_head resp_queue; + spinlock_t resp_lock; } rndis_params;
/* RNDIS Message parser and other useless functions */
From: Szymon Heidrich szymon.heidrich@gmail.com
commit 7f14c7227f342d9932f9b918893c8814f86d2a0d upstream.
Ensure that the host cannot manipulate the index to point past the end of the endpoint array.
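The fix is the usual bounds check on a host-controlled index before it is used to index the endpoint array; a minimal sketch of the pattern follows (the endpoint count and lookup helper are placeholders, not the driver's actual definitions).

/* Generic form of the fix: validate a host-supplied endpoint index
 * before using it to index the endpoint array.
 */
#include <stdio.h>
#include <stdint.h>

#define MAX_ENDPOINTS	8	/* placeholder for the controller's limit */

struct endpoint { int halted; };

static struct endpoint eps[MAX_ENDPOINTS];

static struct endpoint *lookup_ep(uint16_t wIndex)
{
	uint16_t epnum = wIndex & 0x0f;		/* USB_ENDPOINT_NUMBER_MASK */

	if (epnum >= MAX_ENDPOINTS)		/* out-of-range index: stall instead */
		return NULL;
	return &eps[epnum];
}

int main(void)
{
	printf("ep 3  -> %s\n", lookup_ep(3)  ? "ok" : "stall");
	printf("ep 12 -> %s\n", lookup_ep(12) ? "ok" : "stall");
	return 0;
}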
Signed-off-by: Szymon Heidrich szymon.heidrich@gmail.com Cc: stable stable@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/gadget/udc/udc-xilinx.c | 6 ++++++ 1 file changed, 6 insertions(+)
--- a/drivers/usb/gadget/udc/udc-xilinx.c +++ b/drivers/usb/gadget/udc/udc-xilinx.c @@ -1615,6 +1615,8 @@ static void xudc_getstatus(struct xusb_u break; case USB_RECIP_ENDPOINT: epnum = udc->setup.wIndex & USB_ENDPOINT_NUMBER_MASK; + if (epnum >= XUSB_MAX_ENDPOINTS) + goto stall; target_ep = &udc->ep[epnum]; epcfgreg = udc->read_fn(udc->addr + target_ep->offset); halt = epcfgreg & XUSB_EP_CFG_STALL_MASK; @@ -1682,6 +1684,10 @@ static void xudc_set_clear_feature(struc case USB_RECIP_ENDPOINT: if (!udc->setup.wValue) { endpoint = udc->setup.wIndex & USB_ENDPOINT_NUMBER_MASK; + if (endpoint >= XUSB_MAX_ENDPOINTS) { + xudc_ep0_stall(udc); + return; + } target_ep = &udc->ep[endpoint]; outinbit = udc->setup.wIndex & USB_ENDPOINT_DIR_MASK; outinbit = outinbit >> 7;
From: Steven Rostedt (Google) rostedt@goodmis.org
commit 851e99ebeec3f4a672bb5010cf1ece095acee447 upstream.
Al Viro brought it to my attention that the dentries may not be filled when the parse_options() is called, causing the call to set_gid() to possibly crash. It should only be called if parse_options() succeeds totally anyway.
He suggested the logical place to do the update is in apply_options().
Link: https://lore.kernel.org/all/20220225165219.737025658@goodmis.org/ Link: https://lkml.kernel.org/r/20220225153426.1c4cab6b@gandalf.local.home
Cc: stable@vger.kernel.org Acked-by: Al Viro viro@zeniv.linux.org.uk Reported-by: Al Viro viro@zeniv.linux.org.uk Fixes: 48b27b6b5191 ("tracefs: Set all files to the same group ownership as the mount option") Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/tracefs/inode.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/fs/tracefs/inode.c +++ b/fs/tracefs/inode.c @@ -264,7 +264,6 @@ static int tracefs_parse_options(char *d if (!gid_valid(gid)) return -EINVAL; opts->gid = gid; - set_gid(tracefs_mount->mnt_root, gid); break; case Opt_mode: if (match_octal(&args[0], &option)) @@ -291,7 +290,9 @@ static int tracefs_apply_options(struct inode->i_mode |= opts->mode;
inode->i_uid = opts->uid; - inode->i_gid = opts->gid; + + /* Set all the group ids to the mount option */ + set_gid(sb->s_root, opts->gid);
return 0; }
From: Slark Xiao slark_xiao@163.com
commit 6ecb3f0b18b320320460a42e40d6fb603f6ded96 upstream.
The Dell DW5829e is the same as the DW5821e except for the CAT level: the DW5821e supports CAT16 while the DW5829e supports CAT9. There are two product types of the DW5829e, normal and eSIM, so we add two PIDs for the DW5829e. Each PID supports MBIM or RMNET mode. The test evidence is shown below:
DW5829e MBIM mode:
T: Bus=04 Lev=01 Prnt=01 Port=01 Cnt=01 Dev#= 4 Spd=5000 MxCh= 0
D: Ver= 3.10 Cls=ef(misc ) Sub=02 Prot=01 MxPS= 9 #Cfgs= 2
P: Vendor=413c ProdID=81e6 Rev=03.18
S: Manufacturer=Dell Inc.
S: Product=DW5829e Snapdragon X20 LTE
S: SerialNumber=0123456789ABCDEF
C: #Ifs= 7 Cfg#= 2 Atr=a0 MxPwr=896mA
I: If#=0x0 Alt= 0 #EPs= 1 Cls=02(commc) Sub=0e Prot=00 Driver=cdc_mbim
I: If#=0x1 Alt= 1 #EPs= 2 Cls=0a(data ) Sub=00 Prot=02 Driver=cdc_mbim
I: If#=0x2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I: If#=0x3 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I: If#=0x4 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I: If#=0x5 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option
I: If#=0x6 Alt= 0 #EPs= 1 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none)

DW5829e RMNET mode:
T: Bus=04 Lev=01 Prnt=01 Port=01 Cnt=01 Dev#= 5 Spd=5000 MxCh= 0
D: Ver= 3.10 Cls=ef(misc ) Sub=02 Prot=01 MxPS= 9 #Cfgs= 1
P: Vendor=413c ProdID=81e6 Rev=03.18
S: Manufacturer=Dell Inc.
S: Product=DW5829e Snapdragon X20 LTE
S: SerialNumber=0123456789ABCDEF
C: #Ifs= 6 Cfg#= 1 Atr=a0 MxPwr=896mA
I: If#=0x0 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=qmi_wwan
I: If#=0x1 Alt= 0 #EPs= 1 Cls=03(HID ) Sub=00 Prot=00 Driver=usbhid
I: If#=0x2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I: If#=0x3 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I: If#=0x4 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I: If#=0x5 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option

DW5829e-eSIM MBIM mode:
T: Bus=04 Lev=01 Prnt=01 Port=01 Cnt=01 Dev#= 6 Spd=5000 MxCh= 0
D: Ver= 3.10 Cls=ef(misc ) Sub=02 Prot=01 MxPS= 9 #Cfgs= 2
P: Vendor=413c ProdID=81e4 Rev=03.18
S: Manufacturer=Dell Inc.
S: Product=DW5829e-eSIM Snapdragon X20 LTE
S: SerialNumber=0123456789ABCDEF
C: #Ifs= 7 Cfg#= 2 Atr=a0 MxPwr=896mA
I: If#=0x0 Alt= 0 #EPs= 1 Cls=02(commc) Sub=0e Prot=00 Driver=cdc_mbim
I: If#=0x1 Alt= 1 #EPs= 2 Cls=0a(data ) Sub=00 Prot=02 Driver=cdc_mbim
I: If#=0x2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I: If#=0x3 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I: If#=0x4 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I: If#=0x5 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option
I: If#=0x6 Alt= 0 #EPs= 1 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none)

DW5829e-eSIM RMNET mode:
T: Bus=04 Lev=01 Prnt=01 Port=01 Cnt=01 Dev#= 7 Spd=5000 MxCh= 0
D: Ver= 3.10 Cls=ef(misc ) Sub=02 Prot=01 MxPS= 9 #Cfgs= 1
P: Vendor=413c ProdID=81e4 Rev=03.18
S: Manufacturer=Dell Inc.
S: Product=DW5829e-eSIM Snapdragon X20 LTE
S: SerialNumber=0123456789ABCDEF
C: #Ifs= 6 Cfg#= 1 Atr=a0 MxPwr=896mA
I: If#=0x0 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=qmi_wwan
I: If#=0x1 Alt= 0 #EPs= 1 Cls=03(HID ) Sub=00 Prot=00 Driver=usbhid
I: If#=0x2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I: If#=0x3 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I: If#=0x4 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I: If#=0x5 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option
By the way, interface 0x6 in MBIM mode is a GNSS port, which is not the same as the NMEA port, so it is excluded from the option serial driver. The remaining interfaces 0x2-0x5 are: MODEM, MODEM, NMEA, DIAG.
Signed-off-by: Slark Xiao slark_xiao@163.com Link: https://lore.kernel.org/r/20220214021401.6264-1-slark_xiao@163.com [ johan: drop unnecessary reservation of interface 1 ] Cc: stable@vger.kernel.org Signed-off-by: Johan Hovold johan@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/serial/option.c | 6 ++++++ 1 file changed, 6 insertions(+)
--- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -198,6 +198,8 @@ static void option_instat_callback(struc
#define DELL_PRODUCT_5821E 0x81d7 #define DELL_PRODUCT_5821E_ESIM 0x81e0 +#define DELL_PRODUCT_5829E_ESIM 0x81e4 +#define DELL_PRODUCT_5829E 0x81e6
#define KYOCERA_VENDOR_ID 0x0c88 #define KYOCERA_PRODUCT_KPC650 0x17da @@ -1063,6 +1065,10 @@ static const struct usb_device_id option .driver_info = RSVD(0) | RSVD(1) | RSVD(6) }, { USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5821E_ESIM), .driver_info = RSVD(0) | RSVD(1) | RSVD(6) }, + { USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5829E), + .driver_info = RSVD(0) | RSVD(6) }, + { USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5829E_ESIM), + .driver_info = RSVD(0) | RSVD(6) }, { USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_E100A) }, /* ADU-E100, ADU-310 */ { USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_500A) }, { USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_620UW) },
From: Daniele Palmas dnlplm@gmail.com
commit cfc4442c642d568014474b6718ccf65dc7ca6099 upstream.
Add support for the following Telit LE910R1 compositions:
0x701a: rndis, tty, tty, tty
0x701b: ecm, tty, tty, tty
0x9201: tty
Signed-off-by: Daniele Palmas dnlplm@gmail.com Link: https://lore.kernel.org/r/20220218134552.4051-1-dnlplm@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Johan Hovold johan@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/serial/option.c | 6 ++++++ 1 file changed, 6 insertions(+)
--- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -1279,10 +1279,16 @@ static const struct usb_device_id option .driver_info = NCTRL(2) }, { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x7011, 0xff), /* Telit LE910-S1 (ECM) */ .driver_info = NCTRL(2) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x701a, 0xff), /* Telit LE910R1 (RNDIS) */ + .driver_info = NCTRL(2) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x701b, 0xff), /* Telit LE910R1 (ECM) */ + .driver_info = NCTRL(2) }, { USB_DEVICE(TELIT_VENDOR_ID, 0x9010), /* Telit SBL FN980 flashing device */ .driver_info = NCTRL(0) | ZLP }, { USB_DEVICE(TELIT_VENDOR_ID, 0x9200), /* Telit LE910S1 flashing device */ .driver_info = NCTRL(0) | ZLP }, + { USB_DEVICE(TELIT_VENDOR_ID, 0x9201), /* Telit LE910R1 flashing device */ + .driver_info = NCTRL(0) | ZLP }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0002, 0xff, 0xff, 0xff), .driver_info = RSVD(1) },
From: Fabrice Gasnier fabrice.gasnier@foss.st.com
commit 32fde84362c40961726a5c91f35ad37355ccc0c6 upstream.
When the gadget driver hasn't been configured yet and the cable is connected to a host, the SFTDISCON bit gets cleared unconditionally, so the host tries to enumerate it. At the host side, this can result in a stuck USB port or worse. When getting lucky, some dmesg can be observed at the host side:

new high-speed USB device number ...
device descriptor read/64, error -110
Fix it in drd, by checking the enabled flag before calling dwc2_hsotg_core_connect(). It will be called later, once configured, by the normal flow: - udc_bind_to_driver - usb_gadget_connect - dwc2_hsotg_pullup - dwc2_hsotg_core_connect
Fixes: 17f934024e84 ("usb: dwc2: override PHY input signals with usb role switch support") Cc: stable stable@vger.kernel.org Signed-off-by: Fabrice Gasnier fabrice.gasnier@foss.st.com Link: https://lore.kernel.org/r/1644999135-13478-1-git-send-email-fabrice.gasnier@... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/dwc2/core.h | 2 ++ drivers/usb/dwc2/drd.c | 6 ++++-- 2 files changed, 6 insertions(+), 2 deletions(-)
--- a/drivers/usb/dwc2/core.h +++ b/drivers/usb/dwc2/core.h @@ -1416,6 +1416,7 @@ void dwc2_hsotg_core_connect(struct dwc2 void dwc2_hsotg_disconnect(struct dwc2_hsotg *dwc2); int dwc2_hsotg_set_test_mode(struct dwc2_hsotg *hsotg, int testmode); #define dwc2_is_device_connected(hsotg) (hsotg->connected) +#define dwc2_is_device_enabled(hsotg) (hsotg->enabled) int dwc2_backup_device_registers(struct dwc2_hsotg *hsotg); int dwc2_restore_device_registers(struct dwc2_hsotg *hsotg, int remote_wakeup); int dwc2_gadget_enter_hibernation(struct dwc2_hsotg *hsotg); @@ -1452,6 +1453,7 @@ static inline int dwc2_hsotg_set_test_mo int testmode) { return 0; } #define dwc2_is_device_connected(hsotg) (0) +#define dwc2_is_device_enabled(hsotg) (0) static inline int dwc2_backup_device_registers(struct dwc2_hsotg *hsotg) { return 0; } static inline int dwc2_restore_device_registers(struct dwc2_hsotg *hsotg, --- a/drivers/usb/dwc2/drd.c +++ b/drivers/usb/dwc2/drd.c @@ -109,8 +109,10 @@ static int dwc2_drd_role_sw_set(struct u already = dwc2_ovr_avalid(hsotg, true); } else if (role == USB_ROLE_DEVICE) { already = dwc2_ovr_bvalid(hsotg, true); - /* This clear DCTL.SFTDISCON bit */ - dwc2_hsotg_core_connect(hsotg); + if (dwc2_is_device_enabled(hsotg)) { + /* This clear DCTL.SFTDISCON bit */ + dwc2_hsotg_core_connect(hsotg); + } } else { if (dwc2_is_device_mode(hsotg)) { if (!dwc2_ovr_bvalid(hsotg, false))
From: Hans de Goede hdegoede@redhat.com
commit d7c93a903f33ff35aa0e6b5a8032eb9755b00826 upstream.
Commit e0082698b689 ("usb: dwc3: ulpi: conditionally resume ULPI PHY") fixed an issue where ULPI transfers would time out if any requests were sent to the phy some time after init, giving it enough time to auto-suspend.
Commit e5f4ca3fce90 ("usb: dwc3: ulpi: Fix USB2.0 HS/FS/LS PHY suspend regression") changed the behavior to instead of clearing the DWC3_GUSB2PHYCFG_SUSPHY bit, add an extra sleep when it is set.
But on Bay Trail devices, when phy_set_mode() gets called during init, this leads to errors like these:

[ 28.451522] tusb1210 dwc3.ulpi: error -110 writing val 0x01 to reg 0x0a
[ 28.464089] tusb1210 dwc3.ulpi: error -110 writing val 0x01 to reg 0x0a
Add "snps,dis_u2_susphy_quirk" to the settings for Bay Trail devices to fix this. This restores the old behavior for Bay Trail devices, since previously the DWC3_GUSB2PHYCFG_SUSPHY bit would get cleared on the first ulpi_read/_write() and then was never set again.
Fixes: e5f4ca3fce90 ("usb: dwc3: ulpi: Fix USB2.0 HS/FS/LS PHY suspend regression") Cc: stable@kernel.org Cc: Serge Semin Sergey.Semin@baikalelectronics.ru Signed-off-by: Hans de Goede hdegoede@redhat.com Link: https://lore.kernel.org/r/20220213130524.18748-2-hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/dwc3/dwc3-pci.c | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-)
--- a/drivers/usb/dwc3/dwc3-pci.c +++ b/drivers/usb/dwc3/dwc3-pci.c @@ -119,6 +119,13 @@ static const struct property_entry dwc3_ {} };
+static const struct property_entry dwc3_pci_intel_byt_properties[] = { + PROPERTY_ENTRY_STRING("dr_mode", "peripheral"), + PROPERTY_ENTRY_BOOL("snps,dis_u2_susphy_quirk"), + PROPERTY_ENTRY_BOOL("linux,sysdev_is_parent"), + {} +}; + static const struct property_entry dwc3_pci_mrfld_properties[] = { PROPERTY_ENTRY_STRING("dr_mode", "otg"), PROPERTY_ENTRY_STRING("linux,extcon-name", "mrfld_bcove_pwrsrc"), @@ -161,6 +168,10 @@ static const struct software_node dwc3_p .properties = dwc3_pci_intel_properties, };
+static const struct software_node dwc3_pci_intel_byt_swnode = { + .properties = dwc3_pci_intel_byt_properties, +}; + static const struct software_node dwc3_pci_intel_mrfld_swnode = { .properties = dwc3_pci_mrfld_properties, }; @@ -344,7 +355,7 @@ static const struct pci_device_id dwc3_p (kernel_ulong_t) &dwc3_pci_intel_swnode, },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BYT), - (kernel_ulong_t) &dwc3_pci_intel_swnode, }, + (kernel_ulong_t) &dwc3_pci_intel_byt_swnode, },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MRFLD), (kernel_ulong_t) &dwc3_pci_intel_mrfld_swnode, },
From: Hans de Goede hdegoede@redhat.com
commit 62e3f0afe246720f7646eb1b034a6897dac34405 upstream.
When the Bay Trail phy GPIO mappings were added, cs and reset were swapped. This did not cause any issues so far, because so far they were always driven high/low at the same time.
Note the new mapping has been verified both in /sys/kernel/debug/gpio output on Android factory images on multiple devices, as well as in the schematics for some devices.
Fixes: 5741022cbdf3 ("usb: dwc3: pci: Add GPIO lookup table on platforms without ACPI GPIO resources") Cc: stable stable@vger.kernel.org Signed-off-by: Hans de Goede hdegoede@redhat.com Link: https://lore.kernel.org/r/20220213130524.18748-3-hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/dwc3/dwc3-pci.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/usb/dwc3/dwc3-pci.c +++ b/drivers/usb/dwc3/dwc3-pci.c @@ -85,8 +85,8 @@ static const struct acpi_gpio_mapping ac static struct gpiod_lookup_table platform_bytcr_gpios = { .dev_id = "0000:00:16.0", .table = { - GPIO_LOOKUP("INT33FC:00", 54, "reset", GPIO_ACTIVE_HIGH), - GPIO_LOOKUP("INT33FC:02", 14, "cs", GPIO_ACTIVE_HIGH), + GPIO_LOOKUP("INT33FC:00", 54, "cs", GPIO_ACTIVE_HIGH), + GPIO_LOOKUP("INT33FC:02", 14, "reset", GPIO_ACTIVE_HIGH), {} }, };
From: Sebastian Andrzej Siewior bigeasy@linutronix.de
commit 84918a89d6efaff075de570b55642b6f4ceeac6d upstream.
The interrupt service routine registered for the gadget is a primary handler which masks the interrupt source and a threaded handler which handles the source of the interrupt. Since the threaded handler is voluntarily threaded, the IRQ core does not disable bottom halves before invoking the handler, as it does for force-threaded handlers.
Due to changes in networking it became visible that a network gadget's completion handler may schedule a softirq which then remains unprocessed. The gadget's completion handler is usually invoked either in hard-IRQ or soft-IRQ context. In those contexts it is enough to just raise the softirq, because the softirq itself will be handled once that context is left. In the case of the voluntarily threaded handler, there is nothing that will process pending softirqs, which means they remain queued until another random interrupt (on this CPU) fires and handles them on its exit path, or until another thread locks and unlocks a lock with the bh suffix. In the worst case, the CPU goes idle and NOHZ complains about unhandled softirqs.
Disable bottom halves before acquiring the lock (and disabling interrupts) and enable them after dropping the lock. This ensures that any pending softirqs are handled right away.
Link: https://lkml.kernel.org/r/c2a64979-73d1-2c22-e048-c275c9f81558@samsung.com Fixes: e5f68b4a3e7b0 ("Revert "usb: dwc3: gadget: remove unnecessary _irqsave()"") Cc: stable stable@kernel.org Reported-by: Marek Szyprowski m.szyprowski@samsung.com Tested-by: Marek Szyprowski m.szyprowski@samsung.com Signed-off-by: Sebastian Andrzej Siewior bigeasy@linutronix.de Link: https://lore.kernel.org/r/Yg/YPejVQH3KkRVd@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/dwc3/gadget.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/usb/dwc3/gadget.c +++ b/drivers/usb/dwc3/gadget.c @@ -4131,9 +4131,11 @@ static irqreturn_t dwc3_thread_interrupt unsigned long flags; irqreturn_t ret = IRQ_NONE;
+ local_bh_disable(); spin_lock_irqsave(&dwc->lock, flags); ret = dwc3_process_event_buf(evt); spin_unlock_irqrestore(&dwc->lock, flags); + local_bh_enable();
return ret; }
From: Puma Hsu pumahsu@google.com
commit 8b328f8002bcf29ef517ee4bf234e09aabec4d2e upstream.
When HCE (Host Controller Error) is set, an internal error condition has been detected. Software needs to re-initialize the HC, so add this check to xhci resume.
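The added check is a plain status-register bit test; the sketch below condenses the resume decision (the STS_* bit positions here are placeholders standing in for the real xHCI register definitions).

/* Sketch of the resume decision added above: re-init the host controller
 * when either Save/Restore Error or Host Controller Error is reported in
 * USBSTS, or when one of the existing reset conditions applies.
 */
#include <stdbool.h>
#include <stdio.h>

#define STS_SRE	(1u << 10)	/* placeholder: Save/Restore Error */
#define STS_HCE	(1u << 12)	/* placeholder: Host Controller Error */

static bool need_reinit(unsigned int usbsts, bool hibernated,
			bool reset_on_resume_quirk, bool broken_suspend)
{
	bool reinit = hibernated || reset_on_resume_quirk || broken_suspend;

	if (usbsts & (STS_SRE | STS_HCE))	/* error reported by the HC */
		reinit = true;
	return reinit;
}

int main(void)
{
	printf("clean resume:      %d\n", need_reinit(0, false, false, false));
	printf("HCE set:           %d\n", need_reinit(STS_HCE, false, false, false));
	printf("resume from hib.:  %d\n", need_reinit(0, true, false, false));
	return 0;
}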
Cc: stable@vger.kernel.org Signed-off-by: Puma Hsu pumahsu@google.com Signed-off-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20220215123320.1253947-2-mathias.nyman@linux.intel... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/host/xhci.c | 19 +++++++++++++------ 1 file changed, 13 insertions(+), 6 deletions(-)
--- a/drivers/usb/host/xhci.c +++ b/drivers/usb/host/xhci.c @@ -1091,6 +1091,7 @@ int xhci_resume(struct xhci_hcd *xhci, b int retval = 0; bool comp_timer_running = false; bool pending_portevent = false; + bool reinit_xhc = false;
if (!hcd->state) return 0; @@ -1107,10 +1108,11 @@ int xhci_resume(struct xhci_hcd *xhci, b set_bit(HCD_FLAG_HW_ACCESSIBLE, &xhci->shared_hcd->flags);
spin_lock_irq(&xhci->lock); - if ((xhci->quirks & XHCI_RESET_ON_RESUME) || xhci->broken_suspend) - hibernated = true;
- if (!hibernated) { + if (hibernated || xhci->quirks & XHCI_RESET_ON_RESUME || xhci->broken_suspend) + reinit_xhc = true; + + if (!reinit_xhc) { /* * Some controllers might lose power during suspend, so wait * for controller not ready bit to clear, just as in xHC init. @@ -1143,12 +1145,17 @@ int xhci_resume(struct xhci_hcd *xhci, b spin_unlock_irq(&xhci->lock); return -ETIMEDOUT; } - temp = readl(&xhci->op_regs->status); }
- /* If restore operation fails, re-initialize the HC during resume */ - if ((temp & STS_SRE) || hibernated) { + temp = readl(&xhci->op_regs->status); + + /* re-initialize the HC on Restore Error, or Host Controller Error */ + if (temp & (STS_SRE | STS_HCE)) { + reinit_xhc = true; + xhci_warn(xhci, "xHC error in resume, USBSTS 0x%x, Reinit\n", temp); + }
+ if (reinit_xhc) { if ((xhci->quirks & XHCI_COMP_MODE_QUIRK) && !(xhci_all_ports_seen_u0(xhci))) { del_timer_sync(&xhci->comp_mode_recovery_timer);
From: Hongyu Xie xiehongyu1@kylinos.cn
commit 243a1dd7ba48c120986dd9e66fee74bcb7751034 upstream.
The -ENODEV return value from xhci_check_args() is incorrectly changed to -EINVAL in a couple of places before being propagated further.
xhci_check_args() returns four kinds of values: -ENODEV, -EINVAL, 1 and 0. xhci_urb_enqueue() and xhci_check_streams_endpoint() return -EINVAL if the return value of xhci_check_args() is <= 0. This causes problems, for example for r8152_submit_rx(), which calls usb_submit_urb() in drivers/net/usb/r8152.c: r8152_submit_rx() will never get -ENODEV after submitting an urb while the xHC is halted, because xhci_urb_enqueue() returns -EINVAL at the very beginning.
[commit message and header edit -Mathias]
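The fix boils down to preserving a negative return code instead of flattening everything to -EINVAL; the sketch below contrasts the old and new propagation using error numbers only, with a stand-in for xhci_check_args().

/* Sketch of the return-value propagation fixed above: a negative code
 * from the argument check (e.g. -ENODEV when the host is gone) must be
 * passed through, and only the "0" case becomes -EINVAL.
 */
#include <stdio.h>
#include <errno.h>

/* Stand-in for xhci_check_args(): <0 error, 0 "invalid", 1 ok. */
static int check_args(int host_died)
{
	return host_died ? -ENODEV : 1;
}

static int enqueue_old(int host_died)
{
	if (check_args(host_died) <= 0)
		return -EINVAL;			/* old: -ENODEV is lost */
	return 0;
}

static int enqueue_new(int host_died)
{
	int ret = check_args(host_died);

	if (ret <= 0)
		return ret ? ret : -EINVAL;	/* new: -ENODEV propagates */
	return 0;
}

int main(void)
{
	printf("old behaviour, host died: %d\n", enqueue_old(1));	/* -EINVAL */
	printf("new behaviour, host died: %d\n", enqueue_new(1));	/* -ENODEV */
	return 0;
}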
Fixes: 203a86613fb3 ("xhci: Avoid NULL pointer deref when host dies.") Cc: stable@vger.kernel.org Signed-off-by: Hongyu Xie xiehongyu1@kylinos.cn Signed-off-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20220215123320.1253947-3-mathias.nyman@linux.intel... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/host/xhci.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-)
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -1611,9 +1611,12 @@ static int xhci_urb_enqueue(struct usb_h
 	struct urb_priv	*urb_priv;
 	int num_tds;

-	if (!urb || xhci_check_args(hcd, urb->dev, urb->ep,
-					true, true, __func__) <= 0)
+	if (!urb)
 		return -EINVAL;
+	ret = xhci_check_args(hcd, urb->dev, urb->ep,
+			      true, true, __func__);
+	if (ret <= 0)
+		return ret ? ret : -EINVAL;

 	slot_id = urb->dev->slot_id;
 	ep_index = xhci_get_endpoint_index(&urb->ep->desc);
@@ -3330,7 +3333,7 @@ static int xhci_check_streams_endpoint(s
 		return -EINVAL;
 	ret = xhci_check_args(xhci_to_hcd(xhci), udev, ep, 1, true, __func__);
 	if (ret <= 0)
-		return -EINVAL;
+		return ret ? ret : -EINVAL;
 	if (usb_ss_max_streams(&ep->ss_ep_comp) == 0) {
 		xhci_warn(xhci, "WARN: SuperSpeed Endpoint Companion"
 				" descriptor for ep 0x%x does not support streams\n",
From: Christophe Kerello christophe.kerello@foss.st.com
commit f6c052afe6f802d87c74153b7a57c43b2e9faf07 upstream.
The wp-gpios property can be used on NVMEM nodes and the same property can also be used on MTD NAND nodes. In case the wp-gpios property is defined at the NAND node level, the GPIO management is done at the NAND driver level. Write protect is disabled when the driver is probed or resumed and is enabled when the driver is released or suspended.

When no partitions are defined in the NAND DT node, the NAND DT node will be passed to the NVMEM framework. If the wp-gpios property is defined in this node, the GPIO resource is taken twice and the NAND controller driver fails to probe.

It would be possible to set config->wp_gpio at the MTD level before calling the nvmem_register() function, but the NVMEM framework would then toggle this GPIO on each write, while this GPIO should only be controlled by the NAND driver to ensure that Write Protect has not been enabled.

A way to fix this conflict is to add a new boolean flag in nvmem_config named ignore_wp. In case ignore_wp is set, the GPIO resource will be managed by the provider.
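For an NVMEM provider, opting out is then a single flag in its nvmem_config. A minimal sketch (example_register_nvmem() and its read callback are hypothetical; only .ignore_wp is the point here):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/nvmem-provider.h>

static int example_register_nvmem(struct device *dev, int size,
				  nvmem_reg_read_t reg_read)
{
	struct nvmem_config config = {};
	struct nvmem_device *nvmem;

	config.dev = dev;
	config.size = size;
	config.read_only = true;
	config.reg_read = reg_read;
	/* The wp-gpios pin is owned by the provider, NVMEM must not claim it */
	config.ignore_wp = true;

	nvmem = nvmem_register(&config);
	return PTR_ERR_OR_ZERO(nvmem);
}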
Fixes: 2a127da461a9 ("nvmem: add support for the write-protect pin") Cc: stable@vger.kernel.org Signed-off-by: Christophe Kerello christophe.kerello@foss.st.com Signed-off-by: Srinivas Kandagatla srinivas.kandagatla@linaro.org Link: https://lore.kernel.org/r/20220220151432.16605-2-srinivas.kandagatla@linaro.... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/nvmem/core.c | 2 +- include/linux/nvmem-provider.h | 4 +++- 2 files changed, 4 insertions(+), 2 deletions(-)
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -771,7 +771,7 @@ struct nvmem_device *nvmem_register(cons

 	if (config->wp_gpio)
 		nvmem->wp_gpio = config->wp_gpio;
-	else
+	else if (!config->ignore_wp)
 		nvmem->wp_gpio = gpiod_get_optional(config->dev, "wp",
 						    GPIOD_OUT_HIGH);
 	if (IS_ERR(nvmem->wp_gpio)) {
--- a/include/linux/nvmem-provider.h
+++ b/include/linux/nvmem-provider.h
@@ -70,7 +70,8 @@ struct nvmem_keepout {
  * @word_size:	Minimum read/write access granularity.
  * @stride:	Minimum read/write access stride.
  * @priv:	User context passed to read/write callbacks.
- * @wp-gpio:	Write protect pin
+ * @wp-gpio:	Write protect pin
+ * @ignore_wp:	Write Protect pin is managed by the provider.
  *
  * Note: A default "nvmem<id>" name will be assigned to the device if
  * no name is specified in its configuration. In such case "<id>" is
@@ -92,6 +93,7 @@ struct nvmem_config {
 	enum nvmem_type		type;
 	bool			read_only;
 	bool			root_only;
+	bool			ignore_wp;
 	struct device_node	*of_node;
 	bool			no_of_node;
 	nvmem_reg_read_t	reg_read;
From: Christophe Kerello christophe.kerello@foss.st.com
commit 6c7621890995d089a56a06d11580d185ede7c2f8 upstream.
The wp-gpios property can be used on NVMEM nodes and the same property can also be used on MTD NAND nodes. In case the wp-gpios property is defined at the NAND node level, the GPIO management is done at the NAND driver level. Write protect is disabled when the driver is probed or resumed and is enabled when the driver is released or suspended.

When no partitions are defined in the NAND DT node, the NAND DT node will be passed to the NVMEM framework. If the wp-gpios property is defined in this node, the GPIO resource is taken twice and the NAND controller driver fails to probe.

A new boolean flag named ignore_wp has been added in nvmem_config. In case ignore_wp is set, it means that the GPIO is handled by the provider. Let's set this flag in the MTD layer to avoid the conflict on the wp-gpios property.
Fixes: 2a127da461a9 ("nvmem: add support for the write-protect pin") Cc: stable@vger.kernel.org Acked-by: Miquel Raynal miquel.raynal@bootlin.com Signed-off-by: Christophe Kerello christophe.kerello@foss.st.com Signed-off-by: Srinivas Kandagatla srinivas.kandagatla@linaro.org Link: https://lore.kernel.org/r/20220220151432.16605-3-srinivas.kandagatla@linaro.... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/mtd/mtdcore.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/mtd/mtdcore.c
+++ b/drivers/mtd/mtdcore.c
@@ -546,6 +546,7 @@ static int mtd_nvmem_add(struct mtd_info
 	config.stride = 1;
 	config.read_only = true;
 	config.root_only = true;
+	config.ignore_wp = true;
 	config.no_of_node = !of_device_is_compatible(node, "nvmem-cells");
 	config.priv = mtd;

@@ -830,6 +831,7 @@ static struct nvmem_device *mtd_otp_nvme
 	config.owner = THIS_MODULE;
 	config.type = NVMEM_TYPE_OTP;
 	config.root_only = true;
+	config.ignore_wp = true;
 	config.reg_read = reg_read;
 	config.size = size;
 	config.of_node = np;
From: Mårten Lindahl marten.lindahl@axis.com
commit d8f7a5484f2188e9af2d9e4e587587d724501b12 upstream.
When unbinding/binding a driver with DMA-mapped memory, the DMA map is not freed before the driver is reloaded. This leads to a memory leak when the DMA map is overwritten on reprobing the driver.
This can be reproduced with a platform driver having a dma-range:
dummy {
	...
	#address-cells = <0x2>;
	#size-cells = <0x2>;
	ranges;
	dma-ranges = <...>;
	...
};
and then unbinding/binding it:
~# echo soc:dummy >/sys/bus/platform/drivers/<driver>/unbind
DMA map object 0xffffff800b0ae540 still being held by &pdev->dev
~# echo soc:dummy >/sys/bus/platform/drivers/<driver>/bind ~# echo scan > /sys/kernel/debug/kmemleak ~# cat /sys/kernel/debug/kmemleak unreferenced object 0xffffff800b0ae540 (size 64): comm "sh", pid 833, jiffies 4295174550 (age 2535.352s) hex dump (first 32 bytes): 00 00 00 80 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00 00 00 80 00 00 00 00 00 00 00 80 00 00 00 00 ................ backtrace: [<ffffffefd1694708>] create_object.isra.0+0x108/0x344 [<ffffffefd1d1a850>] kmemleak_alloc+0x8c/0xd0 [<ffffffefd167e2d0>] __kmalloc+0x440/0x6f0 [<ffffffefd1a960a4>] of_dma_get_range+0x124/0x220 [<ffffffefd1a8ce90>] of_dma_configure_id+0x40/0x2d0 [<ffffffefd198b68c>] platform_dma_configure+0x5c/0xa4 [<ffffffefd198846c>] really_probe+0x8c/0x514 [<ffffffefd1988990>] __driver_probe_device+0x9c/0x19c [<ffffffefd1988cd8>] device_driver_attach+0x54/0xbc [<ffffffefd1986634>] bind_store+0xc4/0x120 [<ffffffefd19856e0>] drv_attr_store+0x30/0x44 [<ffffffefd173c9b0>] sysfs_kf_write+0x50/0x60 [<ffffffefd173c1c4>] kernfs_fop_write_iter+0x124/0x1b4 [<ffffffefd16a013c>] new_sync_write+0xdc/0x160 [<ffffffefd16a256c>] vfs_write+0x23c/0x2a0 [<ffffffefd16a2758>] ksys_write+0x64/0xec
To prevent this we should free the dma_range_map when the device is released.
Fixes: e0d072782c73 ("dma-mapping: introduce DMA range map, supplanting dma_pfn_offset") Cc: stable stable@vger.kernel.org Suggested-by: Rob Herring robh@kernel.org Reviewed-by: Rob Herring robh@kernel.org Signed-off-by: Mårten Lindahl marten.lindahl@axis.com Link: https://lore.kernel.org/r/20220216094128.4025861-1-marten.lindahl@axis.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/base/dd.c | 5 +++++ 1 file changed, 5 insertions(+)
--- a/drivers/base/dd.c
+++ b/drivers/base/dd.c
@@ -629,6 +629,9 @@ re_probe:
 		drv->remove(dev);

 	devres_release_all(dev);
+	arch_teardown_dma_ops(dev);
+	kfree(dev->dma_range_map);
+	dev->dma_range_map = NULL;
 	driver_sysfs_remove(dev);
 	dev->driver = NULL;
 	dev_set_drvdata(dev, NULL);
@@ -1208,6 +1211,8 @@ static void __device_release_driver(stru

 	devres_release_all(dev);
 	arch_teardown_dma_ops(dev);
+	kfree(dev->dma_range_map);
+	dev->dma_range_map = NULL;
 	dev->driver = NULL;
 	dev_set_drvdata(dev, NULL);
 	if (dev->pm_domain && dev->pm_domain->dismiss)
From: Qu Wenruo wqu@suse.com
commit 7093f15291e95f16dfb5a93307eda3272bfe1108 upstream.
[BUG] With older kernels (before v5.16), btrfs will defrag preallocated extents. With newer kernels (v5.16 and newer) btrfs will not defrag preallocated extents, but it will defrag the extent just before the preallocated extent, even if it's just a single sector.
This can be exposed by the following small script:
mkfs.btrfs -f $dev > /dev/null
mount $dev $mnt
xfs_io -f -c "pwrite 0 4k" -c sync -c "falloc 4k 16K" $mnt/file
xfs_io -c "fiemap -v" $mnt/file
btrfs fi defrag $mnt/file
sync
xfs_io -c "fiemap -v" $mnt/file
The output looks like this on older kernels:
/mnt/btrfs/file:
 EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
   0: [0..7]:          26624..26631         8   0x0
   1: [8..39]:         26632..26663        32 0x801
/mnt/btrfs/file:
 EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
   0: [0..39]:         26664..26703        40   0x1
This defrags the single sector along with the preallocated extent, and replaces them with a regular extent at a new location (caused by data COW), wasting most of the data IO just for the preallocated range.
On the other hand, v5.16 is slightly better:
/mnt/btrfs/file:
 EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
   0: [0..7]:          26624..26631         8   0x0
   1: [8..39]:         26632..26663        32 0x801
/mnt/btrfs/file:
 EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
   0: [0..7]:          26664..26671         8   0x0
   1: [8..39]:         26632..26663        32 0x801
The preallocated range is not defragged, but the sector before it still gets defragged, which is unnecessary.
[CAUSE] One of the functions reused by the old and new behavior is defrag_check_next_extent(), which determines whether we should defrag the current extent by checking the next one.

It only checks if the next extent is a hole or inlined, but it doesn't check if it's preallocated.

On the other hand, outside of that function, both old and new kernels reject preallocated extents.

This inconsistency causes the behavior above.
[FIX]
- Also check if the next extent is preallocated. If so, don't defrag the current extent.

- Add comments for each branch explaining why we reject the extent.
This will reduce the IO caused by defrag ioctl and autodefrag.
CC: stable@vger.kernel.org # 5.16 Reviewed-by: Filipe Manana fdmanana@suse.com Signed-off-by: Qu Wenruo wqu@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/ioctl.c | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-)
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1024,19 +1024,24 @@ static bool defrag_check_next_extent(str
 				     bool locked)
 {
 	struct extent_map *next;
-	bool ret = true;
+	bool ret = false;

 	/* this is the last extent */
 	if (em->start + em->len >= i_size_read(inode))
 		return false;

 	next = defrag_lookup_extent(inode, em->start + em->len, locked);
+	/* No more em or hole */
 	if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE)
-		ret = false;
-	else if ((em->block_start + em->block_len == next->block_start) &&
-		 (em->block_len > SZ_128K && next->block_len > SZ_128K))
-		ret = false;
-
+		goto out;
+	if (test_bit(EXTENT_FLAG_PREALLOC, &next->flags))
+		goto out;
+	/* Physically adjacent and large enough */
+	if ((em->block_start + em->block_len == next->block_start) &&
+	    (em->block_len > SZ_128K && next->block_len > SZ_128K))
+		goto out;
+	ret = true;
+out:
 	free_extent_map(next);
 	return ret;
 }
From: Qu Wenruo wqu@suse.com
commit 979b25c300dbcbcb750e88715018e04e854de6c6 upstream.
[BUG] For compressed extents, the defrag ioctl will always try to defrag any compressed extent, wasting not only IO but also CPU time on compression and decompression:
mkfs.btrfs -f $DEV
mount -o compress $DEV $MNT
xfs_io -f -c "pwrite -S 0xab 0 128K" $MNT/foobar
sync
xfs_io -f -c "pwrite -S 0xcd 128K 128K" $MNT/foobar
sync
echo "=== before ==="
xfs_io -c "fiemap -v" $MNT/foobar
btrfs filesystem defrag $MNT/foobar
sync
echo "=== after ==="
xfs_io -c "fiemap -v" $MNT/foobar
This shows that the two 128K extents just get COWed for no extra benefit, with extra IO/CPU spent:
=== before ===
/mnt/btrfs/file1:
 EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
   0: [0..255]:        26624..26879       256   0x8
   1: [256..511]:      26632..26887       256   0x9
=== after ===
/mnt/btrfs/file1:
 EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
   0: [0..255]:        26640..26895       256   0x8
   1: [256..511]:      26648..26903       256   0x9
This affects not only v5.16 (after the defrag rework), but also v5.15 (before the defrag rework).
[CAUSE]
From the very beginning, btrfs defrag never checks if one extent is already at its max capacity (128K for compressed extents, 128M otherwise).

And the default extent size threshold is 256K, which is already beyond the compressed extent max size.

This means that, by default, the btrfs defrag ioctl will mark every compressed extent which is not adjacent to a hole/preallocated range for defrag.
[FIX] Introduce a helper to grab the maximum extent size, and then in defrag_collect_targets() and defrag_check_next_extent(), reject extents which are already at their max capacity.
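The helper added in the diff below boils down to:

static u32 get_extent_max_capacity(const struct extent_map *em)
{
	if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags))
		return BTRFS_MAX_COMPRESSED;	/* 128K */
	return BTRFS_MAX_EXTENT_SIZE;		/* 128M */
}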
Reported-by: Filipe Manana fdmanana@suse.com CC: stable@vger.kernel.org # 5.16 Reviewed-by: Filipe Manana fdmanana@suse.com Signed-off-by: Qu Wenruo wqu@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/ioctl.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+)
--- a/fs/btrfs/ioctl.c +++ b/fs/btrfs/ioctl.c @@ -1020,6 +1020,13 @@ static struct extent_map *defrag_lookup_ return em; }
+static u32 get_extent_max_capacity(const struct extent_map *em) +{ + if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags)) + return BTRFS_MAX_COMPRESSED; + return BTRFS_MAX_EXTENT_SIZE; +} + static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em, bool locked) { @@ -1036,6 +1043,12 @@ static bool defrag_check_next_extent(str goto out; if (test_bit(EXTENT_FLAG_PREALLOC, &next->flags)) goto out; + /* + * If the next extent is at its max capacity, defragging current extent + * makes no sense, as the total number of extents won't change. + */ + if (next->len >= get_extent_max_capacity(em)) + goto out; /* Physically adjacent and large enough */ if ((em->block_start + em->block_len == next->block_start) && (em->block_len > SZ_128K && next->block_len > SZ_128K)) @@ -1233,6 +1246,13 @@ static int defrag_collect_targets(struct if (range_len >= extent_thresh) goto next;
+ /* + * Skip extents already at its max capacity, this is mostly for + * compressed extents, which max cap is only 128K. + */ + if (em->len >= get_extent_max_capacity(em)) + goto next; + next_mergeable = defrag_check_next_extent(&inode->vfs_inode, em, locked); if (!next_mergeable) {
From: Qu Wenruo wqu@suse.com
commit 550f133f6959db927127111b50e483da3a7ce662 upstream.
From the very beginning of btrfs defrag, there is a check to reject extents which meet both conditions:
- Physically adjacent
We may want to defrag physically adjacent extents to reduce the number of extents or the size of subvolume tree.
- Larger than 128K
This may be there for compressed extents, but unfortunately 128K is exactly the max capacity for compressed extents, and since the check is > 128K it never rejects compressed extents.
Furthermore, since the compressed extent capacity bug is fixed by the previous patch, there is no reason for that check anymore.

The original check has a very small range to reject (the target extent size is > 128K, while the default extent threshold is 256K), and for compressed extents it doesn't work at all.

So it's better to just remove the rejection and allow us to defrag physically adjacent extents.
CC: stable@vger.kernel.org # 5.16 Reviewed-by: Filipe Manana fdmanana@suse.com Signed-off-by: Qu Wenruo wqu@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/ioctl.c | 4 ---- 1 file changed, 4 deletions(-)
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1049,10 +1049,6 @@ static bool defrag_check_next_extent(str
 	 */
 	if (next->len >= get_extent_max_capacity(em))
 		goto out;
-	/* Physically adjacent and large enough */
-	if ((em->block_start + em->block_len == next->block_start) &&
-	    (em->block_len > SZ_128K && next->block_len > SZ_128K))
-		goto out;
 	ret = true;
 out:
 	free_extent_map(next);
From: Dāvis Mosāns davispuh@gmail.com
commit 741b23a970a79d5d3a1db2d64fa2c7b375a4febb upstream.
The compressed length can be corrupted to be a lot larger than the memory we have allocated for the buffer. This will cause the memcpy in copy_compressed_segment() to write outside of the allocated memory.

This mostly results in a stuck read syscall, but sometimes, when using btrfs send, it can result in a #GP:
kernel: general protection fault, probably for non-canonical address 0x841551d5c1000: 0000 [#1] PREEMPT SMP NOPTI kernel: CPU: 17 PID: 264 Comm: kworker/u256:7 Tainted: P OE 5.17.0-rc2-1 #12 kernel: Workqueue: btrfs-endio btrfs_work_helper [btrfs] kernel: RIP: 0010:lzo_decompress_bio (./include/linux/fortify-string.h:225 fs/btrfs/lzo.c:322 fs/btrfs/lzo.c:394) btrfs Code starting with the faulting instruction =========================================== 0:* 48 8b 06 mov (%rsi),%rax <-- trapping instruction 3: 48 8d 79 08 lea 0x8(%rcx),%rdi 7: 48 83 e7 f8 and $0xfffffffffffffff8,%rdi b: 48 89 01 mov %rax,(%rcx) e: 44 89 f0 mov %r14d,%eax 11: 48 8b 54 06 f8 mov -0x8(%rsi,%rax,1),%rdx kernel: RSP: 0018:ffffb110812efd50 EFLAGS: 00010212 kernel: RAX: 0000000000001000 RBX: 000000009ca264c8 RCX: ffff98996e6d8ff8 kernel: RDX: 0000000000000064 RSI: 000841551d5c1000 RDI: ffffffff9500435d kernel: RBP: ffff989a3be856c0 R08: 0000000000000000 R09: 0000000000000000 kernel: R10: 0000000000000000 R11: 0000000000001000 R12: ffff98996e6d8000 kernel: R13: 0000000000000008 R14: 0000000000001000 R15: 000841551d5c1000 kernel: FS: 0000000000000000(0000) GS:ffff98a09d640000(0000) knlGS:0000000000000000 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 kernel: CR2: 00001e9f984d9ea8 CR3: 000000014971a000 CR4: 00000000003506e0 kernel: Call Trace: kernel: <TASK> kernel: end_compressed_bio_read (fs/btrfs/compression.c:104 fs/btrfs/compression.c:1363 fs/btrfs/compression.c:323) btrfs kernel: end_workqueue_fn (fs/btrfs/disk-io.c:1923) btrfs kernel: btrfs_work_helper (fs/btrfs/async-thread.c:326) btrfs kernel: process_one_work (./arch/x86/include/asm/jump_label.h:27 ./include/linux/jump_label.h:212 ./include/trace/events/workqueue.h:108 kernel/workqueue.c:2312) kernel: worker_thread (./include/linux/list.h:292 kernel/workqueue.c:2455) kernel: ? process_one_work (kernel/workqueue.c:2397) kernel: kthread (kernel/kthread.c:377) kernel: ? kthread_complete_and_exit (kernel/kthread.c:332) kernel: ret_from_fork (arch/x86/entry/entry_64.S:301) kernel: </TASK>
CC: stable@vger.kernel.org # 4.9+ Signed-off-by: Dāvis Mosāns davispuh@gmail.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/lzo.c | 11 +++++++++++ 1 file changed, 11 insertions(+)
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -380,6 +380,17 @@ int lzo_decompress_bio(struct list_head
 		kunmap(cur_page);
 		cur_in += LZO_LEN;

+		if (seg_len > lzo1x_worst_compress(PAGE_SIZE)) {
+			/*
+			 * seg_len shouldn't be larger than we have allocated
+			 * for workspace->cbuf
+			 */
+			btrfs_err(fs_info, "unexpectedly large lzo segment len %u",
+				  seg_len);
+			ret = -EIO;
+			goto out;
+		}
+
 		/* Copy the compressed segment payload into workspace */
 		copy_compressed_segment(cb, workspace->cbuf, seg_len, &cur_in);
From: Qu Wenruo wqu@suse.com
commit 966d879bafaaf020c11a7cee9526f6dd823a4126 upstream.
In the rework of btrfs_defrag_file(), we always call defrag_one_cluster() and increase the offset by cluster size, which is only 256K.
But there are cases where we have a large extent (e.g. 128M) which doesn't need to be defragged at all.
Before the refactor, we could directly skip the range, but now we have to scan that extent map again and again until the cluster moves past the non-target extent.

Fix the problem by allowing defrag_one_cluster() to increase btrfs_defrag_ctrl::last_scanned to the end of an extent, if and only if the last extent of the cluster is not a target.
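On the caller side this ends up as roughly the following loop in btrfs_defrag_file() (a simplified sketch of the diff below; the cluster_end rounding is illustrative and the inode locking, readahead and ret > 0 handling are omitted):

	while (cur < last_byte) {
		u64 last_scanned = cur;
		u64 cluster_end;

		/* The cluster size 256K should always be page aligned */
		cluster_end = min_t(u64, cur + SZ_256K, last_byte + 1) - 1;

		ret = defrag_one_cluster(BTRFS_I(inode), ra, cur,
					 cluster_end + 1 - cur, extent_thresh,
					 newer_than, do_compress,
					 &sectors_defragged, max_to_defrag,
					 &last_scanned);
		if (ret < 0)
			break;

		/* Skip ahead past a scanned non-target extent if possible */
		cur = max(cluster_end + 1, last_scanned);
	}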
The test script looks like this:
mkfs.btrfs -f $dev > /dev/null
mount $dev $mnt
# As btrfs ioctl uses 32M as extent_threshold
xfs_io -f -c "pwrite 0 64M" $mnt/file1
sync
# Some fragmented range to defrag
xfs_io -s -c "pwrite 65548k 4k" \
	-c "pwrite 65544k 4k" \
	-c "pwrite 65540k 4k" \
	-c "pwrite 65536k 4k" \
	$mnt/file1
sync

echo "=== before ==="
xfs_io -c "fiemap -v" $mnt/file1
echo "=== after ==="
btrfs fi defrag $mnt/file1
sync
xfs_io -c "fiemap -v" $mnt/file1
umount $mnt
With extra ftrace output added to defrag_one_cluster(), before the patch it would result in tons of loops:
(As defrag_one_cluster() is inlined, the function name is its caller)
 btrfs-126062 [005] ..... 4682.816026: btrfs_defrag_file: r/i=5/257 start=0 len=262144
 btrfs-126062 [005] ..... 4682.816027: btrfs_defrag_file: r/i=5/257 start=262144 len=262144
 btrfs-126062 [005] ..... 4682.816028: btrfs_defrag_file: r/i=5/257 start=524288 len=262144
 btrfs-126062 [005] ..... 4682.816028: btrfs_defrag_file: r/i=5/257 start=786432 len=262144
 btrfs-126062 [005] ..... 4682.816028: btrfs_defrag_file: r/i=5/257 start=1048576 len=262144
 ...
 btrfs-126062 [005] ..... 4682.816043: btrfs_defrag_file: r/i=5/257 start=67108864 len=262144
But with this patch there is just one loop, which then jumps directly to the end of the extent:
 btrfs-130471 [014] ..... 5434.029558: defrag_one_cluster: r/i=5/257 start=0 len=262144
 btrfs-130471 [014] ..... 5434.029559: defrag_one_cluster: r/i=5/257 start=67108864 len=16384
CC: stable@vger.kernel.org # 5.16 Signed-off-by: Qu Wenruo wqu@suse.com Reviewed-by: Filipe Manana fdmanana@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/ioctl.c | 48 +++++++++++++++++++++++++++++++++++++++--------- 1 file changed, 39 insertions(+), 9 deletions(-)
--- a/fs/btrfs/ioctl.c +++ b/fs/btrfs/ioctl.c @@ -1174,8 +1174,10 @@ struct defrag_target_range { static int defrag_collect_targets(struct btrfs_inode *inode, u64 start, u64 len, u32 extent_thresh, u64 newer_than, bool do_compress, - bool locked, struct list_head *target_list) + bool locked, struct list_head *target_list, + u64 *last_scanned_ret) { + bool last_is_target = false; u64 cur = start; int ret = 0;
@@ -1185,6 +1187,7 @@ static int defrag_collect_targets(struct bool next_mergeable = true; u64 range_len;
+ last_is_target = false; em = defrag_lookup_extent(&inode->vfs_inode, cur, locked); if (!em) break; @@ -1267,6 +1270,7 @@ static int defrag_collect_targets(struct }
add: + last_is_target = true; range_len = min(extent_map_end(em), start + len) - cur; /* * This one is a good target, check if it can be merged into @@ -1310,6 +1314,17 @@ next: kfree(entry); } } + if (!ret && last_scanned_ret) { + /* + * If the last extent is not a target, the caller can skip to + * the end of that extent. + * Otherwise, we can only go the end of the specified range. + */ + if (!last_is_target) + *last_scanned_ret = max(cur, *last_scanned_ret); + else + *last_scanned_ret = max(start + len, *last_scanned_ret); + } return ret; }
@@ -1368,7 +1383,8 @@ static int defrag_one_locked_target(stru }
static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len, - u32 extent_thresh, u64 newer_than, bool do_compress) + u32 extent_thresh, u64 newer_than, bool do_compress, + u64 *last_scanned_ret) { struct extent_state *cached_state = NULL; struct defrag_target_range *entry; @@ -1414,7 +1430,7 @@ static int defrag_one_range(struct btrfs */ ret = defrag_collect_targets(inode, start, len, extent_thresh, newer_than, do_compress, true, - &target_list); + &target_list, last_scanned_ret); if (ret < 0) goto unlock_extent;
@@ -1449,7 +1465,8 @@ static int defrag_one_cluster(struct btr u64 start, u32 len, u32 extent_thresh, u64 newer_than, bool do_compress, unsigned long *sectors_defragged, - unsigned long max_sectors) + unsigned long max_sectors, + u64 *last_scanned_ret) { const u32 sectorsize = inode->root->fs_info->sectorsize; struct defrag_target_range *entry; @@ -1460,7 +1477,7 @@ static int defrag_one_cluster(struct btr BUILD_BUG_ON(!IS_ALIGNED(CLUSTER_SIZE, PAGE_SIZE)); ret = defrag_collect_targets(inode, start, len, extent_thresh, newer_than, do_compress, false, - &target_list); + &target_list, NULL); if (ret < 0) goto out;
@@ -1477,6 +1494,15 @@ static int defrag_one_cluster(struct btr range_len = min_t(u32, range_len, (max_sectors - *sectors_defragged) * sectorsize);
+ /* + * If defrag_one_range() has updated last_scanned_ret, + * our range may already be invalid (e.g. hole punched). + * Skip if our range is before last_scanned_ret, as there is + * no need to defrag the range anymore. + */ + if (entry->start + range_len <= *last_scanned_ret) + continue; + if (ra) page_cache_sync_readahead(inode->vfs_inode.i_mapping, ra, NULL, entry->start >> PAGE_SHIFT, @@ -1489,7 +1515,8 @@ static int defrag_one_cluster(struct btr * accounting. */ ret = defrag_one_range(inode, entry->start, range_len, - extent_thresh, newer_than, do_compress); + extent_thresh, newer_than, do_compress, + last_scanned_ret); if (ret < 0) break; *sectors_defragged += range_len >> @@ -1500,6 +1527,8 @@ out: list_del_init(&entry->list); kfree(entry); } + if (ret >= 0) + *last_scanned_ret = max(*last_scanned_ret, start + len); return ret; }
@@ -1585,6 +1614,7 @@ int btrfs_defrag_file(struct inode *inod
while (cur < last_byte) { const unsigned long prev_sectors_defragged = sectors_defragged; + u64 last_scanned = cur; u64 cluster_end;
/* The cluster size 256K should always be page aligned */ @@ -1614,8 +1644,8 @@ int btrfs_defrag_file(struct inod BTRFS_I(inode)->defrag_compress = compress_type; ret = defrag_one_cluster(BTRFS_I(inode), ra, cur, cluster_end + 1 - cur, extent_thresh, - newer_than, do_compress, - &sectors_defragged, max_to_defrag); + newer_than, do_compress, &sectors_defragged, + max_to_defrag, &last_scanned);
if (sectors_defragged > prev_sectors_defragged) balance_dirty_pages_ratelimited(inode->i_mapping); @@ -1623,7 +1653,7 @@ int btrfs_defrag_file(struct inode *inod btrfs_inode_unlock(inode, 0); if (ret < 0) break; - cur = cluster_end + 1; + cur = max(cluster_end + 1, last_scanned); if (ret > 0) { ret = 0; break;
From: Qu Wenruo wqu@suse.com
commit 26fbac2517fcad34fa3f950151fd4c0240fb2935 upstream.
Although we have btrfs_requeue_inode_defrag(), for autodefrag we are still just exhausting all inode_defrag items in the tree.
This means it doesn't make much difference to requeue an inode_defrag, other than scanning the inode from the beginning to its end.

Change the behaviour to always scan an inode from offset 0 to its end.
By this we get the following benefits:
- Straight-forward code
- No more re-queue related check
- Fewer members in inode_defrag
We still keep the same btrfs_get_fs_root() and btrfs_iget() checks for each loop, and add an extra should_auto_defrag() check per loop.

Note: the patch needs to be backported and is intentionally written to minimize the diff size; the code will be cleaned up later.
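The resulting per-inode scan is roughly the following (a simplified sketch of the diff below, which writes the loop with a goto and also rechecks BTRFS_FS_STATE_REMOUNTING and __need_auto_defrag() on every pass):

	u64 cur = 0;

	while (cur < i_size_read(inode)) {
		memset(&range, 0, sizeof(range));
		range.len = (u64)-1;
		range.start = cur;

		sb_start_write(fs_info->sb);
		ret = btrfs_defrag_file(inode, NULL, &range, defrag->transid,
					BTRFS_DEFRAG_BATCH);
		sb_end_write(fs_info->sb);
		if (ret < 0)
			break;

		cur = max(cur + fs_info->sectorsize, range.start);
	}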
CC: stable@vger.kernel.org # 5.16 Signed-off-by: Qu Wenruo wqu@suse.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/file.c | 84 ++++++++++++++------------------------------------------ 1 file changed, 22 insertions(+), 62 deletions(-)
--- a/fs/btrfs/file.c +++ b/fs/btrfs/file.c @@ -49,12 +49,6 @@ struct inode_defrag {
/* root objectid */ u64 root; - - /* last offset we were able to defrag */ - u64 last_offset; - - /* if we've wrapped around back to zero once already */ - int cycled; };
static int __compare_inode_defrag(struct inode_defrag *defrag1, @@ -107,8 +101,6 @@ static int __btrfs_add_inode_defrag(stru */ if (defrag->transid < entry->transid) entry->transid = defrag->transid; - if (defrag->last_offset > entry->last_offset) - entry->last_offset = defrag->last_offset; return -EEXIST; } } @@ -179,34 +171,6 @@ int btrfs_add_inode_defrag(struct btrfs_ }
/* - * Requeue the defrag object. If there is a defrag object that points to - * the same inode in the tree, we will merge them together (by - * __btrfs_add_inode_defrag()) and free the one that we want to requeue. - */ -static void btrfs_requeue_inode_defrag(struct btrfs_inode *inode, - struct inode_defrag *defrag) -{ - struct btrfs_fs_info *fs_info = inode->root->fs_info; - int ret; - - if (!__need_auto_defrag(fs_info)) - goto out; - - /* - * Here we don't check the IN_DEFRAG flag, because we need merge - * them together. - */ - spin_lock(&fs_info->defrag_inodes_lock); - ret = __btrfs_add_inode_defrag(inode, defrag); - spin_unlock(&fs_info->defrag_inodes_lock); - if (ret) - goto out; - return; -out: - kmem_cache_free(btrfs_inode_defrag_cachep, defrag); -} - -/* * pick the defragable inode that we want, if it doesn't exist, we will get * the next one. */ @@ -278,8 +242,14 @@ static int __btrfs_run_defrag_inode(stru struct btrfs_root *inode_root; struct inode *inode; struct btrfs_ioctl_defrag_range_args range; - int num_defrag; - int ret; + int ret = 0; + u64 cur = 0; + +again: + if (test_bit(BTRFS_FS_STATE_REMOUNTING, &fs_info->fs_state)) + goto cleanup; + if (!__need_auto_defrag(fs_info)) + goto cleanup;
/* get the inode */ inode_root = btrfs_get_fs_root(fs_info, defrag->root, true); @@ -295,39 +265,29 @@ static int __btrfs_run_defrag_inode(stru goto cleanup; }
+ if (cur >= i_size_read(inode)) { + iput(inode); + goto cleanup; + } + /* do a chunk of defrag */ clear_bit(BTRFS_INODE_IN_DEFRAG, &BTRFS_I(inode)->runtime_flags); memset(&range, 0, sizeof(range)); range.len = (u64)-1; - range.start = defrag->last_offset; + range.start = cur;
sb_start_write(fs_info->sb); - num_defrag = btrfs_defrag_file(inode, NULL, &range, defrag->transid, + ret = btrfs_defrag_file(inode, NULL, &range, defrag->transid, BTRFS_DEFRAG_BATCH); sb_end_write(fs_info->sb); - /* - * if we filled the whole defrag batch, there - * must be more work to do. Queue this defrag - * again - */ - if (num_defrag == BTRFS_DEFRAG_BATCH) { - defrag->last_offset = range.start; - btrfs_requeue_inode_defrag(BTRFS_I(inode), defrag); - } else if (defrag->last_offset && !defrag->cycled) { - /* - * we didn't fill our defrag batch, but - * we didn't start at zero. Make sure we loop - * around to the start of the file. - */ - defrag->last_offset = 0; - defrag->cycled = 1; - btrfs_requeue_inode_defrag(BTRFS_I(inode), defrag); - } else { - kmem_cache_free(btrfs_inode_defrag_cachep, defrag); - } - iput(inode); - return 0; + + if (ret < 0) + goto cleanup; + + cur = max(cur + fs_info->sectorsize, range.start); + goto again; + cleanup: kmem_cache_free(btrfs_inode_defrag_cachep, defrag); return ret;
From: Qu Wenruo wqu@suse.com
commit 558732df2122092259ab4ef85594bee11dbb9104 upstream.
There is a big gap between inode_should_defrag() and the autodefrag extent size threshold. inode_should_defrag() has a flexible @small_write value: for compressed extents it's 16K, and for non-compressed extents it's 64K.

However, the autodefrag extent size threshold is always fixed to the default value (256K).

This means the following write sequence will cause autodefrag to defrag ranges which didn't trigger autodefrag:
pwrite 0 8k
sync
pwrite 8k 128K
sync
The latter 128K write will also be considered a defrag target (if other conditions are met), while only the 8K write really triggered the autodefrag.
Such behavior can cause extra IO for autodefrag.
Close the gap by copying the @small_write value into inode_defrag, so that later autodefrag can use the same @small_write value which triggered it.

With the existing transid value, this allows autodefrag to really scan the ranges which triggered it.

Although this behavior change mostly reduces the extent_thresh value for autodefrag, I believe we should eventually allow users to specify the autodefrag extent threshold through mount options, but that's another problem to consider in the future.
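Restated from the diff below, the trigger size now travels with the inode_defrag entry, and the autodefrag worker later feeds it back as range.extent_thresh:

static inline void inode_should_defrag(struct btrfs_inode *inode,
				       u64 start, u64 end, u64 num_bytes,
				       u32 small_write)
{
	/* If this is a small write inside eof, kick off a defrag */
	if (num_bytes < small_write &&
	    (start > 0 || end + 1 < inode->disk_i_size))
		btrfs_add_inode_defrag(NULL, inode, small_write);
}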
CC: stable@vger.kernel.org # 5.16+ Signed-off-by: Qu Wenruo wqu@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/ctree.h | 2 +- fs/btrfs/file.c | 15 ++++++++++++++- fs/btrfs/inode.c | 4 ++-- 3 files changed, 17 insertions(+), 4 deletions(-)
--- a/fs/btrfs/ctree.h +++ b/fs/btrfs/ctree.h @@ -3315,7 +3315,7 @@ void btrfs_exclop_finish(struct btrfs_fs int __init btrfs_auto_defrag_init(void); void __cold btrfs_auto_defrag_exit(void); int btrfs_add_inode_defrag(struct btrfs_trans_handle *trans, - struct btrfs_inode *inode); + struct btrfs_inode *inode, u32 extent_thresh); int btrfs_run_defrag_inodes(struct btrfs_fs_info *fs_info); void btrfs_cleanup_defrag_inodes(struct btrfs_fs_info *fs_info); int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync); --- a/fs/btrfs/file.c +++ b/fs/btrfs/file.c @@ -49,6 +49,15 @@ struct inode_defrag {
/* root objectid */ u64 root; + + /* + * The extent size threshold for autodefrag. + * + * This value is different for compressed/non-compressed extents, + * thus needs to be passed from higher layer. + * (aka, inode_should_defrag()) + */ + u32 extent_thresh; };
static int __compare_inode_defrag(struct inode_defrag *defrag1, @@ -101,6 +110,8 @@ static int __btrfs_add_inode_defrag(stru */ if (defrag->transid < entry->transid) entry->transid = defrag->transid; + entry->extent_thresh = min(defrag->extent_thresh, + entry->extent_thresh); return -EEXIST; } } @@ -126,7 +137,7 @@ static inline int __need_auto_defrag(str * enabled */ int btrfs_add_inode_defrag(struct btrfs_trans_handle *trans, - struct btrfs_inode *inode) + struct btrfs_inode *inode, u32 extent_thresh) { struct btrfs_root *root = inode->root; struct btrfs_fs_info *fs_info = root->fs_info; @@ -152,6 +163,7 @@ int btrfs_add_inode_defrag(struct btrfs_ defrag->ino = btrfs_ino(inode); defrag->transid = transid; defrag->root = root->root_key.objectid; + defrag->extent_thresh = extent_thresh;
spin_lock(&fs_info->defrag_inodes_lock); if (!test_bit(BTRFS_INODE_IN_DEFRAG, &inode->runtime_flags)) { @@ -275,6 +287,7 @@ again: memset(&range, 0, sizeof(range)); range.len = (u64)-1; range.start = cur; + range.extent_thresh = defrag->extent_thresh;
sb_start_write(fs_info->sb); ret = btrfs_defrag_file(inode, NULL, &range, defrag->transid, --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -561,12 +561,12 @@ static inline int inode_need_compress(st }
static inline void inode_should_defrag(struct btrfs_inode *inode, - u64 start, u64 end, u64 num_bytes, u64 small_write) + u64 start, u64 end, u64 num_bytes, u32 small_write) { /* If this is a small write inside eof, kick off a defrag */ if (num_bytes < small_write && (start > 0 || end + 1 < inode->disk_i_size)) - btrfs_add_inode_defrag(NULL, inode); + btrfs_add_inode_defrag(NULL, inode, small_write); }
/*
From: Jason Gunthorpe jgg@nvidia.com
commit 22e9f71072fa605cbf033158db58e0790101928d upstream.
If the state is not idle then resolve_prepare_src() should immediately fail and no change to the global state should happen. However, it unconditionally overwrites the src_addr while trying to build a temporary any-address.

For instance, if the state is already RDMA_CM_LISTEN then this will corrupt the src_addr and would cause the test in cma_cancel_operation():
if (cma_any_addr(cma_src_addr(id_priv)) && !id_priv->cma_dev)
Which would manifest as this trace from syzkaller:
BUG: KASAN: use-after-free in __list_add_valid+0x93/0xa0 lib/list_debug.c:26 Read of size 8 at addr ffff8881546491e0 by task syz-executor.1/32204
CPU: 1 PID: 32204 Comm: syz-executor.1 Not tainted 5.12.0-rc8-syzkaller #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 Call Trace: __dump_stack lib/dump_stack.c:79 [inline] dump_stack+0x141/0x1d7 lib/dump_stack.c:120 print_address_description.constprop.0.cold+0x5b/0x2f8 mm/kasan/report.c:232 __kasan_report mm/kasan/report.c:399 [inline] kasan_report.cold+0x7c/0xd8 mm/kasan/report.c:416 __list_add_valid+0x93/0xa0 lib/list_debug.c:26 __list_add include/linux/list.h:67 [inline] list_add_tail include/linux/list.h:100 [inline] cma_listen_on_all drivers/infiniband/core/cma.c:2557 [inline] rdma_listen+0x787/0xe00 drivers/infiniband/core/cma.c:3751 ucma_listen+0x16a/0x210 drivers/infiniband/core/ucma.c:1102 ucma_write+0x259/0x350 drivers/infiniband/core/ucma.c:1732 vfs_write+0x28e/0xa30 fs/read_write.c:603 ksys_write+0x1ee/0x250 fs/read_write.c:658 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46 entry_SYSCALL_64_after_hwframe+0x44/0xae
This indicates that an rdma_id_private was destroyed without doing cma_cancel_listens().

Instead of trying to re-use the src_addr memory to indirectly create an any-address derived from the dst, build one explicitly on the stack and bind to that, as any other normal flow would do. rdma_bind_addr() will copy it over src_addr once it knows the state is valid.
This is similar to commit bc0bdc5afaa7 ("RDMA/cma: Do not change route.addr.src_addr.ss_family")
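Condensed from the diff below (the IPv6 scope-id and AF_IB pkey handling are elided), the new cma_bind_addr() flow is:

static int cma_bind_addr(struct rdma_cm_id *id, struct sockaddr *src_addr,
			 const struct sockaddr *dst_addr)
{
	struct sockaddr_storage zero_sock = {};

	if (src_addr && src_addr->sa_family)
		return rdma_bind_addr(id, src_addr);

	/*
	 * When the src_addr is not specified, automatically supply an any
	 * address, built on the stack rather than in id->route.addr.src_addr.
	 */
	zero_sock.ss_family = dst_addr->sa_family;
	/* ... IPv6 scope id / AF_IB pkey fix-ups, see the diff ... */
	return rdma_bind_addr(id, (struct sockaddr *)&zero_sock);
}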
Link: https://lore.kernel.org/r/0-v2-e975c8fd9ef2+11e-syz_cma_srcaddr_jgg@nvidia.c... Cc: stable@vger.kernel.org Fixes: 732d41c545bb ("RDMA/cma: Make the locking for automatic state transition more clear") Reported-by: syzbot+c94a3675a626f6333d74@syzkaller.appspotmail.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/infiniband/core/cma.c | 38 +++++++++++++++++++++++--------------- 1 file changed, 23 insertions(+), 15 deletions(-)
--- a/drivers/infiniband/core/cma.c +++ b/drivers/infiniband/core/cma.c @@ -3370,22 +3370,30 @@ err: static int cma_bind_addr(struct rdma_cm_id *id, struct sockaddr *src_addr, const struct sockaddr *dst_addr) { - if (!src_addr || !src_addr->sa_family) { - src_addr = (struct sockaddr *) &id->route.addr.src_addr; - src_addr->sa_family = dst_addr->sa_family; - if (IS_ENABLED(CONFIG_IPV6) && - dst_addr->sa_family == AF_INET6) { - struct sockaddr_in6 *src_addr6 = (struct sockaddr_in6 *) src_addr; - struct sockaddr_in6 *dst_addr6 = (struct sockaddr_in6 *) dst_addr; - src_addr6->sin6_scope_id = dst_addr6->sin6_scope_id; - if (ipv6_addr_type(&dst_addr6->sin6_addr) & IPV6_ADDR_LINKLOCAL) - id->route.addr.dev_addr.bound_dev_if = dst_addr6->sin6_scope_id; - } else if (dst_addr->sa_family == AF_IB) { - ((struct sockaddr_ib *) src_addr)->sib_pkey = - ((struct sockaddr_ib *) dst_addr)->sib_pkey; - } + struct sockaddr_storage zero_sock = {}; + + if (src_addr && src_addr->sa_family) + return rdma_bind_addr(id, src_addr); + + /* + * When the src_addr is not specified, automatically supply an any addr + */ + zero_sock.ss_family = dst_addr->sa_family; + if (IS_ENABLED(CONFIG_IPV6) && dst_addr->sa_family == AF_INET6) { + struct sockaddr_in6 *src_addr6 = + (struct sockaddr_in6 *)&zero_sock; + struct sockaddr_in6 *dst_addr6 = + (struct sockaddr_in6 *)dst_addr; + + src_addr6->sin6_scope_id = dst_addr6->sin6_scope_id; + if (ipv6_addr_type(&dst_addr6->sin6_addr) & IPV6_ADDR_LINKLOCAL) + id->route.addr.dev_addr.bound_dev_if = + dst_addr6->sin6_scope_id; + } else if (dst_addr->sa_family == AF_IB) { + ((struct sockaddr_ib *)&zero_sock)->sib_pkey = + ((struct sockaddr_ib *)dst_addr)->sib_pkey; } - return rdma_bind_addr(id, src_addr); + return rdma_bind_addr(id, (struct sockaddr *)&zero_sock); }
/*
From: Chuansheng Liu chuansheng.liu@intel.com
commit 3abea10e6a8f0e7804ed4c124bea2d15aca977c8 upstream.
It is easy to hit the memory leak below on my TigerLake platform:
unreferenced object 0xffff927c8b91dbc0 (size 32): comm "kworker/0:2", pid 112, jiffies 4294893323 (age 83.604s) hex dump (first 32 bytes): 4e 41 4d 45 3d 49 4e 54 33 34 30 30 20 54 68 65 NAME=INT3400 The 72 6d 61 6c 00 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5 rmal.kkkkkkkkkk. backtrace: [<ffffffff9c502c3e>] __kmalloc_track_caller+0x2fe/0x4a0 [<ffffffff9c7b7c15>] kvasprintf+0x65/0xd0 [<ffffffff9c7b7d6e>] kasprintf+0x4e/0x70 [<ffffffffc04cb662>] int3400_notify+0x82/0x120 [int3400_thermal] [<ffffffff9c8b7358>] acpi_ev_notify_dispatch+0x54/0x71 [<ffffffff9c88f1a7>] acpi_os_execute_deferred+0x17/0x30 [<ffffffff9c2c2c0a>] process_one_work+0x21a/0x3f0 [<ffffffff9c2c2e2a>] worker_thread+0x4a/0x3b0 [<ffffffff9c2cb4dd>] kthread+0xfd/0x130 [<ffffffff9c201c1f>] ret_from_fork+0x1f/0x30
Fix it by calling kfree() accordingly.
Fixes: 38e44da59130 ("thermal: int3400_thermal: process "thermal table changed" event") Signed-off-by: Chuansheng Liu chuansheng.liu@intel.com Cc: 4.14+ stable@vger.kernel.org # 4.14+ Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/thermal/intel/int340x_thermal/int3400_thermal.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+++ b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
@@ -404,6 +404,10 @@ static void int3400_notify(acpi_handle h
 	thermal_prop[3] = kasprintf(GFP_KERNEL, "EVENT=%d", therm_event);
 	thermal_prop[4] = NULL;
 	kobject_uevent_env(&priv->thermal->device.kobj, KOBJ_CHANGE, thermal_prop);
+	kfree(thermal_prop[0]);
+	kfree(thermal_prop[1]);
+	kfree(thermal_prop[2]);
+	kfree(thermal_prop[3]);
 }

 static int int3400_thermal_get_temp(struct thermal_zone_device *thermal,
From: Oliver Graute oliver.graute@kococonnector.com
commit b6821b0d9b56386d2bf14806f90ec401468c799f upstream.
In rare cases the display is flipped or mirrored. This was observed more often in a low-temperature environment. A clean reset in init_display() should help to get the registers into a sane state.
Fixes: ef8f317795da ("staging: fbtft: use init function instead of init sequence") Cc: stable@vger.kernel.org Signed-off-by: Oliver Graute oliver.graute@kococonnector.com Link: https://lore.kernel.org/r/20220210085322.15676-1-oliver.graute@kococonnector... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/staging/fbtft/fb_st7789v.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/staging/fbtft/fb_st7789v.c
+++ b/drivers/staging/fbtft/fb_st7789v.c
@@ -144,6 +144,8 @@ static int init_display(struct fbtft_par
 {
 	int rc;

+	par->fbtftops.reset(par);
+
 	rc = init_tearing_effect_line(par);
 	if (rc)
 		return rc;
From: Jens Axboe axboe@kernel.dk
commit aba2081e0a9c977396124aa6df93b55ed5912b19 upstream.
The interrupt mask is enabled before any potential failure points in the driver, which can leave a failure path where we exit with interrupts enabled but the device not live. This causes an infinite stream of interrupts on an Apple M1 Pro laptop on USB-C.
Add a failure label that is used once interrupts have been enabled, where we mask them again before returning an error.
Suggested-by: Sven Peter sven@svenpeter.dev Cc: stable stable@vger.kernel.org Reviewed-by: Heikki Krogerus heikki.krogerus@linux.intel.com Signed-off-by: Jens Axboe axboe@kernel.dk Link: https://lore.kernel.org/r/e6b80669-20f3-06e7-9ed5-8951a9c6db6f@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/typec/tipd/core.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/usb/typec/tipd/core.c b/drivers/usb/typec/tipd/core.c
index 6d27a5b5e3ca..7ffcda94d323 100644
--- a/drivers/usb/typec/tipd/core.c
+++ b/drivers/usb/typec/tipd/core.c
@@ -761,12 +761,12 @@ static int tps6598x_probe(struct i2c_client *client)

 	ret = tps6598x_read32(tps, TPS_REG_STATUS, &status);
 	if (ret < 0)
-		return ret;
+		goto err_clear_mask;
 	trace_tps6598x_status(status);

 	ret = tps6598x_read32(tps, TPS_REG_SYSTEM_CONF, &conf);
 	if (ret < 0)
-		return ret;
+		goto err_clear_mask;

 	/*
 	 * This fwnode has a "compatible" property, but is never populated as a
@@ -855,7 +855,8 @@ static int tps6598x_probe(struct i2c_client *client)
 	usb_role_switch_put(tps->role_sw);
 err_fwnode_put:
 	fwnode_handle_put(fwnode);
-
+err_clear_mask:
+	tps6598x_write64(tps, TPS_REG_INT_MASK1, 0);
 	return ret;
 }
From: Mike Marciniszyn mike.marciniszyn@cornelisnetworks.com
commit 32f57cb1b2c8d6f20aefec7052b1bfeb7e3b69d4 upstream.
The qib driver load has been failing with the following message:
sysfs: cannot create duplicate filename '/devices/pci0000:80/0000:80:02.0/0000:81:00.0/infiniband/qib0/ports/1/linkcontrol'
The patch in the Fixes tag below has two "linkcontrol" names, causing the duplication.
Fix by using the correct "diag_counters" name on the second instance.
Fixes: 4a7aaf88c89f ("RDMA/qib: Use attributes for the port sysfs") Link: https://lore.kernel.org/r/1645106372-23004-1-git-send-email-mike.marciniszyn... Cc: stable@vger.kernel.org Reviewed-by: Dennis Dalessandro dennis.dalessandro@cornelisnetworks.com Signed-off-by: Mike Marciniszyn mike.marciniszyn@cornelisnetworks.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/infiniband/hw/qib/qib_sysfs.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/infiniband/hw/qib/qib_sysfs.c
+++ b/drivers/infiniband/hw/qib/qib_sysfs.c
@@ -541,7 +541,7 @@ static struct attribute *port_diagc_attr
 };

 static const struct attribute_group port_diagc_group = {
-	.name = "linkcontrol",
+	.name = "diag_counters",
 	.attrs = port_diagc_attributes,
 };
From: Damien Le Moal damien.lemoal@opensource.wdc.com
commit 762e52f79c95ea20a7229674ffd13b94d7d8959c upstream.
Instead of an arbitrary delay, use the "rootwait" kernel option to wait for the mmc root device to be ready.
Signed-off-by: Damien Le Moal damien.lemoal@opensource.wdc.com Reviewed-by: Anup Patel anup@brainfault.org Fixes: 7e09fd3994c5 ("riscv: Add Canaan Kendryte K210 SD card defconfig") Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/riscv/configs/nommu_k210_sdcard_defconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/riscv/configs/nommu_k210_sdcard_defconfig
+++ b/arch/riscv/configs/nommu_k210_sdcard_defconfig
@@ -23,7 +23,7 @@ CONFIG_SLOB=y
 CONFIG_SOC_CANAAN=y
 CONFIG_SMP=y
 CONFIG_NR_CPUS=2
-CONFIG_CMDLINE="earlycon console=ttySIF0 rootdelay=2 root=/dev/mmcblk0p1 ro"
+CONFIG_CMDLINE="earlycon console=ttySIF0 root=/dev/mmcblk0p1 rootwait ro"
 CONFIG_CMDLINE_FORCE=y
 # CONFIG_SECCOMP is not set
 # CONFIG_STACKPROTECTOR is not set
From: Changbin Du changbin.du@gmail.com
commit 22e2100b1b07d6f5acc71cc1acb53f680c677d77 upstream.
The trace_hardirqs_{on,off}() functions require the caller to set up the frame pointer properly. This is because these two functions use the macro 'CALLER_ADDR1' (aka __builtin_return_address(1)) to acquire caller info. If $fp is used for another purpose, the code generated by this macro (as below) could trigger a memory access fault.

   0xffffffff8011510e <+80>:    ld      a1,-16(s0)
   0xffffffff80115112 <+84>:    ld      s2,-8(a1)  # <-- paging fault here
The oops message during booting if compiled with 'irqoff' tracer enabled: [ 0.039615][ T0] Unable to handle kernel NULL pointer dereference at virtual address 00000000000000f8 [ 0.041925][ T0] Oops [#1] [ 0.042063][ T0] Modules linked in: [ 0.042864][ T0] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.17.0-rc1-00233-g9a20c48d1ed2 #29 [ 0.043568][ T0] Hardware name: riscv-virtio,qemu (DT) [ 0.044343][ T0] epc : trace_hardirqs_on+0x56/0xe2 [ 0.044601][ T0] ra : restore_all+0x12/0x6e [ 0.044721][ T0] epc : ffffffff80126a5c ra : ffffffff80003b94 sp : ffffffff81403db0 [ 0.044801][ T0] gp : ffffffff8163acd8 tp : ffffffff81414880 t0 : 0000000000000020 [ 0.044882][ T0] t1 : 0098968000000000 t2 : 0000000000000000 s0 : ffffffff81403de0 [ 0.044967][ T0] s1 : 0000000000000000 a0 : 0000000000000001 a1 : 0000000000000100 [ 0.045046][ T0] a2 : 0000000000000000 a3 : 0000000000000000 a4 : 0000000000000000 [ 0.045124][ T0] a5 : 0000000000000000 a6 : 0000000000000000 a7 : 0000000054494d45 [ 0.045210][ T0] s2 : ffffffff80003b94 s3 : ffffffff81a8f1b0 s4 : ffffffff80e27b50 [ 0.045289][ T0] s5 : ffffffff81414880 s6 : ffffffff8160fa00 s7 : 00000000800120e8 [ 0.045389][ T0] s8 : 0000000080013100 s9 : 000000000000007f s10: 0000000000000000 [ 0.045474][ T0] s11: 0000000000000000 t3 : 7fffffffffffffff t4 : 0000000000000000 [ 0.045548][ T0] t5 : 0000000000000000 t6 : ffffffff814aa368 [ 0.045620][ T0] status: 0000000200000100 badaddr: 00000000000000f8 cause: 000000000000000d [ 0.046402][ T0] [<ffffffff80003b94>] restore_all+0x12/0x6e
This is because the $fp (aka $s0) register is not used as a frame pointer in the assembly entry code:

resume_kernel:
	REG_L s0, TASK_TI_PREEMPT_COUNT(tp)
	bnez s0, restore_all
	REG_L s0, TASK_TI_FLAGS(tp)
	andi s0, s0, _TIF_NEED_RESCHED
	beqz s0, restore_all
	call preempt_schedule_irq
	j restore_all
To fix the above issue, add an extra level of wrapper functions around trace_hardirqs_{on,off}() so they can be safely called by the low-level entry code.
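The wrappers themselves are trivial and make up the whole of the new arch/riscv/kernel/trace_irq.c added below:

void __trace_hardirqs_on(void)
{
	trace_hardirqs_on();
}
NOKPROBE_SYMBOL(__trace_hardirqs_on);

void __trace_hardirqs_off(void)
{
	trace_hardirqs_off();
}
NOKPROBE_SYMBOL(__trace_hardirqs_off);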
Signed-off-by: Changbin Du changbin.du@gmail.com Fixes: 3c4697982982 ("riscv: Enable LOCKDEP_SUPPORT & fixup TRACE_IRQFLAGS_SUPPORT") Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/riscv/kernel/Makefile | 2 ++ arch/riscv/kernel/entry.S | 10 +++++----- arch/riscv/kernel/trace_irq.c | 27 +++++++++++++++++++++++++++ arch/riscv/kernel/trace_irq.h | 11 +++++++++++ 4 files changed, 45 insertions(+), 5 deletions(-) create mode 100644 arch/riscv/kernel/trace_irq.c create mode 100644 arch/riscv/kernel/trace_irq.h
--- a/arch/riscv/kernel/Makefile +++ b/arch/riscv/kernel/Makefile @@ -50,6 +50,8 @@ obj-$(CONFIG_MODULE_SECTIONS) += module- obj-$(CONFIG_FUNCTION_TRACER) += mcount.o ftrace.o obj-$(CONFIG_DYNAMIC_FTRACE) += mcount-dyn.o
+obj-$(CONFIG_TRACE_IRQFLAGS) += trace_irq.o + obj-$(CONFIG_RISCV_BASE_PMU) += perf_event.o obj-$(CONFIG_PERF_EVENTS) += perf_callchain.o obj-$(CONFIG_HAVE_PERF_REGS) += perf_regs.o --- a/arch/riscv/kernel/entry.S +++ b/arch/riscv/kernel/entry.S @@ -108,7 +108,7 @@ _save_context: .option pop
#ifdef CONFIG_TRACE_IRQFLAGS - call trace_hardirqs_off + call __trace_hardirqs_off #endif
#ifdef CONFIG_CONTEXT_TRACKING @@ -143,7 +143,7 @@ skip_context_tracking: li t0, EXC_BREAKPOINT beq s4, t0, 1f #ifdef CONFIG_TRACE_IRQFLAGS - call trace_hardirqs_on + call __trace_hardirqs_on #endif csrs CSR_STATUS, SR_IE
@@ -234,7 +234,7 @@ ret_from_exception: REG_L s0, PT_STATUS(sp) csrc CSR_STATUS, SR_IE #ifdef CONFIG_TRACE_IRQFLAGS - call trace_hardirqs_off + call __trace_hardirqs_off #endif #ifdef CONFIG_RISCV_M_MODE /* the MPP value is too large to be used as an immediate arg for addi */ @@ -270,10 +270,10 @@ restore_all: REG_L s1, PT_STATUS(sp) andi t0, s1, SR_PIE beqz t0, 1f - call trace_hardirqs_on + call __trace_hardirqs_on j 2f 1: - call trace_hardirqs_off + call __trace_hardirqs_off 2: #endif REG_L a0, PT_STATUS(sp) --- /dev/null +++ b/arch/riscv/kernel/trace_irq.c @@ -0,0 +1,27 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2022 Changbin Du changbin.du@gmail.com + */ + +#include <linux/irqflags.h> +#include <linux/kprobes.h> +#include "trace_irq.h" + +/* + * trace_hardirqs_on/off require the caller to setup frame pointer properly. + * Otherwise, CALLER_ADDR1 might trigger an pagging exception in kernel. + * Here we add one extra level so they can be safely called by low + * level entry code which $fp is used for other purpose. + */ + +void __trace_hardirqs_on(void) +{ + trace_hardirqs_on(); +} +NOKPROBE_SYMBOL(__trace_hardirqs_on); + +void __trace_hardirqs_off(void) +{ + trace_hardirqs_off(); +} +NOKPROBE_SYMBOL(__trace_hardirqs_off); --- /dev/null +++ b/arch/riscv/kernel/trace_irq.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2022 Changbin Du changbin.du@gmail.com + */ +#ifndef __TRACE_IRQ_H +#define __TRACE_IRQ_H + +void __trace_hardirqs_on(void); +void __trace_hardirqs_off(void); + +#endif /* __TRACE_IRQ_H */
From: Aneesh Kumar K.V aneesh.kumar@linux.ibm.com
commit db110a99d3367936058727ff4798e3a39c707969 upstream.
This fixes the below crash:
kernel BUG at include/linux/mm.h:2373! cpu 0x5d: Vector: 700 (Program Check) at [c00000003c6e76e0] pc: c000000000581a54: pmd_to_page+0x54/0x80 lr: c00000000058d184: move_hugetlb_page_tables+0x4e4/0x5b0 sp: c00000003c6e7980 msr: 9000000000029033 current = 0xc00000003bd8d980 paca = 0xc000200fff610100 irqmask: 0x03 irq_happened: 0x01 pid = 9349, comm = hugepage-mremap kernel BUG at include/linux/mm.h:2373! move_hugetlb_page_tables+0x4e4/0x5b0 (link register) move_hugetlb_page_tables+0x22c/0x5b0 (unreliable) move_page_tables+0xdbc/0x1010 move_vma+0x254/0x5f0 sys_mremap+0x7c0/0x900 system_call_exception+0x160/0x2c0
The kernel can't use huge_pte_offset() before it sets the pte entry, because a page table lookup checks for the huge PTE bit in the page table to differentiate between a huge pte entry and a pointer to a pte page. huge_pte_alloc() won't mark the page table entry huge, and hence the kernel should not use huge_pte_offset() after a huge_pte_alloc().
Link: https://lkml.kernel.org/r/20220211063221.99293-1-aneesh.kumar@linux.ibm.com Fixes: 550a7d60bd5e ("mm, hugepages: add mremap() support for hugepage backed vma") Signed-off-by: Aneesh Kumar K.V aneesh.kumar@linux.ibm.com Reviewed-by: Mike Kravetz mike.kravetz@oracle.com Reviewed-by: Mina Almasry almasrymina@google.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/hugetlb.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 61895cc01d09..e57650a9404f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4851,14 +4851,13 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 }

 static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
-			  unsigned long new_addr, pte_t *src_pte)
+			  unsigned long new_addr, pte_t *src_pte, pte_t *dst_pte)
 {
 	struct hstate *h = hstate_vma(vma);
 	struct mm_struct *mm = vma->vm_mm;
-	pte_t *dst_pte, pte;
 	spinlock_t *src_ptl, *dst_ptl;
+	pte_t pte;

-	dst_pte = huge_pte_offset(mm, new_addr, huge_page_size(h));
 	dst_ptl = huge_pte_lock(h, mm, dst_pte);
 	src_ptl = huge_pte_lockptr(h, mm, src_pte);

@@ -4917,7 +4916,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 		if (!dst_pte)
 			break;

-		move_huge_pte(vma, old_addr, new_addr, src_pte);
+		move_huge_pte(vma, old_addr, new_addr, src_pte, dst_pte);
 	}
 	flush_tlb_range(vma, old_end - len, old_end);
 	mmu_notifier_invalidate_range_end(&range);
From: Liu Yuntao liuyuntao10@huawei.com
commit e79ce9832316e09529b212a21278d68240ccbf1f upstream.
When we specify a large number for the node in the hugepages parameter, it may be parsed into another number due to truncation in this statement:
node = tmp;
For example, add the following parameter on the command line:

hugepagesz=1G hugepages=4294967297:5

and the kernel will allocate 5 hugepages for node 1 instead of ignoring it.
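As a standalone illustration of the truncation (plain userspace C, not kernel code; it assumes a 64-bit unsigned long and a 32-bit int, matching hugepages_setup()): 4294967297 is 2^32 + 1, so only the low 32 bits survive the assignment.

#include <stdio.h>

int main(void)
{
	unsigned long tmp = 4294967297UL;	/* node value from the command line */
	int node = tmp;				/* truncated to the low 32 bits: 1 */

	printf("node = %d\n", node);		/* prints "node = 1" */
	return 0;
}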
I moved the validation check earlier to fix this issue, which also slightly simplifies the condition here.
Link: https://lkml.kernel.org/r/20220209134018.8242-1-liuyuntao10@huawei.com Fixes: b5389086ad7be0 ("hugetlbfs: extend the definition of hugepages parameter to support node allocation") Signed-off-by: Liu Yuntao liuyuntao10@huawei.com Reviewed-by: Mike Kravetz mike.kravetz@oracle.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/hugetlb.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e57650a9404f..f294db835f4b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4159,10 +4159,10 @@ static int __init hugepages_setup(char *s)
 			pr_warn("HugeTLB: architecture can't support node specific alloc, ignoring!\n");
 			return 0;
 		}
+		if (tmp >= nr_online_nodes)
+			goto invalid;
 		node = tmp;
 		p += count + 1;
-		if (node < 0 || node >= nr_online_nodes)
-			goto invalid;
 		/* Parse hugepages */
 		if (sscanf(p, "%lu%n", &tmp, &count) != 1)
 			goto invalid;
From: daniel.starke@siemens.com daniel.starke@siemens.com
commit 737b0ef3be6b319d6c1fd64193d1603311969326 upstream.
n_gsm is based on the 3GPP 07.010 and its newer version is the 3GPP 27.010. See https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.a... The changes from 07.010 to 27.010 are non-functional. Therefore, I refer to the newer 27.010 here. Chapter 5.4.6.3.7 describes the encoding of the control signal octet used by the MSC (modem status command). The same encoding is also used in convergence layer type 2 as described in chapter 5.5.2. Tables 7 and 24 both require the DV (data valid) bit to be set to 1 for outgoing control signal octets sent by the DTE (data terminal equipment), i.e. for the initiator side. Currently, the DV bit is only set if CD (carrier detect) is on, regardless of the side.
This patch fixes this behavior by setting the DV bit on the initiator side unconditionally.
Fixes: e1eaea46bb40 ("tty: n_gsm line discipline") Cc: stable@vger.kernel.org Signed-off-by: Daniel Starke daniel.starke@siemens.com Link: https://lore.kernel.org/r/20220218073123.2121-1-daniel.starke@siemens.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/n_gsm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/tty/n_gsm.c +++ b/drivers/tty/n_gsm.c @@ -439,7 +439,7 @@ static u8 gsm_encode_modem(const struct modembits |= MDM_RTR; if (dlci->modem_tx & TIOCM_RI) modembits |= MDM_IC; - if (dlci->modem_tx & TIOCM_CD) + if (dlci->modem_tx & TIOCM_CD || dlci->gsm->initiator) modembits |= MDM_DV; return modembits; }
From: daniel.starke@siemens.com daniel.starke@siemens.com
commit 57435c42400ec147a527b2313188b649e81e449e upstream.
n_gsm is based on the 3GPP 07.010 and its newer version is the 3GPP 27.010. See https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.a... The changes from 07.010 to 27.010 are non-functional. Therefore, I refer to the newer 27.010 here. Chapter 5.2.1.2 describes the encoding of the C/R (command/response) bit. Table 1 shows that the actual encoding of the C/R bit is inverted if the associated frame is sent by the responder.
The commit being fixed here further broke the internal meaning of this bit in the outgoing path by always setting the C/R bit, regardless of the frame type.
This patch fixes both issues by always setting the C/R bit consistently for command (1) and response (0) frames and inverting it later for the responder where necessary. The meaning of this bit in the debug output is preserved and shows the bit as if it had been encoded by the initiator. This reflects only the frame type rather than the encoded combination of communication side and frame type.
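A compact sketch of the rule (the helper name is illustrative; only the XOR mirrors the change below):

/* cr is kept internally as seen by the initiator: 1 = command, 0 = response. */
static unsigned int gsm_encode_cr(unsigned int cr, int initiator)
{
	/* 27.010 table 1: the responder transmits the C/R bit inverted. */
	return initiator ? cr : cr ^ 1;
}

/*
 * gsm_encode_cr(1, 1) == 1: command sent by the initiator
 * gsm_encode_cr(0, 1) == 0: response sent by the initiator
 * gsm_encode_cr(1, 0) == 0: command sent by the responder
 * gsm_encode_cr(0, 0) == 1: response sent by the responder
 */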
Fixes: cc0f42122a7e ("tty: n_gsm: Modify CR,PF bit when config requester") Cc: stable@vger.kernel.org Signed-off-by: Daniel Starke daniel.starke@siemens.com Link: https://lore.kernel.org/r/20220218073123.2121-2-daniel.starke@siemens.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/n_gsm.c | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-)
--- a/drivers/tty/n_gsm.c +++ b/drivers/tty/n_gsm.c @@ -448,7 +448,7 @@ static u8 gsm_encode_modem(const struct * gsm_print_packet - display a frame for debug * @hdr: header to print before decode * @addr: address EA from the frame - * @cr: C/R bit from the frame + * @cr: C/R bit seen as initiator * @control: control including PF bit * @data: following data bytes * @dlen: length of data @@ -548,7 +548,7 @@ static int gsm_stuff_frame(const u8 *inp * gsm_send - send a control frame * @gsm: our GSM mux * @addr: address for control frame - * @cr: command/response bit + * @cr: command/response bit seen as initiator * @control: control byte including PF bit * * Format up and transmit a control frame. These do not go via the @@ -563,11 +563,15 @@ static void gsm_send(struct gsm_mux *gsm int len; u8 cbuf[10]; u8 ibuf[3]; + int ocr; + + /* toggle C/R coding if not initiator */ + ocr = cr ^ (gsm->initiator ? 0 : 1);
switch (gsm->encoding) { case 0: cbuf[0] = GSM0_SOF; - cbuf[1] = (addr << 2) | (cr << 1) | EA; + cbuf[1] = (addr << 2) | (ocr << 1) | EA; cbuf[2] = control; cbuf[3] = EA; /* Length of data = 0 */ cbuf[4] = 0xFF - gsm_fcs_add_block(INIT_FCS, cbuf + 1, 3); @@ -577,7 +581,7 @@ static void gsm_send(struct gsm_mux *gsm case 1: case 2: /* Control frame + packing (but not frame stuffing) in mode 1 */ - ibuf[0] = (addr << 2) | (cr << 1) | EA; + ibuf[0] = (addr << 2) | (ocr << 1) | EA; ibuf[1] = control; ibuf[2] = 0xFF - gsm_fcs_add_block(INIT_FCS, ibuf, 2); /* Stuffing may double the size worst case */ @@ -611,7 +615,7 @@ static void gsm_send(struct gsm_mux *gsm
static inline void gsm_response(struct gsm_mux *gsm, int addr, int control) { - gsm_send(gsm, addr, 1, control); + gsm_send(gsm, addr, 0, control); }
/** @@ -1800,10 +1804,10 @@ static void gsm_queue(struct gsm_mux *gs goto invalid;
cr = gsm->address & 1; /* C/R bit */ + cr ^= gsm->initiator ? 0 : 1; /* Flip so 1 always means command */
gsm_print_packet("<--", address, cr, gsm->control, gsm->buf, gsm->len);
- cr ^= 1 - gsm->initiator; /* Flip so 1 always means command */ dlci = gsm->dlci[address];
switch (gsm->control) {
From: daniel.starke@siemens.com daniel.starke@siemens.com
commit e3b7468f082d106459e86e8dc6fb9bdd65553433 upstream.
Trying to open a DLCI by sending a SABM frame may fail with a timeout. The link is closed on the initiator side without informing the responder about this event. The responder assumes the link is open after sending a UA frame in answer to the SABM frame. The link gets stuck in a half-open state.
This patch fixes this by initiating the proper link termination procedure after link setup timeout instead of silently closing it down.
Fixes: e1eaea46bb40 ("tty: n_gsm line discipline") Cc: stable@vger.kernel.org Signed-off-by: Daniel Starke daniel.starke@siemens.com Link: https://lore.kernel.org/r/20220218073123.2121-3-daniel.starke@siemens.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/n_gsm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/tty/n_gsm.c +++ b/drivers/tty/n_gsm.c @@ -1518,7 +1518,7 @@ static void gsm_dlci_t1(struct timer_lis dlci->mode = DLCI_MODE_ADM; gsm_dlci_open(dlci); } else { - gsm_dlci_close(dlci); + gsm_dlci_begin_close(dlci); /* prevent half open link */ }
break;
From: daniel.starke@siemens.com daniel.starke@siemens.com
commit 96b169f05cdcc844b400695184d77e42071d14f2 upstream.
The commit being fixed here made the tty hangup asynchronous to avoid a circular locking warning. I could not reproduce this warning. Furthermore, due to the asynchronous hangup, the function call now gets queued up while the underlying tty is being freed. Depending on the timing, this results in a NULL pointer access in the global work queue scheduler, to be precise in process_one_work(). Therefore, the previous commit made the issue it tried to fix worse.
This patch fixes this by falling back to the old behavior which uses a blocking tty hangup call before freeing up the associated tty.
Fixes: 7030082a7415 ("tty: n_gsm: avoid recursive locking with async port hangup") Cc: stable@vger.kernel.org Signed-off-by: Daniel Starke daniel.starke@siemens.com Link: https://lore.kernel.org/r/20220218073123.2121-4-daniel.starke@siemens.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/n_gsm.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c index 53fa1cec49bd..79397a40a8f8 100644 --- a/drivers/tty/n_gsm.c +++ b/drivers/tty/n_gsm.c @@ -1752,7 +1752,12 @@ static void gsm_dlci_release(struct gsm_dlci *dlci) gsm_destroy_network(dlci); mutex_unlock(&dlci->mutex);
- tty_hangup(tty); + /* We cannot use tty_hangup() because in tty_kref_put() the tty + * driver assumes that the hangup queue is free and reuses it to + * queue release_one_tty() -> NULL pointer panic in + * process_one_work(). + */ + tty_vhangup(tty);
tty_port_tty_set(&dlci->port, NULL); tty_kref_put(tty);
From: daniel.starke@siemens.com daniel.starke@siemens.com
commit c19d93542a6081577e6da9bf5e887979c72e80c1 upstream.
tty flow control is handled via gsmtty_throttle() and gsmtty_unthrottle(). Both functions propagate the outgoing hardware flow control state to the remote side via MSC (modem status command) frames. The local state is taken from the RTS (ready to send) flag of the tty. However, RTS gets mapped to DTR (data terminal ready), which is wrong. This patch corrects this by mapping RTS to RTS.
Fixes: e1eaea46bb40 ("tty: n_gsm line discipline") Cc: stable@vger.kernel.org Signed-off-by: Daniel Starke daniel.starke@siemens.com Link: https://lore.kernel.org/r/20220218073123.2121-5-daniel.starke@siemens.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/n_gsm.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/drivers/tty/n_gsm.c +++ b/drivers/tty/n_gsm.c @@ -3246,9 +3246,9 @@ static void gsmtty_throttle(struct tty_s if (dlci->state == DLCI_CLOSED) return; if (C_CRTSCTS(tty)) - dlci->modem_tx &= ~TIOCM_DTR; + dlci->modem_tx &= ~TIOCM_RTS; dlci->throttled = true; - /* Send an MSC with DTR cleared */ + /* Send an MSC with RTS cleared */ gsmtty_modem_update(dlci, 0); }
@@ -3258,9 +3258,9 @@ static void gsmtty_unthrottle(struct tty if (dlci->state == DLCI_CLOSED) return; if (C_CRTSCTS(tty)) - dlci->modem_tx |= TIOCM_DTR; + dlci->modem_tx |= TIOCM_RTS; dlci->throttled = false; - /* Send an MSC with DTR set */ + /* Send an MSC with RTS set */ gsmtty_modem_update(dlci, 0); }
From: daniel.starke@siemens.com daniel.starke@siemens.com
commit 687f9ad43c52501f46164758e908a5dd181a87fc upstream.
The function gsm_process_modem() exists to handle modem status bits of incoming frames. This includes incoming MSC (modem status command) frames and convergence layer type 2 data frames. The function, however, was only designed to handle MSC frames as it expects the command length. Within gsm_dlci_data() it is wrongly assumed that this is the same as the data frame length. This is only true if the data frame contains only 1 byte of payload.
This patch renames the length parameter of gsm_process_modem() to reflect what it actually carries. It also corrects all calls to the function so that the variable number of modem status octets is handled correctly in both cases.
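A rough sketch of the idea (hypothetical helper, not the driver code): the number of signal octets is found by walking the EA-terminated field rather than by reusing the frame length.

/* Count the modem status octets: the field ends at the first octet
 * whose EA bit (bit 0) is set, per the EA encoding used by the mux.
 */
static int count_signal_octets(const unsigned char *data, int len)
{
	int slen = 0;

	while (slen < len) {
		slen++;
		if (data[slen - 1] & 0x01)	/* EA bit set: last octet of the field */
			return slen;
	}
	return -1;	/* field not terminated within the frame */
}

A convergence layer type 2 data frame carries payload after those octets, which is why passing the full frame length only happens to be correct when the payload is a single byte.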
Fixes: 7263287af93d ("tty: n_gsm: Fixed logic to decode break signal from modem status") Cc: stable@vger.kernel.org Signed-off-by: Daniel Starke daniel.starke@siemens.com Link: https://lore.kernel.org/r/20220218073123.2121-6-daniel.starke@siemens.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/n_gsm.c | 23 ++++++++++++++--------- 1 file changed, 14 insertions(+), 9 deletions(-)
--- a/drivers/tty/n_gsm.c +++ b/drivers/tty/n_gsm.c @@ -1021,25 +1021,25 @@ static void gsm_control_reply(struct gsm * @tty: virtual tty bound to the DLCI * @dlci: DLCI to affect * @modem: modem bits (full EA) - * @clen: command length + * @slen: number of signal octets * * Used when a modem control message or line state inline in adaption * layer 2 is processed. Sort out the local modem state and throttles */
static void gsm_process_modem(struct tty_struct *tty, struct gsm_dlci *dlci, - u32 modem, int clen) + u32 modem, int slen) { int mlines = 0; u8 brk = 0; int fc;
- /* The modem status command can either contain one octet (v.24 signals) - or two octets (v.24 signals + break signals). The length field will - either be 2 or 3 respectively. This is specified in section - 5.4.6.3.7 of the 27.010 mux spec. */ + /* The modem status command can either contain one octet (V.24 signals) + * or two octets (V.24 signals + break signals). This is specified in + * section 5.4.6.3.7 of the 07.10 mux spec. + */
- if (clen == 2) + if (slen == 1) modem = modem & 0x7f; else { brk = modem & 0x7f; @@ -1096,6 +1096,7 @@ static void gsm_control_modem(struct gsm unsigned int brk = 0; struct gsm_dlci *dlci; int len = clen; + int slen; const u8 *dp = data; struct tty_struct *tty;
@@ -1115,6 +1116,7 @@ static void gsm_control_modem(struct gsm return; dlci = gsm->dlci[addr];
+ slen = len; while (gsm_read_ea(&modem, *dp++) == 0) { len--; if (len == 0) @@ -1131,7 +1133,7 @@ static void gsm_control_modem(struct gsm modem |= (brk & 0x7f); } tty = tty_port_tty_get(&dlci->port); - gsm_process_modem(tty, dlci, modem, clen); + gsm_process_modem(tty, dlci, modem, slen); if (tty) { tty_wakeup(tty); tty_kref_put(tty); @@ -1597,6 +1599,7 @@ static void gsm_dlci_data(struct gsm_dlc struct tty_struct *tty; unsigned int modem = 0; int len = clen; + int slen = 0;
if (debug & 16) pr_debug("%d bytes for tty\n", len); @@ -1609,12 +1612,14 @@ static void gsm_dlci_data(struct gsm_dlc case 2: /* Asynchronous serial with line state in each frame */ while (gsm_read_ea(&modem, *data++) == 0) { len--; + slen++; if (len == 0) return; } + slen++; tty = tty_port_tty_get(port); if (tty) { - gsm_process_modem(tty, dlci, modem, clen); + gsm_process_modem(tty, dlci, modem, slen); tty_kref_put(tty); } fallthrough;
From: daniel.starke@siemens.com daniel.starke@siemens.com
commit a2ab75b8e76e455af7867e3835fd9cdf386b508f upstream.
In the current implementation the user may open a virtual tty which then could fail to establish the underlying DLCI. The function gsmtty_open() gets stuck in tty_port_block_til_ready() while waiting for a carrier rise. This happens if the remote side fails to acknowledge the link establishment request in time or at all. At some point gsm_dlci_close() is called to abort the link establishment attempt. The function tries to inform the associated virtual tty by performing a hangup. But the blocking loop within tty_port_block_til_ready() is not informed about this event. The patch proposed here fixes this by resetting the initialization state of the virtual tty to ensure the loop exits, and by waking it up so that tty_port_block_til_ready() returns.
Fixes: e1eaea46bb40 ("tty: n_gsm line discipline") Cc: stable@vger.kernel.org Signed-off-by: Daniel Starke daniel.starke@siemens.com Link: https://lore.kernel.org/r/20220218073123.2121-7-daniel.starke@siemens.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/n_gsm.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/drivers/tty/n_gsm.c +++ b/drivers/tty/n_gsm.c @@ -1457,6 +1457,9 @@ static void gsm_dlci_close(struct gsm_dl if (dlci->addr != 0) { tty_port_tty_hangup(&dlci->port, false); kfifo_reset(&dlci->fifo); + /* Ensure that gsmtty_open() can return. */ + tty_port_set_initialized(&dlci->port, 0); + wake_up_interruptible(&dlci->port.open_wait); } else dlci->gsm->dead = true; /* Unregister gsmtty driver,report gsmtty dev remove uevent for user */
From: Dan Carpenter dan.carpenter@oracle.com
commit ba2ab85951c91a140a8fa51d8347d54e59ec009d upstream.
The loop exited too early, so the k210_pinconf_drive_strength[0] array element was never used.
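A minimal sketch of the off-by-one (the array values below are made up for illustration; only the loop condition mirrors the fix):

#include <stdio.h>

#define DRIVE_MAX 7	/* stand-in for K210_PC_DRIVE_MAX */

static const int drive_strength_ua[DRIVE_MAX + 1] = {
	11200, 16800, 22300, 27800, 33300, 38700, 44100, 49500
};

static int get_drive(int max_strength_ua)
{
	int i;

	/* The buggy loop "for (i = DRIVE_MAX; i; i--)" stops once i reaches 0,
	 * so index 0 is never compared. Testing i >= 0 includes it.
	 */
	for (i = DRIVE_MAX; i >= 0; i--) {
		if (drive_strength_ua[i] <= max_strength_ua)
			return i;
	}
	return -1;
}

int main(void)
{
	printf("%d\n", get_drive(12000));	/* 0 with the fix, -1 with the old loop */
	return 0;
}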
Fixes: d4c34d09ab03 ("pinctrl: Add RISC-V Canaan Kendryte K210 FPIOA driver") Signed-off-by: Dan Carpenter dan.carpenter@oracle.com Reviewed-by: Damien Le Moal damien.lemoal@opensource.wdc.com Reviewed-by: Sean Anderson seanga2@gmail.com Link: https://lore.kernel.org/r/20220209180804.GA18385@kili Signed-off-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/pinctrl/pinctrl-k210.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/pinctrl/pinctrl-k210.c +++ b/drivers/pinctrl/pinctrl-k210.c @@ -482,7 +482,7 @@ static int k210_pinconf_get_drive(unsign { int i;
- for (i = K210_PC_DRIVE_MAX; i; i--) { + for (i = K210_PC_DRIVE_MAX; i >= 0; i--) { if (k210_pinconf_drive_strength[i] <= max_strength_ua) return i; }
From: Sean Anderson seanga2@gmail.com
commit e9f7b9228a94778edb7a63fde3c0a3c5bb793064 upstream.
Using bias-pull-up would actually cause the pin to have its pull-down enabled. Fix this.
Signed-off-by: Sean Anderson seanga2@gmail.com Reviewed-by: Damien Le Moal damien.lemoal@opensource.wdc.com Fixes: d4c34d09ab03 ("pinctrl: Add RISC-V Canaan Kendryte K210 FPIOA driver") Link: https://lore.kernel.org/r/20220209182822.640905-1-seanga2@gmail.com Signed-off-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/pinctrl/pinctrl-k210.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/pinctrl/pinctrl-k210.c +++ b/drivers/pinctrl/pinctrl-k210.c @@ -527,7 +527,7 @@ static int k210_pinconf_set_param(struct case PIN_CONFIG_BIAS_PULL_UP: if (!arg) return -EINVAL; - val |= K210_PC_PD; + val |= K210_PC_PU; break; case PIN_CONFIG_DRIVE_STRENGTH: arg *= 1000;
From: Marc Zyngier maz@kernel.org
commit d1e972ace42390de739cde87d96043dcbe502286 upstream.
The tegra186 GPIO driver makes the assumption that the pointer returned by irq_data_get_irq_chip_data() is a pointer to a tegra_gpio structure. Unfortunately, it is actually a pointer to the inner gpio_chip structure, as mandated by the gpiolib infrastructure. Nice try.
The saving grace is that the gpio_chip is the first member of tegra_gpio, so the bug has gone undetected since... forever.
Fix it by performing a container_of() on the pointer. This results in no additional code, and makes it possible to understand how the whole thing works.
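A minimal user-space sketch of the difference (simplified stand-in structures; the extra field is hypothetical): the chip data is really a gpio_chip pointer, and a container_of()-style step recovers the enclosing tegra_gpio, whereas the old cast only worked because gpio happens to be the first member.

#include <stddef.h>
#include <stdio.h>

struct gpio_chip { const char *label; };

struct tegra_gpio {
	struct gpio_chip gpio;	/* first member: casting gc to tegra_gpio "worked" by luck */
	unsigned int num_banks;	/* hypothetical extra state */
};

/* Same idea as the kernel's container_of(): step back from the inner member. */
#define to_tegra_gpio(chip) \
	((struct tegra_gpio *)((char *)(chip) - offsetof(struct tegra_gpio, gpio)))

int main(void)
{
	struct tegra_gpio tg = { .gpio = { .label = "tegra186-gpio" }, .num_banks = 6 };
	struct gpio_chip *gc = &tg.gpio;	/* what the chip data pointer really is */

	struct tegra_gpio *gpio = to_tegra_gpio(gc);
	printf("%s has %u banks\n", gpio->gpio.label, gpio->num_banks);
	return 0;
}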
Fixes: 5b2b135a87fc ("gpio: Add Tegra186 support") Signed-off-by: Marc Zyngier maz@kernel.org Cc: Thierry Reding treding@nvidia.com Cc: Linus Walleij linus.walleij@linaro.org Cc: Bartosz Golaszewski bgolaszewski@baylibre.com Link: https://lore.kernel.org/r/20220211093904.1112679-1-maz@kernel.org Signed-off-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpio/gpio-tegra186.c | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-)
--- a/drivers/gpio/gpio-tegra186.c +++ b/drivers/gpio/gpio-tegra186.c @@ -341,9 +341,12 @@ static int tegra186_gpio_of_xlate(struct return offset + pin; }
+#define to_tegra_gpio(x) container_of((x), struct tegra_gpio, gpio) + static void tegra186_irq_ack(struct irq_data *data) { - struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data); + struct gpio_chip *gc = irq_data_get_irq_chip_data(data); + struct tegra_gpio *gpio = to_tegra_gpio(gc); void __iomem *base;
base = tegra186_gpio_get_base(gpio, data->hwirq); @@ -355,7 +358,8 @@ static void tegra186_irq_ack(struct irq_
static void tegra186_irq_mask(struct irq_data *data) { - struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data); + struct gpio_chip *gc = irq_data_get_irq_chip_data(data); + struct tegra_gpio *gpio = to_tegra_gpio(gc); void __iomem *base; u32 value;
@@ -370,7 +374,8 @@ static void tegra186_irq_mask(struct irq
static void tegra186_irq_unmask(struct irq_data *data) { - struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data); + struct gpio_chip *gc = irq_data_get_irq_chip_data(data); + struct tegra_gpio *gpio = to_tegra_gpio(gc); void __iomem *base; u32 value;
@@ -385,7 +390,8 @@ static void tegra186_irq_unmask(struct i
static int tegra186_irq_set_type(struct irq_data *data, unsigned int type) { - struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data); + struct gpio_chip *gc = irq_data_get_irq_chip_data(data); + struct tegra_gpio *gpio = to_tegra_gpio(gc); void __iomem *base; u32 value;
From: Miaohe Lin linmiaohe@huawei.com
commit c94afc46cae7ad41b2ad6a99368147879f4b0e56 upstream.
memblock.{reserved,memory}.regions may be allocated using kmalloc() in memblock_double_array(). Use kfree() to release these kmalloced regions indicated by memblock_{reserved,memory}_in_slab.
Signed-off-by: Miaohe Lin linmiaohe@huawei.com Fixes: 3010f876500f ("mm: discard memblock data later") Signed-off-by: Mike Rapoport rppt@linux.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/memblock.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-)
--- a/mm/memblock.c +++ b/mm/memblock.c @@ -366,14 +366,20 @@ void __init memblock_discard(void) addr = __pa(memblock.reserved.regions); size = PAGE_ALIGN(sizeof(struct memblock_region) * memblock.reserved.max); - memblock_free_late(addr, size); + if (memblock_reserved_in_slab) + kfree(memblock.reserved.regions); + else + memblock_free_late(addr, size); }
if (memblock.memory.regions != memblock_memory_init_regions) { addr = __pa(memblock.memory.regions); size = PAGE_ALIGN(sizeof(struct memblock_region) * memblock.memory.max); - memblock_free_late(addr, size); + if (memblock_memory_in_slab) + kfree(memblock.memory.regions); + else + memblock_free_late(addr, size); }
memblock_memory = NULL;
On 2/28/22 10:22 AM, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.16.12 release. There are 164 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 02 Mar 2022 17:20:16 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.16.12-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.16.y and the diffstat can be found below.
thanks,
greg k-h
Compiled and booted on my test system. No dmesg regressions.
Tested-by: Shuah Khan skhan@linuxfoundation.org
thanks, -- Shuah
On Mon, Feb 28, 2022 at 10:47 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 5.16.12 release. There are 164 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 02 Mar 2022 17:20:16 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.16.12-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.16.y and the diffstat can be found below.
thanks,
greg k-h
Hi Greg,
Compiled and booted on my test system Lenovo P50s: Intel Core i7.
No emergency and critical messages in the dmesg.
./perf bench sched all
# Running sched/messaging benchmark...
# 20 sender and receiver processes per group
# 10 groups == 400 processes run

     Total time: 0.440 [sec]

# Running sched/pipe benchmark...
# Executed 1000000 pipe operations between two processes

     Total time: 6.990 [sec]

       6.990131 usecs/op
         143058 ops/sec
Tested-by: Zan Aziz zanaziz313@gmail.com
Thanks -Zan
On 2/28/2022 9:22 AM, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.16.12 release. There are 164 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 02 Mar 2022 17:20:16 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.16.12-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.16.y and the diffstat can be found below.
thanks,
greg k-h
On ARCH_BRCMSTB using 32-bit and 64-bit ARM kernels:
Tested-by: Florian Fainelli f.fainelli@gmail.com
On Mon, 28 Feb 2022 at 23:13, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 5.16.12 release. There are 164 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 02 Mar 2022 17:20:16 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.16.12-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.16.y and the diffstat can be found below.
thanks,
greg k-h
Results from Linaro’s test farm. No regressions on arm64, arm, x86_64, and i386.
Tested-by: Linux Kernel Functional Testing lkft@linaro.org
## Build
* kernel: 5.16.12-rc1
* git: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
* git branch: linux-5.16.y
* git commit: dc69c736a4039f3f7c3d26910013103cdb517bce
* git describe: v5.16.11-165-gdc69c736a403
* test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-5.16.y/build/v5.16....
## Test Regressions (compared to v5.16.10-228-ga75faa1f1534)
No test regressions found.

## Metric Regressions (compared to v5.16.10-228-ga75faa1f1534)
No metric regressions found.

## Test Fixes (compared to v5.16.10-228-ga75faa1f1534)
No test fixes found.

## Metric Fixes (compared to v5.16.10-228-ga75faa1f1534)
No metric fixes found.
## Test result summary
total: 105073, pass: 89294, fail: 1663, skip: 13057, xfail: 1059
## Build Summary
* arc: 10 total, 10 passed, 0 failed
* arm: 141 total, 138 passed, 3 failed
* arm64: 47 total, 44 passed, 3 failed
* dragonboard-410c: 2 total, 2 passed, 0 failed
* hi6220-hikey: 2 total, 2 passed, 0 failed
* i386: 46 total, 42 passed, 4 failed
* juno-r2: 2 total, 2 passed, 0 failed
* mips: 41 total, 38 passed, 3 failed
* parisc: 14 total, 14 passed, 0 failed
* powerpc: 65 total, 50 passed, 15 failed
* riscv: 32 total, 27 passed, 5 failed
* s390: 26 total, 23 passed, 3 failed
* sh: 26 total, 24 passed, 2 failed
* sparc: 14 total, 14 passed, 0 failed
* x15: 2 total, 2 passed, 0 failed
* x86: 2 total, 2 passed, 0 failed
* x86_64: 47 total, 47 passed, 0 failed
## Test suites summary
* fwts
* igt-gpu-tools
* kselftest-
* kselftest-android
* kselftest-bpf
* kselftest-capabilities
* kselftest-cgroup
* kselftest-clone3
* kselftest-core
* kselftest-cpu-hotplug
* kselftest-cpufreq
* kselftest-efivarfs
* kselftest-filesystems
* kselftest-firmware
* kselftest-fpu
* kselftest-futex
* kselftest-gpio
* kselftest-intel_pstate
* kselftest-ipc
* kselftest-ir
* kselftest-kcmp
* kselftest-kexec
* kselftest-kvm
* kselftest-lib
* kselftest-livepatch
* kselftest-lkdtm
* kselftest-membarrier
* kselftest-memfd
* kselftest-memory-hotplug
* kselftest-mincore
* kselftest-mount
* kselftest-mqueue
* kselftest-net
* kselftest-netfilter
* kselftest-nsfs
* kselftest-openat2
* kselftest-pid_namespace
* kselftest-pidfd
* kselftest-proc
* kselftest-pstore
* kselftest-ptrace
* kselftest-rseq
* kselftest-rtc
* kselftest-seccomp
* kselftest-sigaltstack
* kselftest-size
* kselftest-splice
* kselftest-static_keys
* kselftest-sync
* kselftest-sysctl
* kselftest-tc-testing
* kselftest-timens
* kselftest-timers
* kselftest-tmpfs
* kselftest-tpm2
* kselftest-user
* kselftest-vm
* kselftest-x86
* kselftest-zram
* kunit
* kvm-unit-tests
* libgpiod
* libhugetlbfs
* linux-log-parser
* ltp-cap_bounds-tests
* ltp-commands-tests
* ltp-containers-tests
* ltp-controllers-tests
* ltp-cpuhotplug-tests
* ltp-crypto-tests
* ltp-cve-tests
* ltp-dio-tests
* ltp-fcntl-locktests-tests
* ltp-filecaps-tests
* ltp-fs-tests
* ltp-fs_bind-tests
* ltp-fs_perms_simple-tests
* ltp-fsx-tests
* ltp-hugetlb-tests
* ltp-io-tests
* ltp-ipc-tests
* ltp-math-tests
* ltp-mm-tests
* ltp-nptl-tests
* ltp-open-posix-tests
* ltp-pty-tests
* ltp-sched-tests
* ltp-securebits-tests
* ltp-syscalls-tests
* ltp-tracing-tests
* network-basic-tests
* packetdrill
* perf
* rcutorture
* ssuite
* v4l2-compliance
* vdso
-- Linaro LKFT https://lkft.linaro.org
On Mon, 28 Feb 2022 18:22:42 +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.16.12 release. There are 164 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 02 Mar 2022 17:20:16 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.16.12-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.16.y and the diffstat can be found below.
thanks,
greg k-h
All tests passing for Tegra ...
Test results for stable-v5.16:
    10 builds:  10 pass, 0 fail
    28 boots:   28 pass, 0 fail
    122 tests:  122 pass, 0 fail
Linux version: 5.16.12-rc1-gdc69c736a403
Boards tested: tegra124-jetson-tk1, tegra186-p2771-0000, tegra194-p2972-0000, tegra194-p3509-0000+p3668-0000, tegra20-ventana, tegra210-p2371-2180, tegra210-p3450-0000, tegra30-cardhu-a04
Tested-by: Jon Hunter jonathanh@nvidia.com
Jon
On 01/03/22 00.22, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.16.12 release. There are 164 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Successfully cross-compiled for arm64 (bcm2711_defconfig, gcc 10.2.0) and powerpc (ps3_defconfig, gcc 11.2.0).
Tested-by: Bagas Sanjaya bagasdotme@gmail.com
On Mon, Feb 28, 2022 at 06:22:42PM +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.16.12 release. There are 164 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Hi Greg,
5.16.12-rc1 tested.
Run tested on:
- Allwinner H6 (Tanix TX6)
- Intel Tiger Lake x86_64 (nuc11 i7-1165G7)
In addition - build tested on:
- Allwinner A64
- Allwinner H3
- Allwinner H5
- NXP iMX6
- NXP iMX8
- Qualcomm Dragonboard
- Rockchip RK3288
- Rockchip RK3328
- Rockchip RK3399pro
- Samsung Exynos
Tested-by: Rudi Heitbaum rudi@heitbaum.com -- Rudi
On 2/28/22 9:22 AM, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.16.12 release. There are 164 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 02 Mar 2022 17:20:16 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.16.12-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.16.y and the diffstat can be found below.
thanks,
greg k-h
Built and booted successfully on RISC-V RV64 (HiFive Unmatched).
Tested-by: Ron Economos re@w6rz.net
On Mon, Feb 28, 2022 at 06:22:42PM +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.16.12 release. There are 164 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 02 Mar 2022 17:20:16 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.16.12-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.16.y and the diffstat can be found below.
thanks,
greg k-h
Tested rc1 against the Fedora build system (aarch64, armv7, ppc64le, s390x, x86_64), and boot tested x86_64. No regressions noted.
Tested-by: Justin M. Forbes jforbes@fedoraproject.org
On Mon, Feb 28, 2022 at 06:22:42PM +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.16.12 release. There are 164 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 02 Mar 2022 17:20:16 +0000. Anything received after that time might be too late.
Build results:
    total: 155 pass: 155 fail: 0
Qemu test results:
    total: 488 pass: 488 fail: 0
Tested-by: Guenter Roeck linux@roeck-us.net
Guenter
On Mon, Feb 28, 2022, at 12:22 PM, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.16.12 release. There are 164 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 02 Mar 2022 17:20:16 +0000. Anything received after that time might be too late.
5.16.12-rc1 compiled and booted with no errors or regressions on my x86_64 test system.
Tested-by: Slade Watkins slade@sladewatkins.com
Cheers, Slade
linux-stable-mirror@lists.linaro.org