This is the start of the stable review cycle for the 5.4.208 release. There are 87 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 29 Jul 2022 16:09:50 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.4.208-rc1...
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.4.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 5.4.208-rc1
Jan Beulich jbeulich@suse.com x86: drop bogus "cc" clobber from __try_cmpxchg_user_asm()
Jose Alonso joalonsof@gmail.com net: usb: ax88179_178a needs FLAG_SEND_ZLP
Jiri Slaby jslaby@suse.cz tty: use new tty_insert_flip_string_and_push_buffer() in pty_write()
Jiri Slaby jslaby@suse.cz tty: extract tty_flip_buffer_commit() from tty_flip_buffer_push()
Jiri Slaby jslaby@suse.cz tty: drop tty_schedule_flip()
Jiri Slaby jslaby@suse.cz tty: the rest, stop using tty_schedule_flip()
Jiri Slaby jslaby@suse.cz tty: drivers/tty/, stop using tty_schedule_flip()
Luiz Augusto von Dentz luiz.von.dentz@intel.com Bluetooth: Fix bt_skb_sendmmsg not allocating partial chunks
Luiz Augusto von Dentz luiz.von.dentz@intel.com Bluetooth: SCO: Fix sco_send_frame returning skb->len
Luiz Augusto von Dentz luiz.von.dentz@intel.com Bluetooth: Fix passing NULL to PTR_ERR
Luiz Augusto von Dentz luiz.von.dentz@intel.com Bluetooth: RFCOMM: Replace use of memcpy_from_msg with bt_skb_sendmmsg
Luiz Augusto von Dentz luiz.von.dentz@intel.com Bluetooth: SCO: Replace use of memcpy_from_msg with bt_skb_sendmsg
Luiz Augusto von Dentz luiz.von.dentz@intel.com Bluetooth: Add bt_skb_sendmmsg helper
Luiz Augusto von Dentz luiz.von.dentz@intel.com Bluetooth: Add bt_skb_sendmsg helper
Takashi Iwai tiwai@suse.de ALSA: memalloc: Align buffer allocations in page size
Peter Zijlstra peterz@infradead.org bitfield.h: Fix "type of reg too small for mask" test
Thomas Gleixner tglx@linutronix.de x86/mce: Deduplicate exception handling
Michel Lespinasse walken@google.com mmap locking API: initial implementation as rwsem wrappers
Peter Zijlstra peterz@infradead.org x86/uaccess: Implement macros for CMPXCHG on user addresses
Al Viro viro@zeniv.linux.org.uk x86: get rid of small constant size cases in raw_copy_{to,from}_user()
Will Deacon will@kernel.org locking/refcount: Consolidate implementations of refcount_t
Will Deacon will@kernel.org locking/refcount: Consolidate REFCOUNT_{MAX,SATURATED} definitions
Will Deacon will@kernel.org locking/refcount: Move saturation warnings out of line
Will Deacon will@kernel.org locking/refcount: Improve performance of generic REFCOUNT_FULL code
Will Deacon will@kernel.org locking/refcount: Move the bulk of the REFCOUNT_FULL implementation into the <linux/refcount.h> header
Will Deacon will@kernel.org locking/refcount: Remove unused refcount_*_checked() variants
Will Deacon will@kernel.org locking/refcount: Ensure integer operands are treated as signed
Will Deacon will@kernel.org locking/refcount: Define constants for saturation and max refcount values
GUO Zihua guozihua@huawei.com ima: remove the IMA_TEMPLATE Kconfig option
Alexander Aring aahringo@redhat.com dlm: fix pending remove if msg allocation fails
Eric Dumazet edumazet@google.com bpf: Make sure mac_header was set before using it
Wang Cheng wanngchenng@gmail.com mm/mempolicy: fix uninit-value in mpol_rebind_policy()
Marc Kleine-Budde mkl@pengutronix.de spi: bcm2835: bcm2835_spi_handle_err(): fix NULL pointer deref for non DMA transfers
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix data-races around sysctl_tcp_max_reordering.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix a data-race around sysctl_tcp_rfc1337.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix a data-race around sysctl_tcp_stdurg.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix a data-race around sysctl_tcp_retrans_collapse.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix data-races around sysctl_tcp_slow_start_after_idle.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix a data-race around sysctl_tcp_thin_linear_timeouts.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix data-races around sysctl_tcp_recovery.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix a data-race around sysctl_tcp_early_retrans.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix data-races around sysctl knobs related to SYN option.
Kuniyuki Iwashima kuniyu@amazon.com udp: Fix a data-race around sysctl_udp_l3mdev_accept.
Kuniyuki Iwashima kuniyu@amazon.com ipv4: Fix a data-race around sysctl_fib_multipath_use_neigh.
Hristo Venev hristo@venev.name be2net: Fix buffer overflow in be_get_module_eeprom
Haibo Chen haibo.chen@nxp.com gpio: pca953x: only use single read/write for No AI mode
Piotr Skajewski piotrx.skajewski@intel.com ixgbe: Add locking to prevent panic when setting sriov_numvfs to zero
Dawid Lukwinski dawid.lukwinski@intel.com i40e: Fix erroneous adapter reinitialization during recovery process
Przemyslaw Patynowski przemyslawx.patynowski@intel.com iavf: Fix handling of dummy receive descriptors
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix data-races around sysctl_tcp_fastopen.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix data-races around sysctl_max_syn_backlog.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix a data-race around sysctl_tcp_tw_reuse.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix a data-race around sysctl_tcp_notsent_lowat.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix data-races around some timeout sysctl knobs.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix data-races around sysctl_tcp_reordering.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix data-races around sysctl_tcp_syncookies.
Kuniyuki Iwashima kuniyu@amazon.com igmp: Fix a data-race around sysctl_igmp_max_memberships.
Kuniyuki Iwashima kuniyu@amazon.com igmp: Fix data-races around sysctl_igmp_llm_reports.
Tariq Toukan tariqt@nvidia.com net/tls: Fix race in TLS device down flow
Junxiao Chang junxiao.chang@intel.com net: stmmac: fix dma queue left shift overflow issue
Robert Hancock robert.hancock@calian.com i2c: cadence: Change large transfer count reset logic to be unconditional
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix a data-race around sysctl_tcp_probe_interval.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix a data-race around sysctl_tcp_probe_threshold.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix a data-race around sysctl_tcp_mtu_probe_floor.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix data-races around sysctl_tcp_min_snd_mss.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix data-races around sysctl_tcp_base_mss.
Kuniyuki Iwashima kuniyu@amazon.com tcp: Fix data-races around sysctl_tcp_mtu_probing.
Kuniyuki Iwashima kuniyu@amazon.com tcp/dccp: Fix a data-race around sysctl_tcp_fwmark_accept.
Kuniyuki Iwashima kuniyu@amazon.com ip: Fix a data-race around sysctl_fwmark_reflect.
Kuniyuki Iwashima kuniyu@amazon.com ip: Fix data-races around sysctl_ip_nonlocal_bind.
Kuniyuki Iwashima kuniyu@amazon.com ip: Fix data-races around sysctl_ip_fwd_use_pmtu.
Kuniyuki Iwashima kuniyu@amazon.com ip: Fix data-races around sysctl_ip_no_pmtu_disc.
Lennert Buytenhek buytenh@wantstofly.org igc: Reinstate IGC_REMOVED logic and implement it properly
Peter Zijlstra peterz@infradead.org perf/core: Fix data race between perf_event_set_output() and perf_mmap_close()
William Dean williamsukatube@gmail.com pinctrl: ralink: Check for null return of devm_kcalloc
Miaoqian Lin linmq006@gmail.com power/reset: arm-versatile: Fix refcount leak in versatile_reboot_probe
Hangyu Hua hbh25y@gmail.com xfrm: xfrm_policy: fix a possible double xfrm_pols_put() in xfrm_bundle_lookup()
Pali Rohár pali@kernel.org serial: mvebu-uart: correctly report configured baudrate value
Jeffrey Hugo quic_jhugo@quicinc.com PCI: hv: Fix interrupt mapping for multi-MSI
Jeffrey Hugo quic_jhugo@quicinc.com PCI: hv: Reuse existing IRTE allocation in compose_msi_msg()
Jeffrey Hugo quic_jhugo@quicinc.com PCI: hv: Fix hv_arch_irq_unmask() for multi-MSI
Jeffrey Hugo quic_jhugo@quicinc.com PCI: hv: Fix multi-MSI to allow more than one MSI vector
Demi Marie Obenour demi@invisiblethingslab.com xen/gntdev: Ignore failure to unmap INVALID_GRANT_HANDLE
Eric Snowberg eric.snowberg@oracle.com lockdown: Fix kexec lockdown bypass with ima policy
Ido Schimmel idosch@nvidia.com mlxsw: spectrum_router: Fix IPv4 nexthop gateway indication
Ben Dooks ben.dooks@codethink.co.uk riscv: add as-options for modules with assembly components
Fabien Dessenne fabien.dessenne@foss.st.com pinctrl: stm32: fix optional IRQ support to gpios
-------------
Diffstat:
 Makefile | 4 +-
 arch/Kconfig | 21 --
 arch/alpha/kernel/srmcons.c | 2 +-
 arch/arm/Kconfig | 1 -
 arch/arm64/Kconfig | 1 -
 arch/riscv/Makefile | 1 +
 arch/s390/configs/debug_defconfig | 1 -
 arch/x86/Kconfig | 1 -
 arch/x86/include/asm/asm.h | 6 -
 arch/x86/include/asm/refcount.h | 126 ----------
 arch/x86/include/asm/uaccess.h | 154 +++++++++++-
 arch/x86/include/asm/uaccess_32.h | 27 ---
 arch/x86/include/asm/uaccess_64.h | 108 +--------
 arch/x86/kernel/cpu/mce/core.c | 34 +--
 arch/x86/mm/extable.c | 49 ----
 drivers/crypto/chelsio/chtls/chtls_cm.c | 6 +-
 drivers/gpio/gpio-pca953x.c | 3 +
 drivers/gpu/drm/i915/Kconfig.debug | 1 -
 drivers/i2c/busses/i2c-cadence.c | 30 +--
 drivers/misc/lkdtm/refcount.c | 8 -
 drivers/net/ethernet/emulex/benet/be_cmds.c | 10 +-
 drivers/net/ethernet/emulex/benet/be_cmds.h | 2 +-
 drivers/net/ethernet/emulex/benet/be_ethtool.c | 31 ++-
 drivers/net/ethernet/intel/i40e/i40e_main.c | 13 +-
 drivers/net/ethernet/intel/iavf/iavf_txrx.c | 5 +-
 drivers/net/ethernet/intel/igc/igc_regs.h | 2 +
 drivers/net/ethernet/intel/ixgbe/ixgbe.h | 1 +
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 3 +
 drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c | 6 +
 .../net/ethernet/mellanox/mlxsw/spectrum_router.c | 2 +-
 drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c | 3 +
 drivers/net/usb/ax88179_178a.c | 16 +-
 drivers/pci/controller/pci-hyperv.c | 100 ++++++--
 drivers/pinctrl/stm32/pinctrl-stm32.c | 18 +-
 drivers/power/reset/arm-versatile-reboot.c | 1 +
 drivers/s390/char/keyboard.h | 4 +-
 drivers/spi/spi-bcm2835.c | 12 +-
 drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c | 2 +
 drivers/staging/speakup/spk_ttyio.c | 4 +-
 drivers/tty/cyclades.c | 6 +-
 drivers/tty/goldfish.c | 2 +-
 drivers/tty/moxa.c | 4 +-
 drivers/tty/pty.c | 14 +-
 drivers/tty/serial/lpc32xx_hs.c | 2 +-
 drivers/tty/serial/mvebu-uart.c | 25 +-
 drivers/tty/tty_buffer.c | 66 +++--
 drivers/tty/vt/keyboard.c | 6 +-
 drivers/tty/vt/vt.c | 2 +-
 drivers/xen/gntdev.c | 3 +-
 fs/dlm/lock.c | 3 +-
 include/linux/bitfield.h | 19 +-
 include/linux/mm.h | 1 +
 include/linux/mmap_lock.h | 54 +++++
 include/linux/refcount.h | 269 ++++++++++++++++++---
 include/linux/tty_flip.h | 4 +-
 include/net/bluetooth/bluetooth.h | 65 +++++
 include/net/inet_sock.h | 5 +-
 include/net/ip.h | 4 +-
 include/net/tcp.h | 9 +-
 include/net/udp.h | 2 +-
 kernel/bpf/core.c | 8 +-
 kernel/events/core.c | 45 ++--
 lib/refcount.c | 255 +++----------------
 mm/mempolicy.c | 2 +-
 net/bluetooth/rfcomm/core.c | 50 +++-
 net/bluetooth/rfcomm/sock.c | 46 +---
 net/bluetooth/sco.c | 30 +--
 net/core/filter.c | 4 +-
 net/core/secure_seq.c | 4 +-
 net/ipv4/af_inet.c | 4 +-
 net/ipv4/fib_semantics.c | 2 +-
 net/ipv4/icmp.c | 2 +-
 net/ipv4/igmp.c | 23 +-
 net/ipv4/route.c | 2 +-
 net/ipv4/syncookies.c | 9 +-
 net/ipv4/tcp.c | 10 +-
 net/ipv4/tcp_fastopen.c | 4 +-
 net/ipv4/tcp_input.c | 51 ++--
 net/ipv4/tcp_ipv4.c | 2 +-
 net/ipv4/tcp_metrics.c | 3 +-
 net/ipv4/tcp_minisocks.c | 2 +-
 net/ipv4/tcp_output.c | 29 +--
 net/ipv4/tcp_recovery.c | 6 +-
 net/ipv4/tcp_timer.c | 20 +-
 net/ipv6/af_inet6.c | 2 +-
 net/ipv6/syncookies.c | 3 +-
 net/sctp/protocol.c | 2 +-
 net/tls/tls_device.c | 8 +-
 net/xfrm/xfrm_policy.c | 5 +-
 net/xfrm/xfrm_state.c | 2 +-
 security/integrity/ima/Kconfig | 12 +-
 security/integrity/ima/ima_policy.c | 4 +
 sound/core/memalloc.c | 1 +
 93 files changed, 1046 insertions(+), 990 deletions(-)
From: Fabien Dessenne fabien.dessenne@foss.st.com
commit a1d4ef1adf8bbd302067534ead671a94759687ed upstream.
To act as an interrupt controller, a gpio bank relies on the "interrupt-parent" of the pin controller. When this optional "interrupt-parent" is missing, do not create any IRQ domain.
This fixes a "NULL pointer in stm32_gpio_domain_alloc()" kernel crash when the interrupt-parent = <exti> property is not declared in the Device Tree.
Fixes: 0eb9f683336d ("pinctrl: Add IRQ support to STM32 gpios")
Signed-off-by: Fabien Dessenne fabien.dessenne@foss.st.com
Link: https://lore.kernel.org/r/20220627142350.742973-1-fabien.dessenne@foss.st.co...
Signed-off-by: Linus Walleij linus.walleij@linaro.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/pinctrl/stm32/pinctrl-stm32.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

--- a/drivers/pinctrl/stm32/pinctrl-stm32.c
+++ b/drivers/pinctrl/stm32/pinctrl-stm32.c
@@ -1215,15 +1215,17 @@ static int stm32_gpiolib_register_bank(s
 	bank->bank_ioport_nr = bank_ioport_nr;
 	spin_lock_init(&bank->lock);
 
-	/* create irq hierarchical domain */
-	bank->fwnode = of_node_to_fwnode(np);
+	if (pctl->domain) {
+		/* create irq hierarchical domain */
+		bank->fwnode = of_node_to_fwnode(np);
+
+		bank->domain = irq_domain_create_hierarchy(pctl->domain, 0, STM32_GPIO_IRQ_LINE,
+							   bank->fwnode, &stm32_gpio_domain_ops,
+							   bank);
 
-	bank->domain = irq_domain_create_hierarchy(pctl->domain, 0,
-						   STM32_GPIO_IRQ_LINE, bank->fwnode,
-						   &stm32_gpio_domain_ops, bank);
-
-	if (!bank->domain)
-		return -ENODEV;
+		if (!bank->domain)
+			return -ENODEV;
+	}
 
 	err = gpiochip_add_data(&bank->gpio_chip, bank);
 	if (err) {
@@ -1393,6 +1395,8 @@ int stm32_pctl_probe(struct platform_dev
 	pctl->domain = stm32_pctrl_get_irq_domain(np);
 	if (IS_ERR(pctl->domain))
 		return PTR_ERR(pctl->domain);
+	if (!pctl->domain)
+		dev_warn(dev, "pinctrl without interrupt support\n");
 
 	/* hwspinlock is optional */
 	hwlock_id = of_hwspin_lock_get_id(pdev->dev.of_node, 0);
From: Ben Dooks ben.dooks@codethink.co.uk
commit c1f6eff304e4dfa4558b6a8c6b2d26a91db6c998 upstream.
When trying to load modules built for RISC-V which include assembly files the kernel loader errors with "unexpected relocation type 'R_RISCV_ALIGN'" due to R_RISCV_ALIGN relocations being generated by the assembler.
The R_RISCV_ALIGN relocations can be removed at the expense of code space by adding -mno-relax to gcc and as. In commit 7a8e7da42250138 ("RISC-V: Fixes to module loading") -mno-relax is added to the build variable KBUILD_CFLAGS_MODULE. See [1] for more info.
The issue is that when kbuild builds a .S file, it invokes gcc with the -mno-relax flag, but this is not being passed through to the assembler. Adding -Wa,-mno-relax to KBUILD_AFLAGS_MODULE ensures that the assembler is invoked correctly. This may have now been fixed in gcc[2] and this addition should not stop newer gcc and as from working.
[1] https://github.com/riscv/riscv-elf-psabi-doc/issues/183
[2] https://github.com/gcc-mirror/gcc/commit/3b0a7d624e64eeb81e4d5e8c62c46d86ef5...
Signed-off-by: Ben Dooks ben.dooks@codethink.co.uk
Reviewed-by: Bin Meng bmeng.cn@gmail.com
Link: https://lore.kernel.org/r/20220529152200.609809-1-ben.dooks@codethink.co.uk
Fixes: ab1ef68e5401 ("RISC-V: Add sections of PLT and GOT for kernel module")
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt palmer@rivosinc.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 arch/riscv/Makefile | 1 +
 1 file changed, 1 insertion(+)

--- a/arch/riscv/Makefile
+++ b/arch/riscv/Makefile
@@ -74,6 +74,7 @@ ifeq ($(CONFIG_PERF_EVENTS),y)
 endif
 
 KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-relax)
+KBUILD_AFLAGS_MODULE += $(call as-option,-Wa$(comma)-mno-relax)
 
 # GCC versions that support the "-mstrict-align" option default to allowing
 # unaligned accesses. While unaligned accesses are explicitly allowed in the
From: Ido Schimmel idosch@nvidia.com
commit e5ec6a2513383fe2ecc2ee3b5f51d97acbbcd4d8 upstream.
mlxsw needs to distinguish nexthops with a gateway from connected nexthops in order to write the former to the adjacency table of the device. The check used to rely on the fact that nexthops with a gateway have a 'link' scope whereas connected nexthops have a 'host' scope. This is no longer correct after commit 747c14307214 ("ip: fix dflt addr selection for connected nexthop").
Fix that by instead checking the address family of the gateway IP. This is a more direct way and also consistent with the IPv6 counterpart in mlxsw_sp_rt6_is_gateway().
Cc: stable@vger.kernel.org
Fixes: 747c14307214 ("ip: fix dflt addr selection for connected nexthop")
Fixes: 597cfe4fc339 ("nexthop: Add support for IPv4 nexthops")
Signed-off-by: Ido Schimmel idosch@nvidia.com
Reviewed-by: Amit Cohen amcohen@nvidia.com
Reviewed-by: Nicolas Dichtel nicolas.dichtel@6wind.com
Reviewed-by: David Ahern dsahern@kernel.org
Signed-off-by: David S. Miller davem@davemloft.net
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
@@ -3871,7 +3871,7 @@ static bool mlxsw_sp_fi_is_gateway(const
 {
 	const struct fib_nh *nh = fib_info_nh(fi, 0);
 
-	return nh->fib_nh_scope == RT_SCOPE_LINK ||
+	return nh->fib_nh_gw_family ||
 	       mlxsw_sp_nexthop4_ipip_type(mlxsw_sp, nh, NULL);
 }
From: Eric Snowberg eric.snowberg@oracle.com
commit 543ce63b664e2c2f9533d089a4664b559c3e6b5b upstream.
The lockdown LSM is primarily used in conjunction with UEFI Secure Boot. This LSM may also be used on machines without UEFI. It can also be enabled when UEFI Secure Boot is disabled. One of lockdown's features is to prevent kexec from loading untrusted kernels. Lockdown can be enabled through a bootparam or after the kernel has booted through securityfs.
If IMA appraisal is used with the "ima_appraise=log" boot param, lockdown can be defeated with kexec on any machine when Secure Boot is disabled or unavailable. IMA prevents setting "ima_appraise=log" from the boot param when Secure Boot is enabled, but this does not cover cases where lockdown is used without Secure Boot.
To defeat lockdown, boot without Secure Boot and add ima_appraise=log to the kernel command line; then:
$ echo "integrity" > /sys/kernel/security/lockdown $ echo "appraise func=KEXEC_KERNEL_CHECK appraise_type=imasig" > \ /sys/kernel/security/ima/policy $ kexec -ls unsigned-kernel
Add a call to verify ima appraisal is set to "enforce" whenever lockdown is enabled. This fixes CVE-2022-21505.
Cc: stable@vger.kernel.org
Fixes: 29d3c1c8dfe7 ("kexec: Allow kexec_file() with appropriate IMA policy when locked down")
Signed-off-by: Eric Snowberg eric.snowberg@oracle.com
Acked-by: Mimi Zohar zohar@linux.ibm.com
Reviewed-by: John Haxby john.haxby@oracle.com
Signed-off-by: Linus Torvalds torvalds@linux-foundation.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 security/integrity/ima/ima_policy.c | 4 ++++
 1 file changed, 4 insertions(+)

--- a/security/integrity/ima/ima_policy.c
+++ b/security/integrity/ima/ima_policy.c
@@ -1542,6 +1542,10 @@ bool ima_appraise_signature(enum kernel_
 	if (id >= READING_MAX_ID)
 		return false;
 
+	if (id == READING_KEXEC_IMAGE && !(ima_appraise & IMA_APPRAISE_ENFORCE)
+	    && security_locked_down(LOCKDOWN_KEXEC))
+		return false;
+
 	func = read_idmap[id] ?: FILE_CHECK;
 
 	rcu_read_lock();
From: Demi Marie Obenour demi@invisiblethingslab.com
commit 166d3863231667c4f64dee72b77d1102cdfad11f upstream.
The error paths of gntdev_mmap() can call unmap_grant_pages() even though not all of the pages have been successfully mapped. This will trigger the WARN_ON()s in __unmap_grant_pages_done(). The number of warnings can be very large; I have observed thousands of lines of warnings in the systemd journal.
Avoid this problem by only warning on unmapping failure if the handle being unmapped is not INVALID_GRANT_HANDLE. The handle field of any page that was not successfully mapped will be INVALID_GRANT_HANDLE, so this catches all cases where unmapping can legitimately fail.
Fixes: dbe97cff7dd9 ("xen/gntdev: Avoid blocking in unmap_grant_pages()")
Cc: stable@vger.kernel.org
Suggested-by: Juergen Gross jgross@suse.com
Signed-off-by: Demi Marie Obenour demi@invisiblethingslab.com
Reviewed-by: Oleksandr Tyshchenko oleksandr_tyshchenko@epam.com
Reviewed-by: Juergen Gross jgross@suse.com
Link: https://lore.kernel.org/r/20220710230522.1563-1-demi@invisiblethingslab.com
Signed-off-by: Juergen Gross jgross@suse.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/xen/gntdev.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -413,7 +413,8 @@ static void __unmap_grant_pages_done(int
 	unsigned int offset = data->unmap_ops - map->unmap_ops;
 
 	for (i = 0; i < data->count; i++) {
-		WARN_ON(map->unmap_ops[offset+i].status);
+		WARN_ON(map->unmap_ops[offset+i].status &&
+			map->unmap_ops[offset+i].handle != -1);
 		pr_debug("unmap handle=%d st=%d\n",
 			 map->unmap_ops[offset+i].handle,
 			 map->unmap_ops[offset+i].status);
From: Jeffrey Hugo quic_jhugo@quicinc.com
commit 08e61e861a0e47e5e1a3fb78406afd6b0cea6b6d upstream.
If the allocation of multiple MSI vectors for multi-MSI fails in the core PCI framework, the framework will retry the allocation as a single MSI vector, assuming that meets the min_vecs specified by the requesting driver.
Hyper-V advertises that multi-MSI is supported, but reuses the VECTOR domain to implement that for x86. The VECTOR domain does not support multi-MSI, so the alloc will always fail and fallback to a single MSI allocation.
In short, Hyper-V advertises a capability it does not implement.
Hyper-V can support multi-MSI because it coordinates with the hypervisor to map the MSIs in the IOMMU's interrupt remapper, which is something the VECTOR domain does not have. Therefore the fix is simple - copy what the x86 IOMMU drivers (AMD/Intel-IR) do by removing X86_IRQ_ALLOC_CONTIGUOUS_VECTORS after calling the VECTOR domain's pci_msi_prepare().
5.4 backport - adds the hv_msi_prepare wrapper function. X86_IRQ_ALLOC_TYPE_PCI_MSI changed to X86_IRQ_ALLOC_TYPE_MSI (same value).
Fixes: 4daace0d8ce8 ("PCI: hv: Add paravirtual PCI front-end for Microsoft Hyper-V VMs")
Signed-off-by: Jeffrey Hugo quic_jhugo@quicinc.com
Reviewed-by: Dexuan Cui decui@microsoft.com
Link: https://lore.kernel.org/r/1649856981-14649-1-git-send-email-quic_jhugo@quici...
Signed-off-by: Wei Liu wei.liu@kernel.org
Signed-off-by: Carl Vanderlip quic_carlv@quicinc.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/pci/controller/pci-hyperv.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -1172,6 +1172,21 @@ static void hv_irq_mask(struct irq_data
 	pci_msi_mask_irq(data);
 }
 
+static int hv_msi_prepare(struct irq_domain *domain, struct device *dev,
+			  int nvec, msi_alloc_info_t *info)
+{
+	int ret = pci_msi_prepare(domain, dev, nvec, info);
+
+	/*
+	 * By using the interrupt remapper in the hypervisor IOMMU, contiguous
+	 * CPU vectors is not needed for multi-MSI
+	 */
+	if (info->type == X86_IRQ_ALLOC_TYPE_MSI)
+		info->flags &= ~X86_IRQ_ALLOC_CONTIGUOUS_VECTORS;
+
+	return ret;
+}
+
 /**
  * hv_irq_unmask() - "Unmask" the IRQ by setting its current
  * affinity.
@@ -1518,7 +1533,7 @@ static irq_hw_number_t hv_msi_domain_ops
 
 static struct msi_domain_ops hv_msi_ops = {
 	.get_hwirq	= hv_msi_domain_ops_get_hwirq,
-	.msi_prepare	= pci_msi_prepare,
+	.msi_prepare	= hv_msi_prepare,
 	.set_desc	= pci_msi_set_desc,
 	.msi_free	= hv_msi_free,
 };
From: Jeffrey Hugo quic_jhugo@quicinc.com
commit 455880dfe292a2bdd3b4ad6a107299fce610e64b upstream.
In the multi-MSI case, hv_arch_irq_unmask() will only operate on the first MSI of the N allocated. This is because only the first msi_desc is cached and it is shared by all the MSIs of the multi-MSI block. This means that hv_arch_irq_unmask() gets the correct address, but the wrong data (always 0).
This can break MSIs.
Let's assume MSI0 is vector 34 on CPU0, and MSI1 is vector 33 on CPU0.
hv_arch_irq_unmask() is called on MSI0. It uses a hypercall to configure the MSI address and data (0) to vector 34 of CPU0. This is correct. Then hv_arch_irq_unmask is called on MSI1. It uses another hypercall to configure the MSI address and data (0) to vector 33 of CPU0. This is wrong, and results in both MSI0 and MSI1 being routed to vector 33. Linux will observe extra instances of MSI1 and no instances of MSI0 despite the endpoint device behaving correctly.
For the multi-MSI case, we need unique address and data info for each MSI, but the cached msi_desc does not provide that. However, that information can be gotten from the int_desc cached in the chip_data by compose_msi_msg(). Fix the multi-MSI case to use that cached information instead. Since hv_set_msi_entry_from_desc() is no longer applicable, remove it.
5.4 backport - hv_set_msi_entry_from_desc doesn't exist to be removed. msi_desc replaces msi_entry for location int_desc is written to.
Signed-off-by: Jeffrey Hugo quic_jhugo@quicinc.com
Reviewed-by: Michael Kelley mikelley@microsoft.com
Link: https://lore.kernel.org/r/1651068453-29588-1-git-send-email-quic_jhugo@quici...
Signed-off-by: Wei Liu wei.liu@kernel.org
Signed-off-by: Carl Vanderlip quic_carlv@quicinc.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/pci/controller/pci-hyperv.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -1202,6 +1202,7 @@ static void hv_irq_unmask(struct irq_dat
 	struct msi_desc *msi_desc = irq_data_get_msi_desc(data);
 	struct irq_cfg *cfg = irqd_cfg(data);
 	struct retarget_msi_interrupt *params;
+	struct tran_int_desc *int_desc;
 	struct hv_pcibus_device *hbus;
 	struct cpumask *dest;
 	cpumask_var_t tmp;
@@ -1216,6 +1217,7 @@ static void hv_irq_unmask(struct irq_dat
 	pdev = msi_desc_to_pci_dev(msi_desc);
 	pbus = pdev->bus;
 	hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata);
+	int_desc = data->chip_data;
 
 	spin_lock_irqsave(&hbus->retarget_msi_interrupt_lock, flags);
 
@@ -1223,8 +1225,8 @@ static void hv_irq_unmask(struct irq_dat
 	memset(params, 0, sizeof(*params));
 	params->partition_id = HV_PARTITION_ID_SELF;
 	params->int_entry.source = 1; /* MSI(-X) */
-	params->int_entry.address = msi_desc->msg.address_lo;
-	params->int_entry.data = msi_desc->msg.data;
+	params->int_entry.address = int_desc->address & 0xffffffff;
+	params->int_entry.data = int_desc->data;
 	params->device_id = (hbus->hdev->dev_instance.b[5] << 24) |
 			   (hbus->hdev->dev_instance.b[4] << 16) |
 			   (hbus->hdev->dev_instance.b[7] << 8)  |
From: Jeffrey Hugo quic_jhugo@quicinc.com
commit b4b77778ecc5bfbd4e77de1b2fd5c1dd3c655f1f upstream.
Currently if compose_msi_msg() is called multiple times, it will free any previous IRTE allocation, and generate a new allocation. While nothing prevents this from occurring, it is extraneous when Linux could just reuse the existing allocation and avoid a bunch of overhead.
However, when future IRTE allocations operate on blocks of MSIs instead of a single line, freeing the allocation will impact all of the lines. This could cause an issue where an allocation of N MSIs occurs, then some of the lines are retargeted, and finally the allocation is freed/reallocated. The freeing of the allocation removes all of the configuration for the entire block, which requires all the lines to be retargeted, which might not happen since some lines might already be unmasked/active.
Signed-off-by: Jeffrey Hugo quic_jhugo@quicinc.com
Reviewed-by: Dexuan Cui decui@microsoft.com
Tested-by: Dexuan Cui decui@microsoft.com
Tested-by: Michael Kelley mikelley@microsoft.com
Link: https://lore.kernel.org/r/1652282582-21595-1-git-send-email-quic_jhugo@quici...
Signed-off-by: Wei Liu wei.liu@kernel.org
Signed-off-by: Carl Vanderlip quic_carlv@quicinc.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/pci/controller/pci-hyperv.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -1387,6 +1387,15 @@ static void hv_compose_msi_msg(struct ir
 	u32 size;
 	int ret;
 
+	/* Reuse the previous allocation */
+	if (data->chip_data) {
+		int_desc = data->chip_data;
+		msg->address_hi = int_desc->address >> 32;
+		msg->address_lo = int_desc->address & 0xffffffff;
+		msg->data = int_desc->data;
+		return;
+	}
+
 	pdev = msi_desc_to_pci_dev(irq_data_get_msi_desc(data));
 	dest = irq_data_get_effective_affinity_mask(data);
 	pbus = pdev->bus;
@@ -1395,13 +1404,6 @@ static void hv_compose_msi_msg(struct ir
 	if (!hpdev)
 		goto return_null_message;
 
-	/* Free any previous message that might have already been composed. */
-	if (data->chip_data) {
-		int_desc = data->chip_data;
-		data->chip_data = NULL;
-		hv_int_desc_free(hpdev, int_desc);
-	}
-
 	int_desc = kzalloc(sizeof(*int_desc), GFP_ATOMIC);
 	if (!int_desc)
 		goto drop_reference;
From: Jeffrey Hugo quic_jhugo@quicinc.com
commit a2bad844a67b1c7740bda63e87453baf63c3a7f7 upstream.
According to Dexuan, the hypervisor folks believe that multi-msi allocations are not correct. compose_msi_msg() will allocate multi-msi one by one. However, multi-msi is a block of related MSIs, with alignment requirements. In order for the hypervisor to allocate properly aligned and consecutive entries in the IOMMU Interrupt Remapping Table, there should be a single mapping request that requests all of the multi-msi vectors in one shot.
Dexuan suggests detecting the multi-msi case and composing a single request related to the first MSI. Then for the other MSIs in the same block, use the cached information. This appears to be viable, so do it.
5.4 backport - add hv_msi_get_int_vector helper function. Fixed merge conflict due to delivery_mode name change (APIC_DELIVERY_MODE_FIXED is the value given to dest_Fixed). Removed unused variable in hv_compose_msi_msg. Fixed reference to msi_desc->pci to point to the same is_msix variable. Removed changes to compose_msi_req_v3 since it doesn't exist yet.
Suggested-by: Dexuan Cui decui@microsoft.com
Signed-off-by: Jeffrey Hugo quic_jhugo@quicinc.com
Reviewed-by: Dexuan Cui decui@microsoft.com
Tested-by: Michael Kelley mikelley@microsoft.com
Link: https://lore.kernel.org/r/1652282599-21643-1-git-send-email-quic_jhugo@quici...
Signed-off-by: Wei Liu wei.liu@kernel.org
Signed-off-by: Carl Vanderlip quic_carlv@quicinc.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/pci/controller/pci-hyperv.c | 61 +++++++++++++++++++++++++++++++-----
 1 file changed, 53 insertions(+), 8 deletions(-)

--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -1110,6 +1110,10 @@ static void hv_int_desc_free(struct hv_p
 		u8 buffer[sizeof(struct pci_delete_interrupt)];
 	} ctxt;
 
+	if (!int_desc->vector_count) {
+		kfree(int_desc);
+		return;
+	}
 	memset(&ctxt, 0, sizeof(ctxt));
 	int_pkt = (struct pci_delete_interrupt *)&ctxt.pkt.message;
 	int_pkt->message_type.type =
@@ -1172,6 +1176,13 @@ static void hv_irq_mask(struct irq_data
 	pci_msi_mask_irq(data);
 }
 
+static unsigned int hv_msi_get_int_vector(struct irq_data *data)
+{
+	struct irq_cfg *cfg = irqd_cfg(data);
+
+	return cfg->vector;
+}
+
 static int hv_msi_prepare(struct irq_domain *domain, struct device *dev,
 			  int nvec, msi_alloc_info_t *info)
 {
@@ -1313,12 +1324,12 @@ static void hv_pci_compose_compl(void *c
 
 static u32 hv_compose_msi_req_v1(
 	struct pci_create_interrupt *int_pkt, struct cpumask *affinity,
-	u32 slot, u8 vector)
+	u32 slot, u8 vector, u8 vector_count)
 {
 	int_pkt->message_type.type = PCI_CREATE_INTERRUPT_MESSAGE;
 	int_pkt->wslot.slot = slot;
 	int_pkt->int_desc.vector = vector;
-	int_pkt->int_desc.vector_count = 1;
+	int_pkt->int_desc.vector_count = vector_count;
 	int_pkt->int_desc.delivery_mode = dest_Fixed;
 
 	/*
@@ -1332,14 +1343,14 @@ static u32 hv_compose_msi_req_v1(
 
 static u32 hv_compose_msi_req_v2(
 	struct pci_create_interrupt2 *int_pkt, struct cpumask *affinity,
-	u32 slot, u8 vector)
+	u32 slot, u8 vector, u8 vector_count)
 {
 	int cpu;
 
 	int_pkt->message_type.type = PCI_CREATE_INTERRUPT_MESSAGE2;
 	int_pkt->wslot.slot = slot;
 	int_pkt->int_desc.vector = vector;
-	int_pkt->int_desc.vector_count = 1;
+	int_pkt->int_desc.vector_count = vector_count;
 	int_pkt->int_desc.delivery_mode = dest_Fixed;
 
 	/*
@@ -1367,7 +1378,6 @@ static u32 hv_compose_msi_req_v2(
  */
 static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
 {
-	struct irq_cfg *cfg = irqd_cfg(data);
 	struct hv_pcibus_device *hbus;
 	struct hv_pci_dev *hpdev;
 	struct pci_bus *pbus;
@@ -1376,6 +1386,8 @@ static void hv_compose_msi_msg(struct ir
 	unsigned long flags;
 	struct compose_comp_ctxt comp;
 	struct tran_int_desc *int_desc;
+	struct msi_desc *msi_desc;
+	u8 vector, vector_count;
 	struct {
 		struct pci_packet pci_pkt;
 		union {
@@ -1396,7 +1408,8 @@ static void hv_compose_msi_msg(struct ir
 		return;
 	}
 
-	pdev = msi_desc_to_pci_dev(irq_data_get_msi_desc(data));
+	msi_desc = irq_data_get_msi_desc(data);
+	pdev = msi_desc_to_pci_dev(msi_desc);
 	dest = irq_data_get_effective_affinity_mask(data);
 	pbus = pdev->bus;
 	hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata);
@@ -1408,6 +1421,36 @@ static void hv_compose_msi_msg(struct ir
 	if (!int_desc)
 		goto drop_reference;
 
+	if (!msi_desc->msi_attrib.is_msix && msi_desc->nvec_used > 1) {
+		/*
+		 * If this is not the first MSI of Multi MSI, we already have
+		 * a mapping.  Can exit early.
+		 */
+		if (msi_desc->irq != data->irq) {
+			data->chip_data = int_desc;
+			int_desc->address = msi_desc->msg.address_lo |
+					    (u64)msi_desc->msg.address_hi << 32;
+			int_desc->data = msi_desc->msg.data +
+					 (data->irq - msi_desc->irq);
+			msg->address_hi = msi_desc->msg.address_hi;
+			msg->address_lo = msi_desc->msg.address_lo;
+			msg->data = int_desc->data;
+			put_pcichild(hpdev);
+			return;
+		}
+		/*
+		 * The vector we select here is a dummy value.  The correct
+		 * value gets sent to the hypervisor in unmask().  This needs
+		 * to be aligned with the count, and also not zero.  Multi-msi
+		 * is powers of 2 up to 32, so 32 will always work here.
+		 */
+		vector = 32;
+		vector_count = msi_desc->nvec_used;
+	} else {
+		vector = hv_msi_get_int_vector(data);
+		vector_count = 1;
+	}
+
 	memset(&ctxt, 0, sizeof(ctxt));
 	init_completion(&comp.comp_pkt.host_event);
 	ctxt.pci_pkt.completion_func = hv_pci_compose_compl;
@@ -1418,14 +1461,16 @@ static void hv_compose_msi_msg(struct ir
 		size = hv_compose_msi_req_v1(&ctxt.int_pkts.v1,
 					dest,
 					hpdev->desc.win_slot.slot,
-					cfg->vector);
+					vector,
+					vector_count);
 		break;
 
 	case PCI_PROTOCOL_VERSION_1_2:
 		size = hv_compose_msi_req_v2(&ctxt.int_pkts.v2,
 					dest,
 					hpdev->desc.win_slot.slot,
-					cfg->vector);
+					vector,
+					vector_count);
 		break;
 
 	default:
From: Pali Rohár pali@kernel.org
commit 4f532c1e25319e42996ec18a1f473fd50c8e575d upstream.
Functions tty_termios_encode_baud_rate() and uart_update_timeout() should be called with the baudrate value which was set to hardware. Linux then reports exact values via ioctl(TCGETS2) to userspace.

Change the mvebu_uart_baud_rate_set() function to return the baudrate value which was set to hardware and propagate this value to the above mentioned functions.

With this change, userspace will see the precise value in the termios c_ospeed field.
Fixes: 68a0db1d7da2 ("serial: mvebu-uart: add function to change baudrate")
Cc: stable stable@kernel.org
Reviewed-by: Ilpo Järvinen ilpo.jarvinen@linux.intel.com
Signed-off-by: Pali Rohár pali@kernel.org
Link: https://lore.kernel.org/r/20220628100922.10717-1-pali@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/tty/serial/mvebu-uart.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

--- a/drivers/tty/serial/mvebu-uart.c
+++ b/drivers/tty/serial/mvebu-uart.c
@@ -443,13 +443,13 @@ static void mvebu_uart_shutdown(struct u
 	}
 }
 
-static int mvebu_uart_baud_rate_set(struct uart_port *port, unsigned int baud)
+static unsigned int mvebu_uart_baud_rate_set(struct uart_port *port, unsigned int baud)
 {
 	unsigned int d_divisor, m_divisor;
 	u32 brdv, osamp;
 
 	if (!port->uartclk)
-		return -EOPNOTSUPP;
+		return 0;
 
 	/*
 	 * The baudrate is derived from the UART clock thanks to two divisors:
@@ -473,7 +473,7 @@ static int mvebu_uart_baud_rate_set(stru
 	osamp &= ~OSAMP_DIVISORS_MASK;
 	writel(osamp, port->membase + UART_OSAMP);
 
-	return 0;
+	return DIV_ROUND_CLOSEST(port->uartclk, d_divisor * m_divisor);
 }
 
 static void mvebu_uart_set_termios(struct uart_port *port,
@@ -510,15 +510,11 @@ static void mvebu_uart_set_termios(struc
 		max_baud = 230400;
 
 	baud = uart_get_baud_rate(port, termios, old, min_baud, max_baud);
-	if (mvebu_uart_baud_rate_set(port, baud)) {
-		/* No clock available, baudrate cannot be changed */
-		if (old)
-			baud = uart_get_baud_rate(port, old, NULL,
-						  min_baud, max_baud);
-	} else {
-		tty_termios_encode_baud_rate(termios, baud, baud);
-		uart_update_timeout(port, termios->c_cflag, baud);
-	}
+	baud = mvebu_uart_baud_rate_set(port, baud);
+
+	/* In case baudrate cannot be changed, report previous old value */
+	if (baud == 0 && old)
+		baud = tty_termios_baud_rate(old);
 
 	/* Only the following flag changes are supported */
 	if (old) {
@@ -529,6 +525,11 @@ static void mvebu_uart_set_termios(struc
 		termios->c_cflag |= CS8;
 	}
 
+	if (baud != 0) {
+		tty_termios_encode_baud_rate(termios, baud, baud);
+		uart_update_timeout(port, termios->c_cflag, baud);
+	}
+
 	spin_unlock_irqrestore(&port->lock, flags);
 }
From: Hangyu Hua hbh25y@gmail.com
[ Upstream commit f85daf0e725358be78dfd208dea5fd665d8cb901 ]
xfrm_policy_lookup() will call xfrm_pol_hold_rcu() to take a reference on pols[0]. This reference can be dropped in xfrm_expand_policies() when xfrm_expand_policies() returns an error; pols[0]'s refcount is already balanced there. But xfrm_bundle_lookup() will also call xfrm_pols_put() with num_pols == 1 to drop this reference when xfrm_expand_policies() returns an error.

This patch also fixes an illegal address access: pols[0] will hold an error pointer when xfrm_policy_lookup() fails, which leads xfrm_pols_put() in xfrm_bundle_lookup()'s error path to dereference an illegal address.
Fix these by setting num_pols = 0 in xfrm_expand_policies()'s error path.
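For readers unfamiliar with the error-pointer convention involved here, the following sketch mirrors the kernel's include/linux/err.h helpers in simplified form (the simplification is mine, not part of this patch); it shows why a pols[0] holding an error pointer must never reach xfrm_pols_put():

	#define MAX_ERRNO	4095

	/* Errors are encoded in the topmost page of the address space... */
	static inline void *ERR_PTR(long error)
	{
		return (void *)error;
	}

	/* ...so the errno can be recovered from the pointer value... */
	static inline long PTR_ERR(const void *ptr)
	{
		return (long)ptr;
	}

	/* ...and any pointer in that range is an error, not a valid object. */
	static inline int IS_ERR(const void *ptr)
	{
		return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
	}

Dropping a reference through such a pointer dereferences a near-top-of-memory address, which is the illegal access described above.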
Fixes: 80c802f3073e ("xfrm: cache bundles instead of policies for outgoing flows")
Signed-off-by: Hangyu Hua hbh25y@gmail.com
Signed-off-by: Steffen Klassert steffen.klassert@secunet.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 net/xfrm/xfrm_policy.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
index 3ecb77c58c44..28a8cdef8e51 100644
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -2679,8 +2679,10 @@ static int xfrm_expand_policies(const struct flowi *fl, u16 family,
 		*num_xfrms = 0;
 		return 0;
 	}
-	if (IS_ERR(pols[0]))
+	if (IS_ERR(pols[0])) {
+		*num_pols = 0;
 		return PTR_ERR(pols[0]);
+	}
 
 	*num_xfrms = pols[0]->xfrm_nr;
 
@@ -2695,6 +2697,7 @@ static int xfrm_expand_policies(const struct flowi *fl, u16 family,
 	if (pols[1]) {
 		if (IS_ERR(pols[1])) {
 			xfrm_pols_put(pols, *num_pols);
+			*num_pols = 0;
 			return PTR_ERR(pols[1]);
 		}
 		(*num_pols)++;
From: Miaoqian Lin linmq006@gmail.com
[ Upstream commit 80192eff64eee9b3bc0594a47381937b94b9d65a ]
of_find_matching_node_and_match() returns a node pointer with its refcount incremented, so we should call of_node_put() on it when it is no longer needed. Add the missing of_node_put() to avoid a refcount leak.
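As a hedged illustration of the lookup/put pairing this patch restores (the match table and compatible string below are stand-ins, not this driver's):

	#include <linux/of.h>

	/* Stand-in match table for illustration only. */
	static const struct of_device_id example_of_match[] = {
		{ .compatible = "vendor,example-reboot" },
		{ /* sentinel */ }
	};

	static int example_probe(void)
	{
		const struct of_device_id *match;
		struct device_node *np;

		np = of_find_matching_node_and_match(NULL, example_of_match, &match);
		if (!np)
			return -ENODEV;

		/* ... use np and match->data here ... */

		of_node_put(np);	/* drop the reference taken by the lookup */
		return 0;
	}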
Fixes: 0e545f57b708 ("power: reset: driver for the Versatile syscon reboot")
Signed-off-by: Miaoqian Lin linmq006@gmail.com
Reviewed-by: Linus Walleij linus.walleij@linaro.org
Signed-off-by: Sebastian Reichel sebastian.reichel@collabora.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/power/reset/arm-versatile-reboot.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/power/reset/arm-versatile-reboot.c b/drivers/power/reset/arm-versatile-reboot.c
index 08d0a07b58ef..c7624d7611a7 100644
--- a/drivers/power/reset/arm-versatile-reboot.c
+++ b/drivers/power/reset/arm-versatile-reboot.c
@@ -146,6 +146,7 @@ static int __init versatile_reboot_probe(void)
 	versatile_reboot_type = (enum versatile_reboot)reboot_id->data;
 
 	syscon_regmap = syscon_node_to_regmap(np);
+	of_node_put(np);
 	if (IS_ERR(syscon_regmap))
 		return PTR_ERR(syscon_regmap);
From: William Dean williamsukatube@gmail.com
[ Upstream commit c3b821e8e406d5650e587b7ac624ac24e9b780a8 ]
Because of the possible failure of the allocation, p->func[i]->pins might be a NULL pointer, which would cause a NULL pointer dereference later. Therefore, it is better to check it and directly return -ENOMEM on failure, without releasing the memory manually, because the comment of devm_kmalloc() says "Memory allocated with this function is automatically freed on driver detach.".
Fixes: a86854d0c599b ("treewide: devm_kzalloc() -> devm_kcalloc()")
Reported-by: Hacash Robot hacashRobot@santino.com
Signed-off-by: William Dean williamsukatube@gmail.com
Link: https://lore.kernel.org/r/20220710154922.2610876-1-williamsukatube@163.com
Signed-off-by: Linus Walleij linus.walleij@linaro.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c b/drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c
index 0ba4e4e070a9..7cfbdfb10e23 100644
--- a/drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c
+++ b/drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c
@@ -267,6 +267,8 @@ static int rt2880_pinmux_pins(struct rt2880_priv *p)
 						p->func[i]->pin_count,
 						sizeof(int),
 						GFP_KERNEL);
+		if (!p->func[i]->pins)
+			return -ENOMEM;
 		for (j = 0; j < p->func[i]->pin_count; j++)
 			p->func[i]->pins[j] = p->func[i]->pin_first + j;
From: Peter Zijlstra peterz@infradead.org
[ Upstream commit 68e3c69803dada336893640110cb87221bb01dcf ]
Yang Jihong reported a race between perf_event_set_output() and perf_mmap_close():
	CPU1						CPU2

	perf_mmap_close(e2)
	  if (atomic_dec_and_test(&e2->rb->mmap_count)) // 1 - > 0
	    detach_rest = true

							ioctl(e1, IOC_SET_OUTPUT, e2)
							  perf_event_set_output(e1, e2)

	  ...
	  list_for_each_entry_rcu(e, &e2->rb->event_list, rb_entry)
	    ring_buffer_attach(e, NULL);
	    // e1 isn't yet added and
	    // therefore not detached

							    ring_buffer_attach(e1, e2->rb)
							      list_add_rcu(&e1->rb_entry,
									   &e2->rb->event_list)

After this; e1 is attached to an unmapped rb and a subsequent perf_mmap() will loop forever more:

	again:
		mutex_lock(&e->mmap_mutex);
		if (event->rb) {
			...
			if (!atomic_inc_not_zero(&e->rb->mmap_count)) {
				...
				mutex_unlock(&e->mmap_mutex);
				goto again;
			}
		}
The loop in perf_mmap_close() holds e2->mmap_mutex, while the attach in perf_event_set_output() holds e1->mmap_mutex. As such there is no serialization to avoid this race.
Change perf_event_set_output() to take both e1->mmap_mutex and e2->mmap_mutex to alleviate that problem. Additionally, have the loop in perf_mmap() detach the rb directly; this avoids having to wait for the concurrent perf_mmap_close() to get around to doing it to make progress.
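The deadlock-free part of this is the fixed lock-acquisition order. Below is a minimal, self-contained userspace sketch of the address-ordering idea used by mutex_lock_double() in the diff that follows (plain pthreads; assumes any two threads locking the same pair of locks go through this helper):

	#include <pthread.h>

	/*
	 * Always acquire the lower-addressed mutex first, so two threads
	 * locking the same pair can never deadlock ABBA-style.
	 */
	static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
	{
		if (b < a) {
			pthread_mutex_t *tmp = a;
			a = b;
			b = tmp;
		}
		pthread_mutex_lock(a);
		pthread_mutex_lock(b);
	}

	static void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
	{
		/* unlock order does not matter for correctness */
		pthread_mutex_unlock(a);
		pthread_mutex_unlock(b);
	}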
Fixes: 9bb5d40cd93c ("perf: Fix mmap() accounting hole")
Reported-by: Yang Jihong yangjihong1@huawei.com
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org
Tested-by: Yang Jihong yangjihong1@huawei.com
Link: https://lkml.kernel.org/r/YsQ3jm2GR38SW7uD@worktop.programming.kicks-ass.net
Signed-off-by: Sasha Levin sashal@kernel.org
---
 kernel/events/core.c | 45 ++++++++++++++++++++++++++++++--------------
 1 file changed, 31 insertions(+), 14 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 8336dcb2bd43..0a54780e0942 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5819,10 +5819,10 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 
 		if (!atomic_inc_not_zero(&event->rb->mmap_count)) {
 			/*
-			 * Raced against perf_mmap_close() through
-			 * perf_event_set_output(). Try again, hope for better
-			 * luck.
+			 * Raced against perf_mmap_close(); remove the
+			 * event and try again.
 			 */
+			ring_buffer_attach(event, NULL);
 			mutex_unlock(&event->mmap_mutex);
 			goto again;
 		}
@@ -10763,14 +10763,25 @@ static int perf_copy_attr(struct perf_event_attr __user *uattr,
 	goto out;
 }
 
+static void mutex_lock_double(struct mutex *a, struct mutex *b)
+{
+	if (b < a)
+		swap(a, b);
+
+	mutex_lock(a);
+	mutex_lock_nested(b, SINGLE_DEPTH_NESTING);
+}
+
 static int
 perf_event_set_output(struct perf_event *event, struct perf_event *output_event)
 {
 	struct ring_buffer *rb = NULL;
 	int ret = -EINVAL;
 
-	if (!output_event)
+	if (!output_event) {
+		mutex_lock(&event->mmap_mutex);
 		goto set;
+	}
 
 	/* don't allow circular references */
 	if (event == output_event)
@@ -10808,8 +10819,15 @@ perf_event_set_output(struct perf_event *event, struct perf_event *output_event)
 	    event->pmu != output_event->pmu)
 		goto out;
 
+	/*
+	 * Hold both mmap_mutex to serialize against perf_mmap_close().  Since
+	 * output_event is already on rb->event_list, and the list iteration
+	 * restarts after every removal, it is guaranteed this new event is
+	 * observed *OR* if output_event is already removed, it's guaranteed we
+	 * observe !rb->mmap_count.
+	 */
+	mutex_lock_double(&event->mmap_mutex, &output_event->mmap_mutex);
 set:
-	mutex_lock(&event->mmap_mutex);
 	/* Can't redirect output if we've got an active mmap() */
 	if (atomic_read(&event->mmap_count))
 		goto unlock;
@@ -10819,6 +10837,12 @@ perf_event_set_output(struct perf_event *event, struct perf_event *output_event)
 		rb = ring_buffer_get(output_event);
 		if (!rb)
 			goto unlock;
+
+		/* did we race against perf_mmap_close() */
+		if (!atomic_read(&rb->mmap_count)) {
+			ring_buffer_put(rb);
+			goto unlock;
+		}
 	}
 
 	ring_buffer_attach(event, rb);
@@ -10826,20 +10850,13 @@ perf_event_set_output(struct perf_event *event, struct perf_event *output_event)
 	ret = 0;
 unlock:
 	mutex_unlock(&event->mmap_mutex);
+	if (output_event)
+		mutex_unlock(&output_event->mmap_mutex);
 
 out:
 	return ret;
 }
 
-static void mutex_lock_double(struct mutex *a, struct mutex *b)
-{
-	if (b < a)
-		swap(a, b);
-
-	mutex_lock(a);
-	mutex_lock_nested(b, SINGLE_DEPTH_NESTING);
-}
-
 static int perf_event_set_clock(struct perf_event *event, clockid_t clk_id)
 {
 	bool nmi_safe = false;
From: Lennert Buytenhek buytenh@wantstofly.org
[ Upstream commit 7c1ddcee5311f3315096217881d2dbe47cc683f9 ]
The initially merged version of the igc driver code (via commit 146740f9abc4, "igc: Add support for PF") contained the following IGC_REMOVED checks in the igc_rd32/wr32() MMIO accessors:
	u32 igc_rd32(struct igc_hw *hw, u32 reg)
	{
		u8 __iomem *hw_addr = READ_ONCE(hw->hw_addr);
		u32 value = 0;

		if (IGC_REMOVED(hw_addr))
			return ~value;

		value = readl(&hw_addr[reg]);

		/* reads should not return all F's */
		if (!(~value) && (!reg || !(~readl(hw_addr))))
			hw->hw_addr = NULL;

		return value;
	}
And:
	#define wr32(reg, val) \
	do { \
		u8 __iomem *hw_addr = READ_ONCE((hw)->hw_addr); \
		if (!IGC_REMOVED(hw_addr)) \
			writel((val), &hw_addr[(reg)]); \
	} while (0)
E.g. igb has similar checks in its MMIO accessors, and has a similar macro E1000_REMOVED, which is implemented as follows:
	#define E1000_REMOVED(h) unlikely(!(h))
These checks serve to detect and take note of an 0xffffffff MMIO read return from the device, which can be caused by a PCIe link flap or some other kind of PCI bus error, and to avoid performing MMIO reads and writes from that point onwards.
However, the IGC_REMOVED macro was not originally implemented:
	#ifndef IGC_REMOVED
	#define IGC_REMOVED(a) (0)
	#endif /* IGC_REMOVED */
This led to the IGC_REMOVED logic being removed entirely in a subsequent commit (commit 3c215fb18e70, "igc: remove IGC_REMOVED function"), with the rationale that such checks matter only for virtualization and that igc does not support virtualization -- but a PCIe device can become detached even without virtualization being in use, and without proper checks, a PCIe bus error affecting an igc adapter will lead to various NULL pointer dereferences, as the first access after the error will set hw->hw_addr to NULL, and subsequent accesses will blindly dereference this now-NULL pointer.
This patch reinstates the IGC_REMOVED checks in igc_rd32/wr32(), and implements IGC_REMOVED the way it is done for igb, by checking for the unlikely() case of hw_addr being NULL. This change prevents the oopses seen when a PCIe link flap occurs on an igc adapter.
Fixes: 146740f9abc4 ("igc: Add support for PF")
Signed-off-by: Lennert Buytenhek buytenh@arista.com
Tested-by: Naama Meir naamax.meir@linux.intel.com
Acked-by: Sasha Neftin sasha.neftin@intel.com
Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/ethernet/intel/igc/igc_regs.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/ethernet/intel/igc/igc_regs.h b/drivers/net/ethernet/intel/igc/igc_regs.h
index 50d7c04dccf5..7bc7d7618fe1 100644
--- a/drivers/net/ethernet/intel/igc/igc_regs.h
+++ b/drivers/net/ethernet/intel/igc/igc_regs.h
@@ -236,4 +236,6 @@ do { \
 
 #define array_rd32(reg, offset) (igc_rd32(hw, (reg) + ((offset) << 2)))
 
+#define IGC_REMOVED(h) unlikely(!(h))
+
 #endif
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 0968d2a441bf6afb551fd99e60fa65ed67068963 ]
While reading sysctl_ip_no_pmtu_disc, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers.
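For context on this and the other sysctl data-race fixes in this batch: these knobs are plain ints written by the sysctl handler with no lock held, so every lockless reader needs READ_ONCE() to get a single, untorn load. A minimal sketch (the macro mirrors the kernel's volatile-cast definition, simplified; the variable name is a stand-in):

	/* Simplified version of the kernel's READ_ONCE(). */
	#define READ_ONCE(x)	(*(const volatile typeof(x) *)&(x))

	int sysctl_example;	/* updated concurrently by a writer */

	static int reader(void)
	{
		/*
		 * A plain read of sysctl_example would let the compiler
		 * refetch or tear the load, so two uses of the "same" value
		 * could disagree; READ_ONCE() forces one atomic-sized load.
		 */
		int val = READ_ONCE(sysctl_example);

		return val > 1;
	}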
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com
Signed-off-by: David S. Miller davem@davemloft.net
Signed-off-by: Sasha Levin sashal@kernel.org
---
 net/ipv4/af_inet.c    | 2 +-
 net/ipv4/icmp.c       | 2 +-
 net/ipv6/af_inet6.c   | 2 +-
 net/xfrm/xfrm_state.c | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
index 9ab73fcc7411..06153386776d 100644
--- a/net/ipv4/af_inet.c
+++ b/net/ipv4/af_inet.c
@@ -337,7 +337,7 @@ static int inet_create(struct net *net, struct socket *sock, int protocol,
 			inet->hdrincl = 1;
 	}
 
-	if (net->ipv4.sysctl_ip_no_pmtu_disc)
+	if (READ_ONCE(net->ipv4.sysctl_ip_no_pmtu_disc))
 		inet->pmtudisc = IP_PMTUDISC_DONT;
 	else
 		inet->pmtudisc = IP_PMTUDISC_WANT;
diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
index 9bc01411be4c..b44f51e404ae 100644
--- a/net/ipv4/icmp.c
+++ b/net/ipv4/icmp.c
@@ -886,7 +886,7 @@ static bool icmp_unreach(struct sk_buff *skb)
 			 * values please see
 			 * Documentation/networking/ip-sysctl.txt
 			 */
-			switch (net->ipv4.sysctl_ip_no_pmtu_disc) {
+			switch (READ_ONCE(net->ipv4.sysctl_ip_no_pmtu_disc)) {
 			default:
 				net_dbg_ratelimited("%pI4: fragmentation needed and DF set\n",
 						    &iph->daddr);
diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
index 942da168f18f..56f396ecc26b 100644
--- a/net/ipv6/af_inet6.c
+++ b/net/ipv6/af_inet6.c
@@ -222,7 +222,7 @@ static int inet6_create(struct net *net, struct socket *sock, int protocol,
 	inet->mc_list	= NULL;
 	inet->rcv_tos	= 0;
 
-	if (net->ipv4.sysctl_ip_no_pmtu_disc)
+	if (READ_ONCE(net->ipv4.sysctl_ip_no_pmtu_disc))
 		inet->pmtudisc = IP_PMTUDISC_DONT;
 	else
 		inet->pmtudisc = IP_PMTUDISC_WANT;
diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
index 268bba29bb60..bee1a8143d75 100644
--- a/net/xfrm/xfrm_state.c
+++ b/net/xfrm/xfrm_state.c
@@ -2488,7 +2488,7 @@ int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload)
 	int err;
 
 	if (family == AF_INET &&
-	    xs_net(x)->ipv4.sysctl_ip_no_pmtu_disc)
+	    READ_ONCE(xs_net(x)->ipv4.sysctl_ip_no_pmtu_disc))
 		x->props.flags |= XFRM_STATE_NOPMTUDISC;
 
 	err = -EPROTONOSUPPORT;
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 60c158dc7b1f0558f6cadd5b50d0386da0000d50 ]
While reading sysctl_ip_fwd_use_pmtu, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers.
Fixes: f87c10a8aa1e ("ipv4: introduce ip_dst_mtu_maybe_forward and protect forwarding path against pmtu spoofing")
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com
Signed-off-by: David S. Miller davem@davemloft.net
Signed-off-by: Sasha Levin sashal@kernel.org
---
 include/net/ip.h | 2 +-
 net/ipv4/route.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/net/ip.h b/include/net/ip.h
index 3f3ea86b2173..21fc0a29a8d4 100644
--- a/include/net/ip.h
+++ b/include/net/ip.h
@@ -442,7 +442,7 @@ static inline unsigned int ip_dst_mtu_maybe_forward(const struct dst_entry *dst
 	struct net *net = dev_net(dst->dev);
 	unsigned int mtu;
 
-	if (net->ipv4.sysctl_ip_fwd_use_pmtu ||
+	if (READ_ONCE(net->ipv4.sysctl_ip_fwd_use_pmtu) ||
 	    ip_mtu_locked(dst) ||
 	    !forwarding)
 		return dst_mtu(dst);
diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index 9280e5087159..7004e379c325 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -1423,7 +1423,7 @@ u32 ip_mtu_from_fib_result(struct fib_result *res, __be32 daddr)
 	struct fib_info *fi = res->fi;
 	u32 mtu = 0;
 
-	if (dev_net(dev)->ipv4.sysctl_ip_fwd_use_pmtu ||
+	if (READ_ONCE(dev_net(dev)->ipv4.sysctl_ip_fwd_use_pmtu) ||
 	    fi->fib_metrics->metrics[RTAX_LOCK - 1] & (1 << RTAX_MTU))
 		mtu = fi->fib_mtu;
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 289d3b21fb0bfc94c4e98f10635bba1824e5f83c ]
While reading sysctl_ip_nonlocal_bind, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com
Signed-off-by: David S. Miller davem@davemloft.net
Signed-off-by: Sasha Levin sashal@kernel.org
---
 include/net/inet_sock.h | 2 +-
 net/sctp/protocol.c     | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h
index 34c4436fd18f..40f92f5a3047 100644
--- a/include/net/inet_sock.h
+++ b/include/net/inet_sock.h
@@ -375,7 +375,7 @@ static inline bool inet_get_convert_csum(struct sock *sk)
 static inline bool inet_can_nonlocal_bind(struct net *net,
 					  struct inet_sock *inet)
 {
-	return net->ipv4.sysctl_ip_nonlocal_bind ||
+	return READ_ONCE(net->ipv4.sysctl_ip_nonlocal_bind) ||
 		inet->freebind || inet->transparent;
 }
 
diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
index bb370a7948f4..363a64c12414 100644
--- a/net/sctp/protocol.c
+++ b/net/sctp/protocol.c
@@ -358,7 +358,7 @@ static int sctp_v4_available(union sctp_addr *addr, struct sctp_sock *sp)
 	if (addr->v4.sin_addr.s_addr != htonl(INADDR_ANY) &&
 	   ret != RTN_LOCAL &&
 	   !sp->inet.freebind &&
-	   !net->ipv4.sysctl_ip_nonlocal_bind)
+	   !READ_ONCE(net->ipv4.sysctl_ip_nonlocal_bind))
 		return 0;
 
 	if (ipv6_only_sock(sctp_opt2sk(sp)))
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 85d0b4dbd74b95cc492b1f4e34497d3f894f5d9a ]
While reading sysctl_fwmark_reflect, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader.
Fixes: e110861f8609 ("net: add a sysctl to reflect the fwmark on replies")
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com
Signed-off-by: David S. Miller davem@davemloft.net
Signed-off-by: Sasha Levin sashal@kernel.org
---
 include/net/ip.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/net/ip.h b/include/net/ip.h
index 21fc0a29a8d4..db841ab388c0 100644
--- a/include/net/ip.h
+++ b/include/net/ip.h
@@ -381,7 +381,7 @@ void ipfrag_init(void);
 void ip_static_sysctl_init(void);
 
 #define IP4_REPLY_MARK(net, mark) \
-	((net)->ipv4.sysctl_fwmark_reflect ? (mark) : 0)
+	(READ_ONCE((net)->ipv4.sysctl_fwmark_reflect) ? (mark) : 0)
 
 static inline bool ip_is_fragment(const struct iphdr *iph)
 {
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 1a0008f9df59451d0a17806c1ee1a19857032fa8 ]
While reading sysctl_tcp_fwmark_accept, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader.
Fixes: 84f39b08d786 ("net: support marking accepting TCP sockets")
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com
Signed-off-by: David S. Miller davem@davemloft.net
Signed-off-by: Sasha Levin sashal@kernel.org
---
 include/net/inet_sock.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h
index 40f92f5a3047..58db7c69c146 100644
--- a/include/net/inet_sock.h
+++ b/include/net/inet_sock.h
@@ -107,7 +107,8 @@ static inline struct inet_request_sock *inet_rsk(const struct request_sock *sk)
 
 static inline u32 inet_request_mark(const struct sock *sk, struct sk_buff *skb)
 {
-	if (!sk->sk_mark && sock_net(sk)->ipv4.sysctl_tcp_fwmark_accept)
+	if (!sk->sk_mark &&
+	    READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fwmark_accept))
 		return skb->mark;
 
 	return sk->sk_mark;
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit f47d00e077e7d61baf69e46dde3210c886360207 ]
While reading sysctl_tcp_mtu_probing, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers.
Fixes: 5d424d5a674f ("[TCP]: MTU probing")
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com
Signed-off-by: David S. Miller davem@davemloft.net
Signed-off-by: Sasha Levin sashal@kernel.org
---
 net/ipv4/tcp_output.c | 2 +-
 net/ipv4/tcp_timer.c  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 739fc69cdcc6..5ac81c4f076d 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1537,7 +1537,7 @@ void tcp_mtup_init(struct sock *sk)
 	struct inet_connection_sock *icsk = inet_csk(sk);
 	struct net *net = sock_net(sk);
 
-	icsk->icsk_mtup.enabled = net->ipv4.sysctl_tcp_mtu_probing > 1;
+	icsk->icsk_mtup.enabled = READ_ONCE(net->ipv4.sysctl_tcp_mtu_probing) > 1;
 	icsk->icsk_mtup.search_high = tp->rx_opt.mss_clamp + sizeof(struct tcphdr) +
 			       icsk->icsk_af_ops->net_header_len;
 	icsk->icsk_mtup.search_low = tcp_mss_to_mtu(sk, net->ipv4.sysctl_tcp_base_mss);
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
index fa2ae96ecdc4..57fa707e9e98 100644
--- a/net/ipv4/tcp_timer.c
+++ b/net/ipv4/tcp_timer.c
@@ -163,7 +163,7 @@ static void tcp_mtu_probing(struct inet_connection_sock *icsk, struct sock *sk)
 	int mss;
 
 	/* Black hole detection */
-	if (!net->ipv4.sysctl_tcp_mtu_probing)
+	if (!READ_ONCE(net->ipv4.sysctl_tcp_mtu_probing))
 		return;
 
 	if (!icsk->icsk_mtup.enabled) {
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 88d78bc097cd8ebc6541e93316c9d9bf651b13e8 ]
While reading sysctl_tcp_base_mss, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers.
Fixes: 5d424d5a674f ("[TCP]: MTU probing") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_output.c | 2 +- net/ipv4/tcp_timer.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 5ac81c4f076d..b84bedf2804a 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -1540,7 +1540,7 @@ void tcp_mtup_init(struct sock *sk) icsk->icsk_mtup.enabled = READ_ONCE(net->ipv4.sysctl_tcp_mtu_probing) > 1; icsk->icsk_mtup.search_high = tp->rx_opt.mss_clamp + sizeof(struct tcphdr) + icsk->icsk_af_ops->net_header_len; - icsk->icsk_mtup.search_low = tcp_mss_to_mtu(sk, net->ipv4.sysctl_tcp_base_mss); + icsk->icsk_mtup.search_low = tcp_mss_to_mtu(sk, READ_ONCE(net->ipv4.sysctl_tcp_base_mss)); icsk->icsk_mtup.probe_size = 0; if (icsk->icsk_mtup.enabled) icsk->icsk_mtup.probe_timestamp = tcp_jiffies32; diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c index 57fa707e9e98..0c3ee2aa244f 100644 --- a/net/ipv4/tcp_timer.c +++ b/net/ipv4/tcp_timer.c @@ -171,7 +171,7 @@ static void tcp_mtu_probing(struct inet_connection_sock *icsk, struct sock *sk) icsk->icsk_mtup.probe_timestamp = tcp_jiffies32; } else { mss = tcp_mtu_to_mss(sk, icsk->icsk_mtup.search_low) >> 1; - mss = min(net->ipv4.sysctl_tcp_base_mss, mss); + mss = min(READ_ONCE(net->ipv4.sysctl_tcp_base_mss), mss); mss = max(mss, net->ipv4.sysctl_tcp_mtu_probe_floor); mss = max(mss, net->ipv4.sysctl_tcp_min_snd_mss); icsk->icsk_mtup.search_low = tcp_mss_to_mtu(sk, mss);
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 78eb166cdefcc3221c8c7c1e2d514e91a2eb5014 ]
While reading sysctl_tcp_min_snd_mss, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers.
Fixes: 5f3e2bf008c2 ("tcp: add tcp_min_snd_mss sysctl") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_output.c | 3 ++- net/ipv4/tcp_timer.c | 2 +- 2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index b84bedf2804a..7c0b96319fc0 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -1494,7 +1494,8 @@ static inline int __tcp_mtu_to_mss(struct sock *sk, int pmtu) mss_now -= icsk->icsk_ext_hdr_len;
/* Then reserve room for full set of TCP options and 8 bytes of data */ - mss_now = max(mss_now, sock_net(sk)->ipv4.sysctl_tcp_min_snd_mss); + mss_now = max(mss_now, + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_min_snd_mss)); return mss_now; }
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c index 0c3ee2aa244f..0460c5deee3f 100644 --- a/net/ipv4/tcp_timer.c +++ b/net/ipv4/tcp_timer.c @@ -173,7 +173,7 @@ static void tcp_mtu_probing(struct inet_connection_sock *icsk, struct sock *sk) mss = tcp_mtu_to_mss(sk, icsk->icsk_mtup.search_low) >> 1; mss = min(READ_ONCE(net->ipv4.sysctl_tcp_base_mss), mss); mss = max(mss, net->ipv4.sysctl_tcp_mtu_probe_floor); - mss = max(mss, net->ipv4.sysctl_tcp_min_snd_mss); + mss = max(mss, READ_ONCE(net->ipv4.sysctl_tcp_min_snd_mss)); icsk->icsk_mtup.search_low = tcp_mss_to_mtu(sk, mss); } tcp_sync_mss(sk, icsk->icsk_pmtu_cookie);
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 8e92d4423615a5257d0d871fc067aa561f597deb ]
While reading sysctl_tcp_mtu_probe_floor, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader.
Fixes: c04b79b6cfd7 ("tcp: add new tcp_mtu_probe_floor sysctl") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_timer.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c index 0460c5deee3f..c48aeaef3ec7 100644 --- a/net/ipv4/tcp_timer.c +++ b/net/ipv4/tcp_timer.c @@ -172,7 +172,7 @@ static void tcp_mtu_probing(struct inet_connection_sock *icsk, struct sock *sk) } else { mss = tcp_mtu_to_mss(sk, icsk->icsk_mtup.search_low) >> 1; mss = min(READ_ONCE(net->ipv4.sysctl_tcp_base_mss), mss); - mss = max(mss, net->ipv4.sysctl_tcp_mtu_probe_floor); + mss = max(mss, READ_ONCE(net->ipv4.sysctl_tcp_mtu_probe_floor)); mss = max(mss, READ_ONCE(net->ipv4.sysctl_tcp_min_snd_mss)); icsk->icsk_mtup.search_low = tcp_mss_to_mtu(sk, mss); }
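As a worked example of the clamp chain these last few patches touch (the numbers are invented for illustration; 1024 and 48 are the usual defaults for the sysctls involved):

#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))
#define MAX(a, b) ((a) > (b) ? (a) : (b))

/* Sketch of the black-hole-detection clamp in tcp_mtu_probing();
 * mss_from_search_low stands in for
 * tcp_mtu_to_mss(sk, icsk->icsk_mtup.search_low). */
int main(void)
{
	int sysctl_tcp_base_mss = 1024;
	int sysctl_tcp_mtu_probe_floor = 48;
	int sysctl_tcp_min_snd_mss = 48;
	int mss_from_search_low = 1000;	/* invented for the example */

	int mss = mss_from_search_low >> 1;		/* halve: 500 */
	mss = MIN(sysctl_tcp_base_mss, mss);		/* still 500 */
	mss = MAX(mss, sysctl_tcp_mtu_probe_floor);	/* still 500 */
	mss = MAX(mss, sysctl_tcp_min_snd_mss);		/* still 500 */

	printf("next probe MSS: %d\n", mss);		/* prints 500 */
	return 0;
}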
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 92c0aa4175474483d6cf373314343d4e624e882a ]
While reading sysctl_tcp_probe_threshold, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader.
Fixes: 6b58e0a5f32d ("ipv4: Use binary search to choose tcp PMTU probe_size") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_output.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 7c0b96319fc0..e60cb69d00a4 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -2134,7 +2134,7 @@ static int tcp_mtu_probe(struct sock *sk) * probing process by not resetting search range to its orignal. */ if (probe_size > tcp_mtu_to_mss(sk, icsk->icsk_mtup.search_high) || - interval < net->ipv4.sysctl_tcp_probe_threshold) { + interval < READ_ONCE(net->ipv4.sysctl_tcp_probe_threshold)) { /* Check whether enough time has elaplased for * another round of probing. */
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 2a85388f1d94a9f8b5a529118a2c5eaa0520d85c ]
While reading sysctl_tcp_probe_interval, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader.
Fixes: 05cbc0db03e8 ("ipv4: Create probe timer for tcp PMTU as per RFC4821") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_output.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index e60cb69d00a4..9bfe6965b873 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -2052,7 +2052,7 @@ static inline void tcp_mtu_check_reprobe(struct sock *sk) u32 interval; s32 delta;
- interval = net->ipv4.sysctl_tcp_probe_interval; + interval = READ_ONCE(net->ipv4.sysctl_tcp_probe_interval); delta = tcp_jiffies32 - icsk->icsk_mtup.probe_timestamp; if (unlikely(delta >= interval * HZ)) { int mss = tcp_current_mss(sk);
From: Robert Hancock robert.hancock@calian.com
[ Upstream commit 4ca8ca873d454635c20d508261bfc0081af75cf8 ]
Problems were observed on the Xilinx ZynqMP platform with large I2C reads: when a read of 277 bytes was performed, the controller NAKed the transfer after only 252 bytes had been transferred, and the transfer failed with an ENXIO error.
cdns_i2c_master_isr contains code to handle this case by resetting the transfer count in the controller before it reaches 0, allowing larger transfers to work; however, it was conditional on the CDNS_I2C_BROKEN_HOLD_BIT quirk being set on the controller, and ZynqMP uses the r1p14 version of the core, where this quirk is not set. Resetting the count is inherently required for larger reads because the core only has an 8-bit transfer size register, so it should not be conditional on the broken HOLD bit quirk, which is used elsewhere in the driver.
Remove the dependency on the CDNS_I2C_BROKEN_HOLD_BIT for this transfer size reset logic to fix this problem.
Fixes: 63cab195bf49 ("i2c: removed work arounds in i2c driver for Zynq Ultrascale+ MPSoC") Signed-off-by: Robert Hancock robert.hancock@calian.com Reviewed-by: Shubhrajyoti Datta Shubhrajyoti.datta@amd.com Acked-by: Michal Simek michal.simek@amd.com Signed-off-by: Wolfram Sang wsa@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/i2c/busses/i2c-cadence.c | 30 +++++------------------------- 1 file changed, 5 insertions(+), 25 deletions(-)
diff --git a/drivers/i2c/busses/i2c-cadence.c b/drivers/i2c/busses/i2c-cadence.c index 3a1bdc75275f..8750e444f449 100644 --- a/drivers/i2c/busses/i2c-cadence.c +++ b/drivers/i2c/busses/i2c-cadence.c @@ -198,9 +198,9 @@ static inline bool cdns_is_holdquirk(struct cdns_i2c *id, bool hold_wrkaround) */ static irqreturn_t cdns_i2c_isr(int irq, void *ptr) { - unsigned int isr_status, avail_bytes, updatetx; + unsigned int isr_status, avail_bytes; unsigned int bytes_to_send; - bool hold_quirk; + bool updatetx; struct cdns_i2c *id = ptr; /* Signal completion only after everything is updated */ int done_flag = 0; @@ -219,11 +219,7 @@ static irqreturn_t cdns_i2c_isr(int irq, void *ptr) * Check if transfer size register needs to be updated again for a * large data receive operation. */ - updatetx = 0; - if (id->recv_count > id->curr_recv_count) - updatetx = 1; - - hold_quirk = (id->quirks & CDNS_I2C_BROKEN_HOLD_BIT) && updatetx; + updatetx = id->recv_count > id->curr_recv_count;
/* When receiving, handle data interrupt and completion interrupt */ if (id->p_recv_buf && @@ -246,7 +242,7 @@ static irqreturn_t cdns_i2c_isr(int irq, void *ptr) id->recv_count--; id->curr_recv_count--;
- if (cdns_is_holdquirk(id, hold_quirk)) + if (cdns_is_holdquirk(id, updatetx)) break; }
@@ -257,7 +253,7 @@ static irqreturn_t cdns_i2c_isr(int irq, void *ptr) * maintain transfer size non-zero while performing a large * receive operation. */ - if (cdns_is_holdquirk(id, hold_quirk)) { + if (cdns_is_holdquirk(id, updatetx)) { /* wait while fifo is full */ while (cdns_i2c_readreg(CDNS_I2C_XFER_SIZE_OFFSET) != (id->curr_recv_count - CDNS_I2C_FIFO_DEPTH)) @@ -279,22 +275,6 @@ static irqreturn_t cdns_i2c_isr(int irq, void *ptr) CDNS_I2C_XFER_SIZE_OFFSET); id->curr_recv_count = id->recv_count; } - } else if (id->recv_count && !hold_quirk && - !id->curr_recv_count) { - - /* Set the slave address in address register*/ - cdns_i2c_writereg(id->p_msg->addr & CDNS_I2C_ADDR_MASK, - CDNS_I2C_ADDR_OFFSET); - - if (id->recv_count > CDNS_I2C_TRANSFER_SIZE) { - cdns_i2c_writereg(CDNS_I2C_TRANSFER_SIZE, - CDNS_I2C_XFER_SIZE_OFFSET); - id->curr_recv_count = CDNS_I2C_TRANSFER_SIZE; - } else { - cdns_i2c_writereg(id->recv_count, - CDNS_I2C_XFER_SIZE_OFFSET); - id->curr_recv_count = id->recv_count; - } }
/* Clear hold (if not repeated start) and signal completion */
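The underlying constraint is the 8-bit transfer size register mentioned above: any read longer than the register can express has to be topped up in chunks while the transfer is in flight. A rough standalone sketch of that chunking (the names and the 255-byte limit are assumptions for illustration, not the driver's actual constants):

#include <stdio.h>

#define XFER_SIZE_MAX 255	/* an 8-bit transfer-size register */

/* Stands in for a write to the controller's transfer-size register. */
static void program_xfer_size(unsigned int n)
{
	printf("XFER_SIZE <- %u\n", n);
}

int main(void)
{
	unsigned int remaining = 277;	/* the failing read from the report */

	/* Program at most XFER_SIZE_MAX bytes at a time and top the
	 * register back up as data drains, regardless of any HOLD-bit
	 * quirk. */
	while (remaining) {
		unsigned int chunk = remaining > XFER_SIZE_MAX ?
				     XFER_SIZE_MAX : remaining;
		program_xfer_size(chunk);
		remaining -= chunk;	/* bytes the hardware will clock in */
	}
	return 0;
}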
From: Junxiao Chang junxiao.chang@intel.com
[ Upstream commit 613b065ca32e90209024ec4a6bb5ca887ee70980 ]
When the queue number is > 4, the left shift overflows because the variable is a 32-bit integer, and the mask calculation is wrong for MTL_RXQ_DMA_MAP1.
If CONFIG_UBSAN is enabled, kernel dumps below warning:
[ 10.363842] ==================================================================
[ 10.363882] UBSAN: shift-out-of-bounds in /build/linux-intel-iotg-5.15-8e6Tf4/linux-intel-iotg-5.15-5.15.0/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c:224:12
[ 10.363929] shift exponent 40 is too large for 32-bit type 'unsigned int'
[ 10.363953] CPU: 1 PID: 599 Comm: NetworkManager Not tainted 5.15.0-1003-intel-iotg
[ 10.363956] Hardware name: ADLINK Technology Inc. LEC-EL/LEC-EL, BIOS 0.15.11 12/22/2021
[ 10.363958] Call Trace:
[ 10.363960]  <TASK>
[ 10.363963]  dump_stack_lvl+0x4a/0x5f
[ 10.363971]  dump_stack+0x10/0x12
[ 10.363974]  ubsan_epilogue+0x9/0x45
[ 10.363976]  __ubsan_handle_shift_out_of_bounds.cold+0x61/0x10e
[ 10.363979]  ? wake_up_klogd+0x4a/0x50
[ 10.363983]  ? vprintk_emit+0x8f/0x240
[ 10.363986]  dwmac4_map_mtl_dma.cold+0x42/0x91 [stmmac]
[ 10.364001]  stmmac_mtl_configuration+0x1ce/0x7a0 [stmmac]
[ 10.364009]  ? dwmac410_dma_init_channel+0x70/0x70 [stmmac]
[ 10.364020]  stmmac_hw_setup.cold+0xf/0xb14 [stmmac]
[ 10.364030]  ? page_pool_alloc_pages+0x4d/0x70
[ 10.364034]  ? stmmac_clear_tx_descriptors+0x6e/0xe0 [stmmac]
[ 10.364042]  stmmac_open+0x39e/0x920 [stmmac]
[ 10.364050]  __dev_open+0xf0/0x1a0
[ 10.364054]  __dev_change_flags+0x188/0x1f0
[ 10.364057]  dev_change_flags+0x26/0x60
[ 10.364059]  do_setlink+0x908/0xc40
[ 10.364062]  ? do_setlink+0xb10/0xc40
[ 10.364064]  ? __nla_validate_parse+0x4c/0x1a0
[ 10.364068]  __rtnl_newlink+0x597/0xa10
[ 10.364072]  ? __nla_reserve+0x41/0x50
[ 10.364074]  ? __kmalloc_node_track_caller+0x1d0/0x4d0
[ 10.364079]  ? pskb_expand_head+0x75/0x310
[ 10.364082]  ? nla_reserve_64bit+0x21/0x40
[ 10.364086]  ? skb_free_head+0x65/0x80
[ 10.364089]  ? security_sock_rcv_skb+0x2c/0x50
[ 10.364094]  ? __cond_resched+0x19/0x30
[ 10.364097]  ? kmem_cache_alloc_trace+0x15a/0x420
[ 10.364100]  rtnl_newlink+0x49/0x70
This change fixes the MTL_RXQ_DMA_MAP1 mask calculation and the channel/queue mapping warning.
Fixes: d43042f4da3e ("net: stmmac: mapping mtl rx to dma channel") BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=216195 Reported-by: Cedric Wassenaar cedric@bytespeed.nl Signed-off-by: Junxiao Chang junxiao.chang@intel.com Reviewed-by: Florian Fainelli f.fainelli@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c index 66e60c7e9850..c440b192ec71 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c @@ -215,6 +215,9 @@ static void dwmac4_map_mtl_dma(struct mac_device_info *hw, u32 queue, u32 chan) if (queue == 0 || queue == 4) { value &= ~MTL_RXQ_DMA_Q04MDMACH_MASK; value |= MTL_RXQ_DMA_Q04MDMACH(chan); + } else if (queue > 4) { + value &= ~MTL_RXQ_DMA_QXMDMACH_MASK(queue - 4); + value |= MTL_RXQ_DMA_QXMDMACH(chan, queue - 4); } else { value &= ~MTL_RXQ_DMA_QXMDMACH_MASK(queue); value |= MTL_RXQ_DMA_QXMDMACH(chan, queue);
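The arithmetic behind the splat: each queue occupies an 8-bit field, so a 32-bit map register holds at most four queues, and shifting by 8 * queue is undefined once the queue index exceeds 3. A small sketch of the rebased indexing (the macro shape below is an assumption for illustration, not the driver's exact definition):

#include <stdio.h>
#include <stdint.h>

/* Each queue owns an 8-bit field, so a 32-bit MAP register only holds
 * queues 0-3 (MAP0) or 4-7 (MAP1). */
#define QXMDMACH(chan, q)	((uint32_t)(chan) << (8 * (q)))

int main(void)
{
	unsigned int queue = 5, chan = 5;

	/* Buggy: shift exponent 8 * 5 = 40 on a 32-bit type, which is
	 * undefined behaviour and what UBSAN reported:
	 *	uint32_t bad = QXMDMACH(chan, queue);
	 */

	/* Fixed: queues above 4 index into MAP1 with a rebased position. */
	uint32_t map1 = QXMDMACH(chan, queue - 4);

	printf("MAP1 field for queue %u: 0x%08x\n", queue, map1);
	return 0;
}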
From: Tariq Toukan tariqt@nvidia.com
[ Upstream commit f08d8c1bb97c48f24a82afaa2fd8c140f8d3da8b ]
The socket destruction flow and the tls_device_down function sync against each other using tls_device_lock and the context refcount, to guarantee that the device resources are freed via tls_dev_del() by the end of tls_device_down.
In the following unfortunate flow, this won't happen:
- refcount is decreased to zero in tls_device_sk_destruct.
- tls_device_down starts, skips the context as refcount is zero, going all the way until it flushes the gc work, and returns without freeing the device resources.
- only then, tls_device_queue_ctx_destruction is called, queues the gc work and frees the context's device resources.
Solve it by decreasing the refcount in the socket's destruction flow under the tls_device_lock, for perfect synchronization. This does not slow down the common destructor flow, in which the refcount is decreased and the spinlock is acquired anyway.
Fixes: e8f69799810c ("net/tls: Add generic NIC offload infrastructure") Reviewed-by: Maxim Mikityanskiy maximmi@nvidia.com Signed-off-by: Tariq Toukan tariqt@nvidia.com Reviewed-by: Jakub Kicinski kuba@kernel.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/tls/tls_device.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c index abb93f7343c5..2c3cf47d730b 100644 --- a/net/tls/tls_device.c +++ b/net/tls/tls_device.c @@ -94,13 +94,16 @@ static void tls_device_queue_ctx_destruction(struct tls_context *ctx) unsigned long flags;
spin_lock_irqsave(&tls_device_lock, flags); + if (unlikely(!refcount_dec_and_test(&ctx->refcount))) + goto unlock; + list_move_tail(&ctx->list, &tls_device_gc_list);
/* schedule_work inside the spinlock * to make sure tls_device_down waits for that work. */ schedule_work(&tls_device_gc_work); - +unlock: spin_unlock_irqrestore(&tls_device_lock, flags); }
@@ -191,8 +194,7 @@ static void tls_device_sk_destruct(struct sock *sk) clean_acked_data_disable(inet_csk(sk)); }
- if (refcount_dec_and_test(&tls_ctx->refcount)) - tls_device_queue_ctx_destruction(tls_ctx); + tls_device_queue_ctx_destruction(tls_ctx); }
void tls_device_free_resources_tx(struct sock *sk)
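The fix follows a classic shape: when a scanner takes a lock and skips entries whose refcount has already dropped to zero, the final decrement must itself happen under that lock. A userspace analogue (the pthread mutex and the names below are stand-ins for tls_device_lock and its users, not the kernel API):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t device_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int refcount = 1;
static bool queued_for_destruction;

/* Final-put path: decrement under the lock, so a concurrent scanner
 * that skips zero-refcount entries cannot slip into the window between
 * "refcount hits zero" and "work is queued". */
static void queue_ctx_destruction(void)
{
	pthread_mutex_lock(&device_lock);
	if (atomic_fetch_sub(&refcount, 1) == 1)
		queued_for_destruction = true;	/* schedule_work() here */
	pthread_mutex_unlock(&device_lock);
}

/* Scanner (tls_device_down analogue): under the same lock it either
 * sees a nonzero refcount or sees the entry already queued, never
 * neither. */
static void device_down_scan(void)
{
	pthread_mutex_lock(&device_lock);
	if (atomic_load(&refcount) > 0 || queued_for_destruction)
		printf("entry will be cleaned up exactly once\n");
	pthread_mutex_unlock(&device_lock);
}

int main(void)
{
	queue_ctx_destruction();
	device_down_scan();
	return 0;
}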
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit f6da2267e71106474fbc0943dc24928b9cb79119 ]
While reading sysctl_igmp_llm_reports, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers.
This test can be packed into a helper, so such changes will be in the follow-up series after net is merged into net-next.
if (ipv4_is_local_multicast(pmc->multiaddr) && !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports))
Fixes: df2cf4a78e48 ("IGMP: Inhibit reports for local multicast groups") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/igmp.c | 21 +++++++++++++-------- 1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c index cac2fdd08df0..7cd444d75c3d 100644 --- a/net/ipv4/igmp.c +++ b/net/ipv4/igmp.c @@ -469,7 +469,8 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ip_mc_list *pmc,
if (pmc->multiaddr == IGMP_ALL_HOSTS) return skb; - if (ipv4_is_local_multicast(pmc->multiaddr) && !net->ipv4.sysctl_igmp_llm_reports) + if (ipv4_is_local_multicast(pmc->multiaddr) && + !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports)) return skb;
mtu = READ_ONCE(dev->mtu); @@ -595,7 +596,7 @@ static int igmpv3_send_report(struct in_device *in_dev, struct ip_mc_list *pmc) if (pmc->multiaddr == IGMP_ALL_HOSTS) continue; if (ipv4_is_local_multicast(pmc->multiaddr) && - !net->ipv4.sysctl_igmp_llm_reports) + !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports)) continue; spin_lock_bh(&pmc->lock); if (pmc->sfcount[MCAST_EXCLUDE]) @@ -738,7 +739,8 @@ static int igmp_send_report(struct in_device *in_dev, struct ip_mc_list *pmc, if (type == IGMPV3_HOST_MEMBERSHIP_REPORT) return igmpv3_send_report(in_dev, pmc);
- if (ipv4_is_local_multicast(group) && !net->ipv4.sysctl_igmp_llm_reports) + if (ipv4_is_local_multicast(group) && + !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports)) return 0;
if (type == IGMP_HOST_LEAVE_MESSAGE) @@ -922,7 +924,8 @@ static bool igmp_heard_report(struct in_device *in_dev, __be32 group)
if (group == IGMP_ALL_HOSTS) return false; - if (ipv4_is_local_multicast(group) && !net->ipv4.sysctl_igmp_llm_reports) + if (ipv4_is_local_multicast(group) && + !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports)) return false;
rcu_read_lock(); @@ -1047,7 +1050,7 @@ static bool igmp_heard_query(struct in_device *in_dev, struct sk_buff *skb, if (im->multiaddr == IGMP_ALL_HOSTS) continue; if (ipv4_is_local_multicast(im->multiaddr) && - !net->ipv4.sysctl_igmp_llm_reports) + !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports)) continue; spin_lock_bh(&im->lock); if (im->tm_running) @@ -1298,7 +1301,8 @@ static void __igmp_group_dropped(struct ip_mc_list *im, gfp_t gfp) #ifdef CONFIG_IP_MULTICAST if (im->multiaddr == IGMP_ALL_HOSTS) return; - if (ipv4_is_local_multicast(im->multiaddr) && !net->ipv4.sysctl_igmp_llm_reports) + if (ipv4_is_local_multicast(im->multiaddr) && + !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports)) return;
reporter = im->reporter; @@ -1340,7 +1344,8 @@ static void igmp_group_added(struct ip_mc_list *im) #ifdef CONFIG_IP_MULTICAST if (im->multiaddr == IGMP_ALL_HOSTS) return; - if (ipv4_is_local_multicast(im->multiaddr) && !net->ipv4.sysctl_igmp_llm_reports) + if (ipv4_is_local_multicast(im->multiaddr) && + !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports)) return;
if (in_dev->dead) @@ -1644,7 +1649,7 @@ static void ip_mc_rejoin_groups(struct in_device *in_dev) if (im->multiaddr == IGMP_ALL_HOSTS) continue; if (ipv4_is_local_multicast(im->multiaddr) && - !net->ipv4.sysctl_igmp_llm_reports) + !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports)) continue;
/* a failover is happening and switches
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 6305d821e3b9b5379d348528e5b5faf316383bc2 ]
While reading sysctl_igmp_max_memberships, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/igmp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c index 7cd444d75c3d..660b41040c77 100644 --- a/net/ipv4/igmp.c +++ b/net/ipv4/igmp.c @@ -2199,7 +2199,7 @@ static int __ip_mc_join_group(struct sock *sk, struct ip_mreqn *imr, count++; } err = -ENOBUFS; - if (count >= net->ipv4.sysctl_igmp_max_memberships) + if (count >= READ_ONCE(net->ipv4.sysctl_igmp_max_memberships)) goto done; iml = sock_kmalloc(sk, sizeof(*iml), GFP_KERNEL); if (!iml)
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit f2e383b5bb6bbc60a0b94b87b3e49a2b1aefd11e ]
While reading sysctl_tcp_syncookies, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/filter.c | 4 ++-- net/ipv4/syncookies.c | 3 ++- net/ipv4/tcp_input.c | 20 ++++++++++++-------- net/ipv6/syncookies.c | 3 ++- 4 files changed, 18 insertions(+), 12 deletions(-)
diff --git a/net/core/filter.c b/net/core/filter.c index 75f53b5e6389..72bf78032f45 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -5839,7 +5839,7 @@ BPF_CALL_5(bpf_tcp_check_syncookie, struct sock *, sk, void *, iph, u32, iph_len if (sk->sk_protocol != IPPROTO_TCP || sk->sk_state != TCP_LISTEN) return -EINVAL;
- if (!sock_net(sk)->ipv4.sysctl_tcp_syncookies) + if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_syncookies)) return -EINVAL;
if (!th->ack || th->rst || th->syn) @@ -5914,7 +5914,7 @@ BPF_CALL_5(bpf_tcp_gen_syncookie, struct sock *, sk, void *, iph, u32, iph_len, if (sk->sk_protocol != IPPROTO_TCP || sk->sk_state != TCP_LISTEN) return -EINVAL;
- if (!sock_net(sk)->ipv4.sysctl_tcp_syncookies) + if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_syncookies)) return -ENOENT;
if (!th->syn || th->ack || th->fin || th->rst) diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c index 6811174ad518..f1cbf8911844 100644 --- a/net/ipv4/syncookies.c +++ b/net/ipv4/syncookies.c @@ -297,7 +297,8 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb) struct flowi4 fl4; u32 tsoff = 0;
- if (!sock_net(sk)->ipv4.sysctl_tcp_syncookies || !th->ack || th->rst) + if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_syncookies) || + !th->ack || th->rst) goto out;
if (tcp_synq_no_recent_overflow(sk)) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 0808110451a0..85204903b2fa 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -6530,11 +6530,14 @@ static bool tcp_syn_flood_action(const struct sock *sk, const char *proto) { struct request_sock_queue *queue = &inet_csk(sk)->icsk_accept_queue; const char *msg = "Dropping request"; - bool want_cookie = false; struct net *net = sock_net(sk); + bool want_cookie = false; + u8 syncookies; + + syncookies = READ_ONCE(net->ipv4.sysctl_tcp_syncookies);
#ifdef CONFIG_SYN_COOKIES - if (net->ipv4.sysctl_tcp_syncookies) { + if (syncookies) { msg = "Sending cookies"; want_cookie = true; __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPREQQFULLDOCOOKIES); @@ -6542,8 +6545,7 @@ static bool tcp_syn_flood_action(const struct sock *sk, const char *proto) #endif __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPREQQFULLDROP);
- if (!queue->synflood_warned && - net->ipv4.sysctl_tcp_syncookies != 2 && + if (!queue->synflood_warned && syncookies != 2 && xchg(&queue->synflood_warned, 1) == 0) net_info_ratelimited("%s: Possible SYN flooding on port %d. %s. Check SNMP counters.\n", proto, sk->sk_num, msg); @@ -6578,7 +6580,7 @@ u16 tcp_get_syncookie_mss(struct request_sock_ops *rsk_ops, struct tcp_sock *tp = tcp_sk(sk); u16 mss;
- if (sock_net(sk)->ipv4.sysctl_tcp_syncookies != 2 && + if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_syncookies) != 2 && !inet_csk_reqsk_queue_is_full(sk)) return 0;
@@ -6612,13 +6614,15 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops, bool want_cookie = false; struct dst_entry *dst; struct flowi fl; + u8 syncookies; + + syncookies = READ_ONCE(net->ipv4.sysctl_tcp_syncookies);
/* TW buckets are converted to open requests without * limitations, they conserve resources and peer is * evidently real one. */ - if ((net->ipv4.sysctl_tcp_syncookies == 2 || - inet_csk_reqsk_queue_is_full(sk)) && !isn) { + if ((syncookies == 2 || inet_csk_reqsk_queue_is_full(sk)) && !isn) { want_cookie = tcp_syn_flood_action(sk, rsk_ops->slab_name); if (!want_cookie) goto drop; @@ -6669,7 +6673,7 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
if (!want_cookie && !isn) { /* Kill the following clause, if you dislike this way. */ - if (!net->ipv4.sysctl_tcp_syncookies && + if (!syncookies && (net->ipv4.sysctl_max_syn_backlog - inet_csk_reqsk_queue_len(sk) < (net->ipv4.sysctl_max_syn_backlog >> 2)) && !tcp_peer_is_proven(req, dst)) { diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c index 37ab254f7b92..7e5550546594 100644 --- a/net/ipv6/syncookies.c +++ b/net/ipv6/syncookies.c @@ -141,7 +141,8 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb) __u8 rcv_wscale; u32 tsoff = 0;
- if (!sock_net(sk)->ipv4.sysctl_tcp_syncookies || !th->ack || th->rst) + if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_syncookies) || + !th->ack || th->rst) goto out;
if (tcp_synq_no_recent_overflow(sk))
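Worth noting: beyond adding READ_ONCE(), this patch makes tcp_syn_flood_action() and tcp_conn_request() load the sysctl once into a local (syncookies), so the several tests within one call all see the same value. A minimal sketch of why that matters (invented names, simplified READ_ONCE as before):

#include <stdio.h>

#define READ_ONCE(x)	(*(const volatile typeof(x) *)&(x))

static int sysctl_tcp_syncookies = 1;	/* may change at any time */

/* Reading the knob once into a local keeps the tests below consistent
 * with each other; reading it twice could observe the value flip from
 * 2 to 0 between them and take a contradictory path. */
static void handle_syn(void)
{
	int syncookies = READ_ONCE(sysctl_tcp_syncookies);

	if (syncookies == 2)
		printf("unconditional syncookies\n");
	else if (!syncookies)
		printf("syncookies disabled\n");
	else
		printf("syncookies on listen-queue overflow only\n");
}

int main(void)
{
	handle_syn();
	return 0;
}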
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 46778cd16e6a5ad1b2e3a91f6c057c907379418e ]
While reading sysctl_tcp_reordering, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp.c | 2 +- net/ipv4/tcp_input.c | 10 +++++++--- net/ipv4/tcp_metrics.c | 3 ++- 3 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 4815cf72569e..790246011fff 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -437,7 +437,7 @@ void tcp_init_sock(struct sock *sk) tp->snd_cwnd_clamp = ~0; tp->mss_cache = TCP_MSS_DEFAULT;
- tp->reordering = sock_net(sk)->ipv4.sysctl_tcp_reordering; + tp->reordering = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reordering); tcp_assign_congestion_control(sk);
tp->tsoffset = 0; diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 85204903b2fa..fbdb5de29a97 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -1994,6 +1994,7 @@ void tcp_enter_loss(struct sock *sk) struct tcp_sock *tp = tcp_sk(sk); struct net *net = sock_net(sk); bool new_recovery = icsk->icsk_ca_state < TCP_CA_Recovery; + u8 reordering;
tcp_timeout_mark_lost(sk);
@@ -2014,10 +2015,12 @@ void tcp_enter_loss(struct sock *sk) /* Timeout in disordered state after receiving substantial DUPACKs * suggests that the degree of reordering is over-estimated. */ + reordering = READ_ONCE(net->ipv4.sysctl_tcp_reordering); if (icsk->icsk_ca_state <= TCP_CA_Disorder && - tp->sacked_out >= net->ipv4.sysctl_tcp_reordering) + tp->sacked_out >= reordering) tp->reordering = min_t(unsigned int, tp->reordering, - net->ipv4.sysctl_tcp_reordering); + reordering); + tcp_set_ca_state(sk, TCP_CA_Loss); tp->high_seq = tp->snd_nxt; tcp_ecn_queue_cwr(tp); @@ -3319,7 +3322,8 @@ static inline bool tcp_may_raise_cwnd(const struct sock *sk, const int flag) * new SACK or ECE mark may first advance cwnd here and later reduce * cwnd in tcp_fastretrans_alert() based on more states. */ - if (tcp_sk(sk)->reordering > sock_net(sk)->ipv4.sysctl_tcp_reordering) + if (tcp_sk(sk)->reordering > + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reordering)) return flag & FLAG_FORWARD_PROGRESS;
return flag & FLAG_DATA_ACKED; diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c index c4848e7a0aad..9a7d8a599857 100644 --- a/net/ipv4/tcp_metrics.c +++ b/net/ipv4/tcp_metrics.c @@ -425,7 +425,8 @@ void tcp_update_metrics(struct sock *sk) if (!tcp_metric_locked(tm, TCP_METRIC_REORDERING)) { val = tcp_metric_get(tm, TCP_METRIC_REORDERING); if (val < tp->reordering && - tp->reordering != net->ipv4.sysctl_tcp_reordering) + tp->reordering != + READ_ONCE(net->ipv4.sysctl_tcp_reordering)) tcp_metric_set(tm, TCP_METRIC_REORDERING, tp->reordering); }
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 39e24435a776e9de5c6dd188836cf2523547804b ]
While reading these sysctl knobs, they can be changed concurrently. Thus, we need to add READ_ONCE() to their readers.
- tcp_retries1
- tcp_retries2
- tcp_orphan_retries
- tcp_fin_timeout
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- include/net/tcp.h | 3 ++- net/ipv4/tcp.c | 2 +- net/ipv4/tcp_output.c | 2 +- net/ipv4/tcp_timer.c | 10 +++++----- 4 files changed, 9 insertions(+), 8 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h index 65be8bd1f0f4..96dae0937998 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1465,7 +1465,8 @@ static inline u32 keepalive_time_elapsed(const struct tcp_sock *tp)
static inline int tcp_fin_time(const struct sock *sk) { - int fin_timeout = tcp_sk(sk)->linger2 ? : sock_net(sk)->ipv4.sysctl_tcp_fin_timeout; + int fin_timeout = tcp_sk(sk)->linger2 ? : + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fin_timeout); const int rto = inet_csk(sk)->icsk_rto;
if (fin_timeout < (rto << 2) - (rto >> 1)) diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 790246011fff..333d221e0717 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -3466,7 +3466,7 @@ static int do_tcp_getsockopt(struct sock *sk, int level, case TCP_LINGER2: val = tp->linger2; if (val >= 0) - val = (val ? : net->ipv4.sysctl_tcp_fin_timeout) / HZ; + val = (val ? : READ_ONCE(net->ipv4.sysctl_tcp_fin_timeout)) / HZ; break; case TCP_DEFER_ACCEPT: val = retrans_to_secs(icsk->icsk_accept_queue.rskq_defer_accept, diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 9bfe6965b873..8b602a202acb 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -3847,7 +3847,7 @@ void tcp_send_probe0(struct sock *sk)
icsk->icsk_probes_out++; if (err <= 0) { - if (icsk->icsk_backoff < net->ipv4.sysctl_tcp_retries2) + if (icsk->icsk_backoff < READ_ONCE(net->ipv4.sysctl_tcp_retries2)) icsk->icsk_backoff++; timeout = tcp_probe0_when(sk, TCP_RTO_MAX); } else { diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c index c48aeaef3ec7..26da44e196ed 100644 --- a/net/ipv4/tcp_timer.c +++ b/net/ipv4/tcp_timer.c @@ -143,7 +143,7 @@ static int tcp_out_of_resources(struct sock *sk, bool do_reset) */ static int tcp_orphan_retries(struct sock *sk, bool alive) { - int retries = sock_net(sk)->ipv4.sysctl_tcp_orphan_retries; /* May be zero. */ + int retries = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_orphan_retries); /* May be zero. */
/* We know from an ICMP that something is wrong. */ if (sk->sk_err_soft && !alive) @@ -245,7 +245,7 @@ static int tcp_write_timeout(struct sock *sk) retry_until = icsk->icsk_syn_retries ? : net->ipv4.sysctl_tcp_syn_retries; expired = icsk->icsk_retransmits >= retry_until; } else { - if (retransmits_timed_out(sk, net->ipv4.sysctl_tcp_retries1, 0)) { + if (retransmits_timed_out(sk, READ_ONCE(net->ipv4.sysctl_tcp_retries1), 0)) { /* Black hole detection */ tcp_mtu_probing(icsk, sk);
@@ -254,7 +254,7 @@ static int tcp_write_timeout(struct sock *sk) sk_rethink_txhash(sk); }
- retry_until = net->ipv4.sysctl_tcp_retries2; + retry_until = READ_ONCE(net->ipv4.sysctl_tcp_retries2); if (sock_flag(sk, SOCK_DEAD)) { const bool alive = icsk->icsk_rto < TCP_RTO_MAX;
@@ -381,7 +381,7 @@ static void tcp_probe_timer(struct sock *sk) msecs_to_jiffies(icsk->icsk_user_timeout)) goto abort;
- max_probes = sock_net(sk)->ipv4.sysctl_tcp_retries2; + max_probes = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_retries2); if (sock_flag(sk, SOCK_DEAD)) { const bool alive = inet_csk_rto_backoff(icsk, TCP_RTO_MAX) < TCP_RTO_MAX;
@@ -580,7 +580,7 @@ void tcp_retransmit_timer(struct sock *sk) } inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS, tcp_clamp_rto_to_user_timeout(sk), TCP_RTO_MAX); - if (retransmits_timed_out(sk, net->ipv4.sysctl_tcp_retries1 + 1, 0)) + if (retransmits_timed_out(sk, READ_ONCE(net->ipv4.sysctl_tcp_retries1) + 1, 0)) __sk_dst_reset(sk);
out:;
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 55be873695ed8912eb77ff46d1d1cadf028bd0f3 ]
While reading sysctl_tcp_notsent_lowat, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader.
Fixes: c9bee3b7fdec ("tcp: TCP_NOTSENT_LOWAT socket option") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- include/net/tcp.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h index 96dae0937998..eb984ec22f22 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1947,7 +1947,7 @@ void __tcp_v4_send_check(struct sk_buff *skb, __be32 saddr, __be32 daddr); static inline u32 tcp_notsent_lowat(const struct tcp_sock *tp) { struct net *net = sock_net((struct sock *)tp); - return tp->notsent_lowat ?: net->ipv4.sysctl_tcp_notsent_lowat; + return tp->notsent_lowat ?: READ_ONCE(net->ipv4.sysctl_tcp_notsent_lowat); }
/* @wake is one when sk_stream_write_space() calls us.
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit cbfc6495586a3f09f6f07d9fb3c7cafe807e3c55 ]
While reading sysctl_tcp_tw_reuse, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_ipv4.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 72fe93ace7d7..b95e1a3487c8 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -105,10 +105,10 @@ static u32 tcp_v4_init_ts_off(const struct net *net, const struct sk_buff *skb)
int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp) { + int reuse = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_tw_reuse); const struct inet_timewait_sock *tw = inet_twsk(sktw); const struct tcp_timewait_sock *tcptw = tcp_twsk(sktw); struct tcp_sock *tp = tcp_sk(sk); - int reuse = sock_net(sk)->ipv4.sysctl_tcp_tw_reuse;
if (reuse == 2) { /* Still does not detect *everything* that goes through
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 79539f34743d3e14cc1fa6577d326a82cc64d62f ]
While reading sysctl_max_syn_backlog, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_input.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index fbdb5de29a97..c1f26603cd2c 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -6676,10 +6676,12 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops, goto drop_and_free;
if (!want_cookie && !isn) { + int max_syn_backlog = READ_ONCE(net->ipv4.sysctl_max_syn_backlog); + /* Kill the following clause, if you dislike this way. */ if (!syncookies && - (net->ipv4.sysctl_max_syn_backlog - inet_csk_reqsk_queue_len(sk) < - (net->ipv4.sysctl_max_syn_backlog >> 2)) && + (max_syn_backlog - inet_csk_reqsk_queue_len(sk) < + (max_syn_backlog >> 2)) && !tcp_peer_is_proven(req, dst)) { /* Without syncookies last quarter of * backlog is filled with destinations,
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 5a54213318c43f4009ae158347aa6016e3b9b55a ]
While reading sysctl_tcp_fastopen, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers.
Fixes: 2100c8d2d9db ("net-tcp: Fast Open base") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Acked-by: Yuchung Cheng ycheng@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/af_inet.c | 2 +- net/ipv4/tcp.c | 6 ++++-- net/ipv4/tcp_fastopen.c | 4 ++-- 3 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c index 06153386776d..d61ca7be6eda 100644 --- a/net/ipv4/af_inet.c +++ b/net/ipv4/af_inet.c @@ -219,7 +219,7 @@ int inet_listen(struct socket *sock, int backlog) * because the socket was in TCP_LISTEN state previously but * was shutdown() rather than close(). */ - tcp_fastopen = sock_net(sk)->ipv4.sysctl_tcp_fastopen; + tcp_fastopen = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fastopen); if ((tcp_fastopen & TFO_SERVER_WO_SOCKOPT1) && (tcp_fastopen & TFO_SERVER_ENABLE) && !inet_csk(sk)->icsk_accept_queue.fastopenq.max_qlen) { diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 333d221e0717..4b31f6e9ec61 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -1148,7 +1148,8 @@ static int tcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg, struct sockaddr *uaddr = msg->msg_name; int err, flags;
- if (!(sock_net(sk)->ipv4.sysctl_tcp_fastopen & TFO_CLIENT_ENABLE) || + if (!(READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fastopen) & + TFO_CLIENT_ENABLE) || (uaddr && msg->msg_namelen >= sizeof(uaddr->sa_family) && uaddr->sa_family == AF_UNSPEC)) return -EOPNOTSUPP; @@ -3127,7 +3128,8 @@ static int do_tcp_setsockopt(struct sock *sk, int level, case TCP_FASTOPEN_CONNECT: if (val > 1 || val < 0) { err = -EINVAL; - } else if (net->ipv4.sysctl_tcp_fastopen & TFO_CLIENT_ENABLE) { + } else if (READ_ONCE(net->ipv4.sysctl_tcp_fastopen) & + TFO_CLIENT_ENABLE) { if (sk->sk_state == TCP_CLOSE) tp->fastopen_connect = val; else diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c index a5ec77a5ad6f..21705b2ddaff 100644 --- a/net/ipv4/tcp_fastopen.c +++ b/net/ipv4/tcp_fastopen.c @@ -349,7 +349,7 @@ static bool tcp_fastopen_no_cookie(const struct sock *sk, const struct dst_entry *dst, int flag) { - return (sock_net(sk)->ipv4.sysctl_tcp_fastopen & flag) || + return (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fastopen) & flag) || tcp_sk(sk)->fastopen_no_cookie || (dst && dst_metric(dst, RTAX_FASTOPEN_NO_COOKIE)); } @@ -364,7 +364,7 @@ struct sock *tcp_try_fastopen(struct sock *sk, struct sk_buff *skb, const struct dst_entry *dst) { bool syn_data = TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(skb)->seq + 1; - int tcp_fastopen = sock_net(sk)->ipv4.sysctl_tcp_fastopen; + int tcp_fastopen = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fastopen); struct tcp_fastopen_cookie valid_foc = { .len = -1 }; struct sock *child; int ret = 0;
From: Przemyslaw Patynowski przemyslawx.patynowski@intel.com
[ Upstream commit a9f49e0060301a9bfebeca76739158d0cf91cdf6 ]
Fix a memory leak caused by not handling dummy receive descriptors properly. iavf_get_rx_buffer now sets the rx_buffer return value for dummy receive descriptors. Without this patch, when the hardware wrote a dummy descriptor, iavf would not free the page allocated for the previous receive buffer. This is an unlikely event but can still happen.
[Jesse: massaged commit message]
Fixes: efa14c398582 ("iavf: allow null RX descriptors") Signed-off-by: Przemyslaw Patynowski przemyslawx.patynowski@intel.com Signed-off-by: Jesse Brandeburg jesse.brandeburg@intel.com Tested-by: Konrad Jankowski konrad0.jankowski@intel.com Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/intel/iavf/iavf_txrx.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c index 7a30d5d5ef53..c6905d1b6182 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c @@ -1263,11 +1263,10 @@ static struct iavf_rx_buffer *iavf_get_rx_buffer(struct iavf_ring *rx_ring, { struct iavf_rx_buffer *rx_buffer;
- if (!size) - return NULL; - rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean]; prefetchw(rx_buffer->page); + if (!size) + return rx_buffer;
/* we are reusing so sync this buffer for CPU use */ dma_sync_single_range_for_cpu(rx_ring->dev,
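A distilled view of the fix, with invented structure and function names (the real driver's types differ): a zero-size descriptor must still yield the buffer so its page can be recycled, while only the DMA sync of the payload is skipped.

#include <stdio.h>

struct rx_buffer {
	void *page;	/* page backing the previous receive */
};

/* Even for a zero-length "dummy" descriptor, hand the caller the
 * current buffer so the page can be freed or reused. */
static struct rx_buffer *get_rx_buffer(struct rx_buffer *ring,
				       unsigned int idx, unsigned int size)
{
	struct rx_buffer *buf = &ring[idx];

	if (!size)
		return buf;	/* old code returned NULL, leaking the page */

	/* dma_sync_single_range_for_cpu(...) would go here */
	return buf;
}

int main(void)
{
	struct rx_buffer ring[1] = { { .page = (void *)0x1 } };
	struct rx_buffer *b = get_rx_buffer(ring, 0, 0);

	printf("buffer %p can now be recycled\n", (void *)b);
	return 0;
}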
From: Dawid Lukwinski dawid.lukwinski@intel.com
[ Upstream commit f838a63369818faadec4ad1736cfbd20ab5da00e ]
Fix an issue where the driver incorrectly detects the state of the recovery process and erroneously reinitializes interrupts, which results in a kernel error and a call trace message.
The issue was caused by a combination of two factors:
1. Assuming the EMP reset issued after completing firmware recovery means the whole recovery process is complete.
2. Erroneous reinitialization of interrupt vector after detecting the above mentioned EMP reset.
Fix (1) by changing how the recovery state change is detected, and (2) by adjusting the conditional expression to ensure the proper interrupt reinitialization method is used, depending on the situation.
Fixes: 4ff0ee1af016 ("i40e: Introduce recovery mode support") Signed-off-by: Dawid Lukwinski dawid.lukwinski@intel.com Signed-off-by: Jan Sokolowski jan.sokolowski@intel.com Tested-by: Konrad Jankowski konrad0.jankowski@intel.com Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Link: https://lore.kernel.org/r/20220715214542.2968762-1-anthony.l.nguyen@intel.co... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/intel/i40e/i40e_main.c | 13 +++++-------- 1 file changed, 5 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index 05442bbc218c..0610d344fdbf 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -10068,7 +10068,7 @@ static int i40e_reset(struct i40e_pf *pf) **/ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired) { - int old_recovery_mode_bit = test_bit(__I40E_RECOVERY_MODE, pf->state); + const bool is_recovery_mode_reported = i40e_check_recovery_mode(pf); struct i40e_vsi *vsi = pf->vsi[pf->lan_vsi]; struct i40e_hw *hw = &pf->hw; i40e_status ret; @@ -10076,13 +10076,11 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired) int v;
if (test_bit(__I40E_EMP_RESET_INTR_RECEIVED, pf->state) && - i40e_check_recovery_mode(pf)) { + is_recovery_mode_reported) i40e_set_ethtool_ops(pf->vsi[pf->lan_vsi]->netdev); - }
if (test_bit(__I40E_DOWN, pf->state) && - !test_bit(__I40E_RECOVERY_MODE, pf->state) && - !old_recovery_mode_bit) + !test_bit(__I40E_RECOVERY_MODE, pf->state)) goto clear_recovery; dev_dbg(&pf->pdev->dev, "Rebuilding internal switch\n");
@@ -10109,13 +10107,12 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired) * accordingly with regard to resources initialization * and deinitialization */ - if (test_bit(__I40E_RECOVERY_MODE, pf->state) || - old_recovery_mode_bit) { + if (test_bit(__I40E_RECOVERY_MODE, pf->state)) { if (i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities)) goto end_unlock;
- if (test_bit(__I40E_RECOVERY_MODE, pf->state)) { + if (is_recovery_mode_reported) { /* we're staying in recovery mode so we'll reinitialize * misc vector here */
From: Piotr Skajewski piotrx.skajewski@intel.com
[ Upstream commit 1e53834ce541d4fe271cdcca7703e50be0a44f8a ]
It is possible to disable VFs while the PF driver is processing requests from the VF driver. This can result in a panic.
BUG: unable to handle kernel paging request at 000000000000106c
PGD 0 P4D 0
Oops: 0000 [#1] SMP NOPTI
CPU: 8 PID: 0 Comm: swapper/8 Kdump: loaded Tainted: G I --------- -
Hardware name: Dell Inc. PowerEdge R740/06WXJT, BIOS 2.8.2 08/27/2020
RIP: 0010:ixgbe_msg_task+0x4c8/0x1690 [ixgbe]
Code: 00 00 48 8d 04 40 48 c1 e0 05 89 7c 24 24 89 fd 48 89 44 24 10 83 ff 01 0f 84 b8 04 00 00 4c 8b 64 24 10 4d 03 a5 48 22 00 00 <41> 80 7c 24 4c 00 0f 84 8a 03 00 00 0f b7 c7 83 f8 08 0f 84 8f 0a
RSP: 0018:ffffb337869f8df8 EFLAGS: 00010002
RAX: 0000000000001020 RBX: 0000000000000000 RCX: 000000000000002b
RDX: 0000000000000002 RSI: 0000000000000008 RDI: 0000000000000006
RBP: 0000000000000006 R08: 0000000000000002 R09: 0000000000029780
R10: 00006957d8f42832 R11: 0000000000000000 R12: 0000000000001020
R13: ffff8a00e8978ac0 R14: 000000000000002b R15: ffff8a00e8979c80
FS: 0000000000000000(0000) GS:ffff8a07dfd00000(0000) knlGS:00000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000000000106c CR3: 0000000063e10004 CR4: 00000000007726e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
 <IRQ>
 ? ttwu_do_wakeup+0x19/0x140
 ? try_to_wake_up+0x1cd/0x550
 ? ixgbevf_update_xcast_mode+0x71/0xc0 [ixgbevf]
 ixgbe_msix_other+0x17e/0x310 [ixgbe]
 __handle_irq_event_percpu+0x40/0x180
 handle_irq_event_percpu+0x30/0x80
 handle_irq_event+0x36/0x53
 handle_edge_irq+0x82/0x190
 handle_irq+0x1c/0x30
 do_IRQ+0x49/0xd0
 common_interrupt+0xf/0xf
This can eventually be reproduced with the following script:
while :
do
	echo 63 > /sys/class/net/<devname>/device/sriov_numvfs
	sleep 1
	echo 0 > /sys/class/net/<devname>/device/sriov_numvfs
	sleep 1
done
Add a lock when disabling SR-IOV to prevent VF mailbox communication from being processed while the VF resources are being torn down.
Fixes: d773d1310625 ("ixgbe: Fix memory leak when SR-IOV VFs are direct assigned") Signed-off-by: Piotr Skajewski piotrx.skajewski@intel.com Tested-by: Marek Szlosek marek.szlosek@intel.com Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Link: https://lore.kernel.org/r/20220715214456.2968711-1-anthony.l.nguyen@intel.co... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/intel/ixgbe/ixgbe.h | 1 + drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 3 +++ drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c | 6 ++++++ 3 files changed, 10 insertions(+)
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h index 39e73ad60352..fa49ef2afde5 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h @@ -773,6 +773,7 @@ struct ixgbe_adapter { #ifdef CONFIG_IXGBE_IPSEC struct ixgbe_ipsec *ipsec; #endif /* CONFIG_IXGBE_IPSEC */ + spinlock_t vfs_lock; };
static inline u8 ixgbe_max_rss_indices(struct ixgbe_adapter *adapter) diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c index 8a894e5d923f..f8aa1a0b89c5 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c @@ -6396,6 +6396,9 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter, /* n-tuple support exists, always init our spinlock */ spin_lock_init(&adapter->fdir_perfect_lock);
+ /* init spinlock to avoid concurrency of VF resources */ + spin_lock_init(&adapter->vfs_lock); + #ifdef CONFIG_IXGBE_DCB ixgbe_init_dcb(adapter); #endif diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c index cf5c2b9465eb..0e73e3b1af19 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c @@ -204,10 +204,13 @@ void ixgbe_enable_sriov(struct ixgbe_adapter *adapter, unsigned int max_vfs) int ixgbe_disable_sriov(struct ixgbe_adapter *adapter) { unsigned int num_vfs = adapter->num_vfs, vf; + unsigned long flags; int rss;
+ spin_lock_irqsave(&adapter->vfs_lock, flags); /* set num VFs to 0 to prevent access to vfinfo */ adapter->num_vfs = 0; + spin_unlock_irqrestore(&adapter->vfs_lock, flags);
/* put the reference to all of the vf devices */ for (vf = 0; vf < num_vfs; ++vf) { @@ -1305,8 +1308,10 @@ static void ixgbe_rcv_ack_from_vf(struct ixgbe_adapter *adapter, u32 vf) void ixgbe_msg_task(struct ixgbe_adapter *adapter) { struct ixgbe_hw *hw = &adapter->hw; + unsigned long flags; u32 vf;
+ spin_lock_irqsave(&adapter->vfs_lock, flags); for (vf = 0; vf < adapter->num_vfs; vf++) { /* process any reset requests */ if (!ixgbe_check_for_rst(hw, vf)) @@ -1320,6 +1325,7 @@ void ixgbe_msg_task(struct ixgbe_adapter *adapter) if (!ixgbe_check_for_ack(hw, vf)) ixgbe_rcv_ack_from_vf(adapter, vf); } + spin_unlock_irqrestore(&adapter->vfs_lock, flags); }
void ixgbe_disable_tx_rx(struct ixgbe_adapter *adapter)
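Reduced to its essentials, the fix pairs the teardown path and the mailbox scan on one lock. A userspace sketch (the pthread mutex stands in for the driver's spin_lock_irqsave on vfs_lock, which is needed because ixgbe_msg_task() runs from interrupt context; names are simplified):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t vfs_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int num_vfs = 63;

/* Disable path: zero the count under the lock before freeing the
 * per-VF data, so the mailbox loop below can never index a stale
 * array. */
static void disable_sriov(void)
{
	pthread_mutex_lock(&vfs_lock);
	num_vfs = 0;
	pthread_mutex_unlock(&vfs_lock);
	/* safe to free the vfinfo array here: no scan is in flight */
}

/* Mailbox task (interrupt context in the driver): holds the same lock
 * for the whole scan, so it sees either the old count with valid data
 * or a count of zero. */
static void msg_task(void)
{
	pthread_mutex_lock(&vfs_lock);
	for (unsigned int vf = 0; vf < num_vfs; vf++)
		;	/* process reset/msg/ack for this VF */
	pthread_mutex_unlock(&vfs_lock);
}

int main(void)
{
	disable_sriov();
	msg_task();
	printf("num_vfs=%u\n", num_vfs);
	return 0;
}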
From: Haibo Chen haibo.chen@nxp.com
[ Upstream commit db8edaa09d7461ec08672a92a2eef63d5882bb79 ]
For devices that use NO-AI mode (i.e. do not support auto address increment), use only single reads/writes when configuring the regmap.
We hit this issue with the PCA9557PW on the i.MX8QXP/DXL EVK board. This device does not support AI mode, but during a regmap sync, regmap would write 3 bytes of data to register 1; logically this means writing the first byte to register 1, the second byte to register 2, and the third byte to register 3. Because the device does not support AI mode, however, all three bytes were written into register 1 one by one, so register 1 ended up holding only the last byte and no operation happened on register 2 or register 3. This is not what we expect.
Fixes: 49427232764d ("gpio: pca953x: Perform basic regmap conversion") Signed-off-by: Haibo Chen haibo.chen@nxp.com Reviewed-by: Andy Shevchenko andy.shevchenko@gmail.com Signed-off-by: Bartosz Golaszewski brgl@bgdev.pl Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpio/gpio-pca953x.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c index 54da66d02b0e..317f54f19477 100644 --- a/drivers/gpio/gpio-pca953x.c +++ b/drivers/gpio/gpio-pca953x.c @@ -379,6 +379,9 @@ static const struct regmap_config pca953x_i2c_regmap = { .reg_bits = 8, .val_bits = 8,
+ .use_single_read = true, + .use_single_write = true, + .readable_reg = pca953x_readable_register, .writeable_reg = pca953x_writeable_register, .volatile_reg = pca953x_volatile_register,
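A toy model of the failure mode described above (not the regmap API; register count and data are invented): a bulk write assumes the device auto-increments its register pointer, while a NO-AI device keeps writing the same register.

#include <stdio.h>
#include <string.h>

#define NREGS 8

static unsigned char regs[NREGS];

/* A NO-AI device: the register pointer does not advance, so every
 * byte of a "bulk" write lands on the same register. */
static void noai_bulk_write(unsigned int reg, const unsigned char *buf,
			    size_t len)
{
	for (size_t i = 0; i < len; i++)
		regs[reg] = buf[i];	/* reg never increments */
}

/* What .use_single_write = true makes regmap do instead: one explicit
 * register address per byte. */
static void single_writes(unsigned int reg, const unsigned char *buf,
			  size_t len)
{
	for (size_t i = 0; i < len; i++)
		regs[reg + i] = buf[i];
}

int main(void)
{
	const unsigned char data[3] = { 0xaa, 0xbb, 0xcc };

	noai_bulk_write(1, data, 3);
	printf("bulk on NO-AI:  reg1=%02x reg2=%02x reg3=%02x\n",
	       regs[1], regs[2], regs[3]);	/* cc 00 00: data lost */

	memset(regs, 0, sizeof(regs));
	single_writes(1, data, 3);
	printf("single writes:  reg1=%02x reg2=%02x reg3=%02x\n",
	       regs[1], regs[2], regs[3]);	/* aa bb cc */
	return 0;
}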
From: Hristo Venev hristo@venev.name
[ Upstream commit d7241f679a59cfe27f92cb5c6272cb429fb1f7ec ]
be_cmd_read_port_transceiver_data assumes that it is given a buffer that is at least PAGE_DATA_LEN long, or twice that if the module supports SFF 8472. However, this is not always the case.
Fix this by passing the desired offset and length to be_cmd_read_port_transceiver_data so that we only copy the bytes once.
Fixes: e36edd9d26cf ("be2net: add ethtool "-m" option support") Signed-off-by: Hristo Venev hristo@venev.name Link: https://lore.kernel.org/r/20220716085134.6095-1-hristo@venev.name Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/emulex/benet/be_cmds.c | 10 +++--- drivers/net/ethernet/emulex/benet/be_cmds.h | 2 +- .../net/ethernet/emulex/benet/be_ethtool.c | 31 ++++++++++++------- 3 files changed, 25 insertions(+), 18 deletions(-)
diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c index 649c5c429bd7..1288b5e3d220 100644 --- a/drivers/net/ethernet/emulex/benet/be_cmds.c +++ b/drivers/net/ethernet/emulex/benet/be_cmds.c @@ -2287,7 +2287,7 @@ int be_cmd_get_beacon_state(struct be_adapter *adapter, u8 port_num, u32 *state)
/* Uses sync mcc */ int be_cmd_read_port_transceiver_data(struct be_adapter *adapter, - u8 page_num, u8 *data) + u8 page_num, u32 off, u32 len, u8 *data) { struct be_dma_mem cmd; struct be_mcc_wrb *wrb; @@ -2321,10 +2321,10 @@ int be_cmd_read_port_transceiver_data(struct be_adapter *adapter, req->port = cpu_to_le32(adapter->hba_port_num); req->page_num = cpu_to_le32(page_num); status = be_mcc_notify_wait(adapter); - if (!status) { + if (!status && len > 0) { struct be_cmd_resp_port_type *resp = cmd.va;
- memcpy(data, resp->page_data, PAGE_DATA_LEN); + memcpy(data, resp->page_data + off, len); } err: mutex_unlock(&adapter->mcc_lock); @@ -2415,7 +2415,7 @@ int be_cmd_query_cable_type(struct be_adapter *adapter) int status;
status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0, - page_data); + 0, PAGE_DATA_LEN, page_data); if (!status) { switch (adapter->phy.interface_type) { case PHY_TYPE_QSFP: @@ -2440,7 +2440,7 @@ int be_cmd_query_sfp_info(struct be_adapter *adapter) int status;
status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0, - page_data); + 0, PAGE_DATA_LEN, page_data); if (!status) { strlcpy(adapter->phy.vendor_name, page_data + SFP_VENDOR_NAME_OFFSET, SFP_VENDOR_NAME_LEN - 1); diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.h b/drivers/net/ethernet/emulex/benet/be_cmds.h index c30d6d6f0f3a..9e17d6a7ab8c 100644 --- a/drivers/net/ethernet/emulex/benet/be_cmds.h +++ b/drivers/net/ethernet/emulex/benet/be_cmds.h @@ -2427,7 +2427,7 @@ int be_cmd_set_beacon_state(struct be_adapter *adapter, u8 port_num, u8 beacon, int be_cmd_get_beacon_state(struct be_adapter *adapter, u8 port_num, u32 *state); int be_cmd_read_port_transceiver_data(struct be_adapter *adapter, - u8 page_num, u8 *data); + u8 page_num, u32 off, u32 len, u8 *data); int be_cmd_query_cable_type(struct be_adapter *adapter); int be_cmd_query_sfp_info(struct be_adapter *adapter); int lancer_cmd_read_object(struct be_adapter *adapter, struct be_dma_mem *cmd, diff --git a/drivers/net/ethernet/emulex/benet/be_ethtool.c b/drivers/net/ethernet/emulex/benet/be_ethtool.c index 5bb5abf99588..7cc1f41971c5 100644 --- a/drivers/net/ethernet/emulex/benet/be_ethtool.c +++ b/drivers/net/ethernet/emulex/benet/be_ethtool.c @@ -1339,7 +1339,7 @@ static int be_get_module_info(struct net_device *netdev, return -EOPNOTSUPP;
status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0, - page_data); + 0, PAGE_DATA_LEN, page_data); if (!status) { if (!page_data[SFP_PLUS_SFF_8472_COMP]) { modinfo->type = ETH_MODULE_SFF_8079; @@ -1357,25 +1357,32 @@ static int be_get_module_eeprom(struct net_device *netdev, { struct be_adapter *adapter = netdev_priv(netdev); int status; + u32 begin, end;
if (!check_privilege(adapter, MAX_PRIVILEGES)) return -EOPNOTSUPP;
- status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0, - data); - if (status) - goto err; + begin = eeprom->offset; + end = eeprom->offset + eeprom->len; + + if (begin < PAGE_DATA_LEN) { + status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0, begin, + min_t(u32, end, PAGE_DATA_LEN) - begin, + data); + if (status) + goto err; + + data += PAGE_DATA_LEN - begin; + begin = PAGE_DATA_LEN; + }
- if (eeprom->offset + eeprom->len > PAGE_DATA_LEN) { - status = be_cmd_read_port_transceiver_data(adapter, - TR_PAGE_A2, - data + - PAGE_DATA_LEN); + if (end > PAGE_DATA_LEN) { + status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A2, + begin - PAGE_DATA_LEN, + end - begin, data); if (status) goto err; } - if (eeprom->offset) - memcpy(data, data + eeprom->offset, eeprom->len); err: return be_cmd_status(status); }
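The new flow boils down to splitting the requested byte window across the two EEPROM pages and copying each byte exactly once. A standalone sketch of just that arithmetic (the names and the 256-byte page length mirror the patch; everything else is invented):

#include <stdio.h>

#define PAGE_DATA_LEN 256u

/* Sketch of the two-page split in be_get_module_eeprom(): read only
 * the bytes the caller asked for, at the right offset within each
 * page. */
static void split_request(unsigned int offset, unsigned int len)
{
	unsigned int begin = offset, end = offset + len;

	if (begin < PAGE_DATA_LEN) {
		unsigned int n = (end < PAGE_DATA_LEN ? end : PAGE_DATA_LEN)
				 - begin;

		printf("page A0: off=%u len=%u\n", begin, n);
		begin = PAGE_DATA_LEN;
	}
	if (end > PAGE_DATA_LEN)
		printf("page A2: off=%u len=%u\n",
		       begin - PAGE_DATA_LEN, end - begin);
}

int main(void)
{
	/* Spans both pages: bytes 200..255 of A0, then bytes 0..43 of A2. */
	split_request(200, 100);
	return 0;
}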
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 87507bcb4f5de16bb419e9509d874f4db6c0ad0f ]
While reading sysctl_fib_multipath_use_neigh, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader.
Fixes: a6db4494d218 ("net: ipv4: Consider failed nexthops in multipath routes") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/fib_semantics.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
index 16fe03461563..28da0443f3e9 100644
--- a/net/ipv4/fib_semantics.c
+++ b/net/ipv4/fib_semantics.c
@@ -2209,7 +2209,7 @@ void fib_select_multipath(struct fib_result *res, int hash)
 	}

 	change_nexthops(fi) {
-		if (net->ipv4.sysctl_fib_multipath_use_neigh) {
+		if (READ_ONCE(net->ipv4.sysctl_fib_multipath_use_neigh)) {
 			if (!fib_good_nh(nexthop_nh))
 				continue;
 			if (!first) {
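The same one-line substitution recurs throughout the sysctl patches that follow, so a single stand-alone sketch of what the annotation buys is given here. The names are hypothetical, and the macro is a simplified user-space stand-in for the kernel's <linux/compiler.h> version:

#include <stdio.h>

/* Simplified stand-in for the kernel's READ_ONCE(); the real macro
 * also copes with non-scalar types. */
#define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

int shared_knob;	/* imagine a sysctl handler storing here at any time */

static int consumer(void)
{
	/*
	 * A plain read of shared_knob races with a concurrent writer:
	 * the compiler may reload, tear, or fuse the access, so two
	 * uses within one function can observe different values.
	 * READ_ONCE() forces a single untorn load, giving the reader
	 * one consistent snapshot of the knob.
	 */
	return READ_ONCE(shared_knob) != 0;
}

int main(void)
{
	printf("knob set: %d\n", consumer());
	return 0;
}

A matching WRITE_ONCE() on the writer side completes the idiom; the patches in this series annotate the readers.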
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 3d72bb4188c708bb16758c60822fc4dda7a95174 ]
sysctl_udp_l3mdev_accept can be changed concurrently while it is being read. Thus, we need to add READ_ONCE() to its reader.
Fixes: 63a6fff353d0 ("net: Avoid receiving packets with an l3mdev on unbound UDP sockets") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- include/net/udp.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/net/udp.h b/include/net/udp.h index 9787a42f7ed3..e66854e767dc 100644 --- a/include/net/udp.h +++ b/include/net/udp.h @@ -252,7 +252,7 @@ static inline bool udp_sk_bound_dev_eq(struct net *net, int bound_dev_if, int dif, int sdif) { #if IS_ENABLED(CONFIG_NET_L3_MASTER_DEV) - return inet_bound_dev_eq(!!net->ipv4.sysctl_udp_l3mdev_accept, + return inet_bound_dev_eq(!!READ_ONCE(net->ipv4.sysctl_udp_l3mdev_accept), bound_dev_if, dif, sdif); #else return inet_bound_dev_eq(true, bound_dev_if, dif, sdif);
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 3666f666e99600518ab20982af04a078bbdad277 ]
These knobs can be changed concurrently while they are being read. Thus, we need to add READ_ONCE() to their readers.
- tcp_sack
- tcp_window_scaling
- tcp_timestamps
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/crypto/chelsio/chtls/chtls_cm.c | 6 +++--- net/core/secure_seq.c | 4 ++-- net/ipv4/syncookies.c | 6 +++--- net/ipv4/tcp_input.c | 6 +++--- net/ipv4/tcp_output.c | 10 +++++----- 5 files changed, 16 insertions(+), 16 deletions(-)
diff --git a/drivers/crypto/chelsio/chtls/chtls_cm.c b/drivers/crypto/chelsio/chtls/chtls_cm.c index 82b76df43ae5..3b79bcd03e7b 100644 --- a/drivers/crypto/chelsio/chtls/chtls_cm.c +++ b/drivers/crypto/chelsio/chtls/chtls_cm.c @@ -1103,8 +1103,8 @@ static struct sock *chtls_recv_sock(struct sock *lsk, csk->sndbuf = newsk->sk_sndbuf; csk->smac_idx = ((struct port_info *)netdev_priv(ndev))->smt_idx; RCV_WSCALE(tp) = select_rcv_wscale(tcp_full_space(newsk), - sock_net(newsk)-> - ipv4.sysctl_tcp_window_scaling, + READ_ONCE(sock_net(newsk)-> + ipv4.sysctl_tcp_window_scaling), tp->window_clamp); neigh_release(n); inet_inherit_port(&tcp_hashinfo, lsk, newsk); @@ -1235,7 +1235,7 @@ static void chtls_pass_accept_request(struct sock *sk, chtls_set_req_addr(oreq, iph->daddr, iph->saddr); ip_dsfield = ipv4_get_dsfield(iph); if (req->tcpopt.wsf <= 14 && - sock_net(sk)->ipv4.sysctl_tcp_window_scaling) { + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_window_scaling)) { inet_rsk(oreq)->wscale_ok = 1; inet_rsk(oreq)->snd_wscale = req->tcpopt.wsf; } diff --git a/net/core/secure_seq.c b/net/core/secure_seq.c index a1867c65ac63..6d86506e315f 100644 --- a/net/core/secure_seq.c +++ b/net/core/secure_seq.c @@ -65,7 +65,7 @@ u32 secure_tcpv6_ts_off(const struct net *net, .daddr = *(struct in6_addr *)daddr, };
- if (net->ipv4.sysctl_tcp_timestamps != 1) + if (READ_ONCE(net->ipv4.sysctl_tcp_timestamps) != 1) return 0;
ts_secret_init(); @@ -121,7 +121,7 @@ EXPORT_SYMBOL(secure_ipv6_port_ephemeral); #ifdef CONFIG_INET u32 secure_tcp_ts_off(const struct net *net, __be32 saddr, __be32 daddr) { - if (net->ipv4.sysctl_tcp_timestamps != 1) + if (READ_ONCE(net->ipv4.sysctl_tcp_timestamps) != 1) return 0;
ts_secret_init(); diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c index f1cbf8911844..3f6c9514c7a9 100644 --- a/net/ipv4/syncookies.c +++ b/net/ipv4/syncookies.c @@ -243,12 +243,12 @@ bool cookie_timestamp_decode(const struct net *net, return true; }
- if (!net->ipv4.sysctl_tcp_timestamps) + if (!READ_ONCE(net->ipv4.sysctl_tcp_timestamps)) return false;
tcp_opt->sack_ok = (options & TS_OPT_SACK) ? TCP_SACK_SEEN : 0;
- if (tcp_opt->sack_ok && !net->ipv4.sysctl_tcp_sack) + if (tcp_opt->sack_ok && !READ_ONCE(net->ipv4.sysctl_tcp_sack)) return false;
if ((options & TS_OPT_WSCALE_MASK) == TS_OPT_WSCALE_MASK) @@ -257,7 +257,7 @@ bool cookie_timestamp_decode(const struct net *net, tcp_opt->wscale_ok = 1; tcp_opt->snd_wscale = options & TS_OPT_WSCALE_MASK;
- return net->ipv4.sysctl_tcp_window_scaling != 0; + return READ_ONCE(net->ipv4.sysctl_tcp_window_scaling) != 0; } EXPORT_SYMBOL(cookie_timestamp_decode);
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index c1f26603cd2c..28df6c3feb3f 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -3906,7 +3906,7 @@ void tcp_parse_options(const struct net *net, break; case TCPOPT_WINDOW: if (opsize == TCPOLEN_WINDOW && th->syn && - !estab && net->ipv4.sysctl_tcp_window_scaling) { + !estab && READ_ONCE(net->ipv4.sysctl_tcp_window_scaling)) { __u8 snd_wscale = *(__u8 *)ptr; opt_rx->wscale_ok = 1; if (snd_wscale > TCP_MAX_WSCALE) { @@ -3922,7 +3922,7 @@ void tcp_parse_options(const struct net *net, case TCPOPT_TIMESTAMP: if ((opsize == TCPOLEN_TIMESTAMP) && ((estab && opt_rx->tstamp_ok) || - (!estab && net->ipv4.sysctl_tcp_timestamps))) { + (!estab && READ_ONCE(net->ipv4.sysctl_tcp_timestamps)))) { opt_rx->saw_tstamp = 1; opt_rx->rcv_tsval = get_unaligned_be32(ptr); opt_rx->rcv_tsecr = get_unaligned_be32(ptr + 4); @@ -3930,7 +3930,7 @@ void tcp_parse_options(const struct net *net, break; case TCPOPT_SACK_PERM: if (opsize == TCPOLEN_SACK_PERM && th->syn && - !estab && net->ipv4.sysctl_tcp_sack) { + !estab && READ_ONCE(net->ipv4.sysctl_tcp_sack)) { opt_rx->sack_ok = TCP_SACK_SEEN; tcp_sack_reset(opt_rx); } diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 8b602a202acb..5cc345c4006e 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -620,18 +620,18 @@ static unsigned int tcp_syn_options(struct sock *sk, struct sk_buff *skb, opts->mss = tcp_advertise_mss(sk); remaining -= TCPOLEN_MSS_ALIGNED;
- if (likely(sock_net(sk)->ipv4.sysctl_tcp_timestamps && !*md5)) { + if (likely(READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_timestamps) && !*md5)) { opts->options |= OPTION_TS; opts->tsval = tcp_skb_timestamp(skb) + tp->tsoffset; opts->tsecr = tp->rx_opt.ts_recent; remaining -= TCPOLEN_TSTAMP_ALIGNED; } - if (likely(sock_net(sk)->ipv4.sysctl_tcp_window_scaling)) { + if (likely(READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_window_scaling))) { opts->ws = tp->rx_opt.rcv_wscale; opts->options |= OPTION_WSCALE; remaining -= TCPOLEN_WSCALE_ALIGNED; } - if (likely(sock_net(sk)->ipv4.sysctl_tcp_sack)) { + if (likely(READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_sack))) { opts->options |= OPTION_SACK_ADVERTISE; if (unlikely(!(OPTION_TS & opts->options))) remaining -= TCPOLEN_SACKPERM_ALIGNED; @@ -3407,7 +3407,7 @@ static void tcp_connect_init(struct sock *sk) * See tcp_input.c:tcp_rcv_state_process case TCP_SYN_SENT. */ tp->tcp_header_len = sizeof(struct tcphdr); - if (sock_net(sk)->ipv4.sysctl_tcp_timestamps) + if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_timestamps)) tp->tcp_header_len += TCPOLEN_TSTAMP_ALIGNED;
#ifdef CONFIG_TCP_MD5SIG @@ -3443,7 +3443,7 @@ static void tcp_connect_init(struct sock *sk) tp->advmss - (tp->rx_opt.ts_recent_stamp ? tp->tcp_header_len - sizeof(struct tcphdr) : 0), &tp->rcv_wnd, &tp->window_clamp, - sock_net(sk)->ipv4.sysctl_tcp_window_scaling, + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_window_scaling), &rcv_wscale, rcv_wnd);
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 52e65865deb6a36718a463030500f16530eaab74 ]
sysctl_tcp_early_retrans can be changed concurrently while it is being read. Thus, we need to add READ_ONCE() to its reader.
Fixes: eed530b6c676 ("tcp: early retransmit") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_output.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 5cc345c4006e..72ee1fca0501 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -2509,7 +2509,7 @@ bool tcp_schedule_loss_probe(struct sock *sk, bool advancing_rto) if (rcu_access_pointer(tp->fastopen_rsk)) return false;
- early_retrans = sock_net(sk)->ipv4.sysctl_tcp_early_retrans; + early_retrans = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_early_retrans); /* Schedule a loss probe in 2*RTT for SACK capable connections * not in loss recovery, that are either limited by cwnd or application. */
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit e7d2ef837e14a971a05f60ea08c47f3fed1a36e4 ]
sysctl_tcp_recovery can be changed concurrently while it is being read. Thus, we need to add READ_ONCE() to its readers.
Fixes: 4f41b1c58a32 ("tcp: use RACK to detect losses") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_input.c | 3 ++- net/ipv4/tcp_recovery.c | 6 ++++-- 2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 28df6c3feb3f..2f57c365ebd5 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -1950,7 +1950,8 @@ static inline void tcp_init_undo(struct tcp_sock *tp)
static bool tcp_is_rack(const struct sock *sk) { - return sock_net(sk)->ipv4.sysctl_tcp_recovery & TCP_RACK_LOSS_DETECTION; + return READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_recovery) & + TCP_RACK_LOSS_DETECTION; }
/* If we detect SACK reneging, forget all SACK information diff --git a/net/ipv4/tcp_recovery.c b/net/ipv4/tcp_recovery.c index 8757bb6cb1d9..22ec8dcc1428 100644 --- a/net/ipv4/tcp_recovery.c +++ b/net/ipv4/tcp_recovery.c @@ -33,7 +33,8 @@ static u32 tcp_rack_reo_wnd(const struct sock *sk) return 0;
if (tp->sacked_out >= tp->reordering && - !(sock_net(sk)->ipv4.sysctl_tcp_recovery & TCP_RACK_NO_DUPTHRESH)) + !(READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_recovery) & + TCP_RACK_NO_DUPTHRESH)) return 0; }
@@ -204,7 +205,8 @@ void tcp_rack_update_reo_wnd(struct sock *sk, struct rate_sample *rs) { struct tcp_sock *tp = tcp_sk(sk);
- if (sock_net(sk)->ipv4.sysctl_tcp_recovery & TCP_RACK_STATIC_REO_WND || + if ((READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_recovery) & + TCP_RACK_STATIC_REO_WND) || !rs->prior_delivered) return;
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 7c6f2a86ca590d5187a073d987e9599985fb1c7c ]
sysctl_tcp_thin_linear_timeouts can be changed concurrently while it is being read. Thus, we need to add READ_ONCE() to its reader.
Fixes: 36e31b0af587 ("net: TCP thin linear timeouts") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_timer.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c index 26da44e196ed..a0107eb02ae4 100644 --- a/net/ipv4/tcp_timer.c +++ b/net/ipv4/tcp_timer.c @@ -569,7 +569,7 @@ void tcp_retransmit_timer(struct sock *sk) * linear-timeout retransmissions into a black hole */ if (sk->sk_state == TCP_ESTABLISHED && - (tp->thin_lto || net->ipv4.sysctl_tcp_thin_linear_timeouts) && + (tp->thin_lto || READ_ONCE(net->ipv4.sysctl_tcp_thin_linear_timeouts)) && tcp_stream_is_thin(tp) && icsk->icsk_retransmits <= TCP_THIN_LINEAR_RETRIES) { icsk->icsk_backoff = 0;
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 4845b5713ab18a1bb6e31d1fbb4d600240b8b691 ]
sysctl_tcp_slow_start_after_idle can be changed concurrently while it is being read. Thus, we need to add READ_ONCE() to its readers.
Fixes: 35089bb203f4 ("[TCP]: Add tcp_slow_start_after_idle sysctl.") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- include/net/tcp.h | 4 ++-- net/ipv4/tcp_output.c | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h index eb984ec22f22..aaf1d5d5a13b 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1373,8 +1373,8 @@ static inline void tcp_slow_start_after_idle_check(struct sock *sk) struct tcp_sock *tp = tcp_sk(sk); s32 delta;
- if (!sock_net(sk)->ipv4.sysctl_tcp_slow_start_after_idle || tp->packets_out || - ca_ops->cong_control) + if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_slow_start_after_idle) || + tp->packets_out || ca_ops->cong_control) return; delta = tcp_jiffies32 - tp->lsndtime; if (delta > inet_csk(sk)->icsk_rto) diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 72ee1fca0501..5d9a1a498a18 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -1673,7 +1673,7 @@ static void tcp_cwnd_validate(struct sock *sk, bool is_cwnd_limited) if (tp->packets_out > tp->snd_cwnd_used) tp->snd_cwnd_used = tp->packets_out;
- if (sock_net(sk)->ipv4.sysctl_tcp_slow_start_after_idle && + if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_slow_start_after_idle) && (s32)(tcp_jiffies32 - tp->snd_cwnd_stamp) >= inet_csk(sk)->icsk_rto && !ca_ops->cong_control) tcp_cwnd_application_limited(sk);
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 1a63cb91f0c2fcdeced6d6edee8d1d886583d139 ]
sysctl_tcp_retrans_collapse can be changed concurrently while it is being read. Thus, we need to add READ_ONCE() to its reader.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_output.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 5d9a1a498a18..97f29ece3800 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -2871,7 +2871,7 @@ static void tcp_retrans_try_collapse(struct sock *sk, struct sk_buff *to, struct sk_buff *skb = to, *tmp; bool first = true;
- if (!sock_net(sk)->ipv4.sysctl_tcp_retrans_collapse) + if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_retrans_collapse)) return; if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_SYN) return;
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 4e08ed41cb1194009fc1a916a59ce3ed4afd77cd ]
sysctl_tcp_stdurg can be changed concurrently while it is being read. Thus, we need to add READ_ONCE() to its reader.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_input.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 2f57c365ebd5..f9884956aa13 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -5356,7 +5356,7 @@ static void tcp_check_urg(struct sock *sk, const struct tcphdr *th) struct tcp_sock *tp = tcp_sk(sk); u32 ptr = ntohs(th->urg_ptr);
- if (ptr && !sock_net(sk)->ipv4.sysctl_tcp_stdurg) + if (ptr && !READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_stdurg)) ptr--; ptr += ntohl(th->seq);
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 0b484c91911e758e53656d570de58c2ed81ec6f2 ]
sysctl_tcp_rfc1337 can be changed concurrently while it is being read. Thus, we need to add READ_ONCE() to its reader.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_minisocks.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c index 9b038cb0a43d..324f43fadb37 100644 --- a/net/ipv4/tcp_minisocks.c +++ b/net/ipv4/tcp_minisocks.c @@ -180,7 +180,7 @@ tcp_timewait_state_process(struct inet_timewait_sock *tw, struct sk_buff *skb, * Oh well... nobody has a sufficient solution to this * protocol bug yet. */ - if (twsk_net(tw)->ipv4.sysctl_tcp_rfc1337 == 0) { + if (!READ_ONCE(twsk_net(tw)->ipv4.sysctl_tcp_rfc1337)) { kill: inet_twsk_deschedule_put(tw); return TCP_TW_SUCCESS;
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit a11e5b3e7a59fde1a90b0eaeaa82320495cf8cae ]
sysctl_tcp_max_reordering can be changed concurrently while it is being read. Thus, we need to add READ_ONCE() to its readers.
Fixes: dca145ffaa8d ("tcp: allow for bigger reordering level") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_input.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index f9884956aa13..c151c4dd4ae6 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -905,7 +905,7 @@ static void tcp_check_sack_reordering(struct sock *sk, const u32 low_seq, tp->undo_marker ? tp->undo_retrans : 0); #endif tp->reordering = min_t(u32, (metric + mss - 1) / mss, - sock_net(sk)->ipv4.sysctl_tcp_max_reordering); + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_max_reordering)); }
/* This exciting event is worth to be remembered. 8) */ @@ -1886,7 +1886,7 @@ static void tcp_check_reno_reordering(struct sock *sk, const int addend) return;
tp->reordering = min_t(u32, tp->packets_out + addend, - sock_net(sk)->ipv4.sysctl_tcp_max_reordering); + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_max_reordering)); tp->reord_seen++; NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPRENOREORDER); }
From: Marc Kleine-Budde mkl@pengutronix.de
commit 4ceaa684459d414992acbefb4e4c31f2dfc50641 upstream.
In case an IRQ-based transfer times out, the bcm2835_spi_handle_err() function is called. Since commit 1513ceee70f2 ("spi: bcm2835: Drop dma_pending flag") the TX and RX DMA transfers are unconditionally canceled, leading to NULL pointer derefs if ctlr->dma_tx or ctlr->dma_rx is not set.
Fix the NULL pointer deref by checking that ctlr->dma_tx and ctlr->dma_rx are valid pointers before accessing them.
Fixes: 1513ceee70f2 ("spi: bcm2835: Drop dma_pending flag") Cc: Lukas Wunner lukas@wunner.de Signed-off-by: Marc Kleine-Budde mkl@pengutronix.de Link: https://lore.kernel.org/r/20220719072234.2782764-1-mkl@pengutronix.de Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/spi/spi-bcm2835.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-)
--- a/drivers/spi/spi-bcm2835.c
+++ b/drivers/spi/spi-bcm2835.c
@@ -1159,10 +1159,14 @@ static void bcm2835_spi_handle_err(struc
 	struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr);

 	/* if an error occurred and we have an active dma, then terminate */
-	dmaengine_terminate_sync(ctlr->dma_tx);
-	bs->tx_dma_active = false;
-	dmaengine_terminate_sync(ctlr->dma_rx);
-	bs->rx_dma_active = false;
+	if (ctlr->dma_tx) {
+		dmaengine_terminate_sync(ctlr->dma_tx);
+		bs->tx_dma_active = false;
+	}
+	if (ctlr->dma_rx) {
+		dmaengine_terminate_sync(ctlr->dma_rx);
+		bs->rx_dma_active = false;
+	}
 	bcm2835_spi_undo_prologue(bs);
/* and reset */
From: Wang Cheng wanngchenng@gmail.com
commit 018160ad314d75b1409129b2247b614a9f35894c upstream.
mpol_set_nodemask() (mm/mempolicy.c) does not set up the nodemask when pol->mode is MPOL_LOCAL. Check pol->mode before accessing pol->w.cpuset_mems_allowed in mpol_rebind_policy() (mm/mempolicy.c).
BUG: KMSAN: uninit-value in mpol_rebind_policy mm/mempolicy.c:352 [inline]
BUG: KMSAN: uninit-value in mpol_rebind_task+0x2ac/0x2c0 mm/mempolicy.c:368
 mpol_rebind_policy mm/mempolicy.c:352 [inline]
 mpol_rebind_task+0x2ac/0x2c0 mm/mempolicy.c:368
 cpuset_change_task_nodemask kernel/cgroup/cpuset.c:1711 [inline]
 cpuset_attach+0x787/0x15e0 kernel/cgroup/cpuset.c:2278
 cgroup_migrate_execute+0x1023/0x1d20 kernel/cgroup/cgroup.c:2515
 cgroup_migrate kernel/cgroup/cgroup.c:2771 [inline]
 cgroup_attach_task+0x540/0x8b0 kernel/cgroup/cgroup.c:2804
 __cgroup1_procs_write+0x5cc/0x7a0 kernel/cgroup/cgroup-v1.c:520
 cgroup1_tasks_write+0x94/0xb0 kernel/cgroup/cgroup-v1.c:539
 cgroup_file_write+0x4c2/0x9e0 kernel/cgroup/cgroup.c:3852
 kernfs_fop_write_iter+0x66a/0x9f0 fs/kernfs/file.c:296
 call_write_iter include/linux/fs.h:2162 [inline]
 new_sync_write fs/read_write.c:503 [inline]
 vfs_write+0x1318/0x2030 fs/read_write.c:590
 ksys_write+0x28b/0x510 fs/read_write.c:643
 __do_sys_write fs/read_write.c:655 [inline]
 __se_sys_write fs/read_write.c:652 [inline]
 __x64_sys_write+0xdb/0x120 fs/read_write.c:652
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x54/0xd0 arch/x86/entry/common.c:82
 entry_SYSCALL_64_after_hwframe+0x44/0xae
Uninit was created at:
 slab_post_alloc_hook mm/slab.h:524 [inline]
 slab_alloc_node mm/slub.c:3251 [inline]
 slab_alloc mm/slub.c:3259 [inline]
 kmem_cache_alloc+0x902/0x11c0 mm/slub.c:3264
 mpol_new mm/mempolicy.c:293 [inline]
 do_set_mempolicy+0x421/0xb70 mm/mempolicy.c:853
 kernel_set_mempolicy mm/mempolicy.c:1504 [inline]
 __do_sys_set_mempolicy mm/mempolicy.c:1510 [inline]
 __se_sys_set_mempolicy+0x44c/0xb60 mm/mempolicy.c:1507
 __x64_sys_set_mempolicy+0xd8/0x110 mm/mempolicy.c:1507
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x54/0xd0 arch/x86/entry/common.c:82
 entry_SYSCALL_64_after_hwframe+0x44/0xae
KMSAN: uninit-value in mpol_rebind_task (2) https://syzkaller.appspot.com/bug?id=d6eb90f952c2a5de9ea718a1b873c55cb13b59d...
This patch seems to fix the bug below as well. KMSAN: uninit-value in mpol_rebind_mm (2) https://syzkaller.appspot.com/bug?id=f2fecd0d7013f54ec4162f60743a2b28df40926...

The uninit-value is pol->w.cpuset_mems_allowed in mpol_rebind_policy(). When the syzkaller reproducer runs to the beginning of mpol_new(),
mpol_new() mm/mempolicy.c
  do_mbind() mm/mempolicy.c
    kernel_mbind() mm/mempolicy.c
`mode` is 1 (MPOL_PREFERRED), nodes_empty(*nodes) is `true`, and `flags` is 0. Then
	mode = MPOL_LOCAL;
	...
	policy->mode = mode;
	policy->flags = flags;
will be executed. So in mpol_set_nodemask(),
mpol_set_nodemask() mm/mempolicy.c
  do_mbind()
    kernel_mbind()
pol->mode is 4 (MPOL_LOCAL), so the `nodemask` in `pol` is not initialized, and it is later accessed in mpol_rebind_policy().
Link: https://lkml.kernel.org/r/20220512123428.fq3wofedp6oiotd4@ppc.localdomain Signed-off-by: Wang Cheng wanngchenng@gmail.com Reported-by: syzbot+217f792c92599518a2ab@syzkaller.appspotmail.com Tested-by: syzbot+217f792c92599518a2ab@syzkaller.appspotmail.com Cc: David Rientjes rientjes@google.com Cc: Vlastimil Babka vbabka@suse.cz Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/mempolicy.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -348,7 +348,7 @@ static void mpol_rebind_preferred(struct
  */
 static void mpol_rebind_policy(struct mempolicy *pol, const nodemask_t *newmask)
 {
-	if (!pol)
+	if (!pol || pol->mode == MPOL_LOCAL)
 		return;
 	if (!mpol_store_user_nodemask(pol) && !(pol->flags & MPOL_F_LOCAL) &&
 	    nodes_equal(pol->w.cpuset_mems_allowed, *newmask))
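As a rough, untested user-space sketch of the scenario walked through above (the cpuset migration step is elided and has to be driven externally), the uninitialized rebind is only reachable after a policy that was silently rewritten to MPOL_LOCAL:

#include <numaif.h>	/* set_mempolicy(), MPOL_PREFERRED; link with -lnuma */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/*
	 * MPOL_PREFERRED with an empty nodemask is rewritten by
	 * mpol_new() to MPOL_LOCAL, and mpol_set_nodemask() then skips
	 * initializing pol->w.cpuset_mems_allowed.
	 */
	if (set_mempolicy(MPOL_PREFERRED, NULL, 0))
		perror("set_mempolicy");

	/*
	 * Moving this task into a cpuset with a different mems_allowed
	 * (e.g. echo $$ > /sys/fs/cgroup/cpuset/.../tasks from another
	 * shell) triggers mpol_rebind_task(), which read the
	 * uninitialized nodemask before this fix.
	 */
	pause();
	return 0;
}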
From: Eric Dumazet edumazet@google.com
commit 0326195f523a549e0a9d7fd44c70b26fd7265090 upstream.
Classic BPF has a way to load bytes starting from the mac header.
Some skbs do not have a mac header, and in this case skb_mac_header() returns a pointer 65535 bytes after skb->head.

The existing range check in bpf_internal_load_pointer_neg_helper() was properly kicking in, so no illegal access was happening.

The new sanity check in skb_mac_header() is firing, however, so we need to avoid triggering it.
WARNING: CPU: 1 PID: 28990 at include/linux/skbuff.h:2785 skb_mac_header include/linux/skbuff.h:2785 [inline]
WARNING: CPU: 1 PID: 28990 at include/linux/skbuff.h:2785 bpf_internal_load_pointer_neg_helper+0x1b1/0x1c0 kernel/bpf/core.c:74
Modules linked in:
CPU: 1 PID: 28990 Comm: syz-executor.0 Not tainted 5.19.0-rc4-syzkaller-00865-g4874fb9484be #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/29/2022
RIP: 0010:skb_mac_header include/linux/skbuff.h:2785 [inline]
RIP: 0010:bpf_internal_load_pointer_neg_helper+0x1b1/0x1c0 kernel/bpf/core.c:74
Code: ff ff 45 31 f6 e9 5a ff ff ff e8 aa 27 40 00 e9 3b ff ff ff e8 90 27 40 00 e9 df fe ff ff e8 86 27 40 00 eb 9e e8 2f 2c f3 ff <0f> 0b eb b1 e8 96 27 40 00 e9 79 fe ff ff 90 41 57 41 56 41 55 41
RSP: 0018:ffffc9000309f668 EFLAGS: 00010216
RAX: 0000000000000118 RBX: ffffffffffeff00c RCX: ffffc9000e417000
RDX: 0000000000040000 RSI: ffffffff81873f21 RDI: 0000000000000003
RBP: ffff8880842878c0 R08: 0000000000000003 R09: 000000000000ffff
R10: 000000000000ffff R11: 0000000000000001 R12: 0000000000000004
R13: ffff88803ac56c00 R14: 000000000000ffff R15: dffffc0000000000
FS:  00007f5c88a16700(0000) GS:ffff8880b9b00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fdaa9f6c058 CR3: 000000003a82c000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 ____bpf_skb_load_helper_32 net/core/filter.c:276 [inline]
 bpf_skb_load_helper_32+0x191/0x220 net/core/filter.c:264
Fixes: f9aefd6b2aa3 ("net: warn if mac header was not set") Reported-by: syzbot syzkaller@googlegroups.com Signed-off-by: Eric Dumazet edumazet@google.com Signed-off-by: Daniel Borkmann daniel@iogearbox.net Link: https://lore.kernel.org/bpf/20220707123900.945305-1-edumazet@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/bpf/core.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -64,11 +64,13 @@ void *bpf_internal_load_pointer_neg_help
 {
 	u8 *ptr = NULL;

-	if (k >= SKF_NET_OFF)
+	if (k >= SKF_NET_OFF) {
 		ptr = skb_network_header(skb) + k - SKF_NET_OFF;
-	else if (k >= SKF_LL_OFF)
+	} else if (k >= SKF_LL_OFF) {
+		if (unlikely(!skb_mac_header_was_set(skb)))
+			return NULL;
 		ptr = skb_mac_header(skb) + k - SKF_LL_OFF;
-
+	}
 	if (ptr >= skb->head && ptr + size <= skb_tail_pointer(skb))
 		return ptr;
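For context, this helper services classic BPF's special negative offsets. Below is a sketch of a user-space filter that loads from the link-layer (mac) header via SKF_LL_OFF, i.e. the path the fix now guards; the offset 12 and the accept-all return are arbitrary choices for illustration:

#include <linux/filter.h>	/* struct sock_filter, BPF_*, SKF_LL_OFF */
#include <sys/socket.h>
#include <stdio.h>

#ifndef SO_ATTACH_FILTER
#define SO_ATTACH_FILTER 26	/* from <asm/socket.h> on Linux */
#endif

int main(void)
{
	/*
	 * Classic BPF: load the byte at mac-header offset 12.
	 * SKF_LL_OFF is a negative magic base that the kernel resolves
	 * in bpf_internal_load_pointer_neg_helper().
	 */
	struct sock_filter insns[] = {
		{ BPF_LD | BPF_B | BPF_ABS, 0, 0, SKF_LL_OFF + 12 },
		{ BPF_RET | BPF_K,          0, 0, 0xffff },	/* accept */
	};
	struct sock_fprog prog = {
		.len = sizeof(insns) / sizeof(insns[0]),
		.filter = insns,
	};
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog,
		       sizeof(prog)))
		perror("SO_ATTACH_FILTER");
	return 0;
}

On an skb without a mac header, the patched helper now returns NULL and the load fails safely instead of computing a pointer 65535 bytes past skb->head.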
From: Alexander Aring aahringo@redhat.com
[ Upstream commit ba58995909b5098ca4003af65b0ccd5a8d13dd25 ]
This patch unsets ls_remove_len and ls_remove_name if the allocation of a remove message fails. In this case we never send the remove message out, yet the per-lockspace ls_remove_len and ls_remove_name variables are left set for a pending remove. Unsetting those variables indicates to possible waiters in wait_pending_remove() that no remove is pending at this moment.
Cc: stable@vger.kernel.org Signed-off-by: Alexander Aring aahringo@redhat.com Signed-off-by: David Teigland teigland@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/dlm/lock.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
index 4ae8becdb51d..9165bf56c6e8 100644
--- a/fs/dlm/lock.c
+++ b/fs/dlm/lock.c
@@ -4067,13 +4067,14 @@ static void send_repeat_remove(struct dlm_ls *ls, char *ms_name, int len)
 	rv = _create_message(ls, sizeof(struct dlm_message) + len,
 			     dir_nodeid, DLM_MSG_REMOVE, &ms, &mh);
 	if (rv)
-		return;
+		goto out;

 	memcpy(ms->m_extra, name, len);
 	ms->m_hash = hash;

 	send_message(mh, ms);

+out:
 	spin_lock(&ls->ls_remove_spin);
 	ls->ls_remove_len = 0;
 	memset(ls->ls_remove_name, 0, DLM_RESNAME_MAXLEN);
From: GUO Zihua guozihua@huawei.com
[ Upstream commit 891163adf180bc369b2f11c9dfce6d2758d2a5bd ]
The original 'ima' measurement list template contains a hash, defined as 20 bytes, and a null terminated pathname, limited to 255 characters. Other measurement list templates permit both larger hashes and longer pathnames. When the "ima" template is configured as the default, a new measurement list template (ima_template=) must be specified before specifying a larger hash algorithm (ima_hash=) on the boot command line.
To avoid this boot command line ordering issue, remove the legacy "ima" template configuration option, allowing it to still be specified on the boot command line.
The root cause of this issue is that during the processing of ima_hash, we would try to check whether the hash algorithm is compatible with the template. If the template is not set at the moment we do the check, we check the algorithm against the configured default template. If the default template is "ima", then we reject any hash algorithm other than sha1 and md5.
For example, suppose the compiled default template is "ima" and the default algorithm is sha1 (the current default), and the cmdline contains "ima_hash=sha256 ima_template=ima-ng". The expected behavior is that IMA starts with ima-ng as the template and sha256 as the hash algorithm. However, when "ima_hash=" is processed, "ima_template=" has not been processed yet, so hash_setup checks the requested algorithm against the compiled-in default template, "ima", and rejects sha256. The hash algorithm actually used will therefore be sha1.
With template "ima" removed from the configured default, we ensure that the default tempalte would at least be "ima-ng" which allows for basically any hash algorithm.
This change would not break the algorithm compatibility checks for IMA.
Fixes: 4286587dccd43 ("ima: add Kconfig default measurement list template") Signed-off-by: GUO Zihua guozihua@huawei.com Cc: Stable@vger.kernel.org Signed-off-by: Mimi Zohar zohar@linux.ibm.com Signed-off-by: Sasha Levin sashal@kernel.org --- security/integrity/ima/Kconfig | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/security/integrity/ima/Kconfig b/security/integrity/ima/Kconfig index 748f3ee27b23..44b3315f3235 100644 --- a/security/integrity/ima/Kconfig +++ b/security/integrity/ima/Kconfig @@ -69,10 +69,9 @@ choice hash, defined as 20 bytes, and a null terminated pathname, limited to 255 characters. The 'ima-ng' measurement list template permits both larger hash digests and longer - pathnames. + pathnames. The configured default template can be replaced + by specifying "ima_template=" on the boot command line.
- config IMA_TEMPLATE - bool "ima" config IMA_NG_TEMPLATE bool "ima-ng (default)" config IMA_SIG_TEMPLATE @@ -82,7 +81,6 @@ endchoice config IMA_DEFAULT_TEMPLATE string depends on IMA - default "ima" if IMA_TEMPLATE default "ima-ng" if IMA_NG_TEMPLATE default "ima-sig" if IMA_SIG_TEMPLATE
@@ -102,15 +100,15 @@ choice
config IMA_DEFAULT_HASH_SHA256 bool "SHA256" - depends on CRYPTO_SHA256=y && !IMA_TEMPLATE + depends on CRYPTO_SHA256=y
config IMA_DEFAULT_HASH_SHA512 bool "SHA512" - depends on CRYPTO_SHA512=y && !IMA_TEMPLATE + depends on CRYPTO_SHA512=y
config IMA_DEFAULT_HASH_WP512 bool "WP512" - depends on CRYPTO_WP512=y && !IMA_TEMPLATE + depends on CRYPTO_WP512=y endchoice
config IMA_DEFAULT_HASH
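A toy model of the ordering hazard described in the commit message above, in plain user-space C with hypothetical handler names (the kernel's real handler for "ima_hash=" is hash_setup(); the rest is simplified):

#include <stdio.h>
#include <string.h>

/* Boot parameters are processed in command-line order, so "ima_hash="
 * is validated against whichever template is current at that moment --
 * the compiled-in default if "ima_template=" has not been seen yet. */
static const char *template = "ima";	/* compiled-in default, pre-patch */
static const char *hash = "sha1";

static void hash_setup(const char *alg)
{
	/* The legacy "ima" template only carries sha1/md5 digests. */
	if (!strcmp(template, "ima") &&
	    strcmp(alg, "sha1") && strcmp(alg, "md5")) {
		printf("rejected %s against template %s\n", alg, template);
		return;
	}
	hash = alg;
}

static void template_setup(const char *name)
{
	template = name;
}

int main(void)
{
	/* cmdline: "ima_hash=sha256 ima_template=ima-ng" */
	hash_setup("sha256");		/* rejected: template is still "ima" */
	template_setup("ima-ng");	/* too late for the hash check */
	printf("booted with template=%s hash=%s\n", template, hash);
	return 0;
}

With "ima" gone from the configurable defaults, the first branch can no longer fire, whatever the parameter order.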
From: Will Deacon will@kernel.org
[ Upstream commit 23e6b169c9917fbd77534f8c5f378cb073f548bd ]
The REFCOUNT_FULL implementation uses a different saturation point than the x86 implementation, which means that the shared refcount code in lib/refcount.c (e.g. refcount_dec_not_one()) needs to be aware of the difference.
Rather than duplicating the definitions from the lkdtm driver, move them into <linux/refcount.h> and update all references accordingly.
Signed-off-by: Will Deacon will@kernel.org Reviewed-by: Ard Biesheuvel ardb@kernel.org Reviewed-by: Kees Cook keescook@chromium.org Tested-by: Hanjun Guo guohanjun@huawei.com Cc: Ard Biesheuvel ard.biesheuvel@linaro.org Cc: Elena Reshetova elena.reshetova@intel.com Cc: Linus Torvalds torvalds@linux-foundation.org Cc: Peter Zijlstra peterz@infradead.org Cc: Thomas Gleixner tglx@linutronix.de Link: https://lkml.kernel.org/r/20191121115902.2551-2-will@kernel.org Signed-off-by: Ingo Molnar mingo@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/misc/lkdtm/refcount.c | 8 -------- include/linux/refcount.h | 10 +++++++++- lib/refcount.c | 37 +++++++++++++++++++---------------- 3 files changed, 29 insertions(+), 26 deletions(-)
diff --git a/drivers/misc/lkdtm/refcount.c b/drivers/misc/lkdtm/refcount.c index 0a146b32da13..abf3b7c1f686 100644 --- a/drivers/misc/lkdtm/refcount.c +++ b/drivers/misc/lkdtm/refcount.c @@ -6,14 +6,6 @@ #include "lkdtm.h" #include <linux/refcount.h>
-#ifdef CONFIG_REFCOUNT_FULL -#define REFCOUNT_MAX (UINT_MAX - 1) -#define REFCOUNT_SATURATED UINT_MAX -#else -#define REFCOUNT_MAX INT_MAX -#define REFCOUNT_SATURATED (INT_MIN / 2) -#endif - static void overflow_check(refcount_t *ref) { switch (refcount_read(ref)) { diff --git a/include/linux/refcount.h b/include/linux/refcount.h index e28cce21bad6..79f62e8d2256 100644 --- a/include/linux/refcount.h +++ b/include/linux/refcount.h @@ -4,6 +4,7 @@
#include <linux/atomic.h> #include <linux/compiler.h> +#include <linux/limits.h> #include <linux/spinlock_types.h>
struct mutex; @@ -12,7 +13,7 @@ struct mutex; * struct refcount_t - variant of atomic_t specialized for reference counts * @refs: atomic_t counter field * - * The counter saturates at UINT_MAX and will not move once + * The counter saturates at REFCOUNT_SATURATED and will not move once * there. This avoids wrapping the counter and causing 'spurious' * use-after-free bugs. */ @@ -56,6 +57,9 @@ extern void refcount_dec_checked(refcount_t *r);
#ifdef CONFIG_REFCOUNT_FULL
+#define REFCOUNT_MAX (UINT_MAX - 1) +#define REFCOUNT_SATURATED UINT_MAX + #define refcount_add_not_zero refcount_add_not_zero_checked #define refcount_add refcount_add_checked
@@ -68,6 +72,10 @@ extern void refcount_dec_checked(refcount_t *r); #define refcount_dec refcount_dec_checked
#else + +#define REFCOUNT_MAX INT_MAX +#define REFCOUNT_SATURATED (INT_MIN / 2) + # ifdef CONFIG_ARCH_HAS_REFCOUNT # include <asm/refcount.h> # else diff --git a/lib/refcount.c b/lib/refcount.c index 6e904af0fb3e..48b78a423d7d 100644 --- a/lib/refcount.c +++ b/lib/refcount.c @@ -5,8 +5,8 @@ * The interface matches the atomic_t interface (to aid in porting) but only * provides the few functions one should use for reference counting. * - * It differs in that the counter saturates at UINT_MAX and will not move once - * there. This avoids wrapping the counter and causing 'spurious' + * It differs in that the counter saturates at REFCOUNT_SATURATED and will not + * move once there. This avoids wrapping the counter and causing 'spurious' * use-after-free issues. * * Memory ordering rules are slightly relaxed wrt regular atomic_t functions @@ -48,7 +48,7 @@ * @i: the value to add to the refcount * @r: the refcount * - * Will saturate at UINT_MAX and WARN. + * Will saturate at REFCOUNT_SATURATED and WARN. * * Provides no memory ordering, it is assumed the caller has guaranteed the * object memory to be stable (RCU, etc.). It does provide a control dependency @@ -69,16 +69,17 @@ bool refcount_add_not_zero_checked(unsigned int i, refcount_t *r) if (!val) return false;
- if (unlikely(val == UINT_MAX)) + if (unlikely(val == REFCOUNT_SATURATED)) return true;
new = val + i; if (new < val) - new = UINT_MAX; + new = REFCOUNT_SATURATED;
} while (!atomic_try_cmpxchg_relaxed(&r->refs, &val, new));
- WARN_ONCE(new == UINT_MAX, "refcount_t: saturated; leaking memory.\n"); + WARN_ONCE(new == REFCOUNT_SATURATED, + "refcount_t: saturated; leaking memory.\n");
return true; } @@ -89,7 +90,7 @@ EXPORT_SYMBOL(refcount_add_not_zero_checked); * @i: the value to add to the refcount * @r: the refcount * - * Similar to atomic_add(), but will saturate at UINT_MAX and WARN. + * Similar to atomic_add(), but will saturate at REFCOUNT_SATURATED and WARN. * * Provides no memory ordering, it is assumed the caller has guaranteed the * object memory to be stable (RCU, etc.). It does provide a control dependency @@ -110,7 +111,8 @@ EXPORT_SYMBOL(refcount_add_checked); * refcount_inc_not_zero_checked - increment a refcount unless it is 0 * @r: the refcount to increment * - * Similar to atomic_inc_not_zero(), but will saturate at UINT_MAX and WARN. + * Similar to atomic_inc_not_zero(), but will saturate at REFCOUNT_SATURATED + * and WARN. * * Provides no memory ordering, it is assumed the caller has guaranteed the * object memory to be stable (RCU, etc.). It does provide a control dependency @@ -133,7 +135,8 @@ bool refcount_inc_not_zero_checked(refcount_t *r)
} while (!atomic_try_cmpxchg_relaxed(&r->refs, &val, new));
- WARN_ONCE(new == UINT_MAX, "refcount_t: saturated; leaking memory.\n"); + WARN_ONCE(new == REFCOUNT_SATURATED, + "refcount_t: saturated; leaking memory.\n");
return true; } @@ -143,7 +146,7 @@ EXPORT_SYMBOL(refcount_inc_not_zero_checked); * refcount_inc_checked - increment a refcount * @r: the refcount to increment * - * Similar to atomic_inc(), but will saturate at UINT_MAX and WARN. + * Similar to atomic_inc(), but will saturate at REFCOUNT_SATURATED and WARN. * * Provides no memory ordering, it is assumed the caller already has a * reference on the object. @@ -164,7 +167,7 @@ EXPORT_SYMBOL(refcount_inc_checked); * * Similar to atomic_dec_and_test(), but it will WARN, return false and * ultimately leak on underflow and will fail to decrement when saturated - * at UINT_MAX. + * at REFCOUNT_SATURATED. * * Provides release memory ordering, such that prior loads and stores are done * before, and provides an acquire ordering on success such that free() @@ -182,7 +185,7 @@ bool refcount_sub_and_test_checked(unsigned int i, refcount_t *r) unsigned int new, val = atomic_read(&r->refs);
do { - if (unlikely(val == UINT_MAX)) + if (unlikely(val == REFCOUNT_SATURATED)) return false;
new = val - i; @@ -207,7 +210,7 @@ EXPORT_SYMBOL(refcount_sub_and_test_checked); * @r: the refcount * * Similar to atomic_dec_and_test(), it will WARN on underflow and fail to - * decrement when saturated at UINT_MAX. + * decrement when saturated at REFCOUNT_SATURATED. * * Provides release memory ordering, such that prior loads and stores are done * before, and provides an acquire ordering on success such that free() @@ -226,7 +229,7 @@ EXPORT_SYMBOL(refcount_dec_and_test_checked); * @r: the refcount * * Similar to atomic_dec(), it will WARN on underflow and fail to decrement - * when saturated at UINT_MAX. + * when saturated at REFCOUNT_SATURATED. * * Provides release memory ordering, such that prior loads and stores are done * before. @@ -277,7 +280,7 @@ bool refcount_dec_not_one(refcount_t *r) unsigned int new, val = atomic_read(&r->refs);
do { - if (unlikely(val == UINT_MAX)) + if (unlikely(val == REFCOUNT_SATURATED)) return true;
if (val == 1) @@ -302,7 +305,7 @@ EXPORT_SYMBOL(refcount_dec_not_one); * @lock: the mutex to be locked * * Similar to atomic_dec_and_mutex_lock(), it will WARN on underflow and fail - * to decrement when saturated at UINT_MAX. + * to decrement when saturated at REFCOUNT_SATURATED. * * Provides release memory ordering, such that prior loads and stores are done * before, and provides a control dependency such that free() must come after. @@ -333,7 +336,7 @@ EXPORT_SYMBOL(refcount_dec_and_mutex_lock); * @lock: the spinlock to be locked * * Similar to atomic_dec_and_lock(), it will WARN on underflow and fail to - * decrement when saturated at UINT_MAX. + * decrement when saturated at REFCOUNT_SATURATED. * * Provides release memory ordering, such that prior loads and stores are done * before, and provides a control dependency such that free() must come after.
From: Will Deacon will@kernel.org
[ Upstream commit 97a1420adf0cdf0cf6f41bab0b2acf658c96b94b ]
In preparation for changing the saturation point of REFCOUNT_FULL to INT_MIN/2, change the type of integer operands passed into the API from 'unsigned int' to 'int' so that we can avoid casting during comparisons when we don't want to fall foul of C integral conversion rules for signed and unsigned types.
Since the kernel is compiled with '-fno-strict-overflow', we don't need to worry about the UB introduced by signed overflow here. Furthermore, we're already making heavy use of the atomic_t API, which operates exclusively on signed types.
Signed-off-by: Will Deacon will@kernel.org Reviewed-by: Ard Biesheuvel ardb@kernel.org Reviewed-by: Kees Cook keescook@chromium.org Tested-by: Hanjun Guo guohanjun@huawei.com Cc: Ard Biesheuvel ard.biesheuvel@linaro.org Cc: Elena Reshetova elena.reshetova@intel.com Cc: Linus Torvalds torvalds@linux-foundation.org Cc: Peter Zijlstra peterz@infradead.org Cc: Thomas Gleixner tglx@linutronix.de Link: https://lkml.kernel.org/r/20191121115902.2551-3-will@kernel.org Signed-off-by: Ingo Molnar mingo@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/refcount.h | 14 +++++++------- lib/refcount.c | 6 +++--- 2 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/include/linux/refcount.h b/include/linux/refcount.h index 79f62e8d2256..89066a1471dd 100644 --- a/include/linux/refcount.h +++ b/include/linux/refcount.h @@ -28,7 +28,7 @@ typedef struct refcount_struct { * @r: the refcount * @n: value to which the refcount will be set */ -static inline void refcount_set(refcount_t *r, unsigned int n) +static inline void refcount_set(refcount_t *r, int n) { atomic_set(&r->refs, n); } @@ -44,13 +44,13 @@ static inline unsigned int refcount_read(const refcount_t *r) return atomic_read(&r->refs); }
-extern __must_check bool refcount_add_not_zero_checked(unsigned int i, refcount_t *r); -extern void refcount_add_checked(unsigned int i, refcount_t *r); +extern __must_check bool refcount_add_not_zero_checked(int i, refcount_t *r); +extern void refcount_add_checked(int i, refcount_t *r);
extern __must_check bool refcount_inc_not_zero_checked(refcount_t *r); extern void refcount_inc_checked(refcount_t *r);
-extern __must_check bool refcount_sub_and_test_checked(unsigned int i, refcount_t *r); +extern __must_check bool refcount_sub_and_test_checked(int i, refcount_t *r);
extern __must_check bool refcount_dec_and_test_checked(refcount_t *r); extern void refcount_dec_checked(refcount_t *r); @@ -79,12 +79,12 @@ extern void refcount_dec_checked(refcount_t *r); # ifdef CONFIG_ARCH_HAS_REFCOUNT # include <asm/refcount.h> # else -static inline __must_check bool refcount_add_not_zero(unsigned int i, refcount_t *r) +static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r) { return atomic_add_unless(&r->refs, i, 0); }
-static inline void refcount_add(unsigned int i, refcount_t *r) +static inline void refcount_add(int i, refcount_t *r) { atomic_add(i, &r->refs); } @@ -99,7 +99,7 @@ static inline void refcount_inc(refcount_t *r) atomic_inc(&r->refs); }
-static inline __must_check bool refcount_sub_and_test(unsigned int i, refcount_t *r) +static inline __must_check bool refcount_sub_and_test(int i, refcount_t *r) { return atomic_sub_and_test(i, &r->refs); } diff --git a/lib/refcount.c b/lib/refcount.c index 48b78a423d7d..719b0bc42ab1 100644 --- a/lib/refcount.c +++ b/lib/refcount.c @@ -61,7 +61,7 @@ * * Return: false if the passed refcount is 0, true otherwise */ -bool refcount_add_not_zero_checked(unsigned int i, refcount_t *r) +bool refcount_add_not_zero_checked(int i, refcount_t *r) { unsigned int new, val = atomic_read(&r->refs);
@@ -101,7 +101,7 @@ EXPORT_SYMBOL(refcount_add_not_zero_checked); * cases, refcount_inc(), or one of its variants, should instead be used to * increment a reference count. */ -void refcount_add_checked(unsigned int i, refcount_t *r) +void refcount_add_checked(int i, refcount_t *r) { WARN_ONCE(!refcount_add_not_zero_checked(i, r), "refcount_t: addition on 0; use-after-free.\n"); } @@ -180,7 +180,7 @@ EXPORT_SYMBOL(refcount_inc_checked); * * Return: true if the resulting refcount is 0, false otherwise */ -bool refcount_sub_and_test_checked(unsigned int i, refcount_t *r) +bool refcount_sub_and_test_checked(int i, refcount_t *r) { unsigned int new, val = atomic_read(&r->refs);
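The conversion pitfall being avoided is easy to demonstrate outside the kernel; a standalone illustration (plain C, not kernel code):

#include <stdio.h>

int main(void)
{
	unsigned int count = 1;
	int i = 2;
	int scount = 1;

	/*
	 * In 'count - i' the int operand is converted to unsigned int,
	 * so 1 - 2 wraps to UINT_MAX instead of -1 and the comparison
	 * below is always false (compilers warn about exactly this).
	 */
	if (count - i < 0)
		printf("underflow detected (unsigned)\n");	/* never reached */

	/* With signed operands the same check behaves as intended. */
	if (scount - i < 0)
		printf("underflow detected (signed)\n");	/* prints */
	return 0;
}

As the commit message notes, the kernel builds with -fno-strict-overflow, so leaning on signed wrap in these comparisons is not an undefined-behaviour hazard there.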
From: Will Deacon will@kernel.org
[ Upstream commit 7221762c48c6bbbcc6cc51d8b803c06930215e34 ]
The full-fat refcount implementation is exposed via a set of functions suffixed with "_checked()", the idea being that code can choose to use the more expensive, yet more secure implementation on a case-by-case basis.
In reality, this hasn't happened, so with a grand total of zero users, let's remove the checked variants for now by simply dropping the suffix and predicating the out-of-line functions on CONFIG_REFCOUNT_FULL=y.
Signed-off-by: Will Deacon will@kernel.org Reviewed-by: Ard Biesheuvel ardb@kernel.org Reviewed-by: Kees Cook keescook@chromium.org Tested-by: Hanjun Guo guohanjun@huawei.com Cc: Ard Biesheuvel ard.biesheuvel@linaro.org Cc: Elena Reshetova elena.reshetova@intel.com Cc: Linus Torvalds torvalds@linux-foundation.org Cc: Peter Zijlstra peterz@infradead.org Cc: Thomas Gleixner tglx@linutronix.de Link: https://lkml.kernel.org/r/20191121115902.2551-4-will@kernel.org Signed-off-by: Ingo Molnar mingo@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/refcount.h | 25 ++++++------------- lib/refcount.c | 54 +++++++++++++++++++++------------------- 2 files changed, 36 insertions(+), 43 deletions(-)
diff --git a/include/linux/refcount.h b/include/linux/refcount.h index 89066a1471dd..edd505d1a23b 100644 --- a/include/linux/refcount.h +++ b/include/linux/refcount.h @@ -44,32 +44,21 @@ static inline unsigned int refcount_read(const refcount_t *r) return atomic_read(&r->refs); }
-extern __must_check bool refcount_add_not_zero_checked(int i, refcount_t *r); -extern void refcount_add_checked(int i, refcount_t *r); - -extern __must_check bool refcount_inc_not_zero_checked(refcount_t *r); -extern void refcount_inc_checked(refcount_t *r); - -extern __must_check bool refcount_sub_and_test_checked(int i, refcount_t *r); - -extern __must_check bool refcount_dec_and_test_checked(refcount_t *r); -extern void refcount_dec_checked(refcount_t *r); - #ifdef CONFIG_REFCOUNT_FULL
#define REFCOUNT_MAX (UINT_MAX - 1) #define REFCOUNT_SATURATED UINT_MAX
-#define refcount_add_not_zero refcount_add_not_zero_checked -#define refcount_add refcount_add_checked +extern __must_check bool refcount_add_not_zero(int i, refcount_t *r); +extern void refcount_add(int i, refcount_t *r);
-#define refcount_inc_not_zero refcount_inc_not_zero_checked -#define refcount_inc refcount_inc_checked +extern __must_check bool refcount_inc_not_zero(refcount_t *r); +extern void refcount_inc(refcount_t *r);
-#define refcount_sub_and_test refcount_sub_and_test_checked +extern __must_check bool refcount_sub_and_test(int i, refcount_t *r);
-#define refcount_dec_and_test refcount_dec_and_test_checked -#define refcount_dec refcount_dec_checked +extern __must_check bool refcount_dec_and_test(refcount_t *r); +extern void refcount_dec(refcount_t *r);
#else
diff --git a/lib/refcount.c b/lib/refcount.c index 719b0bc42ab1..a2f670998cee 100644 --- a/lib/refcount.c +++ b/lib/refcount.c @@ -43,8 +43,10 @@ #include <linux/spinlock.h> #include <linux/bug.h>
+#ifdef CONFIG_REFCOUNT_FULL + /** - * refcount_add_not_zero_checked - add a value to a refcount unless it is 0 + * refcount_add_not_zero - add a value to a refcount unless it is 0 * @i: the value to add to the refcount * @r: the refcount * @@ -61,7 +63,7 @@ * * Return: false if the passed refcount is 0, true otherwise */ -bool refcount_add_not_zero_checked(int i, refcount_t *r) +bool refcount_add_not_zero(int i, refcount_t *r) { unsigned int new, val = atomic_read(&r->refs);
@@ -83,10 +85,10 @@ bool refcount_add_not_zero_checked(int i, refcount_t *r)
return true; } -EXPORT_SYMBOL(refcount_add_not_zero_checked); +EXPORT_SYMBOL(refcount_add_not_zero);
/** - * refcount_add_checked - add a value to a refcount + * refcount_add - add a value to a refcount * @i: the value to add to the refcount * @r: the refcount * @@ -101,14 +103,14 @@ EXPORT_SYMBOL(refcount_add_not_zero_checked); * cases, refcount_inc(), or one of its variants, should instead be used to * increment a reference count. */ -void refcount_add_checked(int i, refcount_t *r) +void refcount_add(int i, refcount_t *r) { - WARN_ONCE(!refcount_add_not_zero_checked(i, r), "refcount_t: addition on 0; use-after-free.\n"); + WARN_ONCE(!refcount_add_not_zero(i, r), "refcount_t: addition on 0; use-after-free.\n"); } -EXPORT_SYMBOL(refcount_add_checked); +EXPORT_SYMBOL(refcount_add);
/** - * refcount_inc_not_zero_checked - increment a refcount unless it is 0 + * refcount_inc_not_zero - increment a refcount unless it is 0 * @r: the refcount to increment * * Similar to atomic_inc_not_zero(), but will saturate at REFCOUNT_SATURATED @@ -120,7 +122,7 @@ EXPORT_SYMBOL(refcount_add_checked); * * Return: true if the increment was successful, false otherwise */ -bool refcount_inc_not_zero_checked(refcount_t *r) +bool refcount_inc_not_zero(refcount_t *r) { unsigned int new, val = atomic_read(&r->refs);
@@ -140,10 +142,10 @@ bool refcount_inc_not_zero_checked(refcount_t *r)
return true; } -EXPORT_SYMBOL(refcount_inc_not_zero_checked); +EXPORT_SYMBOL(refcount_inc_not_zero);
/** - * refcount_inc_checked - increment a refcount + * refcount_inc - increment a refcount * @r: the refcount to increment * * Similar to atomic_inc(), but will saturate at REFCOUNT_SATURATED and WARN. @@ -154,14 +156,14 @@ EXPORT_SYMBOL(refcount_inc_not_zero_checked); * Will WARN if the refcount is 0, as this represents a possible use-after-free * condition. */ -void refcount_inc_checked(refcount_t *r) +void refcount_inc(refcount_t *r) { - WARN_ONCE(!refcount_inc_not_zero_checked(r), "refcount_t: increment on 0; use-after-free.\n"); + WARN_ONCE(!refcount_inc_not_zero(r), "refcount_t: increment on 0; use-after-free.\n"); } -EXPORT_SYMBOL(refcount_inc_checked); +EXPORT_SYMBOL(refcount_inc);
/** - * refcount_sub_and_test_checked - subtract from a refcount and test if it is 0 + * refcount_sub_and_test - subtract from a refcount and test if it is 0 * @i: amount to subtract from the refcount * @r: the refcount * @@ -180,7 +182,7 @@ EXPORT_SYMBOL(refcount_inc_checked); * * Return: true if the resulting refcount is 0, false otherwise */ -bool refcount_sub_and_test_checked(int i, refcount_t *r) +bool refcount_sub_and_test(int i, refcount_t *r) { unsigned int new, val = atomic_read(&r->refs);
@@ -203,10 +205,10 @@ bool refcount_sub_and_test_checked(int i, refcount_t *r) return false;
} -EXPORT_SYMBOL(refcount_sub_and_test_checked); +EXPORT_SYMBOL(refcount_sub_and_test);
/** - * refcount_dec_and_test_checked - decrement a refcount and test if it is 0 + * refcount_dec_and_test - decrement a refcount and test if it is 0 * @r: the refcount * * Similar to atomic_dec_and_test(), it will WARN on underflow and fail to @@ -218,14 +220,14 @@ EXPORT_SYMBOL(refcount_sub_and_test_checked); * * Return: true if the resulting refcount is 0, false otherwise */ -bool refcount_dec_and_test_checked(refcount_t *r) +bool refcount_dec_and_test(refcount_t *r) { - return refcount_sub_and_test_checked(1, r); + return refcount_sub_and_test(1, r); } -EXPORT_SYMBOL(refcount_dec_and_test_checked); +EXPORT_SYMBOL(refcount_dec_and_test);
/** - * refcount_dec_checked - decrement a refcount + * refcount_dec - decrement a refcount * @r: the refcount * * Similar to atomic_dec(), it will WARN on underflow and fail to decrement @@ -234,11 +236,13 @@ EXPORT_SYMBOL(refcount_dec_and_test_checked); * Provides release memory ordering, such that prior loads and stores are done * before. */ -void refcount_dec_checked(refcount_t *r) +void refcount_dec(refcount_t *r) { - WARN_ONCE(refcount_dec_and_test_checked(r), "refcount_t: decrement hit 0; leaking memory.\n"); + WARN_ONCE(refcount_dec_and_test(r), "refcount_t: decrement hit 0; leaking memory.\n"); } -EXPORT_SYMBOL(refcount_dec_checked); +EXPORT_SYMBOL(refcount_dec); + +#endif /* CONFIG_REFCOUNT_FULL */
/** * refcount_dec_if_one - decrement a refcount if it is 1
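To ground the API whose variants are being consolidated, here is a minimal hypothetical user of the refcount_t interface in the usual kernel get/put shape ("foo" and its helpers are invented for this sketch):

#include <linux/refcount.h>
#include <linux/slab.h>

struct foo {
	refcount_t ref;
	int payload;
};

static struct foo *foo_alloc(void)
{
	struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

	if (f)
		refcount_set(&f->ref, 1);	/* caller owns one reference */
	return f;
}

/* Safe against a concurrent final put: fails rather than resurrecting. */
static struct foo *foo_get(struct foo *f)
{
	return refcount_inc_not_zero(&f->ref) ? f : NULL;
}

static void foo_put(struct foo *f)
{
	/*
	 * The release ordering of the decrement pairs with the acquire
	 * on the 1->0 transition, so prior accesses to *f cannot be
	 * reordered past the free.
	 */
	if (refcount_dec_and_test(&f->ref))
		kfree(f);
}

After this patch these names resolve, under CONFIG_REFCOUNT_FULL=y, to the out-of-line definitions in lib/refcount.c; the next patch in the series moves them inline.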
From: Will Deacon will@kernel.org
[ Upstream commit 77e9971c79c29542ab7dd4140f9343bf2ff36158 ]
In an effort to improve performance of the REFCOUNT_FULL implementation, move the bulk of its functions into linux/refcount.h. This allows them to be inlined in the same way as if they had been provided via CONFIG_ARCH_HAS_REFCOUNT.
Signed-off-by: Will Deacon will@kernel.org Reviewed-by: Ard Biesheuvel ardb@kernel.org Reviewed-by: Kees Cook keescook@chromium.org Tested-by: Hanjun Guo guohanjun@huawei.com Cc: Ard Biesheuvel ard.biesheuvel@linaro.org Cc: Elena Reshetova elena.reshetova@intel.com Cc: Linus Torvalds torvalds@linux-foundation.org Cc: Peter Zijlstra peterz@infradead.org Cc: Thomas Gleixner tglx@linutronix.de Link: https://lkml.kernel.org/r/20191121115902.2551-5-will@kernel.org Signed-off-by: Ingo Molnar mingo@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/refcount.h | 237 ++++++++++++++++++++++++++++++++++++-- lib/refcount.c | 238 +-------------------------------------- 2 files changed, 229 insertions(+), 246 deletions(-)
diff --git a/include/linux/refcount.h b/include/linux/refcount.h index edd505d1a23b..e719b5b1220e 100644 --- a/include/linux/refcount.h +++ b/include/linux/refcount.h @@ -45,22 +45,241 @@ static inline unsigned int refcount_read(const refcount_t *r) }
#ifdef CONFIG_REFCOUNT_FULL +#include <linux/bug.h>
#define REFCOUNT_MAX (UINT_MAX - 1) #define REFCOUNT_SATURATED UINT_MAX
-extern __must_check bool refcount_add_not_zero(int i, refcount_t *r); -extern void refcount_add(int i, refcount_t *r); +/* + * Variant of atomic_t specialized for reference counts. + * + * The interface matches the atomic_t interface (to aid in porting) but only + * provides the few functions one should use for reference counting. + * + * It differs in that the counter saturates at REFCOUNT_SATURATED and will not + * move once there. This avoids wrapping the counter and causing 'spurious' + * use-after-free issues. + * + * Memory ordering rules are slightly relaxed wrt regular atomic_t functions + * and provide only what is strictly required for refcounts. + * + * The increments are fully relaxed; these will not provide ordering. The + * rationale is that whatever is used to obtain the object we're increasing the + * reference count on will provide the ordering. For locked data structures, + * its the lock acquire, for RCU/lockless data structures its the dependent + * load. + * + * Do note that inc_not_zero() provides a control dependency which will order + * future stores against the inc, this ensures we'll never modify the object + * if we did not in fact acquire a reference. + * + * The decrements will provide release order, such that all the prior loads and + * stores will be issued before, it also provides a control dependency, which + * will order us against the subsequent free(). + * + * The control dependency is against the load of the cmpxchg (ll/sc) that + * succeeded. This means the stores aren't fully ordered, but this is fine + * because the 1->0 transition indicates no concurrency. + * + * Note that the allocator is responsible for ordering things between free() + * and alloc(). + * + * The decrements dec_and_test() and sub_and_test() also provide acquire + * ordering on success. + * + */ + +/** + * refcount_add_not_zero - add a value to a refcount unless it is 0 + * @i: the value to add to the refcount + * @r: the refcount + * + * Will saturate at REFCOUNT_SATURATED and WARN. + * + * Provides no memory ordering, it is assumed the caller has guaranteed the + * object memory to be stable (RCU, etc.). It does provide a control dependency + * and thereby orders future stores. See the comment on top. + * + * Use of this function is not recommended for the normal reference counting + * use case in which references are taken and released one at a time. In these + * cases, refcount_inc(), or one of its variants, should instead be used to + * increment a reference count. + * + * Return: false if the passed refcount is 0, true otherwise + */ +static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r) +{ + unsigned int new, val = atomic_read(&r->refs); + + do { + if (!val) + return false; + + if (unlikely(val == REFCOUNT_SATURATED)) + return true; + + new = val + i; + if (new < val) + new = REFCOUNT_SATURATED; + + } while (!atomic_try_cmpxchg_relaxed(&r->refs, &val, new)); + + WARN_ONCE(new == REFCOUNT_SATURATED, + "refcount_t: saturated; leaking memory.\n"); + + return true; +} + +/** + * refcount_add - add a value to a refcount + * @i: the value to add to the refcount + * @r: the refcount + * + * Similar to atomic_add(), but will saturate at REFCOUNT_SATURATED and WARN. + * + * Provides no memory ordering, it is assumed the caller has guaranteed the + * object memory to be stable (RCU, etc.). It does provide a control dependency + * and thereby orders future stores. See the comment on top. 
+ * + * Use of this function is not recommended for the normal reference counting + * use case in which references are taken and released one at a time. In these + * cases, refcount_inc(), or one of its variants, should instead be used to + * increment a reference count. + */ +static inline void refcount_add(int i, refcount_t *r) +{ + WARN_ONCE(!refcount_add_not_zero(i, r), "refcount_t: addition on 0; use-after-free.\n"); +} + +/** + * refcount_inc_not_zero - increment a refcount unless it is 0 + * @r: the refcount to increment + * + * Similar to atomic_inc_not_zero(), but will saturate at REFCOUNT_SATURATED + * and WARN. + * + * Provides no memory ordering, it is assumed the caller has guaranteed the + * object memory to be stable (RCU, etc.). It does provide a control dependency + * and thereby orders future stores. See the comment on top. + * + * Return: true if the increment was successful, false otherwise + */ +static inline __must_check bool refcount_inc_not_zero(refcount_t *r) +{ + unsigned int new, val = atomic_read(&r->refs); + + do { + new = val + 1;
-extern __must_check bool refcount_inc_not_zero(refcount_t *r); -extern void refcount_inc(refcount_t *r); + if (!val) + return false;
-extern __must_check bool refcount_sub_and_test(int i, refcount_t *r); + if (unlikely(!new)) + return true;
-extern __must_check bool refcount_dec_and_test(refcount_t *r); -extern void refcount_dec(refcount_t *r); + } while (!atomic_try_cmpxchg_relaxed(&r->refs, &val, new)); + + WARN_ONCE(new == REFCOUNT_SATURATED, + "refcount_t: saturated; leaking memory.\n"); + + return true; +} + +/** + * refcount_inc - increment a refcount + * @r: the refcount to increment + * + * Similar to atomic_inc(), but will saturate at REFCOUNT_SATURATED and WARN. + * + * Provides no memory ordering, it is assumed the caller already has a + * reference on the object. + * + * Will WARN if the refcount is 0, as this represents a possible use-after-free + * condition. + */ +static inline void refcount_inc(refcount_t *r) +{ + WARN_ONCE(!refcount_inc_not_zero(r), "refcount_t: increment on 0; use-after-free.\n"); +} + +/** + * refcount_sub_and_test - subtract from a refcount and test if it is 0 + * @i: amount to subtract from the refcount + * @r: the refcount + * + * Similar to atomic_dec_and_test(), but it will WARN, return false and + * ultimately leak on underflow and will fail to decrement when saturated + * at REFCOUNT_SATURATED. + * + * Provides release memory ordering, such that prior loads and stores are done + * before, and provides an acquire ordering on success such that free() + * must come after. + * + * Use of this function is not recommended for the normal reference counting + * use case in which references are taken and released one at a time. In these + * cases, refcount_dec(), or one of its variants, should instead be used to + * decrement a reference count. + * + * Return: true if the resulting refcount is 0, false otherwise + */ +static inline __must_check bool refcount_sub_and_test(int i, refcount_t *r) +{ + unsigned int new, val = atomic_read(&r->refs); + + do { + if (unlikely(val == REFCOUNT_SATURATED)) + return false; + + new = val - i; + if (new > val) { + WARN_ONCE(new > val, "refcount_t: underflow; use-after-free.\n"); + return false; + } + + } while (!atomic_try_cmpxchg_release(&r->refs, &val, new)); + + if (!new) { + smp_acquire__after_ctrl_dep(); + return true; + } + return false; + +} + +/** + * refcount_dec_and_test - decrement a refcount and test if it is 0 + * @r: the refcount + * + * Similar to atomic_dec_and_test(), it will WARN on underflow and fail to + * decrement when saturated at REFCOUNT_SATURATED. + * + * Provides release memory ordering, such that prior loads and stores are done + * before, and provides an acquire ordering on success such that free() + * must come after. + * + * Return: true if the resulting refcount is 0, false otherwise + */ +static inline __must_check bool refcount_dec_and_test(refcount_t *r) +{ + return refcount_sub_and_test(1, r); +} + +/** + * refcount_dec - decrement a refcount + * @r: the refcount + * + * Similar to atomic_dec(), it will WARN on underflow and fail to decrement + * when saturated at REFCOUNT_SATURATED. + * + * Provides release memory ordering, such that prior loads and stores are done + * before. + */ +static inline void refcount_dec(refcount_t *r) +{ + WARN_ONCE(refcount_dec_and_test(r), "refcount_t: decrement hit 0; leaking memory.\n"); +}
-#else +#else /* CONFIG_REFCOUNT_FULL */
#define REFCOUNT_MAX INT_MAX #define REFCOUNT_SATURATED (INT_MIN / 2) @@ -103,7 +322,7 @@ static inline void refcount_dec(refcount_t *r) atomic_dec(&r->refs); } # endif /* !CONFIG_ARCH_HAS_REFCOUNT */ -#endif /* CONFIG_REFCOUNT_FULL */ +#endif /* !CONFIG_REFCOUNT_FULL */
extern __must_check bool refcount_dec_if_one(refcount_t *r); extern __must_check bool refcount_dec_not_one(refcount_t *r); diff --git a/lib/refcount.c b/lib/refcount.c index a2f670998cee..3a534fbebdcc 100644 --- a/lib/refcount.c +++ b/lib/refcount.c @@ -1,41 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 /* - * Variant of atomic_t specialized for reference counts. - * - * The interface matches the atomic_t interface (to aid in porting) but only - * provides the few functions one should use for reference counting. - * - * It differs in that the counter saturates at REFCOUNT_SATURATED and will not - * move once there. This avoids wrapping the counter and causing 'spurious' - * use-after-free issues. - * - * Memory ordering rules are slightly relaxed wrt regular atomic_t functions - * and provide only what is strictly required for refcounts. - * - * The increments are fully relaxed; these will not provide ordering. The - * rationale is that whatever is used to obtain the object we're increasing the - * reference count on will provide the ordering. For locked data structures, - * its the lock acquire, for RCU/lockless data structures its the dependent - * load. - * - * Do note that inc_not_zero() provides a control dependency which will order - * future stores against the inc, this ensures we'll never modify the object - * if we did not in fact acquire a reference. - * - * The decrements will provide release order, such that all the prior loads and - * stores will be issued before, it also provides a control dependency, which - * will order us against the subsequent free(). - * - * The control dependency is against the load of the cmpxchg (ll/sc) that - * succeeded. This means the stores aren't fully ordered, but this is fine - * because the 1->0 transition indicates no concurrency. - * - * Note that the allocator is responsible for ordering things between free() - * and alloc(). - * - * The decrements dec_and_test() and sub_and_test() also provide acquire - * ordering on success. - * + * Out-of-line refcount functions common to all refcount implementations. */
#include <linux/mutex.h> @@ -43,207 +8,6 @@ #include <linux/spinlock.h> #include <linux/bug.h>
-#ifdef CONFIG_REFCOUNT_FULL - -/** - * refcount_add_not_zero - add a value to a refcount unless it is 0 - * @i: the value to add to the refcount - * @r: the refcount - * - * Will saturate at REFCOUNT_SATURATED and WARN. - * - * Provides no memory ordering, it is assumed the caller has guaranteed the - * object memory to be stable (RCU, etc.). It does provide a control dependency - * and thereby orders future stores. See the comment on top. - * - * Use of this function is not recommended for the normal reference counting - * use case in which references are taken and released one at a time. In these - * cases, refcount_inc(), or one of its variants, should instead be used to - * increment a reference count. - * - * Return: false if the passed refcount is 0, true otherwise - */ -bool refcount_add_not_zero(int i, refcount_t *r) -{ - unsigned int new, val = atomic_read(&r->refs); - - do { - if (!val) - return false; - - if (unlikely(val == REFCOUNT_SATURATED)) - return true; - - new = val + i; - if (new < val) - new = REFCOUNT_SATURATED; - - } while (!atomic_try_cmpxchg_relaxed(&r->refs, &val, new)); - - WARN_ONCE(new == REFCOUNT_SATURATED, - "refcount_t: saturated; leaking memory.\n"); - - return true; -} -EXPORT_SYMBOL(refcount_add_not_zero); - -/** - * refcount_add - add a value to a refcount - * @i: the value to add to the refcount - * @r: the refcount - * - * Similar to atomic_add(), but will saturate at REFCOUNT_SATURATED and WARN. - * - * Provides no memory ordering, it is assumed the caller has guaranteed the - * object memory to be stable (RCU, etc.). It does provide a control dependency - * and thereby orders future stores. See the comment on top. - * - * Use of this function is not recommended for the normal reference counting - * use case in which references are taken and released one at a time. In these - * cases, refcount_inc(), or one of its variants, should instead be used to - * increment a reference count. - */ -void refcount_add(int i, refcount_t *r) -{ - WARN_ONCE(!refcount_add_not_zero(i, r), "refcount_t: addition on 0; use-after-free.\n"); -} -EXPORT_SYMBOL(refcount_add); - -/** - * refcount_inc_not_zero - increment a refcount unless it is 0 - * @r: the refcount to increment - * - * Similar to atomic_inc_not_zero(), but will saturate at REFCOUNT_SATURATED - * and WARN. - * - * Provides no memory ordering, it is assumed the caller has guaranteed the - * object memory to be stable (RCU, etc.). It does provide a control dependency - * and thereby orders future stores. See the comment on top. - * - * Return: true if the increment was successful, false otherwise - */ -bool refcount_inc_not_zero(refcount_t *r) -{ - unsigned int new, val = atomic_read(&r->refs); - - do { - new = val + 1; - - if (!val) - return false; - - if (unlikely(!new)) - return true; - - } while (!atomic_try_cmpxchg_relaxed(&r->refs, &val, new)); - - WARN_ONCE(new == REFCOUNT_SATURATED, - "refcount_t: saturated; leaking memory.\n"); - - return true; -} -EXPORT_SYMBOL(refcount_inc_not_zero); - -/** - * refcount_inc - increment a refcount - * @r: the refcount to increment - * - * Similar to atomic_inc(), but will saturate at REFCOUNT_SATURATED and WARN. - * - * Provides no memory ordering, it is assumed the caller already has a - * reference on the object. - * - * Will WARN if the refcount is 0, as this represents a possible use-after-free - * condition. 
- */ -void refcount_inc(refcount_t *r) -{ - WARN_ONCE(!refcount_inc_not_zero(r), "refcount_t: increment on 0; use-after-free.\n"); -} -EXPORT_SYMBOL(refcount_inc); - -/** - * refcount_sub_and_test - subtract from a refcount and test if it is 0 - * @i: amount to subtract from the refcount - * @r: the refcount - * - * Similar to atomic_dec_and_test(), but it will WARN, return false and - * ultimately leak on underflow and will fail to decrement when saturated - * at REFCOUNT_SATURATED. - * - * Provides release memory ordering, such that prior loads and stores are done - * before, and provides an acquire ordering on success such that free() - * must come after. - * - * Use of this function is not recommended for the normal reference counting - * use case in which references are taken and released one at a time. In these - * cases, refcount_dec(), or one of its variants, should instead be used to - * decrement a reference count. - * - * Return: true if the resulting refcount is 0, false otherwise - */ -bool refcount_sub_and_test(int i, refcount_t *r) -{ - unsigned int new, val = atomic_read(&r->refs); - - do { - if (unlikely(val == REFCOUNT_SATURATED)) - return false; - - new = val - i; - if (new > val) { - WARN_ONCE(new > val, "refcount_t: underflow; use-after-free.\n"); - return false; - } - - } while (!atomic_try_cmpxchg_release(&r->refs, &val, new)); - - if (!new) { - smp_acquire__after_ctrl_dep(); - return true; - } - return false; - -} -EXPORT_SYMBOL(refcount_sub_and_test); - -/** - * refcount_dec_and_test - decrement a refcount and test if it is 0 - * @r: the refcount - * - * Similar to atomic_dec_and_test(), it will WARN on underflow and fail to - * decrement when saturated at REFCOUNT_SATURATED. - * - * Provides release memory ordering, such that prior loads and stores are done - * before, and provides an acquire ordering on success such that free() - * must come after. - * - * Return: true if the resulting refcount is 0, false otherwise - */ -bool refcount_dec_and_test(refcount_t *r) -{ - return refcount_sub_and_test(1, r); -} -EXPORT_SYMBOL(refcount_dec_and_test); - -/** - * refcount_dec - decrement a refcount - * @r: the refcount - * - * Similar to atomic_dec(), it will WARN on underflow and fail to decrement - * when saturated at REFCOUNT_SATURATED. - * - * Provides release memory ordering, such that prior loads and stores are done - * before. - */ -void refcount_dec(refcount_t *r) -{ - WARN_ONCE(refcount_dec_and_test(r), "refcount_t: decrement hit 0; leaking memory.\n"); -} -EXPORT_SYMBOL(refcount_dec); - -#endif /* CONFIG_REFCOUNT_FULL */ - /** * refcount_dec_if_one - decrement a refcount if it is 1 * @r: the refcount
From: Will Deacon will@kernel.org
[ Upstream commit dcb786493f3e48da3272b710028d42ec608cfda1 ]
Rewrite the generic REFCOUNT_FULL implementation so that the saturation point is moved to INT_MIN / 2. This allows us to defer the sanity checks until after the atomic operation, which removes many uses of cmpxchg() in favour of atomic_fetch_{add,sub}().
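To make the deferred check concrete, here is a minimal userspace sketch of the pattern, using C11 atomics in place of the kernel's atomic_t; the names and the demo harness are illustrative, not the kernel code itself:

    #include <limits.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define REFCOUNT_SATURATED (INT_MIN / 2)

    static void demo_inc(atomic_int *refs)
    {
            /* Add first, without a cmpxchg() loop... */
            int old = atomic_fetch_add_explicit(refs, 1, memory_order_relaxed);

            /* ...then detect overflow or prior saturation after the fact. */
            if (old < 0 || old == INT_MAX)
                    atomic_store(refs, REFCOUNT_SATURATED);
    }

    int main(void)
    {
            atomic_int refs = INT_MAX;

            demo_inc(&refs);        /* wraps past INT_MAX, then saturates */
            printf("saturated: %d\n", atomic_load(&refs) == REFCOUNT_SATURATED);
            return 0;
    }

The window between the add and the fix-up is unchecked, which is why the saturation point sits roughly halfway between 0 and INT_MAX; the comment block in the hunk below spells out why that race is not practically exploitable.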
Some crude perf results obtained from lkdtm show substantially less overhead, despite the checking:
$ perf stat -r 3 -B -- echo {ATOMIC,REFCOUNT}_TIMING >/sys/kernel/debug/provoke-crash/DIRECT
# arm64
ATOMIC_TIMING: 46.50451 +- 0.00134 seconds time elapsed ( +- 0.00% )
REFCOUNT_TIMING (REFCOUNT_FULL, mainline): 77.57522 +- 0.00982 seconds time elapsed ( +- 0.01% )
REFCOUNT_TIMING (REFCOUNT_FULL, this series): 48.7181 +- 0.0256 seconds time elapsed ( +- 0.05% )
# x86
ATOMIC_TIMING: 31.6225 +- 0.0776 seconds time elapsed ( +- 0.25% )
REFCOUNT_TIMING (!REFCOUNT_FULL, mainline/x86 asm): 31.6689 +- 0.0901 seconds time elapsed ( +- 0.28% )
REFCOUNT_TIMING (REFCOUNT_FULL, mainline): 53.203 +- 0.138 seconds time elapsed ( +- 0.26% )
REFCOUNT_TIMING (REFCOUNT_FULL, this series): 31.7408 +- 0.0486 seconds time elapsed ( +- 0.15% )
Signed-off-by: Will Deacon will@kernel.org
Reviewed-by: Ard Biesheuvel ardb@kernel.org
Reviewed-by: Kees Cook keescook@chromium.org
Tested-by: Hanjun Guo guohanjun@huawei.com
Tested-by: Jan Glauber jglauber@marvell.com
Cc: Ard Biesheuvel ard.biesheuvel@linaro.org
Cc: Elena Reshetova elena.reshetova@intel.com
Cc: Linus Torvalds torvalds@linux-foundation.org
Cc: Peter Zijlstra peterz@infradead.org
Cc: Thomas Gleixner tglx@linutronix.de
Link: https://lkml.kernel.org/r/20191121115902.2551-6-will@kernel.org
Signed-off-by: Ingo Molnar mingo@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 include/linux/refcount.h | 131 ++++++++++++++++++++++-----------------
 1 file changed, 75 insertions(+), 56 deletions(-)
diff --git a/include/linux/refcount.h b/include/linux/refcount.h index e719b5b1220e..e3b218d669ce 100644 --- a/include/linux/refcount.h +++ b/include/linux/refcount.h @@ -47,8 +47,8 @@ static inline unsigned int refcount_read(const refcount_t *r) #ifdef CONFIG_REFCOUNT_FULL #include <linux/bug.h>
-#define REFCOUNT_MAX (UINT_MAX - 1) -#define REFCOUNT_SATURATED UINT_MAX +#define REFCOUNT_MAX INT_MAX +#define REFCOUNT_SATURATED (INT_MIN / 2)
/* * Variant of atomic_t specialized for reference counts. @@ -56,9 +56,47 @@ static inline unsigned int refcount_read(const refcount_t *r) * The interface matches the atomic_t interface (to aid in porting) but only * provides the few functions one should use for reference counting. * - * It differs in that the counter saturates at REFCOUNT_SATURATED and will not - * move once there. This avoids wrapping the counter and causing 'spurious' - * use-after-free issues. + * Saturation semantics + * ==================== + * + * refcount_t differs from atomic_t in that the counter saturates at + * REFCOUNT_SATURATED and will not move once there. This avoids wrapping the + * counter and causing 'spurious' use-after-free issues. In order to avoid the + * cost associated with introducing cmpxchg() loops into all of the saturating + * operations, we temporarily allow the counter to take on an unchecked value + * and then explicitly set it to REFCOUNT_SATURATED on detecting that underflow + * or overflow has occurred. Although this is racy when multiple threads + * access the refcount concurrently, by placing REFCOUNT_SATURATED roughly + * equidistant from 0 and INT_MAX we minimise the scope for error: + * + * INT_MAX REFCOUNT_SATURATED UINT_MAX + * 0 (0x7fff_ffff) (0xc000_0000) (0xffff_ffff) + * +--------------------------------+----------------+----------------+ + * <---------- bad value! ----------> + * + * (in a signed view of the world, the "bad value" range corresponds to + * a negative counter value). + * + * As an example, consider a refcount_inc() operation that causes the counter + * to overflow: + * + * int old = atomic_fetch_add_relaxed(r); + * // old is INT_MAX, refcount now INT_MIN (0x8000_0000) + * if (old < 0) + * atomic_set(r, REFCOUNT_SATURATED); + * + * If another thread also performs a refcount_inc() operation between the two + * atomic operations, then the count will continue to edge closer to 0. If it + * reaches a value of 1 before /any/ of the threads reset it to the saturated + * value, then a concurrent refcount_dec_and_test() may erroneously free the + * underlying object. Given the precise timing details involved with the + * round-robin scheduling of each thread manipulating the refcount and the need + * to hit the race multiple times in succession, there doesn't appear to be a + * practical avenue of attack even if using refcount_add() operations with + * larger increments. + * + * Memory ordering + * =============== * * Memory ordering rules are slightly relaxed wrt regular atomic_t functions * and provide only what is strictly required for refcounts. @@ -109,25 +147,19 @@ static inline unsigned int refcount_read(const refcount_t *r) */ static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r) { - unsigned int new, val = atomic_read(&r->refs); + int old = refcount_read(r);
do { - if (!val) - return false; - - if (unlikely(val == REFCOUNT_SATURATED)) - return true; - - new = val + i; - if (new < val) - new = REFCOUNT_SATURATED; + if (!old) + break; + } while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + i));
- } while (!atomic_try_cmpxchg_relaxed(&r->refs, &val, new)); - - WARN_ONCE(new == REFCOUNT_SATURATED, - "refcount_t: saturated; leaking memory.\n"); + if (unlikely(old < 0 || old + i < 0)) { + refcount_set(r, REFCOUNT_SATURATED); + WARN_ONCE(1, "refcount_t: saturated; leaking memory.\n"); + }
- return true; + return old; }
/** @@ -148,7 +180,13 @@ static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r) */ static inline void refcount_add(int i, refcount_t *r) { - WARN_ONCE(!refcount_add_not_zero(i, r), "refcount_t: addition on 0; use-after-free.\n"); + int old = atomic_fetch_add_relaxed(i, &r->refs); + + WARN_ONCE(!old, "refcount_t: addition on 0; use-after-free.\n"); + if (unlikely(old <= 0 || old + i <= 0)) { + refcount_set(r, REFCOUNT_SATURATED); + WARN_ONCE(old, "refcount_t: saturated; leaking memory.\n"); + } }
/** @@ -166,23 +204,7 @@ static inline void refcount_add(int i, refcount_t *r) */ static inline __must_check bool refcount_inc_not_zero(refcount_t *r) { - unsigned int new, val = atomic_read(&r->refs); - - do { - new = val + 1; - - if (!val) - return false; - - if (unlikely(!new)) - return true; - - } while (!atomic_try_cmpxchg_relaxed(&r->refs, &val, new)); - - WARN_ONCE(new == REFCOUNT_SATURATED, - "refcount_t: saturated; leaking memory.\n"); - - return true; + return refcount_add_not_zero(1, r); }
/** @@ -199,7 +221,7 @@ static inline __must_check bool refcount_inc_not_zero(refcount_t *r) */ static inline void refcount_inc(refcount_t *r) { - WARN_ONCE(!refcount_inc_not_zero(r), "refcount_t: increment on 0; use-after-free.\n"); + refcount_add(1, r); }
/** @@ -224,26 +246,19 @@ static inline void refcount_inc(refcount_t *r) */ static inline __must_check bool refcount_sub_and_test(int i, refcount_t *r) { - unsigned int new, val = atomic_read(&r->refs); - - do { - if (unlikely(val == REFCOUNT_SATURATED)) - return false; + int old = atomic_fetch_sub_release(i, &r->refs);
- new = val - i; - if (new > val) { - WARN_ONCE(new > val, "refcount_t: underflow; use-after-free.\n"); - return false; - } - - } while (!atomic_try_cmpxchg_release(&r->refs, &val, new)); - - if (!new) { + if (old == i) { smp_acquire__after_ctrl_dep(); return true; } - return false;
+ if (unlikely(old < 0 || old - i < 0)) { + refcount_set(r, REFCOUNT_SATURATED); + WARN_ONCE(1, "refcount_t: underflow; use-after-free.\n"); + } + + return false; }
/** @@ -276,9 +291,13 @@ static inline __must_check bool refcount_dec_and_test(refcount_t *r) */ static inline void refcount_dec(refcount_t *r) { - WARN_ONCE(refcount_dec_and_test(r), "refcount_t: decrement hit 0; leaking memory.\n"); -} + int old = atomic_fetch_sub_release(1, &r->refs);
+ if (unlikely(old <= 1)) { + refcount_set(r, REFCOUNT_SATURATED); + WARN_ONCE(1, "refcount_t: decrement hit 0; leaking memory.\n"); + } +} #else /* CONFIG_REFCOUNT_FULL */
#define REFCOUNT_MAX INT_MAX
From: Will Deacon will@kernel.org
[ Upstream commit 1eb085d94256aaa69b00cf5a86e3c5f5bb2bc460 ]
Having the refcount saturation and warnings inline bloats the text, despite the fact that these paths should never be executed in normal operation.
Move the refcount saturation and warnings out of line to reduce the image size when refcount_t checking is enabled. Relative to an x86_64 defconfig, the sizes reported by bloat-o-meter are:
# defconfig+REFCOUNT_FULL, inline saturation (i.e. before this patch)
Total: Before=14762076, After=14915442, chg +1.04%
# defconfig+REFCOUNT_FULL, out-of-line saturation (i.e. after this patch)
Total: Before=14762076, After=14835497, chg +0.50%
A side-effect of this change is that we now only get one warning per refcount saturation type, rather than one per problematic call-site.
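A rough userspace model of why the granularity changes: once the warning lives in one shared out-of-line helper, the "fire once" state is keyed by saturation type instead of by call site. All names below are illustrative:

    #include <stdbool.h>
    #include <stdio.h>

    enum saturation_type { SAT_OVERFLOW, SAT_UNDERFLOW, SAT_NR_TYPES };

    /* One flag per type, shared by every caller of the helper. */
    static void warn_saturate(enum saturation_type t)
    {
            static bool warned[SAT_NR_TYPES];

            if (warned[t])
                    return;
            warned[t] = true;
            fprintf(stderr, "refcount saturated (type %d)\n", t);
    }

    int main(void)
    {
            warn_saturate(SAT_OVERFLOW);
            warn_saturate(SAT_OVERFLOW);    /* suppressed: same type */
            warn_saturate(SAT_UNDERFLOW);   /* printed: new type */
            return 0;
    }

In the kernel the same effect falls out of WARN_ONCE() expanding to a single static flag inside refcount_warn_saturate(), as the hunks below show.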
Signed-off-by: Will Deacon will@kernel.org
Reviewed-by: Ard Biesheuvel ardb@kernel.org
Reviewed-by: Kees Cook keescook@chromium.org
Tested-by: Hanjun Guo guohanjun@huawei.com
Cc: Ard Biesheuvel ard.biesheuvel@linaro.org
Cc: Elena Reshetova elena.reshetova@intel.com
Cc: Linus Torvalds torvalds@linux-foundation.org
Cc: Peter Zijlstra peterz@infradead.org
Cc: Thomas Gleixner tglx@linutronix.de
Link: https://lkml.kernel.org/r/20191121115902.2551-7-will@kernel.org
Signed-off-by: Ingo Molnar mingo@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 include/linux/refcount.h | 39 ++++++++++++++++++++-------------------
 lib/refcount.c | 28 ++++++++++++++++++++++++++++
 2 files changed, 48 insertions(+), 19 deletions(-)
diff --git a/include/linux/refcount.h b/include/linux/refcount.h index e3b218d669ce..1cd0a876a789 100644 --- a/include/linux/refcount.h +++ b/include/linux/refcount.h @@ -23,6 +23,16 @@ typedef struct refcount_struct {
#define REFCOUNT_INIT(n) { .refs = ATOMIC_INIT(n), }
+enum refcount_saturation_type { + REFCOUNT_ADD_NOT_ZERO_OVF, + REFCOUNT_ADD_OVF, + REFCOUNT_ADD_UAF, + REFCOUNT_SUB_UAF, + REFCOUNT_DEC_LEAK, +}; + +void refcount_warn_saturate(refcount_t *r, enum refcount_saturation_type t); + /** * refcount_set - set a refcount's value * @r: the refcount @@ -154,10 +164,8 @@ static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r) break; } while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + i));
- if (unlikely(old < 0 || old + i < 0)) { - refcount_set(r, REFCOUNT_SATURATED); - WARN_ONCE(1, "refcount_t: saturated; leaking memory.\n"); - } + if (unlikely(old < 0 || old + i < 0)) + refcount_warn_saturate(r, REFCOUNT_ADD_NOT_ZERO_OVF);
return old; } @@ -182,11 +190,10 @@ static inline void refcount_add(int i, refcount_t *r) { int old = atomic_fetch_add_relaxed(i, &r->refs);
- WARN_ONCE(!old, "refcount_t: addition on 0; use-after-free.\n"); - if (unlikely(old <= 0 || old + i <= 0)) { - refcount_set(r, REFCOUNT_SATURATED); - WARN_ONCE(old, "refcount_t: saturated; leaking memory.\n"); - } + if (unlikely(!old)) + refcount_warn_saturate(r, REFCOUNT_ADD_UAF); + else if (unlikely(old < 0 || old + i < 0)) + refcount_warn_saturate(r, REFCOUNT_ADD_OVF); }
/** @@ -253,10 +260,8 @@ static inline __must_check bool refcount_sub_and_test(int i, refcount_t *r) return true; }
- if (unlikely(old < 0 || old - i < 0)) { - refcount_set(r, REFCOUNT_SATURATED); - WARN_ONCE(1, "refcount_t: underflow; use-after-free.\n"); - } + if (unlikely(old < 0 || old - i < 0)) + refcount_warn_saturate(r, REFCOUNT_SUB_UAF);
return false; } @@ -291,12 +296,8 @@ static inline __must_check bool refcount_dec_and_test(refcount_t *r) */ static inline void refcount_dec(refcount_t *r) { - int old = atomic_fetch_sub_release(1, &r->refs); - - if (unlikely(old <= 1)) { - refcount_set(r, REFCOUNT_SATURATED); - WARN_ONCE(1, "refcount_t: decrement hit 0; leaking memory.\n"); - } + if (unlikely(atomic_fetch_sub_release(1, &r->refs) <= 1)) + refcount_warn_saturate(r, REFCOUNT_DEC_LEAK); } #else /* CONFIG_REFCOUNT_FULL */
diff --git a/lib/refcount.c b/lib/refcount.c index 3a534fbebdcc..8b7e249c0e10 100644 --- a/lib/refcount.c +++ b/lib/refcount.c @@ -8,6 +8,34 @@ #include <linux/spinlock.h> #include <linux/bug.h>
+#define REFCOUNT_WARN(str) WARN_ONCE(1, "refcount_t: " str ".\n") + +void refcount_warn_saturate(refcount_t *r, enum refcount_saturation_type t) +{ + refcount_set(r, REFCOUNT_SATURATED); + + switch (t) { + case REFCOUNT_ADD_NOT_ZERO_OVF: + REFCOUNT_WARN("saturated; leaking memory"); + break; + case REFCOUNT_ADD_OVF: + REFCOUNT_WARN("saturated; leaking memory"); + break; + case REFCOUNT_ADD_UAF: + REFCOUNT_WARN("addition on 0; use-after-free"); + break; + case REFCOUNT_SUB_UAF: + REFCOUNT_WARN("underflow; use-after-free"); + break; + case REFCOUNT_DEC_LEAK: + REFCOUNT_WARN("decrement hit 0; leaking memory"); + break; + default: + REFCOUNT_WARN("unknown saturation event!?"); + } +} +EXPORT_SYMBOL(refcount_warn_saturate); + /** * refcount_dec_if_one - decrement a refcount if it is 1 * @r: the refcount
From: Will Deacon will@kernel.org
[ Upstream commit 65b008552469f1c37f5e06e0016924502e40b4f5 ]
The definitions of REFCOUNT_MAX and REFCOUNT_SATURATED are the same, regardless of CONFIG_REFCOUNT_FULL, so consolidate them into a single pair of definitions.
Signed-off-by: Will Deacon will@kernel.org
Reviewed-by: Ard Biesheuvel ardb@kernel.org
Reviewed-by: Kees Cook keescook@chromium.org
Tested-by: Hanjun Guo guohanjun@huawei.com
Cc: Ard Biesheuvel ard.biesheuvel@linaro.org
Cc: Elena Reshetova elena.reshetova@intel.com
Cc: Linus Torvalds torvalds@linux-foundation.org
Cc: Peter Zijlstra peterz@infradead.org
Cc: Thomas Gleixner tglx@linutronix.de
Link: https://lkml.kernel.org/r/20191121115902.2551-8-will@kernel.org
Signed-off-by: Ingo Molnar mingo@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 include/linux/refcount.h | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/include/linux/refcount.h b/include/linux/refcount.h index 1cd0a876a789..757d4630115c 100644 --- a/include/linux/refcount.h +++ b/include/linux/refcount.h @@ -22,6 +22,8 @@ typedef struct refcount_struct { } refcount_t;
#define REFCOUNT_INIT(n) { .refs = ATOMIC_INIT(n), } +#define REFCOUNT_MAX INT_MAX +#define REFCOUNT_SATURATED (INT_MIN / 2)
enum refcount_saturation_type { REFCOUNT_ADD_NOT_ZERO_OVF, @@ -57,9 +59,6 @@ static inline unsigned int refcount_read(const refcount_t *r) #ifdef CONFIG_REFCOUNT_FULL #include <linux/bug.h>
-#define REFCOUNT_MAX INT_MAX -#define REFCOUNT_SATURATED (INT_MIN / 2) - /* * Variant of atomic_t specialized for reference counts. * @@ -300,10 +299,6 @@ static inline void refcount_dec(refcount_t *r) refcount_warn_saturate(r, REFCOUNT_DEC_LEAK); } #else /* CONFIG_REFCOUNT_FULL */ - -#define REFCOUNT_MAX INT_MAX -#define REFCOUNT_SATURATED (INT_MIN / 2) - # ifdef CONFIG_ARCH_HAS_REFCOUNT # include <asm/refcount.h> # else
From: Will Deacon will@kernel.org
[ Upstream commit fb041bb7c0a918b95c6889fc965cdc4a75b4c0ca ]
The generic implementation of refcount_t should be good enough for everybody, so remove ARCH_HAS_REFCOUNT and REFCOUNT_FULL entirely, leaving the generic implementation enabled unconditionally.
Signed-off-by: Will Deacon will@kernel.org
Reviewed-by: Ard Biesheuvel ardb@kernel.org
Acked-by: Kees Cook keescook@chromium.org
Tested-by: Hanjun Guo guohanjun@huawei.com
Cc: Ard Biesheuvel ard.biesheuvel@linaro.org
Cc: Elena Reshetova elena.reshetova@intel.com
Cc: Linus Torvalds torvalds@linux-foundation.org
Cc: Peter Zijlstra peterz@infradead.org
Cc: Thomas Gleixner tglx@linutronix.de
Link: https://lkml.kernel.org/r/20191121115902.2551-9-will@kernel.org
Signed-off-by: Ingo Molnar mingo@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/Kconfig | 21 ----
 arch/arm/Kconfig | 1 -
 arch/arm64/Kconfig | 1 -
 arch/s390/configs/debug_defconfig | 1 -
 arch/x86/Kconfig | 1 -
 arch/x86/include/asm/asm.h | 6 --
 arch/x86/include/asm/refcount.h | 126 -----------------------
 arch/x86/mm/extable.c | 49 ---------
 drivers/gpu/drm/i915/Kconfig.debug | 1 -
 include/linux/refcount.h | 158 +++++++++++------------------
 lib/refcount.c | 2 +-
 11 files changed, 59 insertions(+), 308 deletions(-)
 delete mode 100644 arch/x86/include/asm/refcount.h
diff --git a/arch/Kconfig b/arch/Kconfig index a8df66e64544..2219a07dca1e 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -915,27 +915,6 @@ config STRICT_MODULE_RWX config ARCH_HAS_PHYS_TO_DMA bool
-config ARCH_HAS_REFCOUNT - bool - help - An architecture selects this when it has implemented refcount_t - using open coded assembly primitives that provide an optimized - refcount_t implementation, possibly at the expense of some full - refcount state checks of CONFIG_REFCOUNT_FULL=y. - - The refcount overflow check behavior, however, must be retained. - Catching overflows is the primary security concern for protecting - against bugs in reference counts. - -config REFCOUNT_FULL - bool "Perform full reference count validation at the expense of speed" - help - Enabling this switches the refcounting infrastructure from a fast - unchecked atomic_t implementation to a fully state checked - implementation, which can be (slightly) slower but provides protections - against various use-after-free conditions that can be used in - security flaw exploits. - config HAVE_ARCH_COMPILER_H bool help diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index a1622b9290fd..a4364cce85f8 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -119,7 +119,6 @@ config ARM select OLD_SIGSUSPEND3 select PCI_SYSCALL if PCI select PERF_USE_VMALLOC - select REFCOUNT_FULL select RTC_LIB select SYS_SUPPORTS_APM_EMULATION # Above selects are sorted alphabetically; please add new ones diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index a1a828ca188c..6b73143f0cf8 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -181,7 +181,6 @@ config ARM64 select PCI_SYSCALL if PCI select POWER_RESET select POWER_SUPPLY - select REFCOUNT_FULL select SPARSE_IRQ select SWIOTLB select SYSCTL_EXCEPTION_TRACE diff --git a/arch/s390/configs/debug_defconfig b/arch/s390/configs/debug_defconfig index 38d64030aacf..2e60c80395ab 100644 --- a/arch/s390/configs/debug_defconfig +++ b/arch/s390/configs/debug_defconfig @@ -62,7 +62,6 @@ CONFIG_OPROFILE=m CONFIG_KPROBES=y CONFIG_JUMP_LABEL=y CONFIG_STATIC_KEYS_SELFTEST=y -CONFIG_REFCOUNT_FULL=y CONFIG_LOCK_EVENT_COUNTS=y CONFIG_MODULES=y CONFIG_MODULE_FORCE_LOAD=y diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index c6c71592f6e4..6002252692af 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -73,7 +73,6 @@ config X86 select ARCH_HAS_PMEM_API if X86_64 select ARCH_HAS_PTE_DEVMAP if X86_64 select ARCH_HAS_PTE_SPECIAL - select ARCH_HAS_REFCOUNT select ARCH_HAS_UACCESS_FLUSHCACHE if X86_64 select ARCH_HAS_UACCESS_MCSAFE if X86_64 && X86_MCE select ARCH_HAS_SET_MEMORY diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h index 1b563f9167ea..cd339b88d5d4 100644 --- a/arch/x86/include/asm/asm.h +++ b/arch/x86/include/asm/asm.h @@ -141,9 +141,6 @@ # define _ASM_EXTABLE_EX(from, to) \ _ASM_EXTABLE_HANDLE(from, to, ex_handler_ext)
-# define _ASM_EXTABLE_REFCOUNT(from, to) \ - _ASM_EXTABLE_HANDLE(from, to, ex_handler_refcount) - # define _ASM_NOKPROBE(entry) \ .pushsection "_kprobe_blacklist","aw" ; \ _ASM_ALIGN ; \ @@ -172,9 +169,6 @@ # define _ASM_EXTABLE_EX(from, to) \ _ASM_EXTABLE_HANDLE(from, to, ex_handler_ext)
-# define _ASM_EXTABLE_REFCOUNT(from, to) \ - _ASM_EXTABLE_HANDLE(from, to, ex_handler_refcount) - /* For C file, we already have NOKPROBE_SYMBOL macro */ #endif
diff --git a/arch/x86/include/asm/refcount.h b/arch/x86/include/asm/refcount.h deleted file mode 100644 index 232f856e0db0..000000000000 --- a/arch/x86/include/asm/refcount.h +++ /dev/null @@ -1,126 +0,0 @@ -#ifndef __ASM_X86_REFCOUNT_H -#define __ASM_X86_REFCOUNT_H -/* - * x86-specific implementation of refcount_t. Based on PAX_REFCOUNT from - * PaX/grsecurity. - */ -#include <linux/refcount.h> -#include <asm/bug.h> - -/* - * This is the first portion of the refcount error handling, which lives in - * .text.unlikely, and is jumped to from the CPU flag check (in the - * following macros). This saves the refcount value location into CX for - * the exception handler to use (in mm/extable.c), and then triggers the - * central refcount exception. The fixup address for the exception points - * back to the regular execution flow in .text. - */ -#define _REFCOUNT_EXCEPTION \ - ".pushsection .text..refcount\n" \ - "111:\tlea %[var], %%" _ASM_CX "\n" \ - "112:\t" ASM_UD2 "\n" \ - ASM_UNREACHABLE \ - ".popsection\n" \ - "113:\n" \ - _ASM_EXTABLE_REFCOUNT(112b, 113b) - -/* Trigger refcount exception if refcount result is negative. */ -#define REFCOUNT_CHECK_LT_ZERO \ - "js 111f\n\t" \ - _REFCOUNT_EXCEPTION - -/* Trigger refcount exception if refcount result is zero or negative. */ -#define REFCOUNT_CHECK_LE_ZERO \ - "jz 111f\n\t" \ - REFCOUNT_CHECK_LT_ZERO - -/* Trigger refcount exception unconditionally. */ -#define REFCOUNT_ERROR \ - "jmp 111f\n\t" \ - _REFCOUNT_EXCEPTION - -static __always_inline void refcount_add(unsigned int i, refcount_t *r) -{ - asm volatile(LOCK_PREFIX "addl %1,%0\n\t" - REFCOUNT_CHECK_LT_ZERO - : [var] "+m" (r->refs.counter) - : "ir" (i) - : "cc", "cx"); -} - -static __always_inline void refcount_inc(refcount_t *r) -{ - asm volatile(LOCK_PREFIX "incl %0\n\t" - REFCOUNT_CHECK_LT_ZERO - : [var] "+m" (r->refs.counter) - : : "cc", "cx"); -} - -static __always_inline void refcount_dec(refcount_t *r) -{ - asm volatile(LOCK_PREFIX "decl %0\n\t" - REFCOUNT_CHECK_LE_ZERO - : [var] "+m" (r->refs.counter) - : : "cc", "cx"); -} - -static __always_inline __must_check -bool refcount_sub_and_test(unsigned int i, refcount_t *r) -{ - bool ret = GEN_BINARY_SUFFIXED_RMWcc(LOCK_PREFIX "subl", - REFCOUNT_CHECK_LT_ZERO, - r->refs.counter, e, "er", i, "cx"); - - if (ret) { - smp_acquire__after_ctrl_dep(); - return true; - } - - return false; -} - -static __always_inline __must_check bool refcount_dec_and_test(refcount_t *r) -{ - bool ret = GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl", - REFCOUNT_CHECK_LT_ZERO, - r->refs.counter, e, "cx"); - - if (ret) { - smp_acquire__after_ctrl_dep(); - return true; - } - - return false; -} - -static __always_inline __must_check -bool refcount_add_not_zero(unsigned int i, refcount_t *r) -{ - int c, result; - - c = atomic_read(&(r->refs)); - do { - if (unlikely(c == 0)) - return false; - - result = c + i; - - /* Did we try to increment from/to an undesirable state? 
*/ - if (unlikely(c < 0 || c == INT_MAX || result < c)) { - asm volatile(REFCOUNT_ERROR - : : [var] "m" (r->refs.counter) - : "cc", "cx"); - break; - } - - } while (!atomic_try_cmpxchg(&(r->refs), &c, result)); - - return c != 0; -} - -static __always_inline __must_check bool refcount_inc_not_zero(refcount_t *r) -{ - return refcount_add_not_zero(1, r); -} - -#endif diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c index 4d75bc656f97..30bb0bd3b1b8 100644 --- a/arch/x86/mm/extable.c +++ b/arch/x86/mm/extable.c @@ -44,55 +44,6 @@ __visible bool ex_handler_fault(const struct exception_table_entry *fixup, } EXPORT_SYMBOL_GPL(ex_handler_fault);
-/* - * Handler for UD0 exception following a failed test against the - * result of a refcount inc/dec/add/sub. - */ -__visible bool ex_handler_refcount(const struct exception_table_entry *fixup, - struct pt_regs *regs, int trapnr, - unsigned long error_code, - unsigned long fault_addr) -{ - /* First unconditionally saturate the refcount. */ - *(int *)regs->cx = INT_MIN / 2; - - /* - * Strictly speaking, this reports the fixup destination, not - * the fault location, and not the actually overflowing - * instruction, which is the instruction before the "js", but - * since that instruction could be a variety of lengths, just - * report the location after the overflow, which should be close - * enough for finding the overflow, as it's at least back in - * the function, having returned from .text.unlikely. - */ - regs->ip = ex_fixup_addr(fixup); - - /* - * This function has been called because either a negative refcount - * value was seen by any of the refcount functions, or a zero - * refcount value was seen by refcount_dec(). - * - * If we crossed from INT_MAX to INT_MIN, OF (Overflow Flag: result - * wrapped around) will be set. Additionally, seeing the refcount - * reach 0 will set ZF (Zero Flag: result was zero). In each of - * these cases we want a report, since it's a boundary condition. - * The SF case is not reported since it indicates post-boundary - * manipulations below zero or above INT_MAX. And if none of the - * flags are set, something has gone very wrong, so report it. - */ - if (regs->flags & (X86_EFLAGS_OF | X86_EFLAGS_ZF)) { - bool zero = regs->flags & X86_EFLAGS_ZF; - - refcount_error_report(regs, zero ? "hit zero" : "overflow"); - } else if ((regs->flags & X86_EFLAGS_SF) == 0) { - /* Report if none of OF, ZF, nor SF are set. */ - refcount_error_report(regs, "unexpected saturation"); - } - - return true; -} -EXPORT_SYMBOL(ex_handler_refcount); - /* * Handler for when we fail to restore a task's FPU state. We should never get * here because the FPU state of a task using the FPU (task->thread.fpu.state) diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug index 41c8e39a73ba..e4f03fcb125e 100644 --- a/drivers/gpu/drm/i915/Kconfig.debug +++ b/drivers/gpu/drm/i915/Kconfig.debug @@ -21,7 +21,6 @@ config DRM_I915_DEBUG depends on DRM_I915 select DEBUG_FS select PREEMPT_COUNT - select REFCOUNT_FULL select I2C_CHARDEV select STACKDEPOT select DRM_DP_AUX_CHARDEV diff --git a/include/linux/refcount.h b/include/linux/refcount.h index 757d4630115c..0ac50cf62d06 100644 --- a/include/linux/refcount.h +++ b/include/linux/refcount.h @@ -1,64 +1,4 @@ /* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _LINUX_REFCOUNT_H -#define _LINUX_REFCOUNT_H - -#include <linux/atomic.h> -#include <linux/compiler.h> -#include <linux/limits.h> -#include <linux/spinlock_types.h> - -struct mutex; - -/** - * struct refcount_t - variant of atomic_t specialized for reference counts - * @refs: atomic_t counter field - * - * The counter saturates at REFCOUNT_SATURATED and will not move once - * there. This avoids wrapping the counter and causing 'spurious' - * use-after-free bugs. 
- */ -typedef struct refcount_struct { - atomic_t refs; -} refcount_t; - -#define REFCOUNT_INIT(n) { .refs = ATOMIC_INIT(n), } -#define REFCOUNT_MAX INT_MAX -#define REFCOUNT_SATURATED (INT_MIN / 2) - -enum refcount_saturation_type { - REFCOUNT_ADD_NOT_ZERO_OVF, - REFCOUNT_ADD_OVF, - REFCOUNT_ADD_UAF, - REFCOUNT_SUB_UAF, - REFCOUNT_DEC_LEAK, -}; - -void refcount_warn_saturate(refcount_t *r, enum refcount_saturation_type t); - -/** - * refcount_set - set a refcount's value - * @r: the refcount - * @n: value to which the refcount will be set - */ -static inline void refcount_set(refcount_t *r, int n) -{ - atomic_set(&r->refs, n); -} - -/** - * refcount_read - get a refcount's value - * @r: the refcount - * - * Return: the refcount's value - */ -static inline unsigned int refcount_read(const refcount_t *r) -{ - return atomic_read(&r->refs); -} - -#ifdef CONFIG_REFCOUNT_FULL -#include <linux/bug.h> - /* * Variant of atomic_t specialized for reference counts. * @@ -136,6 +76,64 @@ static inline unsigned int refcount_read(const refcount_t *r) * */
+#ifndef _LINUX_REFCOUNT_H +#define _LINUX_REFCOUNT_H + +#include <linux/atomic.h> +#include <linux/bug.h> +#include <linux/compiler.h> +#include <linux/limits.h> +#include <linux/spinlock_types.h> + +struct mutex; + +/** + * struct refcount_t - variant of atomic_t specialized for reference counts + * @refs: atomic_t counter field + * + * The counter saturates at REFCOUNT_SATURATED and will not move once + * there. This avoids wrapping the counter and causing 'spurious' + * use-after-free bugs. + */ +typedef struct refcount_struct { + atomic_t refs; +} refcount_t; + +#define REFCOUNT_INIT(n) { .refs = ATOMIC_INIT(n), } +#define REFCOUNT_MAX INT_MAX +#define REFCOUNT_SATURATED (INT_MIN / 2) + +enum refcount_saturation_type { + REFCOUNT_ADD_NOT_ZERO_OVF, + REFCOUNT_ADD_OVF, + REFCOUNT_ADD_UAF, + REFCOUNT_SUB_UAF, + REFCOUNT_DEC_LEAK, +}; + +void refcount_warn_saturate(refcount_t *r, enum refcount_saturation_type t); + +/** + * refcount_set - set a refcount's value + * @r: the refcount + * @n: value to which the refcount will be set + */ +static inline void refcount_set(refcount_t *r, int n) +{ + atomic_set(&r->refs, n); +} + +/** + * refcount_read - get a refcount's value + * @r: the refcount + * + * Return: the refcount's value + */ +static inline unsigned int refcount_read(const refcount_t *r) +{ + return atomic_read(&r->refs); +} + /** * refcount_add_not_zero - add a value to a refcount unless it is 0 * @i: the value to add to the refcount @@ -298,46 +296,6 @@ static inline void refcount_dec(refcount_t *r) if (unlikely(atomic_fetch_sub_release(1, &r->refs) <= 1)) refcount_warn_saturate(r, REFCOUNT_DEC_LEAK); } -#else /* CONFIG_REFCOUNT_FULL */ -# ifdef CONFIG_ARCH_HAS_REFCOUNT -# include <asm/refcount.h> -# else -static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r) -{ - return atomic_add_unless(&r->refs, i, 0); -} - -static inline void refcount_add(int i, refcount_t *r) -{ - atomic_add(i, &r->refs); -} - -static inline __must_check bool refcount_inc_not_zero(refcount_t *r) -{ - return atomic_add_unless(&r->refs, 1, 0); -} - -static inline void refcount_inc(refcount_t *r) -{ - atomic_inc(&r->refs); -} - -static inline __must_check bool refcount_sub_and_test(int i, refcount_t *r) -{ - return atomic_sub_and_test(i, &r->refs); -} - -static inline __must_check bool refcount_dec_and_test(refcount_t *r) -{ - return atomic_dec_and_test(&r->refs); -} - -static inline void refcount_dec(refcount_t *r) -{ - atomic_dec(&r->refs); -} -# endif /* !CONFIG_ARCH_HAS_REFCOUNT */ -#endif /* !CONFIG_REFCOUNT_FULL */
extern __must_check bool refcount_dec_if_one(refcount_t *r); extern __must_check bool refcount_dec_not_one(refcount_t *r); diff --git a/lib/refcount.c b/lib/refcount.c index 8b7e249c0e10..ebac8b7d15a7 100644 --- a/lib/refcount.c +++ b/lib/refcount.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 /* - * Out-of-line refcount functions common to all refcount implementations. + * Out-of-line refcount functions. */
#include <linux/mutex.h>
From: Al Viro viro@zeniv.linux.org.uk
[ Upstream commit 4b842e4e25b12951fa10dedb4bc16bc47e3b850c ]
Very few call sites that would trigger those cases remain, and none of them is anywhere near hot enough to bother.
Signed-off-by: Al Viro viro@zeniv.linux.org.uk
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/x86/include/asm/uaccess.h | 12 ----
 arch/x86/include/asm/uaccess_32.h | 27 --------
 arch/x86/include/asm/uaccess_64.h | 108 +-----------------------------
 3 files changed, 2 insertions(+), 145 deletions(-)
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h index 61d93f062a36..a19effb98fdc 100644 --- a/arch/x86/include/asm/uaccess.h +++ b/arch/x86/include/asm/uaccess.h @@ -378,18 +378,6 @@ do { \ : "=r" (err), ltype(x) \ : "m" (__m(addr)), "i" (errret), "0" (err))
-#define __get_user_asm_nozero(x, addr, err, itype, rtype, ltype, errret) \ - asm volatile("\n" \ - "1: mov"itype" %2,%"rtype"1\n" \ - "2:\n" \ - ".section .fixup,"ax"\n" \ - "3: mov %3,%0\n" \ - " jmp 2b\n" \ - ".previous\n" \ - _ASM_EXTABLE_UA(1b, 3b) \ - : "=r" (err), ltype(x) \ - : "m" (__m(addr)), "i" (errret), "0" (err)) - /* * This doesn't do __uaccess_begin/end - the exception handling * around it must do that. diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h index ba2dc1930630..388a40660c7b 100644 --- a/arch/x86/include/asm/uaccess_32.h +++ b/arch/x86/include/asm/uaccess_32.h @@ -23,33 +23,6 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n) static __always_inline unsigned long raw_copy_from_user(void *to, const void __user *from, unsigned long n) { - if (__builtin_constant_p(n)) { - unsigned long ret; - - switch (n) { - case 1: - ret = 0; - __uaccess_begin_nospec(); - __get_user_asm_nozero(*(u8 *)to, from, ret, - "b", "b", "=q", 1); - __uaccess_end(); - return ret; - case 2: - ret = 0; - __uaccess_begin_nospec(); - __get_user_asm_nozero(*(u16 *)to, from, ret, - "w", "w", "=r", 2); - __uaccess_end(); - return ret; - case 4: - ret = 0; - __uaccess_begin_nospec(); - __get_user_asm_nozero(*(u32 *)to, from, ret, - "l", "k", "=r", 4); - __uaccess_end(); - return ret; - } - } return __copy_user_ll(to, (__force const void *)from, n); }
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h index 5cd1caa8bc65..bc10e3dc64fe 100644 --- a/arch/x86/include/asm/uaccess_64.h +++ b/arch/x86/include/asm/uaccess_64.h @@ -65,117 +65,13 @@ copy_to_user_mcsafe(void *to, const void *from, unsigned len) static __always_inline __must_check unsigned long raw_copy_from_user(void *dst, const void __user *src, unsigned long size) { - int ret = 0; - - if (!__builtin_constant_p(size)) - return copy_user_generic(dst, (__force void *)src, size); - switch (size) { - case 1: - __uaccess_begin_nospec(); - __get_user_asm_nozero(*(u8 *)dst, (u8 __user *)src, - ret, "b", "b", "=q", 1); - __uaccess_end(); - return ret; - case 2: - __uaccess_begin_nospec(); - __get_user_asm_nozero(*(u16 *)dst, (u16 __user *)src, - ret, "w", "w", "=r", 2); - __uaccess_end(); - return ret; - case 4: - __uaccess_begin_nospec(); - __get_user_asm_nozero(*(u32 *)dst, (u32 __user *)src, - ret, "l", "k", "=r", 4); - __uaccess_end(); - return ret; - case 8: - __uaccess_begin_nospec(); - __get_user_asm_nozero(*(u64 *)dst, (u64 __user *)src, - ret, "q", "", "=r", 8); - __uaccess_end(); - return ret; - case 10: - __uaccess_begin_nospec(); - __get_user_asm_nozero(*(u64 *)dst, (u64 __user *)src, - ret, "q", "", "=r", 10); - if (likely(!ret)) - __get_user_asm_nozero(*(u16 *)(8 + (char *)dst), - (u16 __user *)(8 + (char __user *)src), - ret, "w", "w", "=r", 2); - __uaccess_end(); - return ret; - case 16: - __uaccess_begin_nospec(); - __get_user_asm_nozero(*(u64 *)dst, (u64 __user *)src, - ret, "q", "", "=r", 16); - if (likely(!ret)) - __get_user_asm_nozero(*(u64 *)(8 + (char *)dst), - (u64 __user *)(8 + (char __user *)src), - ret, "q", "", "=r", 8); - __uaccess_end(); - return ret; - default: - return copy_user_generic(dst, (__force void *)src, size); - } + return copy_user_generic(dst, (__force void *)src, size); }
static __always_inline __must_check unsigned long raw_copy_to_user(void __user *dst, const void *src, unsigned long size) { - int ret = 0; - - if (!__builtin_constant_p(size)) - return copy_user_generic((__force void *)dst, src, size); - switch (size) { - case 1: - __uaccess_begin(); - __put_user_asm(*(u8 *)src, (u8 __user *)dst, - ret, "b", "b", "iq", 1); - __uaccess_end(); - return ret; - case 2: - __uaccess_begin(); - __put_user_asm(*(u16 *)src, (u16 __user *)dst, - ret, "w", "w", "ir", 2); - __uaccess_end(); - return ret; - case 4: - __uaccess_begin(); - __put_user_asm(*(u32 *)src, (u32 __user *)dst, - ret, "l", "k", "ir", 4); - __uaccess_end(); - return ret; - case 8: - __uaccess_begin(); - __put_user_asm(*(u64 *)src, (u64 __user *)dst, - ret, "q", "", "er", 8); - __uaccess_end(); - return ret; - case 10: - __uaccess_begin(); - __put_user_asm(*(u64 *)src, (u64 __user *)dst, - ret, "q", "", "er", 10); - if (likely(!ret)) { - asm("":::"memory"); - __put_user_asm(4[(u16 *)src], 4 + (u16 __user *)dst, - ret, "w", "w", "ir", 2); - } - __uaccess_end(); - return ret; - case 16: - __uaccess_begin(); - __put_user_asm(*(u64 *)src, (u64 __user *)dst, - ret, "q", "", "er", 16); - if (likely(!ret)) { - asm("":::"memory"); - __put_user_asm(1[(u64 *)src], 1 + (u64 __user *)dst, - ret, "q", "", "er", 8); - } - __uaccess_end(); - return ret; - default: - return copy_user_generic((__force void *)dst, src, size); - } + return copy_user_generic((__force void *)dst, src, size); }
static __always_inline __must_check
From: Peter Zijlstra peterz@infradead.org
[ Upstream commit 989b5db215a2f22f89d730b607b071d964780f10 ]
Add support for CMPXCHG loops on userspace addresses. Provide both an "unsafe" version for tight loops that do their own uaccess begin/end and a "safe" version for use cases where the CMPXCHG is not buried in a loop, e.g. KVM will resume the guest instead of looping when emulation of a guest atomic access fails the CMPXCHG.
Provide 8-byte versions for 32-bit kernels so that KVM can do CMPXCHG on guest PAE PTEs, which are accessed via userspace addresses.
Guard the asm_volatile_goto() variation with CC_HAS_ASM_GOTO_TIED_OUTPUT, since the "+m" constraint fails on some compilers that otherwise support CC_HAS_ASM_GOTO_OUTPUT.
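As a usage illustration, here is a hedged sketch of a tight-loop caller of the "unsafe" variant; the function, its name, and the OR-into-user-memory use case are hypothetical, only unsafe_try_cmpxchg_user() itself comes from this patch:

    /* Hypothetical caller: atomically OR bits into a user-mapped word. */
    static int or_into_user_word(u32 __user *uaddr, u32 bits)
    {
            u32 val;

            if (!user_access_begin(uaddr, sizeof(*uaddr)))
                    return -EFAULT;
            unsafe_get_user(val, uaddr, fault);
            /* On compare failure, val is refreshed and we retry. */
            while (!unsafe_try_cmpxchg_user(uaddr, &val, val | bits, fault))
                    ;
            user_access_end();
            return 0;
    fault:
            user_access_end();
            return -EFAULT;
    }

The "safe" __try_cmpxchg_user() wrapper below supplies the uaccess begin/end and the fault label itself, so a non-looping caller can simply propagate its 0/1/-EFAULT result.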
Cc: stable@vger.kernel.org
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org
Co-developed-by: Sean Christopherson seanjc@google.com
Signed-off-by: Sean Christopherson seanjc@google.com
Message-Id: 20220202004945.2540433-3-seanjc@google.com
Signed-off-by: Paolo Bonzini pbonzini@redhat.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/x86/include/asm/uaccess.h | 142 +++++++++++++++++++++++++++++++++
 1 file changed, 142 insertions(+)
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h index a19effb98fdc..865795e2355e 100644 --- a/arch/x86/include/asm/uaccess.h +++ b/arch/x86/include/asm/uaccess.h @@ -441,6 +441,103 @@ __pu_label: \ __builtin_expect(__gu_err, 0); \ })
+#ifdef CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT +#define __try_cmpxchg_user_asm(itype, ltype, _ptr, _pold, _new, label) ({ \ + bool success; \ + __typeof__(_ptr) _old = (__typeof__(_ptr))(_pold); \ + __typeof__(*(_ptr)) __old = *_old; \ + __typeof__(*(_ptr)) __new = (_new); \ + asm_volatile_goto("\n" \ + "1: " LOCK_PREFIX "cmpxchg"itype" %[new], %[ptr]\n"\ + _ASM_EXTABLE_UA(1b, %l[label]) \ + : CC_OUT(z) (success), \ + [ptr] "+m" (*_ptr), \ + [old] "+a" (__old) \ + : [new] ltype (__new) \ + : "memory" \ + : label); \ + if (unlikely(!success)) \ + *_old = __old; \ + likely(success); }) + +#ifdef CONFIG_X86_32 +#define __try_cmpxchg64_user_asm(_ptr, _pold, _new, label) ({ \ + bool success; \ + __typeof__(_ptr) _old = (__typeof__(_ptr))(_pold); \ + __typeof__(*(_ptr)) __old = *_old; \ + __typeof__(*(_ptr)) __new = (_new); \ + asm_volatile_goto("\n" \ + "1: " LOCK_PREFIX "cmpxchg8b %[ptr]\n" \ + _ASM_EXTABLE_UA(1b, %l[label]) \ + : CC_OUT(z) (success), \ + "+A" (__old), \ + [ptr] "+m" (*_ptr) \ + : "b" ((u32)__new), \ + "c" ((u32)((u64)__new >> 32)) \ + : "memory" \ + : label); \ + if (unlikely(!success)) \ + *_old = __old; \ + likely(success); }) +#endif // CONFIG_X86_32 +#else // !CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT +#define __try_cmpxchg_user_asm(itype, ltype, _ptr, _pold, _new, label) ({ \ + int __err = 0; \ + bool success; \ + __typeof__(_ptr) _old = (__typeof__(_ptr))(_pold); \ + __typeof__(*(_ptr)) __old = *_old; \ + __typeof__(*(_ptr)) __new = (_new); \ + asm volatile("\n" \ + "1: " LOCK_PREFIX "cmpxchg"itype" %[new], %[ptr]\n"\ + CC_SET(z) \ + "2:\n" \ + _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, \ + %[errout]) \ + : CC_OUT(z) (success), \ + [errout] "+r" (__err), \ + [ptr] "+m" (*_ptr), \ + [old] "+a" (__old) \ + : [new] ltype (__new) \ + : "memory", "cc"); \ + if (unlikely(__err)) \ + goto label; \ + if (unlikely(!success)) \ + *_old = __old; \ + likely(success); }) + +#ifdef CONFIG_X86_32 +/* + * Unlike the normal CMPXCHG, hardcode ECX for both success/fail and error. + * There are only six GPRs available and four (EAX, EBX, ECX, and EDX) are + * hardcoded by CMPXCHG8B, leaving only ESI and EDI. If the compiler uses + * both ESI and EDI for the memory operand, compilation will fail if the error + * is an input+output as there will be no register available for input. + */ +#define __try_cmpxchg64_user_asm(_ptr, _pold, _new, label) ({ \ + int __result; \ + __typeof__(_ptr) _old = (__typeof__(_ptr))(_pold); \ + __typeof__(*(_ptr)) __old = *_old; \ + __typeof__(*(_ptr)) __new = (_new); \ + asm volatile("\n" \ + "1: " LOCK_PREFIX "cmpxchg8b %[ptr]\n" \ + "mov $0, %%ecx\n\t" \ + "setz %%cl\n" \ + "2:\n" \ + _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %%ecx) \ + : [result]"=c" (__result), \ + "+A" (__old), \ + [ptr] "+m" (*_ptr) \ + : "b" ((u32)__new), \ + "c" ((u32)((u64)__new >> 32)) \ + : "memory", "cc"); \ + if (unlikely(__result < 0)) \ + goto label; \ + if (unlikely(!__result)) \ + *_old = __old; \ + likely(__result); }) +#endif // CONFIG_X86_32 +#endif // CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT + /* FIXME: this hack is definitely wrong -AK */ struct __large_struct { unsigned long buf[100]; }; #define __m(x) (*(struct __large_struct __user *)(x)) @@ -722,6 +819,51 @@ do { \ if (unlikely(__gu_err)) goto err_label; \ } while (0)
+extern void __try_cmpxchg_user_wrong_size(void); + +#ifndef CONFIG_X86_32 +#define __try_cmpxchg64_user_asm(_ptr, _oldp, _nval, _label) \ + __try_cmpxchg_user_asm("q", "r", (_ptr), (_oldp), (_nval), _label) +#endif + +/* + * Force the pointer to u<size> to match the size expected by the asm helper. + * clang/LLVM compiles all cases and only discards the unused paths after + * processing errors, which breaks i386 if the pointer is an 8-byte value. + */ +#define unsafe_try_cmpxchg_user(_ptr, _oldp, _nval, _label) ({ \ + bool __ret; \ + __chk_user_ptr(_ptr); \ + switch (sizeof(*(_ptr))) { \ + case 1: __ret = __try_cmpxchg_user_asm("b", "q", \ + (__force u8 *)(_ptr), (_oldp), \ + (_nval), _label); \ + break; \ + case 2: __ret = __try_cmpxchg_user_asm("w", "r", \ + (__force u16 *)(_ptr), (_oldp), \ + (_nval), _label); \ + break; \ + case 4: __ret = __try_cmpxchg_user_asm("l", "r", \ + (__force u32 *)(_ptr), (_oldp), \ + (_nval), _label); \ + break; \ + case 8: __ret = __try_cmpxchg64_user_asm((__force u64 *)(_ptr), (_oldp),\ + (_nval), _label); \ + break; \ + default: __try_cmpxchg_user_wrong_size(); \ + } \ + __ret; }) + +/* "Returns" 0 on success, 1 on failure, -EFAULT if the access faults. */ +#define __try_cmpxchg_user(_ptr, _oldp, _nval, _label) ({ \ + int __ret = -EFAULT; \ + __uaccess_begin_nospec(); \ + __ret = !unsafe_try_cmpxchg_user(_ptr, _oldp, _nval, _label); \ +_label: \ + __uaccess_end(); \ + __ret; \ + }) + /* * We want the unsafe accessors to always be inlined and use * the error labels - thus the macro games.
From: Michel Lespinasse walken@google.com
[ Upstream commit 9740ca4e95b43b91a4a848694a20d01ba6818f7b ]
This patch series adds a new mmap locking API replacing the existing mmap_sem lock and unlock calls. Initially the API is just implemented in terms of inlined rwsem calls, so it doesn't provide any new functionality.
There are two justifications for the new API:
- At first, it provides an easy hooking point to instrument mmap_sem locking latencies independently of any other rwsems.
- In the future, it may be a starting point for replacing the rwsem implementation with a different one, such as range locks. This is something that is being explored, even though there is no wide consensus about this possible direction yet. (see https://patchwork.kernel.org/cover/11401483/)
This patch (of 12):
This change wraps the existing mmap_sem related rwsem calls into a new mmap locking API. There are two justifications for the new API:
- At first, it provides an easy hooking point to instrument mmap_sem locking latencies independently of any other rwsems.
- In the future, it may be a starting point for replacing the rwsem implementation with a different one, such as range locks.
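A hedged sketch of what the mechanical conversion looks like at a call site; the lookup function is illustrative, not part of this patch:

    static struct vm_area_struct *lookup_vma(struct mm_struct *mm,
                                             unsigned long addr)
    {
            struct vm_area_struct *vma;

            mmap_read_lock(mm);             /* was: down_read(&mm->mmap_sem); */
            vma = find_vma(mm, addr);
            mmap_read_unlock(mm);           /* was: up_read(&mm->mmap_sem); */

            /* Illustrative only: real code must not use vma after unlock. */
            return vma;
    }

Because each wrapper expands to exactly the rwsem call it replaces, the conversion is behaviour-preserving and can be done mechanically across the tree.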
Signed-off-by: Michel Lespinasse walken@google.com
Signed-off-by: Andrew Morton akpm@linux-foundation.org
Reviewed-by: Daniel Jordan daniel.m.jordan@oracle.com
Reviewed-by: Davidlohr Bueso dbueso@suse.de
Reviewed-by: Laurent Dufour ldufour@linux.ibm.com
Reviewed-by: Vlastimil Babka vbabka@suse.cz
Cc: Peter Zijlstra peterz@infradead.org
Cc: Matthew Wilcox willy@infradead.org
Cc: Liam Howlett Liam.Howlett@oracle.com
Cc: Jerome Glisse jglisse@redhat.com
Cc: David Rientjes rientjes@google.com
Cc: Hugh Dickins hughd@google.com
Cc: Ying Han yinghan@google.com
Cc: Jason Gunthorpe jgg@ziepe.ca
Cc: John Hubbard jhubbard@nvidia.com
Cc: Michel Lespinasse walken@google.com
Link: http://lkml.kernel.org/r/20200520052908.204642-1-walken@google.com
Link: http://lkml.kernel.org/r/20200520052908.204642-2-walken@google.com
Signed-off-by: Linus Torvalds torvalds@linux-foundation.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 include/linux/mm.h | 1 +
 include/linux/mmap_lock.h | 54 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 55 insertions(+)
 create mode 100644 include/linux/mmap_lock.h
diff --git a/include/linux/mm.h b/include/linux/mm.h index c125fea49752..d35c29d322d8 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -15,6 +15,7 @@ #include <linux/atomic.h> #include <linux/debug_locks.h> #include <linux/mm_types.h> +#include <linux/mmap_lock.h> #include <linux/range.h> #include <linux/pfn.h> #include <linux/percpu-refcount.h> diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h new file mode 100644 index 000000000000..97ac53b66052 --- /dev/null +++ b/include/linux/mmap_lock.h @@ -0,0 +1,54 @@ +#ifndef _LINUX_MMAP_LOCK_H +#define _LINUX_MMAP_LOCK_H + +static inline void mmap_init_lock(struct mm_struct *mm) +{ + init_rwsem(&mm->mmap_sem); +} + +static inline void mmap_write_lock(struct mm_struct *mm) +{ + down_write(&mm->mmap_sem); +} + +static inline int mmap_write_lock_killable(struct mm_struct *mm) +{ + return down_write_killable(&mm->mmap_sem); +} + +static inline bool mmap_write_trylock(struct mm_struct *mm) +{ + return down_write_trylock(&mm->mmap_sem) != 0; +} + +static inline void mmap_write_unlock(struct mm_struct *mm) +{ + up_write(&mm->mmap_sem); +} + +static inline void mmap_write_downgrade(struct mm_struct *mm) +{ + downgrade_write(&mm->mmap_sem); +} + +static inline void mmap_read_lock(struct mm_struct *mm) +{ + down_read(&mm->mmap_sem); +} + +static inline int mmap_read_lock_killable(struct mm_struct *mm) +{ + return down_read_killable(&mm->mmap_sem); +} + +static inline bool mmap_read_trylock(struct mm_struct *mm) +{ + return down_read_trylock(&mm->mmap_sem) != 0; +} + +static inline void mmap_read_unlock(struct mm_struct *mm) +{ + up_read(&mm->mmap_sem); +} + +#endif /* _LINUX_MMAP_LOCK_H */
From: Thomas Gleixner tglx@linutronix.de
[ Upstream commit e42404afc4ca856c48f1e05752541faa3587c472 ]
Prepare code for further simplification. No functional change.
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov bp@suse.de Link: https://lkml.kernel.org/r/20210908132525.096452100@linutronix.de Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kernel/cpu/mce/core.c | 34 +++++++++++++++++----------------- 1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index 8a2b8e791314..9b98a7d8ac60 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -397,13 +397,16 @@ static int msr_to_offset(u32 msr) return -1; }
-__visible bool ex_handler_rdmsr_fault(const struct exception_table_entry *fixup, - struct pt_regs *regs, int trapnr, - unsigned long error_code, - unsigned long fault_addr) +static void ex_handler_msr_mce(struct pt_regs *regs, bool wrmsr) { - pr_emerg("MSR access error: RDMSR from 0x%x at rIP: 0x%lx (%pS)\n", - (unsigned int)regs->cx, regs->ip, (void *)regs->ip); + if (wrmsr) { + pr_emerg("MSR access error: WRMSR to 0x%x (tried to write 0x%08x%08x) at rIP: 0x%lx (%pS)\n", + (unsigned int)regs->cx, (unsigned int)regs->dx, (unsigned int)regs->ax, + regs->ip, (void *)regs->ip); + } else { + pr_emerg("MSR access error: RDMSR from 0x%x at rIP: 0x%lx (%pS)\n", + (unsigned int)regs->cx, regs->ip, (void *)regs->ip); + }
show_stack_regs(regs);
@@ -411,7 +414,14 @@ __visible bool ex_handler_rdmsr_fault(const struct exception_table_entry *fixup,
while (true) cpu_relax(); +}
+__visible bool ex_handler_rdmsr_fault(const struct exception_table_entry *fixup, + struct pt_regs *regs, int trapnr, + unsigned long error_code, + unsigned long fault_addr) +{ + ex_handler_msr_mce(regs, false); return true; }
@@ -447,17 +457,7 @@ __visible bool ex_handler_wrmsr_fault(const struct exception_table_entry *fixup, unsigned long error_code, unsigned long fault_addr) { - pr_emerg("MSR access error: WRMSR to 0x%x (tried to write 0x%08x%08x) at rIP: 0x%lx (%pS)\n", - (unsigned int)regs->cx, (unsigned int)regs->dx, (unsigned int)regs->ax, - regs->ip, (void *)regs->ip); - - show_stack_regs(regs); - - panic("MCA architectural violation!\n"); - - while (true) - cpu_relax(); - + ex_handler_msr_mce(regs, true); return true; }
From: Peter Zijlstra peterz@infradead.org
[ Upstream commit bff8c3848e071d387d8b0784dc91fa49cd563774 ]
The test 'mask > (typeof(_reg))~0ull' only works correctly when both sides are unsigned. Consider the following cases, demonstrated in the sketch after the list:
- 0xff000000 vs (int)~0ull
- 0x000000ff vs (int)~0ull
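A stand-alone sketch (plain user-space C, not the kernel macro itself; values chosen for illustration) shows the failure mode and the fix:

#include <stdio.h>

int main(void)
{
	int reg = 0;					/* 32-bit signed "register" type */
	unsigned long long mask = 0xff00000000ull;	/* needs 40 bits */

	/* Old test: (int)~0ull is -1, which the usual arithmetic
	 * conversions turn back into 0xffffffffffffffff, so the
	 * "type of reg too small for mask" check never fires. */
	printf("old test fires: %d\n", mask > (__typeof__(reg))~0ull);	/* 0 */

	/* Fixed test: compare against the unsigned counterpart of the
	 * register type, which is what __bf_cast_unsigned() computes. */
	printf("new test fires: %d\n", mask > (unsigned int)~0ull);	/* 1 */
	return 0;
}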
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Reviewed-by: Josh Poimboeuf jpoimboe@redhat.com Link: https://lore.kernel.org/r/20211110101324.950210584@infradead.org Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/bitfield.h | 19 ++++++++++++++++++- 1 file changed, 18 insertions(+), 1 deletion(-)
diff --git a/include/linux/bitfield.h b/include/linux/bitfield.h index 4c0224ff0a14..4f1c0f8e1bb0 100644 --- a/include/linux/bitfield.h +++ b/include/linux/bitfield.h @@ -41,6 +41,22 @@
#define __bf_shf(x) (__builtin_ffsll(x) - 1)
+#define __scalar_type_to_unsigned_cases(type) \ + unsigned type: (unsigned type)0, \ + signed type: (unsigned type)0 + +#define __unsigned_scalar_typeof(x) typeof( \ + _Generic((x), \ + char: (unsigned char)0, \ + __scalar_type_to_unsigned_cases(char), \ + __scalar_type_to_unsigned_cases(short), \ + __scalar_type_to_unsigned_cases(int), \ + __scalar_type_to_unsigned_cases(long), \ + __scalar_type_to_unsigned_cases(long long), \ + default: (x))) + +#define __bf_cast_unsigned(type, x) ((__unsigned_scalar_typeof(type))(x)) + #define __BF_FIELD_CHECK(_mask, _reg, _val, _pfx) \ ({ \ BUILD_BUG_ON_MSG(!__builtin_constant_p(_mask), \ @@ -49,7 +65,8 @@ BUILD_BUG_ON_MSG(__builtin_constant_p(_val) ? \ ~((_mask) >> __bf_shf(_mask)) & (_val) : 0, \ _pfx "value too large for the field"); \ - BUILD_BUG_ON_MSG((_mask) > (typeof(_reg))~0ull, \ + BUILD_BUG_ON_MSG(__bf_cast_unsigned(_mask, _mask) > \ + __bf_cast_unsigned(_reg, ~0ull), \ _pfx "type of reg too small for mask"); \ __BUILD_BUG_ON_NOT_POWER_OF_2((_mask) + \ (1ULL << __bf_shf(_mask))); \
From: Takashi Iwai tiwai@suse.de
commit 5c1733e33c888a3cb7f576564d8ad543d5ad4a9e upstream.
Currently the standard memory allocator (snd_dma_malloc_pages*()) passes the byte size to allocate as-is. Most of the backends allocate real pages, hence the actual allocations are aligned to the page size. However, genalloc does not appear to guarantee size alignment, which may result in access outside the buffer when whole memory pages are exposed via mmap.
To avoid such inconsistencies, this patch always aligns the allocation size to the page size.
Note that, after this change, the snd_dma_buffer.bytes field contains the aligned size, not the originally requested size. This value is also used when releasing the pages.
Reviewed-by: Lars-Peter Clausen lars@metafoo.de Link: https://lore.kernel.org/r/20201218145625.2045-2-tiwai@suse.de Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/core/memalloc.c | 1 + 1 file changed, 1 insertion(+)
--- a/sound/core/memalloc.c +++ b/sound/core/memalloc.c @@ -124,6 +124,7 @@ int snd_dma_alloc_pages(int type, struct if (WARN_ON(!device)) return -EINVAL;
+ size = PAGE_ALIGN(size); dmab->dev.type = type; dmab->dev.dev = device; dmab->bytes = 0;
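As a hedged illustration of the rounding (assuming 4 KiB pages; PAGE_ALIGN() is the standard kernel macro):

	size_t a = PAGE_ALIGN(1);	/* 4096: rounded up to one page */
	size_t b = PAGE_ALIGN(4096);	/* 4096: already aligned */
	size_t c = PAGE_ALIGN(4097);	/* 8192: spills into a second page */

Because dmab->bytes now stores the aligned value, the same size is seen at allocation and at release time, keeping the two paths symmetric.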
From: Luiz Augusto von Dentz luiz.von.dentz@intel.com
commit 38f64f650dc0e44c146ff88d15a7339efa325918 upstream.
bt_skb_sendmsg takes care of allocating the skb and copying the contents of msg over to it while checking for possible errors, so it should be safe to call it without holding lock_sock.
Signed-off-by: Luiz Augusto von Dentz luiz.von.dentz@intel.com Signed-off-by: Marcel Holtmann marcel@holtmann.org Cc: Harshit Mogalapalli harshit.m.mogalapalli@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/bluetooth/bluetooth.h | 28 ++++++++++++++++++++++++++++ 1 file changed, 28 insertions(+)
--- a/include/net/bluetooth/bluetooth.h +++ b/include/net/bluetooth/bluetooth.h @@ -370,6 +370,34 @@ out: return NULL; }
+/* Shall not be called with lock_sock held */ +static inline struct sk_buff *bt_skb_sendmsg(struct sock *sk, + struct msghdr *msg, + size_t len, size_t mtu, + size_t headroom, size_t tailroom) +{ + struct sk_buff *skb; + size_t size = min_t(size_t, len, mtu); + int err; + + skb = bt_skb_send_alloc(sk, size + headroom + tailroom, + msg->msg_flags & MSG_DONTWAIT, &err); + if (!skb) + return ERR_PTR(err); + + skb_reserve(skb, headroom); + skb_tailroom_reserve(skb, mtu, tailroom); + + if (!copy_from_iter_full(skb_put(skb, size), size, &msg->msg_iter)) { + kfree_skb(skb); + return ERR_PTR(-EFAULT); + } + + skb->priority = sk->sk_priority; + + return skb; +} + int bt_to_errno(u16 code);
void hci_sock_set_flag(struct sock *sk, int nr);
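A minimal usage sketch of the new helper (example_proto_sendmsg() is hypothetical; the real conversions follow later in this series):

static int example_proto_sendmsg(struct sock *sk, struct msghdr *msg,
				 size_t len, size_t mtu)
{
	struct sk_buff *skb;

	/* allocation and copy happen before lock_sock() is taken */
	skb = bt_skb_sendmsg(sk, msg, len, mtu, 0, 0);
	if (IS_ERR(skb))
		return PTR_ERR(skb);

	/* hand the skb to the transport here, under lock_sock() if needed */
	return skb->len;
}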
From: Luiz Augusto von Dentz luiz.von.dentz@intel.com
commit 97e4e80299844bb5f6ce5a7540742ffbffae3d97 upstream.
This works similarly to bt_skb_sendmsg but can split the msg into multiple skb fragments, which is useful for stream sockets.
Signed-off-by: Luiz Augusto von Dentz luiz.von.dentz@intel.com Signed-off-by: Marcel Holtmann marcel@holtmann.org Cc: Harshit Mogalapalli harshit.m.mogalapalli@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/bluetooth/bluetooth.h | 38 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 38 insertions(+)
--- a/include/net/bluetooth/bluetooth.h +++ b/include/net/bluetooth/bluetooth.h @@ -398,6 +398,44 @@ static inline struct sk_buff *bt_skb_sen return skb; }
+/* Similar to bt_skb_sendmsg but can split the msg into multiple fragments + * accourding to the MTU. + */ +static inline struct sk_buff *bt_skb_sendmmsg(struct sock *sk, + struct msghdr *msg, + size_t len, size_t mtu, + size_t headroom, size_t tailroom) +{ + struct sk_buff *skb, **frag; + + skb = bt_skb_sendmsg(sk, msg, len, mtu, headroom, tailroom); + if (IS_ERR_OR_NULL(skb)) + return skb; + + len -= skb->len; + if (!len) + return skb; + + /* Add remaining data over MTU as continuation fragments */ + frag = &skb_shinfo(skb)->frag_list; + while (len) { + struct sk_buff *tmp; + + tmp = bt_skb_sendmsg(sk, msg, len, mtu, headroom, tailroom); + if (IS_ERR_OR_NULL(tmp)) { + kfree_skb(skb); + return tmp; + } + + len -= tmp->len; + + *frag = tmp; + frag = &(*frag)->next; + } + + return skb; +} + int bt_to_errno(u16 code);
void hci_sock_set_flag(struct sock *sk, int nr);
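Sketch of consuming the result (assumption: the caller owns the skb and its fragment chain). bt_skb_sendmmsg() stores the continuation chunks in the frag_list of the first skb:

	struct sk_buff *frag;

	for (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->next)
		; /* each fragment carries at most mtu bytes of payload */

The RFCOMM conversion later in this series walks the list exactly this way when queueing the fragments.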
From: Luiz Augusto von Dentz luiz.von.dentz@intel.com
commit 0771cbb3b97d3c1d68eecd7f00055f599954c34e upstream.
This makes use of bt_skb_sendmsg instead of allocating a different buffer to be used with memcpy_from_msg, which causes one extra copy.
Signed-off-by: Luiz Augusto von Dentz luiz.von.dentz@intel.com Signed-off-by: Marcel Holtmann marcel@holtmann.org Cc: Harshit Mogalapalli harshit.m.mogalapalli@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/bluetooth/sco.c | 34 +++++++++++----------------------- 1 file changed, 11 insertions(+), 23 deletions(-)
--- a/net/bluetooth/sco.c +++ b/net/bluetooth/sco.c @@ -279,27 +279,19 @@ static int sco_connect(struct hci_dev *h return err; }
-static int sco_send_frame(struct sock *sk, void *buf, int len, - unsigned int msg_flags) +static int sco_send_frame(struct sock *sk, struct sk_buff *skb) { struct sco_conn *conn = sco_pi(sk)->conn; - struct sk_buff *skb; - int err;
/* Check outgoing MTU */ - if (len > conn->mtu) + if (skb->len > conn->mtu) return -EINVAL;
- BT_DBG("sk %p len %d", sk, len); - - skb = bt_skb_send_alloc(sk, len, msg_flags & MSG_DONTWAIT, &err); - if (!skb) - return err; + BT_DBG("sk %p len %d", sk, skb->len);
- memcpy(skb_put(skb, len), buf, len); hci_send_sco(conn->hcon, skb);
- return len; + return skb->len; }
static void sco_recv_frame(struct sco_conn *conn, struct sk_buff *skb) @@ -715,7 +707,7 @@ static int sco_sock_sendmsg(struct socke size_t len) { struct sock *sk = sock->sk; - void *buf; + struct sk_buff *skb; int err;
BT_DBG("sock %p, sk %p", sock, sk); @@ -727,24 +719,20 @@ static int sco_sock_sendmsg(struct socke if (msg->msg_flags & MSG_OOB) return -EOPNOTSUPP;
- buf = kmalloc(len, GFP_KERNEL); - if (!buf) - return -ENOMEM; - - if (memcpy_from_msg(buf, msg, len)) { - kfree(buf); - return -EFAULT; - } + skb = bt_skb_sendmsg(sk, msg, len, len, 0, 0); + if (IS_ERR_OR_NULL(skb)) + return PTR_ERR(skb);
lock_sock(sk);
if (sk->sk_state == BT_CONNECTED) - err = sco_send_frame(sk, buf, len, msg->msg_flags); + err = sco_send_frame(sk, skb); else err = -ENOTCONN;
release_sock(sk); - kfree(buf); + if (err) + kfree_skb(skb); return err; }
From: Luiz Augusto von Dentz luiz.von.dentz@intel.com
commit 81be03e026dc0c16dc1c64e088b2a53b73caa895 upstream.
This makes use of bt_skb_sendmmsg instead of memcpy_from_msg, which is not considered safe to use while lock_sock is held.
Also make rfcomm_dlc_send handle skbs with fragments and queue them all atomically.
Signed-off-by: Luiz Augusto von Dentz luiz.von.dentz@intel.com Signed-off-by: Marcel Holtmann marcel@holtmann.org Cc: Harshit Mogalapalli harshit.m.mogalapalli@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/bluetooth/rfcomm/core.c | 50 +++++++++++++++++++++++++++++++++++++------- net/bluetooth/rfcomm/sock.c | 50 ++++++++++---------------------------------- 2 files changed, 55 insertions(+), 45 deletions(-)
--- a/net/bluetooth/rfcomm/core.c +++ b/net/bluetooth/rfcomm/core.c @@ -553,22 +553,58 @@ struct rfcomm_dlc *rfcomm_dlc_exists(bda return dlc; }
+static int rfcomm_dlc_send_frag(struct rfcomm_dlc *d, struct sk_buff *frag) +{ + int len = frag->len; + + BT_DBG("dlc %p mtu %d len %d", d, d->mtu, len); + + if (len > d->mtu) + return -EINVAL; + + rfcomm_make_uih(frag, d->addr); + __skb_queue_tail(&d->tx_queue, frag); + + return len; +} + int rfcomm_dlc_send(struct rfcomm_dlc *d, struct sk_buff *skb) { - int len = skb->len; + unsigned long flags; + struct sk_buff *frag, *next; + int len;
if (d->state != BT_CONNECTED) return -ENOTCONN;
- BT_DBG("dlc %p mtu %d len %d", d, d->mtu, len); + frag = skb_shinfo(skb)->frag_list; + skb_shinfo(skb)->frag_list = NULL;
- if (len > d->mtu) - return -EINVAL; + /* Queue all fragments atomically. */ + spin_lock_irqsave(&d->tx_queue.lock, flags); + + len = rfcomm_dlc_send_frag(d, skb); + if (len < 0 || !frag) + goto unlock; + + for (; frag; frag = next) { + int ret; + + next = frag->next; + + ret = rfcomm_dlc_send_frag(d, frag); + if (ret < 0) { + kfree_skb(frag); + goto unlock; + } + + len += ret; + }
- rfcomm_make_uih(skb, d->addr); - skb_queue_tail(&d->tx_queue, skb); +unlock: + spin_unlock_irqrestore(&d->tx_queue.lock, flags);
- if (!test_bit(RFCOMM_TX_THROTTLED, &d->flags)) + if (len > 0 && !test_bit(RFCOMM_TX_THROTTLED, &d->flags)) rfcomm_schedule(); return len; } --- a/net/bluetooth/rfcomm/sock.c +++ b/net/bluetooth/rfcomm/sock.c @@ -578,47 +578,21 @@ static int rfcomm_sock_sendmsg(struct so lock_sock(sk);
sent = bt_sock_wait_ready(sk, msg->msg_flags); - if (sent) - goto done; - - while (len) { - size_t size = min_t(size_t, len, d->mtu); - int err; - - skb = sock_alloc_send_skb(sk, size + RFCOMM_SKB_RESERVE, - msg->msg_flags & MSG_DONTWAIT, &err); - if (!skb) { - if (sent == 0) - sent = err; - break; - } - skb_reserve(skb, RFCOMM_SKB_HEAD_RESERVE); - - err = memcpy_from_msg(skb_put(skb, size), msg, size); - if (err) { - kfree_skb(skb); - if (sent == 0) - sent = err; - break; - } - - skb->priority = sk->sk_priority; - - err = rfcomm_dlc_send(d, skb); - if (err < 0) { - kfree_skb(skb); - if (sent == 0) - sent = err; - break; - } - - sent += size; - len -= size; - }
-done: release_sock(sk);
+ if (sent) + return sent; + + skb = bt_skb_sendmmsg(sk, msg, len, d->mtu, RFCOMM_SKB_HEAD_RESERVE, + RFCOMM_SKB_TAIL_RESERVE); + if (IS_ERR_OR_NULL(skb)) + return PTR_ERR(skb); + + sent = rfcomm_dlc_send(d, skb); + if (sent < 0) + kfree_skb(skb); + return sent; }
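A design note, sketched: skb_queue_tail() takes the queue lock per call, so queueing N fragments with it would let other writers interleave their skbs mid-chain. Holding tx_queue.lock once around the unlocked __skb_queue_tail() calls, as above, makes the whole chain atomic:

	spin_lock_irqsave(&d->tx_queue.lock, flags);
	/* __skb_queue_tail(&d->tx_queue, ...) for the head and every fragment */
	spin_unlock_irqrestore(&d->tx_queue.lock, flags);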
From: Luiz Augusto von Dentz luiz.von.dentz@intel.com
commit 266191aa8d14b84958aaeb5e96ee4e97839e3d87 upstream.
Passing NULL to PTR_ERR will result in 0 (success). Also, since the likes of bt_skb_sendmsg never return NULL, it is safe to replace the instances of IS_ERR_OR_NULL with IS_ERR when checking its return value.
Reported-by: Dan Carpenter dan.carpenter@oracle.com Tested-by: Tedd Ho-Jeong An tedd.an@intel.com Signed-off-by: Luiz Augusto von Dentz luiz.von.dentz@intel.com Signed-off-by: Marcel Holtmann marcel@holtmann.org Cc: Harshit Mogalapalli harshit.m.mogalapalli@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/bluetooth/bluetooth.h | 2 +- net/bluetooth/rfcomm/sock.c | 2 +- net/bluetooth/sco.c | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-)
--- a/include/net/bluetooth/bluetooth.h +++ b/include/net/bluetooth/bluetooth.h @@ -422,7 +422,7 @@ static inline struct sk_buff *bt_skb_sen struct sk_buff *tmp;
tmp = bt_skb_sendmsg(sk, msg, len, mtu, headroom, tailroom); - if (IS_ERR_OR_NULL(tmp)) { + if (IS_ERR(tmp)) { kfree_skb(skb); return tmp; } --- a/net/bluetooth/rfcomm/sock.c +++ b/net/bluetooth/rfcomm/sock.c @@ -586,7 +586,7 @@ static int rfcomm_sock_sendmsg(struct so
skb = bt_skb_sendmmsg(sk, msg, len, d->mtu, RFCOMM_SKB_HEAD_RESERVE, RFCOMM_SKB_TAIL_RESERVE); - if (IS_ERR_OR_NULL(skb)) + if (IS_ERR(skb)) return PTR_ERR(skb);
sent = rfcomm_dlc_send(d, skb); --- a/net/bluetooth/sco.c +++ b/net/bluetooth/sco.c @@ -720,7 +720,7 @@ static int sco_sock_sendmsg(struct socke return -EOPNOTSUPP;
skb = bt_skb_sendmsg(sk, msg, len, len, 0, 0); - if (IS_ERR_OR_NULL(skb)) + if (IS_ERR(skb)) return PTR_ERR(skb);
lock_sock(sk);
From: Luiz Augusto von Dentz luiz.von.dentz@intel.com
commit 037ce005af6b8a3e40ee07c6e9266c8997e6a4d6 upstream.
The skb is modified by hci_send_sco(), which pushes SCO headers and thus changes skb->len, causing sco_sock_sendmsg() to fail.
Fixes: 0771cbb3b97d ("Bluetooth: SCO: Replace use of memcpy_from_msg with bt_skb_sendmsg") Tested-by: Tedd Ho-Jeong An tedd.an@intel.com Signed-off-by: Luiz Augusto von Dentz luiz.von.dentz@intel.com Signed-off-by: Marcel Holtmann marcel@holtmann.org Cc: Harshit Mogalapalli harshit.m.mogalapalli@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/bluetooth/sco.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-)
--- a/net/bluetooth/sco.c +++ b/net/bluetooth/sco.c @@ -282,16 +282,17 @@ static int sco_connect(struct hci_dev *h static int sco_send_frame(struct sock *sk, struct sk_buff *skb) { struct sco_conn *conn = sco_pi(sk)->conn; + int len = skb->len;
/* Check outgoing MTU */ - if (skb->len > conn->mtu) + if (len > conn->mtu) return -EINVAL;
- BT_DBG("sk %p len %d", sk, skb->len); + BT_DBG("sk %p len %d", sk, len);
hci_send_sco(conn->hcon, skb);
- return skb->len; + return len; }
static void sco_recv_frame(struct sco_conn *conn, struct sk_buff *skb) @@ -731,7 +732,8 @@ static int sco_sock_sendmsg(struct socke err = -ENOTCONN;
release_sock(sk); - if (err) + + if (err < 0) kfree_skb(skb); return err; }
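The underlying rule, as a short hedged sketch: snapshot any skb field needed for the return value before handing the skb to a layer that may modify or consume it:

	int len = skb->len;		/* snapshot before the hand-off */

	hci_send_sco(conn->hcon, skb);	/* may push headers / consume the skb */

	return len;			/* not skb->len: the skb is no longer ours */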
From: Luiz Augusto von Dentz luiz.von.dentz@intel.com
commit 29fb608396d6a62c1b85acc421ad7a4399085b9f upstream.
Since bt_skb_sendmmsg can be used with the likes of SOCK_STREAM, it shall return the partial chunks it could allocate instead of freeing everything, as otherwise it can cause problems like the one below.
Fixes: 81be03e026dc ("Bluetooth: RFCOMM: Replace use of memcpy_from_msg with bt_skb_sendmmsg") Reported-by: Paul Menzel pmenzel@molgen.mpg.de Link: https://lore.kernel.org/r/d7206e12-1b99-c3be-84f4-df22af427ef5@molgen.mpg.de BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=215594 Signed-off-by: Luiz Augusto von Dentz luiz.von.dentz@intel.com Tested-by: Paul Menzel pmenzel@molgen.mpg.de (Nokia N9 (MeeGo/Harmattan)) Signed-off-by: Marcel Holtmann marcel@holtmann.org Cc: Harshit Mogalapalli harshit.m.mogalapalli@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/net/bluetooth/bluetooth.h | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
--- a/include/net/bluetooth/bluetooth.h +++ b/include/net/bluetooth/bluetooth.h @@ -423,8 +423,7 @@ static inline struct sk_buff *bt_skb_sen
tmp = bt_skb_sendmsg(sk, msg, len, mtu, headroom, tailroom); if (IS_ERR(tmp)) { - kfree_skb(skb); - return tmp; + return skb; }
len -= tmp->len;
From: Jiri Slaby jslaby@suse.cz
commit 5f6a85158ccacc3f09744b3aafe8b11ab3b6c6f6 upstream.
Since commit a9c3f68f3cd8d (tty: Fix low_latency BUG) in 2014, tty_flip_buffer_push() is only a wrapper to tty_schedule_flip(). We are going to remove the latter (as it is used less), so call the former in drivers/tty/.
Cc: Vladimir Zapolskiy vz@mleia.com Reviewed-by: Johan Hovold johan@kernel.org Signed-off-by: Jiri Slaby jslaby@suse.cz Link: https://lore.kernel.org/r/20211122111648.30379-2-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/cyclades.c | 6 +++--- drivers/tty/goldfish.c | 2 +- drivers/tty/moxa.c | 4 ++-- drivers/tty/serial/lpc32xx_hs.c | 2 +- drivers/tty/vt/keyboard.c | 6 +++--- drivers/tty/vt/vt.c | 2 +- 6 files changed, 11 insertions(+), 11 deletions(-)
--- a/drivers/tty/cyclades.c +++ b/drivers/tty/cyclades.c @@ -556,7 +556,7 @@ static void cyy_chip_rx(struct cyclades_ } info->idle_stats.recv_idle = jiffies; } - tty_schedule_flip(port); + tty_flip_buffer_push(port);
/* end of service */ cyy_writeb(info, CyRIR, save_xir & 0x3f); @@ -996,7 +996,7 @@ static void cyz_handle_rx(struct cyclade mod_timer(&info->rx_full_timer, jiffies + 1); #endif info->idle_stats.recv_idle = jiffies; - tty_schedule_flip(&info->port); + tty_flip_buffer_push(&info->port);
/* Update rx_get */ cy_writel(&buf_ctrl->rx_get, new_rx_get); @@ -1172,7 +1172,7 @@ static void cyz_handle_cmd(struct cyclad if (delta_count) wake_up_interruptible(&info->port.delta_msr_wait); if (special_count) - tty_schedule_flip(&info->port); + tty_flip_buffer_push(&info->port); } }
--- a/drivers/tty/goldfish.c +++ b/drivers/tty/goldfish.c @@ -151,7 +151,7 @@ static irqreturn_t goldfish_tty_interrup address = (unsigned long)(void *)buf; goldfish_tty_rw(qtty, address, count, 0);
- tty_schedule_flip(&qtty->port); + tty_flip_buffer_push(&qtty->port); return IRQ_HANDLED; }
--- a/drivers/tty/moxa.c +++ b/drivers/tty/moxa.c @@ -1385,7 +1385,7 @@ static int moxa_poll_port(struct moxa_po if (inited && !tty_throttled(tty) && MoxaPortRxQueue(p) > 0) { /* RX */ MoxaPortReadData(p); - tty_schedule_flip(&p->port); + tty_flip_buffer_push(&p->port); } } else { clear_bit(EMPTYWAIT, &p->statusflags); @@ -1410,7 +1410,7 @@ static int moxa_poll_port(struct moxa_po
if (tty && (intr & IntrBreak) && !I_IGNBRK(tty)) { /* BREAK */ tty_insert_flip_char(&p->port, 0, TTY_BREAK); - tty_schedule_flip(&p->port); + tty_flip_buffer_push(&p->port); }
if (intr & IntrLine) --- a/drivers/tty/serial/lpc32xx_hs.c +++ b/drivers/tty/serial/lpc32xx_hs.c @@ -345,7 +345,7 @@ static irqreturn_t serial_lpc32xx_interr LPC32XX_HSUART_IIR(port->membase)); port->icount.overrun++; tty_insert_flip_char(tport, 0, TTY_OVERRUN); - tty_schedule_flip(tport); + tty_flip_buffer_push(tport); }
/* Data received? */ --- a/drivers/tty/vt/keyboard.c +++ b/drivers/tty/vt/keyboard.c @@ -310,7 +310,7 @@ int kbd_rate(struct kbd_repeat *rpt) static void put_queue(struct vc_data *vc, int ch) { tty_insert_flip_char(&vc->port, ch, 0); - tty_schedule_flip(&vc->port); + tty_flip_buffer_push(&vc->port); }
static void puts_queue(struct vc_data *vc, char *cp) @@ -319,7 +319,7 @@ static void puts_queue(struct vc_data *v tty_insert_flip_char(&vc->port, *cp, 0); cp++; } - tty_schedule_flip(&vc->port); + tty_flip_buffer_push(&vc->port); }
static void applkey(struct vc_data *vc, int key, char mode) @@ -564,7 +564,7 @@ static void fn_inc_console(struct vc_dat static void fn_send_intr(struct vc_data *vc) { tty_insert_flip_char(&vc->port, 0, TTY_BREAK); - tty_schedule_flip(&vc->port); + tty_flip_buffer_push(&vc->port); }
static void fn_scroll_forw(struct vc_data *vc) --- a/drivers/tty/vt/vt.c +++ b/drivers/tty/vt/vt.c @@ -1837,7 +1837,7 @@ static void respond_string(const char *p tty_insert_flip_char(port, *p, 0); p++; } - tty_schedule_flip(port); + tty_flip_buffer_push(port); }
static void cursor_report(struct vc_data *vc, struct tty_struct *tty)
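For reference, the canonical receive path after this conversion (example_rx() is illustrative, not taken from the patch):

static void example_rx(struct tty_port *port, const unsigned char *buf,
		       size_t count)
{
	tty_insert_flip_string(port, buf, count);	/* stage the data */
	tty_flip_buffer_push(port);	/* commit it and schedule the ldisc */
}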
From: Jiri Slaby jslaby@suse.cz
commit b68b914494df4f79b4e9b58953110574af1cb7a2 upstream.
Since commit a9c3f68f3cd8d (tty: Fix low_latency BUG) in 2014, tty_flip_buffer_push() is only a wrapper to tty_schedule_flip(). We are going to remove the latter (as it is used less), so call the former in the rest of the users.
Cc: Richard Henderson rth@twiddle.net Cc: Ivan Kokshaysky ink@jurassic.park.msu.ru Cc: Matt Turner mattst88@gmail.com Cc: William Hubbs w.d.hubbs@gmail.com Cc: Chris Brannon chris@the-brannons.com Cc: Kirk Reiser kirk@reisers.ca Cc: Samuel Thibault samuel.thibault@ens-lyon.org Cc: Heiko Carstens hca@linux.ibm.com Cc: Vasily Gorbik gor@linux.ibm.com Cc: Christian Borntraeger borntraeger@de.ibm.com Cc: Alexander Gordeev agordeev@linux.ibm.com Reviewed-by: Johan Hovold johan@kernel.org Signed-off-by: Jiri Slaby jslaby@suse.cz Link: https://lore.kernel.org/r/20211122111648.30379-3-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/alpha/kernel/srmcons.c | 2 +- drivers/s390/char/keyboard.h | 4 ++-- drivers/staging/speakup/spk_ttyio.c | 4 ++-- 3 files changed, 5 insertions(+), 5 deletions(-)
--- a/arch/alpha/kernel/srmcons.c +++ b/arch/alpha/kernel/srmcons.c @@ -59,7 +59,7 @@ srmcons_do_receive_chars(struct tty_port } while((result.bits.status & 1) && (++loops < 10));
if (count) - tty_schedule_flip(port); + tty_flip_buffer_push(port);
return count; } --- a/drivers/s390/char/keyboard.h +++ b/drivers/s390/char/keyboard.h @@ -56,7 +56,7 @@ static inline void kbd_put_queue(struct tty_port *port, int ch) { tty_insert_flip_char(port, ch, 0); - tty_schedule_flip(port); + tty_flip_buffer_push(port); }
static inline void @@ -64,5 +64,5 @@ kbd_puts_queue(struct tty_port *port, ch { while (*cp) tty_insert_flip_char(port, *cp++, 0); - tty_schedule_flip(port); + tty_flip_buffer_push(port); } --- a/drivers/staging/speakup/spk_ttyio.c +++ b/drivers/staging/speakup/spk_ttyio.c @@ -88,7 +88,7 @@ static int spk_ttyio_receive_buf2(struct }
if (!ldisc_data->buf_free) - /* ttyio_in will tty_schedule_flip */ + /* ttyio_in will tty_flip_buffer_push */ return 0;
/* Make sure the consumer has read buf before we have seen @@ -325,7 +325,7 @@ static unsigned char ttyio_in(int timeou mb(); ldisc_data->buf_free = true; /* Let TTY push more characters */ - tty_schedule_flip(speakup_tty->port); + tty_flip_buffer_push(speakup_tty->port);
return rv; }
From: Jiri Slaby jslaby@suse.cz
commit 5db96ef23bda6c2a61a51693c85b78b52d03f654 upstream.
Since commit a9c3f68f3cd8d (tty: Fix low_latency BUG) in 2014, tty_flip_buffer_push() is only a wrapper to tty_schedule_flip(). All users were converted in the previous patches, so remove tty_schedule_flip() completely while inlining its body into tty_flip_buffer_push().
One less exported function.
Reviewed-by: Johan Hovold johan@kernel.org Signed-off-by: Jiri Slaby jslaby@suse.cz Link: https://lore.kernel.org/r/20211122111648.30379-4-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/tty_buffer.c | 30 ++++++++---------------------- include/linux/tty_flip.h | 1 - 2 files changed, 8 insertions(+), 23 deletions(-)
--- a/drivers/tty/tty_buffer.c +++ b/drivers/tty/tty_buffer.c @@ -395,27 +395,6 @@ int __tty_insert_flip_char(struct tty_po EXPORT_SYMBOL(__tty_insert_flip_char);
/** - * tty_schedule_flip - push characters to ldisc - * @port: tty port to push from - * - * Takes any pending buffers and transfers their ownership to the - * ldisc side of the queue. It then schedules those characters for - * processing by the line discipline. - */ - -void tty_schedule_flip(struct tty_port *port) -{ - struct tty_bufhead *buf = &port->buf; - - /* paired w/ acquire in flush_to_ldisc(); ensures - * flush_to_ldisc() sees buffer data. - */ - smp_store_release(&buf->tail->commit, buf->tail->used); - queue_work(system_unbound_wq, &buf->work); -} -EXPORT_SYMBOL(tty_schedule_flip); - -/** * tty_prepare_flip_string - make room for characters * @port: tty port * @chars: return pointer for character write area @@ -557,7 +536,14 @@ static void flush_to_ldisc(struct work_s
void tty_flip_buffer_push(struct tty_port *port) { - tty_schedule_flip(port); + struct tty_bufhead *buf = &port->buf; + + /* + * Paired w/ acquire in flush_to_ldisc(); ensures flush_to_ldisc() sees + * buffer data. + */ + smp_store_release(&buf->tail->commit, buf->tail->used); + queue_work(system_unbound_wq, &buf->work); } EXPORT_SYMBOL(tty_flip_buffer_push);
--- a/include/linux/tty_flip.h +++ b/include/linux/tty_flip.h @@ -12,7 +12,6 @@ extern int tty_insert_flip_string_fixed_ extern int tty_prepare_flip_string(struct tty_port *port, unsigned char **chars, size_t size); extern void tty_flip_buffer_push(struct tty_port *port); -void tty_schedule_flip(struct tty_port *port); int __tty_insert_flip_char(struct tty_port *port, unsigned char ch, char flag);
static inline int tty_insert_flip_char(struct tty_port *port,
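The memory-ordering contract, sketched from both sides (shortened from tty_buffer.c):

	/* producer, tty_flip_buffer_push(): publish data, then commit */
	smp_store_release(&buf->tail->commit, buf->tail->used);

	/* consumer, flush_to_ldisc(): observe the commit, then read the data */
	count = smp_load_acquire(&head->commit) - head->read;

The release/acquire pair guarantees that flush_to_ldisc() sees all buffer data written before the commit was published.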
From: Jiri Slaby jslaby@suse.cz
commit 716b10580283fda66f2b88140e3964f8a7f9da89 upstream.
We will need this new helper in the next patch.
Cc: Hillf Danton hdanton@sina.com Cc: 一只狗 chennbnbnb@gmail.com Cc: Dan Carpenter dan.carpenter@oracle.com Signed-off-by: Jiri Slaby jslaby@suse.cz Link: https://lore.kernel.org/r/20220707082558.9250-1-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/tty_buffer.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-)
--- a/drivers/tty/tty_buffer.c +++ b/drivers/tty/tty_buffer.c @@ -523,6 +523,15 @@ static void flush_to_ldisc(struct work_s
}
+static inline void tty_flip_buffer_commit(struct tty_buffer *tail) +{ + /* + * Paired w/ acquire in flush_to_ldisc(); ensures flush_to_ldisc() sees + * buffer data. + */ + smp_store_release(&tail->commit, tail->used); +} + /** * tty_flip_buffer_push - terminal * @port: tty port to push @@ -538,11 +547,7 @@ void tty_flip_buffer_push(struct tty_por { struct tty_bufhead *buf = &port->buf;
- /* - * Paired w/ acquire in flush_to_ldisc(); ensures flush_to_ldisc() sees - * buffer data. - */ - smp_store_release(&buf->tail->commit, buf->tail->used); + tty_flip_buffer_commit(buf->tail); queue_work(system_unbound_wq, &buf->work); } EXPORT_SYMBOL(tty_flip_buffer_push);
From: Jiri Slaby jslaby@suse.cz
commit a501ab75e7624d133a5a3c7ec010687c8b961d23 upstream.
There is a race in pty_write(). pty_write() can be called in parallel with e.g. ioctl(TIOCSTI) or ioctl(TCXONC), which also insert chars into the buffer. Since tty_flip_buffer_push() in pty_write() is called outside the lock, it can commit an inconsistent tail. This can lead to out-of-bounds writes and other issues. See the Link below.
To fix this, we have to introduce a new helper called tty_insert_flip_string_and_push_buffer(). It does both tty_insert_flip_string() and tty_flip_buffer_commit() under the port lock. It also calls queue_work(), but outside the lock. See 71a174b39f10 (pty: do tty_flip_buffer_push without port->lock in pty_write) for the reasons.
Keep the helper internal-only (in drivers' tty.h). It is not intended to be used widely.
Link: https://seclists.org/oss-sec/2022/q2/155 Fixes: 71a174b39f10 (pty: do tty_flip_buffer_push without port->lock in pty_write) Cc: 一只狗 chennbnbnb@gmail.com Cc: Dan Carpenter dan.carpenter@oracle.com Suggested-by: Hillf Danton hdanton@sina.com Signed-off-by: Jiri Slaby jslaby@suse.cz Link: https://lore.kernel.org/r/20220707082558.9250-2-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/pty.c | 14 ++------------ drivers/tty/tty_buffer.c | 31 +++++++++++++++++++++++++++++++ include/linux/tty_flip.h | 3 +++ 3 files changed, 36 insertions(+), 12 deletions(-)
--- a/drivers/tty/pty.c +++ b/drivers/tty/pty.c @@ -111,21 +111,11 @@ static void pty_unthrottle(struct tty_st static int pty_write(struct tty_struct *tty, const unsigned char *buf, int c) { struct tty_struct *to = tty->link; - unsigned long flags;
- if (tty->stopped) + if (tty->stopped || !c) return 0;
- if (c > 0) { - spin_lock_irqsave(&to->port->lock, flags); - /* Stuff the data into the input queue of the other end */ - c = tty_insert_flip_string(to->port, buf, c); - spin_unlock_irqrestore(&to->port->lock, flags); - /* And shovel */ - if (c) - tty_flip_buffer_push(to->port); - } - return c; + return tty_insert_flip_string_and_push_buffer(to->port, buf, c); }
/** --- a/drivers/tty/tty_buffer.c +++ b/drivers/tty/tty_buffer.c @@ -553,6 +553,37 @@ void tty_flip_buffer_push(struct tty_por EXPORT_SYMBOL(tty_flip_buffer_push);
/** + * tty_insert_flip_string_and_push_buffer - add characters to the tty buffer and + * push + * @port: tty port + * @chars: characters + * @size: size + * + * The function combines tty_insert_flip_string() and tty_flip_buffer_push() + * with the exception of properly holding the @port->lock. + * + * To be used only internally (by pty currently). + * + * Returns: the number added. + */ +int tty_insert_flip_string_and_push_buffer(struct tty_port *port, + const unsigned char *chars, size_t size) +{ + struct tty_bufhead *buf = &port->buf; + unsigned long flags; + + spin_lock_irqsave(&port->lock, flags); + size = tty_insert_flip_string(port, chars, size); + if (size) + tty_flip_buffer_commit(buf->tail); + spin_unlock_irqrestore(&port->lock, flags); + + queue_work(system_unbound_wq, &buf->work); + + return size; +} + +/** * tty_buffer_init - prepare a tty buffer structure * @tty: tty to initialise * --- a/include/linux/tty_flip.h +++ b/include/linux/tty_flip.h @@ -39,4 +39,7 @@ static inline int tty_insert_flip_string extern void tty_buffer_lock_exclusive(struct tty_port *port); extern void tty_buffer_unlock_exclusive(struct tty_port *port);
+int tty_insert_flip_string_and_push_buffer(struct tty_port *port, + const unsigned char *chars, size_t cnt); + #endif /* _LINUX_TTY_FLIP_H */
From: Jose Alonso joalonsof@gmail.com
commit 36a15e1cb134c0395261ba1940762703f778438c upstream.
The extra byte inserted by usbnet.c when (length % dev->maxpacket == 0) is causing problems for the device.
This patch sets FLAG_SEND_ZLP to avoid this.
Tested with: 0b95:1790 ASIX Electronics Corp. AX88179 Gigabit Ethernet
Problems observed:
======================================================================
1) Using ssh/sshfs, the remote sshd daemon can abort with the message
   "message authentication code incorrect". This happens because the
   TCP message sent is corrupted during the USB "Bulk out". The device
   calculates the TCP checksum and sends a valid TCP message to the
   remote sshd; the encryption layer then detects the error and aborts.
2) NETDEV WATCHDOG: ... (ax88179_178a): transmit queue 0 timed out
3) The device stops working without any log message. The "Bulk in"
   side continues receiving packets normally, but when the host sends
   "Bulk out" the device responds with -ECONNRESET. (The usbnet.c
   tx_complete code ignores -ECONNRESET.)
Under normal conditions these errors take days to happen; under
intense usage they take hours.
A test with ping shows packet loss, indicating that something is wrong:
   ping -4 -s 462 {destination}   # 462 = 512 - 42 - 8
Not all packets fail. My guess is that the device tries to find another
packet starting at the extra byte and fails or not depending on the next
bytes (old buffer content).
======================================================================
Signed-off-by: Jose Alonso joalonsof@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/usb/ax88179_178a.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-)
--- a/drivers/net/usb/ax88179_178a.c +++ b/drivers/net/usb/ax88179_178a.c @@ -1690,7 +1690,7 @@ static const struct driver_info ax88179_ .link_reset = ax88179_link_reset, .reset = ax88179_reset, .stop = ax88179_stop, - .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, .rx_fixup = ax88179_rx_fixup, .tx_fixup = ax88179_tx_fixup, }; @@ -1703,7 +1703,7 @@ static const struct driver_info ax88178a .link_reset = ax88179_link_reset, .reset = ax88179_reset, .stop = ax88179_stop, - .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, .rx_fixup = ax88179_rx_fixup, .tx_fixup = ax88179_tx_fixup, }; @@ -1716,7 +1716,7 @@ static const struct driver_info cypress_ .link_reset = ax88179_link_reset, .reset = ax88179_reset, .stop = ax88179_stop, - .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, .rx_fixup = ax88179_rx_fixup, .tx_fixup = ax88179_tx_fixup, }; @@ -1729,7 +1729,7 @@ static const struct driver_info dlink_du .link_reset = ax88179_link_reset, .reset = ax88179_reset, .stop = ax88179_stop, - .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, .rx_fixup = ax88179_rx_fixup, .tx_fixup = ax88179_tx_fixup, }; @@ -1742,7 +1742,7 @@ static const struct driver_info sitecom_ .link_reset = ax88179_link_reset, .reset = ax88179_reset, .stop = ax88179_stop, - .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, .rx_fixup = ax88179_rx_fixup, .tx_fixup = ax88179_tx_fixup, }; @@ -1755,7 +1755,7 @@ static const struct driver_info samsung_ .link_reset = ax88179_link_reset, .reset = ax88179_reset, .stop = ax88179_stop, - .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, .rx_fixup = ax88179_rx_fixup, .tx_fixup = ax88179_tx_fixup, }; @@ -1768,7 +1768,7 @@ static const struct driver_info lenovo_i .link_reset = ax88179_link_reset, .reset = ax88179_reset, .stop = ax88179_stop, - .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, .rx_fixup = ax88179_rx_fixup, .tx_fixup = ax88179_tx_fixup, }; @@ -1781,7 +1781,7 @@ static const struct driver_info belkin_i .link_reset = ax88179_link_reset, .reset = ax88179_reset, .stop = ax88179_stop, - .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, .rx_fixup = ax88179_rx_fixup, .tx_fixup = ax88179_tx_fixup, };
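For context, a simplified sketch of the usbnet transmit-path logic this flag selects (paraphrased from usbnet_start_xmit() in drivers/net/usb/usbnet.c; see that file for the real code):

	if (length % dev->maxpacket == 0) {
		if (!(info->flags & FLAG_SEND_ZLP))
			/* pad the frame with one extra byte; this is
			 * what confuses the AX88179 */
			length++;
		else
			/* send a proper zero-length packet instead */
			urb->transfer_flags |= URB_ZERO_PACKET;
	}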
From: Jan Beulich jbeulich@suse.com
commit 1df931d95f4dc1c11db1123e85d4e08156e46ef9 upstream.
As noted (and fixed) a couple of times in the past, "=@cc<cond>" outputs and clobbering of "cc" don't work well together. The compiler appears to intend to reject such combinations, but - in its upstream form - doesn't quite manage to do so yet for "cc". Furthermore, two similar macros don't clobber "cc", and clobbering "cc" is pointless in x86 asm()-s anyway - the compiler always assumes the status flags to be clobbered there.
Fixes: 989b5db215a2 ("x86/uaccess: Implement macros for CMPXCHG on user addresses") Signed-off-by: Jan Beulich jbeulich@suse.com Message-Id: 485c0c0b-a3a7-0b7c-5264-7d00c01de032@suse.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/include/asm/uaccess.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/include/asm/uaccess.h +++ b/arch/x86/include/asm/uaccess.h @@ -498,7 +498,7 @@ __pu_label: \ [ptr] "+m" (*_ptr), \ [old] "+a" (__old) \ : [new] ltype (__new) \ - : "memory", "cc"); \ + : "memory"); \ if (unlikely(__err)) \ goto label; \ if (unlikely(!success)) \
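For reference, a hedged stand-alone example of the flag-output form (a GCC/Clang extension on x86); with "=@cc<cond>" the condition is taken directly from the status flags, and no "cc" clobber is needed:

static inline int example_equals(unsigned long a, unsigned long b)
{
	int zf;

	asm("cmp %2, %1" : "=@ccz" (zf) : "r" (a), "r" (b));
	return zf;	/* 1 if a == b, 0 otherwise */
}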
On 7/27/22 09:09, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.4.208 release. There are 87 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 29 Jul 2022 16:09:50 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.4.208-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.4.y and the diffstat can be found below.
thanks,
greg k-h
On ARCH_BRCMSTB using 32-bit and 64-bit ARM kernels and build tested on BMIPS_GENERIC:
Tested-by: Florian Fainelli f.fainelli@gmail.com
On Wed, 27 Jul 2022 at 21:58, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 5.4.208 release. There are 87 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 29 Jul 2022 16:09:50 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.4.208-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.4.y and the diffstat can be found below.
thanks,
greg k-h
Results from Linaro’s test farm. No regressions on arm64, arm, x86_64, and i386.
Tested-by: Linux Kernel Functional Testing lkft@linaro.org
## Build
* kernel: 5.4.208-rc1
* git: https://gitlab.com/Linaro/lkft/mirrors/stable/linux-stable-rc
* git branch: linux-5.4.y
* git commit: 048552f118bf6191e9f2eb6f21b0c12f04f8f4dd
* git describe: v5.4.207-88-g048552f118bf
* test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-5.4.y/build/v5.4.20...

## Test Regressions (compared to v5.4.207)
No test regressions found.

## Metric Regressions (compared to v5.4.207)
No metric regressions found.

## Test Fixes (compared to v5.4.207)
No test fixes found.

## Metric Fixes (compared to v5.4.207)
No metric fixes found.

## Test result summary
total: 123475, pass: 110463, fail: 392, skip: 11618, xfail: 1002

## Build Summary
* arc: 10 total, 10 passed, 0 failed
* arm: 307 total, 307 passed, 0 failed
* arm64: 61 total, 57 passed, 4 failed
* i386: 28 total, 26 passed, 2 failed
* mips: 45 total, 45 passed, 0 failed
* parisc: 12 total, 12 passed, 0 failed
* powerpc: 54 total, 54 passed, 0 failed
* riscv: 27 total, 27 passed, 0 failed
* s390: 12 total, 12 passed, 0 failed
* sh: 24 total, 24 passed, 0 failed
* sparc: 12 total, 12 passed, 0 failed
* x86_64: 55 total, 53 passed, 2 failed

## Test suites summary
* fwts
* igt-gpu-tools
* kunit
* kvm-unit-tests
* libgpiod
* libhugetlbfs
* log-parser-boot
* log-parser-test
* ltp-cap_bounds
* ltp-commands
* ltp-containers
* ltp-controllers
* ltp-cpuhotplug
* ltp-crypto
* ltp-cve
* ltp-dio
* ltp-fcntl-locktests
* ltp-filecaps
* ltp-fs
* ltp-fs_bind
* ltp-fs_perms_simple
* ltp-fsx
* ltp-hugetlb
* ltp-io
* ltp-ipc
* ltp-math
* ltp-mm
* ltp-nptl
* ltp-open-posix-tests
* ltp-pty
* ltp-sched
* ltp-securebits
* ltp-smoke
* ltp-syscalls
* ltp-tracing
* network-basic-tests
* packetdrill
* rcutorture
* ssuite
* v4l2-compliance
* vdso
-- Linaro LKFT https://lkft.linaro.org
On Wed, 27 Jul 2022 18:09:53 +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.4.208 release. There are 87 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 29 Jul 2022 16:09:50 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.4.208-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.4.y and the diffstat can be found below.
thanks,
greg k-h
All tests passing for Tegra ...
Test results for stable-v5.4:
    10 builds: 10 pass, 0 fail
    26 boots: 26 pass, 0 fail
    59 tests: 59 pass, 0 fail
Linux version: 5.4.208-rc1-g048552f118bf
Boards tested: tegra124-jetson-tk1, tegra186-p2771-0000, tegra194-p2972-0000, tegra20-ventana, tegra210-p2371-2180, tegra210-p3450-0000, tegra30-cardhu-a04
Tested-by: Jon Hunter jonathanh@nvidia.com
Jon
Hi Greg,
On Wed, Jul 27, 2022 at 06:09:53PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.4.208 release. There are 87 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 29 Jul 2022 16:09:50 +0000. Anything received after that time might be too late.
Build test (gcc version 11.3.1 20220724):
    mips: 65 configs -> no failure
    arm: 106 configs -> no failure
    arm64: 2 configs -> no failure
    x86_64: 4 configs -> no failure
    alpha allmodconfig -> no failure
    powerpc allmodconfig -> no failure
    riscv allmodconfig -> no failure
    s390 allmodconfig -> no failure
    xtensa allmodconfig -> no failure

Boot test:
    x86_64: Booted on my test laptop. No regression.
    x86_64: Booted on qemu. No regression. [1]
[1]. https://openqa.qa.codethink.co.uk/tests/1571
Tested-by: Sudip Mukherjee sudip.mukherjee@codethink.co.uk
-- Regards Sudip
On 7/27/22 10:09 AM, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.4.208 release. There are 87 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 29 Jul 2022 16:09:50 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.4.208-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.4.y and the diffstat can be found below.
thanks,
greg k-h
Compiled and booted on my test system. No dmesg regressions.
Tested-by: Shuah Khan skhan@linuxfoundation.org
thanks, -- Shuah
On Wed, Jul 27, 2022 at 06:09:53PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.4.208 release. There are 87 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 29 Jul 2022 16:09:50 +0000. Anything received after that time might be too late.
Build results:
    total: 161 pass: 161 fail: 0
Qemu test results:
    total: 449 pass: 449 fail: 0
Tested-by: Guenter Roeck linux@roeck-us.net
Guenter