This is the start of the stable review cycle for the 6.6.81 release. There are 142 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 07 Mar 2025 17:44:26 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.6.81-rc1....
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.6.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>  Linux 6.6.81-rc1
Borislav Petkov (AMD) <bp@alien8.de>  x86/microcode/AMD: Load only SHA256-checksummed patches
Borislav Petkov (AMD) <bp@alien8.de>  x86/microcode/AMD: Add get_patch_level()
Borislav Petkov (AMD) <bp@alien8.de>  x86/microcode/AMD: Get rid of the _load_microcode_amd() forward declaration
Borislav Petkov (AMD) <bp@alien8.de>  x86/microcode/AMD: Merge early_apply_microcode() into its single callsite
Borislav Petkov (AMD) <bp@alien8.de>  x86/microcode/AMD: Have __apply_microcode_amd() return bool
Nikolay Borisov <nik.borisov@suse.com>  x86/microcode/AMD: Make __verify_patch_size() return bool
Nikolay Borisov <nik.borisov@suse.com>  x86/microcode/AMD: Return bool from find_blobs_in_containers()
Borislav Petkov (AMD) <bp@alien8.de>  x86/microcode/AMD: Flush patch buffer mapping after application
Chang S. Bae <chang.seok.bae@intel.com>  x86/microcode/intel: Remove unnecessary cache writeback and invalidation
Borislav Petkov (AMD) <bp@alien8.de>  x86/microcode/AMD: Split load_microcode_amd()
Borislav Petkov (AMD) <bp@alien8.de>  x86/microcode/AMD: Pay attention to the stepping dynamically
Borislav Petkov <bp@alien8.de>  x86/microcode/AMD: Use the family,model,stepping encoded in the patch ID
Borislav Petkov (AMD) <bp@alien8.de>  x86/microcode/intel: Set new revision only after a successful update
Borislav Petkov (AMD) <bp@alien8.de>  x86/microcode: Rework early revisions reporting
Thomas Gleixner <tglx@linutronix.de>  x86/microcode: Prepare for minimal revision check
Thomas Gleixner <tglx@linutronix.de>  x86/microcode: Handle "offline" CPUs correctly
Thomas Gleixner <tglx@linutronix.de>  x86/apic: Provide apic_force_nmi_on_cpu()
Thomas Gleixner <tglx@linutronix.de>  x86/microcode: Protect against instrumentation
Thomas Gleixner <tglx@linutronix.de>  x86/microcode: Rendezvous and load in NMI
Thomas Gleixner <tglx@linutronix.de>  x86/microcode: Replace the all-in-one rendevous handler
Thomas Gleixner <tglx@linutronix.de>  x86/microcode: Provide new control functions
Thomas Gleixner <tglx@linutronix.de>  x86/microcode: Add per CPU control field
Thomas Gleixner <tglx@linutronix.de>  x86/microcode: Add per CPU result state
Thomas Gleixner <tglx@linutronix.de>  x86/microcode: Sanitize __wait_for_cpus()
Thomas Gleixner <tglx@linutronix.de>  x86/microcode: Clarify the late load logic
Thomas Gleixner <tglx@linutronix.de>  x86/microcode: Handle "nosmt" correctly
Thomas Gleixner <tglx@linutronix.de>  x86/microcode: Clean up mc_cpu_down_prep()
Thomas Gleixner <tglx@linutronix.de>  x86/microcode: Get rid of the schedule work indirection
Thomas Gleixner <tglx@linutronix.de>  x86/microcode: Mop up early loading leftovers
Thomas Gleixner <tglx@linutronix.de>  x86/microcode/amd: Use cached microcode for AP load
Thomas Gleixner <tglx@linutronix.de>  x86/microcode/amd: Cache builtin/initrd microcode early
Thomas Gleixner <tglx@linutronix.de>  x86/microcode/amd: Cache builtin microcode too
Thomas Gleixner <tglx@linutronix.de>  x86/microcode/amd: Use correct per CPU ucode_cpu_info
Thomas Gleixner <tglx@linutronix.de>  x86/microcode: Remove pointless apply() invocation
Thomas Gleixner <tglx@linutronix.de>  x86/microcode/intel: Rework intel_find_matching_signature()
Thomas Gleixner <tglx@linutronix.de>  x86/microcode/intel: Reuse intel_cpu_collect_info()
Thomas Gleixner <tglx@linutronix.de>  x86/microcode/intel: Rework intel_cpu_collect_info()
Thomas Gleixner <tglx@linutronix.de>  x86/microcode/intel: Unify microcode apply() functions
Thomas Gleixner <tglx@linutronix.de>  x86/microcode/intel: Switch to kvmalloc()
Thomas Gleixner <tglx@linutronix.de>  x86/microcode/intel: Save the microcode only after a successful late-load
Thomas Gleixner <tglx@linutronix.de>  x86/microcode/intel: Simplify early loading
Thomas Gleixner <tglx@linutronix.de>  x86/microcode/intel: Cleanup code further
Thomas Gleixner <tglx@linutronix.de>  x86/microcode/intel: Simplify and rename generic_load_microcode()
Thomas Gleixner <tglx@linutronix.de>  x86/microcode/intel: Simplify scan_microcode()
Ashok Raj <ashok.raj@intel.com>  x86/microcode/intel: Rip out mixed stepping support for Intel CPUs
Thomas Gleixner <tglx@linutronix.de>  x86/microcode/32: Move early loading after paging enable
Lukasz Czechowski <lukasz.czechowski@thaumatec.com>  arm64: dts: rockchip: Disable DMA for uart5 on px30-ringneck
Thomas Gleixner <tglx@linutronix.de>  intel_idle: Handle older CPUs, which stop the TSC in deeper C states, correctly
Joshua Washington <joshwash@google.com>  gve: set xdp redirect target only when it is available
chr[] <chris@rudorff.com>  amdgpu/pm/legacy: fix suspend/resume issues
Tomas Glozar <tglozar@redhat.com>  rtla/timerlat_top: Set OSNOISE_WORKLOAD for kernel threads
Tomas Glozar <tglozar@redhat.com>  rtla/timerlat_hist: Set OSNOISE_WORKLOAD for kernel threads
Tomas Glozar <tglozar@redhat.com>  Revert "rtla/timerlat_hist: Set OSNOISE_WORKLOAD for kernel threads"
Tomas Glozar <tglozar@redhat.com>  Revert "rtla/timerlat_top: Set OSNOISE_WORKLOAD for kernel threads"
Yong-Xuan Wang <yongxuan.wang@sifive.com>  riscv: signal: fix signal frame size
Andreas Schwab <schwab@suse.de>  riscv/futex: sign extend compare value in atomic cmpxchg
Stafford Horne <shorne@gmail.com>  rseq/selftests: Fix riscv rseq_offset_deref_addv inline asm
Arthur Simchaev <arthur.simchaev@sandisk.com>  scsi: ufs: core: bsg: Fix crash when arpmb command fails
Thomas Gleixner <tglx@linutronix.de>  sched/core: Prevent rescheduling when interrupts are disabled
Thomas Gleixner <tglx@linutronix.de>  rcuref: Plug slowpath race in rcuref_put()
Ard Biesheuvel <ardb@kernel.org>  vmlinux.lds: Ensure that const vars with relocations are mapped R/O
Matthieu Baerts (NGI0) <matttbe@kernel.org>  mptcp: reset when MPTCP opts are dropped after join
Paolo Abeni <pabeni@redhat.com>  mptcp: always handle address removal under msk socket lock
Kaustabh Chakraborty <kauschluss@disroot.org>  phy: exynos5-usbdrd: fix MPLL_MULTIPLIER and SSC_REFCLKSEL masks in refclk
BH Hsieh <bhsieh@nvidia.com>  phy: tegra: xusb: reset VBUS & ID OVERRIDE
Wei Fang <wei.fang@nxp.com>  net: enetc: fix the off-by-one issue in enetc_map_tx_tso_buffs()
Wei Fang <wei.fang@nxp.com>  net: enetc: correct the xdp_tx statistics
Wei Fang <wei.fang@nxp.com>  net: enetc: update UDP checksum when updating originTimestamp field
Wei Fang <wei.fang@nxp.com>  net: enetc: keep track of correct Tx BD count in enetc_map_tx_tso_buffs()
Wei Fang <wei.fang@nxp.com>  net: enetc: fix the off-by-one issue in enetc_map_tx_buffs()
Nikita Zhandarovich <n.zhandarovich@fintech.ru>  usbnet: gl620a: fix endpoint checking in genelink_bind()
Binbin Zhou <zhoubinbin@loongson.cn>  i2c: ls2x: Fix frequency division register access
Tyrone Ting <kfting@nuvoton.com>  i2c: npcm: disable interrupt enable bit before devm_request_irq
Roman Li <Roman.Li@amd.com>  drm/amd/display: Fix HPD after gpu reset
Tom Chung <chiahsuan.chung@amd.com>  drm/amd/display: Disable PSR-SU on eDP panels
Kan Liang <kan.liang@linux.intel.com>  perf/core: Fix low freq setting via IOC_PERIOD
Kan Liang <kan.liang@linux.intel.com>  perf/x86: Fix low freqency setting issue
Breno Leitao <leitao@debian.org>  perf/core: Add RCU read lock protection to perf_iterate_ctx()
Adrien Vergé <adrienverge@gmail.com>  ALSA: hda/realtek: Fix microphone regression on ASUS N705UD
Dmitry Panchenko <dmitry@d-systems.ee>  ALSA: usb-audio: Re-add sample rate quirk for Pioneer DJM-900NXS2
Nikolay Kuratov <kniv@yandex-team.ru>  ftrace: Avoid potential division by zero in function_stat_show()
Steven Rostedt <rostedt@goodmis.org>  tracing: Fix bad hist from corrupting named_triggers list
Andrew Jones <ajones@ventanamicro.com>  riscv: KVM: Fix SBI TIME error generation
Andrew Jones <ajones@ventanamicro.com>  riscv: KVM: Fix SBI IPI error generation
Andrew Jones <ajones@ventanamicro.com>  riscv: KVM: Fix hart suspend status check
Yong-Xuan Wang <yongxuan.wang@sifive.com>  RISCV: KVM: Introduce mp_state_lock to avoid lock inversion
Chukun Pan <amadeus@jmu.edu.cn>  phy: rockchip: naneng-combphy: compatible reset with old DT
Russell Senior <russell@personaltelco.net>  x86/CPU: Fix warm boot hang regression on AMD SC1100 SoC systems
Pavel Begunkov <asml.silence@gmail.com>  io_uring/net: save msg_control for compat
Tong Tiangen <tongtiangen@huawei.com>  uprobes: Reject the shared zeropage in uprobe_write_opcode()
Luo Gengkun <luogengkun@huaweicloud.com>  perf/core: Order the PMU list to fix warning about unordered pmu_ctx_list
Meghana Malladi <m-malladi@ti.com>  net: ti: icss-iep: Reject perout generation request
Diogo Ivo <diogo.ivo@siemens.com>  net: ti: icss-iep: Remove spinlock-based synchronization
Justin Iurman <justin.iurman@uliege.be>  net: ipv6: fix dst ref loop on input in rpl lwt
Justin Iurman <justin.iurman@uliege.be>  net: ipv6: rpl_iptunnel: mitigate 2-realloc issue
Justin Iurman <justin.iurman@uliege.be>  net: ipv6: fix dst ref loop on input in seg6 lwt
Justin Iurman <justin.iurman@uliege.be>  net: ipv6: seg6_iptunnel: mitigate 2-realloc issue
Justin Iurman <justin.iurman@uliege.be>  include: net: add static inline dst_dev_overhead() to dst.h
Shay Drory <shayd@nvidia.com>  net/mlx5: IRQ, Fix null string in debug print
Harshal Chaudhari <hchaudhari@marvell.com>  net: mvpp2: cls: Fixed Non IP flow, with vlan tag flow defination.
Mohammad Heib <mheib@redhat.com>  net: Clear old fragment checksum value in napi_reuse_skb
Wang Hai <wanghai38@huawei.com>  tcp: Defer ts_recent changes until req is owned
Marcin Szycik <marcin.szycik@linux.intel.com>  ice: Fix deinitializing VF in error path
Paul Greenwalt <paul.greenwalt@intel.com>  ice: add E830 HW VF mailbox message limit support
Paul Greenwalt <paul.greenwalt@intel.com>  ice: Add E830 device IDs, MAC type and registers
Takashi Iwai <tiwai@suse.de>  ALSA: hda/realtek: Fix wrong mic setup for ASUS VivoBook 15
Stefan Binding <sbinding@opensource.cirrus.com>  ALSA: hda/realtek: Add quirks for ASUS ROG 2023 models
Richard Fitzgerald <rf@opensource.cirrus.com>  firmware: cs_dsp: Remove async regmap writes
Philo Lu <lulie@linux.alibaba.com>  ipvs: Always clear ipvs_property flag in skb_scrub_packet()
Nicolas Frattaroli <nicolas.frattaroli@collabora.com>  ASoC: es8328: fix route from DAC to output
Sean Anderson <sean.anderson@linux.dev>  net: cadence: macb: Synchronize stats calculations
Eric Dumazet <edumazet@google.com>  ipvlan: ensure network headers are in skb linear part
Guillaume Nault <gnault@redhat.com>  ipvlan: Prepare ipvlan_process_v4_outbound() to future .flowi4_tos conversion.
Guillaume Nault <gnault@redhat.com>  ipv4: Convert ip_route_input() to dscp_t.
Guillaume Nault <gnault@redhat.com>  ipv4: Convert icmp_route_lookup() to dscp_t.
Ido Schimmel <idosch@nvidia.com>  ipvlan: Unmask upper DSCP bits in ipvlan_process_v4_outbound()
Ido Schimmel <idosch@nvidia.com>  ipv4: icmp: Unmask upper DSCP bits in icmp_route_lookup()
Ido Schimmel <idosch@nvidia.com>  ipv4: icmp: Pass full DS field to ip_route_input()
Peilin He <he.peilin@zte.com.cn>  net/ipv4: add tracepoint for icmp_send
Jiri Slaby (SUSE) <jirislaby@kernel.org>  net: set the minimum for net_hotdata.netdev_budget_usecs
Ido Schimmel <idosch@nvidia.com>  net: loopback: Avoid sending IP packets without an Ethernet header
David Howells <dhowells@redhat.com>  afs: Fix the server_list to unuse a displaced server rather than putting it
David Howells <dhowells@redhat.com>  afs: Make it possible to find the volumes that are using a server
David Howells <dhowells@redhat.com>  rxrpc: rxperf: Fix missing decoding of terminal magic cookie
Luiz Augusto von Dentz <luiz.von.dentz@intel.com>  Bluetooth: L2CAP: Fix L2CAP_ECRED_CONN_RSP response
Takashi Iwai <tiwai@suse.de>  ALSA: usb-audio: Avoid dropping MIDI events at closing multiple ports
Arnd Bergmann <arnd@arndb.de>  sunrpc: suppress warnings for unused procfs functions
Patrisious Haddad <phaddad@nvidia.com>  RDMA/mlx5: Fix bind QP error cleanup flow
Ye Bin <yebin10@huawei.com>  scsi: core: Clear driver private data when retrying request
Patrisious Haddad <phaddad@nvidia.com>  RDMA/mlx5: Fix AH static rate parsing
Or Har-Toov <ohartoov@nvidia.com>  IB/core: Add support for XDR link speed
Benjamin Coddington <bcodding@redhat.com>  SUNRPC: Handle -ETIMEDOUT return from tlshd
Trond Myklebust <trond.myklebust@hammerspace.com>  SUNRPC: Prevent looping due to rpc_signal_task() races
Stephen Brennan <stephen.s.brennan@oracle.com>  SUNRPC: convert RPC_TASK_* constants to enum
Vasiliy Kovalev <kovalev@altlinux.org>  ovl: fix UAF in ovl_dentry_update_reval by moving dput() in ovl_link_up
Bart Van Assche <bvanassche@acm.org>  scsi: ufs: core: Fix ufshcd_is_ufs_dev_busy() and ufshcd_eh_timed_out()
Avri Altman <avri.altman@wdc.com>  scsi: ufs: core: Prepare to introduce a new clock_gating lock
Avri Altman <avri.altman@wdc.com>  scsi: ufs: core: Introduce ufshcd_has_pending_tasks()
Bean Huo <beanhuo@micron.com>  scsi: ufs: core: Add UFS RTC support
Bean Huo <beanhuo@micron.com>  scsi: ufs: core: Add ufshcd_is_ufs_dev_busy()
Konstantin Taranov <kotaranov@microsoft.com>  RDMA/mana_ib: Allocate PAGE aligned doorbell index
Mark Zhang <markzhang@nvidia.com>  IB/mlx5: Set and get correct qp_num for a DCT QP
-------------
Diffstat:
 Documentation/admin-guide/kernel-parameters.txt    |   5 +
 Makefile                                           |   4 +-
 arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi    |   5 +
 arch/riscv/include/asm/futex.h                     |   2 +-
 arch/riscv/include/asm/kvm_host.h                  |   8 +-
 arch/riscv/kernel/signal.c                         |   6 -
 arch/riscv/kvm/vcpu.c                              |  48 +-
 arch/riscv/kvm/vcpu_sbi.c                          |   7 +-
 arch/riscv/kvm/vcpu_sbi_hsm.c                      |  45 +-
 arch/riscv/kvm/vcpu_sbi_replace.c                  |  15 +-
 arch/x86/Kconfig                                   |  26 +-
 arch/x86/events/core.c                             |   2 +-
 arch/x86/include/asm/apic.h                        |   5 +-
 arch/x86/include/asm/cpu.h                         |  20 +-
 arch/x86/include/asm/microcode.h                   |  18 +-
 arch/x86/kernel/apic/apic_flat_64.c                |   2 +
 arch/x86/kernel/apic/ipi.c                         |   8 +
 arch/x86/kernel/apic/x2apic_cluster.c              |   1 +
 arch/x86/kernel/apic/x2apic_phys.c                 |   1 +
 arch/x86/kernel/cpu/common.c                       |  12 -
 arch/x86/kernel/cpu/cyrix.c                        |   4 +-
 arch/x86/kernel/cpu/microcode/amd.c                | 648 ++++++++++++------
 arch/x86/kernel/cpu/microcode/amd_shas.c           | 444 +++++++++++++
 arch/x86/kernel/cpu/microcode/core.c               | 723 +++++++++++++--------
 arch/x86/kernel/cpu/microcode/intel.c              | 706 ++++++--------------
 arch/x86/kernel/cpu/microcode/internal.h           |  49 +-
 arch/x86/kernel/head32.c                           |   3 +
 arch/x86/kernel/head_32.S                          |  10 -
 arch/x86/kernel/nmi.c                              |   9 +-
 arch/x86/kernel/smpboot.c                          |  12 +-
 drivers/firmware/cirrus/cs_dsp.c                   |  24 +-
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c  |  14 +
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c  |   3 +-
 drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c         |  25 +-
 drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c     |   8 +-
 drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c         |  26 +-
 drivers/i2c/busses/i2c-ls2x.c                      |  16 +-
 drivers/i2c/busses/i2c-npcm7xx.c                   |   7 +
 drivers/idle/intel_idle.c                          |   4 +
 drivers/infiniband/core/sysfs.c                    |   4 +
 drivers/infiniband/core/uverbs_std_types_device.c  |   3 +-
 drivers/infiniband/core/verbs.c                    |   3 +
 drivers/infiniband/hw/mana/main.c                  |   2 +-
 drivers/infiniband/hw/mlx5/ah.c                    |   3 +-
 drivers/infiniband/hw/mlx5/counters.c              |   8 +-
 drivers/infiniband/hw/mlx5/qp.c                    |  10 +-
 drivers/infiniband/hw/mlx5/qp.h                    |   1 +
 drivers/net/ethernet/cadence/macb.h                |   2 +
 drivers/net/ethernet/cadence/macb_main.c           |  12 +-
 drivers/net/ethernet/freescale/enetc/enetc.c       | 100 ++-
 drivers/net/ethernet/google/gve/gve.h              |  10 +
 drivers/net/ethernet/google/gve/gve_main.c         |   6 +-
 drivers/net/ethernet/intel/ice/ice.h               |   1 +
 drivers/net/ethernet/intel/ice/ice_common.c        |  65 +-
 drivers/net/ethernet/intel/ice/ice_devids.h        |  10 +-
 drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c  |  24 +-
 drivers/net/ethernet/intel/ice/ice_hw_autogen.h    |  55 +-
 drivers/net/ethernet/intel/ice/ice_lib.c           |   3 +
 drivers/net/ethernet/intel/ice/ice_main.c          |  37 +-
 drivers/net/ethernet/intel/ice/ice_sriov.c         |   4 +-
 drivers/net/ethernet/intel/ice/ice_type.h          |   3 +-
 drivers/net/ethernet/intel/ice/ice_vf_lib.c        |  34 +-
 .../net/ethernet/intel/ice/ice_vf_lib_private.h    |   1 +
 drivers/net/ethernet/intel/ice/ice_vf_mbx.c        |  32 +
 drivers/net/ethernet/intel/ice/ice_vf_mbx.h        |   9 +
 drivers/net/ethernet/intel/ice/ice_virtchnl.c      |   8 +-
 drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c |  29 +-
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c     |   2 +-
 drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c  |   2 +-
 drivers/net/ethernet/ti/icssg/icss_iep.c           |  35 +-
 drivers/net/ipvlan/ipvlan_core.c                   |  24 +-
 drivers/net/loopback.c                             |  14 +
 drivers/net/usb/gl620a.c                           |   4 +-
 drivers/phy/rockchip/phy-rockchip-naneng-combphy.c |   5 +-
 drivers/phy/samsung/phy-exynos5-usbdrd.c           |  12 +-
 drivers/phy/tegra/xusb-tegra186.c                  |  11 +
 drivers/platform/x86/intel/ifs/load.c              |   8 +-
 drivers/scsi/scsi_lib.c                            |  14 +-
 drivers/ufs/core/ufs_bsg.c                         |   6 +-
 drivers/ufs/core/ufshcd.c                          | 112 +++-
 fs/afs/cell.c                                      |   1 +
 fs/afs/internal.h                                  |  23 +-
 fs/afs/server.c                                    |   1 +
 fs/afs/server_list.c                               | 114 +++-
 fs/afs/vl_alias.c                                  |   2 +-
 fs/afs/volume.c                                    |  36 +-
 fs/overlayfs/copy_up.c                             |   2 +-
 include/asm-generic/vmlinux.lds.h                  |   2 +-
 include/linux/rcuref.h                             |   9 +-
 include/linux/sunrpc/sched.h                       |  17 +-
 include/net/dst.h                                  |   9 +
 include/net/ip.h                                   |   5 +
 include/net/route.h                                |   5 +-
 include/rdma/ib_verbs.h                            |   2 +
 include/trace/events/icmp.h                        |  67 ++
 include/trace/events/sunrpc.h                      |   3 +-
 include/uapi/rdma/ib_user_ioctl_verbs.h            |   3 +-
 include/ufs/ufs.h                                  |  13 +
 include/ufs/ufshcd.h                               |   4 +
 io_uring/net.c                                     |   4 +-
 kernel/events/core.c                               |  31 +-
 kernel/events/uprobes.c                            |   5 +
 kernel/sched/core.c                                |   2 +-
 kernel/trace/ftrace.c                              |  27 +-
 kernel/trace/trace_events_hist.c                   |  34 +-
 lib/rcuref.c                                       |   5 +-
 net/bluetooth/l2cap_core.c                         |   9 +-
 net/bridge/br_netfilter_hooks.c                    |   8 +-
 net/core/gro.c                                     |   1 +
 net/core/skbuff.c                                  |   2 +-
 net/core/sysctl_net_core.c                         |   3 +-
 net/ipv4/icmp.c                                    |  24 +-
 net/ipv4/ip_options.c                              |   3 +-
 net/ipv4/tcp_minisocks.c                           |  10 +-
 net/ipv6/ip6_tunnel.c                              |   4 +-
 net/ipv6/rpl_iptunnel.c                            |  58 +-
 net/ipv6/seg6_iptunnel.c                           |  97 ++-
 net/mptcp/pm_netlink.c                             |   5 -
 net/mptcp/subflow.c                                |  15 +-
 net/rxrpc/rxperf.c                                 |  12 +
 net/sunrpc/cache.c                                 |  10 +-
 net/sunrpc/sched.c                                 |   2 -
 net/sunrpc/xprtsock.c                              |  10 +-
 sound/pci/hda/patch_realtek.c                      |  32 +-
 sound/soc/codecs/es8328.c                          |  15 +-
 sound/usb/midi.c                                   |   2 +-
 sound/usb/quirks.c                                 |   1 +
 tools/testing/selftests/rseq/rseq-riscv-bits.h     |   6 +-
 tools/testing/selftests/rseq/rseq-riscv.h          |   2 +-
 tools/tracing/rtla/src/timerlat_hist.c             |   2 +-
 tools/tracing/rtla/src/timerlat_top.c              |   2 +-
 131 files changed, 2896 insertions(+), 1568 deletions(-)
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mark Zhang <markzhang@nvidia.com>
[ Upstream commit 12d044770e12c4205fa69535b4fa8a9981fea98f ]
When a DCT QP is created on an active lag, its dctc.port is assigned in a round-robin way, which is from 1 to dev->lag_port. In this case when querying this QP, we may get qp_attr.port_num > 2. Fix this by setting qp->port when modifying a DCT QP, and reading port_num from qp->port instead of dctc.port when querying it.
Fixes: 7c4b1ab9f167 ("IB/mlx5: Add DCT RoCE LAG support")
Signed-off-by: Mark Zhang <markzhang@nvidia.com>
Reviewed-by: Maher Sanalla <msanalla@nvidia.com>
Link: https://patch.msgid.link/94c76bf0adbea997f87ffa27674e0a7118ad92a9.1737290358...
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/infiniband/hw/mlx5/qp.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 71a856409cee2..3df863c88b31d 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -4555,6 +4555,8 @@ static int mlx5_ib_modify_dct(struct ib_qp *ibqp, struct ib_qp_attr *attr,
 
 		set_id = mlx5_ib_get_counters_id(dev, attr->port_num - 1);
 		MLX5_SET(dctc, dctc, counter_set_id, set_id);
+
+		qp->port = attr->port_num;
 	} else if (cur_state == IB_QPS_INIT && new_state == IB_QPS_RTR) {
 		struct mlx5_ib_modify_qp_resp resp = {};
 		u32 out[MLX5_ST_SZ_DW(create_dct_out)] = {};
@@ -5045,7 +5047,7 @@ static int mlx5_ib_dct_query_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *mqp,
 	}
 
 	if (qp_attr_mask & IB_QP_PORT)
-		qp_attr->port_num = MLX5_GET(dctc, dctc, port);
+		qp_attr->port_num = mqp->port;
 	if (qp_attr_mask & IB_QP_MIN_RNR_TIMER)
 		qp_attr->min_rnr_timer = MLX5_GET(dctc, dctc, min_rnr_nak);
 	if (qp_attr_mask & IB_QP_AV) {
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Konstantin Taranov <kotaranov@microsoft.com>
[ Upstream commit 29b7bb98234cc287cebef9bccf638c2e3f39be71 ]
Allocate a PAGE-aligned doorbell index to ensure each process gets a separate PAGE-sized doorbell area remapped to it in mana_ib_mmap().
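For a sense of the arithmetic (assuming MANA_PAGE_SIZE is the 4 KiB doorbell granule used in mainline): a kernel built with 4 KiB pages computes PAGE_SIZE / MANA_PAGE_SIZE = 4096 / 4096 = 1, so indices stay densely packed, while a 64 KiB-page arm64 kernel computes 65536 / 4096 = 16, which starts every doorbell index on a PAGE boundary so one process's PAGE-sized mmap cannot overlap a neighbour's doorbells.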
Fixes: 0266a177631d ("RDMA/mana_ib: Add a driver for Microsoft Azure Network Adapter")
Signed-off-by: Shiraz Saleem <shirazsaleem@microsoft.com>
Signed-off-by: Konstantin Taranov <kotaranov@microsoft.com>
Link: https://patch.msgid.link/1738751405-15041-1-git-send-email-kotaranov@linux.m...
Reviewed-by: Long Li <longli@microsoft.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/infiniband/hw/mana/main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/mana/main.c b/drivers/infiniband/hw/mana/main.c
index 85717482a616e..6fa9b12532997 100644
--- a/drivers/infiniband/hw/mana/main.c
+++ b/drivers/infiniband/hw/mana/main.c
@@ -180,7 +180,7 @@ static int mana_gd_allocate_doorbell_page(struct gdma_context *gc,
 
 	req.resource_type = GDMA_RESOURCE_DOORBELL_PAGE;
 	req.num_resources = 1;
-	req.alignment = 1;
+	req.alignment = PAGE_SIZE / MANA_PAGE_SIZE;
 
 	/* Have GDMA start searching from 0 */
 	req.allocated_resources = 0;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bean Huo <beanhuo@micron.com>
[ Upstream commit 9fa268875ca4ff5cad0c1b957388a0aef39920c3 ]
Add an inline helper for retrieving whether the UFS device is busy or not.
Signed-off-by: Bean Huo <beanhuo@micron.com>
Link: https://lore.kernel.org/r/20231212220825.85255-2-beanhuo@iokpp.de
Reviewed-by: Avri Altman <avri.altman@wdc.com>
Reviewed-by: Thomas Weißschuh <linux@weissschuh.net>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Stable-dep-of: 4fa382be4304 ("scsi: ufs: core: Fix ufshcd_is_ufs_dev_busy() and ufshcd_eh_timed_out()")
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/ufs/core/ufshcd.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index 0ac0b6aaf9c62..fe1c56bc0a127 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -234,6 +234,12 @@ ufs_get_desired_pm_lvl_for_dev_link_state(enum ufs_dev_pwr_mode dev_state,
 	return UFS_PM_LVL_0;
 }
 
+static bool ufshcd_is_ufs_dev_busy(struct ufs_hba *hba)
+{
+	return (hba->clk_gating.active_reqs || hba->outstanding_reqs || hba->outstanding_tasks ||
+		hba->active_uic_cmd || hba->uic_async_done);
+}
+
 static const struct ufs_dev_quirk ufs_fixups[] = {
 	/* UFS cards deviations table */
 	{ .wmanufacturerid = UFS_VENDOR_MICRON,
@@ -1816,10 +1822,7 @@ static void ufshcd_gate_work(struct work_struct *work)
 		goto rel_lock;
 	}
 
-	if (hba->clk_gating.active_reqs
-	    || hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL
-	    || hba->outstanding_reqs || hba->outstanding_tasks
-	    || hba->active_uic_cmd || hba->uic_async_done)
+	if (ufshcd_is_ufs_dev_busy(hba) || hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL)
 		goto rel_lock;
 
 	spin_unlock_irqrestore(hba->host->host_lock, flags);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bean Huo <beanhuo@micron.com>
[ Upstream commit 6bf999e0eb41850d5c857102535d5c53b2ede224 ]
Add Real Time Clock (RTC) support for the UFS device. This enhancement is crucial for the internal maintenance operations of the UFS device. The patch enables the device to handle both absolute and relative time information. Furthermore, it includes a periodic task to update the RTC in accordance with the UFS spec, ensuring the accuracy of RTC information for the device's internal processes.
RTC and qTimestamp serve distinct purposes. The RTC provides a coarse level of granularity with, at best, approximate single-second resolution. This makes the RTC well-suited for the device to determine the approximate age of programmed blocks after being updated by the host. On the other hand, qTimestamp offers nanosecond granularity and is specifically designed for synchronizing Device Error Log entries with corresponding host-side logs.
Given that the RTC has been a standard feature since UFS Spec 2.0, and qTimestamp was introduced in UFS Spec 4.0, the majority of UFS devices currently on the market rely on RTC. Therefore, it is advisable to continue supporting RTC in the Linux kernel. This ensures compatibility with the prevailing UFS device implementations and facilitates seamless integration with existing hardware. By maintaining support for RTC, we ensure broad compatibility and avoid potential issues arising from deviations in device specifications across different UFS versions.
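As a worked example of the baseline handling (the constants follow from the mktime64() calls in the diff below): mktime64(2010, 1, 1, 0, 0, 0) - mktime64(1970, 1, 1, 0, 0, 0) = 1262304000 seconds, so an absolute-RTC device is written SECONDS_PASSED = now - 1262304000, while a relative-RTC device starts from baseline 0 and advances its baseline to "now" after each successful update.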
Signed-off-by: Bean Huo <beanhuo@micron.com>
Signed-off-by: Mike Bi <mikebi@micron.com>
Signed-off-by: Luca Porzio <lporzio@micron.com>
Link: https://lore.kernel.org/r/20231212220825.85255-3-beanhuo@iokpp.de
Acked-by: Avri Altman <avri.altman@wdc.com>
Reviewed-by: Thomas Weißschuh <linux@weissschuh.net>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Stable-dep-of: 4fa382be4304 ("scsi: ufs: core: Fix ufshcd_is_ufs_dev_busy() and ufshcd_eh_timed_out()")
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/ufs/core/ufshcd.c | 83 ++++++++++++++++++++++++++++++++++++++-
 include/ufs/ufs.h         | 13 ++++++
 include/ufs/ufshcd.h      |  4 ++
 3 files changed, 98 insertions(+), 2 deletions(-)
diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index fe1c56bc0a127..a9a7a84e6cbbc 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -98,6 +98,9 @@
 /* Polling time to wait for fDeviceInit */
 #define FDEVICEINIT_COMPL_TIMEOUT 1500 /* millisecs */
 
+/* Default RTC update every 10 seconds */
+#define UFS_RTC_UPDATE_INTERVAL_MS (10 * MSEC_PER_SEC)
+
 /* UFSHC 4.0 compliant HC support this mode. */
 static bool use_mcq_mode = true;
 
@@ -682,6 +685,8 @@ static void ufshcd_device_reset(struct ufs_hba *hba)
 			hba->dev_info.wb_enabled = false;
 			hba->dev_info.wb_buf_flush_enabled = false;
 		}
+		if (hba->dev_info.rtc_type == UFS_RTC_RELATIVE)
+			hba->dev_info.rtc_time_baseline = 0;
 	}
 	if (err != -EOPNOTSUPP)
 		ufshcd_update_evt_hist(hba, UFS_EVT_DEV_RESET, err);
@@ -8149,6 +8154,72 @@ static void ufs_fixup_device_setup(struct ufs_hba *hba)
 	ufshcd_vops_fixup_dev_quirks(hba);
 }
 
+static void ufshcd_update_rtc(struct ufs_hba *hba)
+{
+	struct timespec64 ts64;
+	int err;
+	u32 val;
+
+	ktime_get_real_ts64(&ts64);
+
+	if (ts64.tv_sec < hba->dev_info.rtc_time_baseline) {
+		dev_warn_once(hba->dev, "%s: Current time precedes previous setting!\n", __func__);
+		return;
+	}
+
+	/*
+	 * The Absolute RTC mode has a 136-year limit, spanning from 2010 to 2146. If a time beyond
+	 * 2146 is required, it is recommended to choose the relative RTC mode.
+	 */
+	val = ts64.tv_sec - hba->dev_info.rtc_time_baseline;
+
+	ufshcd_rpm_get_sync(hba);
+	err = ufshcd_query_attr(hba, UPIU_QUERY_OPCODE_WRITE_ATTR, QUERY_ATTR_IDN_SECONDS_PASSED,
+				0, 0, &val);
+	ufshcd_rpm_put_sync(hba);
+
+	if (err)
+		dev_err(hba->dev, "%s: Failed to update rtc %d\n", __func__, err);
+	else if (hba->dev_info.rtc_type == UFS_RTC_RELATIVE)
+		hba->dev_info.rtc_time_baseline = ts64.tv_sec;
+}
+
+static void ufshcd_rtc_work(struct work_struct *work)
+{
+	struct ufs_hba *hba;
+
+	hba = container_of(to_delayed_work(work), struct ufs_hba, ufs_rtc_update_work);
+
+	/* Update RTC only when there are no requests in progress and UFSHCI is operational */
+	if (!ufshcd_is_ufs_dev_busy(hba) && hba->ufshcd_state == UFSHCD_STATE_OPERATIONAL)
+		ufshcd_update_rtc(hba);
+
+	if (ufshcd_is_ufs_dev_active(hba))
+		schedule_delayed_work(&hba->ufs_rtc_update_work,
+				      msecs_to_jiffies(UFS_RTC_UPDATE_INTERVAL_MS));
+}
+
+static void ufs_init_rtc(struct ufs_hba *hba, u8 *desc_buf)
+{
+	u16 periodic_rtc_update = get_unaligned_be16(&desc_buf[DEVICE_DESC_PARAM_FRQ_RTC]);
+	struct ufs_dev_info *dev_info = &hba->dev_info;
+
+	if (periodic_rtc_update & UFS_RTC_TIME_BASELINE) {
+		dev_info->rtc_type = UFS_RTC_ABSOLUTE;
+
+		/*
+		 * The concept of measuring time in Linux as the number of seconds elapsed since
+		 * 00:00:00 UTC on January 1, 1970, and UFS ABS RTC is elapsed from January 1st
+		 * 2010 00:00, here we need to adjust ABS baseline.
+		 */
+		dev_info->rtc_time_baseline = mktime64(2010, 1, 1, 0, 0, 0) -
+					      mktime64(1970, 1, 1, 0, 0, 0);
+	} else {
+		dev_info->rtc_type = UFS_RTC_RELATIVE;
+		dev_info->rtc_time_baseline = 0;
+	}
+}
+
 static int ufs_get_device_desc(struct ufs_hba *hba)
 {
 	int err;
@@ -8201,6 +8272,8 @@ static int ufs_get_device_desc(struct ufs_hba *hba)
 
 	ufshcd_temp_notif_probe(hba, desc_buf);
 
+	ufs_init_rtc(hba, desc_buf);
+
 	if (hba->ext_iid_sup)
 		ufshcd_ext_iid_probe(hba, desc_buf);
 
@@ -8753,6 +8826,8 @@ static int ufshcd_device_init(struct ufs_hba *hba, bool init_dev_params)
 		ufshcd_force_reset_auto_bkops(hba);
 
 	ufshcd_set_timestamp_attr(hba);
+	schedule_delayed_work(&hba->ufs_rtc_update_work,
+			      msecs_to_jiffies(UFS_RTC_UPDATE_INTERVAL_MS));
 
 	/* Gear up to HS gear if supported */
 	if (hba->max_pwr_info.is_valid) {
@@ -9698,6 +9773,8 @@ static int __ufshcd_wl_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
 	ret = ufshcd_vops_suspend(hba, pm_op, POST_CHANGE);
 	if (ret)
 		goto set_link_active;
+
+	cancel_delayed_work_sync(&hba->ufs_rtc_update_work);
 	goto out;
 
 set_link_active:
@@ -9792,6 +9869,8 @@ static int __ufshcd_wl_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
 		if (ret)
 			goto set_old_link_state;
 		ufshcd_set_timestamp_attr(hba);
+		schedule_delayed_work(&hba->ufs_rtc_update_work,
+				      msecs_to_jiffies(UFS_RTC_UPDATE_INTERVAL_MS));
 	}
 
 	if (ufshcd_keep_autobkops_enabled_except_suspend(hba))
@@ -10500,8 +10579,8 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
 					UFS_SLEEP_PWR_MODE,
 					UIC_LINK_HIBERN8_STATE);
 
-	INIT_DELAYED_WORK(&hba->rpm_dev_flush_recheck_work,
-			  ufshcd_rpm_dev_flush_recheck_work);
+	INIT_DELAYED_WORK(&hba->rpm_dev_flush_recheck_work, ufshcd_rpm_dev_flush_recheck_work);
+	INIT_DELAYED_WORK(&hba->ufs_rtc_update_work, ufshcd_rtc_work);
 
 	/* Set the default auto-hiberate idle timer value to 150 ms */
 	if (ufshcd_is_auto_hibern8_supported(hba) && !hba->ahit) {
diff --git a/include/ufs/ufs.h b/include/ufs/ufs.h
index 49c90795a2a67..571a08ce91242 100644
--- a/include/ufs/ufs.h
+++ b/include/ufs/ufs.h
@@ -14,6 +14,7 @@
 #include <linux/bitops.h>
 #include <linux/types.h>
 #include <uapi/scsi/scsi_bsg_ufs.h>
+#include <linux/time64.h>
 
 /*
  * Using static_assert() is not allowed in UAPI header files. Hence the check
@@ -550,6 +551,14 @@ struct ufs_vreg_info {
 	struct ufs_vreg *vdd_hba;
 };
 
+/* UFS device descriptor wPeriodicRTCUpdate bit9 defines RTC time baseline */
+#define UFS_RTC_TIME_BASELINE BIT(9)
+
+enum ufs_rtc_time {
+	UFS_RTC_RELATIVE,
+	UFS_RTC_ABSOLUTE
+};
+
 struct ufs_dev_info {
 	bool	f_power_on_wp_en;
 	/* Keeps information if any of the LU is power on write protected */
@@ -577,6 +586,10 @@ struct ufs_dev_info {
 
 	/* UFS EXT_IID Enable */
 	bool	b_ext_iid_en;
+
+	/* UFS RTC */
+	enum ufs_rtc_time rtc_type;
+	time64_t rtc_time_baseline;
 };
 
 /*
diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
index 20d129914121d..d5aa832f8dba3 100644
--- a/include/ufs/ufshcd.h
+++ b/include/ufs/ufshcd.h
@@ -908,6 +908,8 @@ enum ufshcd_mcq_opr {
 * @mcq_base: Multi circular queue registers base address
 * @uhq: array of supported hardware queues
 * @dev_cmd_queue: Queue for issuing device management commands
+ * @mcq_opr: MCQ operation and runtime registers
+ * @ufs_rtc_update_work: A work for UFS RTC periodic update
 */
 struct ufs_hba {
 	void __iomem *mmio_base;
@@ -1068,6 +1070,8 @@ struct ufs_hba {
 	struct ufs_hw_queue *uhq;
 	struct ufs_hw_queue *dev_cmd_queue;
 	struct ufshcd_mcq_opr_info_t mcq_opr[OPR_MAX];
+
+	struct delayed_work ufs_rtc_update_work;
 };
 
 /**
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Avri Altman <avri.altman@wdc.com>
[ Upstream commit e738ba458e7539be1757dcdf85835a5c7b11fad4 ]
Prepare to remove hba->clk_gating.active_reqs check from ufshcd_is_ufs_dev_busy().
Signed-off-by: Avri Altman <avri.altman@wdc.com>
Link: https://lore.kernel.org/r/20241124070808.194860-2-avri.altman@wdc.com
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Stable-dep-of: 4fa382be4304 ("scsi: ufs: core: Fix ufshcd_is_ufs_dev_busy() and ufshcd_eh_timed_out()")
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/ufs/core/ufshcd.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index a9a7a84e6cbbc..83d69e0564f2d 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -237,10 +237,16 @@ ufs_get_desired_pm_lvl_for_dev_link_state(enum ufs_dev_pwr_mode dev_state,
 	return UFS_PM_LVL_0;
 }
 
+static bool ufshcd_has_pending_tasks(struct ufs_hba *hba)
+{
+	return hba->outstanding_tasks || hba->active_uic_cmd ||
+	       hba->uic_async_done;
+}
+
 static bool ufshcd_is_ufs_dev_busy(struct ufs_hba *hba)
 {
-	return (hba->clk_gating.active_reqs || hba->outstanding_reqs || hba->outstanding_tasks ||
-		hba->active_uic_cmd || hba->uic_async_done);
+	return hba->clk_gating.active_reqs || hba->outstanding_reqs ||
+	       ufshcd_has_pending_tasks(hba);
 }
 
 static const struct ufs_dev_quirk ufs_fixups[] = {
@@ -1883,8 +1889,7 @@ static void __ufshcd_release(struct ufs_hba *hba)
 
 	if (hba->clk_gating.active_reqs || hba->clk_gating.is_suspended ||
 	    hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL ||
-	    hba->outstanding_tasks || !hba->clk_gating.is_initialized ||
-	    hba->active_uic_cmd || hba->uic_async_done ||
+	    ufshcd_has_pending_tasks(hba) || !hba->clk_gating.is_initialized ||
 	    hba->clk_gating.state == CLKS_OFF)
 		return;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Avri Altman <avri.altman@wdc.com>
[ Upstream commit 7869c6521f5715688b3d1f1c897374a68544eef0 ]
Remove hba->clk_gating.active_reqs check from ufshcd_is_ufs_dev_busy() function to separate clock gating logic from general device busy checks.
Signed-off-by: Avri Altman <avri.altman@wdc.com>
Link: https://lore.kernel.org/r/20241124070808.194860-3-avri.altman@wdc.com
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Stable-dep-of: 4fa382be4304 ("scsi: ufs: core: Fix ufshcd_is_ufs_dev_busy() and ufshcd_eh_timed_out()")
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/ufs/core/ufshcd.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index 83d69e0564f2d..24f8b74e166de 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -245,8 +245,7 @@ static bool ufshcd_has_pending_tasks(struct ufs_hba *hba)
 
 static bool ufshcd_is_ufs_dev_busy(struct ufs_hba *hba)
 {
-	return hba->clk_gating.active_reqs || hba->outstanding_reqs ||
-	       ufshcd_has_pending_tasks(hba);
+	return hba->outstanding_reqs || ufshcd_has_pending_tasks(hba);
 }
 
 static const struct ufs_dev_quirk ufs_fixups[] = {
@@ -1833,7 +1832,9 @@ static void ufshcd_gate_work(struct work_struct *work)
 		goto rel_lock;
 	}
 
-	if (ufshcd_is_ufs_dev_busy(hba) || hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL)
+	if (ufshcd_is_ufs_dev_busy(hba) ||
+	    hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL ||
+	    hba->clk_gating.active_reqs)
 		goto rel_lock;
 
 	spin_unlock_irqrestore(hba->host->host_lock, flags);
@@ -8196,7 +8197,9 @@ static void ufshcd_rtc_work(struct work_struct *work)
 	hba = container_of(to_delayed_work(work), struct ufs_hba, ufs_rtc_update_work);
 
 	/* Update RTC only when there are no requests in progress and UFSHCI is operational */
-	if (!ufshcd_is_ufs_dev_busy(hba) && hba->ufshcd_state == UFSHCD_STATE_OPERATIONAL)
+	if (!ufshcd_is_ufs_dev_busy(hba) &&
+	    hba->ufshcd_state == UFSHCD_STATE_OPERATIONAL &&
+	    !hba->clk_gating.active_reqs)
 		ufshcd_update_rtc(hba);
 
 	if (ufshcd_is_ufs_dev_active(hba))
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bart Van Assche <bvanassche@acm.org>
[ Upstream commit 4fa382be430421e1445f9c95c4dc9b7e0949ae8a ]
ufshcd_is_ufs_dev_busy(), ufshcd_print_host_state() and ufshcd_eh_timed_out() are used in both modes (legacy mode and MCQ mode). hba->outstanding_reqs only represents the outstanding requests in legacy mode. Hence, change hba->outstanding_reqs into scsi_host_busy(hba->host) in these functions.
Fixes: eacb139b77ff ("scsi: ufs: core: mcq: Enable multi-circular queue")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Link: https://lore.kernel.org/r/20250214224352.3025151-1-bvanassche@acm.org
Reviewed-by: Peter Wang <peter.wang@mediatek.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/ufs/core/ufshcd.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index 24f8b74e166de..3412ac717807d 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -245,7 +245,7 @@ static bool ufshcd_has_pending_tasks(struct ufs_hba *hba)
 
 static bool ufshcd_is_ufs_dev_busy(struct ufs_hba *hba)
 {
-	return hba->outstanding_reqs || ufshcd_has_pending_tasks(hba);
+	return scsi_host_busy(hba->host) || ufshcd_has_pending_tasks(hba);
 }
 
 static const struct ufs_dev_quirk ufs_fixups[] = {
@@ -616,8 +616,8 @@ static void ufshcd_print_host_state(struct ufs_hba *hba)
 	const struct scsi_device *sdev_ufs = hba->ufs_device_wlun;
 
 	dev_err(hba->dev, "UFS Host state=%d\n", hba->ufshcd_state);
-	dev_err(hba->dev, "outstanding reqs=0x%lx tasks=0x%lx\n",
-		hba->outstanding_reqs, hba->outstanding_tasks);
+	dev_err(hba->dev, "%d outstanding reqs, tasks=0x%lx\n",
+		scsi_host_busy(hba->host), hba->outstanding_tasks);
 	dev_err(hba->dev, "saved_err=0x%x, saved_uic_err=0x%x\n",
 		hba->saved_err, hba->saved_uic_err);
 	dev_err(hba->dev, "Device power mode=%d, UIC link state=%d\n",
@@ -8973,7 +8973,7 @@ static enum scsi_timeout_action ufshcd_eh_timed_out(struct scsi_cmnd *scmd)
 	dev_info(hba->dev, "%s() finished; outstanding_tasks = %#lx.\n",
 		 __func__, hba->outstanding_tasks);
 
-	return hba->outstanding_reqs ? SCSI_EH_RESET_TIMER : SCSI_EH_DONE;
+	return scsi_host_busy(hba->host) ? SCSI_EH_RESET_TIMER : SCSI_EH_DONE;
 }
 
 static const struct attribute_group *ufshcd_driver_groups[] = {
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vasiliy Kovalev <kovalev@altlinux.org>
[ Upstream commit c84e125fff2615b4d9c259e762596134eddd2f27 ]
The issue was caused by dput(upper) being called before ovl_dentry_update_reval(), while upper->d_flags was still accessed in ovl_dentry_remote().
Move dput(upper) after its last use to prevent use-after-free.
BUG: KASAN: slab-use-after-free in ovl_dentry_remote fs/overlayfs/util.c:162 [inline]
BUG: KASAN: slab-use-after-free in ovl_dentry_update_reval+0xd2/0xf0 fs/overlayfs/util.c:167

Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
 print_address_description mm/kasan/report.c:377 [inline]
 print_report+0xc3/0x620 mm/kasan/report.c:488
 kasan_report+0xd9/0x110 mm/kasan/report.c:601
 ovl_dentry_remote fs/overlayfs/util.c:162 [inline]
 ovl_dentry_update_reval+0xd2/0xf0 fs/overlayfs/util.c:167
 ovl_link_up fs/overlayfs/copy_up.c:610 [inline]
 ovl_copy_up_one+0x2105/0x3490 fs/overlayfs/copy_up.c:1170
 ovl_copy_up_flags+0x18d/0x200 fs/overlayfs/copy_up.c:1223
 ovl_rename+0x39e/0x18c0 fs/overlayfs/dir.c:1136
 vfs_rename+0xf84/0x20a0 fs/namei.c:4893
 ...
 </TASK>
Fixes: b07d5cc93e1b ("ovl: update of dentry revalidate flags after copy up")
Reported-by: syzbot+316db8a1191938280eb6@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=316db8a1191938280eb6
Signed-off-by: Vasiliy Kovalev <kovalev@altlinux.org>
Link: https://lore.kernel.org/r/20250214215148.761147-1-kovalev@altlinux.org
Reviewed-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 fs/overlayfs/copy_up.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
index 18e018cb18117..dbf7b3cd70ca5 100644
--- a/fs/overlayfs/copy_up.c
+++ b/fs/overlayfs/copy_up.c
@@ -570,7 +570,6 @@ static int ovl_link_up(struct ovl_copy_up_ctx *c)
 	err = PTR_ERR(upper);
 	if (!IS_ERR(upper)) {
 		err = ovl_do_link(ofs, ovl_dentry_upper(c->dentry), udir, upper);
-		dput(upper);
 
 		if (!err) {
 			/* Restore timestamps on parent (best effort) */
@@ -578,6 +577,7 @@ static int ovl_link_up(struct ovl_copy_up_ctx *c)
 			ovl_dentry_set_upper_alias(c->dentry);
 			ovl_dentry_update_reval(c->dentry, upper);
 		}
+		dput(upper);
 	}
 	inode_unlock(udir);
 	if (err)
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Stephen Brennan <stephen.s.brennan@oracle.com>
[ Upstream commit 0b108e83795c9c23101f584ef7e3ab4f1f120ef0 ]
The RPC_TASK_* constants are defined as macros, which means that most kernel builds will not contain their definitions in the debuginfo. However, it's quite useful for debuggers to be able to view the task state constant and interpret it correctly. Conversion to an enum will ensure the constants are present in debuginfo and can be interpreted by debuggers without needing to hard-code them and track their changes.
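The difference is easy to see in a minimal sketch (illustrative names, not taken from the patch):

	/* A #define is erased by the preprocessor; a default (-g) build emits
	 * no DWARF record for it, so debuggers cannot resolve the name. */
	#define TASK_STATE_RUNNING_OLD 0

	/* An enumerator is a language-level constant emitted as a
	 * DW_TAG_enumerator, so gdb/drgn can print and interpret it. */
	enum { TASK_STATE_RUNNING_NEW };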
Signed-off-by: Stephen Brennan <stephen.s.brennan@oracle.com>
Signed-off-by: Anna Schumaker <anna.schumaker@oracle.com>
Stable-dep-of: 5bbd6e863b15 ("SUNRPC: Prevent looping due to rpc_signal_task() races")
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 include/linux/sunrpc/sched.h | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/include/linux/sunrpc/sched.h b/include/linux/sunrpc/sched.h
index 8f9bee0e21c3b..f80b90aca380a 100644
--- a/include/linux/sunrpc/sched.h
+++ b/include/linux/sunrpc/sched.h
@@ -140,13 +140,15 @@ struct rpc_task_setup {
 #define RPC_WAS_SENT(t)		((t)->tk_flags & RPC_TASK_SENT)
 #define RPC_IS_MOVEABLE(t)	((t)->tk_flags & RPC_TASK_MOVEABLE)
 
-#define RPC_TASK_RUNNING	0
-#define RPC_TASK_QUEUED		1
-#define RPC_TASK_ACTIVE		2
-#define RPC_TASK_NEED_XMIT	3
-#define RPC_TASK_NEED_RECV	4
-#define RPC_TASK_MSG_PIN_WAIT	5
-#define RPC_TASK_SIGNALLED	6
+enum {
+	RPC_TASK_RUNNING,
+	RPC_TASK_QUEUED,
+	RPC_TASK_ACTIVE,
+	RPC_TASK_NEED_XMIT,
+	RPC_TASK_NEED_RECV,
+	RPC_TASK_MSG_PIN_WAIT,
+	RPC_TASK_SIGNALLED,
+};
 
 #define rpc_test_and_set_running(t) \
 	test_and_set_bit(RPC_TASK_RUNNING, &(t)->tk_runstate)
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust <trond.myklebust@hammerspace.com>
[ Upstream commit 5bbd6e863b15a85221e49b9bdb2d5d8f0bb91f3d ]
If rpc_signal_task() is called while a task is in an rpc_call_done() callback function, and the latter calls rpc_restart_call(), the task can end up looping due to the RPC_TASK_SIGNALLED flag being set without the tk_rpc_status being set. Removing the redundant mechanism for signalling the task fixes the looping behaviour.
Reported-by: Li Lingfeng <lilingfeng3@huawei.com>
Fixes: 39494194f93b ("SUNRPC: Fix races with rpc_killall_tasks()")
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Anna Schumaker <anna.schumaker@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 include/linux/sunrpc/sched.h  | 3 +--
 include/trace/events/sunrpc.h | 3 +--
 net/sunrpc/sched.c            | 2 --
 3 files changed, 2 insertions(+), 6 deletions(-)
diff --git a/include/linux/sunrpc/sched.h b/include/linux/sunrpc/sched.h
index f80b90aca380a..a220b28904ca5 100644
--- a/include/linux/sunrpc/sched.h
+++ b/include/linux/sunrpc/sched.h
@@ -147,7 +147,6 @@ enum {
 	RPC_TASK_NEED_XMIT,
 	RPC_TASK_NEED_RECV,
 	RPC_TASK_MSG_PIN_WAIT,
-	RPC_TASK_SIGNALLED,
 };
 
 #define rpc_test_and_set_running(t) \
@@ -160,7 +159,7 @@ enum {
 
 #define RPC_IS_ACTIVATED(t)	test_bit(RPC_TASK_ACTIVE, &(t)->tk_runstate)
 
-#define RPC_SIGNALLED(t)	test_bit(RPC_TASK_SIGNALLED, &(t)->tk_runstate)
+#define RPC_SIGNALLED(t)	(READ_ONCE(task->tk_rpc_status) == -ERESTARTSYS)
 
 /*
  * Task priorities.
diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
index 6beb38c1dcb5e..9eba2ca0a6ff8 100644
--- a/include/trace/events/sunrpc.h
+++ b/include/trace/events/sunrpc.h
@@ -360,8 +360,7 @@ TRACE_EVENT(rpc_request,
 		{ (1UL << RPC_TASK_ACTIVE), "ACTIVE" },			\
 		{ (1UL << RPC_TASK_NEED_XMIT), "NEED_XMIT" },		\
 		{ (1UL << RPC_TASK_NEED_RECV), "NEED_RECV" },		\
-		{ (1UL << RPC_TASK_MSG_PIN_WAIT), "MSG_PIN_WAIT" },	\
-		{ (1UL << RPC_TASK_SIGNALLED), "SIGNALLED" })
+		{ (1UL << RPC_TASK_MSG_PIN_WAIT), "MSG_PIN_WAIT" })
 
 DECLARE_EVENT_CLASS(rpc_task_running,
 
diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
index cef623ea15060..9b45fbdc90cab 100644
--- a/net/sunrpc/sched.c
+++ b/net/sunrpc/sched.c
@@ -864,8 +864,6 @@ void rpc_signal_task(struct rpc_task *task)
 	if (!rpc_task_set_rpc_status(task, -ERESTARTSYS))
 		return;
 	trace_rpc_task_signalled(task, task->tk_action);
-	set_bit(RPC_TASK_SIGNALLED, &task->tk_runstate);
-	smp_mb__after_atomic();
 	queue = READ_ONCE(task->tk_waitqueue);
 	if (queue)
 		rpc_wake_up_queued_task(queue, task);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Benjamin Coddington <bcodding@redhat.com>
[ Upstream commit 7a2f6f7687c5f7083a35317cddec5ad9fa491443 ]
If the TLS handshake attempt returns -ETIMEDOUT, we currently translate that error into -EACCES. This becomes problematic for cases where the RPC layer is attempting to re-connect in paths that don't reasonably handle -EACCES, for example: writeback. The RPC layer can handle -ETIMEDOUT quite well, however, so if the handshake returns this error let's just pass it along.
Fixes: 75eb6af7acdf ("SUNRPC: Add a TCP-with-TLS RPC transport class")
Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
Signed-off-by: Anna Schumaker <anna.schumaker@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 net/sunrpc/xprtsock.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 1c4bc8234ea87..29df05879c8e9 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -2561,7 +2561,15 @@ static void xs_tls_handshake_done(void *data, int status, key_serial_t peerid)
 	struct sock_xprt *lower_transport =
 		container_of(lower_xprt, struct sock_xprt, xprt);
 
-	lower_transport->xprt_err = status ? -EACCES : 0;
+	switch (status) {
+	case 0:
+	case -EACCES:
+	case -ETIMEDOUT:
+		lower_transport->xprt_err = status;
+		break;
+	default:
+		lower_transport->xprt_err = -EACCES;
+	}
 	complete(&lower_transport->handshake_done);
 	xprt_put(lower_xprt);
 }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Or Har-Toov <ohartoov@nvidia.com>
[ Upstream commit 703289ce43f740b0096724300107df82d008552f ]
Add the new IBTA speed XDR, the rate that was added to the InfiniBand spec as part of XDR, supporting a signaling rate of 200 Gb/s.

In order to report that value to rdma-core, add a new u32 field to the query_port response.
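The arithmetic checks out against the rate table below: IB rates are expressed as multiples of the 2.5 Gb/s SDR lane rate, and a 4x link at XDR's 200 Gb/s per lane carries 800 Gb/s, hence the new IB_RATE_800_GBPS multiplier of 320 (320 * 2.5 = 800) added to ib_rate_to_mult().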
Signed-off-by: Or Har-Toov <ohartoov@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Link: https://lore.kernel.org/r/9d235fc600a999e8274010f0e18b40fa60540e6c.169520415...
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Stable-dep-of: c534ffda781f ("RDMA/mlx5: Fix AH static rate parsing")
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/infiniband/core/sysfs.c                   | 4 ++++
 drivers/infiniband/core/uverbs_std_types_device.c | 3 ++-
 drivers/infiniband/core/verbs.c                   | 3 +++
 include/rdma/ib_verbs.h                           | 2 ++
 include/uapi/rdma/ib_user_ioctl_verbs.h           | 3 ++-
 5 files changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c
index ec5efdc166601..9f97bef021497 100644
--- a/drivers/infiniband/core/sysfs.c
+++ b/drivers/infiniband/core/sysfs.c
@@ -342,6 +342,10 @@ static ssize_t rate_show(struct ib_device *ibdev, u32 port_num,
 		speed = " NDR";
 		rate = 1000;
 		break;
+	case IB_SPEED_XDR:
+		speed = " XDR";
+		rate = 2000;
+		break;
 	case IB_SPEED_SDR:
 	default:		/* default to SDR for invalid rates */
 		speed = " SDR";
diff --git a/drivers/infiniband/core/uverbs_std_types_device.c b/drivers/infiniband/core/uverbs_std_types_device.c
index 049684880ae03..fb0555647336f 100644
--- a/drivers/infiniband/core/uverbs_std_types_device.c
+++ b/drivers/infiniband/core/uverbs_std_types_device.c
@@ -203,6 +203,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QUERY_PORT)(
 
 	copy_port_attr_to_resp(&attr, &resp.legacy_resp, ib_dev, port_num);
 	resp.port_cap_flags2 = attr.port_cap_flags2;
+	resp.active_speed_ex = attr.active_speed;
 
 	return uverbs_copy_to_struct_or_zero(attrs, UVERBS_ATTR_QUERY_PORT_RESP,
 					     &resp, sizeof(resp));
@@ -461,7 +462,7 @@ DECLARE_UVERBS_NAMED_METHOD(
 	UVERBS_ATTR_PTR_OUT(
 		UVERBS_ATTR_QUERY_PORT_RESP,
 		UVERBS_ATTR_STRUCT(struct ib_uverbs_query_port_resp_ex,
-				   reserved),
+				   active_speed_ex),
 		UA_MANDATORY));
 
 DECLARE_UVERBS_NAMED_METHOD(
diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index 186ed3c22ec9e..ba05de0380e96 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -147,6 +147,7 @@ __attribute_const__ int ib_rate_to_mult(enum ib_rate rate)
 	case IB_RATE_50_GBPS:  return  20;
 	case IB_RATE_400_GBPS: return 160;
 	case IB_RATE_600_GBPS: return 240;
+	case IB_RATE_800_GBPS: return 320;
 	default:	       return  -1;
 	}
 }
@@ -176,6 +177,7 @@ __attribute_const__ enum ib_rate mult_to_ib_rate(int mult)
 	case 20:  return IB_RATE_50_GBPS;
 	case 160: return IB_RATE_400_GBPS;
 	case 240: return IB_RATE_600_GBPS;
+	case 320: return IB_RATE_800_GBPS;
 	default:  return IB_RATE_PORT_CURRENT;
 	}
 }
@@ -205,6 +207,7 @@ __attribute_const__ int ib_rate_to_mbps(enum ib_rate rate)
 	case IB_RATE_50_GBPS:  return 53125;
 	case IB_RATE_400_GBPS: return 425000;
 	case IB_RATE_600_GBPS: return 637500;
+	case IB_RATE_800_GBPS: return 850000;
 	default:	       return -1;
 	}
 }
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 62f9d126a71ad..bc459d0616297 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -561,6 +561,7 @@ enum ib_port_speed {
 	IB_SPEED_EDR	= 32,
 	IB_SPEED_HDR	= 64,
 	IB_SPEED_NDR	= 128,
+	IB_SPEED_XDR	= 256,
 };
 
 enum ib_stat_flag {
@@ -840,6 +841,7 @@ enum ib_rate {
 	IB_RATE_50_GBPS  = 20,
 	IB_RATE_400_GBPS = 21,
 	IB_RATE_600_GBPS = 22,
+	IB_RATE_800_GBPS = 23,
 };
 
 /**
diff --git a/include/uapi/rdma/ib_user_ioctl_verbs.h b/include/uapi/rdma/ib_user_ioctl_verbs.h
index d7c5aaa327445..fe15bc7e9f707 100644
--- a/include/uapi/rdma/ib_user_ioctl_verbs.h
+++ b/include/uapi/rdma/ib_user_ioctl_verbs.h
@@ -220,7 +220,8 @@ enum ib_uverbs_advise_mr_flag {
 struct ib_uverbs_query_port_resp_ex {
 	struct ib_uverbs_query_port_resp legacy_resp;
 	__u16 port_cap_flags2;
-	__u8  reserved[6];
+	__u8  reserved[2];
+	__u32 active_speed_ex;
 };
 
 struct ib_uverbs_qp_cap {
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Patrisious Haddad <phaddad@nvidia.com>
[ Upstream commit c534ffda781f44a1c6ac25ef6e0e444da38ca8af ]
Previously the static rate wasn't translated according to our PRM but simply used the 4 lower bits.

Correctly translate the static rate value passed in the AH creation attribute according to our PRM expected values.

In addition, change the 800 Gb/s mapping to zero, which is the PRM-specified value.
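For a concrete failure mode (inferred from the stat_rate_sl encoding, where the rate occupies the upper 4 bits of the byte): enum values above 15 cannot be stored raw, so IB_RATE_800_GBPS (23) would silently truncate to 7 in the old code; mlx5r_ib_rate() instead maps it to 0, the PRM value for "use the port's current rate".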
Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters")
Signed-off-by: Patrisious Haddad <phaddad@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Link: https://patch.msgid.link/18ef4cc5396caf80728341eb74738cd777596f60.1739187089...
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/infiniband/hw/mlx5/ah.c | 3 ++-
 drivers/infiniband/hw/mlx5/qp.c | 6 +++---
 drivers/infiniband/hw/mlx5/qp.h | 1 +
 3 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/ah.c b/drivers/infiniband/hw/mlx5/ah.c
index 505bc47fd575d..99036afb3aef0 100644
--- a/drivers/infiniband/hw/mlx5/ah.c
+++ b/drivers/infiniband/hw/mlx5/ah.c
@@ -67,7 +67,8 @@ static void create_ib_ah(struct mlx5_ib_dev *dev, struct mlx5_ib_ah *ah,
 		ah->av.tclass = grh->traffic_class;
 	}
 
-	ah->av.stat_rate_sl = (rdma_ah_get_static_rate(ah_attr) << 4);
+	ah->av.stat_rate_sl =
+		(mlx5r_ib_rate(dev, rdma_ah_get_static_rate(ah_attr)) << 4);
 
 	if (ah_attr->type == RDMA_AH_ATTR_TYPE_ROCE) {
 		if (init_attr->xmit_slave)
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 3df863c88b31d..0a9ae84600b20 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -3433,11 +3433,11 @@ static int ib_to_mlx5_rate_map(u8 rate)
 	return 0;
 }
 
-static int ib_rate_to_mlx5(struct mlx5_ib_dev *dev, u8 rate)
+int mlx5r_ib_rate(struct mlx5_ib_dev *dev, u8 rate)
 {
 	u32 stat_rate_support;
 
-	if (rate == IB_RATE_PORT_CURRENT)
+	if (rate == IB_RATE_PORT_CURRENT || rate == IB_RATE_800_GBPS)
 		return 0;
 
 	if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_600_GBPS)
@@ -3582,7 +3582,7 @@ static int mlx5_set_path(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 		       sizeof(grh->dgid.raw));
 	}
 
-	err = ib_rate_to_mlx5(dev, rdma_ah_get_static_rate(ah));
+	err = mlx5r_ib_rate(dev, rdma_ah_get_static_rate(ah));
 	if (err < 0)
 		return err;
 	MLX5_SET(ads, path, stat_rate, err);
diff --git a/drivers/infiniband/hw/mlx5/qp.h b/drivers/infiniband/hw/mlx5/qp.h
index b6ee7c3ee1ca1..2530e7730635f 100644
--- a/drivers/infiniband/hw/mlx5/qp.h
+++ b/drivers/infiniband/hw/mlx5/qp.h
@@ -56,4 +56,5 @@ int mlx5_core_xrcd_dealloc(struct mlx5_ib_dev *dev, u32 xrcdn);
 int mlx5_ib_qp_set_counter(struct ib_qp *qp, struct rdma_counter *counter);
 int mlx5_ib_qp_event_init(void);
 void mlx5_ib_qp_event_cleanup(void);
+int mlx5r_ib_rate(struct mlx5_ib_dev *dev, u8 rate);
 #endif /* _MLX5_IB_QP_H */
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ye Bin <yebin10@huawei.com>
[ Upstream commit dce5c4afd035e8090a26e5d776b1682c0e649683 ]
After commit 1bad6c4a57ef ("scsi: zero per-cmd private driver data for each MQ I/O"), the xen-scsifront/virtio_scsi/snic drivers all removed code that explicitly zeroed driver-private command data.
In combination with commit 464a00c9e0ad ("scsi: core: Kill DRIVER_SENSE"), after virtio_scsi performs a capacity expansion, the first request will return a unit attention to indicate that the capacity has changed, and then the original command is retried. As the driver-private command data was not cleared, the retried request would return the UA again and eventually time out and fail.
Zero driver-private command data when a request is retried.
Fixes: f7de50da1479 ("scsi: xen-scsifront: Remove code that zeroes driver-private command data")
Fixes: c2bb87318baa ("scsi: virtio_scsi: Remove code that zeroes driver-private command data")
Fixes: c3006a926468 ("scsi: snic: Remove code that zeroes driver-private command data")
Signed-off-by: Ye Bin <yebin10@huawei.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Link: https://lore.kernel.org/r/20250217021628.2929248-1-yebin@huaweicloud.com
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/scsi/scsi_lib.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index f026377f1cf1c..e6dc2c556fde9 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1570,13 +1570,6 @@ static blk_status_t scsi_prepare_cmd(struct request *req)
 	if (in_flight)
 		__set_bit(SCMD_STATE_INFLIGHT, &cmd->state);
 
-	/*
-	 * Only clear the driver-private command data if the LLD does not supply
-	 * a function to initialize that data.
-	 */
-	if (!shost->hostt->init_cmd_priv)
-		memset(cmd + 1, 0, shost->hostt->cmd_size);
-
 	cmd->prot_op = SCSI_PROT_NORMAL;
 	if (blk_rq_bytes(req))
 		cmd->sc_data_direction = rq_dma_dir(req);
@@ -1743,6 +1736,13 @@ static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx,
 	if (!scsi_host_queue_ready(q, shost, sdev, cmd))
 		goto out_dec_target_busy;
 
+	/*
+	 * Only clear the driver-private command data if the LLD does not supply
+	 * a function to initialize that data.
+	 */
+	if (shost->hostt->cmd_size && !shost->hostt->init_cmd_priv)
+		memset(cmd + 1, 0, shost->hostt->cmd_size);
+
 	if (!(req->rq_flags & RQF_DONTPREP)) {
 		ret = scsi_prepare_cmd(req);
 		if (ret != BLK_STS_OK)
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Patrisious Haddad phaddad@nvidia.com
[ Upstream commit e1a0bdbdfdf08428f0ede5ae49c7f4139ac73ef5 ]
When there is a failure during bind QP, the cleanup flow destroys the counter regardless of whether it is the one that created it, which is problematic since, if it isn't the one that created it, that counter could still be in use.
Fix that by destroying the counter only if it was created during this call.
Fixes: 45842fc627c7 ("IB/mlx5: Support statistic q counter configuration") Signed-off-by: Patrisious Haddad phaddad@nvidia.com Reviewed-by: Mark Zhang markzhang@nvidia.com Link: https://patch.msgid.link/25dfefddb0ebefa668c32e06a94d84e3216257cf.1740033937... Reviewed-by: Zhu Yanjun yanjun.zhu@linux.dev Signed-off-by: Leon Romanovsky leon@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/mlx5/counters.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/counters.c b/drivers/infiniband/hw/mlx5/counters.c index 8300ce6228350..b049bba215790 100644 --- a/drivers/infiniband/hw/mlx5/counters.c +++ b/drivers/infiniband/hw/mlx5/counters.c @@ -542,6 +542,7 @@ static int mlx5_ib_counter_bind_qp(struct rdma_counter *counter, struct ib_qp *qp) { struct mlx5_ib_dev *dev = to_mdev(qp->device); + bool new = false; int err;
if (!counter->id) { @@ -556,6 +557,7 @@ static int mlx5_ib_counter_bind_qp(struct rdma_counter *counter, return err; counter->id = MLX5_GET(alloc_q_counter_out, out, counter_set_id); + new = true; }
err = mlx5_ib_qp_set_counter(qp, counter); @@ -565,8 +567,10 @@ static int mlx5_ib_counter_bind_qp(struct rdma_counter *counter, return 0;
fail_set_counter: - mlx5_ib_counter_dealloc(counter); - counter->id = 0; + if (new) { + mlx5_ib_counter_dealloc(counter); + counter->id = 0; + }
return err; }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Arnd Bergmann arnd@arndb.de
[ Upstream commit 1f7a4f98c11fbeb18ed21f3b3a497e90a50ad2e0 ]
There is a warning about unused variables when building with W=1 and no procfs:
net/sunrpc/cache.c:1660:30: error: 'cache_flush_proc_ops' defined but not used [-Werror=unused-const-variable=]
 1660 | static const struct proc_ops cache_flush_proc_ops = {
      |                              ^~~~~~~~~~~~~~~~~~~~
net/sunrpc/cache.c:1622:30: error: 'content_proc_ops' defined but not used [-Werror=unused-const-variable=]
 1622 | static const struct proc_ops content_proc_ops = {
      |                              ^~~~~~~~~~~~~~~~
net/sunrpc/cache.c:1598:30: error: 'cache_channel_proc_ops' defined but not used [-Werror=unused-const-variable=]
 1598 | static const struct proc_ops cache_channel_proc_ops = {
      |                              ^~~~~~~~~~~~~~~~~~~~~~
These are used inside of an #ifdef, so replacing that with an IS_ENABLED() check lets the compiler see how they are used while still dropping them during dead code elimination.
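A minimal sketch of the idiom (names invented, not the actual sunrpc code): because the branch body still references the symbols, the compiler no longer warns about unused const variables, yet constant-folding of IS_ENABLED() lets dead-code elimination drop both the body and the tables when CONFIG_PROC_FS is off.

	/* sketch: IS_ENABLED() keeps symbol references visible to the compiler */
	static int create_proc_entries(void)
	{
		if (!IS_ENABLED(CONFIG_PROC_FS))
			return 0;	/* rest is provably dead and gets eliminated */

		/* ... code referencing the proc_ops tables ... */
		return 0;
	}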
Fixes: dbf847ecb631 ("knfsd: allow cache_register to return error on failure") Reviewed-by: Jeff Layton jlayton@kernel.org Acked-by: Chuck Lever chuck.lever@oracle.com Signed-off-by: Arnd Bergmann arnd@arndb.de Signed-off-by: Anna Schumaker anna.schumaker@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/sunrpc/cache.c | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c index 3298da2e37e43..cb6a6bc9fea77 100644 --- a/net/sunrpc/cache.c +++ b/net/sunrpc/cache.c @@ -1675,12 +1675,14 @@ static void remove_cache_proc_entries(struct cache_detail *cd) } }
-#ifdef CONFIG_PROC_FS static int create_cache_proc_entries(struct cache_detail *cd, struct net *net) { struct proc_dir_entry *p; struct sunrpc_net *sn;
+ if (!IS_ENABLED(CONFIG_PROC_FS)) + return 0; + sn = net_generic(net, sunrpc_net_id); cd->procfs = proc_mkdir(cd->name, sn->proc_net_rpc); if (cd->procfs == NULL) @@ -1708,12 +1710,6 @@ static int create_cache_proc_entries(struct cache_detail *cd, struct net *net) remove_cache_proc_entries(cd); return -ENOMEM; } -#else /* CONFIG_PROC_FS */ -static int create_cache_proc_entries(struct cache_detail *cd, struct net *net) -{ - return 0; -} -#endif
void __init cache_initialize(void) {
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai tiwai@suse.de
[ Upstream commit a3bdd8f5c2217e1cb35db02c2eed36ea20fb50f5 ]
We fixed the UAF issue in USB MIDI code by canceling the pending work at closing each MIDI output device in the commit below. However, this assumed that it's the only user tied to the endpoint, and it resulted in unexpected data truncations when multiple devices are assigned to a single endpoint and opened simultaneously.

To address the unexpected MIDI message drops, simply replace cancel_work_sync() with flush_work(). The drain callback should already have been invoked before the close callback, hence the port->active flag must already be cleared. So this just assures that the pending work is finished before freeing the resources.
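In other words, a sketch of the workqueue semantics relied on here:

	cancel_work_sync(&port->ep->work);	/* unqueues a pending item: queued MIDI bytes are lost */
	flush_work(&port->ep->work);		/* waits for a pending/running item to complete */

Both guarantee the work item is not running afterwards, so the original UAF fix is preserved; only the data-dropping behaviour changes.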
Fixes: 0125de38122f ("ALSA: usb-audio: Cancel pending work at closing a MIDI substream") Reported-and-tested-by: John Keeping jkeeping@inmusicbrands.com Closes: https://lore.kernel.org/20250217111647.3368132-1-jkeeping@inmusicbrands.com Link: https://patch.msgid.link/20250218114024.23125-1-tiwai@suse.de Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- sound/usb/midi.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sound/usb/midi.c b/sound/usb/midi.c index 6b0993258e039..6d861046b582b 100644 --- a/sound/usb/midi.c +++ b/sound/usb/midi.c @@ -1145,7 +1145,7 @@ static int snd_usbmidi_output_close(struct snd_rawmidi_substream *substream) { struct usbmidi_out_port *port = substream->runtime->private_data;
- cancel_work_sync(&port->ep->work); + flush_work(&port->ep->work); return substream_open(substream, 0, 0); }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Luiz Augusto von Dentz luiz.von.dentz@intel.com
[ Upstream commit b25120e1d5f2ebb3db00af557709041f47f7f3d0 ]
L2CAP_ECRED_CONN_RSP needs to return the DCIDs in the same order in which the SCIDs were received, but the order is reversed because list_add() prepends channels to the list, so the response comes out backwards (see the sketch after the traces below):
> ACL Data RX: Handle 16 flags 0x02 dlen 26
      LE L2CAP: Enhanced Credit Connection Request (0x17) ident 2 len 18
        PSM: 39 (0x0027)
        MTU: 256
        MPS: 251
        Credits: 65535
        Source CID: 116
        Source CID: 117
        Source CID: 118
        Source CID: 119
        Source CID: 120
< ACL Data TX: Handle 16 flags 0x00 dlen 26
      LE L2CAP: Enhanced Credit Connection Response (0x18) ident 2 len 18
        MTU: 517
        MPS: 247
        Credits: 3
        Result: Connection successful (0x0000)
        Destination CID: 68
        Destination CID: 67
        Destination CID: 66
        Destination CID: 65
        Destination CID: 64
Also make sure the response doesn't include channels that are not in BT_CONNECT2 state, since chan->ident can be set to the same value, as in the following trace:
< ACL Data TX: Handle 16 flags 0x00 dlen 12
      LE L2CAP: LE Flow Control Credit (0x16) ident 6 len 4
        Source CID: 64
        Credits: 1
...
> ACL Data RX: Handle 16 flags 0x02 dlen 18
      LE L2CAP: Enhanced Credit Connection Request (0x17) ident 6 len 10
        PSM: 39 (0x0027)
        MTU: 517
        MPS: 251
        Credits: 255
        Source CID: 70
< ACL Data TX: Handle 16 flags 0x00 dlen 20
      LE L2CAP: Enhanced Credit Connection Response (0x18) ident 6 len 12
        MTU: 517
        MPS: 247
        Credits: 3
        Result: Connection successful (0x0000)
        Destination CID: 64
        Destination CID: 68
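A minimal sketch of the list behaviour behind the first fix (simplified from the patch below):

	/* list_add() prepends, so iteration yields channels newest-first */
	list_add(&chan->list, &conn->chan_l);

	/* list_add_tail() appends, preserving the order the SCIDs arrived in */
	list_add_tail(&chan->list, &conn->chan_l);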
Closes: https://github.com/bluez/bluez/issues/1094 Fixes: 9aa9d9473f15 ("Bluetooth: L2CAP: Fix responding with wrong PDU type") Signed-off-by: Luiz Augusto von Dentz luiz.von.dentz@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/bluetooth/l2cap_core.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c index acb148759bd04..304ebb31cebba 100644 --- a/net/bluetooth/l2cap_core.c +++ b/net/bluetooth/l2cap_core.c @@ -636,7 +636,8 @@ void __l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan) test_bit(FLAG_HOLD_HCI_CONN, &chan->flags)) hci_conn_hold(conn->hcon);
- list_add(&chan->list, &conn->chan_l); + /* Append to the list since the order matters for ECRED */ + list_add_tail(&chan->list, &conn->chan_l); }
void l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan) @@ -3774,7 +3775,11 @@ static void l2cap_ecred_rsp_defer(struct l2cap_chan *chan, void *data) { struct l2cap_ecred_rsp_data *rsp = data;
- if (test_bit(FLAG_ECRED_CONN_REQ_SENT, &chan->flags)) + /* Check if channel for outgoing connection or if it wasn't deferred + * since in those cases it must be skipped. + */ + if (test_bit(FLAG_ECRED_CONN_REQ_SENT, &chan->flags) || + !test_and_clear_bit(FLAG_DEFER_SETUP, &chan->flags)) return;
/* Reset ident so only one response is sent */
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: David Howells dhowells@redhat.com
[ Upstream commit c34d999ca3145d9fe858258cc3342ec493f47d2e ]
The rxperf RPCs seem to have a magic cookie at the end of the request that the unmarshalling code failed to take into account. Fix the rxperf code to expect this.
Fixes: 75bfdbf2fca3 ("rxrpc: Implement an in-kernel rxperf server for testing purposes") Signed-off-by: David Howells dhowells@redhat.com cc: Marc Dionne marc.dionne@auristor.com cc: Simon Horman horms@kernel.org cc: linux-afs@lists.infradead.org Link: https://patch.msgid.link/20250218192250.296870-2-dhowells@redhat.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/rxrpc/rxperf.c | 12 ++++++++++++ 1 file changed, 12 insertions(+)
diff --git a/net/rxrpc/rxperf.c b/net/rxrpc/rxperf.c index 085e7892d3104..b1536da2246b8 100644 --- a/net/rxrpc/rxperf.c +++ b/net/rxrpc/rxperf.c @@ -478,6 +478,18 @@ static int rxperf_deliver_request(struct rxperf_call *call) call->unmarshal++; fallthrough; case 2: + ret = rxperf_extract_data(call, true); + if (ret < 0) + return ret; + + /* Deal with the terminal magic cookie. */ + call->iov_len = 4; + call->kvec[0].iov_len = call->iov_len; + call->kvec[0].iov_base = call->tmp; + iov_iter_kvec(&call->iter, READ, call->kvec, 1, call->iov_len); + call->unmarshal++; + fallthrough; + case 3: ret = rxperf_extract_data(call, false); if (ret < 0) return ret;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: David Howells dhowells@redhat.com
[ Upstream commit ca0e79a46097d54e4af46c67c852479d97af35bb ]
Make it possible to find the afs_volume structs that are using an afs_server struct to aid in breaking volume callbacks.
The way this is done is that each afs_volume already has an array of afs_server_entry records that point to the servers where that volume might be found. An afs_volume backpointer and a list node is added to each entry and each entry is then added to an RCU-traversable list on the afs_server to which it points.
Signed-off-by: David Howells dhowells@redhat.com cc: Marc Dionne marc.dionne@auristor.com cc: linux-afs@lists.infradead.org Stable-dep-of: add117e48df4 ("afs: Fix the server_list to unuse a displaced server rather than putting it") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/cell.c | 1 + fs/afs/internal.h | 23 +++++---- fs/afs/server.c | 1 + fs/afs/server_list.c | 112 +++++++++++++++++++++++++++++++++++++++---- fs/afs/vl_alias.c | 2 +- fs/afs/volume.c | 36 ++++++++------ 6 files changed, 143 insertions(+), 32 deletions(-)
diff --git a/fs/afs/cell.c b/fs/afs/cell.c index 926cb1188eba6..7c0dce8eecadd 100644 --- a/fs/afs/cell.c +++ b/fs/afs/cell.c @@ -161,6 +161,7 @@ static struct afs_cell *afs_alloc_cell(struct afs_net *net, refcount_set(&cell->ref, 1); atomic_set(&cell->active, 0); INIT_WORK(&cell->manager, afs_manage_cell_work); + spin_lock_init(&cell->vs_lock); cell->volumes = RB_ROOT; INIT_HLIST_HEAD(&cell->proc_volumes); seqlock_init(&cell->volume_lock); diff --git a/fs/afs/internal.h b/fs/afs/internal.h index 2f135d19545b1..0973cd0a39695 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -379,6 +379,7 @@ struct afs_cell { unsigned int debug_id;
/* The volumes belonging to this cell */ + spinlock_t vs_lock; /* Lock for server->volumes */ struct rb_root volumes; /* Tree of volumes on this server */ struct hlist_head proc_volumes; /* procfs volume list */ seqlock_t volume_lock; /* For volumes */ @@ -502,6 +503,7 @@ struct afs_server { struct hlist_node addr4_link; /* Link in net->fs_addresses4 */ struct hlist_node addr6_link; /* Link in net->fs_addresses6 */ struct hlist_node proc_link; /* Link in net->fs_proc */ + struct list_head volumes; /* RCU list of afs_server_entry objects */ struct work_struct initcb_work; /* Work for CB.InitCallBackState* */ struct afs_server *gc_next; /* Next server in manager's list */ time64_t unuse_time; /* Time at which last unused */ @@ -550,12 +552,14 @@ struct afs_server { */ struct afs_server_entry { struct afs_server *server; + struct afs_volume *volume; + struct list_head slink; /* Link in server->volumes */ };
struct afs_server_list { struct rcu_head rcu; - afs_volid_t vids[AFS_MAXTYPES]; /* Volume IDs */ refcount_t usage; + bool attached; /* T if attached to servers */ unsigned char nr_servers; unsigned char preferred; /* Preferred server */ unsigned short vnovol_mask; /* Servers to be skipped due to VNOVOL */ @@ -568,10 +572,9 @@ struct afs_server_list { * Live AFS volume management. */ struct afs_volume { - union { - struct rcu_head rcu; - afs_volid_t vid; /* volume ID */ - }; + struct rcu_head rcu; + afs_volid_t vid; /* The volume ID of this volume */ + afs_volid_t vids[AFS_MAXTYPES]; /* All associated volume IDs */ refcount_t ref; time64_t update_at; /* Time at which to next update */ struct afs_cell *cell; /* Cell to which belongs (pins ref) */ @@ -1453,10 +1456,14 @@ static inline struct afs_server_list *afs_get_serverlist(struct afs_server_list }
extern void afs_put_serverlist(struct afs_net *, struct afs_server_list *); -extern struct afs_server_list *afs_alloc_server_list(struct afs_cell *, struct key *, - struct afs_vldb_entry *, - u8); +struct afs_server_list *afs_alloc_server_list(struct afs_volume *volume, + struct key *key, + struct afs_vldb_entry *vldb); extern bool afs_annotate_server_list(struct afs_server_list *, struct afs_server_list *); +void afs_attach_volume_to_servers(struct afs_volume *volume, struct afs_server_list *slist); +void afs_reattach_volume_to_servers(struct afs_volume *volume, struct afs_server_list *slist, + struct afs_server_list *old); +void afs_detach_volume_from_servers(struct afs_volume *volume, struct afs_server_list *slist);
/* * super.c diff --git a/fs/afs/server.c b/fs/afs/server.c index 0bd2f5ba6900c..87381c2ffe374 100644 --- a/fs/afs/server.c +++ b/fs/afs/server.c @@ -236,6 +236,7 @@ static struct afs_server *afs_alloc_server(struct afs_cell *cell, server->addr_version = alist->version; server->uuid = *uuid; rwlock_init(&server->fs_lock); + INIT_LIST_HEAD(&server->volumes); INIT_WORK(&server->initcb_work, afs_server_init_callback_work); init_waitqueue_head(&server->probe_wq); INIT_LIST_HEAD(&server->probe_link); diff --git a/fs/afs/server_list.c b/fs/afs/server_list.c index b59896b1de0af..4d6369477f54e 100644 --- a/fs/afs/server_list.c +++ b/fs/afs/server_list.c @@ -24,13 +24,13 @@ void afs_put_serverlist(struct afs_net *net, struct afs_server_list *slist) /* * Build a server list from a VLDB record. */ -struct afs_server_list *afs_alloc_server_list(struct afs_cell *cell, +struct afs_server_list *afs_alloc_server_list(struct afs_volume *volume, struct key *key, - struct afs_vldb_entry *vldb, - u8 type_mask) + struct afs_vldb_entry *vldb) { struct afs_server_list *slist; struct afs_server *server; + unsigned int type_mask = 1 << volume->type; int ret = -ENOMEM, nr_servers = 0, i, j;
for (i = 0; i < vldb->nr_servers; i++) @@ -44,15 +44,12 @@ struct afs_server_list *afs_alloc_server_list(struct afs_cell *cell, refcount_set(&slist->usage, 1); rwlock_init(&slist->lock);
- for (i = 0; i < AFS_MAXTYPES; i++) - slist->vids[i] = vldb->vid[i]; - /* Make sure a records exists for each server in the list. */ for (i = 0; i < vldb->nr_servers; i++) { if (!(vldb->fs_mask[i] & type_mask)) continue;
- server = afs_lookup_server(cell, key, &vldb->fs_server[i], + server = afs_lookup_server(volume->cell, key, &vldb->fs_server[i], vldb->addr_version[i]); if (IS_ERR(server)) { ret = PTR_ERR(server); @@ -70,7 +67,7 @@ struct afs_server_list *afs_alloc_server_list(struct afs_cell *cell, break; if (j < slist->nr_servers) { if (slist->servers[j].server == server) { - afs_put_server(cell->net, server, + afs_put_server(volume->cell->net, server, afs_server_trace_put_slist_isort); continue; } @@ -81,6 +78,7 @@ struct afs_server_list *afs_alloc_server_list(struct afs_cell *cell, }
slist->servers[j].server = server; + slist->servers[j].volume = volume; slist->nr_servers++; }
@@ -92,7 +90,7 @@ struct afs_server_list *afs_alloc_server_list(struct afs_cell *cell, return slist;
error_2: - afs_put_serverlist(cell->net, slist); + afs_put_serverlist(volume->cell->net, slist); error: return ERR_PTR(ret); } @@ -127,3 +125,99 @@ bool afs_annotate_server_list(struct afs_server_list *new,
return true; } + +/* + * Attach a volume to the servers it is going to use. + */ +void afs_attach_volume_to_servers(struct afs_volume *volume, struct afs_server_list *slist) +{ + struct afs_server_entry *se, *pe; + struct afs_server *server; + struct list_head *p; + unsigned int i; + + spin_lock(&volume->cell->vs_lock); + + for (i = 0; i < slist->nr_servers; i++) { + se = &slist->servers[i]; + server = se->server; + + list_for_each(p, &server->volumes) { + pe = list_entry(p, struct afs_server_entry, slink); + if (volume->vid <= pe->volume->vid) + break; + } + list_add_tail_rcu(&se->slink, p); + } + + slist->attached = true; + spin_unlock(&volume->cell->vs_lock); +} + +/* + * Reattach a volume to the servers it is going to use when server list is + * replaced. We try to switch the attachment points to avoid rewalking the + * lists. + */ +void afs_reattach_volume_to_servers(struct afs_volume *volume, struct afs_server_list *new, + struct afs_server_list *old) +{ + unsigned int n = 0, o = 0; + + spin_lock(&volume->cell->vs_lock); + + while (n < new->nr_servers || o < old->nr_servers) { + struct afs_server_entry *pn = n < new->nr_servers ? &new->servers[n] : NULL; + struct afs_server_entry *po = o < old->nr_servers ? &old->servers[o] : NULL; + struct afs_server_entry *s; + struct list_head *p; + int diff; + + if (pn && po && pn->server == po->server) { + list_replace_rcu(&po->slink, &pn->slink); + n++; + o++; + continue; + } + + if (pn && po) + diff = memcmp(&pn->server->uuid, &po->server->uuid, + sizeof(pn->server->uuid)); + else + diff = pn ? -1 : 1; + + if (diff < 0) { + list_for_each(p, &pn->server->volumes) { + s = list_entry(p, struct afs_server_entry, slink); + if (volume->vid <= s->volume->vid) + break; + } + list_add_tail_rcu(&pn->slink, p); + n++; + } else { + list_del_rcu(&po->slink); + o++; + } + } + + spin_unlock(&volume->cell->vs_lock); +} + +/* + * Detach a volume from the servers it has been using. + */ +void afs_detach_volume_from_servers(struct afs_volume *volume, struct afs_server_list *slist) +{ + unsigned int i; + + if (!slist->attached) + return; + + spin_lock(&volume->cell->vs_lock); + + for (i = 0; i < slist->nr_servers; i++) + list_del_rcu(&slist->servers[i].slink); + + slist->attached = false; + spin_unlock(&volume->cell->vs_lock); +} diff --git a/fs/afs/vl_alias.c b/fs/afs/vl_alias.c index 83cf1bfbe343a..b2cc10df95308 100644 --- a/fs/afs/vl_alias.c +++ b/fs/afs/vl_alias.c @@ -126,7 +126,7 @@ static int afs_compare_volume_slists(const struct afs_volume *vol_a, lb = rcu_dereference(vol_b->servers);
for (i = 0; i < AFS_MAXTYPES; i++) - if (la->vids[i] != lb->vids[i]) + if (vol_a->vids[i] != vol_b->vids[i]) return 0;
while (a < la->nr_servers && b < lb->nr_servers) { diff --git a/fs/afs/volume.c b/fs/afs/volume.c index c028598a903c9..0f64b97581272 100644 --- a/fs/afs/volume.c +++ b/fs/afs/volume.c @@ -72,11 +72,11 @@ static void afs_remove_volume_from_cell(struct afs_volume *volume) */ static struct afs_volume *afs_alloc_volume(struct afs_fs_context *params, struct afs_vldb_entry *vldb, - unsigned long type_mask) + struct afs_server_list **_slist) { struct afs_server_list *slist; struct afs_volume *volume; - int ret = -ENOMEM; + int ret = -ENOMEM, i;
volume = kzalloc(sizeof(struct afs_volume), GFP_KERNEL); if (!volume) @@ -95,13 +95,16 @@ static struct afs_volume *afs_alloc_volume(struct afs_fs_context *params, rwlock_init(&volume->cb_v_break_lock); memcpy(volume->name, vldb->name, vldb->name_len + 1);
- slist = afs_alloc_server_list(params->cell, params->key, vldb, type_mask); + for (i = 0; i < AFS_MAXTYPES; i++) + volume->vids[i] = vldb->vid[i]; + + slist = afs_alloc_server_list(volume, params->key, vldb); if (IS_ERR(slist)) { ret = PTR_ERR(slist); goto error_1; }
- refcount_set(&slist->usage, 1); + *_slist = slist; rcu_assign_pointer(volume->servers, slist); trace_afs_volume(volume->vid, 1, afs_volume_trace_alloc); return volume; @@ -117,17 +120,19 @@ static struct afs_volume *afs_alloc_volume(struct afs_fs_context *params, * Look up or allocate a volume record. */ static struct afs_volume *afs_lookup_volume(struct afs_fs_context *params, - struct afs_vldb_entry *vldb, - unsigned long type_mask) + struct afs_vldb_entry *vldb) { + struct afs_server_list *slist; struct afs_volume *candidate, *volume;
- candidate = afs_alloc_volume(params, vldb, type_mask); + candidate = afs_alloc_volume(params, vldb, &slist); if (IS_ERR(candidate)) return candidate;
volume = afs_insert_volume_into_cell(params->cell, candidate); - if (volume != candidate) + if (volume == candidate) + afs_attach_volume_to_servers(volume, slist); + else afs_put_volume(params->net, candidate, afs_volume_trace_put_cell_dup); return volume; } @@ -208,8 +213,7 @@ struct afs_volume *afs_create_volume(struct afs_fs_context *params) goto error; }
- type_mask = 1UL << params->type; - volume = afs_lookup_volume(params, vldb, type_mask); + volume = afs_lookup_volume(params, vldb);
error: kfree(vldb); @@ -221,14 +225,17 @@ struct afs_volume *afs_create_volume(struct afs_fs_context *params) */ static void afs_destroy_volume(struct afs_net *net, struct afs_volume *volume) { + struct afs_server_list *slist = rcu_access_pointer(volume->servers); + _enter("%p", volume);
#ifdef CONFIG_AFS_FSCACHE ASSERTCMP(volume->cache, ==, NULL); #endif
+ afs_detach_volume_from_servers(volume, slist); afs_remove_volume_from_cell(volume); - afs_put_serverlist(net, rcu_access_pointer(volume->servers)); + afs_put_serverlist(net, slist); afs_put_cell(volume->cell, afs_cell_trace_put_vol); trace_afs_volume(volume->vid, refcount_read(&volume->ref), afs_volume_trace_free); @@ -362,8 +369,7 @@ static int afs_update_volume_status(struct afs_volume *volume, struct key *key) }
/* See if the volume's server list got updated. */ - new = afs_alloc_server_list(volume->cell, key, - vldb, (1 << volume->type)); + new = afs_alloc_server_list(volume, key, vldb); if (IS_ERR(new)) { ret = PTR_ERR(new); goto error_vldb; @@ -384,9 +390,11 @@ static int afs_update_volume_status(struct afs_volume *volume, struct key *key)
volume->update_at = ktime_get_real_seconds() + afs_volume_record_life; write_unlock(&volume->servers_lock); - ret = 0;
+ if (discard == old) + afs_reattach_volume_to_servers(volume, new, old); afs_put_serverlist(volume->cell->net, discard); + ret = 0; error_vldb: kfree(vldb); error:
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: David Howells dhowells@redhat.com
[ Upstream commit add117e48df4788a86a21bd0515833c0a6db1ad1 ]
When allocating and building an afs_server_list struct object from a VLDB record, we look up each server address to get the server record for it - but a server may have more than one entry in the record and we discard the duplicate pointers. Currently, however, when we discard, we only put a server record, not unuse it - but the lookup got an active-user count.
The active-user count on an afs_server object determines its lifetime whereas the refcount keeps the memory backing it around. Failing to reduce the active-user counter prevents the record from being cleaned up and can lead to multiple copies being seen - and pointing to deleted afs_cell objects and other such things.
Fix this by switching the incorrect 'put' to an 'unuse' instead.
Without this, occasionally, a dead server record can be seen in /proc/net/afs/servers and list corruption may be observed:
list_del corruption. prev->next should be ffff888102423e40, but was 0000000000000000. (prev=ffff88810140cd38)
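For context, a rough sketch of the two counters involved, assuming the helpers introduced by the commit under Fixes (afs_use_server()/afs_unuse_server() pair with the lookup, afs_get_server()/afs_put_server() pair with bare references):

	server = afs_lookup_server(...);	/* returns with an active-user count held */
	...
	afs_unuse_server(net, server, why);	/* correct: drops the active-user count and a ref */
	afs_put_server(net, server, why);	/* wrong here: drops only the ref */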
Fixes: 977e5f8ed0ab ("afs: Split the usage count on struct afs_server") Signed-off-by: David Howells dhowells@redhat.com cc: Marc Dionne marc.dionne@auristor.com cc: Simon Horman horms@kernel.org cc: linux-afs@lists.infradead.org Link: https://patch.msgid.link/20250218192250.296870-5-dhowells@redhat.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/server_list.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/afs/server_list.c b/fs/afs/server_list.c index 4d6369477f54e..89c75d934f79e 100644 --- a/fs/afs/server_list.c +++ b/fs/afs/server_list.c @@ -67,8 +67,8 @@ struct afs_server_list *afs_alloc_server_list(struct afs_volume *volume, break; if (j < slist->nr_servers) { if (slist->servers[j].server == server) { - afs_put_server(volume->cell->net, server, - afs_server_trace_put_slist_isort); + afs_unuse_server(volume->cell->net, server, + afs_server_trace_put_slist_isort); continue; }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit 0e4427f8f587c4b603475468bb3aee9418574893 ]
After commit 22600596b675 ("ipv4: give an IPv4 dev to blackhole_netdev") IPv4 neighbors can be constructed on the blackhole net device, but they are constructed with an output function (neigh_direct_output()) that simply calls dev_queue_xmit(). The latter will transmit packets via 'skb->dev' which might not be the blackhole net device if dst_dev_put() switched 'dst->dev' to the blackhole net device while another CPU was using the dst entry in ip_output(), but after it already initialized 'skb->dev' from 'dst->dev'.
Specifically, the following can happen:
CPU1                                    CPU2

udp_sendmsg(sk1)                        udp_sendmsg(sk2)
  udp_send_skb()                          [...]
    ip_output()
      skb->dev = skb_dst(skb)->dev
                                          dst_dev_put()
                                            dst->dev = blackhole_netdev
      ip_finish_output2()
        resolves neigh on dst->dev
        neigh_output()
          neigh_direct_output()
            dev_queue_xmit()
This will result in IPv4 packets being sent without an Ethernet header via a valid net device:
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on enp9s0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
22:07:02.329668 20:00:40:11:18:fb > 45:00:00:44:f4:94, ethertype Unknown (0x58c6), length 68:
	0x0000:  8dda 74ca f1ae ca6c ca6c 0098 969c 0400  ..t....l.l......
	0x0010:  0000 4730 3f18 6800 0000 0000 0000 9971  ..G0?.h........q
	0x0020:  c4c9 9055 a157 0a70 9ead bf83 38ca ab38  ...U.W.p....8..8
	0x0030:  8add ab96 e052                           .....R
Fix by making sure that neighbors are constructed on top of the blackhole net device with an output function that simply consumes the packets, in a similar fashion to dst_discard_out() and blackhole_netdev_xmit().
Fixes: 8d7017fd621d ("blackhole_netdev: use blackhole_netdev to invalidate dst entries") Fixes: 22600596b675 ("ipv4: give an IPv4 dev to blackhole_netdev") Reported-by: Florian Meister fmei@sfs.com Closes: https://lore.kernel.org/netdev/20250210084931.23a5c2e4@hermes.local/ Signed-off-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Eric Dumazet edumazet@google.com Link: https://patch.msgid.link/20250220072559.782296-1-idosch@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/loopback.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+)
diff --git a/drivers/net/loopback.c b/drivers/net/loopback.c index f6eab66c26608..6aa00f62b1f90 100644 --- a/drivers/net/loopback.c +++ b/drivers/net/loopback.c @@ -247,8 +247,22 @@ static netdev_tx_t blackhole_netdev_xmit(struct sk_buff *skb, return NETDEV_TX_OK; }
+static int blackhole_neigh_output(struct neighbour *n, struct sk_buff *skb) +{ + kfree_skb(skb); + return 0; +} + +static int blackhole_neigh_construct(struct net_device *dev, + struct neighbour *n) +{ + n->output = blackhole_neigh_output; + return 0; +} + static const struct net_device_ops blackhole_netdev_ops = { .ndo_start_xmit = blackhole_netdev_xmit, + .ndo_neigh_construct = blackhole_neigh_construct, };
/* This is a dst-dummy device used specifically for invalidated
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jiri Slaby (SUSE) jirislaby@kernel.org
[ Upstream commit c180188ec02281126045414e90d08422a80f75b4 ]
Commit 7acf8a1e8a28 ("Replace 2 jiffies with sysctl netdev_budget_usecs to enable softirq tuning") made it possible to set net_hotdata.netdev_budget_usecs, but added no lower-bound checking.
Commit a4837980fd9f ("net: revert default NAPI poll timeout to 2 jiffies") made the *initial* value HZ-dependent, so the initial value is at least 2 jiffies even for lower HZ values (2 ms for 1000 Hz, 8 ms for 250 Hz, 20 ms for 100 Hz).
But a user can still set improper values via the sysctl. Set .extra1 (the lower bound) for net_hotdata.netdev_budget_usecs to the same value as in the latter commit. That is to 2 jiffies.
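As a quick check of the bound's arithmetic (illustrative only):

	/* 2 jiffies expressed in microseconds */
	static int netdev_budget_usecs_min = 2 * USEC_PER_SEC / HZ;
	/* HZ=1000: 2 * 1000000 / 1000 = 2000 us  (2 ms)
	 * HZ=250:  2 * 1000000 / 250  = 8000 us  (8 ms)
	 * HZ=100:  2 * 1000000 / 100  = 20000 us (20 ms)
	 */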
Fixes: a4837980fd9f ("net: revert default NAPI poll timeout to 2 jiffies") Fixes: 7acf8a1e8a28 ("Replace 2 jiffies with sysctl netdev_budget_usecs to enable softirq tuning") Signed-off-by: Jiri Slaby (SUSE) jirislaby@kernel.org Cc: Dmitry Yakunin zeil@yandex-team.ru Cc: Konstantin Khlebnikov khlebnikov@yandex-team.ru Link: https://patch.msgid.link/20250220110752.137639-1-jirislaby@kernel.org Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/sysctl_net_core.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c index 0b15272dd2d35..a7fa17b6a1297 100644 --- a/net/core/sysctl_net_core.c +++ b/net/core/sysctl_net_core.c @@ -31,6 +31,7 @@ static int min_sndbuf = SOCK_MIN_SNDBUF; static int min_rcvbuf = SOCK_MIN_RCVBUF; static int max_skb_frags = MAX_SKB_FRAGS; static int min_mem_pcpu_rsv = SK_MEMORY_PCPU_RESERVE; +static int netdev_budget_usecs_min = 2 * USEC_PER_SEC / HZ;
static int net_msg_warn; /* Unused, but still a sysctl */
@@ -613,7 +614,7 @@ static struct ctl_table net_core_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = SYSCTL_ZERO, + .extra1 = &netdev_budget_usecs_min, }, { .procname = "fb_tunnels_only_for_init_net",
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Peilin He he.peilin@zte.com.cn
[ Upstream commit db3efdcf70c752e8a8deb16071d8e693c3ef8746 ]
Introduce a tracepoint for icmp_send, which can help users get more detailed information conveniently when abnormal ICMP events happen.
1. Giving a use-case example:
=============================
When an application experiences packet loss due to an unreachable UDP destination port, the kernel will send an exception message through the icmp_send function. By adding a tracepoint for icmp_send, developers or system administrators can obtain detailed information about the UDP packet loss, including the type, code, source address, destination address, source port, and destination port. This facilitates the troubleshooting of UDP packet loss issues, especially for network-service applications.
2. Operation Instructions:
==========================
Switch to the tracing directory.
	cd /sys/kernel/tracing
Filter for destination port unreachable.
	echo "type==3 && code==3" > events/icmp/icmp_send/filter
Enable trace event.
	echo 1 > events/icmp/icmp_send/enable
3. Result View:
================
 udp_client_erro-11370 [002] ...s.12 124.728002: icmp_send: icmp_send: type=3, code=3. From 127.0.0.1:41895 to 127.0.0.1:6666 ulen=23 skbaddr=00000000589b167a
Signed-off-by: Peilin He he.peilin@zte.com.cn Signed-off-by: xu xin xu.xin16@zte.com.cn Reviewed-by: Yunkai Zhang zhang.yunkai@zte.com.cn Cc: Yang Yang yang.yang29@zte.com.cn Cc: Liu Chun liu.chun2@zte.com.cn Cc: Xuexin Jiang jiang.xuexin@zte.com.cn Reviewed-by: Steven Rostedt (Google) rostedt@goodmis.org Signed-off-by: David S. Miller davem@davemloft.net Stable-dep-of: 27843ce6ba3d ("ipvlan: ensure network headers are in skb linear part") Signed-off-by: Sasha Levin sashal@kernel.org --- include/trace/events/icmp.h | 67 +++++++++++++++++++++++++++++++++++++ net/ipv4/icmp.c | 4 +++ 2 files changed, 71 insertions(+) create mode 100644 include/trace/events/icmp.h
diff --git a/include/trace/events/icmp.h b/include/trace/events/icmp.h new file mode 100644 index 0000000000000..31559796949a7 --- /dev/null +++ b/include/trace/events/icmp.h @@ -0,0 +1,67 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#undef TRACE_SYSTEM +#define TRACE_SYSTEM icmp + +#if !defined(_TRACE_ICMP_H) || defined(TRACE_HEADER_MULTI_READ) +#define _TRACE_ICMP_H + +#include <linux/icmp.h> +#include <linux/tracepoint.h> + +TRACE_EVENT(icmp_send, + + TP_PROTO(const struct sk_buff *skb, int type, int code), + + TP_ARGS(skb, type, code), + + TP_STRUCT__entry( + __field(const void *, skbaddr) + __field(int, type) + __field(int, code) + __array(__u8, saddr, 4) + __array(__u8, daddr, 4) + __field(__u16, sport) + __field(__u16, dport) + __field(unsigned short, ulen) + ), + + TP_fast_assign( + struct iphdr *iph = ip_hdr(skb); + struct udphdr *uh = udp_hdr(skb); + int proto_4 = iph->protocol; + __be32 *p32; + + __entry->skbaddr = skb; + __entry->type = type; + __entry->code = code; + + if (proto_4 != IPPROTO_UDP || (u8 *)uh < skb->head || + (u8 *)uh + sizeof(struct udphdr) + > skb_tail_pointer(skb)) { + __entry->sport = 0; + __entry->dport = 0; + __entry->ulen = 0; + } else { + __entry->sport = ntohs(uh->source); + __entry->dport = ntohs(uh->dest); + __entry->ulen = ntohs(uh->len); + } + + p32 = (__be32 *) __entry->saddr; + *p32 = iph->saddr; + + p32 = (__be32 *) __entry->daddr; + *p32 = iph->daddr; + ), + + TP_printk("icmp_send: type=%d, code=%d. From %pI4:%u to %pI4:%u ulen=%d skbaddr=%p", + __entry->type, __entry->code, + __entry->saddr, __entry->sport, __entry->daddr, + __entry->dport, __entry->ulen, __entry->skbaddr) +); + +#endif /* _TRACE_ICMP_H */ + +/* This part must be outside protection */ +#include <trace/define_trace.h> + diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index a21d32b3ae6c3..b05fa424ad5ce 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -93,6 +93,8 @@ #include <net/ip_fib.h> #include <net/l3mdev.h> #include <net/addrconf.h> +#define CREATE_TRACE_POINTS +#include <trace/events/icmp.h>
/* * Build xmit assembly blocks @@ -778,6 +780,8 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info, if (!fl4.saddr) fl4.saddr = htonl(INADDR_DUMMY);
+ trace_icmp_send(skb_in, type, code); + icmp_push_reply(sk, &icmp_param, &fl4, &ipc, &rt); ende: ip_rt_put(rt);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit 1c6f50b37f711b831d78973dad0df1da99ad0014 ]
Align the ICMP code to other callers of ip_route_input() and pass the full DS field. In the future this will allow us to perform a route lookup according to the full DSCP value.
No functional changes intended since the upper DSCP bits are masked when comparing against the TOS selectors in FIB rules and routes.
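For reference, a small sketch of the DS field split this relies on (INET_DSCP_MASK as defined in include/net/inet_dscp.h):

	/* tos/DS field: DSCP in the upper six bits, ECN in the lower two */
	#define INET_DSCP_MASK 0xfc

	u8 dsfield = ip4h->tos;
	u8 dscp    = dsfield & INET_DSCP_MASK;	/* ECN bits dropped */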
Signed-off-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Guillaume Nault gnault@redhat.com Acked-by: Florian Westphal fw@strlen.de Reviewed-by: David Ahern dsahern@kernel.org Link: https://patch.msgid.link/20240821125251.1571445-11-idosch@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Stable-dep-of: 27843ce6ba3d ("ipvlan: ensure network headers are in skb linear part") Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/icmp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index b05fa424ad5ce..3807a269e0755 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -550,7 +550,7 @@ static struct rtable *icmp_route_lookup(struct net *net, orefdst = skb_in->_skb_refdst; /* save old refdst */ skb_dst_set(skb_in, NULL); err = ip_route_input(skb_in, fl4_dec.daddr, fl4_dec.saddr, - RT_TOS(tos), rt2->dst.dev); + tos, rt2->dst.dev);
dst_release(&rt2->dst); rt2 = skb_rtable(skb_in);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit 4805646c42e51d2fbf142864d281473ad453ad5d ]
The function is called to resolve a route for an ICMP message that is sent in response to a situation. Based on the type of the generated ICMP message, the function is either passed the DS field of the packet that generated the ICMP message or a DS field that is derived from it.
Unmask the upper DSCP bits before resolving and output route via ip_route_output_key_hash() so that in the future the lookup could be performed according to the full DSCP value.
Signed-off-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Guillaume Nault gnault@redhat.com Signed-off-by: David S. Miller davem@davemloft.net Stable-dep-of: 27843ce6ba3d ("ipvlan: ensure network headers are in skb linear part") Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/icmp.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index 3807a269e0755..a154339845dd4 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -93,6 +93,7 @@ #include <net/ip_fib.h> #include <net/l3mdev.h> #include <net/addrconf.h> +#include <net/inet_dscp.h> #define CREATE_TRACE_POINTS #include <trace/events/icmp.h>
@@ -502,7 +503,7 @@ static struct rtable *icmp_route_lookup(struct net *net, fl4->saddr = saddr; fl4->flowi4_mark = mark; fl4->flowi4_uid = sock_net_uid(net, NULL); - fl4->flowi4_tos = RT_TOS(tos); + fl4->flowi4_tos = tos & INET_DSCP_MASK; fl4->flowi4_proto = IPPROTO_ICMP; fl4->fl4_icmp_type = type; fl4->fl4_icmp_code = code;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit 939cd1abf080c629552a9c5e6db4c0509d13e4c7 ]
Unmask the upper DSCP bits when calling ip_route_output_flow() so that in the future it could perform the FIB lookup according to the full DSCP value.
Signed-off-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Guillaume Nault gnault@redhat.com Signed-off-by: David S. Miller davem@davemloft.net Stable-dep-of: 27843ce6ba3d ("ipvlan: ensure network headers are in skb linear part") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ipvlan/ipvlan_core.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c index fef4eff7753a7..b1afcb8740de1 100644 --- a/drivers/net/ipvlan/ipvlan_core.c +++ b/drivers/net/ipvlan/ipvlan_core.c @@ -2,6 +2,8 @@ /* Copyright (c) 2014 Mahesh Bandewar maheshb@google.com */
+#include <net/inet_dscp.h> + #include "ipvlan.h"
static u32 ipvlan_jhash_secret __read_mostly; @@ -420,7 +422,7 @@ static noinline_for_stack int ipvlan_process_v4_outbound(struct sk_buff *skb) int err, ret = NET_XMIT_DROP; struct flowi4 fl4 = { .flowi4_oif = dev->ifindex, - .flowi4_tos = RT_TOS(ip4h->tos), + .flowi4_tos = ip4h->tos & INET_DSCP_MASK, .flowi4_flags = FLOWI_FLAG_ANYSRC, .flowi4_mark = skb->mark, .daddr = ip4h->daddr,
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Guillaume Nault gnault@redhat.com
[ Upstream commit 913c83a610bb7dd8e5952a2b4663e1feec0b5de6 ]
Pass a dscp_t variable to icmp_route_lookup(), instead of a plain u8, to prevent accidental setting of ECN bits in ->flowi4_tos. Rename that variable ("tos" -> "dscp") to make the intent clear.
While there, reorganise the function parameters to fill up horizontal space.
Signed-off-by: Guillaume Nault gnault@redhat.com Reviewed-by: David Ahern dsahern@kernel.org Link: https://patch.msgid.link/294fead85c6035bcdc5fcf9a6bb4ce8798c45ba1.1727807926... Signed-off-by: Jakub Kicinski kuba@kernel.org Stable-dep-of: 27843ce6ba3d ("ipvlan: ensure network headers are in skb linear part") Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/icmp.c | 19 +++++++++---------- 1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index a154339845dd4..855fcef829e2c 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -484,13 +484,11 @@ static struct net_device *icmp_get_route_lookup_dev(struct sk_buff *skb) return route_lookup_dev; }
-static struct rtable *icmp_route_lookup(struct net *net, - struct flowi4 *fl4, +static struct rtable *icmp_route_lookup(struct net *net, struct flowi4 *fl4, struct sk_buff *skb_in, - const struct iphdr *iph, - __be32 saddr, u8 tos, u32 mark, - int type, int code, - struct icmp_bxm *param) + const struct iphdr *iph, __be32 saddr, + dscp_t dscp, u32 mark, int type, + int code, struct icmp_bxm *param) { struct net_device *route_lookup_dev; struct rtable *rt, *rt2; @@ -503,7 +501,7 @@ static struct rtable *icmp_route_lookup(struct net *net, fl4->saddr = saddr; fl4->flowi4_mark = mark; fl4->flowi4_uid = sock_net_uid(net, NULL); - fl4->flowi4_tos = tos & INET_DSCP_MASK; + fl4->flowi4_tos = inet_dscp_to_dsfield(dscp); fl4->flowi4_proto = IPPROTO_ICMP; fl4->fl4_icmp_type = type; fl4->fl4_icmp_code = code; @@ -551,7 +549,7 @@ static struct rtable *icmp_route_lookup(struct net *net, orefdst = skb_in->_skb_refdst; /* save old refdst */ skb_dst_set(skb_in, NULL); err = ip_route_input(skb_in, fl4_dec.daddr, fl4_dec.saddr, - tos, rt2->dst.dev); + inet_dscp_to_dsfield(dscp), rt2->dst.dev);
dst_release(&rt2->dst); rt2 = skb_rtable(skb_in); @@ -747,8 +745,9 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info, ipc.opt = &icmp_param.replyopts.opt; ipc.sockc.mark = mark;
- rt = icmp_route_lookup(net, &fl4, skb_in, iph, saddr, tos, mark, - type, code, &icmp_param); + rt = icmp_route_lookup(net, &fl4, skb_in, iph, saddr, + inet_dsfield_to_dscp(tos), mark, type, code, + &icmp_param); if (IS_ERR(rt)) goto out_unlock;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Guillaume Nault gnault@redhat.com
[ Upstream commit 7e863e5db6185b1add0df4cb01b31a4ed1c4b738 ]
Pass a dscp_t variable to ip_route_input(), instead of a plain u8, to prevent accidental setting of ECN bits in ->flowi4_tos.
Callers of ip_route_input() to consider are:
* input_action_end_dx4_finish() and input_action_end_dt4() in net/ipv6/seg6_local.c. These functions set the tos parameter to 0, which is already a valid dscp_t value, so they don't need to be adjusted for the new prototype.
* icmp_route_lookup(), which already has a dscp_t variable to pass as parameter. We just need to remove the inet_dscp_to_dsfield() conversion.
* br_nf_pre_routing_finish(), ip_options_rcv_srr() and ip4ip6_err(), which get the DSCP directly from IPv4 headers. Define a helper to read the .tos field of struct iphdr as dscp_t, so that these function don't have to do the conversion manually.
While there, declare *iph as const in br_nf_pre_routing_finish(), declare its local variables in reverse-christmas-tree order and move the "err = ip_route_input()" assignment out of the conditional to avoid a checkpatch warning.
Signed-off-by: Guillaume Nault gnault@redhat.com Reviewed-by: David Ahern dsahern@kernel.org Link: https://patch.msgid.link/e9d40781d64d3d69f4c79ac8a008b8d67a033e8d.1727807926... Signed-off-by: Jakub Kicinski kuba@kernel.org Stable-dep-of: 27843ce6ba3d ("ipvlan: ensure network headers are in skb linear part") Signed-off-by: Sasha Levin sashal@kernel.org --- include/net/ip.h | 5 +++++ include/net/route.h | 5 +++-- net/bridge/br_netfilter_hooks.c | 8 +++++--- net/ipv4/icmp.c | 2 +- net/ipv4/ip_options.c | 3 ++- net/ipv6/ip6_tunnel.c | 4 ++-- 6 files changed, 18 insertions(+), 9 deletions(-)
diff --git a/include/net/ip.h b/include/net/ip.h index 7db5912e0c5f6..d8bf1f0a6919c 100644 --- a/include/net/ip.h +++ b/include/net/ip.h @@ -415,6 +415,11 @@ int ip_decrease_ttl(struct iphdr *iph) return --iph->ttl; }
+static inline dscp_t ip4h_dscp(const struct iphdr *ip4h) +{ + return inet_dsfield_to_dscp(ip4h->tos); +} + static inline int ip_mtu_locked(const struct dst_entry *dst) { const struct rtable *rt = (const struct rtable *)dst; diff --git a/include/net/route.h b/include/net/route.h index 0171e9e1bbea3..27c17aff0bbe1 100644 --- a/include/net/route.h +++ b/include/net/route.h @@ -200,12 +200,13 @@ int ip_route_use_hint(struct sk_buff *skb, __be32 dst, __be32 src, const struct sk_buff *hint);
static inline int ip_route_input(struct sk_buff *skb, __be32 dst, __be32 src, - u8 tos, struct net_device *devin) + dscp_t dscp, struct net_device *devin) { int err;
rcu_read_lock(); - err = ip_route_input_noref(skb, dst, src, tos, devin); + err = ip_route_input_noref(skb, dst, src, inet_dscp_to_dsfield(dscp), + devin); if (!err) { skb_dst_force(skb); if (!skb_dst(skb)) diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c index a1cfa75bbadb9..2a4958e995f2d 100644 --- a/net/bridge/br_netfilter_hooks.c +++ b/net/bridge/br_netfilter_hooks.c @@ -366,9 +366,9 @@ br_nf_ipv4_daddr_was_changed(const struct sk_buff *skb, */ static int br_nf_pre_routing_finish(struct net *net, struct sock *sk, struct sk_buff *skb) { - struct net_device *dev = skb->dev, *br_indev; - struct iphdr *iph = ip_hdr(skb); struct nf_bridge_info *nf_bridge = nf_bridge_info_get(skb); + struct net_device *dev = skb->dev, *br_indev; + const struct iphdr *iph = ip_hdr(skb); struct rtable *rt; int err;
@@ -386,7 +386,9 @@ static int br_nf_pre_routing_finish(struct net *net, struct sock *sk, struct sk_ } nf_bridge->in_prerouting = 0; if (br_nf_ipv4_daddr_was_changed(skb, nf_bridge)) { - if ((err = ip_route_input(skb, iph->daddr, iph->saddr, iph->tos, dev))) { + err = ip_route_input(skb, iph->daddr, iph->saddr, + ip4h_dscp(iph), dev); + if (err) { struct in_device *in_dev = __in_dev_get_rcu(dev);
/* If err equals -EHOSTUNREACH the error is due to a diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index 855fcef829e2c..94501bb30c431 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -549,7 +549,7 @@ static struct rtable *icmp_route_lookup(struct net *net, struct flowi4 *fl4, orefdst = skb_in->_skb_refdst; /* save old refdst */ skb_dst_set(skb_in, NULL); err = ip_route_input(skb_in, fl4_dec.daddr, fl4_dec.saddr, - inet_dscp_to_dsfield(dscp), rt2->dst.dev); + dscp, rt2->dst.dev);
dst_release(&rt2->dst); rt2 = skb_rtable(skb_in); diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c index a9e22a098872f..b4c59708fc095 100644 --- a/net/ipv4/ip_options.c +++ b/net/ipv4/ip_options.c @@ -617,7 +617,8 @@ int ip_options_rcv_srr(struct sk_buff *skb, struct net_device *dev)
orefdst = skb->_skb_refdst; skb_dst_set(skb, NULL); - err = ip_route_input(skb, nexthop, iph->saddr, iph->tos, dev); + err = ip_route_input(skb, nexthop, iph->saddr, ip4h_dscp(iph), + dev); rt2 = skb_rtable(skb); if (err || (rt2->rt_type != RTN_UNICAST && rt2->rt_type != RTN_LOCAL)) { skb_dst_drop(skb); diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c index 97905d4174eca..d645d022ce774 100644 --- a/net/ipv6/ip6_tunnel.c +++ b/net/ipv6/ip6_tunnel.c @@ -628,8 +628,8 @@ ip4ip6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, } skb_dst_set(skb2, &rt->dst); } else { - if (ip_route_input(skb2, eiph->daddr, eiph->saddr, eiph->tos, - skb2->dev) || + if (ip_route_input(skb2, eiph->daddr, eiph->saddr, + ip4h_dscp(eiph), skb2->dev) || skb_dst(skb2)->dev->type != ARPHRD_TUNNEL6) goto out; }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Guillaume Nault gnault@redhat.com
[ Upstream commit 0c30d6eedd1ec0c1382bcab9576d26413cd278a3 ]
Use ip4h_dscp() to get the DSCP from the IPv4 header, then convert the dscp_t value to __u8 with inet_dscp_to_dsfield().
Then, when we convert .flowi4_tos to dscp_t, we'll just have to drop the inet_dscp_to_dsfield() call.
Signed-off-by: Guillaume Nault gnault@redhat.com Reviewed-by: Ido Schimmel idosch@nvidia.com Link: https://patch.msgid.link/f48335504a05b3587e0081a9b4511e0761571ca5.1730292157... Signed-off-by: Jakub Kicinski kuba@kernel.org Stable-dep-of: 27843ce6ba3d ("ipvlan: ensure network headers are in skb linear part") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ipvlan/ipvlan_core.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c index b1afcb8740de1..fd591ddb3884d 100644 --- a/drivers/net/ipvlan/ipvlan_core.c +++ b/drivers/net/ipvlan/ipvlan_core.c @@ -3,6 +3,7 @@ */
#include <net/inet_dscp.h> +#include <net/ip.h>
#include "ipvlan.h"
@@ -422,7 +423,7 @@ static noinline_for_stack int ipvlan_process_v4_outbound(struct sk_buff *skb) int err, ret = NET_XMIT_DROP; struct flowi4 fl4 = { .flowi4_oif = dev->ifindex, - .flowi4_tos = ip4h->tos & INET_DSCP_MASK, + .flowi4_tos = inet_dscp_to_dsfield(ip4h_dscp(ip4h)), .flowi4_flags = FLOWI_FLAG_ANYSRC, .flowi4_mark = skb->mark, .daddr = ip4h->daddr,
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet edumazet@google.com
[ Upstream commit 27843ce6ba3d3122b65066550fe33fb8839f8aef ]
syzbot found that ipvlan_process_v6_outbound() was assuming the IPv6 network header is present in skb->head [1].
Add the needed pskb_network_may_pull() calls for both IPv4 and IPv6 handlers.
[1]
BUG: KMSAN: uninit-value in __ipv6_addr_type+0xa2/0x490 net/ipv6/addrconf_core.c:47
 __ipv6_addr_type+0xa2/0x490 net/ipv6/addrconf_core.c:47
 ipv6_addr_type include/net/ipv6.h:555 [inline]
 ip6_route_output_flags_noref net/ipv6/route.c:2616 [inline]
 ip6_route_output_flags+0x51/0x720 net/ipv6/route.c:2651
 ip6_route_output include/net/ip6_route.h:93 [inline]
 ipvlan_route_v6_outbound+0x24e/0x520 drivers/net/ipvlan/ipvlan_core.c:476
 ipvlan_process_v6_outbound drivers/net/ipvlan/ipvlan_core.c:491 [inline]
 ipvlan_process_outbound drivers/net/ipvlan/ipvlan_core.c:541 [inline]
 ipvlan_xmit_mode_l3 drivers/net/ipvlan/ipvlan_core.c:605 [inline]
 ipvlan_queue_xmit+0xd72/0x1780 drivers/net/ipvlan/ipvlan_core.c:671
 ipvlan_start_xmit+0x5b/0x210 drivers/net/ipvlan/ipvlan_main.c:223
 __netdev_start_xmit include/linux/netdevice.h:5150 [inline]
 netdev_start_xmit include/linux/netdevice.h:5159 [inline]
 xmit_one net/core/dev.c:3735 [inline]
 dev_hard_start_xmit+0x247/0xa20 net/core/dev.c:3751
 sch_direct_xmit+0x399/0xd40 net/sched/sch_generic.c:343
 qdisc_restart net/sched/sch_generic.c:408 [inline]
 __qdisc_run+0x14da/0x35d0 net/sched/sch_generic.c:416
 qdisc_run+0x141/0x4d0 include/net/pkt_sched.h:127
 net_tx_action+0x78b/0x940 net/core/dev.c:5484
 handle_softirqs+0x1a0/0x7c0 kernel/softirq.c:561
 __do_softirq+0x14/0x1a kernel/softirq.c:595
 do_softirq+0x9a/0x100 kernel/softirq.c:462
 __local_bh_enable_ip+0x9f/0xb0 kernel/softirq.c:389
 local_bh_enable include/linux/bottom_half.h:33 [inline]
 rcu_read_unlock_bh include/linux/rcupdate.h:919 [inline]
 __dev_queue_xmit+0x2758/0x57d0 net/core/dev.c:4611
 dev_queue_xmit include/linux/netdevice.h:3311 [inline]
 packet_xmit+0x9c/0x6c0 net/packet/af_packet.c:276
 packet_snd net/packet/af_packet.c:3132 [inline]
 packet_sendmsg+0x93e0/0xa7e0 net/packet/af_packet.c:3164
 sock_sendmsg_nosec net/socket.c:718 [inline]
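The guard pattern, sketched (assuming the usual pskb_network_may_pull() semantics: it ensures the first len bytes past the network header offset are in the linear area, pulling data from fragments if needed):

	if (!pskb_network_may_pull(skb, sizeof(struct iphdr)))
		goto err;	/* too short or unpullable: don't parse the header */
	ip4h = ip_hdr(skb);	/* now safe to dereference the IPv4 header fields */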
Fixes: 2ad7bf363841 ("ipvlan: Initial check-in of the IPVLAN driver.") Reported-by: syzbot+93ab4a777bafb9d9f960@syzkaller.appspotmail.com Closes: https://lore.kernel.org/netdev/67b74f01.050a0220.14d86d.02d8.GAE@google.com/... Signed-off-by: Eric Dumazet edumazet@google.com Cc: Mahesh Bandewar maheshb@google.com Link: https://patch.msgid.link/20250220155336.61884-1-edumazet@google.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ipvlan/ipvlan_core.c | 21 ++++++++++++++++----- 1 file changed, 16 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c index fd591ddb3884d..ca62188a317ad 100644 --- a/drivers/net/ipvlan/ipvlan_core.c +++ b/drivers/net/ipvlan/ipvlan_core.c @@ -416,20 +416,25 @@ struct ipvl_addr *ipvlan_addr_lookup(struct ipvl_port *port, void *lyr3h,
static noinline_for_stack int ipvlan_process_v4_outbound(struct sk_buff *skb) { - const struct iphdr *ip4h = ip_hdr(skb); struct net_device *dev = skb->dev; struct net *net = dev_net(dev); - struct rtable *rt; int err, ret = NET_XMIT_DROP; + const struct iphdr *ip4h; + struct rtable *rt; struct flowi4 fl4 = { .flowi4_oif = dev->ifindex, - .flowi4_tos = inet_dscp_to_dsfield(ip4h_dscp(ip4h)), .flowi4_flags = FLOWI_FLAG_ANYSRC, .flowi4_mark = skb->mark, - .daddr = ip4h->daddr, - .saddr = ip4h->saddr, };
+ if (!pskb_network_may_pull(skb, sizeof(struct iphdr))) + goto err; + + ip4h = ip_hdr(skb); + fl4.daddr = ip4h->daddr; + fl4.saddr = ip4h->saddr; + fl4.flowi4_tos = inet_dscp_to_dsfield(ip4h_dscp(ip4h)); + rt = ip_route_output_flow(net, &fl4, NULL); if (IS_ERR(rt)) goto err; @@ -488,6 +493,12 @@ static int ipvlan_process_v6_outbound(struct sk_buff *skb) struct net_device *dev = skb->dev; int err, ret = NET_XMIT_DROP;
+ if (!pskb_network_may_pull(skb, sizeof(struct ipv6hdr))) { + DEV_STATS_INC(dev, tx_errors); + kfree_skb(skb); + return ret; + } + err = ipvlan_route_v6_outbound(dev, skb); if (unlikely(err)) { DEV_STATS_INC(dev, tx_errors);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sean Anderson sean.anderson@linux.dev
[ Upstream commit fa52f15c745ce55261b92873676f64f7348cfe82 ]
Stats calculations involve a RMW to add the stat update to the existing value. This is currently not protected by any synchronization mechanism, so data races are possible. Add a spinlock to protect the update. The reader side could be protected using u64_stats, but we would still need a spinlock for the update side, and we always do an update immediately before reading the stats anyway.
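Sketched, the problem and the fix (simplified from the patch below):

	/* before: a plain RMW; two CPUs can both load the same old value */
	bp->hw_stats.gem.rx_overruns++;

	/* after: the RMW is serialized against other updaters and readers */
	spin_lock(&bp->stats_lock);
	bp->hw_stats.gem.rx_overruns++;
	spin_unlock(&bp->stats_lock);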
Fixes: 89e5785fc8a6 ("[PATCH] Atmel MACB ethernet driver") Signed-off-by: Sean Anderson sean.anderson@linux.dev Link: https://patch.msgid.link/20250220162950.95941-1-sean.anderson@linux.dev Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/cadence/macb.h | 2 ++ drivers/net/ethernet/cadence/macb_main.c | 12 ++++++++++-- 2 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h
index 78c972bb1d962..9fb5a18e056d4 100644
--- a/drivers/net/ethernet/cadence/macb.h
+++ b/drivers/net/ethernet/cadence/macb.h
@@ -1270,6 +1270,8 @@ struct macb {
 	struct clk		*rx_clk;
 	struct clk		*tsu_clk;
 	struct net_device	*dev;
+	/* Protects hw_stats and ethtool_stats */
+	spinlock_t		stats_lock;
 	union {
 		struct macb_stats	macb;
 		struct gem_stats	gem;
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 8f61731e4554b..4325d0ace1f26 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -1992,10 +1992,12 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)

 		if (status & MACB_BIT(ISR_ROVR)) {
 			/* We missed at least one packet */
+			spin_lock(&bp->stats_lock);
 			if (macb_is_gem(bp))
 				bp->hw_stats.gem.rx_overruns++;
 			else
 				bp->hw_stats.macb.rx_overruns++;
+			spin_unlock(&bp->stats_lock);

 			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
 				queue_writel(queue, ISR, MACB_BIT(ISR_ROVR));
@@ -3084,6 +3086,7 @@ static struct net_device_stats *gem_get_stats(struct macb *bp)
 	if (!netif_running(bp->dev))
 		return nstat;

+	spin_lock_irq(&bp->stats_lock);
 	gem_update_stats(bp);

 	nstat->rx_errors = (hwstat->rx_frame_check_sequence_errors +
@@ -3113,6 +3116,7 @@ static struct net_device_stats *gem_get_stats(struct macb *bp)
 	nstat->tx_aborted_errors = hwstat->tx_excessive_collisions;
 	nstat->tx_carrier_errors = hwstat->tx_carrier_sense_errors;
 	nstat->tx_fifo_errors = hwstat->tx_underrun;
+	spin_unlock_irq(&bp->stats_lock);

 	return nstat;
 }
@@ -3120,12 +3124,13 @@ static struct net_device_stats *gem_get_stats(struct macb *bp)
 static void gem_get_ethtool_stats(struct net_device *dev,
 				  struct ethtool_stats *stats, u64 *data)
 {
-	struct macb *bp;
+	struct macb *bp = netdev_priv(dev);

-	bp = netdev_priv(dev);
+	spin_lock_irq(&bp->stats_lock);
 	gem_update_stats(bp);
 	memcpy(data, &bp->ethtool_stats, sizeof(u64)
 			* (GEM_STATS_LEN + QUEUE_STATS_LEN * MACB_MAX_QUEUES));
+	spin_unlock_irq(&bp->stats_lock);
 }

 static int gem_get_sset_count(struct net_device *dev, int sset)
@@ -3175,6 +3180,7 @@ static struct net_device_stats *macb_get_stats(struct net_device *dev)
 		return gem_get_stats(bp);

 	/* read stats from hardware */
+	spin_lock_irq(&bp->stats_lock);
 	macb_update_stats(bp);

 	/* Convert HW stats into netdevice stats */
@@ -3208,6 +3214,7 @@ static struct net_device_stats *macb_get_stats(struct net_device *dev)
 	nstat->tx_carrier_errors = hwstat->tx_carrier_errors;
 	nstat->tx_fifo_errors = hwstat->tx_underruns;
 	/* Don't know about heartbeat or window errors... */
+	spin_unlock_irq(&bp->stats_lock);

 	return nstat;
 }
@@ -5063,6 +5070,7 @@ static int macb_probe(struct platform_device *pdev)
 		}
 	}
 	spin_lock_init(&bp->lock);
+	spin_lock_init(&bp->stats_lock);

 	/* setup capabilities */
 	macb_configure_caps(bp, macb_config);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicolas Frattaroli nicolas.frattaroli@collabora.com
[ Upstream commit 5b0c02f9b8acf2a791e531bbc09acae2d51f4f9b ]
The ES8328 codec driver, which is also used for the ES8388 chip that appears to have an identical register map, claims that the output can either take the route from DAC->Mixer->Output or through DAC->Output directly. To the best of what I could find, this is not true, and creates problems.
Without DACCONTROL17 bit index 7 set for the left channel, as well as DACCONTROL20 bit index 7 set for the right channel, I cannot get any analog audio out on Left Out 2 and Right Out 2 respectively, despite the DAPM routes claiming that this should be possible. Furthermore, the same is the case for Left Out 1 and Right Out 1, showing that those two don't have a direct route from DAC to output bypassing the mixer either.
Those control bits toggle whether the DACs are fed (stale bread?) into their respective mixers. If one "unmutes" the mixer controls in alsamixer, then sure, the audio output works, but if it doesn't work without the mixer being fed the DAC input then evidently it's not a direct output from the DAC.
ES8328/ES8388 are seemingly not alone in this. ES8323, which uses a separate driver for what appears to be a very similar register map, simply flips those two bits on in its probe function, and then pretends there is no power management whatsoever for the individual controls. Fair enough.
My theory as to why nobody has noticed this up to this point is that everyone just assumes it's their fault when they had to unmute an additional control in ALSA.
Fix this in the es8328 driver by removing the erroneous direct route, then get rid of the playback switch controls and have those bits tied to the mixer's widget instead, which until now had no register to play with.
Fixes: 567e4f98922c ("ASoC: add es8328 codec driver")
Signed-off-by: Nicolas Frattaroli nicolas.frattaroli@collabora.com
Link: https://patch.msgid.link/20250222-es8328-route-bludgeoning-v1-1-99bfb7fb22d9...
Signed-off-by: Mark Brown broonie@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 sound/soc/codecs/es8328.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)
diff --git a/sound/soc/codecs/es8328.c b/sound/soc/codecs/es8328.c
index 0bd9ba5a11b4e..43792e175d75f 100644
--- a/sound/soc/codecs/es8328.c
+++ b/sound/soc/codecs/es8328.c
@@ -234,7 +234,6 @@ static const struct snd_kcontrol_new es8328_right_line_controls =

 /* Left Mixer */
 static const struct snd_kcontrol_new es8328_left_mixer_controls[] = {
-	SOC_DAPM_SINGLE("Playback Switch", ES8328_DACCONTROL17, 7, 1, 0),
 	SOC_DAPM_SINGLE("Left Bypass Switch", ES8328_DACCONTROL17, 6, 1, 0),
 	SOC_DAPM_SINGLE("Right Playback Switch", ES8328_DACCONTROL18, 7, 1, 0),
 	SOC_DAPM_SINGLE("Right Bypass Switch", ES8328_DACCONTROL18, 6, 1, 0),
@@ -244,7 +243,6 @@ static const struct snd_kcontrol_new es8328_left_mixer_controls[] = {
 static const struct snd_kcontrol_new es8328_right_mixer_controls[] = {
 	SOC_DAPM_SINGLE("Left Playback Switch", ES8328_DACCONTROL19, 7, 1, 0),
 	SOC_DAPM_SINGLE("Left Bypass Switch", ES8328_DACCONTROL19, 6, 1, 0),
-	SOC_DAPM_SINGLE("Playback Switch", ES8328_DACCONTROL20, 7, 1, 0),
 	SOC_DAPM_SINGLE("Right Bypass Switch", ES8328_DACCONTROL20, 6, 1, 0),
 };

@@ -337,10 +335,10 @@ static const struct snd_soc_dapm_widget es8328_dapm_widgets[] = {
 	SND_SOC_DAPM_DAC("Left DAC", "Left Playback", ES8328_DACPOWER,
 			ES8328_DACPOWER_LDAC_OFF, 1),

-	SND_SOC_DAPM_MIXER("Left Mixer", SND_SOC_NOPM, 0, 0,
+	SND_SOC_DAPM_MIXER("Left Mixer", ES8328_DACCONTROL17, 7, 0,
 		&es8328_left_mixer_controls[0],
 		ARRAY_SIZE(es8328_left_mixer_controls)),
-	SND_SOC_DAPM_MIXER("Right Mixer", SND_SOC_NOPM, 0, 0,
+	SND_SOC_DAPM_MIXER("Right Mixer", ES8328_DACCONTROL20, 7, 0,
 		&es8328_right_mixer_controls[0],
 		ARRAY_SIZE(es8328_right_mixer_controls)),

@@ -419,19 +417,14 @@ static const struct snd_soc_dapm_route es8328_dapm_routes[] = {
 	{ "Right Line Mux", "PGA", "Right PGA Mux" },
 	{ "Right Line Mux", "Differential", "Differential Mux" },

-	{ "Left Out 1", NULL, "Left DAC" },
-	{ "Right Out 1", NULL, "Right DAC" },
-	{ "Left Out 2", NULL, "Left DAC" },
-	{ "Right Out 2", NULL, "Right DAC" },
-
-	{ "Left Mixer", "Playback Switch", "Left DAC" },
+	{ "Left Mixer", NULL, "Left DAC" },
 	{ "Left Mixer", "Left Bypass Switch", "Left Line Mux" },
 	{ "Left Mixer", "Right Playback Switch", "Right DAC" },
 	{ "Left Mixer", "Right Bypass Switch", "Right Line Mux" },

 	{ "Right Mixer", "Left Playback Switch", "Left DAC" },
 	{ "Right Mixer", "Left Bypass Switch", "Left Line Mux" },
-	{ "Right Mixer", "Playback Switch", "Right DAC" },
+	{ "Right Mixer", NULL, "Right DAC" },
 	{ "Right Mixer", "Right Bypass Switch", "Right Line Mux" },

 	{ "DAC DIG", NULL, "DAC STM" },
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Philo Lu lulie@linux.alibaba.com
[ Upstream commit de2c211868b9424f9aa9b3432c4430825bafb41b ]
We found an issue when using bpf_redirect with ipvs NAT mode after commit ff70202b2d1a ("dev_forward_skb: do not scrub skb mark within the same name space"). Particularly, we use bpf_redirect to return the skb directly back to the netif it comes from, i.e., xnet is false in skb_scrub_packet(), and then ipvs_property is preserved and SNAT is skipped in the rx path.
ipvs_property has already been cleared when the netns is changed since commit 2b5ec1a5f973 ("netfilter/ipvs: clear ipvs_property flag when SKB net namespace changed"). This patch just clears it regardless of netns.
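Abridged, the resulting ordering in skb_scrub_packet() is:

	ipvs_reset(skb);	/* now cleared regardless of netns */

	if (!xnet)
		return;		/* same netns: keep mark and timestamp */

	skb->mark = 0;
	skb_clear_tstamp(skb);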
Fixes: 2b5ec1a5f973 ("netfilter/ipvs: clear ipvs_property flag when SKB net namespace changed")
Signed-off-by: Philo Lu lulie@linux.alibaba.com
Acked-by: Julian Anastasov ja@ssi.bg
Link: https://patch.msgid.link/20250222033518.126087-1-lulie@linux.alibaba.com
Signed-off-by: Paolo Abeni pabeni@redhat.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 net/core/skbuff.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index f0a9ef1aeaa29..21a83e26f004b 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -5867,11 +5867,11 @@ void skb_scrub_packet(struct sk_buff *skb, bool xnet)
 	skb->offload_fwd_mark = 0;
 	skb->offload_l3_fwd_mark = 0;
 #endif
+	ipvs_reset(skb);

 	if (!xnet)
 		return;

-	ipvs_reset(skb);
 	skb->mark = 0;
 	skb_clear_tstamp(skb);
 }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Richard Fitzgerald rf@opensource.cirrus.com
[ Upstream commit fe08b7d5085a9774abc30c26d5aebc5b9cdd6091 ]
Change calls to async regmap write functions to use the normal blocking writes so that the cs35l56 driver can use spi_bus_lock() to gain exclusive access to the SPI bus.
As this is part of a fix, it makes only the minimal change to swap the functions to the blocking equivalents. There's no need to risk reworking the buffer allocation logic that is now partially redundant.
The async writes are a 12-year-old workaround for inefficiency of synchronous writes in the SPI subsystem. The SPI subsystem has since been changed to avoid the overheads, so this workaround should not be necessary.
The cs35l56 driver needs to use spi_bus_lock() to prevent bus activity while it is soft-resetting the cs35l56. But spi_bus_lock() is incompatible with spi_async() calls, which will fail with -EBUSY.
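For context, the locking pattern cs35l56 wants looks roughly like this (a sketch under assumptions: cs35l56_reset_msg is a placeholder spi_message prepared elsewhere, not the driver's literal code):

	spi_bus_lock(spi->controller);            /* block all other clients   */
	spi_sync_locked(spi, &cs35l56_reset_msg); /* holder uses *_locked I/O  */
	spi_bus_unlock(spi->controller);

	/* While the bus is locked, a regmap_raw_write_async() coming from
	 * cs_dsp would go through spi_async() and fail with -EBUSY, hence
	 * the switch to blocking regmap writes in this patch.
	 */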
Fixes: 8a731fd37f8b ("ASoC: cs35l56: Move utility functions to shared file")
Signed-off-by: Richard Fitzgerald rf@opensource.cirrus.com
Link: https://patch.msgid.link/20250225131843.113752-2-rf@opensource.cirrus.com
Signed-off-by: Mark Brown broonie@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/firmware/cirrus/cs_dsp.c | 24 ++++++------------------
 1 file changed, 6 insertions(+), 18 deletions(-)
diff --git a/drivers/firmware/cirrus/cs_dsp.c b/drivers/firmware/cirrus/cs_dsp.c
index e62ffffe5fb8d..4ce5681be18f0 100644
--- a/drivers/firmware/cirrus/cs_dsp.c
+++ b/drivers/firmware/cirrus/cs_dsp.c
@@ -1562,8 +1562,8 @@ static int cs_dsp_load(struct cs_dsp *dsp, const struct firmware *firmware,
 				goto out_fw;
 			}

-			ret = regmap_raw_write_async(regmap, reg, buf->buf,
-						     le32_to_cpu(region->len));
+			ret = regmap_raw_write(regmap, reg, buf->buf,
+					       le32_to_cpu(region->len));
 			if (ret != 0) {
 				cs_dsp_err(dsp,
 					   "%s.%d: Failed to write %d bytes at %d in %s: %d\n",
@@ -1578,12 +1578,6 @@ static int cs_dsp_load(struct cs_dsp *dsp, const struct firmware *firmware,
 		regions++;
 	}

-	ret = regmap_async_complete(regmap);
-	if (ret != 0) {
-		cs_dsp_err(dsp, "Failed to complete async write: %d\n", ret);
-		goto out_fw;
-	}
-
 	if (pos > firmware->size)
 		cs_dsp_warn(dsp, "%s.%d: %zu bytes at end of file\n",
 			    file, regions, pos - firmware->size);
@@ -1591,7 +1585,6 @@ static int cs_dsp_load(struct cs_dsp *dsp, const struct firmware *firmware,
 	cs_dsp_debugfs_save_wmfwname(dsp, file);

 out_fw:
-	regmap_async_complete(regmap);
 	cs_dsp_buf_free(&buf_list);
 	kfree(text);

@@ -2287,8 +2280,8 @@ static int cs_dsp_load_coeff(struct cs_dsp *dsp, const struct firmware *firmware
 			cs_dsp_dbg(dsp, "%s.%d: Writing %d bytes at %x\n",
 				   file, blocks, le32_to_cpu(blk->len),
 				   reg);
-			ret = regmap_raw_write_async(regmap, reg, buf->buf,
-						     le32_to_cpu(blk->len));
+			ret = regmap_raw_write(regmap, reg, buf->buf,
+					       le32_to_cpu(blk->len));
 			if (ret != 0) {
 				cs_dsp_err(dsp,
 					   "%s.%d: Failed to write to %x in %s: %d\n",
@@ -2300,10 +2293,6 @@ static int cs_dsp_load_coeff(struct cs_dsp *dsp, const struct firmware *firmware
 		blocks++;
 	}

-	ret = regmap_async_complete(regmap);
-	if (ret != 0)
-		cs_dsp_err(dsp, "Failed to complete async write: %d\n", ret);
-
 	if (pos > firmware->size)
 		cs_dsp_warn(dsp, "%s.%d: %zu bytes at end of file\n",
 			    file, blocks, pos - firmware->size);
@@ -2311,7 +2300,6 @@ static int cs_dsp_load_coeff(struct cs_dsp *dsp, const struct firmware *firmware
 	cs_dsp_debugfs_save_binname(dsp, file);

 out_fw:
-	regmap_async_complete(regmap);
 	cs_dsp_buf_free(&buf_list);
 	kfree(text);

@@ -2523,8 +2511,8 @@ static int cs_dsp_adsp2_enable_core(struct cs_dsp *dsp)
 {
 	int ret;

-	ret = regmap_update_bits_async(dsp->regmap, dsp->base + ADSP2_CONTROL,
-				       ADSP2_SYS_ENA, ADSP2_SYS_ENA);
+	ret = regmap_update_bits(dsp->regmap, dsp->base + ADSP2_CONTROL,
+				 ADSP2_SYS_ENA, ADSP2_SYS_ENA);
 	if (ret != 0)
 		return ret;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Stefan Binding sbinding@opensource.cirrus.com
[ Upstream commit a40ce9f4bdbebfbf55fdd83a5284fbaaf222f0b9 ]
These models use 2x CS35L41 amps with HDA, using SPI and I2C. All models use Internal Boost. Some models also use Realtek speakers in conjunction with CS35L41. All models require DSD support to be added inside cs35l41_hda_property.c.
Signed-off-by: Stefan Binding sbinding@opensource.cirrus.com
Link: https://lore.kernel.org/r/20231218151221.388745-4-sbinding@opensource.cirrus...
Signed-off-by: Takashi Iwai tiwai@suse.de
Stable-dep-of: 9e7c6779e353 ("ALSA: hda/realtek: Fix wrong mic setup for ASUS VivoBook 15")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 sound/pci/hda/patch_realtek.c | 30 ++++++++++++++++++------------
 1 file changed, 18 insertions(+), 12 deletions(-)
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index 75162e5f712b4..3310871ed9cc3 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -10084,23 +10084,26 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
 	SND_PCI_QUIRK(0x1043, 0x1313, "Asus K42JZ", ALC269VB_FIXUP_ASUS_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC),
 	SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
-	SND_PCI_QUIRK(0x1043, 0x1433, "ASUS GX650P", ALC285_FIXUP_ASUS_I2C_HEADSET_MIC),
-	SND_PCI_QUIRK(0x1043, 0x1463, "Asus GA402X", ALC285_FIXUP_ASUS_I2C_HEADSET_MIC),
-	SND_PCI_QUIRK(0x1043, 0x1473, "ASUS GU604V", ALC285_FIXUP_ASUS_HEADSET_MIC),
-	SND_PCI_QUIRK(0x1043, 0x1483, "ASUS GU603V", ALC285_FIXUP_ASUS_HEADSET_MIC),
-	SND_PCI_QUIRK(0x1043, 0x1493, "ASUS GV601V", ALC285_FIXUP_ASUS_HEADSET_MIC),
+	SND_PCI_QUIRK(0x1043, 0x1433, "ASUS GX650PY/PZ/PV/PU/PYV/PZV/PIV/PVV", ALC285_FIXUP_ASUS_I2C_HEADSET_MIC),
+	SND_PCI_QUIRK(0x1043, 0x1463, "Asus GA402X/GA402N", ALC285_FIXUP_ASUS_I2C_HEADSET_MIC),
+	SND_PCI_QUIRK(0x1043, 0x1473, "ASUS GU604VI/VC/VE/VG/VJ/VQ/VU/VV/VY/VZ", ALC285_FIXUP_ASUS_HEADSET_MIC),
+	SND_PCI_QUIRK(0x1043, 0x1483, "ASUS GU603VQ/VU/VV/VJ/VI", ALC285_FIXUP_ASUS_HEADSET_MIC),
+	SND_PCI_QUIRK(0x1043, 0x1493, "ASUS GV601VV/VU/VJ/VQ/VI", ALC285_FIXUP_ASUS_HEADSET_MIC),
+	SND_PCI_QUIRK(0x1043, 0x14d3, "ASUS G614JY/JZ/JG", ALC245_FIXUP_CS35L41_SPI_2),
+	SND_PCI_QUIRK(0x1043, 0x14e3, "ASUS G513PI/PU/PV", ALC287_FIXUP_CS35L41_I2C_2),
+	SND_PCI_QUIRK(0x1043, 0x1503, "ASUS G733PY/PZ/PZV/PYV", ALC287_FIXUP_CS35L41_I2C_2),
 	SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
-	SND_PCI_QUIRK(0x1043, 0x1533, "ASUS GV302XA", ALC287_FIXUP_CS35L41_I2C_2),
-	SND_PCI_QUIRK(0x1043, 0x1573, "ASUS GZ301V", ALC285_FIXUP_ASUS_HEADSET_MIC),
+	SND_PCI_QUIRK(0x1043, 0x1533, "ASUS GV302XA/XJ/XQ/XU/XV/XI", ALC287_FIXUP_CS35L41_I2C_2),
+	SND_PCI_QUIRK(0x1043, 0x1573, "ASUS GZ301VV/VQ/VU/VJ/VA/VC/VE/VVC/VQC/VUC/VJC/VEC/VCC", ALC285_FIXUP_ASUS_HEADSET_MIC),
 	SND_PCI_QUIRK(0x1043, 0x1662, "ASUS GV301QH", ALC294_FIXUP_ASUS_DUAL_SPK),
-	SND_PCI_QUIRK(0x1043, 0x1663, "ASUS GU603ZV", ALC285_FIXUP_ASUS_HEADSET_MIC),
+	SND_PCI_QUIRK(0x1043, 0x1663, "ASUS GU603ZI/ZJ/ZQ/ZU/ZV", ALC285_FIXUP_ASUS_HEADSET_MIC),
 	SND_PCI_QUIRK(0x1043, 0x1683, "ASUS UM3402YAR", ALC287_FIXUP_CS35L41_I2C_2),
 	SND_PCI_QUIRK(0x1043, 0x16a3, "ASUS UX3402VA", ALC245_FIXUP_CS35L41_SPI_2),
 	SND_PCI_QUIRK(0x1043, 0x16b2, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
 	SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
 	SND_PCI_QUIRK(0x1043, 0x1740, "ASUS UX430UA", ALC295_FIXUP_ASUS_DACS),
 	SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_DUAL_SPK),
-	SND_PCI_QUIRK(0x1043, 0x17f3, "ROG Ally RC71L_RC71L", ALC294_FIXUP_ASUS_ALLY),
+	SND_PCI_QUIRK(0x1043, 0x17f3, "ROG Ally NR2301L/X", ALC294_FIXUP_ASUS_ALLY),
 	SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS),
 	SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
 	SND_PCI_QUIRK(0x1043, 0x18d3, "ASUS UM3504DA", ALC294_FIXUP_CS35L41_I2C_2),
@@ -10125,10 +10128,13 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
 	SND_PCI_QUIRK(0x1043, 0x1c43, "ASUS UX8406MA", ALC245_FIXUP_CS35L41_SPI_2),
 	SND_PCI_QUIRK(0x1043, 0x1c62, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
 	SND_PCI_QUIRK(0x1043, 0x1c92, "ASUS ROG Strix G15", ALC285_FIXUP_ASUS_G533Z_PINS),
-	SND_PCI_QUIRK(0x1043, 0x1c9f, "ASUS G614JI", ALC285_FIXUP_ASUS_HEADSET_MIC),
-	SND_PCI_QUIRK(0x1043, 0x1caf, "ASUS G634JYR/JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+	SND_PCI_QUIRK(0x1043, 0x1c9f, "ASUS G614JU/JV/JI", ALC285_FIXUP_ASUS_HEADSET_MIC),
+	SND_PCI_QUIRK(0x1043, 0x1caf, "ASUS G634JY/JZ/JI/JG", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
 	SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC),
-	SND_PCI_QUIRK(0x1043, 0x1d1f, "ASUS ROG Strix G17 2023 (G713PV)", ALC287_FIXUP_CS35L41_I2C_2),
+	SND_PCI_QUIRK(0x1043, 0x1ccf, "ASUS G814JU/JV/JI", ALC245_FIXUP_CS35L41_SPI_2),
+	SND_PCI_QUIRK(0x1043, 0x1cdf, "ASUS G814JY/JZ/JG", ALC245_FIXUP_CS35L41_SPI_2),
+	SND_PCI_QUIRK(0x1043, 0x1cef, "ASUS G834JY/JZ/JI/JG", ALC285_FIXUP_ASUS_HEADSET_MIC),
+	SND_PCI_QUIRK(0x1043, 0x1d1f, "ASUS G713PI/PU/PV/PVN", ALC287_FIXUP_CS35L41_I2C_2),
 	SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022", ALC289_FIXUP_ASUS_GA401),
 	SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE),
 	SND_PCI_QUIRK(0x1043, 0x1da2, "ASUS UP6502ZA/ZD", ALC245_FIXUP_CS35L41_SPI_2),
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai tiwai@suse.de
[ Upstream commit 9e7c6779e3530bbdd465214afcd13f19c33e51a2 ]
ASUS VivoBook 15 with SSID 1043:1460 took an incorrect quirk via the pin pattern matching for ASUS (ALC256_FIXUP_ASUS_MIC), resulting in the two built-in mic pins (0x13 and 0x1b). This had worked without problems casually in the past because the right pin (0x1b) was picked up as the primary device. But since we fixed the pin enumeration for other bugs, the bogus one (0x13) is picked up as the primary device, hence the bug surfaced now.
For addressing the regression, this patch explicitly specifies the quirk entry with ALC256_FIXUP_ASUS_MIC_NO_PRESENCE, which sets up only the headset mic pin.
Fixes: 3b4309546b48 ("ALSA: hda: Fix headset detection failure due to unstable sort")
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=219807
Link: https://patch.msgid.link/20250225154540.13543-1-tiwai@suse.de
Signed-off-by: Takashi Iwai tiwai@suse.de
Signed-off-by: Sasha Levin sashal@kernel.org
---
 sound/pci/hda/patch_realtek.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index 3310871ed9cc3..cb1f559303631 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -10085,6 +10085,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
 	SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC),
 	SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
 	SND_PCI_QUIRK(0x1043, 0x1433, "ASUS GX650PY/PZ/PV/PU/PYV/PZV/PIV/PVV", ALC285_FIXUP_ASUS_I2C_HEADSET_MIC),
+	SND_PCI_QUIRK(0x1043, 0x1460, "Asus VivoBook 15", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x1043, 0x1463, "Asus GA402X/GA402N", ALC285_FIXUP_ASUS_I2C_HEADSET_MIC),
 	SND_PCI_QUIRK(0x1043, 0x1473, "ASUS GU604VI/VC/VE/VG/VJ/VQ/VU/VV/VY/VZ", ALC285_FIXUP_ASUS_HEADSET_MIC),
 	SND_PCI_QUIRK(0x1043, 0x1483, "ASUS GU603VQ/VU/VV/VJ/VI", ALC285_FIXUP_ASUS_HEADSET_MIC),
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paul Greenwalt paul.greenwalt@intel.com
[ Upstream commit ba1124f58afd37d9ff155d4ab7c9f209346aaed9 ]
E830 is the 200G NIC family which uses the ice driver.
Add the E830-specific registers. Embed macros that select the proper register based on (hw)->mac_type, and name those macros [ORIGINAL]_BY_MAC(hw). Registers only available on one of the MACs need to be referred to explicitly as E800_NAME instead of just NAME. PTP is not yet supported.
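Concretely, the pattern looks like this (a representative pair taken from the hunks below):

	#define E800_GL_MDET_TX_TCLAN		0x000FC068
	#define E830_GL_MDET_TX_TCLAN		0x000FCCC0
	#define GL_MDET_TX_TCLAN_BY_MAC(hw) \
		((hw)->mac_type == ICE_MAC_E830 ? E830_GL_MDET_TX_TCLAN : \
						  E800_GL_MDET_TX_TCLAN)

	/* callers no longer open-code the mac_type check: */
	reg = rd32(hw, GL_MDET_TX_TCLAN_BY_MAC(hw));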
Co-developed-by: Milena Olech milena.olech@intel.com
Signed-off-by: Milena Olech milena.olech@intel.com
Co-developed-by: Dan Nowlin dan.nowlin@intel.com
Signed-off-by: Dan Nowlin dan.nowlin@intel.com
Co-developed-by: Scott Taylor scott.w.taylor@intel.com
Signed-off-by: Scott Taylor scott.w.taylor@intel.com
Co-developed-by: Pawel Chmielewski pawel.chmielewski@intel.com
Signed-off-by: Pawel Chmielewski pawel.chmielewski@intel.com
Reviewed-by: Simon Horman horms@kernel.org
Signed-off-by: Paul Greenwalt paul.greenwalt@intel.com
Tested-by: Tony Brelinski tony.brelinski@intel.com
Signed-off-by: Jacob Keller jacob.e.keller@intel.com
Link: https://lore.kernel.org/r/20231025214157.1222758-2-jacob.e.keller@intel.com
Signed-off-by: Jakub Kicinski kuba@kernel.org
Stable-dep-of: 79990cf5e7ad ("ice: Fix deinitializing VF in error path")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/ethernet/intel/ice/ice_common.c    | 71 ++++++++++++-------
 drivers/net/ethernet/intel/ice/ice_devids.h    | 10 ++-
 .../net/ethernet/intel/ice/ice_ethtool_fdir.c  | 24 ++++---
 .../net/ethernet/intel/ice/ice_hw_autogen.h    | 52 ++++++++++----
 drivers/net/ethernet/intel/ice/ice_main.c      | 13 ++--
 drivers/net/ethernet/intel/ice/ice_type.h      |  3 +-
 .../ethernet/intel/ice/ice_virtchnl_fdir.c     | 29 +++++---
 7 files changed, 141 insertions(+), 61 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 80deca45ab599..983332cbace21 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -1,5 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0
-/* Copyright (c) 2018, Intel Corporation. */
+/* Copyright (c) 2018-2023, Intel Corporation. */

 #include "ice_common.h"
 #include "ice_sched.h"
@@ -153,6 +153,12 @@ static int ice_set_mac_type(struct ice_hw *hw)
 	case ICE_DEV_ID_E823L_SFP:
 		hw->mac_type = ICE_MAC_GENERIC;
 		break;
+	case ICE_DEV_ID_E830_BACKPLANE:
+	case ICE_DEV_ID_E830_QSFP56:
+	case ICE_DEV_ID_E830_SFP:
+	case ICE_DEV_ID_E830_SFP_DD:
+		hw->mac_type = ICE_MAC_E830;
+		break;
 	default:
 		hw->mac_type = ICE_MAC_UNKNOWN;
 		break;
@@ -684,8 +690,7 @@ static void
 ice_fill_tx_timer_and_fc_thresh(struct ice_hw *hw,
 				struct ice_aqc_set_mac_cfg *cmd)
 {
-	u16 fc_thres_val, tx_timer_val;
-	u32 val;
+	u32 val, fc_thres_m;

 	/* We read back the transmit timer and FC threshold value of
 	 * LFC. Thus, we will use index =
@@ -694,19 +699,32 @@ ice_fill_tx_timer_and_fc_thresh(struct ice_hw *hw,
 	 * Also, because we are operating on transmit timer and FC
 	 * threshold of LFC, we don't turn on any bit in tx_tmr_priority
 	 */
-#define IDX_OF_LFC PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX
-
-	/* Retrieve the transmit timer */
-	val = rd32(hw, PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA(IDX_OF_LFC));
-	tx_timer_val = val &
-		PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_M;
-	cmd->tx_tmr_value = cpu_to_le16(tx_timer_val);
-
-	/* Retrieve the FC threshold */
-	val = rd32(hw, PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(IDX_OF_LFC));
-	fc_thres_val = val & PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_M;
-
-	cmd->fc_refresh_threshold = cpu_to_le16(fc_thres_val);
+#define E800_IDX_OF_LFC E800_PRTMAC_HSEC_CTL_TX_PS_QNT_MAX
+#define E800_REFRESH_TMR E800_PRTMAC_HSEC_CTL_TX_PS_RFSH_TMR
+
+	if (hw->mac_type == ICE_MAC_E830) {
+		/* Retrieve the transmit timer */
+		val = rd32(hw, E830_PRTMAC_CL01_PS_QNT);
+		cmd->tx_tmr_value =
+			le16_encode_bits(val, E830_PRTMAC_CL01_PS_QNT_CL0_M);
+
+		/* Retrieve the fc threshold */
+		val = rd32(hw, E830_PRTMAC_CL01_QNT_THR);
+		fc_thres_m = E830_PRTMAC_CL01_QNT_THR_CL0_M;
+	} else {
+		/* Retrieve the transmit timer */
+		val = rd32(hw,
+			   E800_PRTMAC_HSEC_CTL_TX_PS_QNT(E800_IDX_OF_LFC));
+		cmd->tx_tmr_value =
+			le16_encode_bits(val,
+					 E800_PRTMAC_HSEC_CTL_TX_PS_QNT_M);
+
+		/* Retrieve the fc threshold */
+		val = rd32(hw,
+			   E800_REFRESH_TMR(E800_IDX_OF_LFC));
+		fc_thres_m = E800_PRTMAC_HSEC_CTL_TX_PS_RFSH_TMR_M;
+	}
+	cmd->fc_refresh_threshold = le16_encode_bits(val, fc_thres_m);
 }

 /**
@@ -2389,16 +2407,21 @@ ice_parse_1588_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_p,
 static void
 ice_parse_fdir_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_p)
 {
-	u32 reg_val, val;
+	u32 reg_val, gsize, bsize;

 	reg_val = rd32(hw, GLQF_FD_SIZE);
-	val = (reg_val & GLQF_FD_SIZE_FD_GSIZE_M) >>
-		GLQF_FD_SIZE_FD_GSIZE_S;
-	func_p->fd_fltr_guar =
-		ice_get_num_per_func(hw, val);
-	val = (reg_val & GLQF_FD_SIZE_FD_BSIZE_M) >>
-		GLQF_FD_SIZE_FD_BSIZE_S;
-	func_p->fd_fltr_best_effort = val;
+	switch (hw->mac_type) {
+	case ICE_MAC_E830:
+		gsize = FIELD_GET(E830_GLQF_FD_SIZE_FD_GSIZE_M, reg_val);
+		bsize = FIELD_GET(E830_GLQF_FD_SIZE_FD_BSIZE_M, reg_val);
+		break;
+	case ICE_MAC_E810:
+	default:
+		gsize = FIELD_GET(E800_GLQF_FD_SIZE_FD_GSIZE_M, reg_val);
+		bsize = FIELD_GET(E800_GLQF_FD_SIZE_FD_BSIZE_M, reg_val);
+	}
+	func_p->fd_fltr_guar = ice_get_num_per_func(hw, gsize);
+	func_p->fd_fltr_best_effort = bsize;

 	ice_debug(hw, ICE_DBG_INIT, "func caps: fd_fltr_guar = %d\n",
 		  func_p->fd_fltr_guar);
diff --git a/drivers/net/ethernet/intel/ice/ice_devids.h b/drivers/net/ethernet/intel/ice/ice_devids.h
index 6d560d1c74a4a..a2d384dbfc767 100644
--- a/drivers/net/ethernet/intel/ice/ice_devids.h
+++ b/drivers/net/ethernet/intel/ice/ice_devids.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright (c) 2018, Intel Corporation. */
+/* Copyright (c) 2018-2023, Intel Corporation. */

 #ifndef _ICE_DEVIDS_H_
 #define _ICE_DEVIDS_H_
@@ -16,6 +16,14 @@
 #define ICE_DEV_ID_E823L_1GBE		0x124F
 /* Intel(R) Ethernet Connection E823-L for QSFP */
 #define ICE_DEV_ID_E823L_QSFP		0x151D
+/* Intel(R) Ethernet Controller E830-C for backplane */
+#define ICE_DEV_ID_E830_BACKPLANE	0x12D1
+/* Intel(R) Ethernet Controller E830-C for QSFP */
+#define ICE_DEV_ID_E830_QSFP56		0x12D2
+/* Intel(R) Ethernet Controller E830-C for SFP */
+#define ICE_DEV_ID_E830_SFP		0x12D3
+/* Intel(R) Ethernet Controller E830-C for SFP-DD */
+#define ICE_DEV_ID_E830_SFP_DD		0x12D4
 /* Intel(R) Ethernet Controller E810-C for backplane */
 #define ICE_DEV_ID_E810C_BACKPLANE	0x1591
 /* Intel(R) Ethernet Controller E810-C for QSFP */
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
index b6bbf2376ef5c..d43b642cbc01c 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
@@ -1,5 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0
-/* Copyright (C) 2018-2020, Intel Corporation. */
+/* Copyright (C) 2018-2023, Intel Corporation. */

 /* flow director ethtool support for ice */

@@ -540,16 +540,24 @@ int ice_fdir_num_avail_fltr(struct ice_hw *hw, struct ice_vsi *vsi)
 	/* total guaranteed filters assigned to this VSI */
 	num_guar = vsi->num_gfltr;

-	/* minus the guaranteed filters programed by this VSI */
-	num_guar -= (rd32(hw, VSIQF_FD_CNT(vsi_num)) &
-		     VSIQF_FD_CNT_FD_GCNT_M) >> VSIQF_FD_CNT_FD_GCNT_S;
-
 	/* total global best effort filters */
 	num_be = hw->func_caps.fd_fltr_best_effort;

-	/* minus the global best effort filters programmed */
-	num_be -= (rd32(hw, GLQF_FD_CNT) & GLQF_FD_CNT_FD_BCNT_M) >>
-		  GLQF_FD_CNT_FD_BCNT_S;
+	/* Subtract the number of programmed filters from the global values */
+	switch (hw->mac_type) {
+	case ICE_MAC_E830:
+		num_guar -= FIELD_GET(E830_VSIQF_FD_CNT_FD_GCNT_M,
+				      rd32(hw, VSIQF_FD_CNT(vsi_num)));
+		num_be -= FIELD_GET(E830_GLQF_FD_CNT_FD_BCNT_M,
+				    rd32(hw, GLQF_FD_CNT));
+		break;
+	case ICE_MAC_E810:
+	default:
+		num_guar -= FIELD_GET(E800_VSIQF_FD_CNT_FD_GCNT_M,
+				      rd32(hw, VSIQF_FD_CNT(vsi_num)));
+		num_be -= FIELD_GET(E800_GLQF_FD_CNT_FD_BCNT_M,
+				    rd32(hw, GLQF_FD_CNT));
+	}

 	return num_guar + num_be;
 }
diff --git a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
index 531cc2194741e..67519a985b327 100644
--- a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
+++ b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright (c) 2018, Intel Corporation. */
+/* Copyright (c) 2018-2023, Intel Corporation. */

 /* Machine-generated file */

@@ -284,11 +284,11 @@
 #define VPLAN_TX_QBASE_VFNUMQ_M		ICE_M(0xFF, 16)
 #define VPLAN_TXQ_MAPENA(_VF)		(0x00073800 + ((_VF) * 4))
 #define VPLAN_TXQ_MAPENA_TX_ENA_M	BIT(0)
-#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA(_i)	(0x001E36E0 + ((_i) * 32))
-#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX	8
-#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_M ICE_M(0xFFFF, 0)
-#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(_i)	(0x001E3800 + ((_i) * 32))
-#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_M	ICE_M(0xFFFF, 0)
+#define E800_PRTMAC_HSEC_CTL_TX_PS_QNT(_i)	(0x001E36E0 + ((_i) * 32))
+#define E800_PRTMAC_HSEC_CTL_TX_PS_QNT_MAX	8
+#define E800_PRTMAC_HSEC_CTL_TX_PS_QNT_M	GENMASK(15, 0)
+#define E800_PRTMAC_HSEC_CTL_TX_PS_RFSH_TMR(_i)	(0x001E3800 + ((_i) * 32))
+#define E800_PRTMAC_HSEC_CTL_TX_PS_RFSH_TMR_M	GENMASK(15, 0)
 #define GL_MDCK_TX_TDPU			0x00049348
 #define GL_MDCK_TX_TDPU_RCU_ANTISPOOF_ITR_DIS_M BIT(1)
 #define GL_MDET_RX			0x00294C00
@@ -311,7 +311,11 @@
 #define GL_MDET_TX_PQM_MAL_TYPE_S	26
 #define GL_MDET_TX_PQM_MAL_TYPE_M	ICE_M(0x1F, 26)
 #define GL_MDET_TX_PQM_VALID_M		BIT(31)
-#define GL_MDET_TX_TCLAN		0x000FC068
+#define GL_MDET_TX_TCLAN_BY_MAC(hw) \
+	((hw)->mac_type == ICE_MAC_E830 ? E830_GL_MDET_TX_TCLAN : \
+					  E800_GL_MDET_TX_TCLAN)
+#define E800_GL_MDET_TX_TCLAN		0x000FC068
+#define E830_GL_MDET_TX_TCLAN		0x000FCCC0
 #define GL_MDET_TX_TCLAN_QNUM_S		0
 #define GL_MDET_TX_TCLAN_QNUM_M		ICE_M(0x7FFF, 0)
 #define GL_MDET_TX_TCLAN_VF_NUM_S	15
@@ -325,7 +329,11 @@
 #define PF_MDET_RX_VALID_M		BIT(0)
 #define PF_MDET_TX_PQM			0x002D2C80
 #define PF_MDET_TX_PQM_VALID_M		BIT(0)
-#define PF_MDET_TX_TCLAN		0x000FC000
+#define PF_MDET_TX_TCLAN_BY_MAC(hw) \
+	((hw)->mac_type == ICE_MAC_E830 ? E830_PF_MDET_TX_TCLAN : \
+					  E800_PF_MDET_TX_TCLAN)
+#define E800_PF_MDET_TX_TCLAN		0x000FC000
+#define E830_PF_MDET_TX_TCLAN		0x000FCC00
 #define PF_MDET_TX_TCLAN_VALID_M	BIT(0)
 #define VP_MDET_RX(_VF)			(0x00294400 + ((_VF) * 4))
 #define VP_MDET_RX_VALID_M		BIT(0)
@@ -335,6 +343,8 @@
 #define VP_MDET_TX_TCLAN_VALID_M	BIT(0)
 #define VP_MDET_TX_TDPU(_VF)		(0x00040000 + ((_VF) * 4))
 #define VP_MDET_TX_TDPU_VALID_M		BIT(0)
+#define E800_GL_MNG_FWSM_FW_MODES_M	GENMASK(2, 0)
+#define E830_GL_MNG_FWSM_FW_MODES_M	GENMASK(1, 0)
 #define GL_MNG_FWSM			0x000B6134
 #define GL_MNG_FWSM_FW_LOADING_M	BIT(30)
 #define GLNVM_FLA			0x000B6108
@@ -363,13 +373,18 @@
 #define GL_PWR_MODE_CTL_CAR_MAX_BW_S	30
 #define GL_PWR_MODE_CTL_CAR_MAX_BW_M	ICE_M(0x3, 30)
 #define GLQF_FD_CNT			0x00460018
+#define E800_GLQF_FD_CNT_FD_GCNT_M	GENMASK(14, 0)
+#define E830_GLQF_FD_CNT_FD_GCNT_M	GENMASK(15, 0)
 #define GLQF_FD_CNT_FD_BCNT_S		16
-#define GLQF_FD_CNT_FD_BCNT_M		ICE_M(0x7FFF, 16)
+#define E800_GLQF_FD_CNT_FD_BCNT_M	GENMASK(30, 16)
+#define E830_GLQF_FD_CNT_FD_BCNT_M	GENMASK(31, 16)
 #define GLQF_FD_SIZE			0x00460010
 #define GLQF_FD_SIZE_FD_GSIZE_S		0
-#define GLQF_FD_SIZE_FD_GSIZE_M		ICE_M(0x7FFF, 0)
+#define E800_GLQF_FD_SIZE_FD_GSIZE_M	GENMASK(14, 0)
+#define E830_GLQF_FD_SIZE_FD_GSIZE_M	GENMASK(15, 0)
 #define GLQF_FD_SIZE_FD_BSIZE_S		16
-#define GLQF_FD_SIZE_FD_BSIZE_M		ICE_M(0x7FFF, 16)
+#define E800_GLQF_FD_SIZE_FD_BSIZE_M	GENMASK(30, 16)
+#define E830_GLQF_FD_SIZE_FD_BSIZE_M	GENMASK(31, 16)
 #define GLQF_FDINSET(_i, _j)		(0x00412000 + ((_i) * 4 + (_j) * 512))
 #define GLQF_FDMASK(_i)			(0x00410800 + ((_i) * 4))
 #define GLQF_FDMASK_MAX_INDEX		31
@@ -388,6 +403,10 @@
 #define GLQF_HMASK_SEL(_i)		(0x00410000 + ((_i) * 4))
 #define GLQF_HMASK_SEL_MAX_INDEX	127
 #define GLQF_HMASK_SEL_MASK_SEL_S	0
+#define E800_PFQF_FD_CNT_FD_GCNT_M	GENMASK(14, 0)
+#define E830_PFQF_FD_CNT_FD_GCNT_M	GENMASK(15, 0)
+#define E800_PFQF_FD_CNT_FD_BCNT_M	GENMASK(30, 16)
+#define E830_PFQF_FD_CNT_FD_BCNT_M	GENMASK(31, 16)
 #define PFQF_FD_ENA			0x0043A000
 #define PFQF_FD_ENA_FD_ENA_M		BIT(0)
 #define PFQF_FD_SIZE			0x00460100
@@ -478,6 +497,7 @@
 #define GLTSYN_SYNC_DLAY		0x00088818
 #define GLTSYN_TGT_H_0(_i)		(0x00088930 + ((_i) * 4))
 #define GLTSYN_TGT_L_0(_i)		(0x00088928 + ((_i) * 4))
+#define GLTSYN_TIME_0(_i)		(0x000888C8 + ((_i) * 4))
 #define GLTSYN_TIME_H(_i)		(0x000888D8 + ((_i) * 4))
 #define GLTSYN_TIME_L(_i)		(0x000888D0 + ((_i) * 4))
 #define PFHH_SEM			0x000A4200 /* Reset Source: PFR */
@@ -486,9 +506,11 @@
 #define PFTSYN_SEM_BUSY_M		BIT(0)
 #define VSIQF_FD_CNT(_VSI)		(0x00464000 + ((_VSI) * 4))
 #define VSIQF_FD_CNT_FD_GCNT_S		0
-#define VSIQF_FD_CNT_FD_GCNT_M		ICE_M(0x3FFF, 0)
+#define E800_VSIQF_FD_CNT_FD_GCNT_M	GENMASK(13, 0)
+#define E830_VSIQF_FD_CNT_FD_GCNT_M	GENMASK(15, 0)
 #define VSIQF_FD_CNT_FD_BCNT_S		16
-#define VSIQF_FD_CNT_FD_BCNT_M		ICE_M(0x3FFF, 16)
+#define E800_VSIQF_FD_CNT_FD_BCNT_M	GENMASK(29, 16)
+#define E830_VSIQF_FD_CNT_FD_BCNT_M	GENMASK(31, 16)
 #define VSIQF_FD_SIZE(_VSI)		(0x00462000 + ((_VSI) * 4))
 #define VSIQF_HKEY_MAX_INDEX		12
 #define PFPM_APM			0x000B8080
@@ -500,6 +522,10 @@
 #define PFPM_WUS_MAG_M			BIT(1)
 #define PFPM_WUS_MNG_M			BIT(3)
 #define PFPM_WUS_FW_RST_WK_M		BIT(31)
+#define E830_PRTMAC_CL01_PS_QNT		0x001E32A0
+#define E830_PRTMAC_CL01_PS_QNT_CL0_M	GENMASK(15, 0)
+#define E830_PRTMAC_CL01_QNT_THR	0x001E3320
+#define E830_PRTMAC_CL01_QNT_THR_CL0_M	GENMASK(15, 0)
 #define VFINT_DYN_CTLN(_i)		(0x00003800 + ((_i) * 4))
 #define VFINT_DYN_CTLN_CLEARPBA_M	BIT(1)
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 9f12c9a0fe296..ef5d43f2804e7 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -1,5 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0
-/* Copyright (c) 2018, Intel Corporation. */
+/* Copyright (c) 2018-2023, Intel Corporation. */

 /* Intel(R) Ethernet Connection E800 Series Linux Driver */

@@ -1748,7 +1748,7 @@ static void ice_handle_mdd_event(struct ice_pf *pf)
 		wr32(hw, GL_MDET_TX_PQM, 0xffffffff);
 	}

-	reg = rd32(hw, GL_MDET_TX_TCLAN);
+	reg = rd32(hw, GL_MDET_TX_TCLAN_BY_MAC(hw));
 	if (reg & GL_MDET_TX_TCLAN_VALID_M) {
 		u8 pf_num = (reg & GL_MDET_TX_TCLAN_PF_NUM_M) >>
 				GL_MDET_TX_TCLAN_PF_NUM_S;
@@ -1762,7 +1762,7 @@ static void ice_handle_mdd_event(struct ice_pf *pf)
 		if (netif_msg_tx_err(pf))
 			dev_info(dev, "Malicious Driver Detection event %d on TX queue %d PF# %d VF# %d\n",
 				 event, queue, pf_num, vf_num);
-		wr32(hw, GL_MDET_TX_TCLAN, 0xffffffff);
+		wr32(hw, GL_MDET_TX_TCLAN_BY_MAC(hw), U32_MAX);
 	}

 	reg = rd32(hw, GL_MDET_RX);
@@ -1790,9 +1790,9 @@ static void ice_handle_mdd_event(struct ice_pf *pf)
 			dev_info(dev, "Malicious Driver Detection event TX_PQM detected on PF\n");
 	}

-	reg = rd32(hw, PF_MDET_TX_TCLAN);
+	reg = rd32(hw, PF_MDET_TX_TCLAN_BY_MAC(hw));
 	if (reg & PF_MDET_TX_TCLAN_VALID_M) {
-		wr32(hw, PF_MDET_TX_TCLAN, 0xFFFF);
+		wr32(hw, PF_MDET_TX_TCLAN_BY_MAC(hw), 0xffff);
 		if (netif_msg_tx_err(pf))
 			dev_info(dev, "Malicious Driver Detection event TX_TCLAN detected on PF\n");
 	}
@@ -3873,7 +3873,8 @@ static void ice_set_pf_caps(struct ice_pf *pf)
 	}

 	clear_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags);
-	if (func_caps->common_cap.ieee_1588)
+	if (func_caps->common_cap.ieee_1588 &&
+	    !(pf->hw.mac_type == ICE_MAC_E830))
 		set_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags);

 	pf->max_pf_txqs = func_caps->common_cap.num_txq;
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index 5e353b0cbe6f7..35ee5b29ea34e 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright (c) 2018, Intel Corporation. */
+/* Copyright (c) 2018-2023, Intel Corporation. */

 #ifndef _ICE_TYPE_H_
 #define _ICE_TYPE_H_
@@ -129,6 +129,7 @@ enum ice_set_fc_aq_failures {
 enum ice_mac_type {
 	ICE_MAC_UNKNOWN = 0,
 	ICE_MAC_E810,
+	ICE_MAC_E830,
 	ICE_MAC_GENERIC,
 };

diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
index 974c71490d97c..3ca5f44dea26e 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
@@ -1,5 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0
-/* Copyright (C) 2021, Intel Corporation. */
+/* Copyright (C) 2021-2023, Intel Corporation. */

 #include "ice.h"
 #include "ice_base.h"
@@ -1421,8 +1421,8 @@ ice_vc_fdir_irq_handler(struct ice_vsi *ctrl_vsi,
  */
 static void ice_vf_fdir_dump_info(struct ice_vf *vf)
 {
+	u32 fd_size, fd_cnt, fd_size_g, fd_cnt_g, fd_size_b, fd_cnt_b;
 	struct ice_vsi *vf_vsi;
-	u32 fd_size, fd_cnt;
 	struct device *dev;
 	struct ice_pf *pf;
 	struct ice_hw *hw;
@@ -1441,12 +1441,25 @@ static void ice_vf_fdir_dump_info(struct ice_vf *vf)

 	fd_size = rd32(hw, VSIQF_FD_SIZE(vsi_num));
 	fd_cnt = rd32(hw, VSIQF_FD_CNT(vsi_num));
-	dev_dbg(dev, "VF %d: space allocated: guar:0x%x, be:0x%x, space consumed: guar:0x%x, be:0x%x\n",
-		vf->vf_id,
-		(fd_size & VSIQF_FD_CNT_FD_GCNT_M) >> VSIQF_FD_CNT_FD_GCNT_S,
-		(fd_size & VSIQF_FD_CNT_FD_BCNT_M) >> VSIQF_FD_CNT_FD_BCNT_S,
-		(fd_cnt & VSIQF_FD_CNT_FD_GCNT_M) >> VSIQF_FD_CNT_FD_GCNT_S,
-		(fd_cnt & VSIQF_FD_CNT_FD_BCNT_M) >> VSIQF_FD_CNT_FD_BCNT_S);
+	switch (hw->mac_type) {
+	case ICE_MAC_E830:
+		fd_size_g = FIELD_GET(E830_VSIQF_FD_CNT_FD_GCNT_M, fd_size);
+		fd_size_b = FIELD_GET(E830_VSIQF_FD_CNT_FD_BCNT_M, fd_size);
+		fd_cnt_g = FIELD_GET(E830_VSIQF_FD_CNT_FD_GCNT_M, fd_cnt);
+		fd_cnt_b = FIELD_GET(E830_VSIQF_FD_CNT_FD_BCNT_M, fd_cnt);
+		break;
+	case ICE_MAC_E810:
+	default:
+		fd_size_g = FIELD_GET(E800_VSIQF_FD_CNT_FD_GCNT_M, fd_size);
+		fd_size_b = FIELD_GET(E800_VSIQF_FD_CNT_FD_BCNT_M, fd_size);
+		fd_cnt_g = FIELD_GET(E800_VSIQF_FD_CNT_FD_GCNT_M, fd_cnt);
+		fd_cnt_b = FIELD_GET(E800_VSIQF_FD_CNT_FD_BCNT_M, fd_cnt);
+	}
+
+	dev_dbg(dev, "VF %d: Size in the FD table: guaranteed:0x%x, best effort:0x%x\n",
+		vf->vf_id, fd_size_g, fd_size_b);
+	dev_dbg(dev, "VF %d: Filter counter in the FD table: guaranteed:0x%x, best effort:0x%x\n",
+		vf->vf_id, fd_cnt_g, fd_cnt_b);
 }
/**
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paul Greenwalt paul.greenwalt@intel.com
[ Upstream commit 59f4d59b25aec39a015c0949f4ec235c7a839c44 ]
E830 adds hardware support to prevent the VF from overflowing the PF mailbox with VIRTCHNL messages. E830 will use the hardware feature (ICE_F_MBX_LIMIT) instead of the software solution ice_is_malicious_vf().
To prevent a VF from overflowing the PF, the PF sets the number of messages per VF that can be in the PF's mailbox queue (ICE_MBX_OVERFLOW_WATERMARK). When the PF processes a message from a VF, the PF decrements the per VF message count using the E830_MBX_VF_DEC_TRIG register.
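Schematically, the flow is (a sketch using the registers added below, not literal driver code):

	/* PF init: cap the number of in-flight messages per VF. */
	wr32(hw, E830_MBX_PF_IN_FLIGHT_VF_MSGS_THRESH,
	     ICE_MBX_OVERFLOW_WATERMARK);

	/* After the PF consumes one VF message from its mailbox queue,
	 * trigger the hardware to decrement that VF's in-flight count.
	 */
	u16 vfid = le16_to_cpu(event->desc.retval);
	wr32(hw, E830_MBX_VF_DEC_TRIG(vfid), 1);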
Signed-off-by: Paul Greenwalt paul.greenwalt@intel.com
Reviewed-by: Alexander Lobakin aleksander.lobakin@intel.com
Tested-by: Rafal Romanowski rafal.romanowski@intel.com
Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com
Stable-dep-of: 79990cf5e7ad ("ice: Fix deinitializing VF in error path")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/ethernet/intel/ice/ice.h            |  1 +
 .../net/ethernet/intel/ice/ice_hw_autogen.h     |  3 ++
 drivers/net/ethernet/intel/ice/ice_lib.c        |  3 ++
 drivers/net/ethernet/intel/ice/ice_main.c       | 24 ++++++++++----
 drivers/net/ethernet/intel/ice/ice_sriov.c      |  3 +-
 drivers/net/ethernet/intel/ice/ice_vf_lib.c     | 26 +++++++++++++--
 drivers/net/ethernet/intel/ice/ice_vf_mbx.c     | 32 +++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_vf_mbx.h     |  9 ++++++
 drivers/net/ethernet/intel/ice/ice_virtchnl.c   |  8 +++--
 9 files changed, 96 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index f943964ec05ae..e29a7ffd5f143 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -202,6 +202,7 @@ enum ice_feature {
 	ICE_F_GNSS,
 	ICE_F_ROCE_LAG,
 	ICE_F_SRIOV_LAG,
+	ICE_F_MBX_LIMIT,
 	ICE_F_MAX
 };

diff --git a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
index 67519a985b327..96f70c0a96598 100644
--- a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
+++ b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
@@ -528,5 +528,8 @@
 #define E830_PRTMAC_CL01_QNT_THR_CL0_M	GENMASK(15, 0)
 #define VFINT_DYN_CTLN(_i)		(0x00003800 + ((_i) * 4))
 #define VFINT_DYN_CTLN_CLEARPBA_M	BIT(1)
+#define E830_MBX_PF_IN_FLIGHT_VF_MSGS_THRESH	0x00234000
+#define E830_MBX_VF_DEC_TRIG(_VF)		(0x00233800 + (_VF) * 4)
+#define E830_MBX_VF_IN_FLIGHT_MSGS_AT_PF_CNT(_VF) (0x00233000 + (_VF) * 4)

 #endif /* _ICE_HW_AUTOGEN_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 3a0ef56d3edca..1fc4805353eb5 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -4023,6 +4023,9 @@ void ice_init_feature_support(struct ice_pf *pf)
 	default:
 		break;
 	}
+
+	if (pf->hw.mac_type == ICE_MAC_E830)
+		ice_set_feature_support(pf, ICE_F_MBX_LIMIT);
 }

 /**
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index ef5d43f2804e7..0ae7bdfff83fb 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -1514,12 +1514,20 @@ static int __ice_clean_ctrlq(struct ice_pf *pf, enum ice_ctl_q q_type)
 			ice_vf_lan_overflow_event(pf, &event);
 			break;
 		case ice_mbx_opc_send_msg_to_pf:
-			data.num_msg_proc = i;
-			data.num_pending_arq = pending;
-			data.max_num_msgs_mbx = hw->mailboxq.num_rq_entries;
-			data.async_watermark_val = ICE_MBX_OVERFLOW_WATERMARK;
+			if (ice_is_feature_supported(pf, ICE_F_MBX_LIMIT)) {
+				ice_vc_process_vf_msg(pf, &event, NULL);
+				ice_mbx_vf_dec_trig_e830(hw, &event);
+			} else {
+				u16 val = hw->mailboxq.num_rq_entries;
+
+				data.max_num_msgs_mbx = val;
+				val = ICE_MBX_OVERFLOW_WATERMARK;
+				data.async_watermark_val = val;
+				data.num_msg_proc = i;
+				data.num_pending_arq = pending;

-			ice_vc_process_vf_msg(pf, &event, &data);
+				ice_vc_process_vf_msg(pf, &event, &data);
+			}
 			break;
 		case ice_aqc_opc_fw_logging:
 			ice_output_fw_log(hw, &event.desc, event.msg_buf);
@@ -3920,7 +3928,11 @@ static int ice_init_pf(struct ice_pf *pf)

 	mutex_init(&pf->vfs.table_lock);
 	hash_init(pf->vfs.table);
-	ice_mbx_init_snapshot(&pf->hw);
+	if (ice_is_feature_supported(pf, ICE_F_MBX_LIMIT))
+		wr32(&pf->hw, E830_MBX_PF_IN_FLIGHT_VF_MSGS_THRESH,
+		     ICE_MBX_OVERFLOW_WATERMARK);
+	else
+		ice_mbx_init_snapshot(&pf->hw);

 	return 0;
 }
diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
index 31314e7540f8c..b82c47f8865b4 100644
--- a/drivers/net/ethernet/intel/ice/ice_sriov.c
+++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
@@ -195,7 +195,8 @@ void ice_free_vfs(struct ice_pf *pf)
 	}

 		/* clear malicious info since the VF is getting released */
-		list_del(&vf->mbx_info.list_entry);
+		if (!ice_is_feature_supported(pf, ICE_F_MBX_LIMIT))
+			list_del(&vf->mbx_info.list_entry);

 		mutex_unlock(&vf->cfg_lock);
 	}
diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
index 03b9d7d748518..3f80ba02f634e 100644
--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
@@ -701,6 +701,23 @@ ice_vf_clear_vsi_promisc(struct ice_vf *vf, struct ice_vsi *vsi, u8 promisc_m)
 	return 0;
 }

+/**
+ * ice_reset_vf_mbx_cnt - reset VF mailbox message count
+ * @vf: pointer to the VF structure
+ *
+ * This function clears the VF mailbox message count, and should be called on
+ * VF reset.
+ */
+static void ice_reset_vf_mbx_cnt(struct ice_vf *vf)
+{
+	struct ice_pf *pf = vf->pf;
+
+	if (ice_is_feature_supported(pf, ICE_F_MBX_LIMIT))
+		ice_mbx_vf_clear_cnt_e830(&pf->hw, vf->vf_id);
+	else
+		ice_mbx_clear_malvf(&vf->mbx_info);
+}
+
 /**
  * ice_reset_all_vfs - reset all allocated VFs in one go
  * @pf: pointer to the PF structure
@@ -727,7 +744,7 @@ void ice_reset_all_vfs(struct ice_pf *pf)

 	/* clear all malicious info if the VFs are getting reset */
 	ice_for_each_vf(pf, bkt, vf)
-		ice_mbx_clear_malvf(&vf->mbx_info);
+		ice_reset_vf_mbx_cnt(vf);

 	/* If VFs have been disabled, there is no need to reset */
 	if (test_and_set_bit(ICE_VF_DIS, pf->state)) {
@@ -944,7 +961,7 @@ int ice_reset_vf(struct ice_vf *vf, u32 flags)
 	ice_eswitch_update_repr(vsi);

 	/* if the VF has been reset allow it to come up again */
-	ice_mbx_clear_malvf(&vf->mbx_info);
+	ice_reset_vf_mbx_cnt(vf);

 out_unlock:
 	if (lag && lag->bonded && lag->primary &&
@@ -994,7 +1011,10 @@ void ice_initialize_vf_entry(struct ice_vf *vf)
 	ice_vf_fdir_init(vf);

 	/* Initialize mailbox info for this VF */
-	ice_mbx_init_vf_info(&pf->hw, &vf->mbx_info);
+	if (ice_is_feature_supported(pf, ICE_F_MBX_LIMIT))
+		ice_mbx_vf_clear_cnt_e830(&pf->hw, vf->vf_id);
+	else
+		ice_mbx_init_vf_info(&pf->hw, &vf->mbx_info);

 	mutex_init(&vf->cfg_lock);
 }
diff --git a/drivers/net/ethernet/intel/ice/ice_vf_mbx.c b/drivers/net/ethernet/intel/ice/ice_vf_mbx.c
index 40cb4ba0789ce..75c8113e58ee9 100644
--- a/drivers/net/ethernet/intel/ice/ice_vf_mbx.c
+++ b/drivers/net/ethernet/intel/ice/ice_vf_mbx.c
@@ -210,6 +210,38 @@ ice_mbx_detect_malvf(struct ice_hw *hw, struct ice_mbx_vf_info *vf_info,
 	return 0;
 }

+/**
+ * ice_mbx_vf_dec_trig_e830 - Decrements the VF mailbox queue counter
+ * @hw: pointer to the HW struct
+ * @event: pointer to the control queue receive event
+ *
+ * This function triggers to decrement the counter
+ * MBX_VF_IN_FLIGHT_MSGS_AT_PF_CNT when the driver replenishes
+ * the buffers at the PF mailbox queue.
+ */
+void ice_mbx_vf_dec_trig_e830(const struct ice_hw *hw,
+			      const struct ice_rq_event_info *event)
+{
+	u16 vfid = le16_to_cpu(event->desc.retval);
+
+	wr32(hw, E830_MBX_VF_DEC_TRIG(vfid), 1);
+}
+
+/**
+ * ice_mbx_vf_clear_cnt_e830 - Clear the VF mailbox queue count
+ * @hw: pointer to the HW struct
+ * @vf_id: VF ID in the PF space
+ *
+ * This function clears the counter MBX_VF_IN_FLIGHT_MSGS_AT_PF_CNT, and should
+ * be called when a VF is created and on VF reset.
+ */
+void ice_mbx_vf_clear_cnt_e830(const struct ice_hw *hw, u16 vf_id)
+{
+	u32 reg = rd32(hw, E830_MBX_VF_IN_FLIGHT_MSGS_AT_PF_CNT(vf_id));
+
+	wr32(hw, E830_MBX_VF_DEC_TRIG(vf_id), reg);
+}
+
 /**
  * ice_mbx_vf_state_handler - Handle states of the overflow algorithm
  * @hw: pointer to the HW struct
diff --git a/drivers/net/ethernet/intel/ice/ice_vf_mbx.h b/drivers/net/ethernet/intel/ice/ice_vf_mbx.h
index 44bc030d17e07..684de89e5c5ed 100644
--- a/drivers/net/ethernet/intel/ice/ice_vf_mbx.h
+++ b/drivers/net/ethernet/intel/ice/ice_vf_mbx.h
@@ -19,6 +19,9 @@ ice_aq_send_msg_to_vf(struct ice_hw *hw, u16 vfid, u32 v_opcode, u32 v_retval,
 		      u8 *msg, u16 msglen, struct ice_sq_cd *cd);

 u32 ice_conv_link_speed_to_virtchnl(bool adv_link_support, u16 link_speed);
+void ice_mbx_vf_dec_trig_e830(const struct ice_hw *hw,
+			      const struct ice_rq_event_info *event);
+void ice_mbx_vf_clear_cnt_e830(const struct ice_hw *hw, u16 vf_id);
 int
 ice_mbx_vf_state_handler(struct ice_hw *hw, struct ice_mbx_data *mbx_data,
 			 struct ice_mbx_vf_info *vf_info, bool *report_malvf);
@@ -47,5 +50,11 @@ static inline void ice_mbx_init_snapshot(struct ice_hw *hw)
 {
 }

+static inline void
+ice_mbx_vf_dec_trig_e830(const struct ice_hw *hw,
+			 const struct ice_rq_event_info *event)
+{
+}
+
 #endif /* CONFIG_PCI_IOV */
 #endif /* _ICE_VF_MBX_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
index 9f7268bb2ee3b..e709b10a29761 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
@@ -3899,8 +3899,10 @@ ice_is_malicious_vf(struct ice_vf *vf, struct ice_mbx_data *mbxdata)
  * @event: pointer to the AQ event
  * @mbxdata: information used to detect VF attempting mailbox overflow
  *
- * called from the common asq/arq handler to
- * process request from VF
+ * Called from the common asq/arq handler to process request from VF. When this
+ * flow is used for devices with hardware VF to PF message queue overflow
+ * support (ICE_F_MBX_LIMIT) mbxdata is set to NULL and ice_is_malicious_vf
+ * check is skipped.
  */
 void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event,
 			   struct ice_mbx_data *mbxdata)
@@ -3926,7 +3928,7 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event,
 	mutex_lock(&vf->cfg_lock);

 	/* Check if the VF is trying to overflow the mailbox */
-	if (ice_is_malicious_vf(vf, mbxdata))
+	if (mbxdata && ice_is_malicious_vf(vf, mbxdata))
 		goto finish;
/* Check if VF is disabled. */
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marcin Szycik marcin.szycik@linux.intel.com
[ Upstream commit 79990cf5e7aded76d0c092c9f5ed31eb1c75e02c ]
If ice_ena_vfs() fails after calling ice_create_vf_entries(), it frees all VFs without removing them from the snapshot PF-VF mailbox list, leading to list corruption.
Reproducer:
  devlink dev eswitch set $PF1_PCI mode switchdev
  ip l s $PF1 up
  ip l s $PF1 promisc on
  sleep 1
  echo 1 > /sys/class/net/$PF1/device/sriov_numvfs
  sleep 1
  echo 1 > /sys/class/net/$PF1/device/sriov_numvfs
Trace (minimized):
  list_add corruption. next->prev should be prev (ffff8882e241c6f0), but was 0000000000000000. (next=ffff888455da1330).
  kernel BUG at lib/list_debug.c:29!
  RIP: 0010:__list_add_valid_or_report+0xa6/0x100
   ice_mbx_init_vf_info+0xa7/0x180 [ice]
   ice_initialize_vf_entry+0x1fa/0x250 [ice]
   ice_sriov_configure+0x8d7/0x1520 [ice]
   ? __percpu_ref_switch_mode+0x1b1/0x5d0
   ? __pfx_ice_sriov_configure+0x10/0x10 [ice]
Sometimes a KASAN report can be seen instead with a similar stack trace: BUG: KASAN: use-after-free in __list_add_valid_or_report+0xf1/0x100
VFs are added to this list in ice_mbx_init_vf_info(), but only removed in ice_free_vfs(). Move the removal to ice_free_vf_entries(), which is also called in the other places where VFs are removed (including ice_free_vfs() itself), as shown in the sketch below.
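Abridged from the hunks below, every teardown path then runs the same sequence:

	hash_for_each_safe(vfs->table, bkt, tmp, vf, entry) {
		hash_del_rcu(&vf->entry);
		ice_deinitialize_vf_entry(vf);	/* drops vf from the mbx snapshot list */
		ice_put_vf(vf);
	}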
Fixes: 8cd8a6b17d27 ("ice: move VF overflow message count into struct ice_mbx_vf_info")
Reported-by: Sujai Buvaneswaran sujai.buvaneswaran@intel.com
Closes: https://lore.kernel.org/intel-wired-lan/PH0PR11MB50138B635F2E5CEB7075325D961...
Reviewed-by: Martyna Szapar-Mudlaw martyna.szapar-mudlaw@linux.intel.com
Signed-off-by: Marcin Szycik marcin.szycik@linux.intel.com
Reviewed-by: Simon Horman horms@kernel.org
Tested-by: Sujai Buvaneswaran sujai.buvaneswaran@intel.com
Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com
Link: https://patch.msgid.link/20250224190647.3601930-2-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski kuba@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/ethernet/intel/ice/ice_sriov.c          | 5 +----
 drivers/net/ethernet/intel/ice/ice_vf_lib.c         | 8 ++++++++
 drivers/net/ethernet/intel/ice/ice_vf_lib_private.h | 1 +
 3 files changed, 10 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
index b82c47f8865b4..56345fe653707 100644
--- a/drivers/net/ethernet/intel/ice/ice_sriov.c
+++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
@@ -36,6 +36,7 @@ static void ice_free_vf_entries(struct ice_pf *pf)

 	hash_for_each_safe(vfs->table, bkt, tmp, vf, entry) {
 		hash_del_rcu(&vf->entry);
+		ice_deinitialize_vf_entry(vf);
 		ice_put_vf(vf);
 	}
 }
@@ -194,10 +195,6 @@ void ice_free_vfs(struct ice_pf *pf)
 			wr32(hw, GLGEN_VFLRSTAT(reg_idx), BIT(bit_idx));
 		}

-		/* clear malicious info since the VF is getting released */
-		if (!ice_is_feature_supported(pf, ICE_F_MBX_LIMIT))
-			list_del(&vf->mbx_info.list_entry);
-
 		mutex_unlock(&vf->cfg_lock);
 	}

diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
index 3f80ba02f634e..58f9ac81dfbb2 100644
--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
@@ -1019,6 +1019,14 @@ void ice_initialize_vf_entry(struct ice_vf *vf)
 	mutex_init(&vf->cfg_lock);
 }

+void ice_deinitialize_vf_entry(struct ice_vf *vf)
+{
+	struct ice_pf *pf = vf->pf;
+
+	if (!ice_is_feature_supported(pf, ICE_F_MBX_LIMIT))
+		list_del(&vf->mbx_info.list_entry);
+}
+
 /**
  * ice_dis_vf_qs - Disable the VF queues
  * @vf: pointer to the VF structure
diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h b/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
index 0c7e77c0a09fa..5392b04049862 100644
--- a/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
+++ b/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
@@ -24,6 +24,7 @@
 #endif

 void ice_initialize_vf_entry(struct ice_vf *vf);
+void ice_deinitialize_vf_entry(struct ice_vf *vf);
 void ice_dis_vf_qs(struct ice_vf *vf);
 int ice_check_vf_init(struct ice_vf *vf);
 enum virtchnl_status_code ice_err_to_virt_err(int err);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wang Hai wanghai38@huawei.com
[ Upstream commit 8d52da23b6c68a0f6bad83959ebb61a2cf623c4e ]
Recently a bug was discovered where the server had entered TCP_ESTABLISHED state, but the upper layers were not notified.
The same 5-tuple packet may be processed by different CPUs, so two CPUs may receive different ack packets at the same time when the state is TCP_NEW_SYN_RECV.
In that case, req->ts_recent in tcp_check_req may be changed concurrently, which will probably cause the newsk's ts_recent to be incorrectly large, so tcp_validate_incoming will fail. At this point, newsk will not be able to enter TCP_ESTABLISHED.
cpu1                                     cpu2
tcp_check_req
                                         tcp_check_req
  req->ts_recent = rcv_tsval = t1
                                           req->ts_recent = rcv_tsval = t2

syn_recv_sock
  tcp_sk(child)->rx_opt.ts_recent = req->ts_recent = t2  // t1 < t2
tcp_child_process
  tcp_rcv_state_process
    tcp_validate_incoming
      tcp_paws_check
        if ((s32)(rx_opt->ts_recent - rx_opt->rcv_tsval) <= paws_win)
            // t2 - t1 > paws_win, failed
                                         tcp_v4_do_rcv
                                           tcp_rcv_state_process
                                           // TCP_ESTABLISHED
cpu2's skb, or a newly received skb, will call tcp_v4_do_rcv to get the newsk into the TCP_ESTABLISHED state, but at this point it is no longer possible to notify the upper layer application. A notification mechanism could be added here, but that fix is more complex, so the current, simpler fix is used.
In tcp_check_req, req->ts_recent is used to assign a value to tcp_sk(child)->rx_opt.ts_recent, so removing the change in req->ts_recent and changing tcp_sk(child)->rx_opt.ts_recent directly after owning the req fixes this bug.
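To see why the stale ordering trips PAWS, plug the t1/t2 values into the check (a worked sketch, simplified from the diagram above):

	/* tcp_paws_check(), simplified: the echoed timestamp must not
	 * appear to move backwards.
	 */
	if ((s32)(rx_opt->ts_recent - rx_opt->rcv_tsval) <= paws_win)
		return true;	/* PAWS OK */

	/* With the race: ts_recent == t2 (written by cpu2), while cpu1's
	 * skb carries rcv_tsval == t1 and t1 < t2, so (s32)(t2 - t1) can
	 * exceed paws_win and the segment is rejected -- the child socket
	 * reaches TCP_ESTABLISHED only later, when the upper layers can
	 * no longer be notified.
	 */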
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Wang Hai wanghai38@huawei.com
Reviewed-by: Jason Xing kerneljasonxing@gmail.com
Reviewed-by: Eric Dumazet edumazet@google.com
Reviewed-by: Kuniyuki Iwashima kuniyu@amazon.com
Signed-off-by: David S. Miller davem@davemloft.net
Signed-off-by: Sasha Levin sashal@kernel.org
---
 net/ipv4/tcp_minisocks.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index cc2b608b1a8e7..ddb90b9057e75 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -754,12 +754,6 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,

 	/* In sequence, PAWS is OK. */

-	/* TODO: We probably should defer ts_recent change once
-	 * we take ownership of @req.
-	 */
-	if (tmp_opt.saw_tstamp && !after(TCP_SKB_CB(skb)->seq, tcp_rsk(req)->rcv_nxt))
-		WRITE_ONCE(req->ts_recent, tmp_opt.rcv_tsval);
-
 	if (TCP_SKB_CB(skb)->seq == tcp_rsk(req)->rcv_isn) {
 		/* Truncate SYN, it is out of window starting
 		   at tcp_rsk(req)->rcv_isn + 1. */
@@ -808,6 +802,10 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
 	if (!child)
 		goto listen_overflow;

+	if (own_req && tmp_opt.saw_tstamp &&
+	    !after(TCP_SKB_CB(skb)->seq, tcp_rsk(req)->rcv_nxt))
+		tcp_sk(child)->rx_opt.ts_recent = tmp_opt.rcv_tsval;
+
 	if (own_req && rsk_drop_req(req)) {
 		reqsk_queue_removed(&inet_csk(req->rsk_listener)->icsk_accept_queue, req);
 		inet_csk_reqsk_queue_drop_and_put(req->rsk_listener, req);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mohammad Heib mheib@redhat.com
[ Upstream commit 49806fe6e61b045b5be8610e08b5a3083c109aa0 ]
In certain cases, napi_get_frags() returns an skb that points to an old received fragment. This skb may have its skb->ip_summed, csum, and other fields set from previous fragment handling.
Some network drivers set skb->ip_summed to either CHECKSUM_COMPLETE or CHECKSUM_UNNECESSARY when getting an skb from napi_get_frags(), while others only set skb->ip_summed when RX checksum offload is enabled on the device and write nothing to it when hardware checksum offload is disabled, assuming that skb->ip_summed was initialized to zero by napi_reuse_skb. The ionic driver, for example, ignores/leaves unset any value for the ip_summed field if HW checksum offload is disabled. So if the user disables checksum offload while traffic is flowing, that can lead to the following errors in the kernel logs:
  <IRQ>
  dump_stack_lvl+0x34/0x48
  __skb_gro_checksum_complete+0x7e/0x90
  tcp6_gro_receive+0xc6/0x190
  ipv6_gro_receive+0x1ec/0x430
  dev_gro_receive+0x188/0x360
  ? ionic_rx_clean+0x25a/0x460 [ionic]
  napi_gro_frags+0x13c/0x300
  ? __pfx_ionic_rx_service+0x10/0x10 [ionic]
  ionic_rx_service+0x67/0x80 [ionic]
  ionic_cq_service+0x58/0x90 [ionic]
  ionic_txrx_napi+0x64/0x1b0 [ionic]
  __napi_poll+0x27/0x170
  net_rx_action+0x29c/0x370
  handle_softirqs+0xce/0x270
  __irq_exit_rcu+0xa3/0xc0
  common_interrupt+0x80/0xa0
  </IRQ>
This inconsistency sometimes leads to checksum validation issues in the upper layers of the network stack.
To resolve this, this patch clears the skb->ip_summed value for each skb reused by napi_reuse_skb(), ensuring that the caller is responsible for setting the correct checksum status. This eliminates potential checksum validation issues caused by improper handling of skb->ip_summed.
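A sketch of the driver-side pattern that exposes the bug (illustrative, not ionic's literal code; rx_csum_offload_enabled is a placeholder):

	skb = napi_get_frags(napi);	/* may return a reused skb */
	...
	if (rx_csum_offload_enabled)
		skb->ip_summed = CHECKSUM_UNNECESSARY;	/* driver sets it */
	/* else: nothing is written, so before this fix the reused skb kept
	 * whatever ip_summed/csum the previous packet left behind and GRO
	 * trusted a checksum that was never computed for this packet.
	 * napi_reuse_skb() now resets ip_summed to CHECKSUM_NONE.
	 */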
Fixes: 76620aafd66f ("gro: New frags interface to avoid copying shinfo") Signed-off-by: Mohammad Heib mheib@redhat.com Reviewed-by: Shannon Nelson shannon.nelson@amd.com Reviewed-by: Eric Dumazet edumazet@google.com Link: https://patch.msgid.link/20250225112852.2507709-1-mheib@redhat.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/gro.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/net/core/gro.c b/net/core/gro.c index 85d3f686ba539..397cf59842503 100644 --- a/net/core/gro.c +++ b/net/core/gro.c @@ -627,6 +627,7 @@ static void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb) skb->pkt_type = PACKET_HOST;
skb->encapsulation = 0; + skb->ip_summed = CHECKSUM_NONE; skb_shinfo(skb)->gso_type = 0; skb_shinfo(skb)->gso_size = 0; if (unlikely(skb->slow_gro)) {
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Harshal Chaudhari hchaudhari@marvell.com
[ Upstream commit 2d253726ff7106b39a44483b6864398bba8a2f74 ]
Non-IP flow with vlan tag was not working as expected while running the below command for vlan-priority. Fix that.
ethtool -N eth1 flow-type ether vlan 0x8000 vlan-mask 0x1fff action 0 loc 0
Fixes: 1274daede3ef ("net: mvpp2: cls: Add steering based on vlan Id and priority.") Signed-off-by: Harshal Chaudhari hchaudhari@marvell.com Reviewed-by: Maxime Chevallier maxime.chevallier@bootlin.com Link: https://patch.msgid.link/20250225042058.2643838-1-hchaudhari@marvell.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c index 40aeaa7bd739f..d2757cc116139 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c @@ -324,7 +324,7 @@ static const struct mvpp2_cls_flow cls_flows[MVPP2_N_PRS_FLOWS] = { MVPP2_PRS_RI_VLAN_MASK), /* Non IP flow, with vlan tag */ MVPP2_DEF_FLOW(MVPP22_FLOW_ETHERNET, MVPP2_FL_NON_IP_TAG, - MVPP22_CLS_HEK_OPT_VLAN, + MVPP22_CLS_HEK_TAGGED, 0, 0), };
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Shay Drory shayd@nvidia.com
[ Upstream commit 2f5a6014eb168a97b24153adccfa663d3b282767 ]
irq_pool_alloc() debug print can print a null string. Fix it by providing a default string to print.
Fixes: 71e084e26414 ("net/mlx5: Allocating a pool of MSI-X vectors for SFs") Signed-off-by: Shay Drory shayd@nvidia.com Reported-by: kernel test robot lkp@intel.com Closes: https://lore.kernel.org/oe-kbuild-all/202501141055.SwfIphN0-lkp@intel.com/ Reviewed-by: Moshe Shemesh moshe@nvidia.com Signed-off-by: Tariq Toukan tariqt@nvidia.com Reviewed-by: Kalesh AP kalesh-anakkur.purayil@broadcom.com Link: https://patch.msgid.link/20250225072608.526866-4-tariqt@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c index 6bac8ad70ba60..a8d6fd18c0f55 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c @@ -617,7 +617,7 @@ irq_pool_alloc(struct mlx5_core_dev *dev, int start, int size, char *name, pool->min_threshold = min_threshold * MLX5_EQ_REFS_PER_IRQ; pool->max_threshold = max_threshold * MLX5_EQ_REFS_PER_IRQ; mlx5_core_dbg(dev, "pool->name = %s, pool->size = %d, pool->start = %d", - name, size, start); + name ? name : "mlx5_pcif_pool", size, start); return pool; }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Justin Iurman justin.iurman@uliege.be
[ Upstream commit 0600cf40e9b36fe17f9c9f04d4f9cef249eaa5e7 ]
Add a static inline dst_dev_overhead() function to include/net/dst.h. This helper is used by ioam6_iptunnel, rpl_iptunnel and seg6_iptunnel to get the device's overhead based on a cache entry (dst_entry). If the cache is empty, the default and generic value skb->mac_len is returned. Otherwise, LL_RESERVED_SPACE() over the dst's dev is returned.
Signed-off-by: Justin Iurman justin.iurman@uliege.be Cc: Alexander Lobakin aleksander.lobakin@intel.com Cc: Vadim Fedorenko vadim.fedorenko@linux.dev Signed-off-by: Paolo Abeni pabeni@redhat.com Stable-dep-of: c64a0727f9b1 ("net: ipv6: fix dst ref loop on input in seg6 lwt") Signed-off-by: Sasha Levin sashal@kernel.org --- include/net/dst.h | 9 +++++++++ 1 file changed, 9 insertions(+)
diff --git a/include/net/dst.h b/include/net/dst.h index 78884429deed8..16b7b99b5f309 100644 --- a/include/net/dst.h +++ b/include/net/dst.h @@ -448,6 +448,15 @@ static inline void dst_set_expires(struct dst_entry *dst, int timeout) dst->expires = expires; }
+static inline unsigned int dst_dev_overhead(struct dst_entry *dst, + struct sk_buff *skb) +{ + if (likely(dst)) + return LL_RESERVED_SPACE(dst->dev); + + return skb->mac_len; +} + INDIRECT_CALLABLE_DECLARE(int ip6_output(struct net *, struct sock *, struct sk_buff *)); INDIRECT_CALLABLE_DECLARE(int ip_output(struct net *, struct sock *,
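A condensed usage sketch of this helper (the function name and hdrlen parameter are illustrative, not taken verbatim from any of the callers named above), showing how tunnel code is expected to size its headroom:

	/* Illustrative caller: reserve headroom for an encap header,
	 * using LL_RESERVED_SPACE() of the cached route's device when
	 * one is available and skb->mac_len otherwise. */
	static int example_encap_cow(struct sk_buff *skb, int hdrlen,
				     struct dst_entry *cache_dst)
	{
		int err;

		err = skb_cow_head(skb, hdrlen + dst_dev_overhead(cache_dst, skb));
		if (unlikely(err))
			return err;

		/* ... push and fill the outer header ... */
		return 0;
	}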
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Justin Iurman justin.iurman@uliege.be
[ Upstream commit 40475b63761abb6f8fdef960d03228a08662c9c4 ]
This patch mitigates the two-reallocations issue with seg6_iptunnel by providing the dst_entry (in the cache) to the first call to skb_cow_head(). As a result, the very first iteration would still trigger two reallocations (i.e., empty cache), while subsequent iterations would only trigger a single reallocation.
Performance tests before/after applying this patch clearly show the improvement:
- before: https://ibb.co/3Cg4sNH
- after: https://ibb.co/8rQ350r
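Condensed to its core (a sketch of the reordering only, not the full diff below), the input/output paths now look up the cached dst first so that the SRH build can pass it down to skb_cow_head():

	/* Sketch of the reordered flow: cache lookup before SRH build,
	 * so the first skb_cow_head() can already account for the
	 * cached dst's device overhead. */
	local_bh_disable();
	dst = dst_cache_get(&slwt->cache);
	local_bh_enable();

	err = seg6_do_srh(skb, dst);	/* cows with dst_dev_overhead(dst, skb) */
	if (unlikely(err))
		goto drop;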
Signed-off-by: Justin Iurman justin.iurman@uliege.be Cc: David Lebrun dlebrun@google.com Signed-off-by: Paolo Abeni pabeni@redhat.com Stable-dep-of: c64a0727f9b1 ("net: ipv6: fix dst ref loop on input in seg6 lwt") Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv6/seg6_iptunnel.c | 85 ++++++++++++++++++++++++---------------- 1 file changed, 52 insertions(+), 33 deletions(-)
diff --git a/net/ipv6/seg6_iptunnel.c b/net/ipv6/seg6_iptunnel.c index 098632adc9b5a..4bf937bfc2633 100644 --- a/net/ipv6/seg6_iptunnel.c +++ b/net/ipv6/seg6_iptunnel.c @@ -124,8 +124,8 @@ static __be32 seg6_make_flowlabel(struct net *net, struct sk_buff *skb, return flowlabel; }
-/* encapsulate an IPv6 packet within an outer IPv6 header with a given SRH */ -int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto) +static int __seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, + int proto, struct dst_entry *cache_dst) { struct dst_entry *dst = skb_dst(skb); struct net *net = dev_net(dst->dev); @@ -137,7 +137,7 @@ int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto) hdrlen = (osrh->hdrlen + 1) << 3; tot_len = hdrlen + sizeof(*hdr);
- err = skb_cow_head(skb, tot_len + skb->mac_len); + err = skb_cow_head(skb, tot_len + dst_dev_overhead(cache_dst, skb)); if (unlikely(err)) return err;
@@ -197,11 +197,18 @@ int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto)
return 0; } + +/* encapsulate an IPv6 packet within an outer IPv6 header with a given SRH */ +int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto) +{ + return __seg6_do_srh_encap(skb, osrh, proto, NULL); +} EXPORT_SYMBOL_GPL(seg6_do_srh_encap);
/* encapsulate an IPv6 packet within an outer IPv6 header with reduced SRH */ static int seg6_do_srh_encap_red(struct sk_buff *skb, - struct ipv6_sr_hdr *osrh, int proto) + struct ipv6_sr_hdr *osrh, int proto, + struct dst_entry *cache_dst) { __u8 first_seg = osrh->first_segment; struct dst_entry *dst = skb_dst(skb); @@ -230,7 +237,7 @@ static int seg6_do_srh_encap_red(struct sk_buff *skb,
tot_len = red_hdrlen + sizeof(struct ipv6hdr);
- err = skb_cow_head(skb, tot_len + skb->mac_len); + err = skb_cow_head(skb, tot_len + dst_dev_overhead(cache_dst, skb)); if (unlikely(err)) return err;
@@ -317,8 +324,8 @@ static int seg6_do_srh_encap_red(struct sk_buff *skb, return 0; }
-/* insert an SRH within an IPv6 packet, just after the IPv6 header */ -int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh) +static int __seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, + struct dst_entry *cache_dst) { struct ipv6hdr *hdr, *oldhdr; struct ipv6_sr_hdr *isrh; @@ -326,7 +333,7 @@ int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh)
hdrlen = (osrh->hdrlen + 1) << 3;
- err = skb_cow_head(skb, hdrlen + skb->mac_len); + err = skb_cow_head(skb, hdrlen + dst_dev_overhead(cache_dst, skb)); if (unlikely(err)) return err;
@@ -369,9 +376,8 @@ int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh)
return 0; } -EXPORT_SYMBOL_GPL(seg6_do_srh_inline);
-static int seg6_do_srh(struct sk_buff *skb) +static int seg6_do_srh(struct sk_buff *skb, struct dst_entry *cache_dst) { struct dst_entry *dst = skb_dst(skb); struct seg6_iptunnel_encap *tinfo; @@ -384,7 +390,7 @@ static int seg6_do_srh(struct sk_buff *skb) if (skb->protocol != htons(ETH_P_IPV6)) return -EINVAL;
- err = seg6_do_srh_inline(skb, tinfo->srh); + err = __seg6_do_srh_inline(skb, tinfo->srh, cache_dst); if (err) return err; break; @@ -402,9 +408,11 @@ static int seg6_do_srh(struct sk_buff *skb) return -EINVAL;
if (tinfo->mode == SEG6_IPTUN_MODE_ENCAP) - err = seg6_do_srh_encap(skb, tinfo->srh, proto); + err = __seg6_do_srh_encap(skb, tinfo->srh, + proto, cache_dst); else - err = seg6_do_srh_encap_red(skb, tinfo->srh, proto); + err = seg6_do_srh_encap_red(skb, tinfo->srh, + proto, cache_dst);
if (err) return err; @@ -425,11 +433,13 @@ static int seg6_do_srh(struct sk_buff *skb) skb_push(skb, skb->mac_len);
if (tinfo->mode == SEG6_IPTUN_MODE_L2ENCAP) - err = seg6_do_srh_encap(skb, tinfo->srh, - IPPROTO_ETHERNET); + err = __seg6_do_srh_encap(skb, tinfo->srh, + IPPROTO_ETHERNET, + cache_dst); else err = seg6_do_srh_encap_red(skb, tinfo->srh, - IPPROTO_ETHERNET); + IPPROTO_ETHERNET, + cache_dst);
if (err) return err; @@ -444,6 +454,13 @@ static int seg6_do_srh(struct sk_buff *skb) return 0; }
+/* insert an SRH within an IPv6 packet, just after the IPv6 header */ +int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh) +{ + return __seg6_do_srh_inline(skb, osrh, NULL); +} +EXPORT_SYMBOL_GPL(seg6_do_srh_inline); + static int seg6_input_finish(struct net *net, struct sock *sk, struct sk_buff *skb) { @@ -458,31 +475,33 @@ static int seg6_input_core(struct net *net, struct sock *sk, struct seg6_lwt *slwt; int err;
- err = seg6_do_srh(skb); - if (unlikely(err)) - goto drop; - slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate);
local_bh_disable(); dst = dst_cache_get(&slwt->cache); + local_bh_enable(); + + err = seg6_do_srh(skb, dst); + if (unlikely(err)) + goto drop;
if (!dst) { ip6_route_input(skb); dst = skb_dst(skb); if (!dst->error) { + local_bh_disable(); dst_cache_set_ip6(&slwt->cache, dst, &ipv6_hdr(skb)->saddr); + local_bh_enable(); } + + err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); + if (unlikely(err)) + goto drop; } else { skb_dst_drop(skb); skb_dst_set(skb, dst); } - local_bh_enable(); - - err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); - if (unlikely(err)) - goto drop;
if (static_branch_unlikely(&nf_hooks_lwtunnel_enabled)) return NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_OUT, @@ -528,16 +547,16 @@ static int seg6_output_core(struct net *net, struct sock *sk, struct seg6_lwt *slwt; int err;
- err = seg6_do_srh(skb); - if (unlikely(err)) - goto drop; - slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate);
local_bh_disable(); dst = dst_cache_get(&slwt->cache); local_bh_enable();
+ err = seg6_do_srh(skb, dst); + if (unlikely(err)) + goto drop; + if (unlikely(!dst)) { struct ipv6hdr *hdr = ipv6_hdr(skb); struct flowi6 fl6; @@ -559,15 +578,15 @@ static int seg6_output_core(struct net *net, struct sock *sk, local_bh_disable(); dst_cache_set_ip6(&slwt->cache, dst, &fl6.saddr); local_bh_enable(); + + err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); + if (unlikely(err)) + goto drop; }
skb_dst_drop(skb); skb_dst_set(skb, dst);
- err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); - if (unlikely(err)) - goto drop; - if (static_branch_unlikely(&nf_hooks_lwtunnel_enabled)) return NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_OUT, net, sk, skb, NULL, skb_dst(skb)->dev, dst_output);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Justin Iurman justin.iurman@uliege.be
[ Upstream commit c64a0727f9b1cbc63a5538c8c0014e9a175ad864 ]
Prevent a dst ref loop on input in seg6_iptunnel.
Fixes: af4a2209b134 ("ipv6: sr: use dst_cache in seg6_input") Cc: David Lebrun dlebrun@google.com Cc: Ido Schimmel idosch@nvidia.com Reviewed-by: Ido Schimmel idosch@nvidia.com Signed-off-by: Justin Iurman justin.iurman@uliege.be Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv6/seg6_iptunnel.c | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/net/ipv6/seg6_iptunnel.c b/net/ipv6/seg6_iptunnel.c index 4bf937bfc2633..c44e4c0824e0d 100644 --- a/net/ipv6/seg6_iptunnel.c +++ b/net/ipv6/seg6_iptunnel.c @@ -472,10 +472,18 @@ static int seg6_input_core(struct net *net, struct sock *sk, { struct dst_entry *orig_dst = skb_dst(skb); struct dst_entry *dst = NULL; + struct lwtunnel_state *lwtst; struct seg6_lwt *slwt; int err;
- slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate); + /* We cannot dereference "orig_dst" once ip6_route_input() or + * skb_dst_drop() is called. However, in order to detect a dst loop, we + * need the address of its lwtstate. So, save the address of lwtstate + * now and use it later as a comparison. + */ + lwtst = orig_dst->lwtstate; + + slwt = seg6_lwt_lwtunnel(lwtst);
local_bh_disable(); dst = dst_cache_get(&slwt->cache); @@ -488,7 +496,9 @@ static int seg6_input_core(struct net *net, struct sock *sk, if (!dst) { ip6_route_input(skb); dst = skb_dst(skb); - if (!dst->error) { + + /* cache only if we don't create a dst reference loop */ + if (!dst->error && lwtst != dst->lwtstate) { local_bh_disable(); dst_cache_set_ip6(&slwt->cache, dst, &ipv6_hdr(skb)->saddr);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Justin Iurman justin.iurman@uliege.be
[ Upstream commit 985ec6f5e6235242191370628acb73d7a9f0c0ea ]
This patch mitigates the two-reallocations issue with rpl_iptunnel by providing the dst_entry (in the cache) to the first call to skb_cow_head(). As a result, the very first iteration would still trigger two reallocations (i.e., empty cache), while subsequent iterations would only trigger a single reallocation.
Performance tests before/after applying this patch clearly show there is no impact (they even show improvement):
- before: https://ibb.co/nQJhqwc
- after: https://ibb.co/4ZvW6wV
Signed-off-by: Justin Iurman justin.iurman@uliege.be Cc: Alexander Aring aahringo@redhat.com Signed-off-by: Paolo Abeni pabeni@redhat.com Stable-dep-of: 13e55fbaec17 ("net: ipv6: fix dst ref loop on input in rpl lwt") Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv6/rpl_iptunnel.c | 46 ++++++++++++++++++++++------------------- 1 file changed, 25 insertions(+), 21 deletions(-)
diff --git a/net/ipv6/rpl_iptunnel.c b/net/ipv6/rpl_iptunnel.c index db3c19a42e1ca..7ba22d2f2bfef 100644 --- a/net/ipv6/rpl_iptunnel.c +++ b/net/ipv6/rpl_iptunnel.c @@ -125,7 +125,8 @@ static void rpl_destroy_state(struct lwtunnel_state *lwt) }
static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt, - const struct ipv6_rpl_sr_hdr *srh) + const struct ipv6_rpl_sr_hdr *srh, + struct dst_entry *cache_dst) { struct ipv6_rpl_sr_hdr *isrh, *csrh; const struct ipv6hdr *oldhdr; @@ -153,7 +154,7 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
hdrlen = ((csrh->hdrlen + 1) << 3);
- err = skb_cow_head(skb, hdrlen + skb->mac_len); + err = skb_cow_head(skb, hdrlen + dst_dev_overhead(cache_dst, skb)); if (unlikely(err)) { kfree(buf); return err; @@ -186,7 +187,8 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt, return 0; }
-static int rpl_do_srh(struct sk_buff *skb, const struct rpl_lwt *rlwt) +static int rpl_do_srh(struct sk_buff *skb, const struct rpl_lwt *rlwt, + struct dst_entry *cache_dst) { struct dst_entry *dst = skb_dst(skb); struct rpl_iptunnel_encap *tinfo; @@ -196,7 +198,7 @@ static int rpl_do_srh(struct sk_buff *skb, const struct rpl_lwt *rlwt)
tinfo = rpl_encap_lwtunnel(dst->lwtstate);
- return rpl_do_srh_inline(skb, rlwt, tinfo->srh); + return rpl_do_srh_inline(skb, rlwt, tinfo->srh, cache_dst); }
static int rpl_output(struct net *net, struct sock *sk, struct sk_buff *skb) @@ -208,14 +210,14 @@ static int rpl_output(struct net *net, struct sock *sk, struct sk_buff *skb)
rlwt = rpl_lwt_lwtunnel(orig_dst->lwtstate);
- err = rpl_do_srh(skb, rlwt); - if (unlikely(err)) - goto drop; - local_bh_disable(); dst = dst_cache_get(&rlwt->cache); local_bh_enable();
+ err = rpl_do_srh(skb, rlwt, dst); + if (unlikely(err)) + goto drop; + if (unlikely(!dst)) { struct ipv6hdr *hdr = ipv6_hdr(skb); struct flowi6 fl6; @@ -237,15 +239,15 @@ static int rpl_output(struct net *net, struct sock *sk, struct sk_buff *skb) local_bh_disable(); dst_cache_set_ip6(&rlwt->cache, dst, &fl6.saddr); local_bh_enable(); + + err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); + if (unlikely(err)) + goto drop; }
skb_dst_drop(skb); skb_dst_set(skb, dst);
- err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); - if (unlikely(err)) - goto drop; - return dst_output(net, sk, skb);
drop: @@ -262,29 +264,31 @@ static int rpl_input(struct sk_buff *skb)
rlwt = rpl_lwt_lwtunnel(orig_dst->lwtstate);
- err = rpl_do_srh(skb, rlwt); - if (unlikely(err)) - goto drop; - local_bh_disable(); dst = dst_cache_get(&rlwt->cache); + local_bh_enable(); + + err = rpl_do_srh(skb, rlwt, dst); + if (unlikely(err)) + goto drop;
if (!dst) { ip6_route_input(skb); dst = skb_dst(skb); if (!dst->error) { + local_bh_disable(); dst_cache_set_ip6(&rlwt->cache, dst, &ipv6_hdr(skb)->saddr); + local_bh_enable(); } + + err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); + if (unlikely(err)) + goto drop; } else { skb_dst_drop(skb); skb_dst_set(skb, dst); } - local_bh_enable(); - - err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); - if (unlikely(err)) - goto drop;
return dst_input(skb);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Justin Iurman justin.iurman@uliege.be
[ Upstream commit 13e55fbaec176119cff68a7e1693b251c8883c5f ]
Prevent a dst ref loop on input in rpl_iptunnel.
Fixes: a7a29f9c361f ("net: ipv6: add rpl sr tunnel") Cc: Alexander Aring alex.aring@gmail.com Cc: Ido Schimmel idosch@nvidia.com Reviewed-by: Ido Schimmel idosch@nvidia.com Signed-off-by: Justin Iurman justin.iurman@uliege.be Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv6/rpl_iptunnel.c | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/net/ipv6/rpl_iptunnel.c b/net/ipv6/rpl_iptunnel.c index 7ba22d2f2bfef..28fc7fae57972 100644 --- a/net/ipv6/rpl_iptunnel.c +++ b/net/ipv6/rpl_iptunnel.c @@ -259,10 +259,18 @@ static int rpl_input(struct sk_buff *skb) { struct dst_entry *orig_dst = skb_dst(skb); struct dst_entry *dst = NULL; + struct lwtunnel_state *lwtst; struct rpl_lwt *rlwt; int err;
- rlwt = rpl_lwt_lwtunnel(orig_dst->lwtstate); + /* We cannot dereference "orig_dst" once ip6_route_input() or + * skb_dst_drop() is called. However, in order to detect a dst loop, we + * need the address of its lwtstate. So, save the address of lwtstate + * now and use it later as a comparison. + */ + lwtst = orig_dst->lwtstate; + + rlwt = rpl_lwt_lwtunnel(lwtst);
local_bh_disable(); dst = dst_cache_get(&rlwt->cache); @@ -275,7 +283,9 @@ static int rpl_input(struct sk_buff *skb) if (!dst) { ip6_route_input(skb); dst = skb_dst(skb); - if (!dst->error) { + + /* cache only if we don't create a dst reference loop */ + if (!dst->error && lwtst != dst->lwtstate) { local_bh_disable(); dst_cache_set_ip6(&rlwt->cache, dst, &ipv6_hdr(skb)->saddr);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Diogo Ivo diogo.ivo@siemens.com
[ Upstream commit 5758e03cf604aa282b9afa61aec3188c4a9b3fe7 ]
As all sources of concurrency in hardware register access occur in non-interrupt context, eliminate the spinlock-based synchronization and rely on the mutex-based synchronization that is already present.
Signed-off-by: Diogo Ivo diogo.ivo@siemens.com Reviewed-by: Simon Horman horms@kernel.org Signed-off-by: David S. Miller davem@davemloft.net Stable-dep-of: 54e1b4becf5e ("net: ti: icss-iep: Reject perout generation request") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/ti/icssg/icss_iep.c | 14 -------------- 1 file changed, 14 deletions(-)
diff --git a/drivers/net/ethernet/ti/icssg/icss_iep.c b/drivers/net/ethernet/ti/icssg/icss_iep.c index f06cdec14ed7a..b491d89de8fb2 100644 --- a/drivers/net/ethernet/ti/icssg/icss_iep.c +++ b/drivers/net/ethernet/ti/icssg/icss_iep.c @@ -110,7 +110,6 @@ struct icss_iep { struct ptp_clock_info ptp_info; struct ptp_clock *ptp_clock; struct mutex ptp_clk_mutex; /* PHC access serializer */ - spinlock_t irq_lock; /* CMP IRQ vs icss_iep_ptp_enable access */ u32 def_inc; s16 slow_cmp_inc; u32 slow_cmp_count; @@ -192,14 +191,11 @@ static void icss_iep_update_to_next_boundary(struct icss_iep *iep, u64 start_ns) */ static void icss_iep_settime(struct icss_iep *iep, u64 ns) { - unsigned long flags; - if (iep->ops && iep->ops->settime) { iep->ops->settime(iep->clockops_data, ns); return; }
- spin_lock_irqsave(&iep->irq_lock, flags); if (iep->pps_enabled || iep->perout_enabled) writel(0, iep->base + iep->plat_data->reg_offs[ICSS_IEP_SYNC_CTRL_REG]);
@@ -210,7 +206,6 @@ static void icss_iep_settime(struct icss_iep *iep, u64 ns) writel(IEP_SYNC_CTRL_SYNC_N_EN(0) | IEP_SYNC_CTRL_SYNC_EN, iep->base + iep->plat_data->reg_offs[ICSS_IEP_SYNC_CTRL_REG]); } - spin_unlock_irqrestore(&iep->irq_lock, flags); }
/** @@ -549,7 +544,6 @@ static int icss_iep_perout_enable_hw(struct icss_iep *iep, static int icss_iep_perout_enable(struct icss_iep *iep, struct ptp_perout_request *req, int on) { - unsigned long flags; int ret = 0;
mutex_lock(&iep->ptp_clk_mutex); @@ -562,11 +556,9 @@ static int icss_iep_perout_enable(struct icss_iep *iep, if (iep->perout_enabled == !!on) goto exit;
- spin_lock_irqsave(&iep->irq_lock, flags); ret = icss_iep_perout_enable_hw(iep, req, on); if (!ret) iep->perout_enabled = !!on; - spin_unlock_irqrestore(&iep->irq_lock, flags);
exit: mutex_unlock(&iep->ptp_clk_mutex); @@ -578,7 +570,6 @@ static int icss_iep_pps_enable(struct icss_iep *iep, int on) { struct ptp_clock_request rq; struct timespec64 ts; - unsigned long flags; int ret = 0; u64 ns;
@@ -592,8 +583,6 @@ static int icss_iep_pps_enable(struct icss_iep *iep, int on) if (iep->pps_enabled == !!on) goto exit;
- spin_lock_irqsave(&iep->irq_lock, flags); - rq.perout.index = 0; if (on) { ns = icss_iep_gettime(iep, NULL); @@ -610,8 +599,6 @@ static int icss_iep_pps_enable(struct icss_iep *iep, int on) if (!ret) iep->pps_enabled = !!on;
- spin_unlock_irqrestore(&iep->irq_lock, flags); - exit: mutex_unlock(&iep->ptp_clk_mutex);
@@ -861,7 +848,6 @@ static int icss_iep_probe(struct platform_device *pdev)
iep->ptp_info = icss_iep_ptp_info; mutex_init(&iep->ptp_clk_mutex); - spin_lock_init(&iep->irq_lock); dev_set_drvdata(dev, iep); icss_iep_disable(iep);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Meghana Malladi m-malladi@ti.com
[ Upstream commit 54e1b4becf5e220be03db4e1be773c1310e8cbbd ]
The IEP driver supports both perout and PPS signal generation, but the perout feature is faulty, with half-cooked support due to some missing configuration. Remove perout support from the driver and reject perout requests with a "not supported" error code.
Fixes: c1e0230eeaab2 ("net: ti: icss-iep: Add IEP driver") Signed-off-by: Meghana Malladi m-malladi@ti.com Reviewed-by: Vadim Fedorenko vadim.fedorenko@linux.dev Link: https://patch.msgid.link/20250227092441.1848419-1-m-malladi@ti.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/ti/icssg/icss_iep.c | 21 +-------------------- 1 file changed, 1 insertion(+), 20 deletions(-)
diff --git a/drivers/net/ethernet/ti/icssg/icss_iep.c b/drivers/net/ethernet/ti/icssg/icss_iep.c index b491d89de8fb2..3f9a030471fe2 100644 --- a/drivers/net/ethernet/ti/icssg/icss_iep.c +++ b/drivers/net/ethernet/ti/icssg/icss_iep.c @@ -544,26 +544,7 @@ static int icss_iep_perout_enable_hw(struct icss_iep *iep, static int icss_iep_perout_enable(struct icss_iep *iep, struct ptp_perout_request *req, int on) { - int ret = 0; - - mutex_lock(&iep->ptp_clk_mutex); - - if (iep->pps_enabled) { - ret = -EBUSY; - goto exit; - } - - if (iep->perout_enabled == !!on) - goto exit; - - ret = icss_iep_perout_enable_hw(iep, req, on); - if (!ret) - iep->perout_enabled = !!on; - -exit: - mutex_unlock(&iep->ptp_clk_mutex); - - return ret; + return -EOPNOTSUPP; }
static int icss_iep_pps_enable(struct icss_iep *iep, int on)
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Luo Gengkun luogengkun@huaweicloud.com
[ Upstream commit 2016066c66192a99d9e0ebf433789c490a6785a2 ]
Syzkaller triggers a warning due to prev_epc->pmu != next_epc->pmu in perf_event_swap_task_ctx_data(). The vmcore shows that two lists have the same perf_event_pmu_contexts, but not in the same order.
The problem is that the order of the pmu_ctx_list for the parent is determined by the time when an event/PMU is added, while the order for a child is determined by the event order in the pinned_groups and flexible_groups. So the order of the pmu_ctx_list in the parent and the child may be different.
To fix this problem, insert the perf_event_pmu_context to its proper place after iteration of the pmu_ctx_list.
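The core of the fix, condensed (variable names follow the diff below; the lookup-and-reuse logic is elided): remember the first entry whose PMU type is larger and insert in front of it, so the list stays sorted by pmu->type in both parent and child:

	/* Condensed from find_get_pmu_context(): keep pmu_ctx_list
	 * sorted by PMU type so parent and child contexts agree on
	 * the ordering regardless of creation order. */
	struct perf_event_pmu_context *pos = NULL, *epc;

	list_for_each_entry(epc, &ctx->pmu_ctx_list, pmu_ctx_entry) {
		/* ... reuse an existing epc for this pmu if found ... */
		if (!pos && epc->pmu->type > pmu->type)
			pos = epc;	/* first entry that must follow us */
	}

	if (!pos)
		list_add_tail(&new->pmu_ctx_entry, &ctx->pmu_ctx_list);
	else
		list_add(&new->pmu_ctx_entry, pos->pmu_ctx_entry.prev);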
The following testcase can trigger the above warning:

 # perf record -e cycles --call-graph lbr -- taskset -c 3 ./a.out &
 # perf stat -e cpu-clock,cs -p xxx    // xxx is the pid of a.out
test.c
 #include <stdio.h>
 #include <unistd.h>

 int main(void)
 {
 	int count = 0;
 	pid_t pid;

 	printf("%d running\n", getpid());
 	sleep(30);
 	printf("running\n");

 	pid = fork();
 	if (pid == -1) {
 		printf("fork error\n");
 		return 1;
 	}
 	if (pid == 0) {
 		while (1)
 			count++;
 	} else {
 		while (1)
 			count++;
 	}
 }
The testcase first opens an LBR event, so it will allocate task_ctx_data, and then opens tracepoint and software events, so the parent context will have 3 different perf_event_pmu_contexts. On inheritance, the child ctx will insert the perf_event_pmu_contexts in another order and the warning will trigger.
[ mingo: Tidied up the changelog. ]
Fixes: bd2756811766 ("perf: Rewrite core context handling") Signed-off-by: Luo Gengkun luogengkun@huaweicloud.com Signed-off-by: Ingo Molnar mingo@kernel.org Reviewed-by: Kan Liang kan.liang@linux.intel.com Link: https://lore.kernel.org/r/20250122073356.1824736-1-luogengkun@huaweicloud.co... Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/events/core.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c index 5d6458ea675e9..18f0a3aa6d7c6 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -4842,7 +4842,7 @@ static struct perf_event_pmu_context * find_get_pmu_context(struct pmu *pmu, struct perf_event_context *ctx, struct perf_event *event) { - struct perf_event_pmu_context *new = NULL, *epc; + struct perf_event_pmu_context *new = NULL, *pos = NULL, *epc; void *task_ctx_data = NULL;
if (!ctx->task) { @@ -4899,12 +4899,19 @@ find_get_pmu_context(struct pmu *pmu, struct perf_event_context *ctx, atomic_inc(&epc->refcount); goto found_epc; } + /* Make sure the pmu_ctx_list is sorted by PMU type: */ + if (!pos && epc->pmu->type > pmu->type) + pos = epc; }
epc = new; new = NULL;
- list_add(&epc->pmu_ctx_entry, &ctx->pmu_ctx_list); + if (!pos) + list_add_tail(&epc->pmu_ctx_entry, &ctx->pmu_ctx_list); + else + list_add(&epc->pmu_ctx_entry, pos->pmu_ctx_entry.prev); + epc->ctx = ctx;
found_epc:
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tong Tiangen tongtiangen@huawei.com
[ Upstream commit bddf10d26e6e5114e7415a0e442ec6f51a559468 ]
We triggered the following crash in syzkaller tests:
 BUG: Bad page state in process syz.7.38 pfn:1eff3
 page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1eff3
 flags: 0x3fffff00004004(referenced|reserved|node=0|zone=1|lastcpupid=0x1fffff)
 raw: 003fffff00004004 ffffe6c6c07bfcc8 ffffe6c6c07bfcc8 0000000000000000
 raw: 0000000000000000 0000000000000000 00000000fffffffe 0000000000000000
 page dumped because: PAGE_FLAGS_CHECK_AT_FREE flag(s) set
 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
 Call Trace:
  <TASK>
  dump_stack_lvl+0x32/0x50
  bad_page+0x69/0xf0
  free_unref_page_prepare+0x401/0x500
  free_unref_page+0x6d/0x1b0
  uprobe_write_opcode+0x460/0x8e0
  install_breakpoint.part.0+0x51/0x80
  register_for_each_vma+0x1d9/0x2b0
  __uprobe_register+0x245/0x300
  bpf_uprobe_multi_link_attach+0x29b/0x4f0
  link_create+0x1e2/0x280
  __sys_bpf+0x75f/0xac0
  __x64_sys_bpf+0x1a/0x30
  do_syscall_64+0x56/0x100
  entry_SYSCALL_64_after_hwframe+0x78/0xe2
BUG: Bad rss-counter state mm:00000000452453e0 type:MM_FILEPAGES val:-1
The following syzkaller test case can be used to reproduce:
 r2 = creat(&(0x7f0000000000)='./file0\x00', 0x8)
 write$nbd(r2, &(0x7f0000000580)=ANY=[], 0x10)
 r4 = openat(0xffffffffffffff9c, &(0x7f0000000040)='./file0\x00', 0x42, 0x0)
 mmap$IORING_OFF_SQ_RING(&(0x7f0000ffd000/0x3000)=nil, 0x3000, 0x0, 0x12, r4, 0x0)
 r5 = userfaultfd(0x80801)
 ioctl$UFFDIO_API(r5, 0xc018aa3f, &(0x7f0000000040)={0xaa, 0x20})
 r6 = userfaultfd(0x80801)
 ioctl$UFFDIO_API(r6, 0xc018aa3f, &(0x7f0000000140))
 ioctl$UFFDIO_REGISTER(r6, 0xc020aa00, &(0x7f0000000100)={{&(0x7f0000ffc000/0x4000)=nil, 0x4000}, 0x2})
 ioctl$UFFDIO_ZEROPAGE(r5, 0xc020aa04, &(0x7f0000000000)={{&(0x7f0000ffd000/0x1000)=nil, 0x1000}})
 r7 = bpf$PROG_LOAD(0x5, &(0x7f0000000140)={0x2, 0x3, &(0x7f0000000200)=ANY=[@ANYBLOB="1800000000120000000000000000000095"], &(0x7f0000000000)='GPL\x00', 0x7, 0x0, 0x0, 0x0, 0x0, '\x00', 0x0, @fallback=0x30, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x10, 0x0, @void, @value}, 0x94)
 bpf$BPF_LINK_CREATE_XDP(0x1c, &(0x7f0000000040)={r7, 0x0, 0x30, 0x1e, @val=@uprobe_multi={&(0x7f0000000080)='./file0\x00', &(0x7f0000000100)=[0x2], 0x0, 0x0, 0x1}}, 0x40)
The cause is that the zero pfn is set in the PTE without increasing the RSS count in mfill_atomic_pte_zeropage(), and the refcount of the zero folio does not increase accordingly. Then, the operation on the same pfn is performed in uprobe_write_opcode()->__replace_page(), which unconditionally decreases the RSS count and old_folio's refcount.
Therefore, two bugs are introduced:
1. The RSS count is incorrect: when the process exits, check_mm() reports the "Bad rss-counter" error.
2. The reserved folio (zero folio) is freed once folio->refcount reaches zero, and free_pages_prepare()->free_page_is_bad() then reports the "Bad page state" error.
There is more, the following warning could also theoretically be triggered:
__replace_page() -> ... -> folio_remove_rmap_pte() -> VM_WARN_ON_FOLIO(is_zero_folio(folio), folio)
Considering that a uprobe hit on the zero folio is a very rare case, just reject a zero old folio immediately after get_user_page_vma_remote().
[ mingo: Cleaned up the changelog ]
Fixes: 7396fa818d62 ("uprobes/core: Make background page replacement logic account for rss_stat counters") Fixes: 2b1444983508 ("uprobes, mm, x86: Add the ability to install and remove uprobes breakpoints") Signed-off-by: Tong Tiangen tongtiangen@huawei.com Signed-off-by: Ingo Molnar mingo@kernel.org Reviewed-by: David Hildenbrand david@redhat.com Reviewed-by: Oleg Nesterov oleg@redhat.com Cc: Peter Zijlstra peterz@infradead.org Cc: Masami Hiramatsu mhiramat@kernel.org Link: https://lore.kernel.org/r/20250224031149.1598949-1-tongtiangen@huawei.com Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/events/uprobes.c | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c index 6dac0b5798213..7e2edd1b06939 100644 --- a/kernel/events/uprobes.c +++ b/kernel/events/uprobes.c @@ -481,6 +481,11 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm, if (ret <= 0) goto put_old;
+ if (is_zero_page(old_page)) { + ret = -EINVAL; + goto put_old; + } + if (WARN(!is_register && PageCompound(old_page), "uprobe unregister should never work on compound page\n")) { ret = -EINVAL;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pavel Begunkov asml.silence@gmail.com
[ Upstream commit 6ebf05189dfc6d0d597c99a6448a4d1064439a18 ]
Match the compat part of io_sendmsg_copy_hdr() with its counterpart and save msg_control.
Fixes: c55978024d123 ("io_uring/net: move receive multishot out of the generic msghdr path") Signed-off-by: Pavel Begunkov asml.silence@gmail.com Link: https://lore.kernel.org/r/2a8418821fe83d3b64350ad2b3c0303e9b732bbd.174049850... Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- io_uring/net.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/io_uring/net.c b/io_uring/net.c index 56091292950fd..1a0e98e19dc0e 100644 --- a/io_uring/net.c +++ b/io_uring/net.c @@ -303,7 +303,9 @@ static int io_sendmsg_copy_hdr(struct io_kiocb *req, if (unlikely(ret)) return ret;
- return __get_compat_msghdr(&iomsg->msg, &cmsg, NULL); + ret = __get_compat_msghdr(&iomsg->msg, &cmsg, NULL); + sr->msg_control = iomsg->msg.msg_control_user; + return ret; } #endif
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Russell Senior russell@personaltelco.net
[ Upstream commit bebe35bb738b573c32a5033499cd59f20293f2a3 ]
I still have some Soekris net4826 in a Community Wireless Network I volunteer with. These devices use an AMD SC1100 SoC. I am running OpenWrt on them, which uses a patched kernel, that naturally has evolved over time. I haven't updated the ones in the field in a number of years (circa 2017), but have one in a test bed, where I have intermittently tried out test builds.
A few years ago, I noticed some trouble, particularly when "warm booting", that is, doing a reboot without removing power: the device was hanging after the kernel message:
[ 0.081615] Working around Cyrix MediaGX virtual DMA bugs.
If I removed power and then restarted, it would boot fine, continuing through the message above, thusly:
 [ 0.081615] Working around Cyrix MediaGX virtual DMA bugs.
 [ 0.090076] Enable Memory-Write-back mode on Cyrix/NSC processor.
 [ 0.100000] Enable Memory access reorder on Cyrix/NSC processor.
 [ 0.100070] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
 [ 0.110058] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
 [ 0.120037] CPU: NSC Geode(TM) Integrated Processor by National Semi (family: 0x5, model: 0x9, stepping: 0x1)
 [...]
In order to continue using modern tools, like ssh, to interact with the software on these old devices, I need modern builds of the OpenWrt firmware on the devices. I confirmed that the warm boot hang was still an issue in modern OpenWrt builds (currently using a patched linux v6.6.65).
Last night, I decided it was time to get to the bottom of the warm boot hang, and began bisecting. From preserved builds, I narrowed down the bisection window from late February to late May 2019. During this period, the OpenWrt builds were using 4.14.x. I was able to build using period-correct Ubuntu 18.04.6. After a number of bisection iterations, I identified a kernel bump from 4.14.112 to 4.14.113 as the commit that introduced the warm boot hang.
https://github.com/openwrt/openwrt/commit/07aaa7e3d62ad32767d7067107db64b6ad...
Looking at the upstream changes in the stable kernel between 4.14.112 and 4.14.113 (tig v4.14.112..v4.14.113), I spotted a likely suspect:
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
So, I tried reverting just that kernel change on top of the breaking OpenWrt commit, and my warm boot hang went away.
Presumably, the warm boot hang is due to some register not getting cleared in the same way that a loss of power does. That is approximately as much as I understand about the problem.
After more poking/prodding and coaching from Jonas Gorski, it looks like this test patch fixes the problem on my board. Tested against v6.6.67 and v4.14.113.
Fixes: 18fb053f9b82 ("x86/cpu/cyrix: Use correct macros for Cyrix calls on Geode processors") Debugged-by: Jonas Gorski jonas.gorski@gmail.com Signed-off-by: Russell Senior russell@personaltelco.net Signed-off-by: Ingo Molnar mingo@kernel.org Link: https://lore.kernel.org/r/CAHP3WfOgs3Ms4Z+L9i0-iBOE21sdMk5erAiJurPjnrL9LSsgR... Cc: Matthew Whitehead tedheadster@gmail.com Cc: Thomas Gleixner tglx@linutronix.de Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kernel/cpu/cyrix.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/cyrix.c b/arch/x86/kernel/cpu/cyrix.c index 9651275aecd1b..dfec2c61e3547 100644 --- a/arch/x86/kernel/cpu/cyrix.c +++ b/arch/x86/kernel/cpu/cyrix.c @@ -153,8 +153,8 @@ static void geode_configure(void) u8 ccr3; local_irq_save(flags);
- /* Suspend on halt power saving and enable #SUSP pin */ - setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x88); + /* Suspend on halt power saving */ + setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x08);
ccr3 = getCx86(CX86_CCR3); setCx86(CX86_CCR3, (ccr3 & 0x0f) | 0x10); /* enable MAPEN */
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Chukun Pan amadeus@jmu.edu.cn
[ Upstream commit 3126ea9be66b53e607f87f067641ba724be24181 ]
The device tree of the RK3568 did not specify reset-names before, so add a fallback to the old behaviour to stay compatible with old DTs.
Fixes: fbcbffbac994 ("phy: rockchip: naneng-combphy: fix phy reset") Cc: Jianfeng Liu liujianfeng1994@gmail.com Signed-off-by: Chukun Pan amadeus@jmu.edu.cn Reviewed-by: Jonas Karlman jonas@kwiboo.se Link: https://lore.kernel.org/r/20250106100001.1344418-2-amadeus@jmu.edu.cn Signed-off-by: Vinod Koul vkoul@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/phy/rockchip/phy-rockchip-naneng-combphy.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c b/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c index 9c231094ba359..2354ce8b21594 100644 --- a/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c +++ b/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c @@ -309,7 +309,10 @@ static int rockchip_combphy_parse_dt(struct device *dev, struct rockchip_combphy
priv->ext_refclk = device_property_present(dev, "rockchip,ext-refclk");
- priv->phy_rst = devm_reset_control_get(dev, "phy"); + priv->phy_rst = devm_reset_control_get_exclusive(dev, "phy"); + /* fallback to old behaviour */ + if (PTR_ERR(priv->phy_rst) == -ENOENT) + priv->phy_rst = devm_reset_control_array_get_exclusive(dev); if (IS_ERR(priv->phy_rst)) return dev_err_probe(dev, PTR_ERR(priv->phy_rst), "failed to get phy reset\n");
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yong-Xuan Wang yongxuan.wang@sifive.com
[ Upstream commit 2121cadec45aaf61fa45b3aa3d99723ed4e6683a ]
Documentation/virt/kvm/locking.rst advises that kvm->lock should be acquired outside vcpu->mutex and kvm->srcu. However, when KVM/RISC-V handles SBI_EXT_HSM_HART_START, the lock ordering is vcpu->mutex, kvm->srcu, then kvm->lock.
Although the lockdep checking no longer complains about this after commit f0f44752f5f6 ("rcu: Annotate SRCU's update-side lockdep dependencies"), it's necessary to replace kvm->lock with a new dedicated lock to ensure that only one hart can execute the SBI_EXT_HSM_HART_START call for the target hart at a time.
Additionally, this patch renames "power_off" to "mp_state", with two possible values. The new vcpu->mp_state_lock also protects access to vcpu->mp_state.
Signed-off-by: Yong-Xuan Wang yongxuan.wang@sifive.com Reviewed-by: Anup Patel anup@brainfault.org Link: https://lore.kernel.org/r/20240417074528.16506-2-yongxuan.wang@sifive.com Signed-off-by: Anup Patel anup@brainfault.org Stable-dep-of: c7db342e3b47 ("riscv: KVM: Fix hart suspend status check") Signed-off-by: Sasha Levin sashal@kernel.org --- arch/riscv/include/asm/kvm_host.h | 8 ++++-- arch/riscv/kvm/vcpu.c | 48 ++++++++++++++++++++++--------- arch/riscv/kvm/vcpu_sbi.c | 7 +++-- arch/riscv/kvm/vcpu_sbi_hsm.c | 39 +++++++++++++++++-------- 4 files changed, 73 insertions(+), 29 deletions(-)
diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index 1ebf20dfbaa69..459e61ad7d2b6 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -236,8 +236,9 @@ struct kvm_vcpu_arch { /* Cache pages needed to program page tables with spinlock held */ struct kvm_mmu_memory_cache mmu_page_cache;
- /* VCPU power-off state */ - bool power_off; + /* VCPU power state */ + struct kvm_mp_state mp_state; + spinlock_t mp_state_lock;
/* Don't run the VCPU (blocked) */ bool pause; @@ -351,7 +352,10 @@ int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq); void kvm_riscv_vcpu_flush_interrupts(struct kvm_vcpu *vcpu); void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu); bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, u64 mask); +void __kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu); void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu); +void __kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu); void kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu); +bool kvm_riscv_vcpu_stopped(struct kvm_vcpu *vcpu);
#endif /* __RISCV_KVM_HOST_H__ */ diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 82229db1ce73f..9584d62c96ee7 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -100,6 +100,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) struct kvm_cpu_context *cntx; struct kvm_vcpu_csr *reset_csr = &vcpu->arch.guest_reset_csr;
+ spin_lock_init(&vcpu->arch.mp_state_lock); + /* Mark this VCPU never ran */ vcpu->arch.ran_atleast_once = false; vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO; @@ -193,7 +195,7 @@ void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu) { return (kvm_riscv_vcpu_has_interrupts(vcpu, -1UL) && - !vcpu->arch.power_off && !vcpu->arch.pause); + !kvm_riscv_vcpu_stopped(vcpu) && !vcpu->arch.pause); }
int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu) @@ -421,26 +423,42 @@ bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, u64 mask) return kvm_riscv_vcpu_aia_has_interrupts(vcpu, mask); }
-void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu) +void __kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu) { - vcpu->arch.power_off = true; + WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_STOPPED); kvm_make_request(KVM_REQ_SLEEP, vcpu); kvm_vcpu_kick(vcpu); }
-void kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu) +void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu) { - vcpu->arch.power_off = false; + spin_lock(&vcpu->arch.mp_state_lock); + __kvm_riscv_vcpu_power_off(vcpu); + spin_unlock(&vcpu->arch.mp_state_lock); +} + +void __kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu) +{ + WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE); kvm_vcpu_wake_up(vcpu); }
+void kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu) +{ + spin_lock(&vcpu->arch.mp_state_lock); + __kvm_riscv_vcpu_power_on(vcpu); + spin_unlock(&vcpu->arch.mp_state_lock); +} + +bool kvm_riscv_vcpu_stopped(struct kvm_vcpu *vcpu) +{ + return READ_ONCE(vcpu->arch.mp_state.mp_state) == KVM_MP_STATE_STOPPED; +} + int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu, struct kvm_mp_state *mp_state) { - if (vcpu->arch.power_off) - mp_state->mp_state = KVM_MP_STATE_STOPPED; - else - mp_state->mp_state = KVM_MP_STATE_RUNNABLE; + *mp_state = READ_ONCE(vcpu->arch.mp_state);
return 0; } @@ -450,17 +468,21 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu, { int ret = 0;
+ spin_lock(&vcpu->arch.mp_state_lock); + switch (mp_state->mp_state) { case KVM_MP_STATE_RUNNABLE: - vcpu->arch.power_off = false; + WRITE_ONCE(vcpu->arch.mp_state, *mp_state); break; case KVM_MP_STATE_STOPPED: - kvm_riscv_vcpu_power_off(vcpu); + __kvm_riscv_vcpu_power_off(vcpu); break; default: ret = -EINVAL; }
+ spin_unlock(&vcpu->arch.mp_state_lock); + return ret; }
@@ -561,11 +583,11 @@ static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu) if (kvm_check_request(KVM_REQ_SLEEP, vcpu)) { kvm_vcpu_srcu_read_unlock(vcpu); rcuwait_wait_event(wait, - (!vcpu->arch.power_off) && (!vcpu->arch.pause), + (!kvm_riscv_vcpu_stopped(vcpu)) && (!vcpu->arch.pause), TASK_INTERRUPTIBLE); kvm_vcpu_srcu_read_lock(vcpu);
- if (vcpu->arch.power_off || vcpu->arch.pause) { + if (kvm_riscv_vcpu_stopped(vcpu) || vcpu->arch.pause) { /* * Awaken to handle a signal, request to * sleep again later. diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c index 7a7fe40d0930b..be43278109f4e 100644 --- a/arch/riscv/kvm/vcpu_sbi.c +++ b/arch/riscv/kvm/vcpu_sbi.c @@ -102,8 +102,11 @@ void kvm_riscv_vcpu_sbi_system_reset(struct kvm_vcpu *vcpu, unsigned long i; struct kvm_vcpu *tmp;
- kvm_for_each_vcpu(i, tmp, vcpu->kvm) - tmp->arch.power_off = true; + kvm_for_each_vcpu(i, tmp, vcpu->kvm) { + spin_lock(&vcpu->arch.mp_state_lock); + WRITE_ONCE(tmp->arch.mp_state.mp_state, KVM_MP_STATE_STOPPED); + spin_unlock(&vcpu->arch.mp_state_lock); + } kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP);
memset(&run->system_event, 0, sizeof(run->system_event)); diff --git a/arch/riscv/kvm/vcpu_sbi_hsm.c b/arch/riscv/kvm/vcpu_sbi_hsm.c index 7dca0e9381d9a..827d946ab8714 100644 --- a/arch/riscv/kvm/vcpu_sbi_hsm.c +++ b/arch/riscv/kvm/vcpu_sbi_hsm.c @@ -18,12 +18,18 @@ static int kvm_sbi_hsm_vcpu_start(struct kvm_vcpu *vcpu) struct kvm_cpu_context *cp = &vcpu->arch.guest_context; struct kvm_vcpu *target_vcpu; unsigned long target_vcpuid = cp->a0; + int ret = 0;
target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid); if (!target_vcpu) return SBI_ERR_INVALID_PARAM; - if (!target_vcpu->arch.power_off) - return SBI_ERR_ALREADY_AVAILABLE; + + spin_lock(&target_vcpu->arch.mp_state_lock); + + if (!kvm_riscv_vcpu_stopped(target_vcpu)) { + ret = SBI_ERR_ALREADY_AVAILABLE; + goto out; + }
reset_cntx = &target_vcpu->arch.guest_reset_context; /* start address */ @@ -34,19 +40,31 @@ static int kvm_sbi_hsm_vcpu_start(struct kvm_vcpu *vcpu) reset_cntx->a1 = cp->a2; kvm_make_request(KVM_REQ_VCPU_RESET, target_vcpu);
- kvm_riscv_vcpu_power_on(target_vcpu); + __kvm_riscv_vcpu_power_on(target_vcpu);
- return 0; +out: + spin_unlock(&target_vcpu->arch.mp_state_lock); + + return ret; }
static int kvm_sbi_hsm_vcpu_stop(struct kvm_vcpu *vcpu) { - if (vcpu->arch.power_off) - return SBI_ERR_FAILURE; + int ret = 0;
- kvm_riscv_vcpu_power_off(vcpu); + spin_lock(&vcpu->arch.mp_state_lock);
- return 0; + if (kvm_riscv_vcpu_stopped(vcpu)) { + ret = SBI_ERR_FAILURE; + goto out; + } + + __kvm_riscv_vcpu_power_off(vcpu); + +out: + spin_unlock(&vcpu->arch.mp_state_lock); + + return ret; }
static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu) @@ -58,7 +76,7 @@ static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu) target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid); if (!target_vcpu) return SBI_ERR_INVALID_PARAM; - if (!target_vcpu->arch.power_off) + if (!kvm_riscv_vcpu_stopped(target_vcpu)) return SBI_HSM_STATE_STARTED; else if (vcpu->stat.generic.blocking) return SBI_HSM_STATE_SUSPENDED; @@ -71,14 +89,11 @@ static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, { int ret = 0; struct kvm_cpu_context *cp = &vcpu->arch.guest_context; - struct kvm *kvm = vcpu->kvm; unsigned long funcid = cp->a6;
switch (funcid) { case SBI_EXT_HSM_HART_START: - mutex_lock(&kvm->lock); ret = kvm_sbi_hsm_vcpu_start(vcpu); - mutex_unlock(&kvm->lock); break; case SBI_EXT_HSM_HART_STOP: ret = kvm_sbi_hsm_vcpu_stop(vcpu);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrew Jones ajones@ventanamicro.com
[ Upstream commit c7db342e3b4744688be1e27e31254c1d31a35274 ]
"Not stopped" means started or suspended so we need to check for a single state in order to have a chance to check for each state. Also, we need to use target_vcpu when checking for the suspend state.
Fixes: 763c8bed8c05 ("RISC-V: KVM: Implement SBI HSM suspend call") Signed-off-by: Andrew Jones ajones@ventanamicro.com Reviewed-by: Anup Patel anup@brainfault.org Link: https://lore.kernel.org/r/20250217084506.18763-8-ajones@ventanamicro.com Signed-off-by: Anup Patel anup@brainfault.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/riscv/kvm/vcpu_sbi_hsm.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/riscv/kvm/vcpu_sbi_hsm.c b/arch/riscv/kvm/vcpu_sbi_hsm.c index 827d946ab8714..7e349b4ee926c 100644 --- a/arch/riscv/kvm/vcpu_sbi_hsm.c +++ b/arch/riscv/kvm/vcpu_sbi_hsm.c @@ -76,12 +76,12 @@ static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu) target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid); if (!target_vcpu) return SBI_ERR_INVALID_PARAM; - if (!kvm_riscv_vcpu_stopped(target_vcpu)) - return SBI_HSM_STATE_STARTED; - else if (vcpu->stat.generic.blocking) + if (kvm_riscv_vcpu_stopped(target_vcpu)) + return SBI_HSM_STATE_STOPPED; + else if (target_vcpu->stat.generic.blocking) return SBI_HSM_STATE_SUSPENDED; else - return SBI_HSM_STATE_STOPPED; + return SBI_HSM_STATE_STARTED; }
static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrew Jones ajones@ventanamicro.com
[ Upstream commit 0611f78f83c93c000029ab01daa28166d03590ed ]
When an invalid function ID of an SBI extension is used, we should return not-supported, not invalid-param. Also, when we see that at least one hartid constructed from the base and mask parameters is invalid, we should return invalid-param. Finally, rather than relying on overflowing a left shift to result in zero and then using that zero in a condition which [correctly] skips sending an IPI (but loops unnecessarily), explicitly check for overflow and exit the loop immediately.
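A small standalone userspace illustration of the last two points (the mask and vcpu-count values are made up; this is not kernel code): shifting 1UL by a value >= the word size is undefined behaviour in C, so the loop must bound hart_bit explicitly, and comparing the sent mask against the requested mask afterwards catches hart IDs that never existed:

	#include <stdio.h>

	int main(void)
	{
		const unsigned long xlen = 8 * sizeof(unsigned long);
		unsigned long nr_vcpus = 4;	/* pretend harts 0..3 exist */
		unsigned long hbase = 0;
		unsigned long hmask = 0x13;	/* harts 0, 1 and 4 requested */
		unsigned long sentmask = 0;

		for (unsigned long id = hbase; id < nr_vcpus; id++) {
			unsigned long hart_bit = id - hbase;

			if (hart_bit >= xlen)
				break;	/* explicit check, no UB shift */
			if (!(hmask & (1UL << hart_bit)))
				continue;
			printf("IPI to hart %lu\n", id);
			sentmask |= 1UL << hart_bit;
		}

		/* any leftover bit names a hart that was never IPI'd */
		if (hmask ^ sentmask)
			printf("-> SBI_ERR_INVALID_PARAM\n");
		return 0;
	}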
Fixes: 5f862df5585c ("RISC-V: KVM: Add v0.1 replacement SBI extensions defined in v0.2") Signed-off-by: Andrew Jones ajones@ventanamicro.com Reviewed-by: Anup Patel anup@brainfault.org Link: https://lore.kernel.org/r/20250217084506.18763-10-ajones@ventanamicro.com Signed-off-by: Anup Patel anup@brainfault.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/riscv/kvm/vcpu_sbi_replace.c | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c index 7c4d5d38a3390..26e2619ab887b 100644 --- a/arch/riscv/kvm/vcpu_sbi_replace.c +++ b/arch/riscv/kvm/vcpu_sbi_replace.c @@ -51,9 +51,10 @@ static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, struct kvm_cpu_context *cp = &vcpu->arch.guest_context; unsigned long hmask = cp->a0; unsigned long hbase = cp->a1; + unsigned long hart_bit = 0, sentmask = 0;
if (cp->a6 != SBI_EXT_IPI_SEND_IPI) { - retdata->err_val = SBI_ERR_INVALID_PARAM; + retdata->err_val = SBI_ERR_NOT_SUPPORTED; return 0; }
@@ -62,15 +63,23 @@ static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, if (hbase != -1UL) { if (tmp->vcpu_id < hbase) continue; - if (!(hmask & (1UL << (tmp->vcpu_id - hbase)))) + hart_bit = tmp->vcpu_id - hbase; + if (hart_bit >= __riscv_xlen) + goto done; + if (!(hmask & (1UL << hart_bit))) continue; } ret = kvm_riscv_vcpu_set_interrupt(tmp, IRQ_VS_SOFT); if (ret < 0) break; + sentmask |= 1UL << hart_bit; kvm_riscv_vcpu_pmu_incr_fw(tmp, SBI_PMU_FW_IPI_RCVD); }
+done: + if (hbase != -1UL && (hmask ^ sentmask)) + retdata->err_val = SBI_ERR_INVALID_PARAM; + return ret; }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrew Jones ajones@ventanamicro.com
[ Upstream commit b901484852992cf3d162a5eab72251cc813ca624 ]
When an invalid function ID of an SBI extension is used, we should return not-supported, not invalid-param.
Fixes: 5f862df5585c ("RISC-V: KVM: Add v0.1 replacement SBI extensions defined in v0.2") Signed-off-by: Andrew Jones ajones@ventanamicro.com Reviewed-by: Anup Patel anup@brainfault.org Link: https://lore.kernel.org/r/20250217084506.18763-11-ajones@ventanamicro.com Signed-off-by: Anup Patel anup@brainfault.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/riscv/kvm/vcpu_sbi_replace.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c index 26e2619ab887b..87ec68ed52d76 100644 --- a/arch/riscv/kvm/vcpu_sbi_replace.c +++ b/arch/riscv/kvm/vcpu_sbi_replace.c @@ -21,7 +21,7 @@ static int kvm_sbi_ext_time_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, u64 next_cycle;
if (cp->a6 != SBI_EXT_TIME_SET_TIMER) { - retdata->err_val = SBI_ERR_INVALID_PARAM; + retdata->err_val = SBI_ERR_NOT_SUPPORTED; return 0; }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Steven Rostedt rostedt@goodmis.org
commit 6f86bdeab633a56d5c6dccf1a2c5989b6a5e323e upstream.
The following commands cause a crash:

 ~# cd /sys/kernel/tracing/events/rcu/rcu_callback
 ~# echo 'hist:name=bad:keys=common_pid:onmax(bogus).save(common_pid)' > trigger
 bash: echo: write error: Invalid argument
 ~# echo 'hist:name=bad:keys=common_pid' > trigger
Because the following occurs:
 event_trigger_write() {
   trigger_process_regex() {
     event_hist_trigger_parse() {

       data = event_trigger_alloc(..);

       event_trigger_register(.., data) {
         cmd_ops->reg(.., data, ..) [hist_register_trigger()] {
           data->ops->init() [event_hist_trigger_init()] {
             save_named_trigger(name, data) {
               list_add(&data->named_list, &named_triggers);
             }
           }
         }
       }

       ret = create_actions(); (return -EINVAL)
       if (ret)
         goto out_unreg;
 [..]
       ret = hist_trigger_enable(data, ...) {
         list_add_tail_rcu(&data->list, &file->triggers); <<<---- SKIPPED!!! (this is important!)
 [..]
  out_unreg:
       event_hist_unregister(.., data) {
         cmd_ops->unreg(.., data, ..) [hist_unregister_trigger()] {
           list_for_each_entry(iter, &file->triggers, list) {
             if (!hist_trigger_match(data, iter, named_data, false))  <- never matches
               continue;
             [..]
             test = iter;
           }
           if (test && test->ops->free) <<<-- test is NULL
             test->ops->free(test) [event_hist_trigger_free()] {
               [..]
               if (data->name)
                 del_named_trigger(data) {
                   list_del(&data->named_list); <<<<-- NEVER gets removed!
                 }
             }
         }
       }

       [..]
       kfree(data); <<<-- frees item but it is still on list
The next time a hist trigger with a name is registered, it causes a use-after-free bug and the kernel can crash.
Move the code around such that if event_trigger_register() succeeds, the next thing called is hist_trigger_enable() which adds it to the list.
A bunch of actions are called if get_named_trigger_data() returns false. But they don't need to be called after event_trigger_register(), so they can be moved up, allowing event_trigger_register() to be called just before hist_trigger_enable(), keeping the two together and allowing file->triggers to be properly populated.
Cc: stable@vger.kernel.org Cc: Masami Hiramatsu mhiramat@kernel.org Cc: Mathieu Desnoyers mathieu.desnoyers@efficios.com Link: https://lore.kernel.org/20250227163944.1c37f85f@gandalf.local.home Fixes: 067fe038e70f6 ("tracing: Add variable reference handling to hist triggers") Reported-by: Tomas Glozar tglozar@redhat.com Tested-by: Tomas Glozar tglozar@redhat.com Reviewed-by: Tom Zanussi zanussi@kernel.org Closes: https://lore.kernel.org/all/CAP4=nvTsxjckSBTz=Oe_UYh8keD9_sZC4i++4h72mJLic4_... Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/trace/trace_events_hist.c | 30 +++++++++++++++--------------- 1 file changed, 15 insertions(+), 15 deletions(-)
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -6660,27 +6660,27 @@ static int event_hist_trigger_parse(stru
 	if (existing_hist_update_only(glob, trigger_data, file))
 		goto out_free;

-	ret = event_trigger_register(cmd_ops, file, glob, trigger_data);
-	if (ret < 0)
-		goto out_free;
+	if (!get_named_trigger_data(trigger_data)) {

-	if (get_named_trigger_data(trigger_data))
-		goto enable;
+		ret = create_actions(hist_data);
+		if (ret)
+			goto out_free;

-	ret = create_actions(hist_data);
-	if (ret)
-		goto out_unreg;
+		if (has_hist_vars(hist_data) || hist_data->n_var_refs) {
+			ret = save_hist_vars(hist_data);
+			if (ret)
+				goto out_free;
+		}

-	if (has_hist_vars(hist_data) || hist_data->n_var_refs) {
-		ret = save_hist_vars(hist_data);
+		ret = tracing_map_init(hist_data->map);
 		if (ret)
-			goto out_unreg;
+			goto out_free;
 	}

-	ret = tracing_map_init(hist_data->map);
-	if (ret)
-		goto out_unreg;
-enable:
+	ret = event_trigger_register(cmd_ops, file, glob, trigger_data);
+	if (ret < 0)
+		goto out_free;
+
 	ret = hist_trigger_enable(trigger_data, file);
 	if (ret)
 		goto out_unreg;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nikolay Kuratov kniv@yandex-team.ru
commit a1a7eb89ca0b89dc1c326eeee2596f263291aca3 upstream.
Check whether the denominator expression x * (x - 1) * 1000 mod {2^32, 2^64} produces zero, and skip the stddev computation in that case.
For now don't care about rec->counter * rec->counter overflow because rec->time * rec->time overflow will likely happen earlier.
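To see how the denominator can wrap to zero, consider a hypothetical standalone computation (the value is chosen purely for illustration):

    /* illustration only: 64-bit wraparound of n * (n - 1) * 1000 */
    unsigned long long n = 1ULL << 61;
    unsigned long long denom = n * (n - 1) * 1000;

    /*
     * n * (n - 1) contributes a factor of 2^61 and 1000 contributes
     * 2^3, so the product is a multiple of 2^64 and denom == 0 here;
     * feeding it to div64_ul() would be a division by zero. Where the
     * counter is a 32-bit unsigned long, any counter that is a
     * multiple of 2^29 (~5.4e8) wraps the same way mod 2^32.
     */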
Cc: stable@vger.kernel.org Cc: Wen Yang wenyang@linux.alibaba.com Cc: Mark Rutland mark.rutland@arm.com Cc: Mathieu Desnoyers mathieu.desnoyers@efficios.com Link: https://lore.kernel.org/20250206090156.1561783-1-kniv@yandex-team.ru Fixes: e31f7939c1c27 ("ftrace: Avoid potential division by zero in function profiler") Signed-off-by: Nikolay Kuratov kniv@yandex-team.ru Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/trace/ftrace.c | 27 ++++++++++++--------------- 1 file changed, 12 insertions(+), 15 deletions(-)
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -538,6 +538,7 @@ static int function_stat_show(struct seq
 	static struct trace_seq s;
 	unsigned long long avg;
 	unsigned long long stddev;
+	unsigned long long stddev_denom;
 #endif
 	mutex_lock(&ftrace_profile_lock);

@@ -559,23 +560,19 @@ static int function_stat_show(struct seq
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	seq_puts(m, "    ");

-	/* Sample standard deviation (s^2) */
-	if (rec->counter <= 1)
-		stddev = 0;
-	else {
-		/*
-		 * Apply Welford's method:
-		 * s^2 = 1 / (n * (n-1)) * (n * \Sum (x_i)^2 - (\Sum x_i)^2)
-		 */
+	/*
+	 * Variance formula:
+	 * s^2 = 1 / (n * (n-1)) * (n * \Sum (x_i)^2 - (\Sum x_i)^2)
+	 * Maybe Welford's method is better here?
+	 * Divide only by 1000 for ns^2 -> us^2 conversion.
+	 * trace_print_graph_duration will divide by 1000 again.
+	 */
+	stddev = 0;
+	stddev_denom = rec->counter * (rec->counter - 1) * 1000;
+	if (stddev_denom) {
 		stddev = rec->counter * rec->time_squared -
 			 rec->time * rec->time;
-
-		/*
-		 * Divide only 1000 for ns^2 -> us^2 conversion.
-		 * trace_print_graph_duration will divide 1000 again.
-		 */
-		stddev = div64_ul(stddev,
-				  rec->counter * (rec->counter - 1) * 1000);
+		stddev = div64_ul(stddev, stddev_denom);
 	}
trace_seq_init(&s);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dmitry Panchenko dmitry@d-systems.ee
commit 9af3b4f2d879da01192d6168e6c651e7fb5b652d upstream.
Re-add the sample-rate quirk for the Pioneer DJM-900NXS2. The device does not work without setting the sample rate.
Signed-off-by: Dmitry Panchenko dmitry@d-systems.ee Cc: stable@vger.kernel.org Link: https://patch.msgid.link/20250220161540.3624660-1-dmitry@d-systems.ee Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/usb/quirks.c | 1 + 1 file changed, 1 insertion(+)
--- a/sound/usb/quirks.c
+++ b/sound/usb/quirks.c
@@ -1775,6 +1775,7 @@ void snd_usb_set_format_quirk(struct snd
 	case USB_ID(0x534d, 0x2109): /* MacroSilicon MS2109 */
 		subs->stream_offset_adj = 2;
 		break;
+	case USB_ID(0x2b73, 0x000a): /* Pioneer DJM-900NXS2 */
 	case USB_ID(0x2b73, 0x0013): /* Pioneer DJM-450 */
 		pioneer_djm_set_format_quirk(subs, 0x0082);
 		break;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Adrien Vergé adrienverge@gmail.com
commit c6557ccf8094ce2e1142c6e49cd47f5d5e2933a8 upstream.
This fixes a regression introduced a few weeks ago in stable kernels 6.12.14 and 6.13.3. The internal microphone on ASUS Vivobook N705UD / X705UD laptops is broken: the microphone appears in userspace (e.g. Gnome settings) but no sound is detected. I bisected it to commit 3b4309546b48 ("ALSA: hda: Fix headset detection failure due to unstable sort").
I figured out the cause:

1. The initial pins enabled for the ALC256 driver are:

   cfg->inputs == {
     { pin=0x19, type=AUTO_PIN_MIC, is_headset_mic=1, is_headphone_mic=0, has_boost_on_pin=1 },
     { pin=0x1a, type=AUTO_PIN_MIC, is_headset_mic=0, is_headphone_mic=0, has_boost_on_pin=1 } }

2. Since 2017 and commits c1732ede5e8 ("ALSA: hda/realtek - Fix headset and mic on several ASUS laptops with ALC256") and 28e8af8a163 ("ALSA: hda/realtek: Fix mic and headset jack sense on ASUS X705UD"), the quirk ALC256_FIXUP_ASUS_MIC is also applied to ASUS X705UD / N705UD laptops. This added another internal microphone on pin 0x13:

   cfg->inputs == {
     { pin=0x13, type=AUTO_PIN_MIC, is_headset_mic=0, is_headphone_mic=0, has_boost_on_pin=1 },
     { pin=0x19, type=AUTO_PIN_MIC, is_headset_mic=1, is_headphone_mic=0, has_boost_on_pin=1 },
     { pin=0x1a, type=AUTO_PIN_MIC, is_headset_mic=0, is_headphone_mic=0, has_boost_on_pin=1 } }

   I don't know what this pin 0x13 corresponds to. To the best of my knowledge, these laptops have only one internal microphone.

3. Before 2025 and commit 3b4309546b48 ("ALSA: hda: Fix headset detection failure due to unstable sort"), the sort function would place the microphone on pin 0x1a (the working one) *before* the microphone on pin 0x13 (the phantom one).

4. After commit 3b4309546b48, the fixed sort function puts the working microphone (pin 0x1a) *after* the phantom one (pin 0x13). As a result, no sound is detected anymore.
It looks like the quirk ALC256_FIXUP_ASUS_MIC is not needed anymore for ASUS Vivobook X705UD / N705UD laptops. Without it, everything works fine:
- the internal microphone is detected and records actual sound,
- plugging in a jack headset is detected and actual sound can be recorded with it,
- unplugging the jack headset makes the system go back to the internal microphone, which again records actual sound.
Cc: stable@vger.kernel.org Cc: Kuan-Wei Chiu visitorckw@gmail.com Cc: Chris Chiu chris.chiu@canonical.com Fixes: 3b4309546b48 ("ALSA: hda: Fix headset detection failure due to unstable sort") Tested-by: Adrien Vergé adrienverge@gmail.com Signed-off-by: Adrien Vergé adrienverge@gmail.com Link: https://patch.msgid.link/20250226135515.24219-1-adrienverge@gmail.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/pci/hda/patch_realtek.c | 1 - 1 file changed, 1 deletion(-)
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -10115,7 +10115,6 @@ static const struct hda_quirk alc269_fix
 	SND_PCI_QUIRK(0x1043, 0x19ce, "ASUS B9450FA", ALC294_FIXUP_ASUS_HPE),
 	SND_PCI_QUIRK(0x1043, 0x19e1, "ASUS UX581LV", ALC295_FIXUP_ASUS_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
-	SND_PCI_QUIRK(0x1043, 0x1a30, "ASUS X705UD", ALC256_FIXUP_ASUS_MIC),
 	SND_PCI_QUIRK(0x1043, 0x1a63, "ASUS UX3405MA", ALC245_FIXUP_CS35L41_SPI_2),
 	SND_PCI_QUIRK(0x1043, 0x1a83, "ASUS UM5302LA", ALC294_FIXUP_CS35L41_I2C_2),
 	SND_PCI_QUIRK(0x1043, 0x1a8f, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2),
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Breno Leitao leitao@debian.org
commit 0fe8813baf4b2e865d3b2c735ce1a15b86002c74 upstream.
The perf_iterate_ctx() function performs RCU list traversal but currently lacks RCU read lock protection. This causes lockdep warnings when running perf probe with unshare(1) under CONFIG_PROVE_RCU_LIST=y:
 WARNING: suspicious RCU usage
 kernel/events/core.c:8168 RCU-list traversed in non-reader section!!
 Call Trace:
  lockdep_rcu_suspicious
  ? perf_event_addr_filters_apply
  perf_iterate_ctx
  perf_event_exec
  begin_new_exec
  ? load_elf_phdrs
  load_elf_binary
  ? lock_acquire
  ? find_held_lock
  ? bprm_execve
  bprm_execve
  do_execveat_common.isra.0
  __x64_sys_execve
  do_syscall_64
  entry_SYSCALL_64_after_hwframe
This protection was previously present but was removed in commit bd2756811766 ("perf: Rewrite core context handling"). Add back the necessary rcu_read_lock()/rcu_read_unlock() pair around perf_iterate_ctx() call in perf_event_exec().
[ mingo: Use scoped_guard() as suggested by Peter ]
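For readers unfamiliar with the guard helpers from <linux/cleanup.h>: the scoped_guard(rcu) form used below is, to a first approximation, equivalent to the classic pairing, with the unlock emitted automatically at scope exit:

    rcu_read_lock();
    perf_iterate_ctx(ctx, perf_event_addr_filters_exec, NULL, true);
    rcu_read_unlock();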
Fixes: bd2756811766 ("perf: Rewrite core context handling") Signed-off-by: Breno Leitao leitao@debian.org Signed-off-by: Ingo Molnar mingo@kernel.org Acked-by: Peter Zijlstra (Intel) peterz@infradead.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20250117-fix_perf_rcu-v1-1-13cb9210fc6a@debian.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/events/core.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -8113,7 +8113,8 @@ void perf_event_exec(void)

 		perf_event_enable_on_exec(ctx);
 		perf_event_remove_on_exec(ctx);
-		perf_iterate_ctx(ctx, perf_event_addr_filters_exec, NULL, true);
+		scoped_guard(rcu)
+			perf_iterate_ctx(ctx, perf_event_addr_filters_exec, NULL, true);

 		perf_unpin_context(ctx);
 		put_ctx(ctx);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kan Liang kan.liang@linux.intel.com
commit 88ec7eedbbd21cad38707620ad6c48a4e9a87c18 upstream.
Perf doesn't work at low frequencies:
 $ perf record -e cpu_core/instructions/ppp -F 120
 Error:
 The sys_perf_event_open() syscall returned with 22 (Invalid argument)
 for event (cpu_core/instructions/ppp).
 "dmesg | grep -i perf" may provide additional information.
The limit_period() check avoids a low sampling period on a counter. It is not intended to limit the frequency.
The check in x86_pmu_hw_config() should therefore be limited to non-freq mode. attr.sample_period and attr.sample_freq are members of a union, so attr.sample_period should not be used to indicate frequency mode.
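The union in question is the anonymous one in the UAPI struct perf_event_attr (include/uapi/linux/perf_event.h):

    union {
        __u64   sample_period;
        __u64   sample_freq;
    };

Since both members share storage, a non-zero attr.sample_period alone cannot distinguish period mode from frequency mode; only attr.freq can.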
Fixes: c46e665f0377 ("perf/x86: Add INST_RETIRED.ALL workarounds") Signed-off-by: Kan Liang kan.liang@linux.intel.com Signed-off-by: Ingo Molnar mingo@kernel.org Reviewed-by: Ravi Bangoria ravi.bangoria@amd.com Cc: Peter Zijlstra peterz@infradead.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20250117151913.3043942-1-kan.liang@linux.intel.com Closes: https://lore.kernel.org/lkml/20250115154949.3147-1-ravi.bangoria@amd.com/ Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/events/core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -623,7 +623,7 @@ int x86_pmu_hw_config(struct perf_event
 	if (event->attr.type == event->pmu->type)
 		event->hw.config |= event->attr.config & X86_RAW_EVENT_MASK;

-	if (event->attr.sample_period && x86_pmu.limit_period) {
+	if (!event->attr.freq && x86_pmu.limit_period) {
 		s64 left = event->attr.sample_period;
 		x86_pmu.limit_period(event, &left);
 		if (left > event->attr.sample_period)
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kan Liang kan.liang@linux.intel.com
commit 0d39844150546fa1415127c5fbae26db64070dd3 upstream.
A low attr::freq value cannot be set via IOC_PERIOD on some platforms.
The perf_event_check_period() introduced in:
81ec3f3c4c4d ("perf/x86: Add check_period PMU callback")
was intended to check the period, rather than the frequency. A low frequency may be mistakenly rejected by limit_period().
Fix it.
Fixes: 81ec3f3c4c4d ("perf/x86: Add check_period PMU callback") Signed-off-by: Kan Liang kan.liang@linux.intel.com Signed-off-by: Ingo Molnar mingo@kernel.org Reviewed-by: Ravi Bangoria ravi.bangoria@amd.com Cc: Peter Zijlstra peterz@infradead.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20250117151913.3043942-2-kan.liang@linux.intel.com Closes: https://lore.kernel.org/lkml/20250115154949.3147-1-ravi.bangoria@amd.com/ Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/events/core.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-)
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5861,14 +5861,15 @@ static int _perf_event_period(struct per
 	if (!value)
 		return -EINVAL;

-	if (event->attr.freq && value > sysctl_perf_event_sample_rate)
-		return -EINVAL;
-
-	if (perf_event_check_period(event, value))
-		return -EINVAL;
-
-	if (!event->attr.freq && (value & (1ULL << 63)))
-		return -EINVAL;
+	if (event->attr.freq) {
+		if (value > sysctl_perf_event_sample_rate)
+			return -EINVAL;
+	} else {
+		if (perf_event_check_period(event, value))
+			return -EINVAL;
+		if (value & (1ULL << 63))
+			return -EINVAL;
+	}
event_function_call(event, __perf_event_period, &value);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tom Chung chiahsuan.chung@amd.com
commit e8863f8b0316d8ee1e7e5291e8f2f72c91ac967d upstream.
[Why] PSR-SU may randomly cause glitches on several panels.
[How] Temporarily disable PSR-SU and fall back to PSR1 for all eDP panels.
Link: https://gitlab.freedesktop.org/drm/amd/-/issues/3388 Cc: Mario Limonciello mario.limonciello@amd.com Cc: Alex Deucher alexander.deucher@amd.com Reviewed-by: Sun peng Li sunpeng.li@amd.com Signed-off-by: Tom Chung chiahsuan.chung@amd.com Signed-off-by: Roman Li roman.li@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com (cherry picked from commit 6deeefb820d0efb0b36753622fb982d03b37b3ad) Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
@@ -51,7 +51,8 @@ static bool link_supports_psrsu(struct d
 	    !link->dpcd_caps.psr_info.psr2_su_y_granularity_cap)
 		return false;

-	return dc_dmub_check_min_version(dc->ctx->dmub_srv->dmub);
+	/* Temporarily disable PSR-SU to avoid glitches */
+	return false;
 }
/*
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Roman Li Roman.Li@amd.com
commit 4de141b8b1b7991b607f77e5f4580e1c67c24717 upstream.
[Why] DC is not using amdgpu_irq_get/put to manage the HPD interrupt refcounts. So when amdgpu_irq_gpu_reset_resume_helper() reprograms all of the IRQs, HPD gets disabled.
[How] Use amdgpu_irq_get/put() for HPD init/fini in DM in order to sync the refcounts.
Cc: Mario Limonciello mario.limonciello@amd.com Cc: Alex Deucher alexander.deucher@amd.com Reviewed-by: Mario Limonciello mario.limonciello@amd.com Reviewed-by: Aurabindo Pillai aurabindo.pillai@amd.com Signed-off-by: Roman Li Roman.Li@amd.com Signed-off-by: Zaeem Mohamed zaeem.mohamed@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com (cherry picked from commit f3dde2ff7fcaacd77884502e8f572f2328e9c745) Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+)
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
@@ -891,6 +891,7 @@ void amdgpu_dm_hpd_init(struct amdgpu_de
 	struct drm_device *dev = adev_to_drm(adev);
 	struct drm_connector *connector;
 	struct drm_connector_list_iter iter;
+	int i;

 	drm_connector_list_iter_begin(dev, &iter);
 	drm_for_each_connector_iter(connector, &iter) {
@@ -912,6 +913,12 @@ void amdgpu_dm_hpd_init(struct amdgpu_de
 		}
 	}
 	drm_connector_list_iter_end(&iter);
+
+	/* Update reference counts for HPDs */
+	for (i = DC_IRQ_SOURCE_HPD1; i <= adev->mode_info.num_hpd; i++) {
+		if (amdgpu_irq_get(adev, &adev->hpd_irq, i - DC_IRQ_SOURCE_HPD1))
+			drm_err(dev, "DM_IRQ: Failed get HPD for source=%d)!\n", i);
+	}
 }

 /**
@@ -927,6 +934,7 @@ void amdgpu_dm_hpd_fini(struct amdgpu_de
 	struct drm_device *dev = adev_to_drm(adev);
 	struct drm_connector *connector;
 	struct drm_connector_list_iter iter;
+	int i;

 	drm_connector_list_iter_begin(dev, &iter);
 	drm_for_each_connector_iter(connector, &iter) {
@@ -947,4 +955,10 @@ void amdgpu_dm_hpd_fini(struct amdgpu_de
 		}
 	}
 	drm_connector_list_iter_end(&iter);
+
+	/* Update reference counts for HPDs */
+	for (i = DC_IRQ_SOURCE_HPD1; i <= adev->mode_info.num_hpd; i++) {
+		if (amdgpu_irq_put(adev, &adev->hpd_irq, i - DC_IRQ_SOURCE_HPD1))
+			drm_err(dev, "DM_IRQ: Failed put HPD for source=%d!\n", i);
+	}
 }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tyrone Ting kfting@nuvoton.com
commit dd1998e243f5fa25d348a384ba0b6c84d980f2b2 upstream.
The customer reports a soft lockup issue related to the I2C driver. After checking, it turned out the I2C module was doing a TX transfer when the BMC machine rebooted in the middle of the I2C transaction; the I2C module kept that status without being reset.

Because of this stale I2C module status, the I2C IRQ handler keeps getting triggered as soon as it is registered during the kernel boot that follows the BMC's warm reboot. The continuous triggering is eventually stopped by the soft lockup watchdog timer.

Fix this by clearing the interrupt enable bit in the I2C module before calling devm_request_irq(), since the relevant I2C status bits are read-only.
Here is the soft lockup log.

 [ 28.176395] watchdog: BUG: soft lockup - CPU#0 stuck for 26s! [swapper/0:1]
 [ 28.183351] Modules linked in:
 [ 28.186407] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.15.120-yocto-s-dirty-bbebc78 #1
 [ 28.201174] pstate: 40000005 (nZcv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
 [ 28.208128] pc : __do_softirq+0xb0/0x368
 [ 28.212055] lr : __do_softirq+0x70/0x368
 [ 28.215972] sp : ffffff8035ebca00
 [ 28.219278] x29: ffffff8035ebca00 x28: 0000000000000002 x27: ffffff80071a3780
 [ 28.226412] x26: ffffffc008bdc000 x25: ffffffc008bcc640 x24: ffffffc008be50c0
 [ 28.233546] x23: ffffffc00800200c x22: 0000000000000000 x21: 000000000000001b
 [ 28.240679] x20: 0000000000000000 x19: ffffff80001c3200 x18: ffffffffffffffff
 [ 28.247812] x17: ffffffc02d2e0000 x16: ffffff8035eb8b40 x15: 00001e8480000000
 [ 28.254945] x14: 02c3647e37dbfcb6 x13: 02c364f2ab14200c x12: 0000000002c364f2
 [ 28.262078] x11: 00000000fa83b2da x10: 000000000000b67e x9 : ffffffc008010250
 [ 28.269211] x8 : 000000009d983d00 x7 : 7fffffffffffffff x6 : 0000036d74732434
 [ 28.276344] x5 : 00ffffffffffffff x4 : 0000000000000015 x3 : 0000000000000198
 [ 28.283476] x2 : ffffffc02d2e0000 x1 : 00000000000000e0 x0 : ffffffc008bdcb40
 [ 28.290611] Call trace:
 [ 28.293052]  __do_softirq+0xb0/0x368
 [ 28.296625]  __irq_exit_rcu+0xe0/0x100
 [ 28.300374]  irq_exit+0x14/0x20
 [ 28.303513]  handle_domain_irq+0x68/0x90
 [ 28.307440]  gic_handle_irq+0x78/0xb0
 [ 28.311098]  call_on_irq_stack+0x20/0x38
 [ 28.315019]  do_interrupt_handler+0x54/0x5c
 [ 28.319199]  el1_interrupt+0x2c/0x4c
 [ 28.322777]  el1h_64_irq_handler+0x14/0x20
 [ 28.326872]  el1h_64_irq+0x74/0x78
 [ 28.330269]  __setup_irq+0x454/0x780
 [ 28.333841]  request_threaded_irq+0xd0/0x1b4
 [ 28.338107]  devm_request_threaded_irq+0x84/0x100
 [ 28.342809]  npcm_i2c_probe_bus+0x188/0x3d0
 [ 28.346990]  platform_probe+0x6c/0xc4
 [ 28.350653]  really_probe+0xcc/0x45c
 [ 28.354227]  __driver_probe_device+0x8c/0x160
 [ 28.358578]  driver_probe_device+0x44/0xe0
 [ 28.362670]  __driver_attach+0x124/0x1d0
 [ 28.366589]  bus_for_each_dev+0x7c/0xe0
 [ 28.370426]  driver_attach+0x28/0x30
 [ 28.373997]  bus_add_driver+0x124/0x240
 [ 28.377830]  driver_register+0x7c/0x124
 [ 28.381662]  __platform_driver_register+0x2c/0x34
 [ 28.386362]  npcm_i2c_init+0x3c/0x5c
 [ 28.389937]  do_one_initcall+0x74/0x230
 [ 28.393768]  kernel_init_freeable+0x24c/0x2b4
 [ 28.398126]  kernel_init+0x28/0x130
 [ 28.401614]  ret_from_fork+0x10/0x20
 [ 28.405189] Kernel panic - not syncing: softlockup: hung tasks
 [ 28.411011] SMP: stopping secondary CPUs
 [ 28.414933] Kernel Offset: disabled
 [ 28.418412] CPU features: 0x00000000,00000802
 [ 28.427644] Rebooting in 20 seconds..
Fixes: 56a1485b102e ("i2c: npcm7xx: Add Nuvoton NPCM I2C controller driver") Signed-off-by: Tyrone Ting kfting@nuvoton.com Cc: stable@vger.kernel.org # v5.8+ Reviewed-by: Tali Perry tali.perry1@gmail.com Signed-off-by: Andi Shyti andi.shyti@kernel.org Link: https://lore.kernel.org/r/20250220040029.27596-2-kfting@nuvoton.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/i2c/busses/i2c-npcm7xx.c | 7 +++++++ 1 file changed, 7 insertions(+)
--- a/drivers/i2c/busses/i2c-npcm7xx.c
+++ b/drivers/i2c/busses/i2c-npcm7xx.c
@@ -2333,6 +2333,13 @@ static int npcm_i2c_probe_bus(struct pla
 	if (irq < 0)
 		return irq;

+	/*
+	 * Disable the interrupt to avoid the interrupt handler being triggered
+	 * incorrectly by the asynchronous interrupt status since the machine
+	 * might do a warm reset during the last smbus/i2c transfer session.
+	 */
+	npcm_i2c_int_enable(bus, false);
+
 	ret = devm_request_irq(bus->dev, irq, npcm_i2c_bus_irq, 0,
 			       dev_name(bus->dev), bus);
 	if (ret)
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Binbin Zhou zhoubinbin@loongson.cn
commit 71c49ee9bb41e1709abac7e2eb05f9193222e580 upstream.
According to the chip manual, the I2C register access type of Loongson-2K2000/LS7A is "B", so we can only access registers in byte form (readb()/writeb()).
Although Loongson-2K0500/Loongson-2K1000 do not have similar constraints, register accesses in byte form also behave correctly.
Also, in hardware, the frequency division registers are defined as two separate registers (high 8-bit and low 8-bit), so we just access them directly as bytes.
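For what it's worth, the FIELD_GET()-based byte split used in the diff below is equivalent to plain shift-and-mask byte writes; a sketch assuming the same val:

    u16 val = LS2X_I2C_PCLK_FREQ / (5 * t->bus_freq_hz) - 1;

    writeb(val & 0xff, priv->base + I2C_LS2X_PRER_LO); /* low 8 bits */
    writeb(val >> 8, priv->base + I2C_LS2X_PRER_HI);   /* high 8 bits */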
Fixes: 015e61f0bffd ("i2c: ls2x: Add driver for Loongson-2K/LS7A I2C controller") Co-developed-by: Hongliang Wang wanghongliang@loongson.cn Signed-off-by: Hongliang Wang wanghongliang@loongson.cn Signed-off-by: Binbin Zhou zhoubinbin@loongson.cn Cc: stable@vger.kernel.org # v6.3+ Reviewed-by: Andy Shevchenko andy@kernel.org Signed-off-by: Andi Shyti andi.shyti@kernel.org Link: https://lore.kernel.org/r/20250220125612.1910990-1-zhoubinbin@loongson.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/i2c/busses/i2c-ls2x.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-)
--- a/drivers/i2c/busses/i2c-ls2x.c
+++ b/drivers/i2c/busses/i2c-ls2x.c
@@ -10,6 +10,7 @@
  * Rewritten for mainline by Binbin Zhou zhoubinbin@loongson.cn
  */

+#include <linux/bitfield.h>
 #include <linux/bits.h>
 #include <linux/completion.h>
 #include <linux/device.h>
@@ -26,7 +27,8 @@
 #include <linux/units.h>

 /* I2C Registers */
-#define I2C_LS2X_PRER		0x0 /* Freq Division Register(16 bits) */
+#define I2C_LS2X_PRER_LO	0x0 /* Freq Division Low Byte Register */
+#define I2C_LS2X_PRER_HI	0x1 /* Freq Division High Byte Register */
 #define I2C_LS2X_CTR		0x2 /* Control Register */
 #define I2C_LS2X_TXR		0x3 /* Transport Data Register */
 #define I2C_LS2X_RXR		0x3 /* Receive Data Register */
@@ -93,6 +95,7 @@ static irqreturn_t ls2x_i2c_isr(int this
  */
 static void ls2x_i2c_adjust_bus_speed(struct ls2x_i2c_priv *priv)
 {
+	u16 val;
 	struct i2c_timings *t = &priv->i2c_t;
 	struct device *dev = priv->adapter.dev.parent;
 	u32 acpi_speed = i2c_acpi_find_bus_speed(dev);
@@ -104,9 +107,14 @@ static void ls2x_i2c_adjust_bus_speed(st
 	else
 		t->bus_freq_hz = LS2X_I2C_FREQ_STD;

-	/* Calculate and set i2c frequency. */
-	writew(LS2X_I2C_PCLK_FREQ / (5 * t->bus_freq_hz) - 1,
-	       priv->base + I2C_LS2X_PRER);
+	/*
+	 * According to the chip manual, we can only access the registers as bytes,
+	 * otherwise the high bits will be truncated.
+	 * So set the I2C frequency with a sequential writeb() instead of writew().
+	 */
+	val = LS2X_I2C_PCLK_FREQ / (5 * t->bus_freq_hz) - 1;
+	writeb(FIELD_GET(GENMASK(7, 0), val), priv->base + I2C_LS2X_PRER_LO);
+	writeb(FIELD_GET(GENMASK(15, 8), val), priv->base + I2C_LS2X_PRER_HI);
 }
static void ls2x_i2c_init(struct ls2x_i2c_priv *priv)
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nikita Zhandarovich n.zhandarovich@fintech.ru
commit 1cf9631d836b289bd5490776551961c883ae8a4f upstream.
Syzbot reports [1] a warning in usb_submit_urb() triggered by inconsistencies between the expected and the actually present endpoints in the gl620a driver. Since genelink_bind() does not properly verify whether the specified endpoints are in fact provided by the device (in this case, an artificially manufactured one), a mismatch may occur.

Fix the issue by switching to the usbnet utility function usbnet_get_endpoints(), which exists for this very purpose. Check for the endpoints and return early, before proceeding further, if any are missing.
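For context, usbnet_get_endpoints() (exported from drivers/net/usb/usbnet.c) walks the interface's altsettings looking for the bulk-in and bulk-out (and optional status) endpoints and fills dev->in, dev->out and dev->status, returning an error if they are absent. The post-patch bind therefore reduces to (restating the result of the diff below):

    static int genelink_bind(struct usbnet *dev, struct usb_interface *intf)
    {
        dev->hard_mtu = GL_RCV_BUF_SIZE;
        dev->net->hard_header_len += 4;

        /* fails the bind early if the device does not actually
         * provide the expected bulk endpoints */
        return usbnet_get_endpoints(dev, intf);
    }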
[1] Syzbot report:

 usb 5-1: Manufacturer: syz
 usb 5-1: SerialNumber: syz
 usb 5-1: config 0 descriptor??
 gl620a 5-1:0.23 usb0: register 'gl620a' at usb-dummy_hcd.0-1, ...
 ------------[ cut here ]------------
 usb 5-1: BOGUS urb xfer, pipe 3 != type 1
 WARNING: CPU: 2 PID: 1841 at drivers/usb/core/urb.c:503 usb_submit_urb+0xe4b/0x1730 drivers/usb/core/urb.c:503
 Modules linked in:
 CPU: 2 UID: 0 PID: 1841 Comm: kworker/2:2 Not tainted 6.12.0-syzkaller-07834-g06afb0f36106 #0
 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
 Workqueue: mld mld_ifc_work
 RIP: 0010:usb_submit_urb+0xe4b/0x1730 drivers/usb/core/urb.c:503
 ...
 Call Trace:
  <TASK>
  usbnet_start_xmit+0x6be/0x2780 drivers/net/usb/usbnet.c:1467
  __netdev_start_xmit include/linux/netdevice.h:5002 [inline]
  netdev_start_xmit include/linux/netdevice.h:5011 [inline]
  xmit_one net/core/dev.c:3590 [inline]
  dev_hard_start_xmit+0x9a/0x7b0 net/core/dev.c:3606
  sch_direct_xmit+0x1ae/0xc30 net/sched/sch_generic.c:343
  __dev_xmit_skb net/core/dev.c:3827 [inline]
  __dev_queue_xmit+0x13d4/0x43e0 net/core/dev.c:4400
  dev_queue_xmit include/linux/netdevice.h:3168 [inline]
  neigh_resolve_output net/core/neighbour.c:1514 [inline]
  neigh_resolve_output+0x5bc/0x950 net/core/neighbour.c:1494
  neigh_output include/net/neighbour.h:539 [inline]
  ip6_finish_output2+0xb1b/0x2070 net/ipv6/ip6_output.c:141
  __ip6_finish_output net/ipv6/ip6_output.c:215 [inline]
  ip6_finish_output+0x3f9/0x1360 net/ipv6/ip6_output.c:226
  NF_HOOK_COND include/linux/netfilter.h:303 [inline]
  ip6_output+0x1f8/0x540 net/ipv6/ip6_output.c:247
  dst_output include/net/dst.h:450 [inline]
  NF_HOOK include/linux/netfilter.h:314 [inline]
  NF_HOOK include/linux/netfilter.h:308 [inline]
  mld_sendpack+0x9f0/0x11d0 net/ipv6/mcast.c:1819
  mld_send_cr net/ipv6/mcast.c:2120 [inline]
  mld_ifc_work+0x740/0xca0 net/ipv6/mcast.c:2651
  process_one_work+0x9c5/0x1ba0 kernel/workqueue.c:3229
  process_scheduled_works kernel/workqueue.c:3310 [inline]
  worker_thread+0x6c8/0xf00 kernel/workqueue.c:3391
  kthread+0x2c1/0x3a0 kernel/kthread.c:389
  ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
  ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
  </TASK>
Reported-by: syzbot+d693c07c6f647e0388d3@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=d693c07c6f647e0388d3 Fixes: 47ee3051c856 ("[PATCH] USB: usbnet (5/9) module for genesys gl620a cables") Cc: stable@vger.kernel.org Signed-off-by: Nikita Zhandarovich n.zhandarovich@fintech.ru Link: https://patch.msgid.link/20250224172919.1220522-1-n.zhandarovich@fintech.ru Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/usb/gl620a.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-)
--- a/drivers/net/usb/gl620a.c
+++ b/drivers/net/usb/gl620a.c
@@ -179,9 +179,7 @@ static int genelink_bind(struct usbnet *
 {
 	dev->hard_mtu = GL_RCV_BUF_SIZE;
 	dev->net->hard_header_len += 4;
-	dev->in = usb_rcvbulkpipe(dev->udev, dev->driver_info->in);
-	dev->out = usb_sndbulkpipe(dev->udev, dev->driver_info->out);
-	return 0;
+	return usbnet_get_endpoints(dev, intf);
 }
static const struct driver_info genelink_info = {
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wei Fang wei.fang@nxp.com
commit 39ab773e4c120f7f98d759415ccc2aca706bbc10 upstream.
When a DMA mapping error occurs while processing skb frags, it will free one more tx_swbd than expected, so fix this off-by-one issue.
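The off-by-one comes from the do/while shape of the cleanup loop: a do { ... } while (count--) body executes count + 1 times. A minimal standalone illustration (hypothetical free_one() helper):

    int count = 2;  /* two BDs were actually mapped */

    do {
        free_one();  /* runs with count == 2, 1, 0: three times */
    } while (count--);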
Fixes: d4fd0404c1c9 ("enetc: Introduce basic PF and VF ENETC ethernet drivers") Cc: stable@vger.kernel.org Suggested-by: Vladimir Oltean vladimir.oltean@nxp.com Suggested-by: Michal Swiatkowski michal.swiatkowski@linux.intel.com Signed-off-by: Wei Fang wei.fang@nxp.com Reviewed-by: Vladimir Oltean vladimir.oltean@nxp.com Reviewed-by: Claudiu Manoil claudiu.manoil@nxp.com Link: https://patch.msgid.link/20250224111251.1061098-2-wei.fang@nxp.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/freescale/enetc/enetc.c | 26 +++++++++++++++++++------- 1 file changed, 19 insertions(+), 7 deletions(-)
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -145,6 +145,24 @@ static int enetc_ptp_parse(struct sk_buf
 	return 0;
 }

+/**
+ * enetc_unwind_tx_frame() - Unwind the DMA mappings of a multi-buffer Tx frame
+ * @tx_ring: Pointer to the Tx ring on which the buffer descriptors are located
+ * @count: Number of Tx buffer descriptors which need to be unmapped
+ * @i: Index of the last successfully mapped Tx buffer descriptor
+ */
+static void enetc_unwind_tx_frame(struct enetc_bdr *tx_ring, int count, int i)
+{
+	while (count--) {
+		struct enetc_tx_swbd *tx_swbd = &tx_ring->tx_swbd[i];
+
+		enetc_free_tx_frame(tx_ring, tx_swbd);
+		if (i == 0)
+			i = tx_ring->bd_count;
+		i--;
+	}
+}
+
 static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb)
 {
 	bool do_vlan, do_onestep_tstamp = false, do_twostep_tstamp = false;
@@ -328,13 +346,7 @@ static int enetc_map_tx_buffs(struct ene
 dma_err:
 	dev_err(tx_ring->dev, "DMA map error");

-	do {
-		tx_swbd = &tx_ring->tx_swbd[i];
-		enetc_free_tx_frame(tx_ring, tx_swbd);
-		if (i == 0)
-			i = tx_ring->bd_count;
-		i--;
-	} while (count--);
+	enetc_unwind_tx_frame(tx_ring, count, i);

 	return 0;
 }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wei Fang wei.fang@nxp.com
commit da291996b16ebd10626d4b20288327b743aff110 upstream.
When creating a TSO header, if the skb is VLAN tagged, the extended BD will be used and 'count' should be increased by 2 instead of 1. Otherwise, when an error occurs, fewer tx_swbd entries will be freed than were actually used.
Fixes: fb8629e2cbfc ("net: enetc: add support for software TSO") Cc: stable@vger.kernel.org Suggested-by: Vladimir Oltean vladimir.oltean@nxp.com Signed-off-by: Wei Fang wei.fang@nxp.com Reviewed-by: Vladimir Oltean vladimir.oltean@nxp.com Reviewed-by: Claudiu Manoil claudiu.manoil@nxp.com Link: https://patch.msgid.link/20250224111251.1061098-3-wei.fang@nxp.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/freescale/enetc/enetc.c | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-)
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -351,14 +351,15 @@ dma_err:
 	return 0;
 }

-static void enetc_map_tx_tso_hdr(struct enetc_bdr *tx_ring, struct sk_buff *skb,
-				 struct enetc_tx_swbd *tx_swbd,
-				 union enetc_tx_bd *txbd, int *i, int hdr_len,
-				 int data_len)
+static int enetc_map_tx_tso_hdr(struct enetc_bdr *tx_ring, struct sk_buff *skb,
+				struct enetc_tx_swbd *tx_swbd,
+				union enetc_tx_bd *txbd, int *i, int hdr_len,
+				int data_len)
 {
 	union enetc_tx_bd txbd_tmp;
 	u8 flags = 0, e_flags = 0;
 	dma_addr_t addr;
+	int count = 1;

 	enetc_clear_tx_bd(&txbd_tmp);
 	addr = tx_ring->tso_headers_dma + *i * TSO_HEADER_SIZE;
@@ -401,7 +402,10 @@ static void enetc_map_tx_tso_hdr(struct
 		/* Write the BD */
 		txbd_tmp.ext.e_flags = e_flags;
 		*txbd = txbd_tmp;
+		count++;
 	}
+
+	return count;
 }

 static int enetc_map_tx_tso_data(struct enetc_bdr *tx_ring, struct sk_buff *skb,
@@ -533,9 +537,9 @@ static int enetc_map_tx_tso_buffs(struct

 	/* compute the csum over the L4 header */
 	csum = enetc_tso_hdr_csum(&tso, skb, hdr, hdr_len, &pos);
-	enetc_map_tx_tso_hdr(tx_ring, skb, tx_swbd, txbd, &i, hdr_len, data_len);
+	count += enetc_map_tx_tso_hdr(tx_ring, skb, tx_swbd, txbd,
+				      &i, hdr_len, data_len);
 	bd_data_num = 0;
-	count++;

 	while (data_len > 0) {
 		int size;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wei Fang wei.fang@nxp.com
commit bbcbc906ab7b5834c1219cd17a38d78dba904aa0 upstream.
There is an issue with one-step timestamp based on UDP/IP. The peer will discard the sync packet because of the wrong UDP checksum. For ENETC v1, the software needs to update the UDP checksum when updating the originTimestamp field, so that the hardware can correctly update the UDP checksum when updating the correction field. Otherwise, the UDP checksum in the sync packet will be wrong.
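The fix relies on the kernel's incremental checksum helpers, which adjust a 16-bit one's-complement checksum in place when a field changes instead of re-summing the whole packet. A sketch of the idea for one 32-bit field (mirroring what the diff below does; uh points at the UDP header):

    __be32 old_nsec = *(__be32 *)(data + offset2 + 6);
    __be32 new_nsec = htonl(nsec);

    /* patch uh->check with the delta between the old and new value */
    inet_proto_csum_replace4(&uh->check, skb, old_nsec, new_nsec, false);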
Fixes: 7294380c5211 ("enetc: support PTP Sync packet one-step timestamping") Cc: stable@vger.kernel.org Signed-off-by: Wei Fang wei.fang@nxp.com Reviewed-by: Vladimir Oltean vladimir.oltean@nxp.com Tested-by: Vladimir Oltean vladimir.oltean@nxp.com Link: https://patch.msgid.link/20250224111251.1061098-6-wei.fang@nxp.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/freescale/enetc/enetc.c | 41 ++++++++++++++++++++++----- 1 file changed, 34 insertions(+), 7 deletions(-)
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -253,9 +253,11 @@ static int enetc_map_tx_buffs(struct ene
 	}

 	if (do_onestep_tstamp) {
-		u32 lo, hi, val;
-		u64 sec, nsec;
+		__be32 new_sec_l, new_nsec;
+		u32 lo, hi, nsec, val;
+		__be16 new_sec_h;
 		u8 *data;
+		u64 sec;

 		lo = enetc_rd_hot(hw, ENETC_SICTR0);
 		hi = enetc_rd_hot(hw, ENETC_SICTR1);
@@ -269,13 +271,38 @@ static int enetc_map_tx_buffs(struct ene
 		/* Update originTimestamp field of Sync packet
 		 * - 48 bits seconds field
 		 * - 32 bits nanseconds field
+		 *
+		 * In addition, the UDP checksum needs to be updated
+		 * by software after updating originTimestamp field,
+		 * otherwise the hardware will calculate the wrong
+		 * checksum when updating the correction field and
+		 * update it to the packet.
 		 */
 		data = skb_mac_header(skb);
-		*(__be16 *)(data + offset2) =
-			htons((sec >> 32) & 0xffff);
-		*(__be32 *)(data + offset2 + 2) =
-			htonl(sec & 0xffffffff);
-		*(__be32 *)(data + offset2 + 6) = htonl(nsec);
+		new_sec_h = htons((sec >> 32) & 0xffff);
+		new_sec_l = htonl(sec & 0xffffffff);
+		new_nsec = htonl(nsec);
+		if (udp) {
+			struct udphdr *uh = udp_hdr(skb);
+			__be32 old_sec_l, old_nsec;
+			__be16 old_sec_h;
+
+			old_sec_h = *(__be16 *)(data + offset2);
+			inet_proto_csum_replace2(&uh->check, skb, old_sec_h,
+						 new_sec_h, false);
+
+			old_sec_l = *(__be32 *)(data + offset2 + 2);
+			inet_proto_csum_replace4(&uh->check, skb, old_sec_l,
+						 new_sec_l, false);
+
+			old_nsec = *(__be32 *)(data + offset2 + 6);
+			inet_proto_csum_replace4(&uh->check, skb, old_nsec,
+						 new_nsec, false);
+		}
+
+		*(__be16 *)(data + offset2) = new_sec_h;
+		*(__be32 *)(data + offset2 + 2) = new_sec_l;
+		*(__be32 *)(data + offset2 + 6) = new_nsec;

 		/* Configure single-step register */
 		val = ENETC_PM0_SINGLE_STEP_EN;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wei Fang wei.fang@nxp.com
commit 432a2cb3ee97a7c6ea578888fe81baad035b9307 upstream.
The 'xdp_tx' is used to count the number of XDP_TX frames sent, not the number of Tx BDs.
Fixes: 7ed2bc80074e ("net: enetc: add support for XDP_TX") Cc: stable@vger.kernel.org Signed-off-by: Wei Fang wei.fang@nxp.com Reviewed-by: Ioana Ciornei ioana.ciornei@nxp.com Reviewed-by: Vladimir Oltean vladimir.oltean@nxp.com Link: https://patch.msgid.link/20250224111251.1061098-4-wei.fang@nxp.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/freescale/enetc/enetc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -1669,7 +1669,7 @@ static int enetc_clean_rx_ring_xdp(struc
 			enetc_xdp_drop(rx_ring, orig_i, i);
 			tx_ring->stats.xdp_tx_drops++;
 		} else {
-			tx_ring->stats.xdp_tx += xdp_tx_bd_cnt;
+			tx_ring->stats.xdp_tx++;
 			rx_ring->xdp.xdp_tx_in_flight += xdp_tx_bd_cnt;
 			xdp_tx_frm_cnt++;
 			/* The XDP_TX enqueue was successful, so we
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wei Fang wei.fang@nxp.com
commit 249df695c3ffe8c8d36d46c2580ce72410976f96 upstream.
There is an off-by-one issue in the err_chained_bd path: it frees one more tx_swbd than expected. There is no such issue in the err_map_data path. To fix the off-by-one and make the two error paths consistent, keep the increments of 'i' and 'count' in sync and call enetc_unwind_tx_frame() for the error handling.
Fixes: fb8629e2cbfc ("net: enetc: add support for software TSO") Cc: stable@vger.kernel.org Suggested-by: Vladimir Oltean vladimir.oltean@nxp.com Signed-off-by: Wei Fang wei.fang@nxp.com Reviewed-by: Vladimir Oltean vladimir.oltean@nxp.com Reviewed-by: Claudiu Manoil claudiu.manoil@nxp.com Link: https://patch.msgid.link/20250224111251.1061098-9-wei.fang@nxp.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/freescale/enetc/enetc.c | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-)
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -590,8 +590,13 @@ static int enetc_map_tx_tso_buffs(struct
 		err = enetc_map_tx_tso_data(tx_ring, skb, tx_swbd, txbd,
 					    tso.data, size,
 					    size == data_len);
-		if (err)
+		if (err) {
+			if (i == 0)
+				i = tx_ring->bd_count;
+			i--;
+
 			goto err_map_data;
+		}

 		data_len -= size;
 		count++;
@@ -620,13 +625,7 @@ err_map_data:
 	dev_err(tx_ring->dev, "DMA map error");

 err_chained_bd:
-	do {
-		tx_swbd = &tx_ring->tx_swbd[i];
-		enetc_free_tx_frame(tx_ring, tx_swbd);
-		if (i == 0)
-			i = tx_ring->bd_count;
-		i--;
-	} while (count--);
+	enetc_unwind_tx_frame(tx_ring, count, i);

 	return 0;
 }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: BH Hsieh bhsieh@nvidia.com
commit 55f1a5f7c97c3c92ba469e16991a09274410ceb7 upstream.
It was observed that VBUS_OVERRIDE & ID_OVERRIDE might be programmed with unexpected values before the XUSB PADCTL driver takes over; this can also occur in virtualization scenarios.

For example, UEFI firmware programs ID_OVERRIDE=GROUNDED to set a type-c port to host mode and keeps that value across the handoff to the kernel. If the type-c port is connected to a USB host, the errors below can be observed right after the USB host mode driver gets probed. The errors persist until the USB role class driver detects the type-c port as device mode and notifies the USB device mode driver to set both ID_OVERRIDE and VBUS_OVERRIDE to correct values via the XUSB PADCTL driver.
 [ 173.765814] usb usb3-port2: Cannot enable. Maybe the USB cable is bad?
 [ 173.765837] usb usb3-port2: config error
Taking virtualization into account, asserting an XUSB PADCTL reset would break XUSB functions used by other guest OSes, hence only reset the VBUS & ID OVERRIDE of the port in utmi_phy_init().
Fixes: bbf711682cd5 ("phy: tegra: xusb: Add Tegra186 support") Cc: stable@vger.kernel.org Change-Id: Ic63058d4d49b4a1f8f9ab313196e20ad131cc591 Signed-off-by: BH Hsieh bhsieh@nvidia.com Signed-off-by: Henry Lin henryl@nvidia.com Link: https://lore.kernel.org/r/20250122105943.8057-1-henryl@nvidia.com Signed-off-by: Vinod Koul vkoul@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/phy/tegra/xusb-tegra186.c | 11 +++++++++++ 1 file changed, 11 insertions(+)
--- a/drivers/phy/tegra/xusb-tegra186.c
+++ b/drivers/phy/tegra/xusb-tegra186.c
@@ -928,6 +928,7 @@ static int tegra186_utmi_phy_init(struct
 	unsigned int index = lane->index;
 	struct device *dev = padctl->dev;
 	int err;
+	u32 reg;

 	port = tegra_xusb_find_usb2_port(padctl, index);
 	if (!port) {
@@ -935,6 +936,16 @@ static int tegra186_utmi_phy_init(struct
 		return -ENODEV;
 	}

+	if (port->mode == USB_DR_MODE_OTG ||
+	    port->mode == USB_DR_MODE_PERIPHERAL) {
+		/* reset VBUS&ID OVERRIDE */
+		reg = padctl_readl(padctl, USB2_VBUS_ID);
+		reg &= ~VBUS_OVERRIDE;
+		reg &= ~ID_OVERRIDE(~0);
+		reg |= ID_OVERRIDE_FLOATING;
+		padctl_writel(padctl, reg, USB2_VBUS_ID);
+	}
+
 	if (port->supply && port->mode == USB_DR_MODE_HOST) {
 		err = regulator_enable(port->supply);
 		if (err) {
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kaustabh Chakraborty kauschluss@disroot.org
commit e2158c953c973adb49383ddea2504faf08d375b7 upstream.
In exynos5_usbdrd_{pipe3,utmi}_set_refclk(), the masks PHYCLKRST_MPLL_MULTIPLIER_MASK and PHYCLKRST_SSC_REFCLKSEL_MASK are not inverted when applied to the register values. Fix it.
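The bug is pure operator precedence: '~' binds tighter than '|', so the original expression cleared the wrong set of bits. A worked example with made-up mask values:

    #define MASK_A 0x000000f0  /* hypothetical field masks */
    #define MASK_B 0x00000f00
    #define MASK_C 0x0000f000

    u32 reg = 0xffffffff;

    /* buggy: evaluates as reg &= ((~MASK_A) | MASK_B | MASK_C) */
    reg &= ~MASK_A | MASK_B | MASK_C;    /* mask = 0xffffff0f */
    /* only the MASK_A field is cleared; the B and C fields survive */

    /* fixed: */
    reg &= ~(MASK_A | MASK_B | MASK_C);  /* mask = 0xffff000f */
    /* all three fields are cleared, as intended */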
Cc: stable@vger.kernel.org Fixes: 59025887fb08 ("phy: Add new Exynos5 USB 3.0 PHY driver") Signed-off-by: Kaustabh Chakraborty kauschluss@disroot.org Reviewed-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org Reviewed-by: Anand Moon linux.amoon@gmail.com Link: https://lore.kernel.org/r/20250209-exynos5-usbdrd-masks-v1-1-4f7f83f323d7@di... Signed-off-by: Vinod Koul vkoul@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/phy/samsung/phy-exynos5-usbdrd.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-)
--- a/drivers/phy/samsung/phy-exynos5-usbdrd.c
+++ b/drivers/phy/samsung/phy-exynos5-usbdrd.c
@@ -319,9 +319,9 @@ exynos5_usbdrd_pipe3_set_refclk(struct p
 	reg |= PHYCLKRST_REFCLKSEL_EXT_REFCLK;

 	/* FSEL settings corresponding to reference clock */
-	reg &= ~PHYCLKRST_FSEL_PIPE_MASK |
-		PHYCLKRST_MPLL_MULTIPLIER_MASK |
-		PHYCLKRST_SSC_REFCLKSEL_MASK;
+	reg &= ~(PHYCLKRST_FSEL_PIPE_MASK |
+		 PHYCLKRST_MPLL_MULTIPLIER_MASK |
+		 PHYCLKRST_SSC_REFCLKSEL_MASK);
 	switch (phy_drd->extrefclk) {
 	case EXYNOS5_FSEL_50MHZ:
 		reg |= (PHYCLKRST_MPLL_MULTIPLIER_50M_REF |
@@ -363,9 +363,9 @@ exynos5_usbdrd_utmi_set_refclk(struct ph
 	reg &= ~PHYCLKRST_REFCLKSEL_MASK;
 	reg |= PHYCLKRST_REFCLKSEL_EXT_REFCLK;

-	reg &= ~PHYCLKRST_FSEL_UTMI_MASK |
-		PHYCLKRST_MPLL_MULTIPLIER_MASK |
-		PHYCLKRST_SSC_REFCLKSEL_MASK;
+	reg &= ~(PHYCLKRST_FSEL_UTMI_MASK |
+		 PHYCLKRST_MPLL_MULTIPLIER_MASK |
+		 PHYCLKRST_SSC_REFCLKSEL_MASK);
 	reg |= PHYCLKRST_FSEL(phy_drd->extrefclk);

 	return reg;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paolo Abeni pabeni@redhat.com
commit f865c24bc55158313d5779fc81116023a6940ca3 upstream.
Syzkaller reported a lockdep splat in the PM control path:
 WARNING: CPU: 0 PID: 6693 at ./include/net/sock.h:1711 sock_owned_by_me include/net/sock.h:1711 [inline]
 WARNING: CPU: 0 PID: 6693 at ./include/net/sock.h:1711 msk_owned_by_me net/mptcp/protocol.h:363 [inline]
 WARNING: CPU: 0 PID: 6693 at ./include/net/sock.h:1711 mptcp_pm_nl_addr_send_ack+0x57c/0x610 net/mptcp/pm_netlink.c:788
 Modules linked in:
 CPU: 0 UID: 0 PID: 6693 Comm: syz.0.205 Not tainted 6.14.0-rc2-syzkaller-00303-gad1b832bf1cf #0
 Hardware name: Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
 RIP: 0010:sock_owned_by_me include/net/sock.h:1711 [inline]
 RIP: 0010:msk_owned_by_me net/mptcp/protocol.h:363 [inline]
 RIP: 0010:mptcp_pm_nl_addr_send_ack+0x57c/0x610 net/mptcp/pm_netlink.c:788
 Code: 5b 41 5c 41 5d 41 5e 41 5f 5d c3 cc cc cc cc e8 ca 7b d3 f5 eb b9 e8 c3 7b d3 f5 90 0f 0b 90 e9 dd fb ff ff e8 b5 7b d3 f5 90 <0f> 0b 90 e9 3e fb ff ff 44 89 f1 80 e1 07 38 c1 0f 8c eb fb ff ff
 RSP: 0000:ffffc900034f6f60 EFLAGS: 00010283
 RAX: ffffffff8bee3c2b RBX: 0000000000000001 RCX: 0000000000080000
 RDX: ffffc90004d42000 RSI: 000000000000a407 RDI: 000000000000a408
 RBP: ffffc900034f7030 R08: ffffffff8bee37f6 R09: 0100000000000000
 R10: dffffc0000000000 R11: ffffed100bcc62e4 R12: ffff88805e6316e0
 R13: ffff88805e630c00 R14: dffffc0000000000 R15: ffff88805e630c00
 FS:  00007f7e9a7e96c0(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
 CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: 0000001b2fd18ff8 CR3: 0000000032c24000 CR4: 00000000003526f0
 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
 Call Trace:
  <TASK>
  mptcp_pm_remove_addr+0x103/0x1d0 net/mptcp/pm.c:59
  mptcp_pm_remove_anno_addr+0x1f4/0x2f0 net/mptcp/pm_netlink.c:1486
  mptcp_nl_remove_subflow_and_signal_addr net/mptcp/pm_netlink.c:1518 [inline]
  mptcp_pm_nl_del_addr_doit+0x118d/0x1af0 net/mptcp/pm_netlink.c:1629
  genl_family_rcv_msg_doit net/netlink/genetlink.c:1115 [inline]
  genl_family_rcv_msg net/netlink/genetlink.c:1195 [inline]
  genl_rcv_msg+0xb1f/0xec0 net/netlink/genetlink.c:1210
  netlink_rcv_skb+0x206/0x480 net/netlink/af_netlink.c:2543
  genl_rcv+0x28/0x40 net/netlink/genetlink.c:1219
  netlink_unicast_kernel net/netlink/af_netlink.c:1322 [inline]
  netlink_unicast+0x7f6/0x990 net/netlink/af_netlink.c:1348
  netlink_sendmsg+0x8de/0xcb0 net/netlink/af_netlink.c:1892
  sock_sendmsg_nosec net/socket.c:718 [inline]
  __sock_sendmsg+0x221/0x270 net/socket.c:733
  ____sys_sendmsg+0x53a/0x860 net/socket.c:2573
  ___sys_sendmsg net/socket.c:2627 [inline]
  __sys_sendmsg+0x269/0x350 net/socket.c:2659
  do_syscall_x64 arch/x86/entry/common.c:52 [inline]
  do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
  entry_SYSCALL_64_after_hwframe+0x77/0x7f
 RIP: 0033:0x7f7e9998cde9
 Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
 RSP: 002b:00007f7e9a7e9038 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
 RAX: ffffffffffffffda RBX: 00007f7e99ba5fa0 RCX: 00007f7e9998cde9
 RDX: 000000002000c094 RSI: 0000400000000000 RDI: 0000000000000007
 RBP: 00007f7e99a0e2a0 R08: 0000000000000000 R09: 0000000000000000
 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
 R13: 0000000000000000 R14: 00007f7e99ba5fa0 R15: 00007fff49231088
Indeed the PM can try to send an RM_ADDR over an msk without first acquiring the msk socket lock.
The buggy code path comes from an early optimization: when there are no subflows, the PM should (usually) not send RM_ADDR notifications.

That assumption is incorrect: without the lock, another process could concurrently create a new subflow and trigger the RM_ADDR generation.
Additionally the supposed optimization is not very effective even performance-wise, as most mptcp sockets should have at least one subflow: the MPC one.
Address the issue removing the buggy code path, the existing "slow-path" will handle correctly even the edge case.
Fixes: b6c08380860b ("mptcp: remove addr and subflow in PM netlink") Cc: stable@vger.kernel.org Reported-by: syzbot+cd3ce3d03a3393ae9700@syzkaller.appspotmail.com Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/546 Signed-off-by: Paolo Abeni pabeni@redhat.com Reviewed-by: Matthieu Baerts (NGI0) matttbe@kernel.org Signed-off-by: Matthieu Baerts (NGI0) matttbe@kernel.org Link: https://patch.msgid.link/20250224-net-mptcp-misc-fixes-v1-1-f550f636b435@ker... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/mptcp/pm_netlink.c | 5 ----- 1 file changed, 5 deletions(-)
--- a/net/mptcp/pm_netlink.c
+++ b/net/mptcp/pm_netlink.c
@@ -1559,11 +1559,6 @@ static int mptcp_nl_remove_subflow_and_s
 		if (mptcp_pm_is_userspace(msk))
 			goto next;

-		if (list_empty(&msk->conn_list)) {
-			mptcp_pm_remove_anno_addr(msk, addr, false);
-			goto next;
-		}
-
 		lock_sock(sk);
 		remove_subflow = lookup_subflow_by_saddr(&msk->conn_list, addr);
 		mptcp_pm_remove_anno_addr(msk, addr, remove_subflow &&
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Matthieu Baerts (NGI0) matttbe@kernel.org
commit 8668860b0ad32a13fcd6c94a0995b7aa7638c9ef upstream.
Before this patch, if the checksum was not used, the subflow was only reset if map_data_len was != 0. If there were no MPTCP options or an invalid mapping, map_data_len was not set to the data len, and then the subflow was not reset as it should have been, leaving the MPTCP connection in a wrong fallback mode.
This map_data_len condition was introduced to handle the reception of the infinite mapping. Instead, a new dedicated mapping error could have been returned and treated as a special case. However, commit 31bf11de146c ("mptcp: introduce MAPPING_BAD_CSUM") was introduced by Paolo Abeni soon after, and backported later on to stable. It better handles the csum case, and it means the exception for valid_csum_seen in subflow_can_fallback(), plus this one for the infinite mapping in subflow_check_data_avail(), are no longer needed.
In other words, the code can be simplified there: a fallback should only be done if msk->allow_infinite_fallback is set. This boolean is set to false once MPTCP-specific operations acting on the whole MPTCP connection vs the initial path have been done, e.g. a second path has been created, or an MPTCP re-injection -- yes, possible even with a single subflow. The subflow_can_fallback() helper can then be dropped, and replaced by this single condition.
This also makes the code clearer: a fallback should only be done if it is possible to do so.
While at it, no need to set map_data_len to 0 in get_mapping_status() for the infinite mapping case: it will be set to skb->len just after, at the end of subflow_check_data_avail(), and not read in between.
Fixes: f8d4bcacff3b ("mptcp: infinite mapping receiving") Cc: stable@vger.kernel.org Reported-by: Chester A. Unal chester.a.unal@xpedite-tech.com Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/544 Acked-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Matthieu Baerts (NGI0) matttbe@kernel.org Tested-by: Chester A. Unal chester.a.unal@xpedite-tech.com Link: https://patch.msgid.link/20250224-net-mptcp-misc-fixes-v1-2-f550f636b435@ker... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/mptcp/subflow.c | 15 +-------------- 1 file changed, 1 insertion(+), 14 deletions(-)
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -1109,7 +1109,6 @@ static enum mapping_status get_mapping_s
 	if (data_len == 0) {
 		pr_debug("infinite mapping received\n");
 		MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPRX);
-		subflow->map_data_len = 0;
 		return MAPPING_INVALID;
 	}

@@ -1251,18 +1250,6 @@ static void subflow_sched_work_if_closed
 		mptcp_schedule_work(sk);
 }

-static bool subflow_can_fallback(struct mptcp_subflow_context *subflow)
-{
-	struct mptcp_sock *msk = mptcp_sk(subflow->conn);
-
-	if (subflow->mp_join)
-		return false;
-	else if (READ_ONCE(msk->csum_enabled))
-		return !subflow->valid_csum_seen;
-	else
-		return READ_ONCE(msk->allow_infinite_fallback);
-}
-
 static void mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
@@ -1358,7 +1345,7 @@ fallback:
 		return true;
 	}

-	if (!subflow_can_fallback(subflow) && subflow->map_data_len) {
+	if (!READ_ONCE(msk->allow_infinite_fallback)) {
 		/* fatal protocol error, close the socket.
 		 * subflow_error_report() will introduce the appropriate barriers
 		 */
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ard Biesheuvel ardb@kernel.org
commit 68f3ea7ee199ef77551e090dfef5a49046ea8443 upstream.
In the kernel, there are architectures (x86, arm64) that perform boot-time relocation (for KASLR) without relying on PIE codegen. In this case, all const global objects are emitted into .rodata, including const objects with fields that will be fixed up by the boot-time relocation code. This implies that .rodata (and .text in some cases) need to be writable at boot, but they will usually be mapped read-only as soon as the boot completes.
When using PIE codegen, the compiler will emit const global objects into .data.rel.ro rather than .rodata if the object contains fields that need such fixups at boot-time. This permits the linker to annotate such regions as requiring read-write access only at load time, but not at execution time (in user space), while keeping .rodata truly const (in user space, this is important for reducing the CoW footprint of dynamic executables).
This distinction does not matter for the kernel, but it does imply that const data will end up in writable memory if the .data.rel.ro sections are not treated in a special way, as they will end up in the writable .data segment by default.
So emit .data.rel.ro into the .rodata segment.
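To illustrate, a const object whose initializer contains an address is the kind of thing that changes sections under PIE (hypothetical example):

    static int default_handler(void) { return 0; }

    struct ops {
        int (*handler)(void);
    };

    /*
     * 'handler' needs a load-time fixup. Non-PIE kernels place my_ops
     * in .rodata and patch it during boot-time relocation; PIE codegen
     * emits it into .data.rel.ro instead, which this patch now folds
     * into the .rodata segment.
     */
    static const struct ops my_ops = { .handler = default_handler };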
Cc: stable@vger.kernel.org Signed-off-by: Ard Biesheuvel ardb@kernel.org Link: https://lore.kernel.org/r/20250221135704.431269-5-ardb+git@google.com Signed-off-by: Josh Poimboeuf jpoimboe@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/asm-generic/vmlinux.lds.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -445,7 +445,7 @@
 		. = ALIGN((align));					\
 	.rodata           : AT(ADDR(.rodata) - LOAD_OFFSET) {		\
 		__start_rodata = .;					\
-		*(.rodata) *(.rodata.*)					\
+		*(.rodata) *(.rodata.*) *(.data.rel.ro*)		\
 		SCHED_DATA						\
 		RO_AFTER_INIT_DATA	/* Read only after init */	\
 		. = ALIGN(8);						\
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit b9a49520679e98700d3d89689cc91c08a1c88c1d upstream.
Kernel test robot reported an "imbalanced put" in the rcuref_put() slow path, which turned out to be a false positive. Consider the following race:
 ref = 0 (via rcuref_init(ref, 1))

 T1                                 T2
 rcuref_put(ref)
 -> atomic_add_negative_release(-1, ref) # ref -> 0xffffffff
 -> rcuref_put_slowpath(ref)
                                    rcuref_get(ref)
                                    -> atomic_add_negative_relaxed(1, &ref->refcnt)
                                    -> return true; # ref -> 0

                                    rcuref_put(ref)
                                    -> atomic_add_negative_release(-1, ref) # ref -> 0xffffffff
                                    -> rcuref_put_slowpath()

 -> cnt = atomic_read(&ref->refcnt); # cnt -> 0xffffffff / RCUREF_NOREF
 -> atomic_try_cmpxchg_release(&ref->refcnt, &cnt, RCUREF_DEAD)) # ref -> 0xe0000000 / RCUREF_DEAD
 -> return true
                                    -> cnt = atomic_read(&ref->refcnt); # cnt -> 0xe0000000 / RCUREF_DEAD
                                    -> if (cnt > RCUREF_RELEASED) # 0xe0000000 > 0xc0000000
                                    -> WARN_ONCE(cnt >= RCUREF_RELEASED, "rcuref - imbalanced put()")
The problem is the additional read in the slow path (after it decremented to RCUREF_NOREF) which can happen after the counter has been marked RCUREF_DEAD.
Prevent this by reusing the return value of the decrement. Now every "final" put uses RCUREF_NOREF in the slow path and attempts the final cmpxchg() to RCUREF_DEAD.
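Simplified, the change (full diff below) is:

    /* before: the slowpath re-read the counter, racing with a
     * concurrent get()/put() pair that can set RCUREF_DEAD in between */
    cnt = atomic_read(&ref->refcnt);

    /* after: the value produced by the fastpath decrement is reused,
     * so the final put always sees RCUREF_NOREF in the slowpath */
    cnt = atomic_sub_return_release(1, &ref->refcnt);
    if (cnt < 0)
        return rcuref_put_slowpath(ref, cnt);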
[ bigeasy: Add changelog ]
Fixes: ee1ee6db07795 ("atomics: Provide rcuref - scalable reference counting") Reported-by: kernel test robot oliver.sang@intel.com Debugged-by: Sebastian Andrzej Siewior bigeasy@linutronix.de Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Sebastian Andrzej Siewior bigeasy@linutronix.de Signed-off-by: Thomas Gleixner tglx@linutronix.de Reviewed-by: Sebastian Andrzej Siewior bigeasy@linutronix.de Cc: stable@vger.kernel.org Closes: https://lore.kernel.org/oe-lkp/202412311453.9d7636a2-lkp@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/linux/rcuref.h | 9 ++++++--- lib/rcuref.c | 5 ++--- 2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/include/linux/rcuref.h b/include/linux/rcuref.h
index 2c8bfd0f1b6b..6322d8c1c6b4 100644
--- a/include/linux/rcuref.h
+++ b/include/linux/rcuref.h
@@ -71,27 +71,30 @@ static inline __must_check bool rcuref_get(rcuref_t *ref)
 	return rcuref_get_slowpath(ref);
 }

-extern __must_check bool rcuref_put_slowpath(rcuref_t *ref);
+extern __must_check bool rcuref_put_slowpath(rcuref_t *ref, unsigned int cnt);

 /*
  * Internal helper. Do not invoke directly.
  */
 static __always_inline __must_check bool __rcuref_put(rcuref_t *ref)
 {
+	int cnt;
+
 	RCU_LOCKDEP_WARN(!rcu_read_lock_held() && preemptible(),
 			 "suspicious rcuref_put_rcusafe() usage");
 	/*
 	 * Unconditionally decrease the reference count. The saturation and
 	 * dead zones provide enough tolerance for this.
 	 */
-	if (likely(!atomic_add_negative_release(-1, &ref->refcnt)))
+	cnt = atomic_sub_return_release(1, &ref->refcnt);
+	if (likely(cnt >= 0))
 		return false;

 	/*
 	 * Handle the last reference drop and cases inside the saturation
 	 * and dead zones.
 	 */
-	return rcuref_put_slowpath(ref);
+	return rcuref_put_slowpath(ref, cnt);
 }

 /**
diff --git a/lib/rcuref.c b/lib/rcuref.c
index 97f300eca927..5bd726b71e39 100644
--- a/lib/rcuref.c
+++ b/lib/rcuref.c
@@ -220,6 +220,7 @@ EXPORT_SYMBOL_GPL(rcuref_get_slowpath);
 /**
  * rcuref_put_slowpath - Slowpath of __rcuref_put()
  * @ref:	Pointer to the reference count
+ * @cnt:	The resulting value of the fastpath decrement
  *
  * Invoked when the reference count is outside of the valid zone.
  *
@@ -233,10 +234,8 @@ EXPORT_SYMBOL_GPL(rcuref_get_slowpath);
  * with a concurrent get()/put() pair. Caller is not allowed to
  * deconstruct the protected object.
  */
-bool rcuref_put_slowpath(rcuref_t *ref)
+bool rcuref_put_slowpath(rcuref_t *ref, unsigned int cnt)
 {
-	unsigned int cnt = atomic_read(&ref->refcnt);
-
 	/* Did this drop the last reference? */
 	if (likely(cnt == RCUREF_NOREF)) {
 		/*
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 82c387ef7568c0d96a918a5a78d9cad6256cfa15 upstream.
David reported a warning observed while loop testing kexec jump:
Interrupts enabled after irqrouter_resume+0x0/0x50
WARNING: CPU: 0 PID: 560 at drivers/base/syscore.c:103 syscore_resume+0x18a/0x220
 kernel_kexec+0xf6/0x180
 __do_sys_reboot+0x206/0x250
 do_syscall_64+0x95/0x180
The corresponding interrupt flag trace:
hardirqs last enabled at (15573): [<ffffffffa8281b8e>] __up_console_sem+0x7e/0x90
hardirqs last disabled at (15580): [<ffffffffa8281b73>] __up_console_sem+0x63/0x90
That means __up_console_sem() was invoked with interrupts enabled. Further instrumentation revealed that in the interrupt disabled section of kexec jump one of the syscore_suspend() callbacks woke up a task, which set the NEED_RESCHED flag. A later callback in the resume path invoked cond_resched() which in turn led to the invocation of the scheduler:
 __cond_resched+0x21/0x60
 down_timeout+0x18/0x60
 acpi_os_wait_semaphore+0x4c/0x80
 acpi_ut_acquire_mutex+0x3d/0x100
 acpi_ns_get_node+0x27/0x60
 acpi_ns_evaluate+0x1cb/0x2d0
 acpi_rs_set_srs_method_data+0x156/0x190
 acpi_pci_link_set+0x11c/0x290
 irqrouter_resume+0x54/0x60
 syscore_resume+0x6a/0x200
 kernel_kexec+0x145/0x1c0
 __do_sys_reboot+0xeb/0x240
 do_syscall_64+0x95/0x180
This is a long standing problem, which probably got more visible with the recent printk changes. Something does a task wakeup and the scheduler sets the NEED_RESCHED flag. cond_resched() sees it set and invokes schedule() from a completely bogus context. The scheduler enables interrupts after context switching, which causes the above warning at the end.
Quite some of the code paths in syscore_suspend()/resume() can result in triggering a wakeup with the exactly same consequences. They might not have done so yet, but as they share a lot of code with normal operations it's just a question of time.
The problem only affects the PREEMPT_NONE and PREEMPT_VOLUNTARY scheduling models. Full preemption is not affected as cond_resched() is disabled and the preemption check preemptible() takes the interrupt disabled flag into account.
Cure the problem by adding a corresponding check into cond_resched().
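For illustration only, a condensed sketch of the reported sequence (real function names, heavily simplified flow; not code from this patch):

  /* kexec jump runs the syscore callbacks with interrupts disabled */
  local_irq_disable();
  syscore_suspend();    /* a callback wakes a task -> NEED_RESCHED set   */
  /* ... kexec jump ... */
  syscore_resume();     /* a callback ends up in cond_resched():         */
                        /* before the fix: schedule() runs and the       */
                        /* context switch re-enables interrupts;         */
                        /* after the fix: the irqs_disabled() check      */
                        /* keeps cond_resched() from scheduling here     */
  local_irq_enable();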
Reported-by: David Woodhouse dwmw@amazon.co.uk Suggested-by: Peter Zijlstra peterz@infradead.org Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Ingo Molnar mingo@kernel.org Tested-by: David Woodhouse dwmw@amazon.co.uk Cc: Linus Torvalds torvalds@linux-foundation.org Cc: stable@vger.kernel.org Closes: https://lore.kernel.org/all/7717fe2ac0ce5f0a2c43fdab8b11f4483d54a2a4.camel@i... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/sched/core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -8561,7 +8561,7 @@ SYSCALL_DEFINE0(sched_yield) #if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC) int __sched __cond_resched(void) { - if (should_resched(0)) { + if (should_resched(0) && !irqs_disabled()) { preempt_schedule_common(); return 1; }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Stafford Horne shorne@gmail.com
commit 713e788c0e07e185fd44dd581f74855ef149722f upstream.
When working on OpenRISC support for restartable sequences I noticed and fixed these two issues with the riscv support bits.
1. The 'inc' argument to RSEQ_ASM_OP_R_DEREF_ADDV was being implicitly
   passed to the macro. Fix this by adding 'inc' to the list of macro
   arguments.

2. The inline asm input constraints for 'inc' and 'off' use "er". The
   riscv gcc port does not have an "e" constraint; this looks to be
   copied from the x86 port. Fix this by just using an "r" constraint.
I have compile tested this only for riscv. However, I use the same fixes in the OpenRISC rseq selftests, and everything passes there with no issues.
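As a standalone illustration of the constraint difference (this example is not part of the patch): "e" is an x86-specific constraint for a 32-bit sign-extended immediate, while "r" simply requests a general-purpose register and is available on every gcc port:

  long total = 0, inc = 8;

  /* riscv: the operand must live in a register, so "r" is correct */
  asm volatile("add %0, %0, %1" : "+r" (total) : "r" (inc));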
Fixes: 171586a6ab66 ("selftests/rseq: riscv: Template memory ordering and percpu access mode") Signed-off-by: Stafford Horne shorne@gmail.com Tested-by: Charlie Jenkins charlie@rivosinc.com Reviewed-by: Charlie Jenkins charlie@rivosinc.com Reviewed-by: Mathieu Desnoyers mathieu.desnoyers@efficios.com Acked-by: Shuah Khan skhan@linuxfoundation.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20250114170721.3613280-1-shorne@gmail.com Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- tools/testing/selftests/rseq/rseq-riscv-bits.h | 6 +++--- tools/testing/selftests/rseq/rseq-riscv.h | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-)
--- a/tools/testing/selftests/rseq/rseq-riscv-bits.h +++ b/tools/testing/selftests/rseq/rseq-riscv-bits.h @@ -243,7 +243,7 @@ int RSEQ_TEMPLATE_IDENTIFIER(rseq_offset #ifdef RSEQ_COMPARE_TWICE RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]") #endif - RSEQ_ASM_OP_R_DEREF_ADDV(ptr, off, 3) + RSEQ_ASM_OP_R_DEREF_ADDV(ptr, off, inc, 3) RSEQ_INJECT_ASM(4) RSEQ_ASM_DEFINE_ABORT(4, abort) : /* gcc asm goto does not allow outputs */ @@ -251,8 +251,8 @@ int RSEQ_TEMPLATE_IDENTIFIER(rseq_offset [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD), [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), [ptr] "r" (ptr), - [off] "er" (off), - [inc] "er" (inc) + [off] "r" (off), + [inc] "r" (inc) RSEQ_INJECT_INPUT : "memory", RSEQ_ASM_TMP_REG_1 RSEQ_INJECT_CLOBBER --- a/tools/testing/selftests/rseq/rseq-riscv.h +++ b/tools/testing/selftests/rseq/rseq-riscv.h @@ -158,7 +158,7 @@ do { \ "bnez " RSEQ_ASM_TMP_REG_1 ", 222b\n" \ "333:\n"
-#define RSEQ_ASM_OP_R_DEREF_ADDV(ptr, off, post_commit_label) \ +#define RSEQ_ASM_OP_R_DEREF_ADDV(ptr, off, inc, post_commit_label) \ "mv " RSEQ_ASM_TMP_REG_1 ", %[" __rseq_str(ptr) "]\n" \ RSEQ_ASM_OP_R_ADD(off) \ REG_L RSEQ_ASM_TMP_REG_1 ", 0(" RSEQ_ASM_TMP_REG_1 ")\n" \
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andreas Schwab schwab@suse.de
commit 599c44cd21f4967774e0acf58f734009be4aea9a upstream.
Make sure the compare value in the lr/sc loop is sign extended to match what lr.w does. Fortunately, due to the compiler keeping the register contents sign extended anyway, the lack of the explicit extension didn't result in wrong code so far, but this cannot be relied upon.
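A minimal standalone illustration of the mismatch (not part of the patch): lr.w sign-extends the loaded 32-bit value into the 64-bit register, so the compare value has to be widened the same way or the comparison in the lr/sc loop can spuriously fail:

  unsigned int oldval = 0x80000000u;

  long cmp_bad  = (long)oldval;       /* zero-extended: 0x0000000080000000 */
  long cmp_good = (long)(int)oldval;  /* sign-extended: 0xffffffff80000000,
                                         matching what lr.w produces */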
Fixes: b90edb33010b ("RISC-V: Add futex support.") Signed-off-by: Andreas Schwab schwab@suse.de Reviewed-by: Alexandre Ghiti alexghiti@rivosinc.com Reviewed-by: Björn Töpel bjorn@rivosinc.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/mvmfrkv2vhz.fsf@suse.de Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/riscv/include/asm/futex.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/riscv/include/asm/futex.h +++ b/arch/riscv/include/asm/futex.h @@ -93,7 +93,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, _ASM_EXTABLE_UACCESS_ERR(1b, 3b, %[r]) \ _ASM_EXTABLE_UACCESS_ERR(2b, 3b, %[r]) \ : [r] "+r" (ret), [v] "=&r" (val), [u] "+m" (*uaddr), [t] "=&r" (tmp) - : [ov] "Jr" (oldval), [nv] "Jr" (newval) + : [ov] "Jr" ((long)(int)oldval), [nv] "Jr" (newval) : "memory"); __disable_user_access();
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yong-Xuan Wang yongxuan.wang@sifive.com
commit aa49bc2ca8524186ceb0811c23a7f00c3dea6987 upstream.
The signal context of certain RISC-V extensions will be appended after struct __riscv_extra_ext_header, which already includes an empty context header. Therefore, there is no need to preserve a separate hdr for the END of signal context.
Fixes: 8ee0b41898fa ("riscv: signal: Add sigcontext save/restore for vector") Signed-off-by: Yong-Xuan Wang yongxuan.wang@sifive.com Reviewed-by: Zong Li zong.li@sifive.com Reviewed-by: Andy Chiu AndybnAC@gmail.com Reviewed-by: Alexandre Ghiti alexghiti@rivosinc.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20241220083926.19453-2-yongxuan.wang@sifive.com Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/riscv/kernel/signal.c | 6 ------ 1 file changed, 6 deletions(-)
--- a/arch/riscv/kernel/signal.c +++ b/arch/riscv/kernel/signal.c @@ -211,12 +211,6 @@ static size_t get_rt_frame_size(bool cal if (cal_all || riscv_v_vstate_query(task_pt_regs(current))) total_context_size += riscv_v_sc_size; } - /* - * Preserved a __riscv_ctx_hdr for END signal context header if an - * extension uses __riscv_extra_ext_header - */ - if (total_context_size) - total_context_size += sizeof(struct __riscv_ctx_hdr);
frame_size += total_context_size;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tomas Glozar tglozar@redhat.com
This reverts commit 41955b6c268154f81e34f9b61cf8156eec0730c0.
The commit breaks the rtla build, since params->kernel_workload is not present in 6.6-stable.
Signed-off-by: Tomas Glozar tglozar@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- tools/tracing/rtla/src/timerlat_top.c | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-)
--- a/tools/tracing/rtla/src/timerlat_top.c +++ b/tools/tracing/rtla/src/timerlat_top.c @@ -679,15 +679,12 @@ timerlat_top_apply_config(struct osnoise auto_house_keeping(¶ms->monitored_cpus); }
- /* - * Set workload according to type of thread if the kernel supports it. - * On kernels without support, user threads will have already failed - * on missing timerlat_fd, and kernel threads do not need it. - */ - retval = osnoise_set_workload(top->context, params->kernel_workload); - if (retval < -1) { - err_msg("Failed to set OSNOISE_WORKLOAD option\n"); - goto out_err; + if (params->user_top) { + retval = osnoise_set_workload(top->context, 0); + if (retval) { + err_msg("Failed to set OSNOISE_WORKLOAD option\n"); + goto out_err; + } }
return 0;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tomas Glozar tglozar@redhat.com
This reverts commit 83b74901bdc9b58739193b8ee6989254391b6ba7.
The commit breaks the rtla build, since params->kernel_workload is not present in 6.6-stable.
Signed-off-by: Tomas Glozar tglozar@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- tools/tracing/rtla/src/timerlat_hist.c | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-)
--- a/tools/tracing/rtla/src/timerlat_hist.c +++ b/tools/tracing/rtla/src/timerlat_hist.c @@ -900,15 +900,12 @@ timerlat_hist_apply_config(struct osnois auto_house_keeping(¶ms->monitored_cpus); }
- /* - * Set workload according to type of thread if the kernel supports it. - * On kernels without support, user threads will have already failed - * on missing timerlat_fd, and kernel threads do not need it. - */ - retval = osnoise_set_workload(tool->context, params->kernel_workload); - if (retval < -1) { - err_msg("Failed to set OSNOISE_WORKLOAD option\n"); - goto out_err; + if (params->user_hist) { + retval = osnoise_set_workload(tool->context, 0); + if (retval) { + err_msg("Failed to set OSNOISE_WORKLOAD option\n"); + goto out_err; + } }
return 0;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tomas Glozar tglozar@redhat.com
commit d8d866171a414ed88bd0d720864095fd75461134 upstream.
When using rtla timerlat with userspace threads (-u or -U), rtla disables the OSNOISE_WORKLOAD option in /sys/kernel/tracing/osnoise/options. This option is not re-enabled in a subsequent run with kernel-space threads, leading to rtla collecting no results if the previous run exited abnormally:
$ rtla timerlat hist -u
^\Quit (core dumped)
$ rtla timerlat hist -k -d 1s
Index
over:
count:
min:
avg:
max:
ALL:      IRQ       Thr       Usr
count:      0         0         0
min:        -         -         -
avg:        -         -         -
max:        -         -         -
The issue persists until OSNOISE_WORKLOAD is set manually by running:

$ echo OSNOISE_WORKLOAD > /sys/kernel/tracing/osnoise/options
Set OSNOISE_WORKLOAD when running rtla with kernel-space threads if available to fix the issue.
Cc: stable@vger.kernel.org Cc: John Kacur jkacur@redhat.com Cc: Luis Goncalves lgoncalv@redhat.com Link: https://lore.kernel.org/20250107144823.239782-3-tglozar@redhat.com Fixes: ed774f7481fa ("rtla/timerlat_hist: Add timerlat user-space support") Signed-off-by: Tomas Glozar tglozar@redhat.com Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org [ params->kernel_workload does not exist in 6.6, use !params->user_hist ] Signed-off-by: Tomas Glozar tglozar@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- tools/tracing/rtla/src/timerlat_hist.c | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-)
--- a/tools/tracing/rtla/src/timerlat_hist.c +++ b/tools/tracing/rtla/src/timerlat_hist.c @@ -900,12 +900,15 @@ timerlat_hist_apply_config(struct osnois auto_house_keeping(¶ms->monitored_cpus); }
- if (params->user_hist) { - retval = osnoise_set_workload(tool->context, 0); - if (retval) { - err_msg("Failed to set OSNOISE_WORKLOAD option\n"); - goto out_err; - } + /* + * Set workload according to type of thread if the kernel supports it. + * On kernels without support, user threads will have already failed + * on missing timerlat_fd, and kernel threads do not need it. + */ + retval = osnoise_set_workload(tool->context, !params->user_hist); + if (retval < -1) { + err_msg("Failed to set OSNOISE_WORKLOAD option\n"); + goto out_err; }
return 0;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tomas Glozar tglozar@redhat.com
commit 217f0b1e990e30a1f06f6d531fdb4530f4788d48 upstream.
When using rtla timerlat with userspace threads (-u or -U), rtla disables the OSNOISE_WORKLOAD option in /sys/kernel/tracing/osnoise/options. This option is not re-enabled in a subsequent run with kernel-space threads, leading to rtla collecting no results if the previous run exited abnormally:
$ rtla timerlat top -u
^\Quit (core dumped)
$ rtla timerlat top -k -d 1s
                                     Timer Latency
  0 00:00:01   |          IRQ Timer Latency (us)        |         Thread Timer Latency (us)
CPU COUNT      |      cur       min       avg       max |      cur       min       avg       max
The issue persists until OSNOISE_WORKLOAD is set manually by running:

$ echo OSNOISE_WORKLOAD > /sys/kernel/tracing/osnoise/options
Set OSNOISE_WORKLOAD when running rtla with kernel-space threads if available to fix the issue.
Cc: stable@vger.kernel.org Cc: John Kacur jkacur@redhat.com Cc: Luis Goncalves lgoncalv@redhat.com Link: https://lore.kernel.org/20250107144823.239782-4-tglozar@redhat.com Fixes: cdca4f4e5e8e ("rtla/timerlat_top: Add timerlat user-space support") Signed-off-by: Tomas Glozar tglozar@redhat.com Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org [ params->kernel_workload does not exist in 6.6, use !params->user_top ] Signed-off-by: Tomas Glozar tglozar@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- tools/tracing/rtla/src/timerlat_top.c | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-)
--- a/tools/tracing/rtla/src/timerlat_top.c +++ b/tools/tracing/rtla/src/timerlat_top.c @@ -679,12 +679,15 @@ timerlat_top_apply_config(struct osnoise auto_house_keeping(¶ms->monitored_cpus); }
- if (params->user_top) { - retval = osnoise_set_workload(top->context, 0); - if (retval) { - err_msg("Failed to set OSNOISE_WORKLOAD option\n"); - goto out_err; - } + /* + * Set workload according to type of thread if the kernel supports it. + * On kernels without support, user threads will have already failed + * on missing timerlat_fd, and kernel threads do not need it. + */ + retval = osnoise_set_workload(top->context, !params->user_top); + if (retval < -1) { + err_msg("Failed to set OSNOISE_WORKLOAD option\n"); + goto out_err; }
return 0;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: chr[] chris@rudorff.com
commit 91dcc66b34beb72dde8412421bdc1b4cd40e4fb8 upstream.
resume and irq handler happily race in set_power_state()
* amdgpu_legacy_dpm_compute_clocks() needs lock
* protect irq work handler
* fix dpm_enabled usage
v2: fix clang build, integrate Lijo's comments (Alex)
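In sketch form, the locking pattern the diff below applies to the racing paths (field and function names as in the patch):

  /* every path touching dpm state now serializes on pm.mutex and
   * re-checks dpm_enabled under the lock: */
  mutex_lock(&adev->pm.mutex);
  if (!adev->pm.dpm_enabled) {
          mutex_unlock(&adev->pm.mutex);
          return;
  }
  /* ... read sensors / set power state / compute clocks ... */
  mutex_unlock(&adev->pm.mutex);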
Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/2524 Fixes: 3712e7a49459 ("drm/amd/pm: unified lock protections in amdgpu_dpm.c") Reviewed-by: Lijo Lazar lijo.lazar@amd.com Tested-by: Maciej S. Szmigiero mail@maciej.szmigiero.name # on Oland PRO Signed-off-by: chr[] chris@rudorff.com Signed-off-by: Alex Deucher alexander.deucher@amd.com (cherry picked from commit ee3dc9e204d271c9c7a8d4d38a0bce4745d33e71) Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c | 25 ++++++++++++++++++------ drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c | 8 +++++-- drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c | 26 +++++++++++++++++++------ 3 files changed, 45 insertions(+), 14 deletions(-)
--- a/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c +++ b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c @@ -3043,6 +3043,7 @@ static int kv_dpm_hw_init(void *handle) if (!amdgpu_dpm) return 0;
+ mutex_lock(&adev->pm.mutex); kv_dpm_setup_asic(adev); ret = kv_dpm_enable(adev); if (ret) @@ -3050,6 +3051,8 @@ static int kv_dpm_hw_init(void *handle) else adev->pm.dpm_enabled = true; amdgpu_legacy_dpm_compute_clocks(adev); + mutex_unlock(&adev->pm.mutex); + return ret; }
@@ -3067,32 +3070,42 @@ static int kv_dpm_suspend(void *handle) { struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ cancel_work_sync(&adev->pm.dpm.thermal.work); + if (adev->pm.dpm_enabled) { + mutex_lock(&adev->pm.mutex); + adev->pm.dpm_enabled = false; /* disable dpm */ kv_dpm_disable(adev); /* reset the power state */ adev->pm.dpm.current_ps = adev->pm.dpm.requested_ps = adev->pm.dpm.boot_ps; + mutex_unlock(&adev->pm.mutex); } return 0; }
static int kv_dpm_resume(void *handle) { - int ret; + int ret = 0; struct amdgpu_device *adev = (struct amdgpu_device *)handle;
- if (adev->pm.dpm_enabled) { + if (!amdgpu_dpm) + return 0; + + if (!adev->pm.dpm_enabled) { + mutex_lock(&adev->pm.mutex); /* asic init will reset to the boot state */ kv_dpm_setup_asic(adev); ret = kv_dpm_enable(adev); - if (ret) + if (ret) { adev->pm.dpm_enabled = false; - else + } else { adev->pm.dpm_enabled = true; - if (adev->pm.dpm_enabled) amdgpu_legacy_dpm_compute_clocks(adev); + } + mutex_unlock(&adev->pm.mutex); } - return 0; + return ret; }
static bool kv_dpm_is_idle(void *handle) --- a/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c +++ b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c @@ -1018,9 +1018,12 @@ void amdgpu_dpm_thermal_work_handler(str enum amd_pm_state_type dpm_state = POWER_STATE_TYPE_INTERNAL_THERMAL; int temp, size = sizeof(temp);
- if (!adev->pm.dpm_enabled) - return; + mutex_lock(&adev->pm.mutex);
+ if (!adev->pm.dpm_enabled) { + mutex_unlock(&adev->pm.mutex); + return; + } if (!pp_funcs->read_sensor(adev->powerplay.pp_handle, AMDGPU_PP_SENSOR_GPU_TEMP, (void *)&temp, @@ -1042,4 +1045,5 @@ void amdgpu_dpm_thermal_work_handler(str adev->pm.dpm.state = dpm_state;
amdgpu_legacy_dpm_compute_clocks(adev->powerplay.pp_handle); + mutex_unlock(&adev->pm.mutex); } --- a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c +++ b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c @@ -7789,6 +7789,7 @@ static int si_dpm_hw_init(void *handle) if (!amdgpu_dpm) return 0;
+ mutex_lock(&adev->pm.mutex); si_dpm_setup_asic(adev); ret = si_dpm_enable(adev); if (ret) @@ -7796,6 +7797,7 @@ static int si_dpm_hw_init(void *handle) else adev->pm.dpm_enabled = true; amdgpu_legacy_dpm_compute_clocks(adev); + mutex_unlock(&adev->pm.mutex); return ret; }
@@ -7813,32 +7815,44 @@ static int si_dpm_suspend(void *handle) { struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ cancel_work_sync(&adev->pm.dpm.thermal.work); + if (adev->pm.dpm_enabled) { + mutex_lock(&adev->pm.mutex); + adev->pm.dpm_enabled = false; /* disable dpm */ si_dpm_disable(adev); /* reset the power state */ adev->pm.dpm.current_ps = adev->pm.dpm.requested_ps = adev->pm.dpm.boot_ps; + mutex_unlock(&adev->pm.mutex); } + return 0; }
static int si_dpm_resume(void *handle) { - int ret; + int ret = 0; struct amdgpu_device *adev = (struct amdgpu_device *)handle;
- if (adev->pm.dpm_enabled) { + if (!amdgpu_dpm) + return 0; + + if (!adev->pm.dpm_enabled) { /* asic init will reset to the boot state */ + mutex_lock(&adev->pm.mutex); si_dpm_setup_asic(adev); ret = si_dpm_enable(adev); - if (ret) + if (ret) { adev->pm.dpm_enabled = false; - else + } else { adev->pm.dpm_enabled = true; - if (adev->pm.dpm_enabled) amdgpu_legacy_dpm_compute_clocks(adev); + } + mutex_unlock(&adev->pm.mutex); } - return 0; + + return ret; }
static bool si_dpm_is_idle(void *handle)
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Joshua Washington joshwash@google.com
commit 415cadd505464d9a11ff5e0f6e0329c127849da5 upstream.
Before this patch the NETDEV_XDP_ACT_NDO_XMIT XDP feature flag is set by default as part of driver initialization, and is never cleared. However, this flag differs from others in that it is used as an indicator for whether the driver is ready to perform the ndo_xdp_xmit operation as part of an XDP_REDIRECT. Kernel helpers xdp_features_(set|clear)_redirect_target exist to convey this meaning.
This patch ensures that the netdev is only reported as a redirect target when XDP queues exist to forward traffic.
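In sketch form, the invariant the diff below establishes (helper names as used in the patch): the netdev advertises NDO_XMIT only while XDP TX queues are up.

  /* gve_turnup(): queues exist again, re-advertise if supported */
  if (priv->num_xdp_queues && gve_supports_xdp_xmit(priv))
          xdp_features_set_redirect_target(priv->dev, false);

  /* gve_turndown(): queues are going away, stop advertising */
  xdp_features_clear_redirect_target(priv->dev);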
Fixes: 39a7f4aa3e4a ("gve: Add XDP REDIRECT support for GQI-QPL format") Cc: stable@vger.kernel.org Reviewed-by: Praveen Kaligineedi pkaligineedi@google.com Reviewed-by: Jeroen de Borst jeroendb@google.com Signed-off-by: Joshua Washington joshwash@google.com Link: https://patch.msgid.link/20250214224417.1237818-1-joshwash@google.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Joshua Washington joshwash@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/google/gve/gve.h | 10 ++++++++++ drivers/net/ethernet/google/gve/gve_main.c | 6 +++++- 2 files changed, 15 insertions(+), 1 deletion(-)
--- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -1030,6 +1030,16 @@ static inline u32 gve_xdp_tx_start_queue return gve_xdp_tx_queue_id(priv, 0); }
+static inline bool gve_supports_xdp_xmit(struct gve_priv *priv) +{ + switch (priv->queue_format) { + case GVE_GQI_QPL_FORMAT: + return true; + default: + return false; + } +} + /* buffers */ int gve_alloc_page(struct gve_priv *priv, struct device *dev, struct page **page, dma_addr_t *dma, --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -1753,6 +1753,8 @@ static void gve_turndown(struct gve_priv /* Stop tx queues */ netif_tx_disable(priv->dev);
+ xdp_features_clear_redirect_target(priv->dev); + gve_clear_napi_enabled(priv); gve_clear_report_stats(priv);
@@ -1793,6 +1795,9 @@ static void gve_turnup(struct gve_priv * } }
+ if (priv->num_xdp_queues && gve_supports_xdp_xmit(priv)) + xdp_features_set_redirect_target(priv->dev, false); + gve_set_napi_enabled(priv); }
@@ -2014,7 +2019,6 @@ static void gve_set_netdev_xdp_features( if (priv->queue_format == GVE_GQI_QPL_FORMAT) { xdp_features = NETDEV_XDP_ACT_BASIC; xdp_features |= NETDEV_XDP_ACT_REDIRECT; - xdp_features |= NETDEV_XDP_ACT_NDO_XMIT; xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY; } else { xdp_features = 0;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit c157d351460bcf202970e97e611cb6b54a3dd4a4 upstream.
The Intel idle driver is preferred over the ACPI processor idle driver, but fails to implement the workaround for Core2 generation CPUs, where the TSC stops in C2 and deeper C-states. This causes stalls and boot delays when the clocksource watchdog does not catch the unstable TSC before the CPU goes deep idle for the first time.
The ACPI driver marks the TSC unstable when it detects that the CPU supports C2 or deeper and the CPU does not have a non-stop TSC.
Add the equivalent workaround to the Intel idle driver to cure that.
Fixes: 18734958e9bf ("intel_idle: Use ACPI _CST for processor models without C-state tables") Reported-by: Fab Stz fabstz-it@yahoo.fr Signed-off-by: Thomas Gleixner tglx@linutronix.de Tested-by: Fab Stz fabstz-it@yahoo.fr Cc: All applicable stable@vger.kernel.org Closes: https://lore.kernel.org/all/10cf96aa-1276-4bd4-8966-c890377030c3@yahoo.fr Link: https://patch.msgid.link/87bjupfy7f.ffs@tglx Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/idle/intel_idle.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/drivers/idle/intel_idle.c +++ b/drivers/idle/intel_idle.c @@ -56,6 +56,7 @@ #include <asm/nospec-branch.h> #include <asm/mwait.h> #include <asm/msr.h> +#include <asm/tsc.h> #include <asm/fpu/api.h>
#define INTEL_IDLE_VERSION "0.5.1" @@ -1573,6 +1574,9 @@ static void __init intel_idle_init_cstat if (intel_idle_state_needs_timer_stop(state)) state->flags |= CPUIDLE_FLAG_TIMER_STOP;
+ if (cx->type > ACPI_STATE_C1 && !boot_cpu_has(X86_FEATURE_NONSTOP_TSC)) + mark_tsc_unstable("TSC halts in idle"); + state->enter = intel_idle; state->enter_s2idle = intel_idle_s2idle; }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Lukasz Czechowski lukasz.czechowski@thaumatec.com
commit 5ae4dca718eacd0a56173a687a3736eb7e627c77 upstream.
UART controllers without flow control seem to behave unstably when DMA is enabled. The issues were indicated in the message: https://lore.kernel.org/linux-arm-kernel/CAMdYzYpXtMocCtCpZLU_xuWmOp2Ja_v0Aj... In case of the PX30-uQ7 Ringneck SoM, it was noticed that after a couple of hours of UART communication, a CPU stall was occurring, leading to the system becoming unresponsive. After disabling the DMA, extensive UART communication tests for up to two weeks were performed, and no issues were further observed. The flow control pins for uart5 are not available on PX30-uQ7 Ringneck, as configured by pinctrl-0, so the DMA nodes were removed on the SoM dtsi.
Cc: stable@vger.kernel.org Fixes: c484cf93f61b ("arm64: dts: rockchip: add PX30-µQ7 (Ringneck) SoM with Haikou baseboard") Reviewed-by: Quentin Schulz quentin.schulz@cherry.de Signed-off-by: Lukasz Czechowski lukasz.czechowski@thaumatec.com Link: https://lore.kernel.org/r/20250121125604.3115235-3-lukasz.czechowski@thaumat... Signed-off-by: Heiko Stuebner heiko@sntech.de [ conflict resolution due to missing (cosmetic) backport of 4eee627ea59304cdd66c5d4194ef13486a6c44fc] Signed-off-by: Quentin Schulz quentin.schulz@cherry.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi | 5 +++++ 1 file changed, 5 insertions(+)
--- a/arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi +++ b/arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi @@ -367,6 +367,11 @@ status = "okay"; };
+&uart5 { + /delete-property/ dmas; + /delete-property/ dma-names; +}; + /* Mule UCAN */ &usb_host0_ehci { status = "okay";
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 0b62f6cb07738d7211d926c39f6946b87f72e792 upstream.
32-bit loads microcode before paging is enabled. The commit which introduced that has zero justification in the changelog. The cover letter has slightly more content, but it does not give any technical justification either:
"The problem in current microcode loading method is that we load a microcode way, way too late; ideally we should load it before turning paging on. This may only be practical on 32 bits since we can't get to 64-bit mode without paging on, but we should still do it as early as at all possible."
Handwaving word salad with zero technical content.
Someone claimed in an offlist conversation that this is required for curing the ATOM erratum AAE44/AAF40/AAG38/AAH41. That erratum requires a microcode update in order to make the usage of PSE safe. But during early boot, PSE is completely irrelevant and it is evaluated way later.
Neither is it relevant for the AP on single core HT enabled CPUs as the microcode loading on the AP is not doing anything.
On dual core CPUs there is a theoretical problem if a split of an executable large page between enabling paging including PSE and loading the microcode happens. But that's only theoretical, it's practically irrelevant because the affected dual core CPUs are 64bit enabled and therefore have paging and PSE enabled before loading the microcode on the second core. So why would it work on 64-bit but not on 32-bit?
The erratum:
"AAG38 Code Fetch May Occur to Incorrect Address After a Large Page is Split Into 4-Kbyte Pages
Problem: If software clears the PS (page size) bit in a present PDE (page directory entry), that will cause linear addresses mapped through this PDE to use 4-KByte pages instead of using a large page after old TLB entries are invalidated. Due to this erratum, if a code fetch uses this PDE before the TLB entry for the large page is invalidated then it may fetch from a different physical address than specified by either the old large page translation or the new 4-KByte page translation. This erratum may also cause speculative code fetches from incorrect addresses."
The practical relevance for this is exactly zero because there is no splitting of large text pages during early boot-time, i.e. between paging enable and microcode loading, and neither during CPU hotplug.
IOW, this load microcode before paging enable is yet another voodoo programming solution in search of a problem. What's worse is that it causes at least two serious problems:
1) When stackprotector is enabled, the microcode loader code has the stackprotector mechanics enabled. The read from the per CPU variable __stack_chk_guard is always accessing the virtual address either directly on UP or via %fs on SMP. In physical address mode this results in an access to memory above 3GB. So this works by chance as the hardware returns the same value when there is no RAM at this physical address. When there is RAM populated above 3G then the read is by chance the same as nothing changes that memory during the very early boot stage. That's not necessarily true during runtime CPU hotplug.
2) When function tracing is enabled, the relevant microcode loader functions and the functions invoked from there will call into the tracing code and evaluate global and per CPU variables in physical address mode. What could potentially go wrong?
Cure this and move the microcode loading after the early paging enable, use the new temporary initrd mapping and remove the gunk in the microcode loader which is required to handle physical address mode.
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231017211722.348298216@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/include/asm/microcode.h | 5 - arch/x86/kernel/cpu/common.c | 12 --- arch/x86/kernel/cpu/microcode/amd.c | 103 ++++++++------------------- arch/x86/kernel/cpu/microcode/core.c | 78 ++++---------------- arch/x86/kernel/cpu/microcode/intel.c | 116 ++++--------------------------- arch/x86/kernel/cpu/microcode/internal.h | 2 arch/x86/kernel/head32.c | 3 arch/x86/kernel/head_32.S | 10 -- arch/x86/kernel/smpboot.c | 12 +-- 9 files changed, 72 insertions(+), 269 deletions(-)
--- a/arch/x86/include/asm/microcode.h +++ b/arch/x86/include/asm/microcode.h @@ -68,11 +68,6 @@ static inline u32 intel_get_microcode_re
return rev; } - -void show_ucode_info_early(void); - -#else /* CONFIG_CPU_SUP_INTEL */ -static inline void show_ucode_info_early(void) { } #endif /* !CONFIG_CPU_SUP_INTEL */
#endif /* _ASM_X86_MICROCODE_H */ --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -2224,8 +2224,6 @@ static inline void setup_getcpu(int cpu) }
#ifdef CONFIG_X86_64 -static inline void ucode_cpu_init(int cpu) { } - static inline void tss_setup_ist(struct tss_struct *tss) { /* Set up the per-CPU TSS IST stacks */ @@ -2236,16 +2234,8 @@ static inline void tss_setup_ist(struct /* Only mapped when SEV-ES is active */ tss->x86_tss.ist[IST_INDEX_VC] = __this_cpu_ist_top_va(VC); } - #else /* CONFIG_X86_64 */ - -static inline void ucode_cpu_init(int cpu) -{ - show_ucode_info_early(); -} - static inline void tss_setup_ist(struct tss_struct *tss) { } - #endif /* !CONFIG_X86_64 */
static inline void tss_setup_io_bitmap(struct tss_struct *tss) @@ -2301,8 +2291,6 @@ void cpu_init(void) struct task_struct *cur = current; int cpu = raw_smp_processor_id();
- ucode_cpu_init(cpu); - #ifdef CONFIG_NUMA if (this_cpu_read(numa_node) == 0 && early_cpu_to_node(cpu) != NUMA_NO_NODE) --- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -121,24 +121,20 @@ static u16 find_equiv_id(struct equiv_cp
/* * Check whether there is a valid microcode container file at the beginning - * of @buf of size @buf_size. Set @early to use this function in the early path. + * of @buf of size @buf_size. */ -static bool verify_container(const u8 *buf, size_t buf_size, bool early) +static bool verify_container(const u8 *buf, size_t buf_size) { u32 cont_magic;
if (buf_size <= CONTAINER_HDR_SZ) { - if (!early) - pr_debug("Truncated microcode container header.\n"); - + pr_debug("Truncated microcode container header.\n"); return false; }
cont_magic = *(const u32 *)buf; if (cont_magic != UCODE_MAGIC) { - if (!early) - pr_debug("Invalid magic value (0x%08x).\n", cont_magic); - + pr_debug("Invalid magic value (0x%08x).\n", cont_magic); return false; }
@@ -147,23 +143,20 @@ static bool verify_container(const u8 *b
/* * Check whether there is a valid, non-truncated CPU equivalence table at the - * beginning of @buf of size @buf_size. Set @early to use this function in the - * early path. + * beginning of @buf of size @buf_size. */ -static bool verify_equivalence_table(const u8 *buf, size_t buf_size, bool early) +static bool verify_equivalence_table(const u8 *buf, size_t buf_size) { const u32 *hdr = (const u32 *)buf; u32 cont_type, equiv_tbl_len;
- if (!verify_container(buf, buf_size, early)) + if (!verify_container(buf, buf_size)) return false;
cont_type = hdr[1]; if (cont_type != UCODE_EQUIV_CPU_TABLE_TYPE) { - if (!early) - pr_debug("Wrong microcode container equivalence table type: %u.\n", - cont_type); - + pr_debug("Wrong microcode container equivalence table type: %u.\n", + cont_type); return false; }
@@ -172,9 +165,7 @@ static bool verify_equivalence_table(con equiv_tbl_len = hdr[2]; if (equiv_tbl_len < sizeof(struct equiv_cpu_entry) || buf_size < equiv_tbl_len) { - if (!early) - pr_debug("Truncated equivalence table.\n"); - + pr_debug("Truncated equivalence table.\n"); return false; }
@@ -183,22 +174,19 @@ static bool verify_equivalence_table(con
/* * Check whether there is a valid, non-truncated microcode patch section at the - * beginning of @buf of size @buf_size. Set @early to use this function in the - * early path. + * beginning of @buf of size @buf_size. * * On success, @sh_psize returns the patch size according to the section header, * to the caller. */ static bool -__verify_patch_section(const u8 *buf, size_t buf_size, u32 *sh_psize, bool early) +__verify_patch_section(const u8 *buf, size_t buf_size, u32 *sh_psize) { u32 p_type, p_size; const u32 *hdr;
if (buf_size < SECTION_HDR_SIZE) { - if (!early) - pr_debug("Truncated patch section.\n"); - + pr_debug("Truncated patch section.\n"); return false; }
@@ -207,17 +195,13 @@ __verify_patch_section(const u8 *buf, si p_size = hdr[1];
if (p_type != UCODE_UCODE_TYPE) { - if (!early) - pr_debug("Invalid type field (0x%x) in container file section header.\n", - p_type); - + pr_debug("Invalid type field (0x%x) in container file section header.\n", + p_type); return false; }
if (p_size < sizeof(struct microcode_header_amd)) { - if (!early) - pr_debug("Patch of size %u too short.\n", p_size); - + pr_debug("Patch of size %u too short.\n", p_size); return false; }
@@ -269,7 +253,7 @@ static unsigned int __verify_patch_size( * 0: success */ static int -verify_patch(u8 family, const u8 *buf, size_t buf_size, u32 *patch_size, bool early) +verify_patch(u8 family, const u8 *buf, size_t buf_size, u32 *patch_size) { struct microcode_header_amd *mc_hdr; unsigned int ret; @@ -277,7 +261,7 @@ verify_patch(u8 family, const u8 *buf, s u16 proc_id; u8 patch_fam;
- if (!__verify_patch_section(buf, buf_size, &sh_psize, early)) + if (!__verify_patch_section(buf, buf_size, &sh_psize)) return -1;
/* @@ -292,16 +276,13 @@ verify_patch(u8 family, const u8 *buf, s * size sh_psize, as the section claims. */ if (buf_size < sh_psize) { - if (!early) - pr_debug("Patch of size %u truncated.\n", sh_psize); - + pr_debug("Patch of size %u truncated.\n", sh_psize); return -1; }
ret = __verify_patch_size(family, sh_psize, buf_size); if (!ret) { - if (!early) - pr_debug("Per-family patch size mismatch.\n"); + pr_debug("Per-family patch size mismatch.\n"); return -1; }
@@ -309,8 +290,7 @@ verify_patch(u8 family, const u8 *buf, s
mc_hdr = (struct microcode_header_amd *)(buf + SECTION_HDR_SIZE); if (mc_hdr->nb_dev_id || mc_hdr->sb_dev_id) { - if (!early) - pr_err("Patch-ID 0x%08x: chipset-specific code unsupported.\n", mc_hdr->patch_id); + pr_err("Patch-ID 0x%08x: chipset-specific code unsupported.\n", mc_hdr->patch_id); return -1; }
@@ -337,7 +317,7 @@ static size_t parse_container(u8 *ucode, u16 eq_id; u8 *buf;
- if (!verify_equivalence_table(ucode, size, true)) + if (!verify_equivalence_table(ucode, size)) return 0;
buf = ucode; @@ -364,7 +344,7 @@ static size_t parse_container(u8 *ucode, u32 patch_size; int ret;
- ret = verify_patch(x86_family(desc->cpuid_1_eax), buf, size, &patch_size, true); + ret = verify_patch(x86_family(desc->cpuid_1_eax), buf, size, &patch_size); if (ret < 0) { /* * Patch verification failed, skip to the next container, if @@ -456,14 +436,8 @@ static bool early_apply_microcode(u32 cp { struct cont_desc desc = { 0 }; struct microcode_amd *mc; - u32 rev, dummy, *new_rev; bool ret = false; - -#ifdef CONFIG_X86_32 - new_rev = (u32 *)__pa_nodebug(&ucode_new_rev); -#else - new_rev = &ucode_new_rev; -#endif + u32 rev, dummy;
desc.cpuid_1_eax = cpuid_1_eax;
@@ -484,8 +458,8 @@ static bool early_apply_microcode(u32 cp return ret;
if (!__apply_microcode_amd(mc)) { - *new_rev = mc->hdr.patch_id; - ret = true; + ucode_new_rev = mc->hdr.patch_id; + ret = true; }
return ret; @@ -514,26 +488,13 @@ static bool get_builtin_microcode(struct
static void find_blobs_in_containers(unsigned int cpuid_1_eax, struct cpio_data *ret) { - struct ucode_cpu_info *uci; struct cpio_data cp; - const char *path; - bool use_pa; - - if (IS_ENABLED(CONFIG_X86_32)) { - uci = (struct ucode_cpu_info *)__pa_nodebug(ucode_cpu_info); - path = (const char *)__pa_nodebug(ucode_path); - use_pa = true; - } else { - uci = ucode_cpu_info; - path = ucode_path; - use_pa = false; - }
if (!get_builtin_microcode(&cp, x86_family(cpuid_1_eax))) - cp = find_microcode_in_initrd(path, use_pa); + cp = find_microcode_in_initrd(ucode_path);
/* Needed in load_microcode_amd() */ - uci->cpu_sig.sig = cpuid_1_eax; + ucode_cpu_info->cpu_sig.sig = cpuid_1_eax;
*ret = cp; } @@ -562,7 +523,7 @@ int __init save_microcode_in_initrd_amd( enum ucode_state ret; struct cpio_data cp;
- cp = find_microcode_in_initrd(ucode_path, false); + cp = find_microcode_in_initrd(ucode_path); if (!(cp.data && cp.size)) return -EINVAL;
@@ -738,7 +699,7 @@ static size_t install_equiv_cpu_table(co u32 equiv_tbl_len; const u32 *hdr;
- if (!verify_equivalence_table(buf, buf_size, false)) + if (!verify_equivalence_table(buf, buf_size)) return 0;
hdr = (const u32 *)buf; @@ -784,7 +745,7 @@ static int verify_and_add_patch(u8 famil u16 proc_id; int ret;
- ret = verify_patch(family, fw, leftover, patch_size, false); + ret = verify_patch(family, fw, leftover, patch_size); if (ret) return ret;
@@ -918,7 +879,7 @@ static enum ucode_state request_microcod }
ret = UCODE_ERROR; - if (!verify_container(fw->data, fw->size, false)) + if (!verify_container(fw->data, fw->size)) goto fw_release;
ret = load_microcode_amd(c->x86, fw->data, fw->size); --- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -90,10 +90,7 @@ static bool amd_check_current_patch_leve
native_rdmsr(MSR_AMD64_PATCH_LEVEL, lvl, dummy);
- if (IS_ENABLED(CONFIG_X86_32)) - levels = (u32 *)__pa_nodebug(&final_levels); - else - levels = final_levels; + levels = final_levels;
for (i = 0; levels[i]; i++) { if (lvl == levels[i]) @@ -105,17 +102,8 @@ static bool amd_check_current_patch_leve static bool __init check_loader_disabled_bsp(void) { static const char *__dis_opt_str = "dis_ucode_ldr"; - -#ifdef CONFIG_X86_32 - const char *cmdline = (const char *)__pa_nodebug(boot_command_line); - const char *option = (const char *)__pa_nodebug(__dis_opt_str); - bool *res = (bool *)__pa_nodebug(&dis_ucode_ldr); - -#else /* CONFIG_X86_64 */ const char *cmdline = boot_command_line; const char *option = __dis_opt_str; - bool *res = &dis_ucode_ldr; -#endif
/* * CPUID(1).ECX[31]: reserved for hypervisor use. This is still not @@ -123,17 +111,17 @@ static bool __init check_loader_disabled * that's good enough as they don't land on the BSP path anyway. */ if (native_cpuid_ecx(1) & BIT(31)) - return *res; + return true;
if (x86_cpuid_vendor() == X86_VENDOR_AMD) { if (amd_check_current_patch_level()) - return *res; + return true; }
if (cmdline_find_option_bool(cmdline, option) <= 0) - *res = false; + dis_ucode_ldr = false;
- return *res; + return dis_ucode_ldr; }
void __init load_ucode_bsp(void) @@ -171,20 +159,11 @@ void __init load_ucode_bsp(void) load_ucode_amd_early(cpuid_1_eax); }
-static bool check_loader_disabled_ap(void) -{ -#ifdef CONFIG_X86_32 - return *((bool *)__pa_nodebug(&dis_ucode_ldr)); -#else - return dis_ucode_ldr; -#endif -} - void load_ucode_ap(void) { unsigned int cpuid_1_eax;
- if (check_loader_disabled_ap()) + if (dis_ucode_ldr) return;
cpuid_1_eax = native_cpuid_eax(1); @@ -232,40 +211,28 @@ out: return ret; }
-struct cpio_data find_microcode_in_initrd(const char *path, bool use_pa) +struct cpio_data find_microcode_in_initrd(const char *path) { #ifdef CONFIG_BLK_DEV_INITRD unsigned long start = 0; size_t size;
#ifdef CONFIG_X86_32 - struct boot_params *params; - - if (use_pa) - params = (struct boot_params *)__pa_nodebug(&boot_params); - else - params = &boot_params; - - size = params->hdr.ramdisk_size; - - /* - * Set start only if we have an initrd image. We cannot use initrd_start - * because it is not set that early yet. - */ + size = boot_params.hdr.ramdisk_size; + /* Early load on BSP has a temporary mapping. */ if (size) - start = params->hdr.ramdisk_image; + start = initrd_start_early;
-# else /* CONFIG_X86_64 */ +#else /* CONFIG_X86_64 */ size = (unsigned long)boot_params.ext_ramdisk_size << 32; size |= boot_params.hdr.ramdisk_size;
if (size) { start = (unsigned long)boot_params.ext_ramdisk_image << 32; start |= boot_params.hdr.ramdisk_image; - start += PAGE_OFFSET; } -# endif +#endif
/* * Fixup the start address: after reserve_initrd() runs, initrd_start @@ -276,23 +243,10 @@ struct cpio_data find_microcode_in_initr * initrd_gone is for the hotplug case where we've thrown out initrd * already. */ - if (!use_pa) { - if (initrd_gone) - return (struct cpio_data){ NULL, 0, "" }; - if (initrd_start) - start = initrd_start; - } else { - /* - * The picture with physical addresses is a bit different: we - * need to get the *physical* address to which the ramdisk was - * relocated, i.e., relocated_ramdisk (not initrd_start) and - * since we're running from physical addresses, we need to access - * relocated_ramdisk through its *physical* address too. - */ - u64 *rr = (u64 *)__pa_nodebug(&relocated_ramdisk); - if (*rr) - start = *rr; - } + if (initrd_gone) + return (struct cpio_data){ NULL, 0, "" }; + if (initrd_start) + start = initrd_start;
return find_cpio_data(path, (void *)start, size, NULL); #else /* !CONFIG_BLK_DEV_INITRD */ --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -319,15 +319,8 @@ static void save_microcode_patch(struct if (!intel_find_matching_signature(p->data, uci->cpu_sig.sig, uci->cpu_sig.pf)) return;
- /* - * Save for early loading. On 32-bit, that needs to be a physical - * address as the APs are running from physical addresses, before - * paging has been enabled. - */ - if (IS_ENABLED(CONFIG_X86_32)) - intel_ucode_patch = (struct microcode_intel *)__pa_nodebug(p->data); - else - intel_ucode_patch = p->data; + /* Save for early loading */ + intel_ucode_patch = p->data; }
/* @@ -420,66 +413,10 @@ static bool load_builtin_intel_microcode return false; }
-static void print_ucode_info(int old_rev, int new_rev, unsigned int date) -{ - pr_info_once("updated early: 0x%x -> 0x%x, date = %04x-%02x-%02x\n", - old_rev, - new_rev, - date & 0xffff, - date >> 24, - (date >> 16) & 0xff); -} - -#ifdef CONFIG_X86_32 - -static int delay_ucode_info; -static int current_mc_date; -static int early_old_rev; - -/* - * Print early updated ucode info after printk works. This is delayed info dump. - */ -void show_ucode_info_early(void) -{ - struct ucode_cpu_info uci; - - if (delay_ucode_info) { - intel_cpu_collect_info(&uci); - print_ucode_info(early_old_rev, uci.cpu_sig.rev, current_mc_date); - delay_ucode_info = 0; - } -} - -/* - * At this point, we can not call printk() yet. Delay printing microcode info in - * show_ucode_info_early() until printk() works. - */ -static void print_ucode(int old_rev, int new_rev, int date) -{ - int *delay_ucode_info_p; - int *current_mc_date_p; - int *early_old_rev_p; - - delay_ucode_info_p = (int *)__pa_nodebug(&delay_ucode_info); - current_mc_date_p = (int *)__pa_nodebug(¤t_mc_date); - early_old_rev_p = (int *)__pa_nodebug(&early_old_rev); - - *delay_ucode_info_p = 1; - *current_mc_date_p = date; - *early_old_rev_p = old_rev; -} -#else - -static inline void print_ucode(int old_rev, int new_rev, int date) -{ - print_ucode_info(old_rev, new_rev, date); -} -#endif - -static int apply_microcode_early(struct ucode_cpu_info *uci, bool early) +static int apply_microcode_early(struct ucode_cpu_info *uci) { struct microcode_intel *mc; - u32 rev, old_rev; + u32 rev, old_rev, date;
mc = uci->mc; if (!mc) @@ -513,11 +450,9 @@ static int apply_microcode_early(struct
uci->cpu_sig.rev = rev;
- if (early) - print_ucode(old_rev, uci->cpu_sig.rev, mc->hdr.date); - else - print_ucode_info(old_rev, uci->cpu_sig.rev, mc->hdr.date); - + date = mc->hdr.date; + pr_info_once("updated early: 0x%x -> 0x%x, date = %04x-%02x-%02x\n", + old_rev, rev, date & 0xffff, date >> 24, (date >> 16) & 0xff); return 0; }
@@ -535,7 +470,7 @@ int __init save_microcode_in_initrd_inte intel_ucode_patch = NULL;
if (!load_builtin_intel_microcode(&cp)) - cp = find_microcode_in_initrd(ucode_path, false); + cp = find_microcode_in_initrd(ucode_path);
if (!(cp.data && cp.size)) return 0; @@ -551,21 +486,11 @@ int __init save_microcode_in_initrd_inte */ static struct microcode_intel *__load_ucode_intel(struct ucode_cpu_info *uci) { - static const char *path; struct cpio_data cp; - bool use_pa; - - if (IS_ENABLED(CONFIG_X86_32)) { - path = (const char *)__pa_nodebug(ucode_path); - use_pa = true; - } else { - path = ucode_path; - use_pa = false; - }
/* try built-in microcode first */ if (!load_builtin_intel_microcode(&cp)) - cp = find_microcode_in_initrd(path, use_pa); + cp = find_microcode_in_initrd(ucode_path);
if (!(cp.data && cp.size)) return NULL; @@ -586,30 +511,21 @@ void __init load_ucode_intel_bsp(void)
uci.mc = patch;
- apply_microcode_early(&uci, true); + apply_microcode_early(&uci); }
void load_ucode_intel_ap(void) { - struct microcode_intel *patch, **iup; struct ucode_cpu_info uci;
- if (IS_ENABLED(CONFIG_X86_32)) - iup = (struct microcode_intel **) __pa_nodebug(&intel_ucode_patch); - else - iup = &intel_ucode_patch; - - if (!*iup) { - patch = __load_ucode_intel(&uci); - if (!patch) + if (!intel_ucode_patch) { + intel_ucode_patch = __load_ucode_intel(&uci); + if (!intel_ucode_patch) return; - - *iup = patch; }
- uci.mc = *iup; - - apply_microcode_early(&uci, true); + uci.mc = intel_ucode_patch; + apply_microcode_early(&uci); }
static struct microcode_intel *find_patch(struct ucode_cpu_info *uci) @@ -647,7 +563,7 @@ void reload_ucode_intel(void)
uci.mc = p;
- apply_microcode_early(&uci, false); + apply_microcode_early(&uci); }
static int collect_cpu_info(int cpu_num, struct cpu_signature *csig) --- a/arch/x86/kernel/cpu/microcode/internal.h +++ b/arch/x86/kernel/cpu/microcode/internal.h @@ -44,7 +44,7 @@ struct microcode_ops { };
extern struct ucode_cpu_info ucode_cpu_info[]; -struct cpio_data find_microcode_in_initrd(const char *path, bool use_pa); +struct cpio_data find_microcode_in_initrd(const char *path);
#define MAX_UCODE_COUNT 128
--- a/arch/x86/kernel/head32.c +++ b/arch/x86/kernel/head32.c @@ -19,6 +19,7 @@ #include <asm/apic.h> #include <asm/io_apic.h> #include <asm/bios_ebda.h> +#include <asm/microcode.h> #include <asm/tlbflush.h> #include <asm/bootparam_utils.h>
@@ -34,6 +35,8 @@ asmlinkage __visible void __init __noret /* Make sure IDT is set up before any exception happens */ idt_setup_early_handler();
+ load_ucode_bsp(); + cr4_init_shadow();
sanitize_boot_params(&boot_params); --- a/arch/x86/kernel/head_32.S +++ b/arch/x86/kernel/head_32.S @@ -118,11 +118,6 @@ SYM_CODE_START(startup_32) movl %eax, pa(olpc_ofw_pgd) #endif
-#ifdef CONFIG_MICROCODE - /* Early load ucode on BSP. */ - call load_ucode_bsp -#endif - /* Create early pagetables. */ call mk_early_pgtbl_32
@@ -157,11 +152,6 @@ SYM_FUNC_START(startup_32_smp) movl %eax,%ss leal -__PAGE_OFFSET(%ecx),%esp
-#ifdef CONFIG_MICROCODE - /* Early load ucode on AP. */ - call load_ucode_ap -#endif - .Ldefault_entry: movl $(CR0_STATE & ~X86_CR0_PG),%eax movl %eax,%cr0 --- a/arch/x86/kernel/smpboot.c +++ b/arch/x86/kernel/smpboot.c @@ -259,12 +259,9 @@ static void notrace start_secondary(void cpu_init_exception_handling();
/* - * 32-bit systems load the microcode from the ASM startup code for - * historical reasons. - * - * On 64-bit systems load it before reaching the AP alive - * synchronization point below so it is not part of the full per - * CPU serialized bringup part when "parallel" bringup is enabled. + * Load the microcode before reaching the AP alive synchronization + * point below so it is not part of the full per CPU serialized + * bringup part when "parallel" bringup is enabled. * * That's even safe when hyperthreading is enabled in the CPU as * the core code starts the primary threads first and leaves the @@ -277,8 +274,7 @@ static void notrace start_secondary(void * CPUID, MSRs etc. must be strictly serialized to maintain * software state correctness. */ - if (IS_ENABLED(CONFIG_X86_64)) - load_ucode_ap(); + load_ucode_ap();
/* * Synchronization point with the hotplug core. Sets this CPUs
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ashok Raj ashok.raj@intel.com
commit ae76d951f6537001bdf77894d19cd4a446de337e upstream
Mixed steppings aren't supported on Intel CPUs. Only one microcode patch is required for the entire system. The caching of microcode blobs which match the family and model is therefore pointless and in fact is dysfunctional as CPU hotplug updates use only a single microcode blob, i.e. the one that *intel_ucode_patch points to.
Remove the microcode cache and make it an AMD local feature.
[ tglx:
   - save only at the end, otherwise random microcode ends up in the
     pointer for early loading
   - free the ucode patch pointer in save_microcode_patch() only after
     kmemdup() has succeeded, as reported by Andrew Cooper ]
Originally-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Ashok Raj ashok.raj@intel.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231017211722.404362809@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/amd.c | 10 ++ arch/x86/kernel/cpu/microcode/core.c | 2 arch/x86/kernel/cpu/microcode/intel.c | 133 +++++-------------------------- arch/x86/kernel/cpu/microcode/internal.h | 10 -- 4 files changed, 34 insertions(+), 121 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -37,6 +37,16 @@
#include "internal.h"
+struct ucode_patch { + struct list_head plist; + void *data; + unsigned int size; + u32 patch_id; + u16 equiv_cpu; +}; + +static LIST_HEAD(microcode_cache); + #define UCODE_MAGIC 0x00414d44 #define UCODE_EQUIV_CPU_TABLE_TYPE 0x00000000 #define UCODE_UCODE_TYPE 0x00000001 --- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -46,8 +46,6 @@ static bool dis_ucode_ldr = true;
bool initrd_gone;
-LIST_HEAD(microcode_cache); - /* * Synchronization. * --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -33,10 +33,10 @@ static const char ucode_path[] = "kernel/x86/microcode/GenuineIntel.bin";
/* Current microcode patch used in early patching on the APs. */ -static struct microcode_intel *intel_ucode_patch; +static struct microcode_intel *intel_ucode_patch __read_mostly;
/* last level cache size per core */ -static int llc_size_per_core; +static int llc_size_per_core __ro_after_init;
/* microcode format is extended from prescott processors */ struct extended_signature { @@ -253,74 +253,17 @@ static int has_newer_microcode(void *mc, return intel_find_matching_signature(mc, csig, cpf); }
-static struct ucode_patch *memdup_patch(void *data, unsigned int size) +static void save_microcode_patch(void *data, unsigned int size) { - struct ucode_patch *p; + struct microcode_header_intel *p;
- p = kzalloc(sizeof(struct ucode_patch), GFP_KERNEL); + p = kmemdup(data, size, GFP_KERNEL); if (!p) - return NULL; - - p->data = kmemdup(data, size, GFP_KERNEL); - if (!p->data) { - kfree(p); - return NULL; - } - - return p; -} - -static void save_microcode_patch(struct ucode_cpu_info *uci, void *data, unsigned int size) -{ - struct microcode_header_intel *mc_hdr, *mc_saved_hdr; - struct ucode_patch *iter, *tmp, *p = NULL; - bool prev_found = false; - unsigned int sig, pf; - - mc_hdr = (struct microcode_header_intel *)data; - - list_for_each_entry_safe(iter, tmp, µcode_cache, plist) { - mc_saved_hdr = (struct microcode_header_intel *)iter->data; - sig = mc_saved_hdr->sig; - pf = mc_saved_hdr->pf; - - if (intel_find_matching_signature(data, sig, pf)) { - prev_found = true; - - if (mc_hdr->rev <= mc_saved_hdr->rev) - continue; - - p = memdup_patch(data, size); - if (!p) - pr_err("Error allocating buffer %p\n", data); - else { - list_replace(&iter->plist, &p->plist); - kfree(iter->data); - kfree(iter); - } - } - } - - /* - * There weren't any previous patches found in the list cache; save the - * newly found. - */ - if (!prev_found) { - p = memdup_patch(data, size); - if (!p) - pr_err("Error allocating buffer for %p\n", data); - else - list_add_tail(&p->plist, µcode_cache); - } - - if (!p) - return; - - if (!intel_find_matching_signature(p->data, uci->cpu_sig.sig, uci->cpu_sig.pf)) return;
+ kfree(intel_ucode_patch); /* Save for early loading */ - intel_ucode_patch = p->data; + intel_ucode_patch = (struct microcode_intel *)p; }
/* @@ -332,6 +275,7 @@ scan_microcode(void *data, size_t size, { struct microcode_header_intel *mc_header; struct microcode_intel *patch = NULL; + u32 cur_rev = uci->cpu_sig.rev; unsigned int mc_size;
while (size) { @@ -341,8 +285,7 @@ scan_microcode(void *data, size_t size, mc_header = (struct microcode_header_intel *)data;
mc_size = get_totalsize(mc_header); - if (!mc_size || - mc_size > size || + if (!mc_size || mc_size > size || intel_microcode_sanity_check(data, false, MC_HEADER_TYPE_MICROCODE) < 0) break;
@@ -354,31 +297,16 @@ scan_microcode(void *data, size_t size, continue; }
- if (save) { - save_microcode_patch(uci, data, mc_size); + /* BSP scan: Check whether there is newer microcode */ + if (!save && cur_rev >= mc_header->rev) goto next; - } -
- if (!patch) { - if (!has_newer_microcode(data, - uci->cpu_sig.sig, - uci->cpu_sig.pf, - uci->cpu_sig.rev)) - goto next; - - } else { - struct microcode_header_intel *phdr = &patch->hdr; - - if (!has_newer_microcode(data, - phdr->sig, - phdr->pf, - phdr->rev)) - goto next; - } + /* Save scan: Check whether there is newer or matching microcode */ + if (save && cur_rev != mc_header->rev) + goto next;
- /* We have a newer patch, save it. */ patch = data; + cur_rev = mc_header->rev;
next: data += mc_size; @@ -387,6 +315,9 @@ next: if (size) return NULL;
+ if (save && patch) + save_microcode_patch(patch, mc_size); + return patch; }
@@ -528,26 +459,10 @@ void load_ucode_intel_ap(void) apply_microcode_early(&uci); }
-static struct microcode_intel *find_patch(struct ucode_cpu_info *uci) +/* Accessor for microcode pointer */ +static struct microcode_intel *ucode_get_patch(void) { - struct microcode_header_intel *phdr; - struct ucode_patch *iter, *tmp; - - list_for_each_entry_safe(iter, tmp, µcode_cache, plist) { - - phdr = (struct microcode_header_intel *)iter->data; - - if (phdr->rev <= uci->cpu_sig.rev) - continue; - - if (!intel_find_matching_signature(phdr, - uci->cpu_sig.sig, - uci->cpu_sig.pf)) - continue; - - return iter->data; - } - return NULL; + return intel_ucode_patch; }
void reload_ucode_intel(void) @@ -557,7 +472,7 @@ void reload_ucode_intel(void)
intel_cpu_collect_info(&uci);
- p = find_patch(&uci); + p = ucode_get_patch(); if (!p) return;
@@ -601,7 +516,7 @@ static enum ucode_state apply_microcode_ return UCODE_ERROR;
/* Look for a newer patch in our cache: */ - mc = find_patch(uci); + mc = ucode_get_patch(); if (!mc) { mc = uci->mc; if (!mc) @@ -730,7 +645,7 @@ static enum ucode_state generic_load_mic uci->mc = (struct microcode_intel *)new_mc;
/* Save for CPU hotplug */ - save_microcode_patch(uci, new_mc, new_mc_size); + save_microcode_patch(new_mc, new_mc_size);
pr_debug("CPU%d found a matching microcode update with version 0x%x (current=0x%x)\n", cpu, new_rev, uci->cpu_sig.rev); --- a/arch/x86/kernel/cpu/microcode/internal.h +++ b/arch/x86/kernel/cpu/microcode/internal.h @@ -8,16 +8,6 @@ #include <asm/cpu.h> #include <asm/microcode.h>
-struct ucode_patch { - struct list_head plist; - void *data; /* Intel uses only this one */ - unsigned int size; - u32 patch_id; - u16 equiv_cpu; -}; - -extern struct list_head microcode_cache; - struct device;
enum ucode_state {
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit b0f0bf5eef5fac6ba30b7cac15ca4cb01f8a6ca9 upstream
Make it readable and comprehensible.
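The heart of the change is folding the manual size/pointer bookkeeping into the for-statement itself. A minimal standalone sketch of that loop shape in plain C (struct and field names here are placeholders, not the kernel's):

  #include <stddef.h>

  struct hdr { unsigned int totalsize; };

  /* Walk a packed sequence of variable-sized records. */
  static void walk(unsigned char *data, size_t size)
  {
          size_t rec;

          for (; size >= sizeof(struct hdr); size -= rec, data += rec) {
                  struct hdr *h = (struct hdr *)data;

                  rec = h->totalsize;
                  if (!rec || rec > size)
                          break;  /* malformed record, stop scanning */
                  /* ... inspect the record ... */
          }
  }

The update expressions run after each body, so the bounds check, the advance and the termination condition all live in one place instead of three.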
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231002115902.271940980@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/intel.c | 28 +++++++--------------------- 1 file changed, 7 insertions(+), 21 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -266,22 +266,16 @@ static void save_microcode_patch(void *d intel_ucode_patch = (struct microcode_intel *)p; }
-/* - * Get microcode matching with BSP's model. Only CPUs with the same model as - * BSP can stay in the platform. - */ -static struct microcode_intel * -scan_microcode(void *data, size_t size, struct ucode_cpu_info *uci, bool save) +/* Scan CPIO for microcode matching the boot CPU's family, model, stepping */ +static struct microcode_intel *scan_microcode(void *data, size_t size, + struct ucode_cpu_info *uci, bool save) { struct microcode_header_intel *mc_header; struct microcode_intel *patch = NULL; u32 cur_rev = uci->cpu_sig.rev; unsigned int mc_size;
- while (size) { - if (size < sizeof(struct microcode_header_intel)) - break; - + for (; size >= sizeof(struct microcode_header_intel); size -= mc_size, data += mc_size) { mc_header = (struct microcode_header_intel *)data;
mc_size = get_totalsize(mc_header); @@ -289,27 +283,19 @@ scan_microcode(void *data, size_t size, intel_microcode_sanity_check(data, false, MC_HEADER_TYPE_MICROCODE) < 0) break;
- size -= mc_size; - - if (!intel_find_matching_signature(data, uci->cpu_sig.sig, - uci->cpu_sig.pf)) { - data += mc_size; + if (!intel_find_matching_signature(data, uci->cpu_sig.sig, uci->cpu_sig.pf)) continue; - }
/* BSP scan: Check whether there is newer microcode */ if (!save && cur_rev >= mc_header->rev) - goto next; + continue;
/* Save scan: Check whether there is newer or matching microcode */ if (save && cur_rev != mc_header->rev) - goto next; + continue;
patch = data; cur_rev = mc_header->rev; - -next: - data += mc_size; }
if (size)
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 6b072022ab2e1e83b7588144ee0080f7197b71da upstream
Simplify generic_load_microcode() so it becomes less obfuscated, and rename it because there is nothing generic about it.
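Condensed, the rework turns the loader into a plain "keep the newest matching blob" scan. A minimal userspace model (helper names are assumptions; the kernel's vmalloc bookkeeping and iov_iter plumbing are deliberately left out):

  #include <stddef.h>

  struct blob { unsigned int rev; /* + signature fields */ };

  static struct blob *pick_newest(struct blob *blobs[], int n,
                                  unsigned int cur_rev,
                                  int (*sig_matches)(struct blob *))
  {
          struct blob *new_mc = NULL;

          for (int i = 0; i < n; i++) {
                  if (cur_rev >= blobs[i]->rev)
                          continue;       /* not newer than the candidate */
                  if (!sig_matches(blobs[i]))
                          continue;       /* wrong CPU signature */
                  cur_rev = blobs[i]->rev;
                  new_mc = blobs[i];      /* adopt as the new candidate */
          }
          return new_mc;                  /* NULL maps to UCODE_NFOUND */
  }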
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231002115902.330295409@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/intel.c | 47 ++++++++++++---------------------- 1 file changed, 17 insertions(+), 30 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -240,19 +240,6 @@ int intel_microcode_sanity_check(void *m } EXPORT_SYMBOL_GPL(intel_microcode_sanity_check);
-/* - * Returns 1 if update has been found, 0 otherwise. - */ -static int has_newer_microcode(void *mc, unsigned int csig, int cpf, int new_rev) -{ - struct microcode_header_intel *mc_hdr = mc; - - if (mc_hdr->rev <= new_rev) - return 0; - - return intel_find_matching_signature(mc, csig, cpf); -} - static void save_microcode_patch(void *data, unsigned int size) { struct microcode_header_intel *p; @@ -559,14 +546,12 @@ out: return ret; }
-static enum ucode_state generic_load_microcode(int cpu, struct iov_iter *iter) +static enum ucode_state parse_microcode_blobs(int cpu, struct iov_iter *iter) { struct ucode_cpu_info *uci = ucode_cpu_info + cpu; unsigned int curr_mc_size = 0, new_mc_size = 0; - enum ucode_state ret = UCODE_OK; - int new_rev = uci->cpu_sig.rev; + int cur_rev = uci->cpu_sig.rev; u8 *new_mc = NULL, *mc = NULL; - unsigned int csig, cpf;
while (iov_iter_count(iter)) { struct microcode_header_intel mc_header; @@ -583,6 +568,7 @@ static enum ucode_state generic_load_mic pr_err("error! Bad data in microcode data file (totalsize too small)\n"); break; } + data_size = mc_size - sizeof(mc_header); if (data_size > iov_iter_count(iter)) { pr_err("error! Bad data in microcode data file (truncated file?)\n"); @@ -605,16 +591,17 @@ static enum ucode_state generic_load_mic break; }
- csig = uci->cpu_sig.sig; - cpf = uci->cpu_sig.pf; - if (has_newer_microcode(mc, csig, cpf, new_rev)) { - vfree(new_mc); - new_rev = mc_header.rev; - new_mc = mc; - new_mc_size = mc_size; - mc = NULL; /* trigger new vmalloc */ - ret = UCODE_NEW; - } + if (cur_rev >= mc_header.rev) + continue; + + if (!intel_find_matching_signature(mc, uci->cpu_sig.sig, uci->cpu_sig.pf)) + continue; + + vfree(new_mc); + cur_rev = mc_header.rev; + new_mc = mc; + new_mc_size = mc_size; + mc = NULL; }
vfree(mc); @@ -634,9 +621,9 @@ static enum ucode_state generic_load_mic save_microcode_patch(new_mc, new_mc_size);
pr_debug("CPU%d found a matching microcode update with version 0x%x (current=0x%x)\n", - cpu, new_rev, uci->cpu_sig.rev); + cpu, cur_rev, uci->cpu_sig.rev);
- return ret; + return UCODE_NEW; }
static bool is_blacklisted(unsigned int cpu) @@ -685,7 +672,7 @@ static enum ucode_state request_microcod kvec.iov_base = (void *)firmware->data; kvec.iov_len = firmware->size; iov_iter_kvec(&iter, ITER_SOURCE, &kvec, 1, firmware->size); - ret = generic_load_microcode(cpu, &iter); + ret = parse_microcode_blobs(cpu, &iter);
release_firmware(firmware);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 0177669ee61de4dc641f9ad86a3df6f22327cf6c upstream
Sanitize the microcode scan loop, fixup printks and move the loading function for builtin microcode next to the place where it is used and mark it __init.
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231002115902.389400871@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/intel.c | 76 ++++++++++++++-------------------- 1 file changed, 32 insertions(+), 44 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -36,7 +36,7 @@ static const char ucode_path[] = "kernel static struct microcode_intel *intel_ucode_patch __read_mostly;
/* last level cache size per core */ -static int llc_size_per_core __ro_after_init; +static unsigned int llc_size_per_core __ro_after_init;
/* microcode format is extended from prescott processors */ struct extended_signature { @@ -294,29 +294,6 @@ static struct microcode_intel *scan_micr return patch; }
-static bool load_builtin_intel_microcode(struct cpio_data *cp) -{ - unsigned int eax = 1, ebx, ecx = 0, edx; - struct firmware fw; - char name[30]; - - if (IS_ENABLED(CONFIG_X86_32)) - return false; - - native_cpuid(&eax, &ebx, &ecx, &edx); - - sprintf(name, "intel-ucode/%02x-%02x-%02x", - x86_family(eax), x86_model(eax), x86_stepping(eax)); - - if (firmware_request_builtin(&fw, name)) { - cp->size = fw.size; - cp->data = (void *)fw.data; - return true; - } - - return false; -} - static int apply_microcode_early(struct ucode_cpu_info *uci) { struct microcode_intel *mc; @@ -360,6 +337,28 @@ static int apply_microcode_early(struct return 0; }
+static bool load_builtin_intel_microcode(struct cpio_data *cp) +{ + unsigned int eax = 1, ebx, ecx = 0, edx; + struct firmware fw; + char name[30]; + + if (IS_ENABLED(CONFIG_X86_32)) + return false; + + native_cpuid(&eax, &ebx, &ecx, &edx); + + sprintf(name, "intel-ucode/%02x-%02x-%02x", + x86_family(eax), x86_model(eax), x86_stepping(eax)); + + if (firmware_request_builtin(&fw, name)) { + cp->size = fw.size; + cp->data = (void *)fw.data; + return true; + } + return false; +} + int __init save_microcode_in_initrd_intel(void) { struct ucode_cpu_info uci; @@ -432,25 +431,16 @@ void load_ucode_intel_ap(void) apply_microcode_early(&uci); }
-/* Accessor for microcode pointer */ -static struct microcode_intel *ucode_get_patch(void) -{ - return intel_ucode_patch; -} - void reload_ucode_intel(void) { - struct microcode_intel *p; struct ucode_cpu_info uci;
intel_cpu_collect_info(&uci);
- p = ucode_get_patch(); - if (!p) + uci.mc = intel_ucode_patch; + if (!uci.mc) return;
- uci.mc = p; - apply_microcode_early(&uci); }
@@ -488,8 +478,7 @@ static enum ucode_state apply_microcode_ if (WARN_ON(raw_smp_processor_id() != cpu)) return UCODE_ERROR;
- /* Look for a newer patch in our cache: */ - mc = ucode_get_patch(); + mc = intel_ucode_patch; if (!mc) { mc = uci->mc; if (!mc) @@ -680,18 +669,17 @@ static enum ucode_state request_microcod }
static struct microcode_ops microcode_intel_ops = { - .request_microcode_fw = request_microcode_fw, - .collect_cpu_info = collect_cpu_info, - .apply_microcode = apply_microcode_intel, + .request_microcode_fw = request_microcode_fw, + .collect_cpu_info = collect_cpu_info, + .apply_microcode = apply_microcode_intel, };
-static int __init calc_llc_size_per_core(struct cpuinfo_x86 *c) +static __init void calc_llc_size_per_core(struct cpuinfo_x86 *c) { u64 llc_size = c->x86_cache_size * 1024ULL;
do_div(llc_size, c->x86_max_cores); - - return (int)llc_size; + llc_size_per_core = (unsigned int)llc_size; }
struct microcode_ops * __init init_intel_microcode(void) @@ -704,7 +692,7 @@ struct microcode_ops * __init init_intel return NULL; }
- llc_size_per_core = calc_llc_size_per_core(c); + calc_llc_size_per_core(c);
return µcode_intel_ops; }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit dd5e3e3ca6ac011582a9f3f987493bf6741568c0 upstream.
The early loading code is overly complicated:
- It scans the builtin/initrd for microcode not only on the BSP, but also on all APs during early boot and then later in the boot process it scans again to duplicate and save the microcode before initrd goes away.
That's a pointless exercise because this can be simply done before bringing up the APs when the memory allocator is up and running.
- Saving the microcode from within the scan loop is completely non-obvious and a leftover of the microcode cache.
This can be done at the call site now which makes it obvious.
Rework the code so that only the BSP scans the builtin/initrd microcode once during early boot and save it away in an early initcall for later use.
[ bp: Test and fold in a fix from tglx ontop which handles the need to distinguish what save_microcode() does depending on when it is called:
- when on the BSP during early load, it needs to find a newer revision than the one currently loaded on the BSP
- later, before SMP init, it still runs on the BSP and gets the BSP revision just loaded and uses that revision to know which patch to save for the APs. For that it needs to find the exact one as on the BSP. ]
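The two scan modes described in the note above reduce to one small predicate, mirroring the save/!save branch inside scan_microcode(). A sketch (plain C; in the real code this sits inside the blob-walking loop):

  #include <stdbool.h>

  /*
   * save == false: early BSP scan, take anything strictly newer than
   *                the revision the CPU is currently running.
   * save == true:  pre-SMP save scan, take only the exact revision the
   *                BSP just loaded, so the APs get the same patch.
   */
  static bool take_this_patch(bool save, unsigned int cur_rev,
                              unsigned int hdr_rev)
  {
          if (save)
                  return cur_rev == hdr_rev;
          return hdr_rev > cur_rev;
  }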
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231017211722.629085215@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/core.c | 6 - arch/x86/kernel/cpu/microcode/intel.c | 163 ++++++++++++++----------------- arch/x86/kernel/cpu/microcode/internal.h | 3 3 files changed, 79 insertions(+), 93 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -42,7 +42,7 @@ #define DRIVER_VERSION "2.2"
static struct microcode_ops *microcode_ops; -static bool dis_ucode_ldr = true; +bool dis_ucode_ldr = true;
bool initrd_gone;
@@ -191,10 +191,6 @@ static int __init save_microcode_in_init }
switch (c->x86_vendor) { - case X86_VENDOR_INTEL: - if (c->x86 >= 6) - ret = save_microcode_in_initrd_intel(); - break; case X86_VENDOR_AMD: if (c->x86 >= 0x10) ret = save_microcode_in_initrd_amd(cpuid_eax(1)); --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -32,8 +32,10 @@
static const char ucode_path[] = "kernel/x86/microcode/GenuineIntel.bin";
+#define UCODE_BSP_LOADED ((struct microcode_intel *)0x1UL) + /* Current microcode patch used in early patching on the APs. */ -static struct microcode_intel *intel_ucode_patch __read_mostly; +static struct microcode_intel *ucode_patch_va __read_mostly;
/* last level cache size per core */ static unsigned int llc_size_per_core __ro_after_init; @@ -240,22 +242,30 @@ int intel_microcode_sanity_check(void *m } EXPORT_SYMBOL_GPL(intel_microcode_sanity_check);
-static void save_microcode_patch(void *data, unsigned int size) +static void update_ucode_pointer(struct microcode_intel *mc) { - struct microcode_header_intel *p; + kfree(ucode_patch_va); + + /* + * Save the virtual address for early loading and for eventual free + * on late loading. + */ + ucode_patch_va = mc; +}
- p = kmemdup(data, size, GFP_KERNEL); - if (!p) - return; +static void save_microcode_patch(struct microcode_intel *patch) +{ + struct microcode_intel *mc;
- kfree(intel_ucode_patch); - /* Save for early loading */ - intel_ucode_patch = (struct microcode_intel *)p; + mc = kmemdup(patch, get_totalsize(&patch->hdr), GFP_KERNEL); + if (mc) + update_ucode_pointer(mc); }
-/* Scan CPIO for microcode matching the boot CPU's family, model, stepping */ -static struct microcode_intel *scan_microcode(void *data, size_t size, - struct ucode_cpu_info *uci, bool save) +/* Scan blob for microcode matching the boot CPUs family, model, stepping */ +static __init struct microcode_intel *scan_microcode(void *data, size_t size, + struct ucode_cpu_info *uci, + bool save) { struct microcode_header_intel *mc_header; struct microcode_intel *patch = NULL; @@ -273,35 +283,35 @@ static struct microcode_intel *scan_micr if (!intel_find_matching_signature(data, uci->cpu_sig.sig, uci->cpu_sig.pf)) continue;
- /* BSP scan: Check whether there is newer microcode */ - if (!save && cur_rev >= mc_header->rev) - continue; - - /* Save scan: Check whether there is newer or matching microcode */ - if (save && cur_rev != mc_header->rev) + /* + * For saving the early microcode, find the matching revision which + * was loaded on the BSP. + * + * On the BSP during early boot, find a newer revision than + * actually loaded in the CPU. + */ + if (save) { + if (cur_rev != mc_header->rev) + continue; + } else if (cur_rev >= mc_header->rev) { continue; + }
patch = data; cur_rev = mc_header->rev; }
- if (size) - return NULL; - - if (save && patch) - save_microcode_patch(patch, mc_size); - - return patch; + return size ? NULL : patch; }
-static int apply_microcode_early(struct ucode_cpu_info *uci) +static enum ucode_state apply_microcode_early(struct ucode_cpu_info *uci) { struct microcode_intel *mc; u32 rev, old_rev, date;
mc = uci->mc; if (!mc) - return 0; + return UCODE_NFOUND;
/* * Save us the MSR write below - which is a particular expensive @@ -327,17 +337,17 @@ static int apply_microcode_early(struct
rev = intel_get_microcode_revision(); if (rev != mc->hdr.rev) - return -1; + return UCODE_ERROR;
uci->cpu_sig.rev = rev;
date = mc->hdr.date; pr_info_once("updated early: 0x%x -> 0x%x, date = %04x-%02x-%02x\n", old_rev, rev, date & 0xffff, date >> 24, (date >> 16) & 0xff); - return 0; + return UCODE_UPDATED; }
-static bool load_builtin_intel_microcode(struct cpio_data *cp) +static __init bool load_builtin_intel_microcode(struct cpio_data *cp) { unsigned int eax = 1, ebx, ecx = 0, edx; struct firmware fw; @@ -359,89 +369,71 @@ static bool load_builtin_intel_microcode return false; }
-int __init save_microcode_in_initrd_intel(void) +static __init struct microcode_intel *get_microcode_blob(struct ucode_cpu_info *uci, bool save) { - struct ucode_cpu_info uci; struct cpio_data cp;
- /* - * initrd is going away, clear patch ptr. We will scan the microcode one - * last time before jettisoning and save a patch, if found. Then we will - * update that pointer too, with a stable patch address to use when - * resuming the cores. - */ - intel_ucode_patch = NULL; - if (!load_builtin_intel_microcode(&cp)) cp = find_microcode_in_initrd(ucode_path);
if (!(cp.data && cp.size)) - return 0; + return NULL;
- intel_cpu_collect_info(&uci); + intel_cpu_collect_info(uci);
- scan_microcode(cp.data, cp.size, &uci, true); - return 0; + return scan_microcode(cp.data, cp.size, uci, save); }
/* - * @res_patch, output: a pointer to the patch we found. + * Invoked from an early init call to save the microcode blob which was + * selected during early boot when mm was not usable. The microcode must be + * saved because initrd is going away. It's an early init call so the APs + * just can use the pointer and do not have to scan initrd/builtin firmware + * again. */ -static struct microcode_intel *__load_ucode_intel(struct ucode_cpu_info *uci) +static int __init save_builtin_microcode(void) { - struct cpio_data cp; - - /* try built-in microcode first */ - if (!load_builtin_intel_microcode(&cp)) - cp = find_microcode_in_initrd(ucode_path); + struct ucode_cpu_info uci;
- if (!(cp.data && cp.size)) - return NULL; + if (xchg(&ucode_patch_va, NULL) != UCODE_BSP_LOADED) + return 0;
- intel_cpu_collect_info(uci); + if (dis_ucode_ldr || boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) + return 0;
- return scan_microcode(cp.data, cp.size, uci, false); + uci.mc = get_microcode_blob(&uci, true); + if (uci.mc) + save_microcode_patch(uci.mc); + return 0; } +early_initcall(save_builtin_microcode);
+/* Load microcode on BSP from initrd or builtin blobs */ void __init load_ucode_intel_bsp(void) { - struct microcode_intel *patch; struct ucode_cpu_info uci;
- patch = __load_ucode_intel(&uci); - if (!patch) - return; - - uci.mc = patch; - - apply_microcode_early(&uci); + uci.mc = get_microcode_blob(&uci, false); + if (uci.mc && apply_microcode_early(&uci) == UCODE_UPDATED) + ucode_patch_va = UCODE_BSP_LOADED; }
void load_ucode_intel_ap(void) { struct ucode_cpu_info uci;
- if (!intel_ucode_patch) { - intel_ucode_patch = __load_ucode_intel(&uci); - if (!intel_ucode_patch) - return; - } - - uci.mc = intel_ucode_patch; - apply_microcode_early(&uci); + uci.mc = ucode_patch_va; + if (uci.mc) + apply_microcode_early(&uci); }
+/* Reload microcode on resume */ void reload_ucode_intel(void) { - struct ucode_cpu_info uci; - - intel_cpu_collect_info(&uci); - - uci.mc = intel_ucode_patch; - if (!uci.mc) - return; + struct ucode_cpu_info uci = { .mc = ucode_patch_va, };
- apply_microcode_early(&uci); + if (uci.mc) + apply_microcode_early(&uci); }
static int collect_cpu_info(int cpu_num, struct cpu_signature *csig) @@ -478,7 +470,7 @@ static enum ucode_state apply_microcode_ if (WARN_ON(raw_smp_processor_id() != cpu)) return UCODE_ERROR;
- mc = intel_ucode_patch; + mc = ucode_patch_va; if (!mc) { mc = uci->mc; if (!mc) @@ -538,8 +530,8 @@ out: static enum ucode_state parse_microcode_blobs(int cpu, struct iov_iter *iter) { struct ucode_cpu_info *uci = ucode_cpu_info + cpu; - unsigned int curr_mc_size = 0, new_mc_size = 0; int cur_rev = uci->cpu_sig.rev; + unsigned int curr_mc_size = 0; u8 *new_mc = NULL, *mc = NULL;
while (iov_iter_count(iter)) { @@ -589,7 +581,6 @@ static enum ucode_state parse_microcode_ vfree(new_mc); cur_rev = mc_header.rev; new_mc = mc; - new_mc_size = mc_size; mc = NULL; }
@@ -603,11 +594,11 @@ static enum ucode_state parse_microcode_ if (!new_mc) return UCODE_NFOUND;
- vfree(uci->mc); - uci->mc = (struct microcode_intel *)new_mc; - /* Save for CPU hotplug */ - save_microcode_patch(new_mc, new_mc_size); + save_microcode_patch((struct microcode_intel *)new_mc); + uci->mc = ucode_patch_va; + + vfree(new_mc);
pr_debug("CPU%d found a matching microcode update with version 0x%x (current=0x%x)\n", cpu, cur_rev, uci->cpu_sig.rev); --- a/arch/x86/kernel/cpu/microcode/internal.h +++ b/arch/x86/kernel/cpu/microcode/internal.h @@ -84,6 +84,7 @@ static inline unsigned int x86_cpuid_fam return x86_family(eax); }
+extern bool dis_ucode_ldr; extern bool initrd_gone;
#ifdef CONFIG_CPU_SUP_AMD @@ -107,13 +108,11 @@ static inline void exit_amd_microcode(vo #ifdef CONFIG_CPU_SUP_INTEL void load_ucode_intel_bsp(void); void load_ucode_intel_ap(void); -int save_microcode_in_initrd_intel(void); void reload_ucode_intel(void); struct microcode_ops *init_intel_microcode(void); #else /* CONFIG_CPU_SUP_INTEL */ static inline void load_ucode_intel_bsp(void) { } static inline void load_ucode_intel_ap(void) { } -static inline int save_microcode_in_initrd_intel(void) { return -EINVAL; } static inline void reload_ucode_intel(void) { } static inline struct microcode_ops *init_intel_microcode(void) { return NULL; } #endif /* !CONFIG_CPU_SUP_INTEL */
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 2a1dada3d1cf8f80a27663653a371d99dbf5d540 upstream
There are situations where the late microcode is loaded into memory but is not applied:
1) The rendezvous fails
2) The microcode is rejected by the CPUs
If either of these happens, the pointer which was updated at firmware load time is stale and subsequent CPU hotplug operations either fail to update or create inconsistent microcode state.
Save the loaded microcode in a separate pointer before the late load is attempted and, if the load succeeds, update the hotplug pointer accordingly via a new microcode_ops callback.
Remove the pointless fallback in the loader to a microcode pointer which is never populated.
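The resulting commit-or-discard scheme is easy to model in isolation. A sketch in userspace C (the kernel version in this patch copies the buffer via save_microcode_patch() before freeing the staging copy; this model adopts it directly, as the later kvmalloc() conversion in this series ends up doing):

  #include <stdlib.h>

  static void *committed;  /* models ucode_patch_va, used by hotplug */
  static void *staged;     /* models ucode_patch_late */

  static void finalize_late_load(int result)
  {
          if (!result) {          /* rendezvous succeeded: commit */
                  free(committed);
                  committed = staged;
          } else {                /* failed or rejected: discard */
                  free(staged);
          }
          staged = NULL;          /* staging slot is always cleared */
  }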
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231002115902.505491309@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/core.c | 4 ++++ arch/x86/kernel/cpu/microcode/intel.c | 30 +++++++++++++++--------------- arch/x86/kernel/cpu/microcode/internal.h | 1 + 3 files changed, 20 insertions(+), 15 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -403,6 +403,10 @@ static int microcode_reload_late(void) store_cpu_caps(&prev_info);
ret = stop_machine_cpuslocked(__reload_late, NULL, cpu_online_mask); + + if (microcode_ops->finalize_late_load) + microcode_ops->finalize_late_load(ret); + if (!ret) { pr_info("Reload succeeded, microcode revision: 0x%x -> 0x%x\n", old, boot_cpu_data.microcode); --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -36,6 +36,7 @@ static const char ucode_path[] = "kernel
/* Current microcode patch used in early patching on the APs. */ static struct microcode_intel *ucode_patch_va __read_mostly; +static struct microcode_intel *ucode_patch_late __read_mostly;
/* last level cache size per core */ static unsigned int llc_size_per_core __ro_after_init; @@ -470,12 +471,9 @@ static enum ucode_state apply_microcode_ if (WARN_ON(raw_smp_processor_id() != cpu)) return UCODE_ERROR;
- mc = ucode_patch_va; - if (!mc) { - mc = uci->mc; - if (!mc) - return UCODE_NFOUND; - } + mc = ucode_patch_late; + if (!mc) + return UCODE_NFOUND;
/* * Save us the MSR write below - which is a particular expensive @@ -594,15 +592,7 @@ static enum ucode_state parse_microcode_ if (!new_mc) return UCODE_NFOUND;
- /* Save for CPU hotplug */ - save_microcode_patch((struct microcode_intel *)new_mc); - uci->mc = ucode_patch_va; - - vfree(new_mc); - - pr_debug("CPU%d found a matching microcode update with version 0x%x (current=0x%x)\n", - cpu, cur_rev, uci->cpu_sig.rev); - + ucode_patch_late = (struct microcode_intel *)new_mc; return UCODE_NEW; }
@@ -659,10 +649,20 @@ static enum ucode_state request_microcod return ret; }
+static void finalize_late_load(int result) +{ + if (!result) + save_microcode_patch(ucode_patch_late); + + vfree(ucode_patch_late); + ucode_patch_late = NULL; +} + static struct microcode_ops microcode_intel_ops = { .request_microcode_fw = request_microcode_fw, .collect_cpu_info = collect_cpu_info, .apply_microcode = apply_microcode_intel, + .finalize_late_load = finalize_late_load, };
static __init void calc_llc_size_per_core(struct cpuinfo_x86 *c) --- a/arch/x86/kernel/cpu/microcode/internal.h +++ b/arch/x86/kernel/cpu/microcode/internal.h @@ -31,6 +31,7 @@ struct microcode_ops { */ enum ucode_state (*apply_microcode)(int cpu); int (*collect_cpu_info)(int cpu, struct cpu_signature *csig); + void (*finalize_late_load)(int result); };
extern struct ucode_cpu_info ucode_cpu_info[];
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit f24f204405f9875bc539c6e88553fd5ac913c867 upstream
Microcode blobs are getting larger and might soon reach the kmalloc() limit. Switch over to kvmalloc().
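A quick reminder of the API semantics this relies on (a sketch, not code from the patch): kvmalloc() attempts kmalloc() first and transparently falls back to vmalloc() for sizes the slab allocator cannot serve, and kvfree() releases either kind, so the caller never needs to know which path was taken. The patch itself uses kvmemdup(), which wraps exactly this allocate-and-copy pattern:

  #include <linux/slab.h>    /* kvmalloc(), kvfree(), GFP_KERNEL */
  #include <linux/string.h>  /* memcpy() */

  /* Duplicate a possibly large blob without caring how it was allocated. */
  static void *dup_blob(const void *src, size_t size)
  {
          void *p = kvmalloc(size, GFP_KERNEL);

          if (p)
                  memcpy(p, src, size);
          return p;  /* release with kvfree(p) */
  }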
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231002115902.564323243@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/intel.c | 48 +++++++++++++++++----------------- 1 file changed, 25 insertions(+), 23 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -14,7 +14,6 @@ #include <linux/earlycpio.h> #include <linux/firmware.h> #include <linux/uaccess.h> -#include <linux/vmalloc.h> #include <linux/initrd.h> #include <linux/kernel.h> #include <linux/slab.h> @@ -245,7 +244,7 @@ EXPORT_SYMBOL_GPL(intel_microcode_sanity
static void update_ucode_pointer(struct microcode_intel *mc) { - kfree(ucode_patch_va); + kvfree(ucode_patch_va);
/* * Save the virtual address for early loading and for eventual free @@ -256,11 +255,14 @@ static void update_ucode_pointer(struct
static void save_microcode_patch(struct microcode_intel *patch) { + unsigned int size = get_totalsize(&patch->hdr); struct microcode_intel *mc;
- mc = kmemdup(patch, get_totalsize(&patch->hdr), GFP_KERNEL); + mc = kvmemdup(patch, size, GFP_KERNEL); if (mc) update_ucode_pointer(mc); + else + pr_err("Unable to allocate microcode memory size: %u\n", size); }
/* Scan blob for microcode matching the boot CPUs family, model, stepping */ @@ -539,36 +541,34 @@ static enum ucode_state parse_microcode_
if (!copy_from_iter_full(&mc_header, sizeof(mc_header), iter)) { pr_err("error! Truncated or inaccessible header in microcode data file\n"); - break; + goto fail; }
mc_size = get_totalsize(&mc_header); if (mc_size < sizeof(mc_header)) { pr_err("error! Bad data in microcode data file (totalsize too small)\n"); - break; + goto fail; } - data_size = mc_size - sizeof(mc_header); if (data_size > iov_iter_count(iter)) { pr_err("error! Bad data in microcode data file (truncated file?)\n"); - break; + goto fail; }
/* For performance reasons, reuse mc area when possible */ if (!mc || mc_size > curr_mc_size) { - vfree(mc); - mc = vmalloc(mc_size); + kvfree(mc); + mc = kvmalloc(mc_size, GFP_KERNEL); if (!mc) - break; + goto fail; curr_mc_size = mc_size; }
memcpy(mc, &mc_header, sizeof(mc_header)); data = mc + sizeof(mc_header); if (!copy_from_iter_full(data, data_size, iter) || - intel_microcode_sanity_check(mc, true, MC_HEADER_TYPE_MICROCODE) < 0) { - break; - } + intel_microcode_sanity_check(mc, true, MC_HEADER_TYPE_MICROCODE) < 0) + goto fail;
if (cur_rev >= mc_header.rev) continue; @@ -576,24 +576,26 @@ static enum ucode_state parse_microcode_ if (!intel_find_matching_signature(mc, uci->cpu_sig.sig, uci->cpu_sig.pf)) continue;
- vfree(new_mc); + kvfree(new_mc); cur_rev = mc_header.rev; new_mc = mc; mc = NULL; }
- vfree(mc); - - if (iov_iter_count(iter)) { - vfree(new_mc); - return UCODE_ERROR; - } + if (iov_iter_count(iter)) + goto fail;
+ kvfree(mc); if (!new_mc) return UCODE_NFOUND;
ucode_patch_late = (struct microcode_intel *)new_mc; return UCODE_NEW; + +fail: + kvfree(mc); + kvfree(new_mc); + return UCODE_ERROR; }
static bool is_blacklisted(unsigned int cpu) @@ -652,9 +654,9 @@ static enum ucode_state request_microcod static void finalize_late_load(int result) { if (!result) - save_microcode_patch(ucode_patch_late); - - vfree(ucode_patch_late); + update_ucode_pointer(ucode_patch_late); + else + kvfree(ucode_patch_late); ucode_patch_late = NULL; }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 3973718cff1e3a5d88ea78ec28ecca2afa60b30b upstream
Deduplicate the early and late apply() functions.
[ bp: Rename the function which does the actual application to __apply_microcode() to differentiate it from microcode_ops.apply_microcode(). ]
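The split is easy to see in miniature. A userspace model of the shared core (the MSR write and revision re-read are replaced by plain variables; only the early wrapper is shown, the late one differs just in its per-CPU bookkeeping):

  #include <stdio.h>

  enum ucode_state { UCODE_OK, UCODE_UPDATED, UCODE_NFOUND, UCODE_ERROR };

  /* Shared core: apply only if the blob is newer, hand back the previous
   * revision through *cur_rev so the callers can log or account for it. */
  static enum ucode_state apply_core(unsigned int *cpu_rev,
                                     unsigned int mc_rev,
                                     unsigned int *cur_rev)
  {
          *cur_rev = *cpu_rev;
          if (*cur_rev >= mc_rev)
                  return UCODE_OK;   /* sibling thread already updated */
          *cpu_rev = mc_rev;         /* stands in for the MSR write */
          return UCODE_UPDATED;
  }

  static enum ucode_state apply_early(unsigned int *cpu_rev,
                                      unsigned int mc_rev)
  {
          unsigned int old;
          enum ucode_state ret = apply_core(cpu_rev, mc_rev, &old);

          if (ret == UCODE_UPDATED)
                  printf("updated early: 0x%x -> 0x%x\n", old, mc_rev);
          return ret;
  }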
Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Signed-off-by: Thomas Gleixner tglx@linutronix.de Link: https://lore.kernel.org/r/20231017211722.795508212@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/intel.c | 106 +++++++++++----------------------- 1 file changed, 37 insertions(+), 69 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -307,12 +307,12 @@ static __init struct microcode_intel *sc return size ? NULL : patch; }
-static enum ucode_state apply_microcode_early(struct ucode_cpu_info *uci) +static enum ucode_state __apply_microcode(struct ucode_cpu_info *uci, + struct microcode_intel *mc, + u32 *cur_rev) { - struct microcode_intel *mc; - u32 rev, old_rev, date; + u32 rev;
- mc = uci->mc; if (!mc) return UCODE_NFOUND;
@@ -321,14 +321,12 @@ static enum ucode_state apply_microcode_ * operation - when the other hyperthread has updated the microcode * already. */ - rev = intel_get_microcode_revision(); - if (rev >= mc->hdr.rev) { - uci->cpu_sig.rev = rev; + *cur_rev = intel_get_microcode_revision(); + if (*cur_rev >= mc->hdr.rev) { + uci->cpu_sig.rev = *cur_rev; return UCODE_OK; }
- old_rev = rev; - /* * Writeback and invalidate caches before updating microcode to avoid * internal issues depending on what the microcode is updating. @@ -343,13 +341,24 @@ static enum ucode_state apply_microcode_ return UCODE_ERROR;
uci->cpu_sig.rev = rev; - - date = mc->hdr.date; - pr_info_once("updated early: 0x%x -> 0x%x, date = %04x-%02x-%02x\n", - old_rev, rev, date & 0xffff, date >> 24, (date >> 16) & 0xff); return UCODE_UPDATED; }
+static enum ucode_state apply_microcode_early(struct ucode_cpu_info *uci) +{ + struct microcode_intel *mc = uci->mc; + enum ucode_state ret; + u32 cur_rev, date; + + ret = __apply_microcode(uci, mc, &cur_rev); + if (ret == UCODE_UPDATED) { + date = mc->hdr.date; + pr_info_once("updated early: 0x%x -> 0x%x, date = %04x-%02x-%02x\n", + cur_rev, mc->hdr.rev, date & 0xffff, date >> 24, (date >> 16) & 0xff); + } + return ret; +} + static __init bool load_builtin_intel_microcode(struct cpio_data *cp) { unsigned int eax = 1, ebx, ecx = 0, edx; @@ -459,70 +468,29 @@ static int collect_cpu_info(int cpu_num, return 0; }
-static enum ucode_state apply_microcode_intel(int cpu) +static enum ucode_state apply_microcode_late(int cpu) { struct ucode_cpu_info *uci = ucode_cpu_info + cpu; - struct cpuinfo_x86 *c = &cpu_data(cpu); - bool bsp = c->cpu_index == boot_cpu_data.cpu_index; - struct microcode_intel *mc; + struct microcode_intel *mc = ucode_patch_late; enum ucode_state ret; - static int prev_rev; - u32 rev; - - /* We should bind the task to the CPU */ - if (WARN_ON(raw_smp_processor_id() != cpu)) - return UCODE_ERROR; - - mc = ucode_patch_late; - if (!mc) - return UCODE_NFOUND; - - /* - * Save us the MSR write below - which is a particular expensive - * operation - when the other hyperthread has updated the microcode - * already. - */ - rev = intel_get_microcode_revision(); - if (rev >= mc->hdr.rev) { - ret = UCODE_OK; - goto out; - } + u32 cur_rev;
- /* - * Writeback and invalidate caches before updating microcode to avoid - * internal issues depending on what the microcode is updating. - */ - native_wbinvd(); - - /* write microcode via MSR 0x79 */ - wrmsrl(MSR_IA32_UCODE_WRITE, (unsigned long)mc->bits); - - rev = intel_get_microcode_revision(); - - if (rev != mc->hdr.rev) { - pr_err("CPU%d update to revision 0x%x failed\n", - cpu, mc->hdr.rev); + if (WARN_ON_ONCE(smp_processor_id() != cpu)) return UCODE_ERROR; - }
- if (bsp && rev != prev_rev) { - pr_info("updated to revision 0x%x, date = %04x-%02x-%02x\n", - rev, - mc->hdr.date & 0xffff, - mc->hdr.date >> 24, + ret = __apply_microcode(uci, mc, &cur_rev); + if (ret != UCODE_UPDATED && ret != UCODE_OK) + return ret; + + if (!cpu && uci->cpu_sig.rev != cur_rev) { + pr_info("Updated to revision 0x%x, date = %04x-%02x-%02x\n", + uci->cpu_sig.rev, mc->hdr.date & 0xffff, mc->hdr.date >> 24, (mc->hdr.date >> 16) & 0xff); - prev_rev = rev; }
- ret = UCODE_UPDATED; - -out: - uci->cpu_sig.rev = rev; - c->microcode = rev; - - /* Update boot_cpu_data's revision too, if we're on the BSP: */ - if (bsp) - boot_cpu_data.microcode = rev; + cpu_data(cpu).microcode = uci->cpu_sig.rev; + if (!cpu) + boot_cpu_data.microcode = uci->cpu_sig.rev;
return ret; } @@ -663,7 +631,7 @@ static void finalize_late_load(int resul static struct microcode_ops microcode_intel_ops = { .request_microcode_fw = request_microcode_fw, .collect_cpu_info = collect_cpu_info, - .apply_microcode = apply_microcode_intel, + .apply_microcode = apply_microcode_late, .finalize_late_load = finalize_late_load, };
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 164aa1ca537238c46923ccacd8995b4265aee47b upstream
Nothing needs struct ucode_cpu_info. Make it take struct cpu_signature, let it return a boolean and simplify the implementation. Rename it now that the silly name clash with collect_cpu_info() is gone.
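The interesting part of the implementation is the processor-flags derivation, which is compact enough to restate standalone (userspace model: the MSR read becomes a plain argument):

  /* The platform ID lives in bits 52:50 of MSR 0x17 (IA32_PLATFORM_ID).
   * pf is the corresponding one-hot mask, so a CPU's pf can later be
   * tested for intersection with a microcode blob's pf mask. */
  static unsigned int pf_from_platform_id(unsigned long long msr)
  {
          return 1u << ((msr >> 50) & 7);
  }

In the kernel code the shift is (val[1] >> 18) & 7 because rdmsr returns the high 32 bits separately; 18 + 32 = 50, the same bits.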
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231017211722.851573238@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/include/asm/cpu.h | 4 ++-- arch/x86/kernel/cpu/microcode/intel.c | 33 +++++++++------------------------ drivers/platform/x86/intel/ifs/load.c | 8 +++----- 3 files changed, 14 insertions(+), 31 deletions(-)
--- a/arch/x86/include/asm/cpu.h +++ b/arch/x86/include/asm/cpu.h @@ -71,9 +71,9 @@ static inline void init_ia32_feat_ctl(st
extern __noendbr void cet_disable(void);
-struct ucode_cpu_info; +struct cpu_signature;
-int intel_cpu_collect_info(struct ucode_cpu_info *uci); +void intel_collect_cpu_info(struct cpu_signature *sig);
static inline bool intel_cpu_signatures_match(unsigned int s1, unsigned int p1, unsigned int s2, unsigned int p2) --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -68,36 +68,21 @@ static inline unsigned int exttable_size return et->count * EXT_SIGNATURE_SIZE + EXT_HEADER_SIZE; }
-int intel_cpu_collect_info(struct ucode_cpu_info *uci) +void intel_collect_cpu_info(struct cpu_signature *sig) { - unsigned int val[2]; - unsigned int family, model; - struct cpu_signature csig = { 0 }; - unsigned int eax, ebx, ecx, edx; + sig->sig = cpuid_eax(1); + sig->pf = 0; + sig->rev = intel_get_microcode_revision();
- memset(uci, 0, sizeof(*uci)); + if (x86_model(sig->sig) >= 5 || x86_family(sig->sig) > 6) { + unsigned int val[2];
- eax = 0x00000001; - ecx = 0; - native_cpuid(&eax, &ebx, &ecx, &edx); - csig.sig = eax; - - family = x86_family(eax); - model = x86_model(eax); - - if (model >= 5 || family > 6) { /* get processor flags from MSR 0x17 */ native_rdmsr(MSR_IA32_PLATFORM_ID, val[0], val[1]); - csig.pf = 1 << ((val[1] >> 18) & 7); + sig->pf = 1 << ((val[1] >> 18) & 7); } - - csig.rev = intel_get_microcode_revision(); - - uci->cpu_sig = csig; - - return 0; } -EXPORT_SYMBOL_GPL(intel_cpu_collect_info); +EXPORT_SYMBOL_GPL(intel_collect_cpu_info);
/* * Returns 1 if update has been found, 0 otherwise. @@ -391,7 +376,7 @@ static __init struct microcode_intel *ge if (!(cp.data && cp.size)) return NULL;
- intel_cpu_collect_info(uci); + intel_collect_cpu_info(&uci->cpu_sig);
return scan_microcode(cp.data, cp.size, uci, save); } --- a/drivers/platform/x86/intel/ifs/load.c +++ b/drivers/platform/x86/intel/ifs/load.c @@ -227,7 +227,7 @@ out:
static int image_sanity_check(struct device *dev, const struct microcode_header_intel *data) { - struct ucode_cpu_info uci; + struct cpu_signature sig;
/* Provide a specific error message when loading an older/unsupported image */ if (data->hdrver != MC_HEADER_TYPE_IFS) { @@ -240,11 +240,9 @@ static int image_sanity_check(struct dev return -EINVAL; }
- intel_cpu_collect_info(&uci); + intel_collect_cpu_info(&sig);
- if (!intel_find_matching_signature((void *)data, - uci.cpu_sig.sig, - uci.cpu_sig.pf)) { + if (!intel_find_matching_signature((void *)data, sig.sig, sig.pf)) { dev_err(dev, "cpu signature, processor flags not matching\n"); return -EINVAL; }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 11f96ac4c21e701650c7d8349b252973185ac6ce upstream
No point in having an almost duplicate function.
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231002115902.741173606@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/intel.c | 16 +--------------- 1 file changed, 1 insertion(+), 15 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -435,21 +435,7 @@ void reload_ucode_intel(void)
static int collect_cpu_info(int cpu_num, struct cpu_signature *csig) { - struct cpuinfo_x86 *c = &cpu_data(cpu_num); - unsigned int val[2]; - - memset(csig, 0, sizeof(*csig)); - - csig->sig = cpuid_eax(0x00000001); - - if ((c->x86_model >= 5) || (c->x86 > 6)) { - /* get processor flags from MSR 0x17 */ - rdmsr(MSR_IA32_PLATFORM_ID, val[0], val[1]); - csig->pf = 1 << ((val[1] >> 18) & 7); - } - - csig->rev = c->microcode; - + intel_collect_cpu_info(csig); return 0; }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit b7fcd995b261c9976e05f47554529c98a0f1cbb0 upstream
Take a cpu_signature argument and work from there. Move the match() helper next to the callsite as there is no point in having it in a header.
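The relocated helper is self-contained enough to restate as standalone C (the struct layout mirrors the kernel's cpu_signature):

  #include <stdbool.h>

  struct cpu_signature { unsigned int sig, pf, rev; };

  /* Signatures match when the IDs are equal and the platform-flag masks
   * are either both empty or share at least one bit. */
  static bool cpu_signatures_match(const struct cpu_signature *s1,
                                   unsigned int sig2, unsigned int pf2)
  {
          if (s1->sig != sig2)
                  return false;
          return (!s1->pf && !pf2) || (s1->pf & pf2);
  }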
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231002115902.797820205@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/include/asm/cpu.h | 16 +--------------- arch/x86/kernel/cpu/microcode/intel.c | 31 +++++++++++++++++++------------ drivers/platform/x86/intel/ifs/load.c | 2 +- 3 files changed, 21 insertions(+), 28 deletions(-)
--- a/arch/x86/include/asm/cpu.h +++ b/arch/x86/include/asm/cpu.h @@ -75,22 +75,8 @@ struct cpu_signature;
void intel_collect_cpu_info(struct cpu_signature *sig);
-static inline bool intel_cpu_signatures_match(unsigned int s1, unsigned int p1, - unsigned int s2, unsigned int p2) -{ - if (s1 != s2) - return false; - - /* Processor flags are either both 0 ... */ - if (!p1 && !p2) - return true; - - /* ... or they intersect. */ - return p1 & p2; -} - extern u64 x86_read_arch_cap_msr(void); -int intel_find_matching_signature(void *mc, unsigned int csig, int cpf); +bool intel_find_matching_signature(void *mc, struct cpu_signature *sig); int intel_microcode_sanity_check(void *mc, bool print_err, int hdr_type);
extern struct cpumask cpus_stop_mask; --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -84,29 +84,36 @@ void intel_collect_cpu_info(struct cpu_s } EXPORT_SYMBOL_GPL(intel_collect_cpu_info);
-/* - * Returns 1 if update has been found, 0 otherwise. - */ -int intel_find_matching_signature(void *mc, unsigned int csig, int cpf) +static inline bool cpu_signatures_match(struct cpu_signature *s1, unsigned int sig2, + unsigned int pf2) +{ + if (s1->sig != sig2) + return false; + + /* Processor flags are either both 0 or they intersect. */ + return ((!s1->pf && !pf2) || (s1->pf & pf2)); +} + +bool intel_find_matching_signature(void *mc, struct cpu_signature *sig) { struct microcode_header_intel *mc_hdr = mc; - struct extended_sigtable *ext_hdr; struct extended_signature *ext_sig; + struct extended_sigtable *ext_hdr; int i;
- if (intel_cpu_signatures_match(csig, cpf, mc_hdr->sig, mc_hdr->pf)) - return 1; + if (cpu_signatures_match(sig, mc_hdr->sig, mc_hdr->pf)) + return true;
/* Look for ext. headers: */ if (get_totalsize(mc_hdr) <= intel_microcode_get_datasize(mc_hdr) + MC_HEADER_SIZE) - return 0; + return false;
ext_hdr = mc + intel_microcode_get_datasize(mc_hdr) + MC_HEADER_SIZE; ext_sig = (void *)ext_hdr + EXT_HEADER_SIZE;
for (i = 0; i < ext_hdr->count; i++) { - if (intel_cpu_signatures_match(csig, cpf, ext_sig->sig, ext_sig->pf)) - return 1; + if (cpu_signatures_match(sig, ext_sig->sig, ext_sig->pf)) + return true; ext_sig++; } return 0; @@ -268,7 +275,7 @@ static __init struct microcode_intel *sc intel_microcode_sanity_check(data, false, MC_HEADER_TYPE_MICROCODE) < 0) break;
- if (!intel_find_matching_signature(data, uci->cpu_sig.sig, uci->cpu_sig.pf)) + if (!intel_find_matching_signature(data, &uci->cpu_sig)) continue;
/* @@ -512,7 +519,7 @@ static enum ucode_state parse_microcode_ if (cur_rev >= mc_header.rev) continue;
- if (!intel_find_matching_signature(mc, uci->cpu_sig.sig, uci->cpu_sig.pf)) + if (!intel_find_matching_signature(mc, &uci->cpu_sig)) continue;
kvfree(new_mc); --- a/drivers/platform/x86/intel/ifs/load.c +++ b/drivers/platform/x86/intel/ifs/load.c @@ -242,7 +242,7 @@ static int image_sanity_check(struct dev
intel_collect_cpu_info(&sig);
- if (!intel_find_matching_signature((void *)data, sig.sig, sig.pf)) { + if (!intel_find_matching_signature((void *)data, &sig)) { dev_err(dev, "cpu signature, processor flags not matching\n"); return -EINVAL; }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit b48b26f992a3828b4ae274669f99ce68451d4904 upstream
Microcode is applied on the APs during early bringup. There is no point in trying to apply the microcode again during hotplug operations, nor at the point where the microcode device is initialized.
Collect CPU info and microcode revision in setup_online_cpu() for now. This will move to the CPU hotplug callback later.
[ bp: Leave the starting notifier for the following scenario:
- boot, late load, suspend to disk, resume
without the starting notifier, only the last core manages to update the microcode upon resume:
# rdmsr -a 0x8b
10000bf
10000bf
10000bf
10000bf
10000bf
10000dc <----
This is on an AMD F10h machine.
For the future, one should check whether potential unification of the CPU init path could cover the resume path too so that this can be simplified even more.
tglx: This is caused by the odd handling of APs which try to find the microcode blob in builtin or initrd instead of caching the microcode blob during early init before the APs are brought up. Will be cleaned up in a later step. ]
Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Signed-off-by: Thomas Gleixner tglx@linutronix.de Link: https://lore.kernel.org/r/20231017211723.018821624@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/core.c | 23 ++++++----------------- 1 file changed, 6 insertions(+), 17 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -493,17 +493,6 @@ static void microcode_fini_cpu(int cpu) microcode_ops->microcode_fini_cpu(cpu); }
-static enum ucode_state microcode_init_cpu(int cpu) -{ - struct ucode_cpu_info *uci = ucode_cpu_info + cpu; - - memset(uci, 0, sizeof(*uci)); - - microcode_ops->collect_cpu_info(cpu, &uci->cpu_sig); - - return microcode_ops->apply_microcode(cpu); -} - /** * microcode_bsp_resume - Update boot CPU microcode during resume. */ @@ -558,14 +547,14 @@ static int mc_cpu_down_prep(unsigned int static void setup_online_cpu(struct work_struct *work) { int cpu = smp_processor_id(); - enum ucode_state err; + struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
- err = microcode_init_cpu(cpu); - if (err == UCODE_ERROR) { - pr_err("Error applying microcode on CPU%d\n", cpu); - return; - } + memset(uci, 0, sizeof(*uci));
+ microcode_ops->collect_cpu_info(cpu, &uci->cpu_sig); + cpu_data(cpu).microcode = uci->cpu_sig.rev; + if (!cpu) + boot_cpu_data.microcode = uci->cpu_sig.rev; mc_cpu_online(cpu); }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit ecfd41089348fa4cc767dc588367e9fdf8cb6b9d upstream
find_blobs_in_containers() is invoked on every CPU but unconditionally overwrites the ucode_cpu_info of CPU0.
Fix this by using the proper CPU data and move the assignment into the call site apply_ucode_from_containers() so that the function can be reused.
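The underlying mistake is the classic pointer-decay one: ucode_cpu_info is an array, so ucode_cpu_info->cpu_sig is always element 0. A tiny standalone illustration (fields flattened, array size hypothetical):

  struct ucode_cpu_info { unsigned int sig; };
  static struct ucode_cpu_info ucode_cpu_info[4];

  static void record_sig(unsigned int cpu, unsigned int cpuid_1_eax)
  {
          /* Buggy form: ucode_cpu_info->sig = cpuid_1_eax;
           * decays to ucode_cpu_info[0].sig, so every AP stomped on
           * CPU0's entry. Index by the executing CPU instead: */
          ucode_cpu_info[cpu].sig = cpuid_1_eax;
  }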
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231010150702.433454320@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/amd.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -503,9 +503,6 @@ static void find_blobs_in_containers(uns if (!get_builtin_microcode(&cp, x86_family(cpuid_1_eax))) cp = find_microcode_in_initrd(ucode_path);
- /* Needed in load_microcode_amd() */ - ucode_cpu_info->cpu_sig.sig = cpuid_1_eax; - *ret = cp; }
@@ -513,6 +510,9 @@ static void apply_ucode_from_containers( { struct cpio_data cp = { };
+ /* Needed in load_microcode_amd() */ + ucode_cpu_info[smp_processor_id()].cpu_sig.sig = cpuid_1_eax; + find_blobs_in_containers(cpuid_1_eax, &cp); if (!(cp.data && cp.size)) return;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit d419d28261e72e1c9ec418711b3da41df2265139 upstream
save_microcode_in_initrd_amd() fails to cache builtin microcode and only scans initrd.
Use find_blobs_in_containers() instead, which covers both.
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231010150702.495139089@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/amd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -533,7 +533,7 @@ int __init save_microcode_in_initrd_amd( enum ucode_state ret; struct cpio_data cp;
- cp = find_microcode_in_initrd(ucode_path); + find_blobs_in_containers(cpuid_1_eax, &cp); if (!(cp.data && cp.size)) return -EINVAL;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit a7939f01672034a58ad3fdbce69bb6c665ce0024 upstream
There is no reason to scan builtin/initrd microcode on each AP.
Cache the builtin/initrd microcode in an early initcall so that the early AP loader can utilize the cache.
The existing fs initcall which invoked save_microcode_in_initrd_amd() is still required to maintain the initrd_gone flag. Rename it accordingly. This will be removed once the AP loader code is converted to use the cache.
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231017211723.187566507@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/amd.c | 8 +++++++- arch/x86/kernel/cpu/microcode/core.c | 26 ++++---------------------- 2 files changed, 11 insertions(+), 23 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -527,12 +527,17 @@ void load_ucode_amd_early(unsigned int c
static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size);
-int __init save_microcode_in_initrd_amd(unsigned int cpuid_1_eax) +static int __init save_microcode_in_initrd(void) { + unsigned int cpuid_1_eax = native_cpuid_eax(1); + struct cpuinfo_x86 *c = &boot_cpu_data; struct cont_desc desc = { 0 }; enum ucode_state ret; struct cpio_data cp;
+ if (dis_ucode_ldr || c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10) + return 0; + find_blobs_in_containers(cpuid_1_eax, &cp); if (!(cp.data && cp.size)) return -EINVAL; @@ -549,6 +554,7 @@ int __init save_microcode_in_initrd_amd(
return 0; } +early_initcall(save_microcode_in_initrd);
/* * a small, trivial cache of per-family ucode patches --- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -180,30 +180,13 @@ void load_ucode_ap(void) } }
-static int __init save_microcode_in_initrd(void) +/* Temporary workaround until find_microcode_in_initrd() is __init */ +static int __init mark_initrd_gone(void) { - struct cpuinfo_x86 *c = &boot_cpu_data; - int ret = -EINVAL; - - if (dis_ucode_ldr) { - ret = 0; - goto out; - } - - switch (c->x86_vendor) { - case X86_VENDOR_AMD: - if (c->x86 >= 0x10) - ret = save_microcode_in_initrd_amd(cpuid_eax(1)); - break; - default: - break; - } - -out: initrd_gone = true; - - return ret; + return 0; } +fs_initcall(mark_initrd_gone);
struct cpio_data find_microcode_in_initrd(const char *path) { @@ -621,5 +604,4 @@ static int __init microcode_init(void) return error;
} -fs_initcall(save_microcode_in_initrd); late_initcall(microcode_init);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 5af05b8d51a8e3ff5905663655c0f46d1aaae44a upstream
Now that the microcode cache is initialized before the APs are brought up, there is no point in scanning builtin/initrd microcode during AP loading.
Convert the AP loader to utilize the cache. This in turn makes the CPU hotplug callback which applies the microcode after initrd/builtin is gone obsolete, as the early loading during late hotplug operations, including the resume path, now depends solely on the cache.
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231017211723.243426023@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/amd.c | 20 +++++++++++--------- arch/x86/kernel/cpu/microcode/core.c | 15 ++------------- arch/x86/kernel/cpu/microcode/internal.h | 2 -- 3 files changed, 13 insertions(+), 24 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -496,7 +496,7 @@ static bool get_builtin_microcode(struct return false; }
-static void find_blobs_in_containers(unsigned int cpuid_1_eax, struct cpio_data *ret) +static void __init find_blobs_in_containers(unsigned int cpuid_1_eax, struct cpio_data *ret) { struct cpio_data cp;
@@ -506,12 +506,12 @@ static void find_blobs_in_containers(uns *ret = cp; }
-static void apply_ucode_from_containers(unsigned int cpuid_1_eax) +void __init load_ucode_amd_bsp(unsigned int cpuid_1_eax) { struct cpio_data cp = { };
/* Needed in load_microcode_amd() */ - ucode_cpu_info[smp_processor_id()].cpu_sig.sig = cpuid_1_eax; + ucode_cpu_info[0].cpu_sig.sig = cpuid_1_eax;
find_blobs_in_containers(cpuid_1_eax, &cp); if (!(cp.data && cp.size)) @@ -520,11 +520,6 @@ static void apply_ucode_from_containers( early_apply_microcode(cpuid_1_eax, cp.data, cp.size); }
-void load_ucode_amd_early(unsigned int cpuid_1_eax) -{ - return apply_ucode_from_containers(cpuid_1_eax); -} - static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size);
static int __init save_microcode_in_initrd(void) @@ -608,7 +603,6 @@ static struct ucode_patch *find_patch(un struct ucode_cpu_info *uci = ucode_cpu_info + cpu; u16 equiv_id;
- equiv_id = find_equiv_id(&equiv_table, uci->cpu_sig.sig); if (!equiv_id) return NULL; @@ -710,6 +704,14 @@ out: return ret; }
+void load_ucode_amd_ap(unsigned int cpuid_1_eax) +{ + unsigned int cpu = smp_processor_id(); + + ucode_cpu_info[cpu].cpu_sig.sig = cpuid_1_eax; + apply_microcode_amd(cpu); +} + static size_t install_equiv_cpu_table(const u8 *buf, size_t buf_size) { u32 equiv_tbl_len; --- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -154,7 +154,7 @@ void __init load_ucode_bsp(void) if (intel) load_ucode_intel_bsp(); else - load_ucode_amd_early(cpuid_1_eax); + load_ucode_amd_bsp(cpuid_1_eax); }
void load_ucode_ap(void) @@ -173,7 +173,7 @@ void load_ucode_ap(void) break; case X86_VENDOR_AMD: if (x86_family(cpuid_1_eax) >= 0x10) - load_ucode_amd_early(cpuid_1_eax); + load_ucode_amd_ap(cpuid_1_eax); break; default: break; @@ -494,15 +494,6 @@ static struct syscore_ops mc_syscore_ops .resume = microcode_bsp_resume, };
-static int mc_cpu_starting(unsigned int cpu) -{ - enum ucode_state err = microcode_ops->apply_microcode(cpu); - - pr_debug("%s: CPU%d, err: %d\n", __func__, cpu, err); - - return err == UCODE_ERROR; -} - static int mc_cpu_online(unsigned int cpu) { struct device *dev = get_cpu_device(cpu); @@ -590,8 +581,6 @@ static int __init microcode_init(void) schedule_on_each_cpu(setup_online_cpu);
register_syscore_ops(&mc_syscore_ops); - cpuhp_setup_state_nocalls(CPUHP_AP_MICROCODE_LOADER, "x86/microcode:starting", - mc_cpu_starting, NULL); cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "x86/microcode:online", mc_cpu_online, mc_cpu_down_prep);
--- a/arch/x86/kernel/cpu/microcode/internal.h +++ b/arch/x86/kernel/cpu/microcode/internal.h @@ -91,7 +91,6 @@ extern bool initrd_gone; #ifdef CONFIG_CPU_SUP_AMD void load_ucode_amd_bsp(unsigned int family); void load_ucode_amd_ap(unsigned int family); -void load_ucode_amd_early(unsigned int cpuid_1_eax); int save_microcode_in_initrd_amd(unsigned int family); void reload_ucode_amd(unsigned int cpu); struct microcode_ops *init_amd_microcode(void); @@ -99,7 +98,6 @@ void exit_amd_microcode(void); #else /* CONFIG_CPU_SUP_AMD */ static inline void load_ucode_amd_bsp(unsigned int family) { } static inline void load_ucode_amd_ap(unsigned int family) { } -static inline void load_ucode_amd_early(unsigned int family) { } static inline int save_microcode_in_initrd_amd(unsigned int family) { return -EINVAL; } static inline void reload_ucode_amd(unsigned int cpu) { } static inline struct microcode_ops *init_amd_microcode(void) { return NULL; }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 8529e8ab6c6fab8ebf06ead98e77d7646b42fc48 upstream
Get rid of the initrd_gone hack which was required to keep find_microcode_in_initrd() functional after init.
As find_microcode_in_initrd() is now only used during init, mark it accordingly.
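As a side note on why the annotation matters: __init code is placed in .init.text and freed once boot completes, so it must not be reachable from hotplug or resume paths. A minimal sketch (wrapper name hypothetical):

    /* Sketch only: __init text is discarded by free_initmem() at the
     * end of boot; reaching scan_initrd_blobs() afterwards would jump
     * into freed memory. */
    static struct cpio_data __init scan_initrd_blobs(const char *path)
    {
            return find_microcode_in_initrd(path);
    }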
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231017211723.298854846@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/core.c | 17 +---------------- arch/x86/kernel/cpu/microcode/internal.h | 1 - 2 files changed, 1 insertion(+), 17 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -44,8 +44,6 @@ static struct microcode_ops *microcode_ops; bool dis_ucode_ldr = true;
-bool initrd_gone; - /* * Synchronization. * @@ -180,15 +178,7 @@ void load_ucode_ap(void) } }
-/* Temporary workaround until find_microcode_in_initrd() is __init */ -static int __init mark_initrd_gone(void) -{ - initrd_gone = true; - return 0; -} -fs_initcall(mark_initrd_gone); - -struct cpio_data find_microcode_in_initrd(const char *path) +struct cpio_data __init find_microcode_in_initrd(const char *path) { #ifdef CONFIG_BLK_DEV_INITRD unsigned long start = 0; @@ -216,12 +206,7 @@ struct cpio_data find_microcode_in_initr * has the virtual address of the beginning of the initrd. It also * possibly relocates the ramdisk. In either case, initrd_start contains * the updated address so use that instead. - * - * initrd_gone is for the hotplug case where we've thrown out initrd - * already. */ - if (initrd_gone) - return (struct cpio_data){ NULL, 0, "" }; if (initrd_start) start = initrd_start;
--- a/arch/x86/kernel/cpu/microcode/internal.h +++ b/arch/x86/kernel/cpu/microcode/internal.h @@ -86,7 +86,6 @@ static inline unsigned int x86_cpuid_fam }
extern bool dis_ucode_ldr; -extern bool initrd_gone;
#ifdef CONFIG_CPU_SUP_AMD void load_ucode_amd_bsp(unsigned int family);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 2e1997335ceb6fc819862804f51d4fe83593c138 upstream
Scheduling work on all CPUs to collect the microcode information is just an extra step which adds no value. Let the CPU hotplug callback registration do it.
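For context: unlike cpuhp_setup_state_nocalls(), cpuhp_setup_state() invokes the startup callback on every CPU which is already online at registration time, which is what makes the extra schedule_on_each_cpu() pass redundant. A minimal sketch with a hypothetical callback:

    static int demo_cpu_online(unsigned int cpu)
    {
            /* Runs on each already-online CPU during registration and
             * on every CPU which comes online afterwards. */
            return 0;
    }

    ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/demo:online",
                            demo_cpu_online, NULL);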
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231017211723.354748138@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/core.c | 29 ++++++++++------------------- 1 file changed, 10 insertions(+), 19 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -481,8 +481,16 @@ static struct syscore_ops mc_syscore_ops
static int mc_cpu_online(unsigned int cpu) { + struct ucode_cpu_info *uci = ucode_cpu_info + cpu; struct device *dev = get_cpu_device(cpu);
+ memset(uci, 0, sizeof(*uci)); + + microcode_ops->collect_cpu_info(cpu, &uci->cpu_sig); + cpu_data(cpu).microcode = uci->cpu_sig.rev; + if (!cpu) + boot_cpu_data.microcode = uci->cpu_sig.rev; + if (sysfs_create_group(&dev->kobj, &mc_attr_group)) pr_err("Failed to create group for CPU%d\n", cpu); return 0; @@ -503,20 +511,6 @@ static int mc_cpu_down_prep(unsigned int return 0; }
-static void setup_online_cpu(struct work_struct *work) -{ - int cpu = smp_processor_id(); - struct ucode_cpu_info *uci = ucode_cpu_info + cpu; - - memset(uci, 0, sizeof(*uci)); - - microcode_ops->collect_cpu_info(cpu, &uci->cpu_sig); - cpu_data(cpu).microcode = uci->cpu_sig.rev; - if (!cpu) - boot_cpu_data.microcode = uci->cpu_sig.rev; - mc_cpu_online(cpu); -} - static struct attribute *cpu_root_microcode_attrs[] = { #ifdef CONFIG_MICROCODE_LATE_LOADING &dev_attr_reload.attr, @@ -562,12 +556,9 @@ static int __init microcode_init(void) } }
- /* Do per-CPU setup */ - schedule_on_each_cpu(setup_online_cpu); - register_syscore_ops(&mc_syscore_ops); - cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "x86/microcode:online", - mc_cpu_online, mc_cpu_down_prep); + cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/microcode:online", + mc_cpu_online, mc_cpu_down_prep);
pr_info("Microcode Update Driver: v%s.", DRIVER_VERSION);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit ba48aa32388ac652256baa8d0a6092d350160da0 upstream
This function has nothing to do with suspend. It's a hotplug callback. Remove the bogus comment.
Drop the pointless debug printk. The hotplug core provides tracepoints which track the invocation of those callbacks.
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231002115903.028651784@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/core.c | 8 +------- 1 file changed, 1 insertion(+), 7 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -498,16 +498,10 @@ static int mc_cpu_online(unsigned int cp
static int mc_cpu_down_prep(unsigned int cpu) { - struct device *dev; - - dev = get_cpu_device(cpu); + struct device *dev = get_cpu_device(cpu);
microcode_fini_cpu(cpu); - - /* Suspend is in progress, only remove the interface */ sysfs_remove_group(&dev->kobj, &mc_attr_group); - pr_debug("%s: CPU%d\n", __func__, cpu); - return 0; }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 634ac23ad609b3ddd9e0e478bd5afbf49d3a2556 upstream
On CPUs where microcode loading is not NMI-safe the SMT siblings which are parked in one of the play_dead() variants still react to NMIs.
So if an NMI hits while the primary thread updates the microcode, the resulting behaviour is undefined. The default play_dead() implementation on modern CPUs uses MWAIT, which is not guaranteed to be safe against a microcode update which affects MWAIT.
Take the cpus_booted_once_mask into account to detect this case and refuse to load late if the vendor specific driver does not advertise that late loading is NMI safe.
AMD stated that this is safe, so mark the AMD driver accordingly.
This requirement will be partially lifted in later changes.
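Condensed, the gating logic which this adds (mirroring ensure_cpus_are_online() in the hunk below):

    for_each_cpu_and(cpu, cpu_present_mask, &cpus_booted_once_mask) {
            /* A primary thread must be online; an offline SMT sibling
             * is tolerable only when the driver is NMI safe. */
            if (!cpu_online(cpu) &&
                (topology_is_primary_thread(cpu) || !microcode_ops->nmi_safe))
                    return false;
    }
    return true;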
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231002115903.087472735@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/Kconfig | 2 - arch/x86/kernel/cpu/microcode/amd.c | 9 +++-- arch/x86/kernel/cpu/microcode/core.c | 51 +++++++++++++++++++------------ arch/x86/kernel/cpu/microcode/internal.h | 13 +++---- 4 files changed, 44 insertions(+), 31 deletions(-)
--- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -1317,7 +1317,7 @@ config MICROCODE config MICROCODE_LATE_LOADING bool "Late microcode loading (DANGEROUS)" default n - depends on MICROCODE + depends on MICROCODE && SMP help Loading microcode late, when the system is up and executing instructions is a tricky business and should be avoided if possible. Just the sequence --- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -917,10 +917,11 @@ static void microcode_fini_cpu_amd(int c }
static struct microcode_ops microcode_amd_ops = { - .request_microcode_fw = request_microcode_amd, - .collect_cpu_info = collect_cpu_info_amd, - .apply_microcode = apply_microcode_amd, - .microcode_fini_cpu = microcode_fini_cpu_amd, + .request_microcode_fw = request_microcode_amd, + .collect_cpu_info = collect_cpu_info_amd, + .apply_microcode = apply_microcode_amd, + .microcode_fini_cpu = microcode_fini_cpu_amd, + .nmi_safe = true, };
struct microcode_ops * __init init_amd_microcode(void) --- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -254,23 +254,6 @@ static struct platform_device *microcode */ #define SPINUNIT 100 /* 100 nsec */
-static int check_online_cpus(void) -{ - unsigned int cpu; - - /* - * Make sure all CPUs are online. It's fine for SMT to be disabled if - * all the primary threads are still online. - */ - for_each_present_cpu(cpu) { - if (topology_is_primary_thread(cpu) && !cpu_online(cpu)) { - pr_err("Not all CPUs online, aborting microcode update.\n"); - return -EINVAL; - } - } - - return 0; -}
static atomic_t late_cpus_in; static atomic_t late_cpus_out; @@ -387,6 +370,35 @@ static int microcode_reload_late(void) return ret; }
+/* + * Ensure that all required CPUs which are present and have been booted + * once are online. + * + * To pass this check, all primary threads must be online. + * + * If the microcode load is not safe against NMI then all SMT threads + * must be online as well because they still react to NMIs when they are + * soft-offlined and parked in one of the play_dead() variants. So if a + * NMI hits while the primary thread updates the microcode the resulting + * behaviour is undefined. The default play_dead() implementation on + * modern CPUs uses MWAIT, which is also not guaranteed to be safe + * against a microcode update which affects MWAIT. + */ +static bool ensure_cpus_are_online(void) +{ + unsigned int cpu; + + for_each_cpu_and(cpu, cpu_present_mask, &cpus_booted_once_mask) { + if (!cpu_online(cpu)) { + if (topology_is_primary_thread(cpu) || !microcode_ops->nmi_safe) { + pr_err("CPU %u not online\n", cpu); + return false; + } + } + } + return true; +} + static ssize_t reload_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t size) @@ -402,9 +414,10 @@ static ssize_t reload_store(struct devic
cpus_read_lock();
- ret = check_online_cpus(); - if (ret) + if (!ensure_cpus_are_online()) { + ret = -EBUSY; goto put; + }
tmp_ret = microcode_ops->request_microcode_fw(bsp, µcode_pdev->dev); if (tmp_ret != UCODE_NEW) --- a/arch/x86/kernel/cpu/microcode/internal.h +++ b/arch/x86/kernel/cpu/microcode/internal.h @@ -20,18 +20,17 @@ enum ucode_state {
struct microcode_ops { enum ucode_state (*request_microcode_fw)(int cpu, struct device *dev); - void (*microcode_fini_cpu)(int cpu);
/* - * The generic 'microcode_core' part guarantees that - * the callbacks below run on a target cpu when they - * are being called. + * The generic 'microcode_core' part guarantees that the callbacks + * below run on a target CPU when they are being called. * See also the "Synchronization" section in microcode_core.c. */ - enum ucode_state (*apply_microcode)(int cpu); - int (*collect_cpu_info)(int cpu, struct cpu_signature *csig); - void (*finalize_late_load)(int result); + enum ucode_state (*apply_microcode)(int cpu); + int (*collect_cpu_info)(int cpu, struct cpu_signature *csig); + void (*finalize_late_load)(int result); + unsigned int nmi_safe : 1; };
extern struct ucode_cpu_info ucode_cpu_info[];
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 6f059e634dcd0d725854514c94c114bbdd83950d upstream
reload_store() is way too complicated. Split the inner workings out and make the following enhancements:
- Taint the kernel only when the microcode was actually updated. If, e.g., the rendezvous fails, then nothing happened and there is no reason for tainting.
- Return useful error codes
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Reviewed-by: Nikolay Borisov nik.borisov@suse.com Link: https://lore.kernel.org/r/20231002115903.145048840@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/core.c | 41 ++++++++++++++++------------------- 1 file changed, 19 insertions(+), 22 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -362,11 +362,11 @@ static int microcode_reload_late(void) pr_info("Reload succeeded, microcode revision: 0x%x -> 0x%x\n", old, boot_cpu_data.microcode); microcode_check(&prev_info); + add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK); } else { pr_info("Reload failed, current microcode revision: 0x%x\n", boot_cpu_data.microcode); } - return ret; }
@@ -399,40 +399,37 @@ static bool ensure_cpus_are_online(void) return true; }
+static int ucode_load_late_locked(void) +{ + if (!ensure_cpus_are_online()) + return -EBUSY; + + switch (microcode_ops->request_microcode_fw(0, µcode_pdev->dev)) { + case UCODE_NEW: + return microcode_reload_late(); + case UCODE_NFOUND: + return -ENOENT; + default: + return -EBADFD; + } +} + static ssize_t reload_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t size) { - enum ucode_state tmp_ret = UCODE_OK; - int bsp = boot_cpu_data.cpu_index; unsigned long val; - ssize_t ret = 0; + ssize_t ret;
ret = kstrtoul(buf, 0, &val); if (ret || val != 1) return -EINVAL;
cpus_read_lock(); - - if (!ensure_cpus_are_online()) { - ret = -EBUSY; - goto put; - } - - tmp_ret = microcode_ops->request_microcode_fw(bsp, µcode_pdev->dev); - if (tmp_ret != UCODE_NEW) - goto put; - - ret = microcode_reload_late(); -put: + ret = ucode_load_late_locked(); cpus_read_unlock();
- if (ret == 0) - ret = size; - - add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK); - - return ret; + return ret ? : size; }
static DEVICE_ATTR_WO(reload);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 0772b9aa1a8f7322dce8588c231cff8b57298a53 upstream
The code is too complicated for no reason:
- The return value is pointless as this is a strict boolean.
- It's way simpler to count down from num_online_cpus() and check for zero.
- The timeout argument is pointless as this is always one second.
- Touching the NMI watchdog every 100ns does not make any sense; neither does checking every 100ns. This is really not a hotpath operation.
Preload the atomic counter with the number of online CPUs and simplify the whole timeout logic. Delay for one microsecond and touch the NMI watchdog once per millisecond.
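The resulting rendezvous, condensed (a sketch of the wait_for_cpus() added below, omitting the underflow WARN and the late-comer blocking):

    static bool wait_for_cpus(atomic_t *cnt)  /* preloaded with num_online_cpus() */
    {
            unsigned int timeout;

            atomic_dec(cnt);                          /* announce arrival */
            for (timeout = 0; timeout < USEC_PER_SEC; timeout++) {
                    if (!atomic_read(cnt))            /* everyone arrived */
                            return true;
                    udelay(1);
                    if (!(timeout % USEC_PER_MSEC))   /* once per millisecond */
                            touch_nmi_watchdog();
            }
            return false;                             /* timed out */
    }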
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231002115903.204251527@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/core.c | 39 +++++++++++++++-------------------- 1 file changed, 17 insertions(+), 22 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -252,31 +252,26 @@ static struct platform_device *microcode * requirement can be relaxed in the future. Right now, this is conservative * and good. */ -#define SPINUNIT 100 /* 100 nsec */ +static atomic_t late_cpus_in, late_cpus_out;
- -static atomic_t late_cpus_in; -static atomic_t late_cpus_out; - -static int __wait_for_cpus(atomic_t *t, long long timeout) +static bool wait_for_cpus(atomic_t *cnt) { - int all_cpus = num_online_cpus(); + unsigned int timeout;
- atomic_inc(t); + WARN_ON_ONCE(atomic_dec_return(cnt) < 0);
- while (atomic_read(t) < all_cpus) { - if (timeout < SPINUNIT) { - pr_err("Timeout while waiting for CPUs rendezvous, remaining: %d\n", - all_cpus - atomic_read(t)); - return 1; - } + for (timeout = 0; timeout < USEC_PER_SEC; timeout++) { + if (!atomic_read(cnt)) + return true;
- ndelay(SPINUNIT); - timeout -= SPINUNIT; + udelay(1);
- touch_nmi_watchdog(); + if (!(timeout % USEC_PER_MSEC)) + touch_nmi_watchdog(); } - return 0; + /* Prevent the late comers from making progress and let them time out */ + atomic_inc(cnt); + return false; }
/* @@ -294,7 +289,7 @@ static int __reload_late(void *info) * Wait for all CPUs to arrive. A load will not be attempted unless all * CPUs show up. * */ - if (__wait_for_cpus(&late_cpus_in, NSEC_PER_SEC)) + if (!wait_for_cpus(&late_cpus_in)) return -1;
/* @@ -317,7 +312,7 @@ static int __reload_late(void *info) }
wait_for_siblings: - if (__wait_for_cpus(&late_cpus_out, NSEC_PER_SEC)) + if (!wait_for_cpus(&late_cpus_out)) panic("Timeout during microcode update!\n");
/* @@ -344,8 +339,8 @@ static int microcode_reload_late(void) pr_err("Attempting late microcode loading - it is dangerous and taints the kernel.\n"); pr_err("You should switch to early loading, if possible.\n");
- atomic_set(&late_cpus_in, 0); - atomic_set(&late_cpus_out, 0); + atomic_set(&late_cpus_in, num_online_cpus()); + atomic_set(&late_cpus_out, num_online_cpus());
/* * Take a snapshot before the microcode update in order to compare and
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 4b753955e9151ad2f722137a7bcbafda756186b3 upstream
The microcode rendezvous is purely acting on global state, which does not allow analyzing failures in a coherent way.
Introduce per CPU state into which the results are written, which allows analyzing the return codes of the individual CPUs.
Initialize the state when walking the cpu_present_mask in the online check to avoid another for_each_cpu() loop.
Enhance the result printout with that.
The structure is intentionally named ucode_ctrl as it will gain control fields in subsequent changes.
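The per CPU mechanics in a nutshell (struct as introduced in the hunk below; analysis loop condensed):

    struct microcode_ctrl {
            enum ucode_state result;
    };
    static DEFINE_PER_CPU(struct microcode_ctrl, ucode_ctrl);

    /* On the CPU performing the update: */
    this_cpu_write(ucode_ctrl.result, ret);

    /* Afterwards, when analyzing the results: */
    for_each_cpu_and(cpu, cpu_present_mask, &cpus_booted_once_mask)
            result = per_cpu(ucode_ctrl.result, cpu);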
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231017211723.632681010@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/core.c | 112 ++++++++++++++++++------------- arch/x86/kernel/cpu/microcode/internal.h | 1 2 files changed, 67 insertions(+), 46 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -252,6 +252,11 @@ static struct platform_device *microcode * requirement can be relaxed in the future. Right now, this is conservative * and good. */ +struct microcode_ctrl { + enum ucode_state result; +}; + +static DEFINE_PER_CPU(struct microcode_ctrl, ucode_ctrl); static atomic_t late_cpus_in, late_cpus_out;
static bool wait_for_cpus(atomic_t *cnt) @@ -274,23 +279,19 @@ static bool wait_for_cpus(atomic_t *cnt) return false; }
-/* - * Returns: - * < 0 - on error - * 0 - success (no update done or microcode was updated) - */ -static int __reload_late(void *info) +static int load_cpus_stopped(void *unused) { int cpu = smp_processor_id(); - enum ucode_state err; - int ret = 0; + enum ucode_state ret;
/* * Wait for all CPUs to arrive. A load will not be attempted unless all * CPUs show up. * */ - if (!wait_for_cpus(&late_cpus_in)) - return -1; + if (!wait_for_cpus(&late_cpus_in)) { + this_cpu_write(ucode_ctrl.result, UCODE_TIMEOUT); + return 0; + }
/* * On an SMT system, it suffices to load the microcode on one sibling of @@ -299,17 +300,11 @@ static int __reload_late(void *info) * loading attempts happen on multiple threads of an SMT core. See * below. */ - if (cpumask_first(topology_sibling_cpumask(cpu)) == cpu) - err = microcode_ops->apply_microcode(cpu); - else + if (cpumask_first(topology_sibling_cpumask(cpu)) != cpu) goto wait_for_siblings;
- if (err >= UCODE_NFOUND) { - if (err == UCODE_ERROR) { - pr_warn("Error reloading microcode on CPU %d\n", cpu); - ret = -1; - } - } + ret = microcode_ops->apply_microcode(cpu); + this_cpu_write(ucode_ctrl.result, ret);
wait_for_siblings: if (!wait_for_cpus(&late_cpus_out)) @@ -321,19 +316,18 @@ wait_for_siblings: * per-cpu cpuinfo can be updated with right microcode * revision. */ - if (cpumask_first(topology_sibling_cpumask(cpu)) != cpu) - err = microcode_ops->apply_microcode(cpu); + if (cpumask_first(topology_sibling_cpumask(cpu)) == cpu) + return 0;
- return ret; + ret = microcode_ops->apply_microcode(cpu); + this_cpu_write(ucode_ctrl.result, ret); + return 0; }
-/* - * Reload microcode late on all CPUs. Wait for a sec until they - * all gather together. - */ -static int microcode_reload_late(void) +static int load_late_stop_cpus(void) { - int old = boot_cpu_data.microcode, ret; + unsigned int cpu, updated = 0, failed = 0, timedout = 0, siblings = 0; + int old_rev = boot_cpu_data.microcode; struct cpuinfo_x86 prev_info;
pr_err("Attempting late microcode loading - it is dangerous and taints the kernel.\n"); @@ -348,26 +342,47 @@ static int microcode_reload_late(void) */ store_cpu_caps(&prev_info);
- ret = stop_machine_cpuslocked(__reload_late, NULL, cpu_online_mask); + stop_machine_cpuslocked(load_cpus_stopped, NULL, cpu_online_mask); + + /* Analyze the results */ + for_each_cpu_and(cpu, cpu_present_mask, &cpus_booted_once_mask) { + switch (per_cpu(ucode_ctrl.result, cpu)) { + case UCODE_UPDATED: updated++; break; + case UCODE_TIMEOUT: timedout++; break; + case UCODE_OK: siblings++; break; + default: failed++; break; + } + }
if (microcode_ops->finalize_late_load) - microcode_ops->finalize_late_load(ret); + microcode_ops->finalize_late_load(!updated);
- if (!ret) { - pr_info("Reload succeeded, microcode revision: 0x%x -> 0x%x\n", - old, boot_cpu_data.microcode); - microcode_check(&prev_info); - add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK); - } else { - pr_info("Reload failed, current microcode revision: 0x%x\n", - boot_cpu_data.microcode); + if (!updated) { + /* Nothing changed. */ + if (!failed && !timedout) + return 0; + pr_err("update failed: %u CPUs failed %u CPUs timed out\n", + failed, timedout); + return -EIO; } - return ret; + + add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK); + pr_info("load: updated on %u primary CPUs with %u siblings\n", updated, siblings); + if (failed || timedout) { + pr_err("load incomplete. %u CPUs timed out or failed\n", + num_online_cpus() - (updated + siblings)); + } + pr_info("revision: 0x%x -> 0x%x\n", old_rev, boot_cpu_data.microcode); + microcode_check(&prev_info); + + return updated + siblings == num_online_cpus() ? 0 : -EIO; }
/* - * Ensure that all required CPUs which are present and have been booted - * once are online. + * This function does two things: + * + * 1) Ensure that all required CPUs which are present and have been booted + * once are online. * * To pass this check, all primary threads must be online. * @@ -378,9 +393,12 @@ static int microcode_reload_late(void) * behaviour is undefined. The default play_dead() implementation on * modern CPUs uses MWAIT, which is also not guaranteed to be safe * against a microcode update which affects MWAIT. + * + * 2) Initialize the per CPU control structure */ -static bool ensure_cpus_are_online(void) +static bool setup_cpus(void) { + struct microcode_ctrl ctrl = { .result = -1, }; unsigned int cpu;
for_each_cpu_and(cpu, cpu_present_mask, &cpus_booted_once_mask) { @@ -390,18 +408,20 @@ static bool ensure_cpus_are_online(void) return false; } } + /* Initialize the per CPU state */ + per_cpu(ucode_ctrl, cpu) = ctrl; } return true; }
-static int ucode_load_late_locked(void) +static int load_late_locked(void) { - if (!ensure_cpus_are_online()) + if (!setup_cpus()) return -EBUSY;
switch (microcode_ops->request_microcode_fw(0, µcode_pdev->dev)) { case UCODE_NEW: - return microcode_reload_late(); + return load_late_stop_cpus(); case UCODE_NFOUND: return -ENOENT; default: @@ -421,7 +441,7 @@ static ssize_t reload_store(struct devic return -EINVAL;
cpus_read_lock(); - ret = ucode_load_late_locked(); + ret = load_late_locked(); cpus_read_unlock();
return ret ? : size; --- a/arch/x86/kernel/cpu/microcode/internal.h +++ b/arch/x86/kernel/cpu/microcode/internal.h @@ -16,6 +16,7 @@ enum ucode_state { UCODE_UPDATED, UCODE_NFOUND, UCODE_ERROR, + UCODE_TIMEOUT, };
struct microcode_ops {
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit ba3aeb97cb2c53025356f31c5a0a294385194115 upstream
Add a per CPU control field to ucode_ctrl and define constants for it which are going to be used to control the loading state machine.
In theory this could be a global control field, but a global control does not cover the following case:
  15 primary CPUs load microcode successfully
   1 primary CPU fails and returns with an error code
With global control the sibling of the failed CPU would either try again or the whole operation would be aborted with the consequence that the 15 siblings do not invoke the apply path and end up with inconsistent software state. The result in dmesg would be inconsistent too.
There are two additional fields added and initialized:
ctrl_cpu and secondaries. ctrl_cpu is the CPU number of the primary thread for now, but with the upcoming uniform loading at package or system scope this will be one CPU per package or just one CPU. secondaries hands the control CPU a CPU mask which is required to release the secondary CPUs from the wait loop.
Preparatory change for implementing a properly split control flow for primary and secondary CPUs.
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231002115903.319959519@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/core.c | 20 ++++++++++++++++++-- 1 file changed, 18 insertions(+), 2 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -252,8 +252,19 @@ static struct platform_device *microcode * requirement can be relaxed in the future. Right now, this is conservative * and good. */ +enum sibling_ctrl { + /* Spinwait with timeout */ + SCTRL_WAIT, + /* Invoke the microcode_apply() callback */ + SCTRL_APPLY, + /* Proceed without invoking the microcode_apply() callback */ + SCTRL_DONE, +}; + struct microcode_ctrl { + enum sibling_ctrl ctrl; enum ucode_state result; + unsigned int ctrl_cpu; };
static DEFINE_PER_CPU(struct microcode_ctrl, ucode_ctrl); @@ -398,7 +409,7 @@ static int load_late_stop_cpus(void) */ static bool setup_cpus(void) { - struct microcode_ctrl ctrl = { .result = -1, }; + struct microcode_ctrl ctrl = { .ctrl = SCTRL_WAIT, .result = -1, }; unsigned int cpu;
for_each_cpu_and(cpu, cpu_present_mask, &cpus_booted_once_mask) { @@ -408,7 +419,12 @@ static bool setup_cpus(void) return false; } } - /* Initialize the per CPU state */ + + /* + * Initialize the per CPU state. This is core scope for now, + * but prepared to take package or system scope into account. + */ + ctrl.ctrl_cpu = cpumask_first(topology_sibling_cpumask(cpu)); per_cpu(ucode_ctrl, cpu) = ctrl; } return true;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 6067788f04b1020b316344fe34746f96d594a042 upstream
The current all-in-one code is unreadable and really not suited for adding future features like uniform loading with package or system scope.
Provide a set of new control functions which split the handling of the primary and secondary CPUs. These will replace the current all-in-one rendezvous function in the next step. This is intentionally a separate change because the diff would otherwise be a completely unreadable mess.
So the flow separates the primary and the secondary CPUs into their own functions which use the control field in the per CPU ucode_ctrl struct.
   primary()                       secondary()
    wait_for_all()                  wait_for_all()
    apply_ucode()                   wait_for_release()
    release()                       apply_ucode()
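In rough pseudo-C (helper names illustrative; the real functions are in the diff below):

    static void load_primary(unsigned int cpu)
    {
            wait_for_all();                /* rendezvous with all CPUs     */
            ret = apply_ucode(cpu);        /* update on the primary thread */
            release(cpu, ret);             /* SCTRL_APPLY or SCTRL_DONE    */
    }

    static void load_secondary(unsigned int cpu)
    {
            wait_for_all();                /* rendezvous with all CPUs     */
            wait_for_release();            /* spin on ucode_ctrl.ctrl      */
            if (this_cpu_read(ucode_ctrl.ctrl) == SCTRL_APPLY)
                    apply_ucode(cpu);      /* primary succeeded            */
    }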
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231002115903.377922731@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/core.c | 84 +++++++++++++++++++++++++++++++++++ 1 file changed, 84 insertions(+)
--- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -290,6 +290,90 @@ static bool wait_for_cpus(atomic_t *cnt) return false; }
+static bool wait_for_ctrl(void) +{ + unsigned int timeout; + + for (timeout = 0; timeout < USEC_PER_SEC; timeout++) { + if (this_cpu_read(ucode_ctrl.ctrl) != SCTRL_WAIT) + return true; + udelay(1); + if (!(timeout % 1000)) + touch_nmi_watchdog(); + } + return false; +} + +static __maybe_unused void load_secondary(unsigned int cpu) +{ + unsigned int ctrl_cpu = this_cpu_read(ucode_ctrl.ctrl_cpu); + enum ucode_state ret; + + /* Initial rendezvous to ensure that all CPUs have arrived */ + if (!wait_for_cpus(&late_cpus_in)) { + pr_err_once("load: %d CPUs timed out\n", atomic_read(&late_cpus_in) - 1); + this_cpu_write(ucode_ctrl.result, UCODE_TIMEOUT); + return; + } + + /* + * Wait for primary threads to complete. If one of them hangs due + * to the update, there is no way out. This is non-recoverable + * because the CPU might hold locks or resources and confuse the + * scheduler, watchdogs etc. There is no way to safely evacuate the + * machine. + */ + if (!wait_for_ctrl()) + panic("Microcode load: Primary CPU %d timed out\n", ctrl_cpu); + + /* + * If the primary succeeded then invoke the apply() callback, + * otherwise copy the state from the primary thread. + */ + if (this_cpu_read(ucode_ctrl.ctrl) == SCTRL_APPLY) + ret = microcode_ops->apply_microcode(cpu); + else + ret = per_cpu(ucode_ctrl.result, ctrl_cpu); + + this_cpu_write(ucode_ctrl.result, ret); + this_cpu_write(ucode_ctrl.ctrl, SCTRL_DONE); +} + +static __maybe_unused void load_primary(unsigned int cpu) +{ + struct cpumask *secondaries = topology_sibling_cpumask(cpu); + enum sibling_ctrl ctrl; + enum ucode_state ret; + unsigned int sibling; + + /* Initial rendezvous to ensure that all CPUs have arrived */ + if (!wait_for_cpus(&late_cpus_in)) { + this_cpu_write(ucode_ctrl.result, UCODE_TIMEOUT); + pr_err_once("load: %d CPUs timed out\n", atomic_read(&late_cpus_in) - 1); + return; + } + + ret = microcode_ops->apply_microcode(cpu); + this_cpu_write(ucode_ctrl.result, ret); + this_cpu_write(ucode_ctrl.ctrl, SCTRL_DONE); + + /* + * If the update was successful, let the siblings run the apply() + * callback. If not, tell them it's done. This also covers the + * case where the CPU has uniform loading at package or system + * scope implemented but does not advertise it. + */ + if (ret == UCODE_UPDATED || ret == UCODE_OK) + ctrl = SCTRL_APPLY; + else + ctrl = SCTRL_DONE; + + for_each_cpu(sibling, secondaries) { + if (sibling != cpu) + per_cpu(ucode_ctrl.ctrl, sibling) = ctrl; + } +} + static int load_cpus_stopped(void *unused) { int cpu = smp_processor_id();
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 0bf871651211b58c7b19f40b746b646d5311e2ec upstream
Replace the all-in-one rendezvous handler with a new handler which just separates the control flow of primary and secondary CPUs.
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231002115903.433704135@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/core.c | 51 ++++++----------------------------- 1 file changed, 9 insertions(+), 42 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -268,7 +268,7 @@ struct microcode_ctrl { };
static DEFINE_PER_CPU(struct microcode_ctrl, ucode_ctrl); -static atomic_t late_cpus_in, late_cpus_out; +static atomic_t late_cpus_in;
static bool wait_for_cpus(atomic_t *cnt) { @@ -304,7 +304,7 @@ static bool wait_for_ctrl(void) return false; }
-static __maybe_unused void load_secondary(unsigned int cpu) +static void load_secondary(unsigned int cpu) { unsigned int ctrl_cpu = this_cpu_read(ucode_ctrl.ctrl_cpu); enum ucode_state ret; @@ -339,7 +339,7 @@ static __maybe_unused void load_secondar this_cpu_write(ucode_ctrl.ctrl, SCTRL_DONE); }
-static __maybe_unused void load_primary(unsigned int cpu) +static void load_primary(unsigned int cpu) { struct cpumask *secondaries = topology_sibling_cpumask(cpu); enum sibling_ctrl ctrl; @@ -376,46 +376,14 @@ static __maybe_unused void load_primary(
static int load_cpus_stopped(void *unused) { - int cpu = smp_processor_id(); - enum ucode_state ret; - - /* - * Wait for all CPUs to arrive. A load will not be attempted unless all - * CPUs show up. - * */ - if (!wait_for_cpus(&late_cpus_in)) { - this_cpu_write(ucode_ctrl.result, UCODE_TIMEOUT); - return 0; - } - - /* - * On an SMT system, it suffices to load the microcode on one sibling of - * the core because the microcode engine is shared between the threads. - * Synchronization still needs to take place so that no concurrent - * loading attempts happen on multiple threads of an SMT core. See - * below. - */ - if (cpumask_first(topology_sibling_cpumask(cpu)) != cpu) - goto wait_for_siblings; + unsigned int cpu = smp_processor_id();
- ret = microcode_ops->apply_microcode(cpu); - this_cpu_write(ucode_ctrl.result, ret); - -wait_for_siblings: - if (!wait_for_cpus(&late_cpus_out)) - panic("Timeout during microcode update!\n"); - - /* - * At least one thread has completed update on each core. - * For others, simply call the update to make sure the - * per-cpu cpuinfo can be updated with right microcode - * revision. - */ - if (cpumask_first(topology_sibling_cpumask(cpu)) == cpu) - return 0; + if (this_cpu_read(ucode_ctrl.ctrl_cpu) == cpu) + load_primary(cpu); + else + load_secondary(cpu);
- ret = microcode_ops->apply_microcode(cpu); - this_cpu_write(ucode_ctrl.result, ret); + /* No point to wait here. The CPUs will all wait in stop_machine(). */ return 0; }
@@ -429,7 +397,6 @@ static int load_late_stop_cpus(void) pr_err("You should switch to early loading, if possible.\n");
atomic_set(&late_cpus_in, num_online_cpus()); - atomic_set(&late_cpus_out, num_online_cpus());
/* * Take a snapshot before the microcode update in order to compare and
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 7eb314a22800457396f541c655697dabd71e44a7 upstream
stop_machine() does not prevent the spin-waiting sibling from handling an NMI, which obviously violates the whole concept of a rendezvous.
Implement a static branch right at the beginning of the NMI handler which is nopped out except when enabled by the late loading mechanism.
The late loader enables the static branch before stop_machine() is invoked. Each CPU has an nmi_enabled flag in its control structure which indicates whether the CPU should go into the update routine.
This is required to bridge the gap between enabling the branch and actually being at the point where it is required to enter the loader wait loop.
Each CPU which arrives in the stopper thread function sets that flag and issues a self NMI right after that. If the NMI function sees the flag clear, it returns. If it's set, it clears the flag and enters the rendezvous.
This is safe against a real NMI which hits in between setting the flag and sending the NMI to itself. The real NMI will be swallowed by the microcode update and the self NMI will then let stuff continue. Otherwise this would end up with a spurious NMI.
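The race-free ordering, condensed from the hunks below:

    /* Stopper thread context on each CPU: */
    this_cpu_write(ucode_ctrl.nmi_enabled, true);
    apic->send_IPI(smp_processor_id(), NMI_VECTOR);

    /* NMI entry: */
    if (!this_cpu_read(ucode_ctrl.nmi_enabled))
            return false;          /* not armed: regular NMI handling */
    this_cpu_write(ucode_ctrl.nmi_enabled, false);
    return microcode_update_handler();

A real NMI hitting between the flag write and the self-IPI simply consumes the flag and performs the update; the self-NMI then finds the flag clear and returns, so no spurious NMI is reported.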
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231002115903.489900814@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/include/asm/microcode.h | 12 ++++++++ arch/x86/kernel/cpu/microcode/core.c | 42 ++++++++++++++++++++++++++++--- arch/x86/kernel/cpu/microcode/intel.c | 1 arch/x86/kernel/cpu/microcode/internal.h | 3 +- arch/x86/kernel/nmi.c | 4 ++ 5 files changed, 57 insertions(+), 5 deletions(-)
--- a/arch/x86/include/asm/microcode.h +++ b/arch/x86/include/asm/microcode.h @@ -70,4 +70,16 @@ static inline u32 intel_get_microcode_re } #endif /* !CONFIG_CPU_SUP_INTEL */
+bool microcode_nmi_handler(void); + +#ifdef CONFIG_MICROCODE_LATE_LOADING +DECLARE_STATIC_KEY_FALSE(microcode_nmi_handler_enable); +static __always_inline bool microcode_nmi_handler_enabled(void) +{ + return static_branch_unlikely(µcode_nmi_handler_enable); +} +#else +static __always_inline bool microcode_nmi_handler_enabled(void) { return false; } +#endif + #endif /* _ASM_X86_MICROCODE_H */ --- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -23,6 +23,7 @@ #include <linux/miscdevice.h> #include <linux/capability.h> #include <linux/firmware.h> +#include <linux/cpumask.h> #include <linux/kernel.h> #include <linux/delay.h> #include <linux/mutex.h> @@ -31,6 +32,7 @@ #include <linux/fs.h> #include <linux/mm.h>
+#include <asm/apic.h> #include <asm/cpu_device_id.h> #include <asm/perf_event.h> #include <asm/processor.h> @@ -265,8 +267,10 @@ struct microcode_ctrl { enum sibling_ctrl ctrl; enum ucode_state result; unsigned int ctrl_cpu; + bool nmi_enabled; };
+DEFINE_STATIC_KEY_FALSE(microcode_nmi_handler_enable); static DEFINE_PER_CPU(struct microcode_ctrl, ucode_ctrl); static atomic_t late_cpus_in;
@@ -282,7 +286,8 @@ static bool wait_for_cpus(atomic_t *cnt)
udelay(1);
- if (!(timeout % USEC_PER_MSEC)) + /* If invoked directly, tickle the NMI watchdog */ + if (!microcode_ops->use_nmi && !(timeout % USEC_PER_MSEC)) touch_nmi_watchdog(); } /* Prevent the late comers from making progress and let them time out */ @@ -298,7 +303,8 @@ static bool wait_for_ctrl(void) if (this_cpu_read(ucode_ctrl.ctrl) != SCTRL_WAIT) return true; udelay(1); - if (!(timeout % 1000)) + /* If invoked directly, tickle the NMI watchdog */ + if (!microcode_ops->use_nmi && !(timeout % 1000)) touch_nmi_watchdog(); } return false; @@ -374,7 +380,7 @@ static void load_primary(unsigned int cp } }
-static int load_cpus_stopped(void *unused) +static bool microcode_update_handler(void) { unsigned int cpu = smp_processor_id();
@@ -383,7 +389,29 @@ static int load_cpus_stopped(void *unuse else load_secondary(cpu);
- /* No point to wait here. The CPUs will all wait in stop_machine(). */ + touch_nmi_watchdog(); + return true; +} + +bool microcode_nmi_handler(void) +{ + if (!this_cpu_read(ucode_ctrl.nmi_enabled)) + return false; + + this_cpu_write(ucode_ctrl.nmi_enabled, false); + return microcode_update_handler(); +} + +static int load_cpus_stopped(void *unused) +{ + if (microcode_ops->use_nmi) { + /* Enable the NMI handler and raise NMI */ + this_cpu_write(ucode_ctrl.nmi_enabled, true); + apic->send_IPI(smp_processor_id(), NMI_VECTOR); + } else { + /* Just invoke the handler directly */ + microcode_update_handler(); + } return 0; }
@@ -404,8 +432,14 @@ static int load_late_stop_cpus(void) */ store_cpu_caps(&prev_info);
+ if (microcode_ops->use_nmi) + static_branch_enable_cpuslocked(µcode_nmi_handler_enable); + stop_machine_cpuslocked(load_cpus_stopped, NULL, cpu_online_mask);
+ if (microcode_ops->use_nmi) + static_branch_disable_cpuslocked(µcode_nmi_handler_enable); + /* Analyze the results */ for_each_cpu_and(cpu, cpu_present_mask, &cpus_booted_once_mask) { switch (per_cpu(ucode_ctrl.result, cpu)) { --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -611,6 +611,7 @@ static struct microcode_ops microcode_in .collect_cpu_info = collect_cpu_info, .apply_microcode = apply_microcode_late, .finalize_late_load = finalize_late_load, + .use_nmi = IS_ENABLED(CONFIG_X86_64), };
static __init void calc_llc_size_per_core(struct cpuinfo_x86 *c) --- a/arch/x86/kernel/cpu/microcode/internal.h +++ b/arch/x86/kernel/cpu/microcode/internal.h @@ -31,7 +31,8 @@ struct microcode_ops { enum ucode_state (*apply_microcode)(int cpu); int (*collect_cpu_info)(int cpu, struct cpu_signature *csig); void (*finalize_late_load)(int result); - unsigned int nmi_safe : 1; + unsigned int nmi_safe : 1, + use_nmi : 1; };
extern struct ucode_cpu_info ucode_cpu_info[]; --- a/arch/x86/kernel/nmi.c +++ b/arch/x86/kernel/nmi.c @@ -33,6 +33,7 @@ #include <asm/reboot.h> #include <asm/cache.h> #include <asm/nospec-branch.h> +#include <asm/microcode.h> #include <asm/sev.h>
#define CREATE_TRACE_POINTS @@ -343,6 +344,9 @@ static noinstr void default_do_nmi(struc
instrumentation_begin();
+ if (microcode_nmi_handler_enabled() && microcode_nmi_handler()) + goto out; + handled = nmi_handle(NMI_LOCAL, regs); __this_cpu_add(nmi_stats.normal, handled); if (handled) {
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 1582c0f4a21303792f523fe2839dd8433ee630c0 upstream
The wait-for-control loop in which the siblings are waiting for the microcode update on the primary thread must be protected against instrumentation, as instrumentation can end up in #INT3, #DB or #PF, which then return with IRET. That IRET reenables NMI, which is the opposite of what the NMI rendezvous is trying to achieve.
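The resulting pattern, sketched: the wait loops become noinstr, use raw_ accessors, and confine the only instrumentable helper to an explicit section:

    static noinstr bool wait_sketch(atomic_t *cnt)
    {
            if (!raw_atomic_read(cnt))     /* raw_*: safe in noinstr code */
                    return true;

            instrumentation_begin();       /* bounded instrumentable span */
            touch_nmi_watchdog();
            instrumentation_end();
            return false;
    }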
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231002115903.545969323@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/core.c | 111 ++++++++++++++++++++++++++--------- 1 file changed, 83 insertions(+), 28 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -272,54 +272,65 @@ struct microcode_ctrl {
DEFINE_STATIC_KEY_FALSE(microcode_nmi_handler_enable); static DEFINE_PER_CPU(struct microcode_ctrl, ucode_ctrl); +static unsigned int loops_per_usec; static atomic_t late_cpus_in;
-static bool wait_for_cpus(atomic_t *cnt) +static noinstr bool wait_for_cpus(atomic_t *cnt) { - unsigned int timeout; + unsigned int timeout, loops;
- WARN_ON_ONCE(atomic_dec_return(cnt) < 0); + WARN_ON_ONCE(raw_atomic_dec_return(cnt) < 0);
for (timeout = 0; timeout < USEC_PER_SEC; timeout++) { - if (!atomic_read(cnt)) + if (!raw_atomic_read(cnt)) return true;
- udelay(1); + for (loops = 0; loops < loops_per_usec; loops++) + cpu_relax();
/* If invoked directly, tickle the NMI watchdog */ - if (!microcode_ops->use_nmi && !(timeout % USEC_PER_MSEC)) + if (!microcode_ops->use_nmi && !(timeout % USEC_PER_MSEC)) { + instrumentation_begin(); touch_nmi_watchdog(); + instrumentation_end(); + } } /* Prevent the late comers from making progress and let them time out */ - atomic_inc(cnt); + raw_atomic_inc(cnt); return false; }
-static bool wait_for_ctrl(void) +static noinstr bool wait_for_ctrl(void) { - unsigned int timeout; + unsigned int timeout, loops;
for (timeout = 0; timeout < USEC_PER_SEC; timeout++) { - if (this_cpu_read(ucode_ctrl.ctrl) != SCTRL_WAIT) + if (raw_cpu_read(ucode_ctrl.ctrl) != SCTRL_WAIT) return true; - udelay(1); + + for (loops = 0; loops < loops_per_usec; loops++) + cpu_relax(); + /* If invoked directly, tickle the NMI watchdog */ - if (!microcode_ops->use_nmi && !(timeout % 1000)) + if (!microcode_ops->use_nmi && !(timeout % USEC_PER_MSEC)) { + instrumentation_begin(); touch_nmi_watchdog(); + instrumentation_end(); + } } return false; }
-static void load_secondary(unsigned int cpu) +/* + * Protected against instrumentation up to the point where the primary + * thread completed the update. See microcode_nmi_handler() for details. + */ +static noinstr bool load_secondary_wait(unsigned int ctrl_cpu) { - unsigned int ctrl_cpu = this_cpu_read(ucode_ctrl.ctrl_cpu); - enum ucode_state ret; - /* Initial rendezvous to ensure that all CPUs have arrived */ if (!wait_for_cpus(&late_cpus_in)) { - pr_err_once("load: %d CPUs timed out\n", atomic_read(&late_cpus_in) - 1); - this_cpu_write(ucode_ctrl.result, UCODE_TIMEOUT); - return; + raw_cpu_write(ucode_ctrl.result, UCODE_TIMEOUT); + return false; }
/* @@ -329,9 +340,33 @@ static void load_secondary(unsigned int * scheduler, watchdogs etc. There is no way to safely evacuate the * machine. */ - if (!wait_for_ctrl()) - panic("Microcode load: Primary CPU %d timed out\n", ctrl_cpu); + if (wait_for_ctrl()) + return true; + + instrumentation_begin(); + panic("Microcode load: Primary CPU %d timed out\n", ctrl_cpu); + instrumentation_end(); +}
+/* + * Protected against instrumentation up to the point where the primary + * thread completed the update. See microcode_nmi_handler() for details. + */ +static noinstr void load_secondary(unsigned int cpu) +{ + unsigned int ctrl_cpu = raw_cpu_read(ucode_ctrl.ctrl_cpu); + enum ucode_state ret; + + if (!load_secondary_wait(ctrl_cpu)) { + instrumentation_begin(); + pr_err_once("load: %d CPUs timed out\n", + atomic_read(&late_cpus_in) - 1); + instrumentation_end(); + return; + } + + /* Primary thread completed. Allow to invoke instrumentable code */ + instrumentation_begin(); /* * If the primary succeeded then invoke the apply() callback, * otherwise copy the state from the primary thread. @@ -343,6 +378,7 @@ static void load_secondary(unsigned int
this_cpu_write(ucode_ctrl.result, ret); this_cpu_write(ucode_ctrl.ctrl, SCTRL_DONE); + instrumentation_end(); }
static void load_primary(unsigned int cpu) @@ -380,25 +416,43 @@ static void load_primary(unsigned int cp } }
-static bool microcode_update_handler(void) +static noinstr bool microcode_update_handler(void) { - unsigned int cpu = smp_processor_id(); + unsigned int cpu = raw_smp_processor_id();
- if (this_cpu_read(ucode_ctrl.ctrl_cpu) == cpu) + if (raw_cpu_read(ucode_ctrl.ctrl_cpu) == cpu) { + instrumentation_begin(); load_primary(cpu); - else + instrumentation_end(); + } else { load_secondary(cpu); + }
+ instrumentation_begin(); touch_nmi_watchdog(); + instrumentation_end(); + return true; }
-bool microcode_nmi_handler(void) +/* + * Protection against instrumentation is required for CPUs which are not + * safe against an NMI which is delivered to the secondary SMT sibling + * while the primary thread updates the microcode. Instrumentation can end + * up in #INT3, #DB and #PF. The IRET from those exceptions reenables NMI + * which is the opposite of what the NMI rendezvous is trying to achieve. + * + * The primary thread is safe versus instrumentation as the actual + * microcode update handles this correctly. It's only the sibling code + * path which must be NMI safe until the primary thread completed the + * update. + */ +bool noinstr microcode_nmi_handler(void) { - if (!this_cpu_read(ucode_ctrl.nmi_enabled)) + if (!raw_cpu_read(ucode_ctrl.nmi_enabled)) return false;
- this_cpu_write(ucode_ctrl.nmi_enabled, false); + raw_cpu_write(ucode_ctrl.nmi_enabled, false); return microcode_update_handler(); }
@@ -425,6 +479,7 @@ static int load_late_stop_cpus(void) pr_err("You should switch to early loading, if possible.\n");
atomic_set(&late_cpus_in, num_online_cpus()); + loops_per_usec = loops_per_jiffy / (TICK_NSEC / 1000);
/* * Take a snapshot before the microcode update in order to compare and
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 9cab5fb776d4367e26950cf759211e948335288e upstream
When SMT siblings are soft-offlined and parked in one of the play_dead() variants they still react to NMIs, which is problematic on affected Intel CPUs. The default play_dead() variant uses MWAIT on modern CPUs, which is not guaranteed to be safe when it is updated concurrently.
Right now late loading is prevented when not all SMT siblings are online, but as they still react to NMIs, it is possible to bring them out of their park position into a trivial rendezvous handler.
Provide a function which allows doing that. It does sanity checks on whether the target is in the cpus_booted_once_mask and whether the APIC driver supports it.
Mark X2APIC and XAPIC as capable, but exclude 32bit and the UV and NUMACHIP variants as those need feedback from the relevant experts.
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231002115903.603100036@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/include/asm/apic.h | 5 ++++- arch/x86/kernel/apic/apic_flat_64.c | 2 ++ arch/x86/kernel/apic/ipi.c | 8 ++++++++ arch/x86/kernel/apic/x2apic_cluster.c | 1 + arch/x86/kernel/apic/x2apic_phys.c | 1 + 5 files changed, 16 insertions(+), 1 deletion(-)
--- a/arch/x86/include/asm/apic.h +++ b/arch/x86/include/asm/apic.h @@ -277,7 +277,8 @@ struct apic {
u32 disable_esr : 1, dest_mode_logical : 1, - x2apic_set_max_apicid : 1; + x2apic_set_max_apicid : 1, + nmi_to_offline_cpu : 1;
u32 (*calc_dest_apicid)(unsigned int cpu);
@@ -543,6 +544,8 @@ extern bool default_check_apicid_used(ph extern void default_ioapic_phys_id_map(physid_mask_t *phys_map, physid_mask_t *retmap); extern int default_cpu_present_to_apicid(int mps_cpu);
+void apic_send_nmi_to_offline_cpu(unsigned int cpu); + #else /* CONFIG_X86_LOCAL_APIC */
static inline unsigned int read_apic_id(void) { return 0; } --- a/arch/x86/kernel/apic/apic_flat_64.c +++ b/arch/x86/kernel/apic/apic_flat_64.c @@ -103,6 +103,7 @@ static struct apic apic_flat __ro_after_ .send_IPI_allbutself = default_send_IPI_allbutself, .send_IPI_all = default_send_IPI_all, .send_IPI_self = default_send_IPI_self, + .nmi_to_offline_cpu = true,
.read = native_apic_mem_read, .write = native_apic_mem_write, @@ -175,6 +176,7 @@ static struct apic apic_physflat __ro_af .send_IPI_allbutself = default_send_IPI_allbutself, .send_IPI_all = default_send_IPI_all, .send_IPI_self = default_send_IPI_self, + .nmi_to_offline_cpu = true,
.read = native_apic_mem_read, .write = native_apic_mem_write, --- a/arch/x86/kernel/apic/ipi.c +++ b/arch/x86/kernel/apic/ipi.c @@ -97,6 +97,14 @@ sendmask: __apic_send_IPI_mask(mask, CALL_FUNCTION_VECTOR); }
+void apic_send_nmi_to_offline_cpu(unsigned int cpu) +{ + if (WARN_ON_ONCE(!apic->nmi_to_offline_cpu)) + return; + if (WARN_ON_ONCE(!cpumask_test_cpu(cpu, &cpus_booted_once_mask))) + return; + apic->send_IPI(cpu, NMI_VECTOR); +} #endif /* CONFIG_SMP */
static inline int __prepare_ICR2(unsigned int mask) --- a/arch/x86/kernel/apic/x2apic_cluster.c +++ b/arch/x86/kernel/apic/x2apic_cluster.c @@ -251,6 +251,7 @@ static struct apic apic_x2apic_cluster _ .send_IPI_allbutself = x2apic_send_IPI_allbutself, .send_IPI_all = x2apic_send_IPI_all, .send_IPI_self = x2apic_send_IPI_self, + .nmi_to_offline_cpu = true,
.read = native_apic_msr_read, .write = native_apic_msr_write, --- a/arch/x86/kernel/apic/x2apic_phys.c +++ b/arch/x86/kernel/apic/x2apic_phys.c @@ -166,6 +166,7 @@ static struct apic apic_x2apic_phys __ro .send_IPI_allbutself = x2apic_send_IPI_allbutself, .send_IPI_all = x2apic_send_IPI_all, .send_IPI_self = x2apic_send_IPI_self, + .nmi_to_offline_cpu = true,
.read = native_apic_msr_read, .write = native_apic_msr_write,
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 8f849ff63bcbc77670da03cb8f2b78b06257f455 upstream
Offline CPUs need to be parked in a safe loop when a microcode update is in progress on the primary CPU. Currently, offline CPUs are parked in mwait_play_dead(), and for Intel CPUs this is not safe, because the MWAIT instruction itself can be patched by the new microcode update, which can cause instability.
- Add a new microcode state 'UCODE_OFFLINE' to report status on a per-CPU basis.
- Force NMI on the offline CPUs.
Wake up offline CPUs while the update is in progress and then return them to mwait_play_dead() after the microcode update is complete.
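The kick/park/release protocol, condensed from the hunks below:

    /* CPU0, before the update: arm and NMI each soft-offlined sibling */
    for_each_cpu(cpu, &cpu_offline_mask) {
            per_cpu(ucode_ctrl.nmi_enabled, cpu) = true;
            apic_send_nmi_to_offline_cpu(cpu);
    }

    /* Offline CPU, in the stub NMI handler: check in and park */
    raw_cpu_write(ucode_ctrl.result, UCODE_OFFLINE);
    raw_atomic_inc(&offline_in_nmi);
    wait_for_ctrl();                       /* spins until SCTRL_DONE */

    /* CPU0, after the update: release them back into play_dead() */
    for_each_cpu(cpu, &cpu_offline_mask)
            per_cpu(ucode_ctrl.ctrl, cpu) = SCTRL_DONE;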
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231002115903.660850472@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/include/asm/microcode.h | 1 arch/x86/kernel/cpu/microcode/core.c | 112 +++++++++++++++++++++++++++++-- arch/x86/kernel/cpu/microcode/internal.h | 1 arch/x86/kernel/nmi.c | 5 + 4 files changed, 113 insertions(+), 6 deletions(-)
--- a/arch/x86/include/asm/microcode.h +++ b/arch/x86/include/asm/microcode.h @@ -71,6 +71,7 @@ static inline u32 intel_get_microcode_re #endif /* !CONFIG_CPU_SUP_INTEL */
bool microcode_nmi_handler(void); +void microcode_offline_nmi_handler(void);
#ifdef CONFIG_MICROCODE_LATE_LOADING DECLARE_STATIC_KEY_FALSE(microcode_nmi_handler_enable); --- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -272,8 +272,9 @@ struct microcode_ctrl {
DEFINE_STATIC_KEY_FALSE(microcode_nmi_handler_enable); static DEFINE_PER_CPU(struct microcode_ctrl, ucode_ctrl); +static atomic_t late_cpus_in, offline_in_nmi; static unsigned int loops_per_usec; -static atomic_t late_cpus_in; +static cpumask_t cpu_offline_mask;
static noinstr bool wait_for_cpus(atomic_t *cnt) { @@ -381,7 +382,7 @@ static noinstr void load_secondary(unsig instrumentation_end(); }
-static void load_primary(unsigned int cpu) +static void __load_primary(unsigned int cpu) { struct cpumask *secondaries = topology_sibling_cpumask(cpu); enum sibling_ctrl ctrl; @@ -416,6 +417,67 @@ static void load_primary(unsigned int cp } }
+static bool kick_offline_cpus(unsigned int nr_offl) +{ + unsigned int cpu, timeout; + + for_each_cpu(cpu, &cpu_offline_mask) { + /* Enable the rendezvous handler and send NMI */ + per_cpu(ucode_ctrl.nmi_enabled, cpu) = true; + apic_send_nmi_to_offline_cpu(cpu); + } + + /* Wait for them to arrive */ + for (timeout = 0; timeout < (USEC_PER_SEC / 2); timeout++) { + if (atomic_read(&offline_in_nmi) == nr_offl) + return true; + udelay(1); + } + /* Let the others time out */ + return false; +} + +static void release_offline_cpus(void) +{ + unsigned int cpu; + + for_each_cpu(cpu, &cpu_offline_mask) + per_cpu(ucode_ctrl.ctrl, cpu) = SCTRL_DONE; +} + +static void load_primary(unsigned int cpu) +{ + unsigned int nr_offl = cpumask_weight(&cpu_offline_mask); + bool proceed = true; + + /* Kick soft-offlined SMT siblings if required */ + if (!cpu && nr_offl) + proceed = kick_offline_cpus(nr_offl); + + /* If the soft-offlined CPUs did not respond, abort */ + if (proceed) + __load_primary(cpu); + + /* Unconditionally release soft-offlined SMT siblings if required */ + if (!cpu && nr_offl) + release_offline_cpus(); +} + +/* + * Minimal stub rendezvous handler for soft-offlined CPUs which participate + * in the NMI rendezvous to protect against a concurrent NMI on affected + * CPUs. + */ +void noinstr microcode_offline_nmi_handler(void) +{ + if (!raw_cpu_read(ucode_ctrl.nmi_enabled)) + return; + raw_cpu_write(ucode_ctrl.nmi_enabled, false); + raw_cpu_write(ucode_ctrl.result, UCODE_OFFLINE); + raw_atomic_inc(&offline_in_nmi); + wait_for_ctrl(); +} + static noinstr bool microcode_update_handler(void) { unsigned int cpu = raw_smp_processor_id(); @@ -472,6 +534,7 @@ static int load_cpus_stopped(void *unuse static int load_late_stop_cpus(void) { unsigned int cpu, updated = 0, failed = 0, timedout = 0, siblings = 0; + unsigned int nr_offl, offline = 0; int old_rev = boot_cpu_data.microcode; struct cpuinfo_x86 prev_info;
@@ -479,6 +542,7 @@ static int load_late_stop_cpus(void) pr_err("You should switch to early loading, if possible.\n");
atomic_set(&late_cpus_in, num_online_cpus()); + atomic_set(&offline_in_nmi, 0); loops_per_usec = loops_per_jiffy / (TICK_NSEC / 1000);
/* @@ -501,6 +565,7 @@ static int load_late_stop_cpus(void) case UCODE_UPDATED: updated++; break; case UCODE_TIMEOUT: timedout++; break; case UCODE_OK: siblings++; break; + case UCODE_OFFLINE: offline++; break; default: failed++; break; } } @@ -512,6 +577,13 @@ static int load_late_stop_cpus(void) /* Nothing changed. */ if (!failed && !timedout) return 0; + + nr_offl = cpumask_weight(&cpu_offline_mask); + if (offline < nr_offl) { + pr_warn("%u offline siblings did not respond.\n", + nr_offl - atomic_read(&offline_in_nmi)); + return -EIO; + } pr_err("update failed: %u CPUs failed %u CPUs timed out\n", failed, timedout); return -EIO; @@ -545,19 +617,49 @@ static int load_late_stop_cpus(void) * modern CPUs uses MWAIT, which is also not guaranteed to be safe * against a microcode update which affects MWAIT. * - * 2) Initialize the per CPU control structure + * As soft-offlined CPUs still react on NMIs, the SMT sibling + * restriction can be lifted when the vendor driver signals to use NMI + * for rendezvous and the APIC provides a mechanism to send an NMI to a + * soft-offlined CPU. The soft-offlined CPUs are then able to + * participate in the rendezvous in a trivial stub handler. + * + * 2) Initialize the per CPU control structure and create a cpumask + * which contains "offline"; secondary threads, so they can be handled + * correctly by a control CPU. */ static bool setup_cpus(void) { struct microcode_ctrl ctrl = { .ctrl = SCTRL_WAIT, .result = -1, }; + bool allow_smt_offline; unsigned int cpu;
+ allow_smt_offline = microcode_ops->nmi_safe || + (microcode_ops->use_nmi && apic->nmi_to_offline_cpu); + + cpumask_clear(&cpu_offline_mask); + for_each_cpu_and(cpu, cpu_present_mask, &cpus_booted_once_mask) { + /* + * Offline CPUs sit in one of the play_dead() functions + * with interrupts disabled, but they still react on NMIs + * and execute arbitrary code. Also MWAIT being updated + * while the offline CPU sits there is not necessarily safe + * on all CPU variants. + * + * Mark them in the offline_cpus mask which will be handled + * by CPU0 later in the update process. + * + * Ensure that the primary thread is online so that it is + * guaranteed that all cores are updated. + */ if (!cpu_online(cpu)) { - if (topology_is_primary_thread(cpu) || !microcode_ops->nmi_safe) { - pr_err("CPU %u not online\n", cpu); + if (topology_is_primary_thread(cpu) || !allow_smt_offline) { + pr_err("CPU %u not online, loading aborted\n", cpu); return false; } + cpumask_set_cpu(cpu, &cpu_offline_mask); + per_cpu(ucode_ctrl, cpu) = ctrl; + continue; }
/* --- a/arch/x86/kernel/cpu/microcode/internal.h +++ b/arch/x86/kernel/cpu/microcode/internal.h @@ -17,6 +17,7 @@ enum ucode_state { UCODE_NFOUND, UCODE_ERROR, UCODE_TIMEOUT, + UCODE_OFFLINE, };
struct microcode_ops { --- a/arch/x86/kernel/nmi.c +++ b/arch/x86/kernel/nmi.c @@ -502,8 +502,11 @@ DEFINE_IDTENTRY_RAW(exc_nmi) if (IS_ENABLED(CONFIG_NMI_CHECK_CPU)) raw_atomic_long_inc(&nsp->idt_calls);
- if (IS_ENABLED(CONFIG_SMP) && arch_cpu_is_offline(smp_processor_id())) + if (IS_ENABLED(CONFIG_SMP) && arch_cpu_is_offline(smp_processor_id())) { + if (microcode_nmi_handler_enabled()) + microcode_offline_nmi_handler(); return; + }
if (this_cpu_read(nmi_state) != NMI_NOT_RUNNING) { this_cpu_write(nmi_state, NMI_LATCHED);
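For orientation while reviewing: the resulting NMI entry flow on a soft-offlined CPU, pieced together from the hunks above (a sketch using the names from the diff, not additional code):

  exc_nmi()
    arch_cpu_is_offline() == true
      microcode_nmi_handler_enabled() ?
        microcode_offline_nmi_handler()
          raw_cpu_write(ucode_ctrl.result, UCODE_OFFLINE)
          raw_atomic_inc(&offline_in_nmi)
          wait_for_ctrl()     /* spins until the control CPU sets SCTRL_DONE */
      return                  /* NMIs are otherwise ignored on offline CPUs */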
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner tglx@linutronix.de
commit 9407bda845dd19756e276d4f3abc15a20777ba45 upstream
Applying microcode late can be fatal for the running kernel when the update changes functionality which is already in use, in an incompatible way, e.g. by removing a CPUID bit.
There is no way for admins who do not have access to the vendor's deep technical support to decide whether late loading of such a microcode is safe.
Intel has added a new field to the microcode header which specifies the minimal microcode revision that is required to be active in the CPU in order for the late load to be safe.
Provide infrastructure for handling this in the core code and a command line switch which allows enforcing it.
If the update is considered safe, the kernel is not tainted and the annoying warning message is not emitted. If enforcement is enabled and the currently loaded microcode revision is not safe for late loading, then the load is aborted.
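In essence, the added policy boils down to the following check; a minimal standalone sketch, assuming a header field carrying the minimal required revision (the helper name, parameters and values are illustrative, not the kernel's actual identifiers):

  #include <stdbool.h>
  #include <stdio.h>

  /* Illustrative only: min_req_rev == 0 means the patch declares no safe
   * minimum, which is acceptable only when enforcement is off (and taints). */
  static bool late_load_allowed(unsigned int cur_rev, unsigned int min_req_rev,
                                bool force_minrev)
  {
          if (!min_req_rev)
                  return !force_minrev;
          return cur_rev >= min_req_rev;
  }

  int main(void)
  {
          printf("%d\n", late_load_allowed(0x21, 0x20, true));  /* 1: safe, no taint */
          printf("%d\n", late_load_allowed(0x1f, 0x20, true));  /* 0: load aborted */
          printf("%d\n", late_load_allowed(0x1f, 0x00, false)); /* 1: loads, taints */
          return 0;
  }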
Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20231017211724.079611170@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- Documentation/admin-guide/kernel-parameters.txt | 5 +++++ arch/x86/Kconfig | 23 ++++++++++++++++++++++- arch/x86/kernel/cpu/microcode/amd.c | 3 +++ arch/x86/kernel/cpu/microcode/core.c | 19 ++++++++++++++----- arch/x86/kernel/cpu/microcode/intel.c | 3 +++ arch/x86/kernel/cpu/microcode/internal.h | 2 ++ 6 files changed, 49 insertions(+), 6 deletions(-)
--- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -3287,6 +3287,11 @@
mga= [HW,DRM]
+ microcode.force_minrev= [X86] + Format: <bool> + Enable or disable the microcode minimal revision + enforcement for the runtime microcode loader. + min_addr=nn[KMG] [KNL,BOOT,IA-64] All physical memory below this physical address is ignored.
--- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -1323,7 +1323,28 @@ config MICROCODE_LATE_LOADING is a tricky business and should be avoided if possible. Just the sequence of synchronizing all cores and SMT threads is one fragile dance which does not guarantee that cores might not softlock after the loading. Therefore, - use this at your own risk. Late loading taints the kernel too. + use this at your own risk. Late loading taints the kernel unless the + microcode header indicates that it is safe for late loading via the + minimal revision check. This minimal revision check can be enforced on + the kernel command line with "microcode.minrev=Y". + +config MICROCODE_LATE_FORCE_MINREV + bool "Enforce late microcode loading minimal revision check" + default n + depends on MICROCODE_LATE_LOADING + help + To prevent that users load microcode late which modifies already + in use features, newer microcode patches have a minimum revision field + in the microcode header, which tells the kernel which minimum + revision must be active in the CPU to safely load that new microcode + late into the running system. If disabled the check will not + be enforced but the kernel will be tainted when the minimal + revision check fails. + + This minimal revision check can also be controlled via the + "microcode.minrev" parameter on the kernel command line. + + If unsure say Y.
config X86_MSR tristate "/dev/cpu/*/msr - Model-specific register support" --- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -888,6 +888,9 @@ static enum ucode_state request_microcod enum ucode_state ret = UCODE_NFOUND; const struct firmware *fw;
+ if (force_minrev) + return UCODE_NFOUND; + if (c->x86 >= 0x15) snprintf(fw_name, sizeof(fw_name), "amd-ucode/microcode_amd_fam%.2xh.bin", c->x86);
--- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -46,6 +46,9 @@ static struct microcode_ops *microcode_ops; bool dis_ucode_ldr = true;
+bool force_minrev = IS_ENABLED(CONFIG_MICROCODE_LATE_FORCE_MINREV); +module_param(force_minrev, bool, S_IRUSR | S_IWUSR); + /* * Synchronization. * @@ -531,15 +534,17 @@ static int load_cpus_stopped(void *unuse return 0; }
-static int load_late_stop_cpus(void) +static int load_late_stop_cpus(bool is_safe) { unsigned int cpu, updated = 0, failed = 0, timedout = 0, siblings = 0; unsigned int nr_offl, offline = 0; int old_rev = boot_cpu_data.microcode; struct cpuinfo_x86 prev_info;
- pr_err("Attempting late microcode loading - it is dangerous and taints the kernel.\n"); - pr_err("You should switch to early loading, if possible.\n"); + if (!is_safe) { + pr_err("Late microcode loading without minimal revision check.\n"); + pr_err("You should switch to early loading, if possible.\n"); + }
atomic_set(&late_cpus_in, num_online_cpus()); atomic_set(&offline_in_nmi, 0); @@ -589,7 +594,9 @@ static int load_late_stop_cpus(void) return -EIO; }
- add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK); + if (!is_safe || failed || timedout) + add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK); + pr_info("load: updated on %u primary CPUs with %u siblings\n", updated, siblings); if (failed || timedout) { pr_err("load incomplete. %u CPUs timed out or failed\n", @@ -679,7 +686,9 @@ static int load_late_locked(void)
switch (microcode_ops->request_microcode_fw(0, &microcode_pdev->dev)) { case UCODE_NEW: - return load_late_stop_cpus(); + return load_late_stop_cpus(false); + case UCODE_NEW_SAFE: + return load_late_stop_cpus(true); case UCODE_NFOUND: return -ENOENT; default: --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -480,6 +480,9 @@ static enum ucode_state parse_microcode_ unsigned int curr_mc_size = 0; u8 *new_mc = NULL, *mc = NULL;
+ if (force_minrev) + return UCODE_NFOUND; + while (iov_iter_count(iter)) { struct microcode_header_intel mc_header; unsigned int mc_size, data_size; --- a/arch/x86/kernel/cpu/microcode/internal.h +++ b/arch/x86/kernel/cpu/microcode/internal.h @@ -13,6 +13,7 @@ struct device; enum ucode_state { UCODE_OK = 0, UCODE_NEW, + UCODE_NEW_SAFE, UCODE_UPDATED, UCODE_NFOUND, UCODE_ERROR, @@ -88,6 +89,7 @@ static inline unsigned int x86_cpuid_fam }
extern bool dis_ucode_ldr; +extern bool force_minrev;
#ifdef CONFIG_CPU_SUP_AMD void load_ucode_amd_bsp(unsigned int family);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: "Borislav Petkov (AMD)" bp@alien8.de
commit 080990aa3344123673f686cda2df0d1b0deee046 upstream
The AMD side of the loader prints the microcode revision for each logical thread on the system, which can become really noisy on huge machines. And doing that doesn't make a whole lot of sense - the microcode revision is already in /proc/cpuinfo.
So in case one is interested in the theoretical support of mixed silicon steppings on AMD, one can check there.
What is also missing on the AMD side - something which people have requested before - is showing the microcode revision the CPU had *before* the early update.
So abstract that up in the main code and have the BSP on each vendor provide those revision numbers.
Then, dump them only once on driver init.
On Intel, do not dump the patch date - it is not needed.
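With that, a successfully early-updated machine logs just two lines on driver init, e.g. (revision values illustrative; format strings as in the core.c hunk below):

  microcode: Current revision: 0x0a20102e
  microcode: Updated early from: 0x0a201016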
Reported-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Reviewed-by: Thomas Gleixner tglx@linutronix.de Link: https://lore.kernel.org/r/CAHk-=wg=%2B8rceshMkB4VnKxmRccVLtBLPBawnewZuuqyx5U... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/amd.c | 39 ++++++++----------------------- arch/x86/kernel/cpu/microcode/core.c | 11 +++++++- arch/x86/kernel/cpu/microcode/intel.c | 17 +++++-------- arch/x86/kernel/cpu/microcode/internal.h | 14 +++++++---- 4 files changed, 37 insertions(+), 44 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -104,8 +104,6 @@ struct cont_desc { size_t size; };
-static u32 ucode_new_rev; - /* * Microcode patch container file is prepended to the initrd in cpio * format. See Documentation/arch/x86/microcode.rst @@ -442,12 +440,11 @@ static int __apply_microcode_amd(struct * * Returns true if container found (sets @desc), false otherwise. */ -static bool early_apply_microcode(u32 cpuid_1_eax, void *ucode, size_t size) +static bool early_apply_microcode(u32 cpuid_1_eax, u32 old_rev, void *ucode, size_t size) { struct cont_desc desc = { 0 }; struct microcode_amd *mc; bool ret = false; - u32 rev, dummy;
desc.cpuid_1_eax = cpuid_1_eax;
@@ -457,22 +454,15 @@ static bool early_apply_microcode(u32 cp if (!mc) return ret;
- native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy); - /* * Allow application of the same revision to pick up SMT-specific * changes even if the revision of the other SMT thread is already * up-to-date. */ - if (rev > mc->hdr.patch_id) + if (old_rev > mc->hdr.patch_id) return ret;
- if (!__apply_microcode_amd(mc)) { - ucode_new_rev = mc->hdr.patch_id; - ret = true; - } - - return ret; + return !__apply_microcode_amd(mc); }
static bool get_builtin_microcode(struct cpio_data *cp, unsigned int family) @@ -506,9 +496,12 @@ static void __init find_blobs_in_contain *ret = cp; }
-void __init load_ucode_amd_bsp(unsigned int cpuid_1_eax) +void __init load_ucode_amd_bsp(struct early_load_data *ed, unsigned int cpuid_1_eax) { struct cpio_data cp = { }; + u32 dummy; + + native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->old_rev, dummy);
/* Needed in load_microcode_amd() */ ucode_cpu_info[0].cpu_sig.sig = cpuid_1_eax; @@ -517,7 +510,8 @@ void __init load_ucode_amd_bsp(unsigned if (!(cp.data && cp.size)) return;
- early_apply_microcode(cpuid_1_eax, cp.data, cp.size); + if (early_apply_microcode(cpuid_1_eax, ed->old_rev, cp.data, cp.size)) + native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->new_rev, dummy); }
static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size); @@ -625,10 +619,8 @@ void reload_ucode_amd(unsigned int cpu) rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
if (rev < mc->hdr.patch_id) { - if (!__apply_microcode_amd(mc)) { - ucode_new_rev = mc->hdr.patch_id; - pr_info("reload patch_level=0x%08x\n", ucode_new_rev); - } + if (!__apply_microcode_amd(mc)) + pr_info_once("reload revision: 0x%08x\n", mc->hdr.patch_id); } }
@@ -649,8 +641,6 @@ static int collect_cpu_info_amd(int cpu, if (p && (p->patch_id == csig->rev)) uci->mc = p->data;
- pr_info("CPU%d: patch_level=0x%08x\n", cpu, csig->rev); - return 0; }
@@ -691,8 +681,6 @@ static enum ucode_state apply_microcode_ rev = mc_amd->hdr.patch_id; ret = UCODE_UPDATED;
- pr_info("CPU%d: new patch_level=0x%08x\n", cpu, rev); - out: uci->cpu_sig.rev = rev; c->microcode = rev; @@ -935,11 +923,6 @@ struct microcode_ops * __init init_amd_m pr_warn("AMD CPU family 0x%x not supported\n", c->x86); return NULL; } - - if (ucode_new_rev) - pr_info_once("microcode updated early to new patch_level=0x%08x\n", - ucode_new_rev); - return µcode_amd_ops; }
--- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -77,6 +77,8 @@ static u32 final_levels[] = { 0, /* T-101 terminator */ };
+struct early_load_data early_data; + /* * Check the current patch level on this CPU. * @@ -155,9 +157,9 @@ void __init load_ucode_bsp(void) return;
if (intel) - load_ucode_intel_bsp(); + load_ucode_intel_bsp(&early_data); else - load_ucode_amd_bsp(cpuid_1_eax); + load_ucode_amd_bsp(&early_data, cpuid_1_eax); }
void load_ucode_ap(void) @@ -828,6 +830,11 @@ static int __init microcode_init(void) if (!microcode_ops) return -ENODEV;
+ pr_info_once("Current revision: 0x%08x\n", (early_data.new_rev ?: early_data.old_rev)); + + if (early_data.new_rev) + pr_info_once("Updated early from: 0x%08x\n", early_data.old_rev); + microcode_pdev = platform_device_register_simple("microcode", -1, NULL, 0); if (IS_ERR(microcode_pdev)) return PTR_ERR(microcode_pdev); --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -339,16 +339,9 @@ static enum ucode_state __apply_microcod static enum ucode_state apply_microcode_early(struct ucode_cpu_info *uci) { struct microcode_intel *mc = uci->mc; - enum ucode_state ret; - u32 cur_rev, date; + u32 cur_rev;
- ret = __apply_microcode(uci, mc, &cur_rev); - if (ret == UCODE_UPDATED) { - date = mc->hdr.date; - pr_info_once("updated early: 0x%x -> 0x%x, date = %04x-%02x-%02x\n", - cur_rev, mc->hdr.rev, date & 0xffff, date >> 24, (date >> 16) & 0xff); - } - return ret; + return __apply_microcode(uci, mc, &cur_rev); }
static __init bool load_builtin_intel_microcode(struct cpio_data *cp) @@ -413,13 +406,17 @@ static int __init save_builtin_microcode early_initcall(save_builtin_microcode);
/* Load microcode on BSP from initrd or builtin blobs */ -void __init load_ucode_intel_bsp(void) +void __init load_ucode_intel_bsp(struct early_load_data *ed) { struct ucode_cpu_info uci;
+ ed->old_rev = intel_get_microcode_revision(); + uci.mc = get_microcode_blob(&uci, false); if (uci.mc && apply_microcode_early(&uci) == UCODE_UPDATED) ucode_patch_va = UCODE_BSP_LOADED; + + ed->new_rev = uci.cpu_sig.rev; }
void load_ucode_intel_ap(void) --- a/arch/x86/kernel/cpu/microcode/internal.h +++ b/arch/x86/kernel/cpu/microcode/internal.h @@ -37,6 +37,12 @@ struct microcode_ops { use_nmi : 1; };
+struct early_load_data { + u32 old_rev; + u32 new_rev; +}; + +extern struct early_load_data early_data; extern struct ucode_cpu_info ucode_cpu_info[]; struct cpio_data find_microcode_in_initrd(const char *path);
@@ -92,14 +98,14 @@ extern bool dis_ucode_ldr; extern bool force_minrev;
#ifdef CONFIG_CPU_SUP_AMD -void load_ucode_amd_bsp(unsigned int family); +void load_ucode_amd_bsp(struct early_load_data *ed, unsigned int family); void load_ucode_amd_ap(unsigned int family); int save_microcode_in_initrd_amd(unsigned int family); void reload_ucode_amd(unsigned int cpu); struct microcode_ops *init_amd_microcode(void); void exit_amd_microcode(void); #else /* CONFIG_CPU_SUP_AMD */ -static inline void load_ucode_amd_bsp(unsigned int family) { } +static inline void load_ucode_amd_bsp(struct early_load_data *ed, unsigned int family) { } static inline void load_ucode_amd_ap(unsigned int family) { } static inline int save_microcode_in_initrd_amd(unsigned int family) { return -EINVAL; } static inline void reload_ucode_amd(unsigned int cpu) { } @@ -108,12 +114,12 @@ static inline void exit_amd_microcode(vo #endif /* !CONFIG_CPU_SUP_AMD */
#ifdef CONFIG_CPU_SUP_INTEL -void load_ucode_intel_bsp(void); +void load_ucode_intel_bsp(struct early_load_data *ed); void load_ucode_intel_ap(void); void reload_ucode_intel(void); struct microcode_ops *init_intel_microcode(void); #else /* CONFIG_CPU_SUP_INTEL */ -static inline void load_ucode_intel_bsp(void) { } +static inline void load_ucode_intel_bsp(struct early_load_data *ed) { } static inline void load_ucode_intel_ap(void) { } static inline void reload_ucode_intel(void) { } static inline struct microcode_ops *init_intel_microcode(void) { return NULL; }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: "Borislav Petkov (AMD)" bp@alien8.de
commit 9c21ea53e6bd1104c637b80a0688040f184cc761 upstream
Setting the new microcode revision was meant to be done only when early microcode got updated successfully. Move it into the if-branch.
Also, make sure the current revision is read unconditionally and only once.
Fixes: 080990aa3344 ("x86/microcode: Rework early revisions reporting") Reported-by: Ashok Raj ashok.raj@intel.com Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Tested-by: Ashok Raj ashok.raj@intel.com Link: https://lore.kernel.org/r/ZWjVt5dNRjbcvlzR@a4bf019067fa.jf.intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/intel.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -370,14 +370,14 @@ static __init struct microcode_intel *ge { struct cpio_data cp;
+ intel_collect_cpu_info(&uci->cpu_sig); + if (!load_builtin_intel_microcode(&cp)) cp = find_microcode_in_initrd(ucode_path);
if (!(cp.data && cp.size)) return NULL;
- intel_collect_cpu_info(&uci->cpu_sig); - return scan_microcode(cp.data, cp.size, uci, save); }
@@ -410,13 +410,13 @@ void __init load_ucode_intel_bsp(struct { struct ucode_cpu_info uci;
- ed->old_rev = intel_get_microcode_revision(); - uci.mc = get_microcode_blob(&uci, false); - if (uci.mc && apply_microcode_early(&uci) == UCODE_UPDATED) - ucode_patch_va = UCODE_BSP_LOADED; + ed->old_rev = uci.cpu_sig.rev;
- ed->new_rev = uci.cpu_sig.rev; + if (uci.mc && apply_microcode_early(&uci) == UCODE_UPDATED) { + ucode_patch_va = UCODE_BSP_LOADED; + ed->new_rev = uci.cpu_sig.rev; + } }
void load_ucode_intel_ap(void)
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Borislav Petkov bp@alien8.de
commit 94838d230a6c835ced1bad06b8759e0a5f19c1d3 upstream
On Zen and newer, the family, model and stepping are part of the microcode patch ID, so the equivalence table the driver has been using is not needed anymore.
So switch the driver to use that from now on.
The equivalence table in the microcode blob should still remain in case there's a need to pass some additional information to the kernel loader.
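As a worked example of the new encoding, here is a small standalone sketch that decodes an illustrative Zen3-era patch ID the same way ucode_rev_to_cpuid() in the hunk below does (the ID value is made up for illustration):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          uint32_t id = 0x0a201210;                /* illustrative patch ID */
          uint32_t stepping  = (id >>  8) & 0xf;   /* zen_patch_rev.stepping  */
          uint32_t model     = (id >> 12) & 0xf;   /* zen_patch_rev.model     */
          uint32_t ext_model = (id >> 20) & 0xf;   /* zen_patch_rev.ext_model */
          uint32_t ext_fam   = (id >> 24) & 0xff;  /* zen_patch_rev.ext_fam   */

          /* family = 0xf + ext_fam, model = (ext_model << 4) | model */
          printf("f/m/s: 0x%x/0x%x/%u\n",
                 0xf + ext_fam, (ext_model << 4) | model, stepping);
          /* prints: f/m/s: 0x19/0x21/2 */
          return 0;
  }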
Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20240725112037.GBZqI1BbUk1KMlOJ_D@fat_crate.local Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/amd.c | 190 +++++++++++++++++++++++++++++------- 1 file changed, 158 insertions(+), 32 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -91,6 +91,31 @@ static struct equiv_cpu_table { struct equiv_cpu_entry *entry; } equiv_table;
+union zen_patch_rev { + struct { + __u32 rev : 8, + stepping : 4, + model : 4, + __reserved : 4, + ext_model : 4, + ext_fam : 8; + }; + __u32 ucode_rev; +}; + +union cpuid_1_eax { + struct { + __u32 stepping : 4, + model : 4, + family : 4, + __reserved0 : 4, + ext_model : 4, + ext_fam : 8, + __reserved1 : 4; + }; + __u32 full; +}; + /* * This points to the current valid container of microcode patches which we will * save from the initrd/builtin before jettisoning its contents. @mc is the @@ -98,7 +123,6 @@ static struct equiv_cpu_table { */ struct cont_desc { struct microcode_amd *mc; - u32 cpuid_1_eax; u32 psize; u8 *data; size_t size; @@ -111,10 +135,42 @@ struct cont_desc { static const char ucode_path[] __maybe_unused = "kernel/x86/microcode/AuthenticAMD.bin";
+/* + * This is CPUID(1).EAX on the BSP. It is used in two ways: + * + * 1. To ignore the equivalence table on Zen1 and newer. + * + * 2. To match which patches to load because the patch revision ID + * already contains the f/m/s for which the microcode is destined + * for. + */ +static u32 bsp_cpuid_1_eax __ro_after_init; + +static union cpuid_1_eax ucode_rev_to_cpuid(unsigned int val) +{ + union zen_patch_rev p; + union cpuid_1_eax c; + + p.ucode_rev = val; + c.full = 0; + + c.stepping = p.stepping; + c.model = p.model; + c.ext_model = p.ext_model; + c.family = 0xf; + c.ext_fam = p.ext_fam; + + return c; +} + static u16 find_equiv_id(struct equiv_cpu_table *et, u32 sig) { unsigned int i;
+ /* Zen and newer do not need an equivalence table. */ + if (x86_family(bsp_cpuid_1_eax) >= 0x17) + return 0; + if (!et || !et->num_entries) return 0;
@@ -161,6 +217,10 @@ static bool verify_equivalence_table(con if (!verify_container(buf, buf_size)) return false;
+ /* Zen and newer do not need an equivalence table. */ + if (x86_family(bsp_cpuid_1_eax) >= 0x17) + return true; + cont_type = hdr[1]; if (cont_type != UCODE_EQUIV_CPU_TABLE_TYPE) { pr_debug("Wrong microcode container equivalence table type: %u.\n", @@ -224,8 +284,9 @@ __verify_patch_section(const u8 *buf, si * exceed the per-family maximum). @sh_psize is the size read from the section * header. */ -static unsigned int __verify_patch_size(u8 family, u32 sh_psize, size_t buf_size) +static unsigned int __verify_patch_size(u32 sh_psize, size_t buf_size) { + u8 family = x86_family(bsp_cpuid_1_eax); u32 max_size;
if (family >= 0x15) @@ -260,9 +321,9 @@ static unsigned int __verify_patch_size( * positive: patch is not for this family, skip it * 0: success */ -static int -verify_patch(u8 family, const u8 *buf, size_t buf_size, u32 *patch_size) +static int verify_patch(const u8 *buf, size_t buf_size, u32 *patch_size) { + u8 family = x86_family(bsp_cpuid_1_eax); struct microcode_header_amd *mc_hdr; unsigned int ret; u32 sh_psize; @@ -288,7 +349,7 @@ verify_patch(u8 family, const u8 *buf, s return -1; }
- ret = __verify_patch_size(family, sh_psize, buf_size); + ret = __verify_patch_size(sh_psize, buf_size); if (!ret) { pr_debug("Per-family patch size mismatch.\n"); return -1; @@ -310,6 +371,15 @@ verify_patch(u8 family, const u8 *buf, s return 0; }
+static bool mc_patch_matches(struct microcode_amd *mc, u16 eq_id) +{ + /* Zen and newer do not need an equivalence table. */ + if (x86_family(bsp_cpuid_1_eax) >= 0x17) + return ucode_rev_to_cpuid(mc->hdr.patch_id).full == bsp_cpuid_1_eax; + else + return eq_id == mc->hdr.processor_rev_id; +} + /* * This scans the ucode blob for the proper container as we can have multiple * containers glued together. Returns the equivalence ID from the equivalence @@ -338,7 +408,7 @@ static size_t parse_container(u8 *ucode, * doesn't contain a patch for the CPU, scan through the whole container * so that it can be skipped in case there are other containers appended. */ - eq_id = find_equiv_id(&table, desc->cpuid_1_eax); + eq_id = find_equiv_id(&table, bsp_cpuid_1_eax);
buf += hdr[2] + CONTAINER_HDR_SZ; size -= hdr[2] + CONTAINER_HDR_SZ; @@ -352,7 +422,7 @@ static size_t parse_container(u8 *ucode, u32 patch_size; int ret;
- ret = verify_patch(x86_family(desc->cpuid_1_eax), buf, size, &patch_size); + ret = verify_patch(buf, size, &patch_size); if (ret < 0) { /* * Patch verification failed, skip to the next container, if @@ -365,7 +435,7 @@ static size_t parse_container(u8 *ucode, }
mc = (struct microcode_amd *)(buf + SECTION_HDR_SIZE); - if (eq_id == mc->hdr.processor_rev_id) { + if (mc_patch_matches(mc, eq_id)) { desc->psize = patch_size; desc->mc = mc; } @@ -423,6 +493,7 @@ static int __apply_microcode_amd(struct
/* verify patch application was successful */ native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy); + if (rev != mc->hdr.patch_id) return -1;
@@ -440,14 +511,12 @@ static int __apply_microcode_amd(struct * * Returns true if container found (sets @desc), false otherwise. */ -static bool early_apply_microcode(u32 cpuid_1_eax, u32 old_rev, void *ucode, size_t size) +static bool early_apply_microcode(u32 old_rev, void *ucode, size_t size) { struct cont_desc desc = { 0 }; struct microcode_amd *mc; bool ret = false;
- desc.cpuid_1_eax = cpuid_1_eax; - scan_containers(ucode, size, &desc);
mc = desc.mc; @@ -465,9 +534,10 @@ static bool early_apply_microcode(u32 cp return !__apply_microcode_amd(mc); }
-static bool get_builtin_microcode(struct cpio_data *cp, unsigned int family) +static bool get_builtin_microcode(struct cpio_data *cp) { char fw_name[36] = "amd-ucode/microcode_amd.bin"; + u8 family = x86_family(bsp_cpuid_1_eax); struct firmware fw;
if (IS_ENABLED(CONFIG_X86_32)) @@ -486,11 +556,11 @@ static bool get_builtin_microcode(struct return false; }
-static void __init find_blobs_in_containers(unsigned int cpuid_1_eax, struct cpio_data *ret) +static void __init find_blobs_in_containers(struct cpio_data *ret) { struct cpio_data cp;
- if (!get_builtin_microcode(&cp, x86_family(cpuid_1_eax))) + if (!get_builtin_microcode(&cp)) cp = find_microcode_in_initrd(ucode_path);
*ret = cp; @@ -501,16 +571,18 @@ void __init load_ucode_amd_bsp(struct ea struct cpio_data cp = { }; u32 dummy;
+ bsp_cpuid_1_eax = cpuid_1_eax; + native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->old_rev, dummy);
/* Needed in load_microcode_amd() */ ucode_cpu_info[0].cpu_sig.sig = cpuid_1_eax;
- find_blobs_in_containers(cpuid_1_eax, &cp); + find_blobs_in_containers(&cp); if (!(cp.data && cp.size)) return;
- if (early_apply_microcode(cpuid_1_eax, ed->old_rev, cp.data, cp.size)) + if (early_apply_microcode(ed->old_rev, cp.data, cp.size)) native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->new_rev, dummy); }
@@ -527,12 +599,10 @@ static int __init save_microcode_in_init if (dis_ucode_ldr || c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10) return 0;
- find_blobs_in_containers(cpuid_1_eax, &cp); + find_blobs_in_containers(&cp); if (!(cp.data && cp.size)) return -EINVAL;
- desc.cpuid_1_eax = cpuid_1_eax; - scan_containers(cp.data, cp.size, &desc); if (!desc.mc) return -EINVAL; @@ -545,26 +615,65 @@ static int __init save_microcode_in_init } early_initcall(save_microcode_in_initrd);
+static inline bool patch_cpus_equivalent(struct ucode_patch *p, struct ucode_patch *n) +{ + /* Zen and newer hardcode the f/m/s in the patch ID */ + if (x86_family(bsp_cpuid_1_eax) >= 0x17) { + union cpuid_1_eax p_cid = ucode_rev_to_cpuid(p->patch_id); + union cpuid_1_eax n_cid = ucode_rev_to_cpuid(n->patch_id); + + /* Zap stepping */ + p_cid.stepping = 0; + n_cid.stepping = 0; + + return p_cid.full == n_cid.full; + } else { + return p->equiv_cpu == n->equiv_cpu; + } +} + /* * a small, trivial cache of per-family ucode patches */ -static struct ucode_patch *cache_find_patch(u16 equiv_cpu) +static struct ucode_patch *cache_find_patch(struct ucode_cpu_info *uci, u16 equiv_cpu) { struct ucode_patch *p; + struct ucode_patch n; + + n.equiv_cpu = equiv_cpu; + n.patch_id = uci->cpu_sig.rev; + + WARN_ON_ONCE(!n.patch_id);
list_for_each_entry(p, &microcode_cache, plist) - if (p->equiv_cpu == equiv_cpu) + if (patch_cpus_equivalent(p, &n)) return p; + return NULL; }
+static inline bool patch_newer(struct ucode_patch *p, struct ucode_patch *n) +{ + /* Zen and newer hardcode the f/m/s in the patch ID */ + if (x86_family(bsp_cpuid_1_eax) >= 0x17) { + union zen_patch_rev zp, zn; + + zp.ucode_rev = p->patch_id; + zn.ucode_rev = n->patch_id; + + return zn.rev > zp.rev; + } else { + return n->patch_id > p->patch_id; + } +} + static void update_cache(struct ucode_patch *new_patch) { struct ucode_patch *p;
list_for_each_entry(p, &microcode_cache, plist) { - if (p->equiv_cpu == new_patch->equiv_cpu) { - if (p->patch_id >= new_patch->patch_id) { + if (patch_cpus_equivalent(p, new_patch)) { + if (!patch_newer(p, new_patch)) { /* we already have the latest patch */ kfree(new_patch->data); kfree(new_patch); @@ -595,13 +704,22 @@ static void free_cache(void) static struct ucode_patch *find_patch(unsigned int cpu) { struct ucode_cpu_info *uci = ucode_cpu_info + cpu; + u32 rev, dummy __always_unused; u16 equiv_id;
- equiv_id = find_equiv_id(&equiv_table, uci->cpu_sig.sig); - if (!equiv_id) - return NULL; + /* fetch rev if not populated yet: */ + if (!uci->cpu_sig.rev) { + rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy); + uci->cpu_sig.rev = rev; + } + + if (x86_family(bsp_cpuid_1_eax) < 0x17) { + equiv_id = find_equiv_id(&equiv_table, uci->cpu_sig.sig); + if (!equiv_id) + return NULL; + }
- return cache_find_patch(equiv_id); + return cache_find_patch(uci, equiv_id); }
void reload_ucode_amd(unsigned int cpu) @@ -651,7 +769,7 @@ static enum ucode_state apply_microcode_ struct ucode_cpu_info *uci; struct ucode_patch *p; enum ucode_state ret; - u32 rev, dummy __always_unused; + u32 rev;
BUG_ON(raw_smp_processor_id() != cpu);
@@ -661,11 +779,11 @@ static enum ucode_state apply_microcode_ if (!p) return UCODE_NFOUND;
+ rev = uci->cpu_sig.rev; + mc_amd = p->data; uci->mc = p->data;
- rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy); - /* need to apply patch? */ if (rev > mc_amd->hdr.patch_id) { ret = UCODE_OK; @@ -711,6 +829,10 @@ static size_t install_equiv_cpu_table(co hdr = (const u32 *)buf; equiv_tbl_len = hdr[2];
+ /* Zen and newer do not need an equivalence table. */ + if (x86_family(bsp_cpuid_1_eax) >= 0x17) + goto out; + equiv_table.entry = vmalloc(equiv_tbl_len); if (!equiv_table.entry) { pr_err("failed to allocate equivalent CPU table\n"); @@ -720,12 +842,16 @@ static size_t install_equiv_cpu_table(co memcpy(equiv_table.entry, buf + CONTAINER_HDR_SZ, equiv_tbl_len); equiv_table.num_entries = equiv_tbl_len / sizeof(struct equiv_cpu_entry);
+out: /* add header length */ return equiv_tbl_len + CONTAINER_HDR_SZ; }
static void free_equiv_cpu_table(void) { + if (x86_family(bsp_cpuid_1_eax) >= 0x17) + return; + vfree(equiv_table.entry); memset(&equiv_table, 0, sizeof(equiv_table)); } @@ -751,7 +877,7 @@ static int verify_and_add_patch(u8 famil u16 proc_id; int ret;
- ret = verify_patch(family, fw, leftover, patch_size); + ret = verify_patch(fw, leftover, patch_size); if (ret) return ret;
@@ -776,7 +902,7 @@ static int verify_and_add_patch(u8 famil patch->patch_id = mc_hdr->patch_id; patch->equiv_cpu = proc_id;
- pr_debug("%s: Added patch_id: 0x%08x, proc_id: 0x%04x\n", + pr_debug("%s: Adding patch_id: 0x%08x, proc_id: 0x%04x\n", __func__, patch->patch_id, proc_id);
/* ... and add to cache. */
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: "Borislav Petkov (AMD)" bp@alien8.de
commit d1744a4c975b1acbe8b498356d28afbc46c88428 upstream
The commit in Fixes changed how a microcode patch is loaded on Zen and newer, but the patch matching needs to happen with different rigidity, depending on what is being done:
1) When the patch is added to the patch cache, the stepping must be ignored because the driver still supports different steppings per system.
2) When the patch is matched for loading, the stepping must be taken into account because each CPU needs the patch matching its exact stepping.
Take care of that by making the matching smarter.
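Concretely, the two call sites end up with different rigidity, as visible in the hunks below:

  cache_find_patch():  patch_cpus_equivalent(p, &n, false);        /* exact stepping */
  update_cache():      patch_cpus_equivalent(p, new_patch, true);  /* stepping zapped */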
Fixes: 94838d230a6c ("x86/microcode/AMD: Use the family,model,stepping encoded in the patch ID") Reported-by: Jens Axboe axboe@kernel.dk Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Tested-by: Jens Axboe axboe@kernel.dk Link: https://lore.kernel.org/r/91194406-3fdf-4e38-9838-d334af538f74@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/amd.c | 26 ++++++++++++++++++-------- 1 file changed, 18 insertions(+), 8 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -615,16 +615,19 @@ static int __init save_microcode_in_init } early_initcall(save_microcode_in_initrd);
-static inline bool patch_cpus_equivalent(struct ucode_patch *p, struct ucode_patch *n) +static inline bool patch_cpus_equivalent(struct ucode_patch *p, + struct ucode_patch *n, + bool ignore_stepping) { /* Zen and newer hardcode the f/m/s in the patch ID */ if (x86_family(bsp_cpuid_1_eax) >= 0x17) { union cpuid_1_eax p_cid = ucode_rev_to_cpuid(p->patch_id); union cpuid_1_eax n_cid = ucode_rev_to_cpuid(n->patch_id);
- /* Zap stepping */ - p_cid.stepping = 0; - n_cid.stepping = 0; + if (ignore_stepping) { + p_cid.stepping = 0; + n_cid.stepping = 0; + }
return p_cid.full == n_cid.full; } else { @@ -646,13 +649,13 @@ static struct ucode_patch *cache_find_pa WARN_ON_ONCE(!n.patch_id);
list_for_each_entry(p, &microcode_cache, plist) - if (patch_cpus_equivalent(p, &n)) + if (patch_cpus_equivalent(p, &n, false)) return p;
return NULL; }
-static inline bool patch_newer(struct ucode_patch *p, struct ucode_patch *n) +static inline int patch_newer(struct ucode_patch *p, struct ucode_patch *n) { /* Zen and newer hardcode the f/m/s in the patch ID */ if (x86_family(bsp_cpuid_1_eax) >= 0x17) { @@ -661,6 +664,9 @@ static inline bool patch_newer(struct uc zp.ucode_rev = p->patch_id; zn.ucode_rev = n->patch_id;
+ if (zn.stepping != zp.stepping) + return -1; + return zn.rev > zp.rev; } else { return n->patch_id > p->patch_id; @@ -670,10 +676,14 @@ static inline bool patch_newer(struct uc static void update_cache(struct ucode_patch *new_patch) { struct ucode_patch *p; + int ret;
list_for_each_entry(p, &microcode_cache, plist) { - if (patch_cpus_equivalent(p, new_patch)) { - if (!patch_newer(p, new_patch)) { + if (patch_cpus_equivalent(p, new_patch, true)) { + ret = patch_newer(p, new_patch); + if (ret < 0) + continue; + else if (!ret) { /* we already have the latest patch */ kfree(new_patch->data); kfree(new_patch);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: "Borislav Petkov (AMD)" bp@alien8.de
commit 1d81d85d1a19e50d5237dc67d6b825c34ae13de8 upstream
This function should've been split a long time ago because it is used in two paths:
1) On the late loading path, when the microcode is loaded through the request_firmware interface
2) In the save_microcode_in_initrd() path, which collects all the microcode patches relevant for the current system before the initrd with the microcode container is jettisoned.
In that path, it is not really necessary to iterate over the nodes on a system and match a patch; however, it didn't cause any trouble, so it was left for a later cleanup.
However, that later cleanup was expedited by the fact that Jens enabled "Use L3 as a NUMA node" in the BIOS on his machine, which causes the NUMA CPU masks used in cpumask_of_node() to be generated *after* 2) above has happened on the first node. This means all those masks were funky, wrong, uninitialized and whatnot, leading to explosions when dereferencing c->microcode in load_microcode_amd().
So split that function and do only the work needed at each stage.
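The resulting call structure, sketched from the hunk below (the per-node body of load_microcode_amd() is unchanged):

  save_microcode_in_initrd()        /* early; NUMA masks may not be set up yet */
    -> _load_microcode_amd()        /* parse containers, fill the patch cache  */

  load_microcode_amd()              /* late-loading path via request_firmware  */
    -> _load_microcode_amd()
    -> for_each_node(): propagate the cached patch to each node's CPUs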
Fixes: 94838d230a6c ("x86/microcode/AMD: Use the family,model,stepping encoded in the patch ID") Reported-by: Jens Axboe axboe@kernel.dk Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Tested-by: Jens Axboe axboe@kernel.dk Link: https://lore.kernel.org/r/91194406-3fdf-4e38-9838-d334af538f74@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/amd.c | 25 +++++++++++++++++-------- 1 file changed, 17 insertions(+), 8 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -586,7 +586,7 @@ void __init load_ucode_amd_bsp(struct ea native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->new_rev, dummy); }
-static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size); +static enum ucode_state _load_microcode_amd(u8 family, const u8 *data, size_t size);
static int __init save_microcode_in_initrd(void) { @@ -607,7 +607,7 @@ static int __init save_microcode_in_init if (!desc.mc) return -EINVAL;
- ret = load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size); + ret = _load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size); if (ret > UCODE_UPDATED) return -EINVAL;
@@ -956,21 +956,30 @@ static enum ucode_state __load_microcode return UCODE_OK; }
-static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size) +static enum ucode_state _load_microcode_amd(u8 family, const u8 *data, size_t size) { - struct cpuinfo_x86 *c; - unsigned int nid, cpu; - struct ucode_patch *p; enum ucode_state ret;
/* free old equiv table */ free_equiv_cpu_table();
ret = __load_microcode_amd(family, data, size); - if (ret != UCODE_OK) { + if (ret != UCODE_OK) cleanup(); + + return ret; +} + +static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size) +{ + struct cpuinfo_x86 *c; + unsigned int nid, cpu; + struct ucode_patch *p; + enum ucode_state ret; + + ret = _load_microcode_amd(family, data, size); + if (ret != UCODE_OK) return ret; - }
for_each_node(nid) { cpu = cpumask_first(cpumask_of_node(nid));
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: "Chang S. Bae" chang.seok.bae@intel.com
commit 9a819753b0209c6edebdea447a1aa53e8c697653 upstream
Currently, an unconditional cache flush is performed during every microcode update. Although the original changelog did not mention a specific erratum, this measure was primarily intended to address a specific microcode bug, the load of which has already been blocked by is_blacklisted(). Therefore, this cache flush is no longer necessary.
Additionally, the side effects of doing this have been overlooked. It increases CPU rendezvous time during late loading, where the cache flush takes between 1x and 3.5x longer than the actual microcode update.
Remove native_wbinvd() and update the erratum name to align with the latest errata documentation, document ID 334163 Version 022US.
[ bp: Zap the flaky documentation URL. ]
Fixes: 91df9fdf5149 ("x86/microcode/intel: Writeback and invalidate caches before updating microcode") Reported-by: Yan Hua Wu yanhua1.wu@intel.com Reported-by: William Xie william.xie@intel.com Signed-off-by: Chang S. Bae chang.seok.bae@intel.com Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Acked-by: Ashok Raj ashok.raj@intel.com Tested-by: Yan Hua Wu yanhua1.wu@intel.com Link: https://lore.kernel.org/r/20241001161042.465584-2-chang.seok.bae@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/intel.c | 10 ++-------- 1 file changed, 2 insertions(+), 8 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -319,12 +319,6 @@ static enum ucode_state __apply_microcod return UCODE_OK; }
- /* - * Writeback and invalidate caches before updating microcode to avoid - * internal issues depending on what the microcode is updating. - */ - native_wbinvd(); - /* write microcode via MSR 0x79 */ native_wrmsrl(MSR_IA32_UCODE_WRITE, (unsigned long)mc->bits);
@@ -551,7 +545,7 @@ static bool is_blacklisted(unsigned int /* * Late loading on model 79 with microcode revision less than 0x0b000021 * and LLC size per core bigger than 2.5MB may result in a system hang. - * This behavior is documented in item BDF90, #334165 (Intel Xeon + * This behavior is documented in item BDX90, #334165 (Intel Xeon * Processor E7-8800/4800 v4 Product Family). */ if (c->x86 == 6 && @@ -559,7 +553,7 @@ static bool is_blacklisted(unsigned int c->x86_stepping == 0x01 && llc_size_per_core > 2621440 && c->microcode < 0x0b000021) { - pr_err_once("Erratum BDF90: late loading with revision < 0x0b000021 (0x%x) disabled.\n", c->microcode); + pr_err_once("Erratum BDX90: late loading with revision < 0x0b000021 (0x%x) disabled.\n", c->microcode); pr_err_once("Please consider either early loading through initrd/built-in or a potential BIOS update.\n"); return true; }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: "Borislav Petkov (AMD)" bp@alien8.de
commit c809b0d0e52d01c30066367b2952c4c4186b1047 upstream
Due to specific requirements while applying microcode patches on Zen1 and 2, the patch buffer mapping needs to be flushed from the TLB after application. Do so.
If not, unnecessary and unnatural delays happen in the boot process.
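For illustration, the page-crossing condition in the hunk below with made-up numbers:

  p_addr     = 0xffffffffc0100f80              /* illustrative */
  psize      = 0x140
  p_addr_end = p_addr + psize - 1 = 0xffffffffc01010bf

  p_addr     >> PAGE_SHIFT = 0xffffffffc0100
  p_addr_end >> PAGE_SHIFT = 0xffffffffc0101   /* differs -> invlpg() both pages */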
Reported-by: Thomas De Schampheleire thomas.de_schampheleire@nokia.com Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Tested-by: Thomas De Schampheleire thomas.de_schampheleire@nokia.com Cc: stable@kernel.org # f1d84b59cbb9 ("x86/mm: Carve out INVLPG inline asm for use by others") Link: https://lore.kernel.org/r/ZyulbYuvrkshfsd2@antipodes Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/amd.c | 25 ++++++++++++++++++++----- 1 file changed, 20 insertions(+), 5 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -34,6 +34,7 @@ #include <asm/setup.h> #include <asm/cpu.h> #include <asm/msr.h> +#include <asm/tlb.h>
#include "internal.h"
@@ -485,11 +486,25 @@ static void scan_containers(u8 *ucode, s } }
-static int __apply_microcode_amd(struct microcode_amd *mc) +static int __apply_microcode_amd(struct microcode_amd *mc, unsigned int psize) { + unsigned long p_addr = (unsigned long)&mc->hdr.data_code; u32 rev, dummy;
- native_wrmsrl(MSR_AMD64_PATCH_LOADER, (u64)(long)&mc->hdr.data_code); + native_wrmsrl(MSR_AMD64_PATCH_LOADER, p_addr); + + if (x86_family(bsp_cpuid_1_eax) == 0x17) { + unsigned long p_addr_end = p_addr + psize - 1; + + invlpg(p_addr); + + /* + * Flush next page too if patch image is crossing a page + * boundary. + */ + if (p_addr >> PAGE_SHIFT != p_addr_end >> PAGE_SHIFT) + invlpg(p_addr_end); + }
/* verify patch application was successful */ native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy); @@ -531,7 +546,7 @@ static bool early_apply_microcode(u32 ol if (old_rev > mc->hdr.patch_id) return ret;
- return !__apply_microcode_amd(mc); + return !__apply_microcode_amd(mc, desc.psize); }
static bool get_builtin_microcode(struct cpio_data *cp) @@ -747,7 +762,7 @@ void reload_ucode_amd(unsigned int cpu) rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
if (rev < mc->hdr.patch_id) { - if (!__apply_microcode_amd(mc)) + if (!__apply_microcode_amd(mc, p->size)) pr_info_once("reload revision: 0x%08x\n", mc->hdr.patch_id); } } @@ -800,7 +815,7 @@ static enum ucode_state apply_microcode_ goto out; }
- if (__apply_microcode_amd(mc_amd)) { + if (__apply_microcode_amd(mc_amd, p->size)) { pr_err("CPU%d: update failed for patch_level=0x%08x\n", cpu, mc_amd->hdr.patch_id); return UCODE_ERROR;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nikolay Borisov nik.borisov@suse.com
commit a85c08aaa665b5436d325f6d7138732a0e1315ce upstream
Instead of open-coding the check for size/data, move it inside the function and make it return a boolean indicating whether data was found or not.
No functional changes.
[ bp: Write @ret in find_blobs_in_containers() only on success. ]
Signed-off-by: Nikolay Borisov nik.borisov@suse.com Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20241018155151.702350-2-nik.borisov@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/amd.c | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -571,14 +571,19 @@ static bool get_builtin_microcode(struct return false; }
-static void __init find_blobs_in_containers(struct cpio_data *ret) +static bool __init find_blobs_in_containers(struct cpio_data *ret) { struct cpio_data cp; + bool found;
if (!get_builtin_microcode(&cp)) cp = find_microcode_in_initrd(ucode_path);
- *ret = cp; + found = cp.data && cp.size; + if (found) + *ret = cp; + + return found; }
void __init load_ucode_amd_bsp(struct early_load_data *ed, unsigned int cpuid_1_eax) @@ -593,8 +598,7 @@ void __init load_ucode_amd_bsp(struct ea /* Needed in load_microcode_amd() */ ucode_cpu_info[0].cpu_sig.sig = cpuid_1_eax;
- find_blobs_in_containers(&cp); - if (!(cp.data && cp.size)) + if (!find_blobs_in_containers(&cp)) return;
if (early_apply_microcode(ed->old_rev, cp.data, cp.size)) @@ -614,8 +618,7 @@ static int __init save_microcode_in_init if (dis_ucode_ldr || c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10) return 0;
- find_blobs_in_containers(&cp); - if (!(cp.data && cp.size)) + if (!find_blobs_in_containers(&cp)) return -EINVAL;
scan_containers(cp.data, cp.size, &desc);
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nikolay Borisov nik.borisov@suse.com
commit d8317f3d8e6b412ff51ea66f1de2b2f89835f811 upstream
The result of that function is in essence boolean, so simplify to return the result of the relevant expression. It also makes it follow the convention used by __verify_patch_section().
No functional changes.
Signed-off-by: Nikolay Borisov nik.borisov@suse.com Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20241018155151.702350-3-nik.borisov@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/amd.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -285,13 +285,13 @@ __verify_patch_section(const u8 *buf, si * exceed the per-family maximum). @sh_psize is the size read from the section * header. */ -static unsigned int __verify_patch_size(u32 sh_psize, size_t buf_size) +static bool __verify_patch_size(u32 sh_psize, size_t buf_size) { u8 family = x86_family(bsp_cpuid_1_eax); u32 max_size;
if (family >= 0x15) - return min_t(u32, sh_psize, buf_size); + goto ret;
#define F1XH_MPB_MAX_SIZE 2048 #define F14H_MPB_MAX_SIZE 1824 @@ -305,13 +305,15 @@ static unsigned int __verify_patch_size( break; default: WARN(1, "%s: WTF family: 0x%x\n", __func__, family); - return 0; + return false; }
- if (sh_psize > min_t(u32, buf_size, max_size)) - return 0; + if (sh_psize > max_size) + return false;
- return sh_psize; +ret: + /* Working with the whole buffer so < is ok. */ + return sh_psize <= buf_size; }
/* @@ -326,7 +328,6 @@ static int verify_patch(const u8 *buf, s { u8 family = x86_family(bsp_cpuid_1_eax); struct microcode_header_amd *mc_hdr; - unsigned int ret; u32 sh_psize; u16 proc_id; u8 patch_fam; @@ -350,8 +351,7 @@ static int verify_patch(const u8 *buf, s return -1; }
- ret = __verify_patch_size(sh_psize, buf_size); - if (!ret) { + if (!__verify_patch_size(sh_psize, buf_size)) { pr_debug("Per-family patch size mismatch.\n"); return -1; }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: "Borislav Petkov (AMD)" bp@alien8.de
commit 78e0aadbd4c6807a06a9d25bc190fe515d3f3c42 upstream
This is the natural thing to do anyway.
No functional changes.
Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/amd.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -486,7 +486,7 @@ static void scan_containers(u8 *ucode, s } }
-static int __apply_microcode_amd(struct microcode_amd *mc, unsigned int psize) +static bool __apply_microcode_amd(struct microcode_amd *mc, unsigned int psize) { unsigned long p_addr = (unsigned long)&mc->hdr.data_code; u32 rev, dummy; @@ -510,9 +510,9 @@ static int __apply_microcode_amd(struct native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
if (rev != mc->hdr.patch_id) - return -1; + return false;
- return 0; + return true; }
/* @@ -546,7 +546,7 @@ static bool early_apply_microcode(u32 ol if (old_rev > mc->hdr.patch_id) return ret;
- return !__apply_microcode_amd(mc, desc.psize); + return __apply_microcode_amd(mc, desc.psize); }
static bool get_builtin_microcode(struct cpio_data *cp) @@ -765,7 +765,7 @@ void reload_ucode_amd(unsigned int cpu) rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
if (rev < mc->hdr.patch_id) { - if (!__apply_microcode_amd(mc, p->size)) + if (__apply_microcode_amd(mc, p->size)) pr_info_once("reload revision: 0x%08x\n", mc->hdr.patch_id); } } @@ -818,7 +818,7 @@ static enum ucode_state apply_microcode_ goto out; }
- if (__apply_microcode_amd(mc_amd, p->size)) { + if (!__apply_microcode_amd(mc_amd, p->size)) { pr_err("CPU%d: update failed for patch_level=0x%08x\n", cpu, mc_amd->hdr.patch_id); return UCODE_ERROR;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: "Borislav Petkov (AMD)" bp@alien8.de
commit dc15675074dcfd79a2f10a6e39f96b0244961a01 upstream
No functional changes.
Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Reviewed-by: Thomas Gleixner tglx@linutronix.de Link: https://lore.kernel.org/r/20250211163648.30531-4-bp@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/amd.c | 60 +++++++++++++++--------------------- 1 file changed, 26 insertions(+), 34 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -515,39 +515,6 @@ static bool __apply_microcode_amd(struct return true; }
-/* - * Early load occurs before we can vmalloc(). So we look for the microcode - * patch container file in initrd, traverse equivalent cpu table, look for a - * matching microcode patch, and update, all in initrd memory in place. - * When vmalloc() is available for use later -- on 64-bit during first AP load, - * and on 32-bit during save_microcode_in_initrd_amd() -- we can call - * load_microcode_amd() to save equivalent cpu table and microcode patches in - * kernel heap memory. - * - * Returns true if container found (sets @desc), false otherwise. - */ -static bool early_apply_microcode(u32 old_rev, void *ucode, size_t size) -{ - struct cont_desc desc = { 0 }; - struct microcode_amd *mc; - bool ret = false; - - scan_containers(ucode, size, &desc); - - mc = desc.mc; - if (!mc) - return ret; - - /* - * Allow application of the same revision to pick up SMT-specific - * changes even if the revision of the other SMT thread is already - * up-to-date. - */ - if (old_rev > mc->hdr.patch_id) - return ret; - - return __apply_microcode_amd(mc, desc.psize); -}
static bool get_builtin_microcode(struct cpio_data *cp) { @@ -586,8 +553,19 @@ static bool __init find_blobs_in_contain return found; }
+/* + * Early load occurs before we can vmalloc(). So we look for the microcode + * patch container file in initrd, traverse equivalent cpu table, look for a + * matching microcode patch, and update, all in initrd memory in place. + * When vmalloc() is available for use later -- on 64-bit during first AP load, + * and on 32-bit during save_microcode_in_initrd() -- we can call + * load_microcode_amd() to save equivalent cpu table and microcode patches in + * kernel heap memory. + */ void __init load_ucode_amd_bsp(struct early_load_data *ed, unsigned int cpuid_1_eax) { + struct cont_desc desc = { }; + struct microcode_amd *mc; struct cpio_data cp = { }; u32 dummy;
@@ -601,7 +579,21 @@ void __init load_ucode_amd_bsp(struct ea if (!find_blobs_in_containers(&cp)) return;
- if (early_apply_microcode(ed->old_rev, cp.data, cp.size)) + scan_containers(cp.data, cp.size, &desc); + + mc = desc.mc; + if (!mc) + return; + + /* + * Allow application of the same revision to pick up SMT-specific + * changes even if the revision of the other SMT thread is already + * up-to-date. + */ + if (ed->old_rev > mc->hdr.patch_id) + return; + + if (__apply_microcode_amd(mc, desc.psize)) native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->new_rev, dummy); }
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: "Borislav Petkov (AMD)" bp@alien8.de
commit b39c387164879eef71886fc93cee5ca7dd7bf500 upstream
Simply move save_microcode_in_initrd() down.
No functional changes.
Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Reviewed-by: Thomas Gleixner tglx@linutronix.de Link: https://lore.kernel.org/r/20250211163648.30531-5-bp@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/amd.c | 54 +++++++++++++++++------------------- 1 file changed, 26 insertions(+), 28 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -597,34 +597,6 @@ void __init load_ucode_amd_bsp(struct ea native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->new_rev, dummy); }
-static enum ucode_state _load_microcode_amd(u8 family, const u8 *data, size_t size); - -static int __init save_microcode_in_initrd(void) -{ - unsigned int cpuid_1_eax = native_cpuid_eax(1); - struct cpuinfo_x86 *c = &boot_cpu_data; - struct cont_desc desc = { 0 }; - enum ucode_state ret; - struct cpio_data cp; - - if (dis_ucode_ldr || c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10) - return 0; - - if (!find_blobs_in_containers(&cp)) - return -EINVAL; - - scan_containers(cp.data, cp.size, &desc); - if (!desc.mc) - return -EINVAL; - - ret = _load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size); - if (ret > UCODE_UPDATED) - return -EINVAL; - - return 0; -} -early_initcall(save_microcode_in_initrd); - static inline bool patch_cpus_equivalent(struct ucode_patch *p, struct ucode_patch *n, bool ignore_stepping) @@ -1008,6 +980,32 @@ static enum ucode_state load_microcode_a return ret; }
+static int __init save_microcode_in_initrd(void) +{ + unsigned int cpuid_1_eax = native_cpuid_eax(1); + struct cpuinfo_x86 *c = &boot_cpu_data; + struct cont_desc desc = { 0 }; + enum ucode_state ret; + struct cpio_data cp; + + if (dis_ucode_ldr || c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10) + return 0; + + if (!find_blobs_in_containers(&cp)) + return -EINVAL; + + scan_containers(cp.data, cp.size, &desc); + if (!desc.mc) + return -EINVAL; + + ret = _load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size); + if (ret > UCODE_UPDATED) + return -EINVAL; + + return 0; +} +early_initcall(save_microcode_in_initrd); + /* * AMD microcode firmware naming convention, up to family 15h they are in * the legacy file:
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: "Borislav Petkov (AMD)" bp@alien8.de
commit 037e81fb9d2dfe7b31fd97e5f578854e38f09887 upstream
Put the MSR_AMD64_PATCH_LEVEL reading of the current microcode revision the hw has into a separate function.
Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Reviewed-by: Thomas Gleixner tglx@linutronix.de Link: https://lore.kernel.org/r/20250211163648.30531-6-bp@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/amd.c | 46 ++++++++++++++++++------------------ 1 file changed, 24 insertions(+), 22 deletions(-)
--- a/arch/x86/kernel/cpu/microcode/amd.c
+++ b/arch/x86/kernel/cpu/microcode/amd.c
@@ -147,6 +147,15 @@ ucode_path[] __maybe_unused = "kernel/x8
  */
 static u32 bsp_cpuid_1_eax __ro_after_init;
 
+static u32 get_patch_level(void)
+{
+	u32 rev, dummy __always_unused;
+
+	native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+
+	return rev;
+}
+
 static union cpuid_1_eax ucode_rev_to_cpuid(unsigned int val)
 {
 	union zen_patch_rev p;
@@ -486,10 +495,10 @@ static void scan_containers(u8 *ucode, s
 	}
 }
 
-static bool __apply_microcode_amd(struct microcode_amd *mc, unsigned int psize)
+static bool __apply_microcode_amd(struct microcode_amd *mc, u32 *cur_rev,
+				  unsigned int psize)
 {
 	unsigned long p_addr = (unsigned long)&mc->hdr.data_code;
-	u32 rev, dummy;
 
 	native_wrmsrl(MSR_AMD64_PATCH_LOADER, p_addr);
 
@@ -507,9 +516,8 @@ static bool __apply_microcode_amd(struct
 	}
 
 	/* verify patch application was successful */
-	native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
-
-	if (rev != mc->hdr.patch_id)
+	*cur_rev = get_patch_level();
+	if (*cur_rev != mc->hdr.patch_id)
 		return false;
 
 	return true;
@@ -567,11 +575,12 @@ void __init load_ucode_amd_bsp(struct ea
 	struct cont_desc desc = { };
 	struct microcode_amd *mc;
 	struct cpio_data cp = { };
-	u32 dummy;
+	u32 rev;
 
 	bsp_cpuid_1_eax = cpuid_1_eax;
 
-	native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->old_rev, dummy);
+	rev = get_patch_level();
+	ed->old_rev = rev;
 
 	/* Needed in load_microcode_amd() */
 	ucode_cpu_info[0].cpu_sig.sig = cpuid_1_eax;
@@ -593,8 +602,8 @@ void __init load_ucode_amd_bsp(struct ea
 	if (ed->old_rev > mc->hdr.patch_id)
 		return;
 
-	if (__apply_microcode_amd(mc, desc.psize))
-		native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->new_rev, dummy);
+	if (__apply_microcode_amd(mc, &rev, desc.psize))
+		ed->new_rev = rev;
 }
 
 static inline bool patch_cpus_equivalent(struct ucode_patch *p,
@@ -696,14 +705,9 @@ static void free_cache(void)
 static struct ucode_patch *find_patch(unsigned int cpu)
 {
 	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
-	u32 rev, dummy __always_unused;
 	u16 equiv_id;
 
-	/* fetch rev if not populated yet: */
-	if (!uci->cpu_sig.rev) {
-		rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
-		uci->cpu_sig.rev = rev;
-	}
+	uci->cpu_sig.rev = get_patch_level();
 
 	if (x86_family(bsp_cpuid_1_eax) < 0x17) {
 		equiv_id = find_equiv_id(&equiv_table, uci->cpu_sig.sig);
@@ -726,22 +730,20 @@ void reload_ucode_amd(unsigned int cpu)
 
 	mc = p->data;
 
-	rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
-
+	rev = get_patch_level();
 	if (rev < mc->hdr.patch_id) {
-		if (__apply_microcode_amd(mc, p->size))
-			pr_info_once("reload revision: 0x%08x\n", mc->hdr.patch_id);
+		if (__apply_microcode_amd(mc, &rev, p->size))
+			pr_info_once("reload revision: 0x%08x\n", rev);
 	}
 }
 
 static int collect_cpu_info_amd(int cpu, struct cpu_signature *csig)
 {
-	struct cpuinfo_x86 *c = &cpu_data(cpu);
 	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
 	struct ucode_patch *p;
 
 	csig->sig = cpuid_eax(0x00000001);
-	csig->rev = c->microcode;
+	csig->rev = get_patch_level();
 
 	/*
 	 * a patch could have been loaded early, set uci->mc so that
@@ -782,7 +784,7 @@ static enum ucode_state apply_microcode_
 		goto out;
 	}
 
-	if (!__apply_microcode_amd(mc_amd, p->size)) {
+	if (!__apply_microcode_amd(mc_amd, &rev, p->size)) {
 		pr_err("CPU%d: update failed for patch_level=0x%08x\n",
 		       cpu, mc_amd->hdr.patch_id);
 		return UCODE_ERROR;
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: "Borislav Petkov (AMD)" bp@alien8.de
commit 50cef76d5cb0e199cda19f026842560f6eedc4f7 upstream
Load patches for which the driver carries a SHA256 checksum of the patch blob.
This can be disabled by adding "microcode.amd_sha_check=off" on the kernel command line, but doing so is strongly discouraged.
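For reference, the added check boils down to hashing the patch blob with the kernel's SHA-256 library helpers (hence the CRYPTO_LIB_SHA256 select below) and comparing the result against a digest table built into the driver. A minimal sketch of that pattern; the helper name here is illustrative, the real implementation is verify_sha256_digest() in the diff below:

	#include <crypto/sha2.h>

	/* Hash @len bytes at @data and compare with the expected digest. */
	static bool blob_digest_matches(const u8 *data, unsigned int len,
					const u8 expected[SHA256_DIGEST_SIZE])
	{
		u8 digest[SHA256_DIGEST_SIZE];
		struct sha256_state s;

		sha256_init(&s);
		sha256_update(&s, data, len);
		sha256_final(&s, digest);

		return !memcmp(digest, expected, SHA256_DIGEST_SIZE);
	}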
Signed-off-by: Borislav Petkov (AMD) bp@alien8.de
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 arch/x86/Kconfig                         |   1 
 arch/x86/kernel/cpu/microcode/amd.c      | 111 +++++++
 arch/x86/kernel/cpu/microcode/amd_shas.c | 444 +++++++++++++++++++++++++++++++
 3 files changed, 554 insertions(+), 2 deletions(-)
 create mode 100644 arch/x86/kernel/cpu/microcode/amd_shas.c
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1313,6 +1313,7 @@ config X86_REBOOTFIXUPS
 config MICROCODE
 	def_bool y
 	depends on CPU_SUP_AMD || CPU_SUP_INTEL
+	select CRYPTO_LIB_SHA256 if CPU_SUP_AMD
 
 config MICROCODE_LATE_LOADING
 	bool "Late microcode loading (DANGEROUS)"
--- a/arch/x86/kernel/cpu/microcode/amd.c
+++ b/arch/x86/kernel/cpu/microcode/amd.c
@@ -23,14 +23,18 @@
 
 #include <linux/earlycpio.h>
 #include <linux/firmware.h>
+#include <linux/bsearch.h>
 #include <linux/uaccess.h>
 #include <linux/vmalloc.h>
 #include <linux/initrd.h>
 #include <linux/kernel.h>
 #include <linux/pci.h>
 
+#include <crypto/sha2.h>
+
 #include <asm/microcode.h>
 #include <asm/processor.h>
+#include <asm/cmdline.h>
 #include <asm/setup.h>
 #include <asm/cpu.h>
 #include <asm/msr.h>
@@ -147,6 +151,98 @@ ucode_path[] __maybe_unused = "kernel/x8
  */
 static u32 bsp_cpuid_1_eax __ro_after_init;
 
+static bool sha_check = true;
+
+struct patch_digest {
+	u32 patch_id;
+	u8 sha256[SHA256_DIGEST_SIZE];
+};
+
+#include "amd_shas.c"
+
+static int cmp_id(const void *key, const void *elem)
+{
+	struct patch_digest *pd = (struct patch_digest *)elem;
+	u32 patch_id = *(u32 *)key;
+
+	if (patch_id == pd->patch_id)
+		return 0;
+	else if (patch_id < pd->patch_id)
+		return -1;
+	else
+		return 1;
+}
+
+static bool need_sha_check(u32 cur_rev)
+{
+	switch (cur_rev >> 8) {
+	case 0x80012: return cur_rev <= 0x800126f; break;
+	case 0x83010: return cur_rev <= 0x830107c; break;
+	case 0x86001: return cur_rev <= 0x860010e; break;
+	case 0x86081: return cur_rev <= 0x8608108; break;
+	case 0x87010: return cur_rev <= 0x8701034; break;
+	case 0x8a000: return cur_rev <= 0x8a0000a; break;
+	case 0xa0011: return cur_rev <= 0xa0011da; break;
+	case 0xa0012: return cur_rev <= 0xa001243; break;
+	case 0xa1011: return cur_rev <= 0xa101153; break;
+	case 0xa1012: return cur_rev <= 0xa10124e; break;
+	case 0xa1081: return cur_rev <= 0xa108109; break;
+	case 0xa2010: return cur_rev <= 0xa20102f; break;
+	case 0xa2012: return cur_rev <= 0xa201212; break;
+	case 0xa6012: return cur_rev <= 0xa60120a; break;
+	case 0xa7041: return cur_rev <= 0xa704109; break;
+	case 0xa7052: return cur_rev <= 0xa705208; break;
+	case 0xa7080: return cur_rev <= 0xa708009; break;
+	case 0xa70c0: return cur_rev <= 0xa70C009; break;
+	case 0xaa002: return cur_rev <= 0xaa00218; break;
+	default: break;
+	}
+
+	pr_info("You should not be seeing this. Please send the following couple of lines to x86-<at>-kernel.org\n");
+	pr_info("CPUID(1).EAX: 0x%x, current revision: 0x%x\n", bsp_cpuid_1_eax, cur_rev);
+	return true;
+}
+
+static bool verify_sha256_digest(u32 patch_id, u32 cur_rev, const u8 *data, unsigned int len)
+{
+	struct patch_digest *pd = NULL;
+	u8 digest[SHA256_DIGEST_SIZE];
+	struct sha256_state s;
+	int i;
+
+	if (x86_family(bsp_cpuid_1_eax) < 0x17 ||
+	    x86_family(bsp_cpuid_1_eax) > 0x19)
+		return true;
+
+	if (!need_sha_check(cur_rev))
+		return true;
+
+	if (!sha_check)
+		return true;
+
+	pd = bsearch(&patch_id, phashes, ARRAY_SIZE(phashes), sizeof(struct patch_digest), cmp_id);
+	if (!pd) {
+		pr_err("No sha256 digest for patch ID: 0x%x found\n", patch_id);
+		return false;
+	}
+
+	sha256_init(&s);
+	sha256_update(&s, data, len);
+	sha256_final(&s, digest);
+
+	if (memcmp(digest, pd->sha256, sizeof(digest))) {
+		pr_err("Patch 0x%x SHA256 digest mismatch!\n", patch_id);
+
+		for (i = 0; i < SHA256_DIGEST_SIZE; i++)
+			pr_cont("0x%x ", digest[i]);
+		pr_info("\n");
+
+		return false;
+	}
+
+	return true;
+}
+
 static u32 get_patch_level(void)
 {
 	u32 rev, dummy __always_unused;
@@ -500,6 +596,9 @@ static bool __apply_microcode_amd(struct
 {
 	unsigned long p_addr = (unsigned long)&mc->hdr.data_code;
 
+	if (!verify_sha256_digest(mc->hdr.patch_id, *cur_rev, (const u8 *)p_addr, psize))
+		return -1;
+
 	native_wrmsrl(MSR_AMD64_PATCH_LOADER, p_addr);
 
 	if (x86_family(bsp_cpuid_1_eax) == 0x17) {
@@ -575,8 +674,17 @@ void __init load_ucode_amd_bsp(struct ea
 	struct cont_desc desc = { };
 	struct microcode_amd *mc;
 	struct cpio_data cp = { };
+	char buf[4];
 	u32 rev;
 
+	if (cmdline_find_option(boot_command_line, "microcode.amd_sha_check", buf, 4)) {
+		if (!strncmp(buf, "off", 3)) {
+			sha_check = false;
+			pr_warn_once("It is a very very bad idea to disable the blobs SHA check!\n");
+			add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
+		}
+	}
+
 	bsp_cpuid_1_eax = cpuid_1_eax;
 
 	rev = get_patch_level();
@@ -906,8 +1014,7 @@ static int verify_and_add_patch(u8 famil
 }
 
 /* Scan the blob in @data and add microcode patches to the cache. */
-static enum ucode_state __load_microcode_amd(u8 family, const u8 *data,
-					     size_t size)
+static enum ucode_state __load_microcode_amd(u8 family, const u8 *data, size_t size)
 {
 	u8 *fw = (u8 *)data;
 	size_t offset;
--- /dev/null
+++ b/arch/x86/kernel/cpu/microcode/amd_shas.c
@@ -0,0 +1,444 @@
+/* Keep 'em sorted. */
+static const struct patch_digest phashes[] = {
+ { 0x8001227, {
+	0x99,0xc0,0x9b,0x2b,0xcc,0x9f,0x52,0x1b,
+	0x1a,0x5f,0x1d,0x83,0xa1,0x6c,0xc4,0x46,
+	0xe2,0x6c,0xda,0x73,0xfb,0x2d,0x23,0xa8,
+	0x77,0xdc,0x15,0x31,0x33,0x4a,0x46,0x18,
+	}
+ },
+ { 0x8001250, {
+	0xc0,0x0b,0x6b,0x19,0xfd,0x5c,0x39,0x60,
+	0xd5,0xc3,0x57,0x46,0x54,0xe4,0xd1,0xaa,
+	0xa8,0xf7,0x1f,0xa8,0x6a,0x60,0x3e,0xe3,
+	0x27,0x39,0x8e,0x53,0x30,0xf8,0x49,0x19,
+	}
+ },
+ { 0x800126e, {
+	0xf3,0x8b,0x2b,0xb6,0x34,0xe3,0xc8,0x2c,
+	0xef,0xec,0x63,0x6d,0xc8,0x76,0x77,0xb3,
+	0x25,0x5a,0xb7,0x52,0x8c,0x83,0x26,0xe6,
+	0x4c,0xbe,0xbf,0xe9,0x7d,0x22,0x6a,0x43,
+	}
+ },
+ { 0x800126f, {
+	0x2b,0x5a,0xf2,0x9c,0xdd,0xd2,0x7f,0xec,
+	0xec,0x96,0x09,0x57,0xb0,0x96,0x29,0x8b,
+	0x2e,0x26,0x91,0xf0,0x49,0x33,0x42,0x18,
+	0xdd,0x4b,0x65,0x5a,0xd4,0x15,0x3d,0x33,
+	}
+ },
+ { 0x800820d, {
+	0x68,0x98,0x83,0xcd,0x22,0x0d,0xdd,0x59,
+	0x73,0x2c,0x5b,0x37,0x1f,0x84,0x0e,0x67,
+	0x96,0x43,0x83,0x0c,0x46,0x44,0xab,0x7c,
+	0x7b,0x65,0x9e,0x57,0xb5,0x90,0x4b,0x0e,
+	}
+ },
+ { 0x8301025, {
+	0xe4,0x7d,0xdb,0x1e,0x14,0xb4,0x5e,0x36,
+	0x8f,0x3e,0x48,0x88,0x3c,0x6d,0x76,0xa1,
+	0x59,0xc6,0xc0,0x72,0x42,0xdf,0x6c,0x30,
+	0x6f,0x0b,0x28,0x16,0x61,0xfc,0x79,0x77,
+	}
+ },
+ { 0x8301055, {
+	0x81,0x7b,0x99,0x1b,0xae,0x2d,0x4f,0x9a,
+	0xef,0x13,0xce,0xb5,0x10,0xaf,0x6a,0xea,
+	0xe5,0xb0,0x64,0x98,0x10,0x68,0x34,0x3b,
+	0x9d,0x7a,0xd6,0x22,0x77,0x5f,0xb3,0x5b,
+	}
+ },
+ { 0x8301072, {
+	0xcf,0x76,0xa7,0x1a,0x49,0xdf,0x2a,0x5e,
+	0x9e,0x40,0x70,0xe5,0xdd,0x8a,0xa8,0x28,
+	0x20,0xdc,0x91,0xd8,0x2c,0xa6,0xa0,0xb1,
+	0x2d,0x22,0x26,0x94,0x4b,0x40,0x85,0x30,
+	}
+ },
+ { 0x830107a, {
+	0x2a,0x65,0x8c,0x1a,0x5e,0x07,0x21,0x72,
+	0xdf,0x90,0xa6,0x51,0x37,0xd3,0x4b,0x34,
+	0xc4,0xda,0x03,0xe1,0x8a,0x6c,0xfb,0x20,
+	0x04,0xb2,0x81,0x05,0xd4,0x87,0xf4,0x0a,
+	}
+ },
+ { 0x830107b, {
+	0xb3,0x43,0x13,0x63,0x56,0xc1,0x39,0xad,
+	0x10,0xa6,0x2b,0xcc,0x02,0xe6,0x76,0x2a,
+	0x1e,0x39,0x58,0x3e,0x23,0x6e,0xa4,0x04,
+	0x95,0xea,0xf9,0x6d,0xc2,0x8a,0x13,0x19,
+	}
+ },
+ { 0x830107c, {
+	0x21,0x64,0xde,0xfb,0x9f,0x68,0x96,0x47,
+	0x70,0x5c,0xe2,0x8f,0x18,0x52,0x6a,0xac,
+	0xa4,0xd2,0x2e,0xe0,0xde,0x68,0x66,0xc3,
+	0xeb,0x1e,0xd3,0x3f,0xbc,0x51,0x1d,0x38,
+	}
+ },
+ { 0x860010d, {
+	0x86,0xb6,0x15,0x83,0xbc,0x3b,0x9c,0xe0,
+	0xb3,0xef,0x1d,0x99,0x84,0x35,0x15,0xf7,
+	0x7c,0x2a,0xc6,0x42,0xdb,0x73,0x07,0x5c,
+	0x7d,0xc3,0x02,0xb5,0x43,0x06,0x5e,0xf8,
+	}
+ },
+ { 0x8608108, {
+	0x14,0xfe,0x57,0x86,0x49,0xc8,0x68,0xe2,
+	0x11,0xa3,0xcb,0x6e,0xff,0x6e,0xd5,0x38,
+	0xfe,0x89,0x1a,0xe0,0x67,0xbf,0xc4,0xcc,
+	0x1b,0x9f,0x84,0x77,0x2b,0x9f,0xaa,0xbd,
+	}
+ },
+ { 0x8701034, {
+	0xc3,0x14,0x09,0xa8,0x9c,0x3f,0x8d,0x83,
+	0x9b,0x4c,0xa5,0xb7,0x64,0x8b,0x91,0x5d,
+	0x85,0x6a,0x39,0x26,0x1e,0x14,0x41,0xa8,
+	0x75,0xea,0xa6,0xf9,0xc9,0xd1,0xea,0x2b,
+	}
+ },
+ { 0x8a00008, {
+	0xd7,0x2a,0x93,0xdc,0x05,0x2f,0xa5,0x6e,
+	0x0c,0x61,0x2c,0x07,0x9f,0x38,0xe9,0x8e,
+	0xef,0x7d,0x2a,0x05,0x4d,0x56,0xaf,0x72,
+	0xe7,0x56,0x47,0x6e,0x60,0x27,0xd5,0x8c,
+	}
+ },
+ { 0x8a0000a, {
+	0x73,0x31,0x26,0x22,0xd4,0xf9,0xee,0x3c,
+	0x07,0x06,0xe7,0xb9,0xad,0xd8,0x72,0x44,
+	0x33,0x31,0xaa,0x7d,0xc3,0x67,0x0e,0xdb,
+	0x47,0xb5,0xaa,0xbc,0xf5,0xbb,0xd9,0x20,
+	}
+ },
+ { 0xa00104c, {
+	0x3c,0x8a,0xfe,0x04,0x62,0xd8,0x6d,0xbe,
+	0xa7,0x14,0x28,0x64,0x75,0xc0,0xa3,0x76,
+	0xb7,0x92,0x0b,0x97,0x0a,0x8e,0x9c,0x5b,
+	0x1b,0xc8,0x9d,0x3a,0x1e,0x81,0x3d,0x3b,
+	}
+ },
+ { 0xa00104e, {
+	0xc4,0x35,0x82,0x67,0xd2,0x86,0xe5,0xb2,
+	0xfd,0x69,0x12,0x38,0xc8,0x77,0xba,0xe0,
+	0x70,0xf9,0x77,0x89,0x10,0xa6,0x74,0x4e,
+	0x56,0x58,0x13,0xf5,0x84,0x70,0x28,0x0b,
+	}
+ },
+ { 0xa001053, {
+	0x92,0x0e,0xf4,0x69,0x10,0x3b,0xf9,0x9d,
+	0x31,0x1b,0xa6,0x99,0x08,0x7d,0xd7,0x25,
+	0x7e,0x1e,0x89,0xba,0x35,0x8d,0xac,0xcb,
+	0x3a,0xb4,0xdf,0x58,0x12,0xcf,0xc0,0xc3,
+	}
+ },
+ { 0xa001058, {
+	0x33,0x7d,0xa9,0xb5,0x4e,0x62,0x13,0x36,
+	0xef,0x66,0xc9,0xbd,0x0a,0xa6,0x3b,0x19,
+	0xcb,0xf5,0xc2,0xc3,0x55,0x47,0x20,0xec,
+	0x1f,0x7b,0xa1,0x44,0x0e,0x8e,0xa4,0xb2,
+	}
+ },
+ { 0xa001075, {
+	0x39,0x02,0x82,0xd0,0x7c,0x26,0x43,0xe9,
+	0x26,0xa3,0xd9,0x96,0xf7,0x30,0x13,0x0a,
+	0x8a,0x0e,0xac,0xe7,0x1d,0xdc,0xe2,0x0f,
+	0xcb,0x9e,0x8d,0xbc,0xd2,0xa2,0x44,0xe0,
+	}
+ },
+ { 0xa001078, {
+	0x2d,0x67,0xc7,0x35,0xca,0xef,0x2f,0x25,
+	0x4c,0x45,0x93,0x3f,0x36,0x01,0x8c,0xce,
+	0xa8,0x5b,0x07,0xd3,0xc1,0x35,0x3c,0x04,
+	0x20,0xa2,0xfc,0xdc,0xe6,0xce,0x26,0x3e,
+	}
+ },
+ { 0xa001079, {
+	0x43,0xe2,0x05,0x9c,0xfd,0xb7,0x5b,0xeb,
+	0x5b,0xe9,0xeb,0x3b,0x96,0xf4,0xe4,0x93,
+	0x73,0x45,0x3e,0xac,0x8d,0x3b,0xe4,0xdb,
+	0x10,0x31,0xc1,0xe4,0xa2,0xd0,0x5a,0x8a,
+	}
+ },
+ { 0xa00107a, {
+	0x5f,0x92,0xca,0xff,0xc3,0x59,0x22,0x5f,
+	0x02,0xa0,0x91,0x3b,0x4a,0x45,0x10,0xfd,
+	0x19,0xe1,0x8a,0x6d,0x9a,0x92,0xc1,0x3f,
+	0x75,0x78,0xac,0x78,0x03,0x1d,0xdb,0x18,
+	}
+ },
+ { 0xa001143, {
+	0x56,0xca,0xf7,0x43,0x8a,0x4c,0x46,0x80,
+	0xec,0xde,0xe5,0x9c,0x50,0x84,0x9a,0x42,
+	0x27,0xe5,0x51,0x84,0x8f,0x19,0xc0,0x8d,
+	0x0c,0x25,0xb4,0xb0,0x8f,0x10,0xf3,0xf8,
+	}
+ },
+ { 0xa001144, {
+	0x42,0xd5,0x9b,0xa7,0xd6,0x15,0x29,0x41,
+	0x61,0xc4,0x72,0x3f,0xf3,0x06,0x78,0x4b,
+	0x65,0xf3,0x0e,0xfa,0x9c,0x87,0xde,0x25,
+	0xbd,0xb3,0x9a,0xf4,0x75,0x13,0x53,0xdc,
+	}
+ },
+ { 0xa00115d, {
+	0xd4,0xc4,0x49,0x36,0x89,0x0b,0x47,0xdd,
+	0xfb,0x2f,0x88,0x3b,0x5f,0xf2,0x8e,0x75,
+	0xc6,0x6c,0x37,0x5a,0x90,0x25,0x94,0x3e,
+	0x36,0x9c,0xae,0x02,0x38,0x6c,0xf5,0x05,
+	}
+ },
+ { 0xa001173, {
+	0x28,0xbb,0x9b,0xd1,0xa0,0xa0,0x7e,0x3a,
+	0x59,0x20,0xc0,0xa9,0xb2,0x5c,0xc3,0x35,
+	0x53,0x89,0xe1,0x4c,0x93,0x2f,0x1d,0xc3,
+	0xe5,0xf7,0xf3,0xc8,0x9b,0x61,0xaa,0x9e,
+	}
+ },
+ { 0xa0011a8, {
+	0x97,0xc6,0x16,0x65,0x99,0xa4,0x85,0x3b,
+	0xf6,0xce,0xaa,0x49,0x4a,0x3a,0xc5,0xb6,
+	0x78,0x25,0xbc,0x53,0xaf,0x5d,0xcf,0xf4,
+	0x23,0x12,0xbb,0xb1,0xbc,0x8a,0x02,0x2e,
+	}
+ },
+ { 0xa0011ce, {
+	0xcf,0x1c,0x90,0xa3,0x85,0x0a,0xbf,0x71,
+	0x94,0x0e,0x80,0x86,0x85,0x4f,0xd7,0x86,
+	0xae,0x38,0x23,0x28,0x2b,0x35,0x9b,0x4e,
+	0xfe,0xb8,0xcd,0x3d,0x3d,0x39,0xc9,0x6a,
+	}
+ },
+ { 0xa0011d1, {
+	0xdf,0x0e,0xca,0xde,0xf6,0xce,0x5c,0x1e,
+	0x4c,0xec,0xd7,0x71,0x83,0xcc,0xa8,0x09,
+	0xc7,0xc5,0xfe,0xb2,0xf7,0x05,0xd2,0xc5,
+	0x12,0xdd,0xe4,0xf3,0x92,0x1c,0x3d,0xb8,
+	}
+ },
+ { 0xa0011d3, {
+	0x91,0xe6,0x10,0xd7,0x57,0xb0,0x95,0x0b,
+	0x9a,0x24,0xee,0xf7,0xcf,0x56,0xc1,0xa6,
+	0x4a,0x52,0x7d,0x5f,0x9f,0xdf,0xf6,0x00,
+	0x65,0xf7,0xea,0xe8,0x2a,0x88,0xe2,0x26,
+	}
+ },
+ { 0xa0011d5, {
+	0xed,0x69,0x89,0xf4,0xeb,0x64,0xc2,0x13,
+	0xe0,0x51,0x1f,0x03,0x26,0x52,0x7d,0xb7,
+	0x93,0x5d,0x65,0xca,0xb8,0x12,0x1d,0x62,
+	0x0d,0x5b,0x65,0x34,0x69,0xb2,0x62,0x21,
+	}
+ },
+ { 0xa001223, {
+	0xfb,0x32,0x5f,0xc6,0x83,0x4f,0x8c,0xb8,
+	0xa4,0x05,0xf9,0x71,0x53,0x01,0x16,0xc4,
+	0x83,0x75,0x94,0xdd,0xeb,0x7e,0xb7,0x15,
+	0x8e,0x3b,0x50,0x29,0x8a,0x9c,0xcc,0x45,
+	}
+ },
+ { 0xa001224, {
+	0x0e,0x0c,0xdf,0xb4,0x89,0xee,0x35,0x25,
+	0xdd,0x9e,0xdb,0xc0,0x69,0x83,0x0a,0xad,
+	0x26,0xa9,0xaa,0x9d,0xfc,0x3c,0xea,0xf9,
+	0x6c,0xdc,0xd5,0x6d,0x8b,0x6e,0x85,0x4a,
+	}
+ },
+ { 0xa001227, {
+	0xab,0xc6,0x00,0x69,0x4b,0x50,0x87,0xad,
+	0x5f,0x0e,0x8b,0xea,0x57,0x38,0xce,0x1d,
+	0x0f,0x75,0x26,0x02,0xf6,0xd6,0x96,0xe9,
+	0x87,0xb9,0xd6,0x20,0x27,0x7c,0xd2,0xe0,
+	}
+ },
+ { 0xa001229, {
+	0x7f,0x49,0x49,0x48,0x46,0xa5,0x50,0xa6,
+	0x28,0x89,0x98,0xe2,0x9e,0xb4,0x7f,0x75,
+	0x33,0xa7,0x04,0x02,0xe4,0x82,0xbf,0xb4,
+	0xa5,0x3a,0xba,0x24,0x8d,0x31,0x10,0x1d,
+	}
+ },
+ { 0xa00122e, {
+	0x56,0x94,0xa9,0x5d,0x06,0x68,0xfe,0xaf,
+	0xdf,0x7a,0xff,0x2d,0xdf,0x74,0x0f,0x15,
+	0x66,0xfb,0x00,0xb5,0x51,0x97,0x9b,0xfa,
+	0xcb,0x79,0x85,0x46,0x25,0xb4,0xd2,0x10,
+	}
+ },
+ { 0xa001231, {
+	0x0b,0x46,0xa5,0xfc,0x18,0x15,0xa0,0x9e,
+	0xa6,0xdc,0xb7,0xff,0x17,0xf7,0x30,0x64,
+	0xd4,0xda,0x9e,0x1b,0xc3,0xfc,0x02,0x3b,
+	0xe2,0xc6,0x0e,0x41,0x54,0xb5,0x18,0xdd,
+	}
+ },
+ { 0xa001234, {
+	0x88,0x8d,0xed,0xab,0xb5,0xbd,0x4e,0xf7,
+	0x7f,0xd4,0x0e,0x95,0x34,0x91,0xff,0xcc,
+	0xfb,0x2a,0xcd,0xf7,0xd5,0xdb,0x4c,0x9b,
+	0xd6,0x2e,0x73,0x50,0x8f,0x83,0x79,0x1a,
+	}
+ },
+ { 0xa001236, {
+	0x3d,0x30,0x00,0xb9,0x71,0xba,0x87,0x78,
+	0xa8,0x43,0x55,0xc4,0x26,0x59,0xcf,0x9d,
+	0x93,0xce,0x64,0x0e,0x8b,0x72,0x11,0x8b,
+	0xa3,0x8f,0x51,0xe9,0xca,0x98,0xaa,0x25,
+	}
+ },
+ { 0xa001238, {
+	0x72,0xf7,0x4b,0x0c,0x7d,0x58,0x65,0xcc,
+	0x00,0xcc,0x57,0x16,0x68,0x16,0xf8,0x2a,
+	0x1b,0xb3,0x8b,0xe1,0xb6,0x83,0x8c,0x7e,
+	0xc0,0xcd,0x33,0xf2,0x8d,0xf9,0xef,0x59,
+	}
+ },
+ { 0xa00820c, {
+	0xa8,0x0c,0x81,0xc0,0xa6,0x00,0xe7,0xf3,
+	0x5f,0x65,0xd3,0xb9,0x6f,0xea,0x93,0x63,
+	0xf1,0x8c,0x88,0x45,0xd7,0x82,0x80,0xd1,
+	0xe1,0x3b,0x8d,0xb2,0xf8,0x22,0x03,0xe2,
+	}
+ },
+ { 0xa10113e, {
+	0x05,0x3c,0x66,0xd7,0xa9,0x5a,0x33,0x10,
+	0x1b,0xf8,0x9c,0x8f,0xed,0xfc,0xa7,0xa0,
+	0x15,0xe3,0x3f,0x4b,0x1d,0x0d,0x0a,0xd5,
+	0xfa,0x90,0xc4,0xed,0x9d,0x90,0xaf,0x53,
+	}
+ },
+ { 0xa101144, {
+	0xb3,0x0b,0x26,0x9a,0xf8,0x7c,0x02,0x26,
+	0x35,0x84,0x53,0xa4,0xd3,0x2c,0x7c,0x09,
+	0x68,0x7b,0x96,0xb6,0x93,0xef,0xde,0xbc,
+	0xfd,0x4b,0x15,0xd2,0x81,0xd3,0x51,0x47,
+	}
+ },
+ { 0xa101148, {
+	0x20,0xd5,0x6f,0x40,0x4a,0xf6,0x48,0x90,
+	0xc2,0x93,0x9a,0xc2,0xfd,0xac,0xef,0x4f,
+	0xfa,0xc0,0x3d,0x92,0x3c,0x6d,0x01,0x08,
+	0xf1,0x5e,0xb0,0xde,0xb4,0x98,0xae,0xc4,
+	}
+ },
+ { 0xa10123e, {
+	0x03,0xb9,0x2c,0x76,0x48,0x93,0xc9,0x18,
+	0xfb,0x56,0xfd,0xf7,0xe2,0x1d,0xca,0x4d,
+	0x1d,0x13,0x53,0x63,0xfe,0x42,0x6f,0xfc,
+	0x19,0x0f,0xf1,0xfc,0xa7,0xdd,0x89,0x1b,
+	}
+ },
+ { 0xa101244, {
+	0x71,0x56,0xb5,0x9f,0x21,0xbf,0xb3,0x3c,
+	0x8c,0xd7,0x36,0xd0,0x34,0x52,0x1b,0xb1,
+	0x46,0x2f,0x04,0xf0,0x37,0xd8,0x1e,0x72,
+	0x24,0xa2,0x80,0x84,0x83,0x65,0x84,0xc0,
+	}
+ },
+ { 0xa101248, {
+	0xed,0x3b,0x95,0xa6,0x68,0xa7,0x77,0x3e,
+	0xfc,0x17,0x26,0xe2,0x7b,0xd5,0x56,0x22,
+	0x2c,0x1d,0xef,0xeb,0x56,0xdd,0xba,0x6e,
+	0x1b,0x7d,0x64,0x9d,0x4b,0x53,0x13,0x75,
+	}
+ },
+ { 0xa108108, {
+	0xed,0xc2,0xec,0xa1,0x15,0xc6,0x65,0xe9,
+	0xd0,0xef,0x39,0xaa,0x7f,0x55,0x06,0xc6,
+	0xf5,0xd4,0x3f,0x7b,0x14,0xd5,0x60,0x2c,
+	0x28,0x1e,0x9c,0x59,0x69,0x99,0x4d,0x16,
+	}
+ },
+ { 0xa20102d, {
+	0xf9,0x6e,0xf2,0x32,0xd3,0x0f,0x5f,0x11,
+	0x59,0xa1,0xfe,0xcc,0xcd,0x9b,0x42,0x89,
+	0x8b,0x89,0x2f,0xb5,0xbb,0x82,0xef,0x23,
+	0x8c,0xe9,0x19,0x3e,0xcc,0x3f,0x7b,0xb4,
+	}
+ },
+ { 0xa201210, {
+	0xe8,0x6d,0x51,0x6a,0x8e,0x72,0xf3,0xfe,
+	0x6e,0x16,0xbc,0x62,0x59,0x40,0x17,0xe9,
+	0x6d,0x3d,0x0e,0x6b,0xa7,0xac,0xe3,0x68,
+	0xf7,0x55,0xf0,0x13,0xbb,0x22,0xf6,0x41,
+	}
+ },
+ { 0xa404107, {
+	0xbb,0x04,0x4e,0x47,0xdd,0x5e,0x26,0x45,
+	0x1a,0xc9,0x56,0x24,0xa4,0x4c,0x82,0xb0,
+	0x8b,0x0d,0x9f,0xf9,0x3a,0xdf,0xc6,0x81,
+	0x13,0xbc,0xc5,0x25,0xe4,0xc5,0xc3,0x99,
+	}
+ },
+ { 0xa500011, {
+	0x23,0x3d,0x70,0x7d,0x03,0xc3,0xc4,0xf4,
+	0x2b,0x82,0xc6,0x05,0xda,0x80,0x0a,0xf1,
+	0xd7,0x5b,0x65,0x3a,0x7d,0xab,0xdf,0xa2,
+	0x11,0x5e,0x96,0x7e,0x71,0xe9,0xfc,0x74,
+	}
+ },
+ { 0xa601209, {
+	0x66,0x48,0xd4,0x09,0x05,0xcb,0x29,0x32,
+	0x66,0xb7,0x9a,0x76,0xcd,0x11,0xf3,0x30,
+	0x15,0x86,0xcc,0x5d,0x97,0x0f,0xc0,0x46,
+	0xe8,0x73,0xe2,0xd6,0xdb,0xd2,0x77,0x1d,
+	}
+ },
+ { 0xa704107, {
+	0xf3,0xc6,0x58,0x26,0xee,0xac,0x3f,0xd6,
+	0xce,0xa1,0x72,0x47,0x3b,0xba,0x2b,0x93,
+	0x2a,0xad,0x8e,0x6b,0xea,0x9b,0xb7,0xc2,
+	0x64,0x39,0x71,0x8c,0xce,0xe7,0x41,0x39,
+	}
+ },
+ { 0xa705206, {
+	0x8d,0xc0,0x76,0xbd,0x58,0x9f,0x8f,0xa4,
+	0x12,0x9d,0x21,0xfb,0x48,0x21,0xbc,0xe7,
+	0x67,0x6f,0x04,0x18,0xae,0x20,0x87,0x4b,
+	0x03,0x35,0xe9,0xbe,0xfb,0x06,0xdf,0xfc,
+	}
+ },
+ { 0xa708007, {
+	0x6b,0x76,0xcc,0x78,0xc5,0x8a,0xa3,0xe3,
+	0x32,0x2d,0x79,0xe4,0xc3,0x80,0xdb,0xb2,
+	0x07,0xaa,0x3a,0xe0,0x57,0x13,0x72,0x80,
+	0xdf,0x92,0x73,0x84,0x87,0x3c,0x73,0x93,
+	}
+ },
+ { 0xa70c005, {
+	0x88,0x5d,0xfb,0x79,0x64,0xd8,0x46,0x3b,
+	0x4a,0x83,0x8e,0x77,0x7e,0xcf,0xb3,0x0f,
+	0x1f,0x1f,0xf1,0x97,0xeb,0xfe,0x56,0x55,
+	0xee,0x49,0xac,0xe1,0x8b,0x13,0xc5,0x13,
+	}
+ },
+ { 0xaa00116, {
+	0xe8,0x4c,0x2c,0x88,0xa1,0xac,0x24,0x63,
+	0x65,0xe5,0xaa,0x2d,0x16,0xa9,0xc3,0xf5,
+	0xfe,0x1d,0x5e,0x65,0xc7,0xaa,0x92,0x4d,
+	0x91,0xee,0x76,0xbb,0x4c,0x66,0x78,0xc9,
+	}
+ },
+ { 0xaa00212, {
+	0xbd,0x57,0x5d,0x0a,0x0a,0x30,0xc1,0x75,
+	0x95,0x58,0x5e,0x93,0x02,0x28,0x43,0x71,
+	0xed,0x42,0x29,0xc8,0xec,0x34,0x2b,0xb2,
+	0x1a,0x65,0x4b,0xfe,0x07,0x0f,0x34,0xa1,
+	}
+ },
+ { 0xaa00213, {
+	0xed,0x58,0xb7,0x76,0x81,0x7f,0xd9,0x3a,
+	0x1a,0xff,0x8b,0x34,0xb8,0x4a,0x99,0x0f,
+	0x28,0x49,0x6c,0x56,0x2b,0xdc,0xb7,0xed,
+	0x96,0xd5,0x9d,0xc1,0x7a,0xd4,0x51,0x9b,
+	}
+ },
+ { 0xaa00215, {
+	0x55,0xd3,0x28,0xcb,0x87,0xa9,0x32,0xe9,
+	0x4e,0x85,0x4b,0x7c,0x6b,0xd5,0x7c,0xd4,
+	0x1b,0x51,0x71,0x3a,0x0e,0x0b,0xdc,0x9b,
+	0x68,0x2f,0x46,0xee,0xfe,0xc6,0x6d,0xef,
+	}
+ },
+};
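One detail worth noting in amd_shas.c above: the "Keep 'em sorted" comment is load-bearing. verify_sha256_digest() locates an entry via bsearch(), which only works while phashes[] stays sorted ascending by patch_id. A sketch of the lookup, with an illustrative wrapper name:

	#include <linux/bsearch.h>

	/* Returns NULL when no digest is carried for @patch_id. */
	static const struct patch_digest *lookup_digest(u32 patch_id)
	{
		return bsearch(&patch_id, phashes, ARRAY_SIZE(phashes),
			       sizeof(struct patch_digest), cmp_id);
	}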
Hi!
CIP testing did not find any problems here:
https://gitlab.com/cip-project/cip-testing/linux-stable-rc-ci/-/tree/linux-6...
6.12 and 6.13 pass our testing, too:
https://gitlab.com/cip-project/cip-testing/linux-stable-rc-ci/-/tree/linux-6... https://gitlab.com/cip-project/cip-testing/linux-stable-rc-ci/-/tree/linux-6...
Tested-by: Pavel Machek (CIP) pavel@denx.de
Best regards, Pavel
Hello,
On Wed, 5 Mar 2025 18:46:59 +0100 Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This rc kernel passes DAMON functionality test[1] on my test machine. Attaching the test results summary below. Please note that I retrieved the kernel from linux-stable-rc tree[2].
Tested-by: SeongJae Park sj@kernel.org
[1] https://github.com/damonitor/damon-tests/tree/next/corr [2] 9f243f9dd268 ("Linux 6.6.81-rc1")
Thanks, SJ
[...]
---
ok 1 selftests: damon: debugfs_attrs.sh
ok 2 selftests: damon: debugfs_schemes.sh
ok 3 selftests: damon: debugfs_target_ids.sh
ok 4 selftests: damon: debugfs_empty_targets.sh
ok 5 selftests: damon: debugfs_huge_count_read_write.sh
ok 6 selftests: damon: debugfs_duplicate_context_creation.sh
ok 7 selftests: damon: debugfs_rm_non_contexts.sh
ok 8 selftests: damon: sysfs.sh
ok 9 selftests: damon: sysfs_update_removed_scheme_dir.sh
ok 10 selftests: damon: reclaim.sh
ok 11 selftests: damon: lru_sort.sh
ok 1 selftests: damon-tests: kunit.sh
ok 2 selftests: damon-tests: huge_count_read_write.sh
ok 3 selftests: damon-tests: buffer_overflow.sh
ok 4 selftests: damon-tests: rm_contexts.sh
ok 5 selftests: damon-tests: record_null_deref.sh
ok 6 selftests: damon-tests: dbgfs_target_ids_read_before_terminate_race.sh
ok 7 selftests: damon-tests: dbgfs_target_ids_pid_leak.sh
ok 8 selftests: damon-tests: damo_tests.sh
ok 9 selftests: damon-tests: masim-record.sh
ok 10 selftests: damon-tests: build_i386.sh
ok 11 selftests: damon-tests: build_arm64.sh # SKIP
ok 12 selftests: damon-tests: build_m68k.sh # SKIP
ok 13 selftests: damon-tests: build_i386_idle_flag.sh
ok 14 selftests: damon-tests: build_i386_highpte.sh
ok 15 selftests: damon-tests: build_nomemcg.sh
PASS
On 05.03.2025 at 18:46, Greg Kroah-Hartman wrote:
Builds, boots and works on my 2-socket Ivy Bridge Xeon E5-2697 v2 server. No dmesg oddities or regressions found.
Tested-by: Peter Schneider pschneider1968@googlemail.com
Best regards, Peter Schneider
On 3/5/25 09:46, Greg Kroah-Hartman wrote:
Built and booted successfully on RISC-V RV64 (HiFive Unmatched).
Tested-by: Ron Economos re@w6rz.net
On Wed, Mar 05, 2025 at 06:46:59PM +0100, Greg Kroah-Hartman wrote:
Tested-by: Mark Brown broonie@kernel.org
On Wed, 5 Mar 2025 at 23:28, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
Regressions on x86_64 and i386: the defconfig builds failed with clang-20 and gcc-13 on stable-rc v6.6.78-437-g9f243f9dd268.
First seen on v6.6.78-437-g9f243f9dd268
Good: v6.6.78
Bad:  v6.6.78-437-g9f243f9dd268

* x86_64 and i386, build
  - clang-20-defconfig
  - clang-nightly-defconfig
  - gcc-13-defconfig
  - gcc-8-defconfig

Regression Analysis:
- New regression? Yes
- Reproducibility? Yes

Build regression: x86_64/i386 microcode amd.c 'equiv_id' is used uninitialized
Reported-by: Linux Kernel Functional Testing lkft@linaro.org
## Build log
arch/x86/kernel/cpu/microcode/amd.c:820:6: error: variable 'equiv_id' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
  820 |         if (x86_family(bsp_cpuid_1_eax) < 0x17) {
      |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
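The diagnostic fires because equiv_id is assigned only inside the family < 0x17 branch yet read unconditionally afterwards. A hypothetical reduction of the pattern (names follow the find_patch() hunk earlier in this series; cache_find_patch() is assumed from the upstream rework, and this illustrates the warning rather than the fix that eventually landed):

	static struct ucode_patch *find_patch(unsigned int cpu)
	{
		struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
		u16 equiv_id = 0;	/* deterministic initial value silences the warning */

		if (x86_family(bsp_cpuid_1_eax) < 0x17) {
			equiv_id = find_equiv_id(&equiv_table, uci->cpu_sig.sig);
			if (!equiv_id)
				return NULL;
		}

		/* without the initializer, families >= 0x17 reach this read
		 * with equiv_id indeterminate */
		return cache_find_patch(uci, equiv_id);
	}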
## Source
* Kernel version: 6.6.81-rc1
* Git tree: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
* Git sha: 9f243f9dd2684b4b27f8be0cbed639052bc9b22e
* Git describe: v6.6.78-437-g9f243f9dd268
* Project details: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.6.y/build/v6.6.78...

## Build
* Build log: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.6.y/build/v6.6.78...
* Build history: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.6.y/build/v6.6.78...
* Build details: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.6.y/build/v6.6.78...
* Build link: https://storage.tuxsuite.com/public/linaro/lkft/builds/2tuO7pCGJaGnX0ygcbTOa...
* Kernel config: https://storage.tuxsuite.com/public/linaro/lkft/builds/2tuO7pCGJaGnX0ygcbTOa...

## Steps to reproduce
- tuxmake --runtime podman --target-arch x86_64 --toolchain clang-20 --kconfig defconfig LLVM=1 LLVM_IAS=1
--
Linaro LKFT
https://lkft.linaro.org
On 3/5/25 10:46, Greg Kroah-Hartman wrote:
Compiled and booted on my test system. No dmesg regressions.
Tested-by: Shuah Khan skhan@linuxfoundation.org
thanks, -- Shuah
The kernel, bpf tool, perf tool, and kselftest build fine for v6.6.81-rc1 on x86 and arm64 Azure VMs.
Kernel binary size for x86 build:
    text       data      bss       dec      hex  filename
27318071   16713718  4644864  48676653  2e6bf2d  vmlinux

Kernel binary size for arm64 build:
    text       data      bss       dec      hex  filename
34688815   13844910   970368  49504093  2f35f5d  vmlinux
Tested-by: Hardik Garg hargar@linux.microsoft.com
Thanks, Hardik