This is the start of the stable review cycle for the 5.10.164 release. There are 64 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 18 Jan 2023 15:47:28 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.10.164-rc... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.10.y and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 5.10.164-rc1
Ferry Toth ftoth@exalondelft.nl Revert "usb: ulpi: defer ulpi_register on ulpi_read_id timeout"
Jens Axboe axboe@kernel.dk io_uring/io-wq: only free worker if it was allocated for creation
Jens Axboe axboe@kernel.dk io_uring/io-wq: free worker if task_work creation is canceled
Rob Clark robdclark@chromium.org drm/virtio: Fix GEM handle creation UAF
Johan Hovold johan+linaro@kernel.org efi: fix NULL-deref in init error path
Mark Rutland mark.rutland@arm.com arm64: cmpxchg_double*: hazard against entire exchange variable
Mark Rutland mark.rutland@arm.com arm64: atomics: remove LL/SC trampolines
Mark Rutland mark.rutland@arm.com arm64: atomics: format whitespace consistently
Peter Newman peternewman@google.com x86/resctrl: Fix task CLOSID/RMID update race
Reinette Chatre reinette.chatre@intel.com x86/resctrl: Use task_curr() instead of task_struct->on_cpu to prevent unnecessary IPI
Paolo Bonzini pbonzini@redhat.com KVM: x86: Do not return host topology information from KVM_GET_SUPPORTED_CPUID
Paolo Bonzini pbonzini@redhat.com Documentation: KVM: add API issues section
Christophe JAILLET christophe.jaillet@wanadoo.fr iommu/mediatek-v1: Fix an error handling path in mtk_iommu_v1_probe()
Yong Wu yong.wu@mediatek.com iommu/mediatek-v1: Add error handle for mtk_iommu_probe
Aaron Thompson dev@aaront.org mm: Always release pages to the buddy allocator in memblock_free_late().
Gavin Li gavinl@nvidia.com net/mlx5e: Don't support encap rules with gbp option
Rahul Rameshbabu rrameshbabu@nvidia.com net/mlx5: Fix ptp max frequency adjustment range
Ido Schimmel idosch@nvidia.com net/sched: act_mpls: Fix warning during failed attribute validation
Minsuk Kang linuxlovemin@yonsei.ac.kr nfc: pn533: Wait for out_urb's completion in pn533_usb_send_frame()
Roger Pau Monne roger.pau@citrix.com hvc/xen: lock console list traversal
Angela Czubak aczubak@marvell.com octeontx2-af: Fix LMAC config in cgx_lmac_rx_tx_enable
Subbaraya Sundeep sbhatta@marvell.com octeontx2-af: Map NIX block from CGX connection
Subbaraya Sundeep sbhatta@marvell.com octeontx2-af: Update get/set resource count functions
Tung Nguyen tung.q.nguyen@dektech.com.au tipc: fix unexpected link reset due to discovery messages
Emanuele Ghidoli emanuele.ghidoli@toradex.com ASoC: wm8904: fix wrong outputs volume after power reactivation
Ricardo Ribalda ribalda@chromium.org regulator: da9211: Use irq handler when ready
Eliav Farber farbere@amazon.com EDAC/device: Fix period calculation in edac_device_reset_delay_period()
Peter Zijlstra peterz@infradead.org x86/boot: Avoid using Intel mnemonics in AT&T syntax asm
Kajol Jain kjain@linux.ibm.com powerpc/imc-pmu: Fix use of mutex in IRQs disabled section
Gavrilov Ilia Ilia.Gavrilov@infotecs.ru netfilter: ipset: Fix overflow before widen in the bitmap_ip_create() function.
Nicolas Dichtel nicolas.dichtel@6wind.com xfrm: fix rcu lock in xfrm_notify_userpolicy()
Ye Bin yebin10@huawei.com ext4: fix uninititialized value in 'ext4_evict_inode'
Ferry Toth ftoth@exalondelft.nl usb: ulpi: defer ulpi_register on ulpi_read_id timeout
Mathias Nyman mathias.nyman@linux.intel.com xhci: Prevent infinite loop in transaction errors recovery for streams
Mathias Nyman mathias.nyman@linux.intel.com xhci: move and rename xhci_cleanup_halted_endpoint()
Mathias Nyman mathias.nyman@linux.intel.com xhci: store TD status in the td struct instead of passing it along
Mathias Nyman mathias.nyman@linux.intel.com xhci: move xhci_td_cleanup so it can be called by more functions
Mathias Nyman mathias.nyman@linux.intel.com xhci: Add xhci_reset_halted_ep() helper function
Mathias Nyman mathias.nyman@linux.intel.com xhci: adjust parameters passed to cleanup_halted_endpoint()
Mathias Nyman mathias.nyman@linux.intel.com xhci: get isochronous ring directly from endpoint structure
Mathias Nyman mathias.nyman@linux.intel.com xhci: Avoid parsing transfer events several times
Li Jun jun.li@nxp.com clk: imx: imx8mp: add shared clk gate for usb suspend clk
Li Jun jun.li@nxp.com dt-bindings: clocks: imx8mp: Add ID for usb suspend clock
Lucas Stach l.stach@pengutronix.de clk: imx8mp: add clkout1/2 support
Marek Vasut marex@denx.de clk: imx8mp: Add DISP2 pixel clock
Kim Phillips kim.phillips@amd.com iommu/amd: Fix ill-formed ivrs_ioapic, ivrs_hpet and ivrs_acpihid options
Suravee Suthikulpanit suravee.suthikulpanit@amd.com iommu/amd: Add PCI segment support for ivrs_[ioapic/hpet/acpihid] commands
Qiang Yu quic_qianyu@quicinc.com bus: mhi: host: Fix race between channel preparation and M0 event
Herbert Xu herbert@gondor.apana.org.au ipv6: raw: Deduct extension header length in rawv6_push_pending_frames
Yang Yingliang yangyingliang@huawei.com ixgbe: fix pci device refcount leak
Hans de Goede hdegoede@redhat.com platform/x86: sony-laptop: Don't turn off 0x153 keyboard backlight during probe
Kuogee Hsieh quic_khsieh@quicinc.com drm/msm/dp: do not complete dp_aux_cmd_fifo_tx() if irq is not for aux transfer
Konrad Dybcio konrad.dybcio@linaro.org drm/msm/adreno: Make adreno quirks not overwrite each other
Volker Lendecke vl@samba.org cifs: Fix uninitialized memory read for smb311 posix symlink create
Heiko Carstens hca@linux.ibm.com s390/percpu: add READ_ONCE() to arch_this_cpu_to_op_simple()
Heiko Carstens hca@linux.ibm.com s390/cpum_sf: add READ_ONCE() semantics to compare and swap loops
Brian Norris computersforpeace@gmail.com ASoC: qcom: lpass-cpu: Fix fallback SD line index handling
Alexander Egorenkov egorenar@linux.ibm.com s390/kexec: fix ipl report address for kdump
Adrian Hunter adrian.hunter@intel.com perf auxtrace: Fix address filter duplicate symbol selection
Jonathan Corbet corbet@lwn.net docs: Fix the docs build with Sphinx 6.0
Ard Biesheuvel ardb@kernel.org efi: tpm: Avoid READ_ONCE() for accessing the event log
Marc Zyngier maz@kernel.org KVM: arm64: Fix S1PTW handling on RO memslots
Luka Guzenko l.guzenko@web.de ALSA: hda/realtek: Enable mute/micmute LEDs on HP Spectre x360 13-aw0xxx
Pablo Neira Ayuso pablo@netfilter.org netfilter: nft_payload: incorrect arithmetics when fetching VLAN header bits
-------------
Diffstat:
Documentation/admin-guide/kernel-parameters.txt | 51 +++-
Documentation/sphinx/load_config.py | 6 +-
Documentation/virt/kvm/api.rst | 60 +++++
Makefile | 4 +-
arch/arm64/include/asm/atomic_ll_sc.h | 114 ++++----
arch/arm64/include/asm/atomic_lse.h | 16 +-
arch/arm64/include/asm/kvm_emulate.h | 22 +-
arch/powerpc/include/asm/imc-pmu.h | 2 +-
arch/powerpc/perf/imc-pmu.c | 136 +++++-----
arch/s390/include/asm/cpu_mf.h | 31 ++-
arch/s390/include/asm/percpu.h | 2 +-
arch/s390/kernel/machine_kexec_file.c | 5 +-
arch/s390/kernel/perf_cpum_sf.c | 101 ++++---
arch/x86/boot/bioscall.S | 4 +-
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 26 +-
arch/x86/kvm/cpuid.c | 32 +--
drivers/bus/mhi/core/pm.c | 3 +-
drivers/clk/imx/clk-imx8mp.c | 23 +-
drivers/edac/edac_device.c | 17 +-
drivers/edac/edac_module.h | 2 +-
drivers/firmware/efi/efi.c | 9 +-
drivers/gpu/drm/msm/adreno/adreno_gpu.h | 10 +-
drivers/gpu/drm/msm/dp/dp_aux.c | 4 +
drivers/gpu/drm/virtio/virtgpu_ioctl.c | 10 +-
drivers/iommu/amd/init.c | 89 ++++--
drivers/iommu/mtk_iommu_v1.c | 26 +-
drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c | 14 +-
drivers/net/ethernet/marvell/octeontx2/af/cgx.c | 17 +-
drivers/net/ethernet/marvell/octeontx2/af/cgx.h | 6 +-
drivers/net/ethernet/marvell/octeontx2/af/rvu.c | 134 ++++++++--
drivers/net/ethernet/marvell/octeontx2/af/rvu.h | 4 +
.../net/ethernet/marvell/octeontx2/af/rvu_cgx.c | 15 ++
.../net/ethernet/marvell/octeontx2/af/rvu_nix.c | 21 +-
.../ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c | 2 +
.../net/ethernet/mellanox/mlx5/core/lib/clock.c | 2 +-
drivers/nfc/pn533/usb.c | 44 ++-
drivers/platform/x86/sony-laptop.c | 21 +-
drivers/regulator/da9211-regulator.c | 11 +-
drivers/tty/hvc/hvc_xen.c | 46 ++--
drivers/usb/host/xhci-mem.c | 4 +
drivers/usb/host/xhci-ring.c | 297 +++++++++++----------
drivers/usb/host/xhci.h | 6 +-
fs/cifs/link.c | 1 +
fs/ext4/super.c | 1 +
include/dt-bindings/clock/imx8mp-clock.h | 10 +-
include/linux/tpm_eventlog.h | 4 +-
io_uring/io-wq.c | 6 +
mm/memblock.c | 8 +-
net/ipv6/raw.c | 4 +
net/netfilter/ipset/ip_set_bitmap_ip.c | 4 +-
net/netfilter/nft_payload.c | 2 +-
net/sched/act_mpls.c | 8 +-
net/tipc/node.c | 12 +-
net/xfrm/xfrm_user.c | 7 +-
sound/pci/hda/patch_realtek.c | 23 ++
sound/soc/codecs/wm8904.c | 7 +
sound/soc/qcom/lpass-cpu.c | 5 +-
tools/perf/util/auxtrace.c | 2 +-
58 files changed, 1015 insertions(+), 538 deletions(-)
From: Pablo Neira Ayuso pablo@netfilter.org
commit 696e1a48b1a1b01edad542a1ef293665864a4dd0 upstream.
If the offset + length goes over the ethernet + vlan header, then the length is adjusted to copy the bytes that are within the boundaries of the vlan_ethhdr scratchpad area. The remaining bytes beyond ethernet + vlan header are copied directly from the skbuff data area.
Fix incorrect arithmetic operator: subtract, not add, the size of the vlan header in case of double-tagged packets to adjust the length accordingly to address CVE-2023-0179.
Reported-by: Davide Ornaghi d.ornaghi97@gmail.com Fixes: f6ae9f120dad ("netfilter: nft_payload: add C-VLAN support") Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/netfilter/nft_payload.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/net/netfilter/nft_payload.c +++ b/net/netfilter/nft_payload.c @@ -62,7 +62,7 @@ nft_payload_copy_vlan(u32 *d, const stru return false;
if (offset + len > VLAN_ETH_HLEN + vlan_hlen) - ethlen -= offset + len - VLAN_ETH_HLEN + vlan_hlen; + ethlen -= offset + len - VLAN_ETH_HLEN - vlan_hlen;
memcpy(dst_u8, vlanh + offset - vlan_hlen, ethlen);
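For illustration, a minimal standalone sketch of the arithmetic, assuming VLAN_ETH_HLEN is 18 and a double-tagged frame (vlan_hlen = 4): with the old operator the u8 copy length can wrap around and memcpy() would be asked to copy far past the vlan_ethhdr scratchpad, while the fixed operator clamps it to the bytes that actually fit.

  #include <stdio.h>
  #include <stdint.h>

  #define VLAN_ETH_HLEN	18	/* ethernet header + one VLAN tag */

  /* Clamp the copy length the way nft_payload_copy_vlan() does, with either
   * the old (buggy) or the fixed sign on vlan_hlen. Returns the u8 length
   * that would be handed to memcpy(). */
  static uint8_t clamp_len(uint8_t offset, uint8_t len, uint8_t vlan_hlen, int fixed)
  {
  	uint8_t ethlen = len;

  	if (offset + len > VLAN_ETH_HLEN + vlan_hlen) {
  		if (fixed)
  			ethlen -= offset + len - VLAN_ETH_HLEN - vlan_hlen;
  		else
  			ethlen -= offset + len - VLAN_ETH_HLEN + vlan_hlen;
  	}
  	return ethlen;
  }

  int main(void)
  {
  	/* Double-tagged frame (vlan_hlen = 4), reading 8 bytes at offset 16. */
  	uint8_t offset = 16, len = 8, vlan_hlen = 4;

  	printf("buggy ethlen: %u\n", (unsigned)clamp_len(offset, len, vlan_hlen, 0)); /* wraps to 254 */
  	printf("fixed ethlen: %u\n", (unsigned)clamp_len(offset, len, vlan_hlen, 1)); /* clamped to 6  */
  	return 0;
  }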
From: Luka Guzenko l.guzenko@web.de
commit ca88eeb308a221c2dcd4a64031d2e5fcd3db9eaa upstream.
The HP Spectre x360 13-aw0xxx devices use the ALC285 codec with GPIO 0x04 controlling the micmute LED and COEF 0x0b index 8 controlling the mute LED. A quirk was added to make these work as well as a fixup.
Signed-off-by: Luka Guzenko l.guzenko@web.de Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230110202514.2792-1-l.guzenko@web.de Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/pci/hda/patch_realtek.c | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+)
--- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -4581,6 +4581,16 @@ static void alc285_fixup_hp_coef_micmute } }
+static void alc285_fixup_hp_gpio_micmute_led(struct hda_codec *codec, + const struct hda_fixup *fix, int action) +{ + struct alc_spec *spec = codec->spec; + + if (action == HDA_FIXUP_ACT_PRE_PROBE) + spec->micmute_led_polarity = 1; + alc_fixup_hp_gpio_led(codec, action, 0, 0x04); +} + static void alc236_fixup_hp_coef_micmute_led(struct hda_codec *codec, const struct hda_fixup *fix, int action) { @@ -4602,6 +4612,13 @@ static void alc285_fixup_hp_mute_led(str alc285_fixup_hp_coef_micmute_led(codec, fix, action); }
+static void alc285_fixup_hp_spectre_x360_mute_led(struct hda_codec *codec, + const struct hda_fixup *fix, int action) +{ + alc285_fixup_hp_mute_led_coefbit(codec, fix, action); + alc285_fixup_hp_gpio_micmute_led(codec, fix, action); +} + static void alc236_fixup_hp_mute_led(struct hda_codec *codec, const struct hda_fixup *fix, int action) { @@ -6856,6 +6873,7 @@ enum { ALC285_FIXUP_ASUS_G533Z_PINS, ALC285_FIXUP_HP_GPIO_LED, ALC285_FIXUP_HP_MUTE_LED, + ALC285_FIXUP_HP_SPECTRE_X360_MUTE_LED, ALC236_FIXUP_HP_GPIO_LED, ALC236_FIXUP_HP_MUTE_LED, ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF, @@ -8224,6 +8242,10 @@ static const struct hda_fixup alc269_fix .type = HDA_FIXUP_FUNC, .v.func = alc285_fixup_hp_mute_led, }, + [ALC285_FIXUP_HP_SPECTRE_X360_MUTE_LED] = { + .type = HDA_FIXUP_FUNC, + .v.func = alc285_fixup_hp_spectre_x360_mute_led, + }, [ALC236_FIXUP_HP_GPIO_LED] = { .type = HDA_FIXUP_FUNC, .v.func = alc236_fixup_hp_gpio_led, @@ -8936,6 +8958,7 @@ static const struct snd_pci_quirk alc269 SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO), SND_PCI_QUIRK(0x103c, 0x86e7, "HP Spectre x360 15-eb0xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1), SND_PCI_QUIRK(0x103c, 0x86e8, "HP Spectre x360 15-eb0xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1), + SND_PCI_QUIRK(0x103c, 0x86f9, "HP Spectre x360 13-aw0xxx", ALC285_FIXUP_HP_SPECTRE_X360_MUTE_LED), SND_PCI_QUIRK(0x103c, 0x8716, "HP Elite Dragonfly G2 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT), SND_PCI_QUIRK(0x103c, 0x8720, "HP EliteBook x360 1040 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT), SND_PCI_QUIRK(0x103c, 0x8724, "HP EliteBook 850 G7", ALC285_FIXUP_HP_GPIO_LED),
From: Marc Zyngier maz@kernel.org
commit 406504c7b0405d74d74c15a667cd4c4620c3e7a9 upstream.
A recent development on the EFI front has resulted in guests having their page tables baked in the firmware binary, and mapped into the IPA space as part of a read-only memslot. Not only is this legitimate, but it also results in added security, so thumbs up.
It is possible to take an S1PTW translation fault if the S1 PTs are unmapped at stage-2. However, KVM unconditionally treats S1PTW as a write to correctly handle hardware AF/DB updates to the S1 PTs. Furthermore, KVM injects an exception into the guest for S1PTW writes. In the aforementioned case this results in the guest taking an abort it won't recover from, as the S1 PTs mapping the vectors suffer from the same problem.
So clearly our handling is... wrong.
Instead, switch to a two-pronged approach:
- On S1PTW translation fault, handle the fault as a read
- On S1PTW permission fault, handle the fault as a write
This is of no consequence to SW that *writes* to its PTs (the write will trigger a non-S1PTW fault), and SW that uses RO PTs will not use HW-assisted AF/DB anyway, as that'd be wrong.
Only in the case described in c4ad98e4b72c ("KVM: arm64: Assume write fault on S1PTW permission fault on instruction fetch") do we end-up with two back-to-back faults (page being evicted and faulted back). I don't think this is a case worth optimising for.
Fixes: c4ad98e4b72c ("KVM: arm64: Assume write fault on S1PTW permission fault on instruction fetch") Reviewed-by: Oliver Upton oliver.upton@linux.dev Reviewed-by: Ard Biesheuvel ardb@kernel.org Regression-tested-by: Ard Biesheuvel ardb@kernel.org Signed-off-by: Marc Zyngier maz@kernel.org Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm64/include/asm/kvm_emulate.h | 22 ++++++++++++++++++++-- 1 file changed, 20 insertions(+), 2 deletions(-)
--- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -382,8 +382,26 @@ static __always_inline int kvm_vcpu_sys_
static inline bool kvm_is_write_fault(struct kvm_vcpu *vcpu) { - if (kvm_vcpu_abt_iss1tw(vcpu)) - return true; + if (kvm_vcpu_abt_iss1tw(vcpu)) { + /* + * Only a permission fault on a S1PTW should be + * considered as a write. Otherwise, page tables baked + * in a read-only memslot will result in an exception + * being delivered in the guest. + * + * The drawback is that we end-up faulting twice if the + * guest is using any of HW AF/DB: a translation fault + * to map the page containing the PT (read only at + * first), then a permission fault to allow the flags + * to be set. + */ + switch (kvm_vcpu_trap_get_fault_type(vcpu)) { + case ESR_ELx_FSC_PERM: + return true; + default: + return false; + } + }
if (kvm_vcpu_trap_is_iabt(vcpu)) return false;
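As a rough standalone sketch (the enum values below are stand-ins, not the kernel's ESR_ELx encoding), the new policy boils down to treating an abort taken on a stage-1 page-table walk as a write only when it is a permission fault:

  #include <stdio.h>
  #include <stdbool.h>

  /* Stand-in fault classes for the two S1PTW cases discussed above. */
  enum fsc { FSC_TRANSLATION, FSC_PERMISSION };

  /* Translation fault on the walk -> handle as a read (map the PT page);
   * permission fault on the walk  -> handle as a write (HW AF/DB update). */
  static bool s1ptw_is_write(enum fsc fault)
  {
  	return fault == FSC_PERMISSION;
  }

  int main(void)
  {
  	printf("translation fault -> %s\n", s1ptw_is_write(FSC_TRANSLATION) ? "write" : "read");
  	printf("permission fault  -> %s\n", s1ptw_is_write(FSC_PERMISSION) ? "write" : "read");
  	return 0;
  }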
From: Ard Biesheuvel ardb@kernel.org
commit d3f450533bbcb6dd4d7d59cadc9b61b7321e4ac1 upstream.
Nathan reports that recent kernels built with LTO will crash when doing EFI boot using Fedora's GRUB and SHIM. The culprit turns out to be a misaligned load from the TPM event log, which is annotated with READ_ONCE(), and under LTO, this gets translated into a LDAR instruction which does not tolerate misaligned accesses.
Interestingly, this does not happen when booting the same kernel straight from the UEFI shell, and so the fact that the event log may appear misaligned in memory may be caused by a bug in GRUB or SHIM.
However, using READ_ONCE() to access firmware tables is slightly unusual in any case, and here, we only need to ensure that 'event' is not dereferenced again after it gets unmapped, but this is already taken care of by the implicit barrier() semantics of the early_memunmap() call.
Cc: stable@vger.kernel.org Cc: Peter Jones pjones@redhat.com Cc: Jarkko Sakkinen jarkko@kernel.org Cc: Matthew Garrett mjg59@srcf.ucam.org Reported-by: Nathan Chancellor nathan@kernel.org Tested-by: Nathan Chancellor nathan@kernel.org Link: https://github.com/ClangBuiltLinux/linux/issues/1782 Signed-off-by: Ard Biesheuvel ardb@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/linux/tpm_eventlog.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/include/linux/tpm_eventlog.h +++ b/include/linux/tpm_eventlog.h @@ -198,8 +198,8 @@ static __always_inline int __calc_tpm2_e * The loop below will unmap these fields if the log is larger than * one page, so save them here for reference: */ - count = READ_ONCE(event->count); - event_type = READ_ONCE(event->event_type); + count = event->count; + event_type = event->event_type;
/* Verify that it's the log header */ if (event_header->pcr_idx != 0 ||
From: Jonathan Corbet corbet@lwn.net
commit 0283189e8f3d0917e2ac399688df85211f48447b upstream.
Sphinx 6.0 removed the execfile_() function, which we use as part of the configuration process. They *did* warn us... Just open-code the functionality as is done in Sphinx itself.
Tested (using SPHINX_CONF, since this code is only executed with an alternative config file) on various Sphinx versions from 2.5 through 6.0.
Reported-by: Martin Liška mliska@suse.cz Cc: stable@vger.kernel.org Signed-off-by: Jonathan Corbet corbet@lwn.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- Documentation/sphinx/load_config.py | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/Documentation/sphinx/load_config.py +++ b/Documentation/sphinx/load_config.py @@ -3,7 +3,7 @@
import os import sys -from sphinx.util.pycompat import execfile_ +from sphinx.util.osutil import fs_encoding
# ------------------------------------------------------------------------------ def loadConfig(namespace): @@ -48,7 +48,9 @@ def loadConfig(namespace): sys.stdout.write("load additional sphinx-config: %s\n" % config_file) config = namespace.copy() config['__file__'] = config_file - execfile_(config_file, config) + with open(config_file, 'rb') as f: + code = compile(f.read(), fs_encoding, 'exec') + exec(code, config) del config['__file__'] namespace.update(config) else:
From: Adrian Hunter adrian.hunter@intel.com
commit cf129830ee820f7fc90b98df193cd49d49344d09 upstream.
When a match has been made to the nth duplicate symbol, return success not error.
Example:
Before:
$ cat file.c
cat: file.c: No such file or directory
$ cat file1.c
#include <stdio.h>

static void func(void)
{
	printf("First func\n");
}

void other(void);

int main()
{
	func();
	other();
	return 0;
}
$ cat file2.c
#include <stdio.h>

static void func(void)
{
	printf("Second func\n");
}

void other(void)
{
	func();
}

$ gcc -Wall -Wextra -o test file1.c file2.c
$ perf record -e intel_pt//u --filter 'filter func @ ./test' -- ./test
Multiple symbols with name 'func'
#1 0x1149 l func which is near main
#2 0x1179 l func which is near other
Disambiguate symbol name by inserting #n after the name e.g. func #2
Or select a global symbol by inserting #0 or #g or #G
Failed to parse address filter: 'filter func @ ./test'
Filter format is: filter|start|stop|tracestop <start symbol or address> [/ <end symbol or size>] [@<file name>]
Where multiple filters are separated by space or comma.
$ perf record -e intel_pt//u --filter 'filter func #2 @ ./test' -- ./test
Failed to parse address filter: 'filter func #2 @ ./test'
Filter format is: filter|start|stop|tracestop <start symbol or address> [/ <end symbol or size>] [@<file name>]
Where multiple filters are separated by space or comma.
After:
$ perf record -e intel_pt//u --filter 'filter func #2 @ ./test' -- ./test
First func
Second func
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.016 MB perf.data ]
$ perf script --itrace=b -Ftime,flags,ip,sym,addr --ns
1231062.526977619: tr strt 0 [unknown] => 558495708179 func
1231062.526977619: tr end call 558495708188 func => 558495708050 _init
1231062.526979286: tr strt 0 [unknown] => 55849570818d func
1231062.526979286: tr end return 55849570818f func => 55849570819d other
Fixes: 1b36c03e356936d6 ("perf record: Add support for using symbols in address filters") Reported-by: Dmitrii Dolgov 9erthalion6@gmail.com Signed-off-by: Adrian Hunter adrian.hunter@intel.com Tested-by: Dmitry Dolgov 9erthalion6@gmail.com Cc: Adrian Hunter adrian.hunter@intel.com Cc: Ian Rogers irogers@google.com Cc: Jiri Olsa jolsa@kernel.org Cc: Namhyung Kim namhyung@kernel.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230110185659.15979-1-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- tools/perf/util/auxtrace.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/tools/perf/util/auxtrace.c +++ b/tools/perf/util/auxtrace.c @@ -2449,7 +2449,7 @@ static int find_dso_sym(struct dso *dso, *size = sym->start - *start; if (idx > 0) { if (*size) - return 1; + return 0; } else if (dso_sym_match(sym, sym_name, &cnt, idx)) { print_duplicate_syms(dso, sym_name); return -EINVAL;
From: Alexander Egorenkov egorenar@linux.ibm.com
commit c2337a40e04dde1692b5b0a46ecc59f89aaba8a1 upstream.
This commit addresses the following erroneous situation with file-based kdump executed on a system with a valid IPL report.
On s390, a kdump kernel, its initrd and, if present, an IPL report are loaded into a special memory region reserved at boot - crashkernel. When a system crashes and kdump was activated before, the purgatory code is entered first and swaps the crashkernel and [0 - crashkernel size] memory regions; only after that is the kdump kernel entered. For this reason, the pointer to the IPL report in lowcore must point to the IPL report after the swap, not to the address the IPL report had in the crashkernel memory region before the swap. Failing to do so makes the kdump decompressor try to read memory from the crashkernel memory region, which by then already contains the production kernel's memory.

The situation described above caused spontaneous kdump failures/hangs on systems where Secure IPL is activated, because on such systems an IPL report is always present. In that case kdump's decompressor tried to parse an IPL report, which frequently led to illegal memory accesses because an IPL report contains addresses of various data.
Cc: stable@vger.kernel.org Fixes: 99feaa717e55 ("s390/kexec_file: Create ipl report and pass to next kernel") Reviewed-by: Vasily Gorbik gor@linux.ibm.com Signed-off-by: Alexander Egorenkov egorenar@linux.ibm.com Signed-off-by: Heiko Carstens hca@linux.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/s390/kernel/machine_kexec_file.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/arch/s390/kernel/machine_kexec_file.c +++ b/arch/s390/kernel/machine_kexec_file.c @@ -185,8 +185,6 @@ static int kexec_file_add_ipl_report(str
data->memsz = ALIGN(data->memsz, PAGE_SIZE); buf.mem = data->memsz; - if (image->type == KEXEC_TYPE_CRASH) - buf.mem += crashk_res.start;
ptr = (void *)ipl_cert_list_addr; end = ptr + ipl_cert_list_size; @@ -223,6 +221,9 @@ static int kexec_file_add_ipl_report(str data->kernel_buf + offsetof(struct lowcore, ipl_parmblock_ptr); *lc_ipl_parmblock_ptr = (__u32)buf.mem;
+ if (image->type == KEXEC_TYPE_CRASH) + buf.mem += crashk_res.start; + ret = kexec_add_buffer(&buf); out: return ret;
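A tiny sketch with hypothetical addresses shows why the ordering matters: the lowcore pointer must keep the low (unshifted) address, which is the one that is valid after purgatory swaps the regions, and crashk_res.start is added only afterwards to form the physical load address handed to kexec_add_buffer().

  #include <stdio.h>

  int main(void)
  {
  	/* Hypothetical addresses, only to illustrate the ordering. */
  	unsigned long crashk_res_start = 0x10000000UL; /* start of crashkernel region   */
  	unsigned long buf_mem = 0x5000UL;              /* report offset in kdump image  */

  	/* Consumed by the kdump kernel *after* the swap: keep the low address. */
  	unsigned long lc_ipl_parmblock_ptr = buf_mem;

  	/* Needed by the loader *before* the swap: add the region start last. */
  	unsigned long load_addr = buf_mem + crashk_res_start;

  	printf("lowcore pointer: %#lx\n", lc_ipl_parmblock_ptr);
  	printf("load address   : %#lx\n", load_addr);
  	return 0;
  }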
From: Brian Norris computersforpeace@gmail.com
commit 000bca8d706d1bf7cca01af75787247c5a2fdedf upstream.
These indices should reference the ID placed within the dai_driver array, not the indices of the array itself.
This fixes commit 4ff028f6c108 ("ASoC: qcom: lpass-cpu: Make I2S SD lines configurable"), which among others, broke IPQ8064 audio (sound/soc/qcom/lpass-ipq806x.c) because it uses ID 4 but we'd stop initializing the mi2s_playback_sd_mode and mi2s_capture_sd_mode arrays at ID 0.
Fixes: 4ff028f6c108 ("ASoC: qcom: lpass-cpu: Make I2S SD lines configurable") Cc: stable@vger.kernel.org Signed-off-by: Brian Norris computersforpeace@gmail.com Reviewed-by: Stephan Gerhold stephan@gerhold.net Link: https://lore.kernel.org/r/20221231061545.2110253-1-computersforpeace@gmail.c... Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/soc/qcom/lpass-cpu.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/sound/soc/qcom/lpass-cpu.c +++ b/sound/soc/qcom/lpass-cpu.c @@ -816,10 +816,11 @@ static void of_lpass_cpu_parse_dai_data( struct lpass_data *data) { struct device_node *node; - int ret, id; + int ret, i, id;
/* Allow all channels by default for backwards compatibility */ - for (id = 0; id < data->variant->num_dai; id++) { + for (i = 0; i < data->variant->num_dai; i++) { + id = data->variant->dai_driver[i].id; data->mi2s_playback_sd_mode[id] = LPAIF_I2SCTL_MODE_8CH; data->mi2s_capture_sd_mode[id] = LPAIF_I2SCTL_MODE_8CH; }
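A cut-down standalone sketch (hypothetical DAI table with a single entry whose id is 4, as on IPQ8064) shows the difference between indexing the SD-mode tables by array position and by DAI id:

  #include <stdio.h>

  struct dai_drv { int id; };

  int main(void)
  {
  	struct dai_drv dai_driver[] = { { .id = 4 } };
  	int num_dai = 1;
  	int playback_sd_mode[8] = { 0 };
  	enum { MODE_8CH = 0xff };

  	/* Old loop: indexes the mode table by array position (0), so the
  	 * entry for DAI id 4 is never initialised. */
  	for (int id = 0; id < num_dai; id++)
  		playback_sd_mode[id] = MODE_8CH;
  	printf("old: mode[4] = %#x\n", playback_sd_mode[4]);	/* 0 */

  	/* Fixed loop: walks the array, but indexes the table by .id. */
  	for (int i = 0; i < num_dai; i++)
  		playback_sd_mode[dai_driver[i].id] = MODE_8CH;
  	printf("new: mode[4] = %#x\n", playback_sd_mode[4]);	/* 0xff */
  	return 0;
  }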
From: Heiko Carstens hca@linux.ibm.com
commit 82d3edb50a11bf3c5ef63294d5358ba230181413 upstream.
The current cmpxchg_double() loops within the perf hw sampling code do not have READ_ONCE() semantics to read the old value from memory. This allows the compiler to generate code which reads the "old" value several times from memory, which again allows for inconsistencies.
For example:
  /* Reset trailer (using compare-double-and-swap) */
  do {
          te_flags = te->flags & ~SDB_TE_BUFFER_FULL_MASK;
          te_flags |= SDB_TE_ALERT_REQ_MASK;
  } while (!cmpxchg_double(&te->flags, &te->overflow,
                           te->flags, te->overflow,
                           te_flags, 0ULL));
The compiler could generate code where te->flags used within the cmpxchg_double() call may be refetched from memory and which is not necessarily identical to the previous read version which was used to generate te_flags. Which in turn means that an incorrect update could happen.
Fix this by adding READ_ONCE() semantics to all cmpxchg_double() loops. Given that READ_ONCE() cannot generate code on s390 which atomically reads 16 bytes, use a private compare-and-swap-double implementation to achieve that.
Also replace cmpxchg_double() with the private implementation to be able to re-use the old value within the loops.
As a side effect this converts the whole code to only use bit fields to read and modify bits within the hws trailer header.
Reported-by: Alexander Gordeev agordeev@linux.ibm.com Acked-by: Alexander Gordeev agordeev@linux.ibm.com Acked-by: Hendrik Brueckner brueckner@linux.ibm.com Reviewed-by: Thomas Richter tmricht@linux.ibm.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/linux-s390/Y71QJBhNTIatvxUT@osiris/T/#ma14e2a5f7aa8e... Signed-off-by: Heiko Carstens hca@linux.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/s390/include/asm/cpu_mf.h | 31 +++++------- arch/s390/kernel/perf_cpum_sf.c | 101 ++++++++++++++++++++++++---------------- 2 files changed, 77 insertions(+), 55 deletions(-)
--- a/arch/s390/include/asm/cpu_mf.h +++ b/arch/s390/include/asm/cpu_mf.h @@ -128,19 +128,21 @@ struct hws_combined_entry { struct hws_diag_entry diag; /* Diagnostic-sampling data entry */ } __packed;
-struct hws_trailer_entry { - union { - struct { - unsigned int f:1; /* 0 - Block Full Indicator */ - unsigned int a:1; /* 1 - Alert request control */ - unsigned int t:1; /* 2 - Timestamp format */ - unsigned int :29; /* 3 - 31: Reserved */ - unsigned int bsdes:16; /* 32-47: size of basic SDE */ - unsigned int dsdes:16; /* 48-63: size of diagnostic SDE */ - }; - unsigned long long flags; /* 0 - 63: All indicators */ +union hws_trailer_header { + struct { + unsigned int f:1; /* 0 - Block Full Indicator */ + unsigned int a:1; /* 1 - Alert request control */ + unsigned int t:1; /* 2 - Timestamp format */ + unsigned int :29; /* 3 - 31: Reserved */ + unsigned int bsdes:16; /* 32-47: size of basic SDE */ + unsigned int dsdes:16; /* 48-63: size of diagnostic SDE */ + unsigned long long overflow; /* 64 - Overflow Count */ }; - unsigned long long overflow; /* 64 - sample Overflow count */ + __uint128_t val; +}; + +struct hws_trailer_entry { + union hws_trailer_header header; /* 0 - 15 Flags + Overflow Count */ unsigned char timestamp[16]; /* 16 - 31 timestamp */ unsigned long long reserved1; /* 32 -Reserved */ unsigned long long reserved2; /* */ @@ -287,14 +289,11 @@ static inline unsigned long sample_rate_ return USEC_PER_SEC * qsi->cpu_speed / rate; }
-#define SDB_TE_ALERT_REQ_MASK 0x4000000000000000UL -#define SDB_TE_BUFFER_FULL_MASK 0x8000000000000000UL - /* Return TOD timestamp contained in an trailer entry */ static inline unsigned long long trailer_timestamp(struct hws_trailer_entry *te) { /* TOD in STCKE format */ - if (te->t) + if (te->header.t) return *((unsigned long long *) &te->timestamp[1]);
/* TOD in STCK format */ --- a/arch/s390/kernel/perf_cpum_sf.c +++ b/arch/s390/kernel/perf_cpum_sf.c @@ -163,14 +163,15 @@ static void free_sampling_buffer(struct
static int alloc_sample_data_block(unsigned long *sdbt, gfp_t gfp_flags) { - unsigned long sdb, *trailer; + struct hws_trailer_entry *te; + unsigned long sdb;
/* Allocate and initialize sample-data-block */ sdb = get_zeroed_page(gfp_flags); if (!sdb) return -ENOMEM; - trailer = trailer_entry_ptr(sdb); - *trailer = SDB_TE_ALERT_REQ_MASK; + te = (struct hws_trailer_entry *)trailer_entry_ptr(sdb); + te->header.a = 1;
/* Link SDB into the sample-data-block-table */ *sdbt = sdb; @@ -1206,7 +1207,7 @@ static void hw_collect_samples(struct pe "%s: Found unknown" " sampling data entry: te->f %i" " basic.def %#4x (%p)\n", __func__, - te->f, sample->def, sample); + te->header.f, sample->def, sample); /* Sample slot is not yet written or other record. * * This condition can occur if the buffer was reused @@ -1217,7 +1218,7 @@ static void hw_collect_samples(struct pe * that are not full. Stop processing if the first * invalid format was detected. */ - if (!te->f) + if (!te->header.f) break; }
@@ -1227,6 +1228,16 @@ static void hw_collect_samples(struct pe } }
+static inline __uint128_t __cdsg(__uint128_t *ptr, __uint128_t old, __uint128_t new) +{ + asm volatile( + " cdsg %[old],%[new],%[ptr]\n" + : [old] "+d" (old), [ptr] "+QS" (*ptr) + : [new] "d" (new) + : "memory", "cc"); + return old; +} + /* hw_perf_event_update() - Process sampling buffer * @event: The perf event * @flush_all: Flag to also flush partially filled sample-data-blocks @@ -1243,10 +1254,11 @@ static void hw_collect_samples(struct pe */ static void hw_perf_event_update(struct perf_event *event, int flush_all) { + unsigned long long event_overflow, sampl_overflow, num_sdb; + union hws_trailer_header old, prev, new; struct hw_perf_event *hwc = &event->hw; struct hws_trailer_entry *te; unsigned long *sdbt; - unsigned long long event_overflow, sampl_overflow, num_sdb, te_flags; int done;
/* @@ -1266,25 +1278,25 @@ static void hw_perf_event_update(struct te = (struct hws_trailer_entry *) trailer_entry_ptr(*sdbt);
/* Leave loop if no more work to do (block full indicator) */ - if (!te->f) { + if (!te->header.f) { done = 1; if (!flush_all) break; }
/* Check the sample overflow count */ - if (te->overflow) + if (te->header.overflow) /* Account sample overflows and, if a particular limit * is reached, extend the sampling buffer. * For details, see sfb_account_overflows(). */ - sampl_overflow += te->overflow; + sampl_overflow += te->header.overflow;
/* Timestamps are valid for full sample-data-blocks only */ debug_sprintf_event(sfdbg, 6, "%s: sdbt %#lx " "overflow %llu timestamp %#llx\n", - __func__, (unsigned long)sdbt, te->overflow, - (te->f) ? trailer_timestamp(te) : 0ULL); + __func__, (unsigned long)sdbt, te->header.overflow, + (te->header.f) ? trailer_timestamp(te) : 0ULL);
/* Collect all samples from a single sample-data-block and * flag if an (perf) event overflow happened. If so, the PMU @@ -1294,12 +1306,16 @@ static void hw_perf_event_update(struct num_sdb++;
/* Reset trailer (using compare-double-and-swap) */ + /* READ_ONCE() 16 byte header */ + prev.val = __cdsg(&te->header.val, 0, 0); do { - te_flags = te->flags & ~SDB_TE_BUFFER_FULL_MASK; - te_flags |= SDB_TE_ALERT_REQ_MASK; - } while (!cmpxchg_double(&te->flags, &te->overflow, - te->flags, te->overflow, - te_flags, 0ULL)); + old.val = prev.val; + new.val = prev.val; + new.f = 0; + new.a = 1; + new.overflow = 0; + prev.val = __cdsg(&te->header.val, old.val, new.val); + } while (prev.val != old.val);
/* Advance to next sample-data-block */ sdbt++; @@ -1384,7 +1400,7 @@ static void aux_output_end(struct perf_o range_scan = AUX_SDB_NUM_ALERT(aux); for (i = 0, idx = aux->head; i < range_scan; i++, idx++) { te = aux_sdb_trailer(aux, idx); - if (!(te->flags & SDB_TE_BUFFER_FULL_MASK)) + if (!te->header.f) break; } /* i is num of SDBs which are full */ @@ -1392,7 +1408,7 @@ static void aux_output_end(struct perf_o
/* Remove alert indicators in the buffer */ te = aux_sdb_trailer(aux, aux->alert_mark); - te->flags &= ~SDB_TE_ALERT_REQ_MASK; + te->header.a = 0;
debug_sprintf_event(sfdbg, 6, "%s: SDBs %ld range %ld head %ld\n", __func__, i, range_scan, aux->head); @@ -1437,9 +1453,9 @@ static int aux_output_begin(struct perf_ idx = aux->empty_mark + 1; for (i = 0; i < range_scan; i++, idx++) { te = aux_sdb_trailer(aux, idx); - te->flags &= ~(SDB_TE_BUFFER_FULL_MASK | - SDB_TE_ALERT_REQ_MASK); - te->overflow = 0; + te->header.f = 0; + te->header.a = 0; + te->header.overflow = 0; } /* Save the position of empty SDBs */ aux->empty_mark = aux->head + range - 1; @@ -1448,7 +1464,7 @@ static int aux_output_begin(struct perf_ /* Set alert indicator */ aux->alert_mark = aux->head + range/2 - 1; te = aux_sdb_trailer(aux, aux->alert_mark); - te->flags = te->flags | SDB_TE_ALERT_REQ_MASK; + te->header.a = 1;
/* Reset hardware buffer head */ head = AUX_SDB_INDEX(aux, aux->head); @@ -1475,14 +1491,17 @@ static int aux_output_begin(struct perf_ static bool aux_set_alert(struct aux_buffer *aux, unsigned long alert_index, unsigned long long *overflow) { - unsigned long long orig_overflow, orig_flags, new_flags; + union hws_trailer_header old, prev, new; struct hws_trailer_entry *te;
te = aux_sdb_trailer(aux, alert_index); + /* READ_ONCE() 16 byte header */ + prev.val = __cdsg(&te->header.val, 0, 0); do { - orig_flags = te->flags; - *overflow = orig_overflow = te->overflow; - if (orig_flags & SDB_TE_BUFFER_FULL_MASK) { + old.val = prev.val; + new.val = prev.val; + *overflow = old.overflow; + if (old.f) { /* * SDB is already set by hardware. * Abort and try to set somewhere @@ -1490,10 +1509,10 @@ static bool aux_set_alert(struct aux_buf */ return false; } - new_flags = orig_flags | SDB_TE_ALERT_REQ_MASK; - } while (!cmpxchg_double(&te->flags, &te->overflow, - orig_flags, orig_overflow, - new_flags, 0ULL)); + new.a = 1; + new.overflow = 0; + prev.val = __cdsg(&te->header.val, old.val, new.val); + } while (prev.val != old.val); return true; }
@@ -1522,8 +1541,9 @@ static bool aux_set_alert(struct aux_buf static bool aux_reset_buffer(struct aux_buffer *aux, unsigned long range, unsigned long long *overflow) { - unsigned long long orig_overflow, orig_flags, new_flags; unsigned long i, range_scan, idx, idx_old; + union hws_trailer_header old, prev, new; + unsigned long long orig_overflow; struct hws_trailer_entry *te;
debug_sprintf_event(sfdbg, 6, "%s: range %ld head %ld alert %ld " @@ -1554,17 +1574,20 @@ static bool aux_reset_buffer(struct aux_ idx_old = idx = aux->empty_mark + 1; for (i = 0; i < range_scan; i++, idx++) { te = aux_sdb_trailer(aux, idx); + /* READ_ONCE() 16 byte header */ + prev.val = __cdsg(&te->header.val, 0, 0); do { - orig_flags = te->flags; - orig_overflow = te->overflow; - new_flags = orig_flags & ~SDB_TE_BUFFER_FULL_MASK; + old.val = prev.val; + new.val = prev.val; + orig_overflow = old.overflow; + new.f = 0; + new.overflow = 0; if (idx == aux->alert_mark) - new_flags |= SDB_TE_ALERT_REQ_MASK; + new.a = 1; else - new_flags &= ~SDB_TE_ALERT_REQ_MASK; - } while (!cmpxchg_double(&te->flags, &te->overflow, - orig_flags, orig_overflow, - new_flags, 0ULL)); + new.a = 0; + prev.val = __cdsg(&te->header.val, old.val, new.val); + } while (prev.val != old.val); *overflow += orig_overflow; }
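A generic userspace sketch of the fixed pattern, using a 64-bit GCC atomic builtin instead of the s390 cdsg instruction and made-up flag bits: the old value is loaded exactly once, and after a failed compare-and-swap the value returned by the CAS is reused rather than refetched from memory.

  #include <stdio.h>
  #include <stdint.h>

  #define BUFFER_FULL	(UINT64_C(1) << 63)
  #define ALERT_REQ	(UINT64_C(1) << 62)

  static uint64_t flags = BUFFER_FULL;	/* stand-in for the shared trailer word */

  int main(void)
  {
  	/* Read the old value exactly once; without this, the compiler is free
  	 * to reload "flags" several times and build the new value from a
  	 * different snapshot than the one passed to the CAS. */
  	uint64_t old = __atomic_load_n(&flags, __ATOMIC_RELAXED);
  	uint64_t new;

  	do {
  		new = (old & ~BUFFER_FULL) | ALERT_REQ;
  		/* On failure the builtin writes the current value back into
  		 * "old", so the loop reuses it instead of rereading memory,
  		 * mirroring how prev.val is reused after __cdsg(). */
  	} while (!__atomic_compare_exchange_n(&flags, &old, new, 0,
  					      __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST));

  	printf("flags = %#llx\n", (unsigned long long)flags);
  	return 0;
  }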
From: Heiko Carstens hca@linux.ibm.com
commit e3f360db08d55a14112bd27454e616a24296a8b0 upstream.
Make sure that *ptr__ within arch_this_cpu_to_op_simple() is only dereferenced once by using READ_ONCE(). Otherwise the compiler could generate incorrect code.
Cc: stable@vger.kernel.org Reviewed-by: Alexander Gordeev agordeev@linux.ibm.com Signed-off-by: Heiko Carstens hca@linux.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/s390/include/asm/percpu.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/s390/include/asm/percpu.h +++ b/arch/s390/include/asm/percpu.h @@ -31,7 +31,7 @@ pcp_op_T__ *ptr__; \ preempt_disable_notrace(); \ ptr__ = raw_cpu_ptr(&(pcp)); \ - prev__ = *ptr__; \ + prev__ = READ_ONCE(*ptr__); \ do { \ old__ = prev__; \ new__ = old__ op (val); \
From: Volker Lendecke vl@samba.org
commit a152d05ae4a71d802d50cf9177dba34e8bb09f68 upstream.
If smb311 posix is enabled, we send the intended mode for file creation in the posix create context. Instead of using what's there on the stack, create the mfsymlink file with 0644.
Fixes: ce558b0e17f8a ("smb3: Add posix create context for smb3.11 posix mounts") Cc: stable@vger.kernel.org Signed-off-by: Volker Lendecke vl@samba.org Reviewed-by: Tom Talpey tom@talpey.com Reviewed-by: Paulo Alcantara (SUSE) pc@cjr.nz Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/cifs/link.c | 1 + 1 file changed, 1 insertion(+)
--- a/fs/cifs/link.c +++ b/fs/cifs/link.c @@ -471,6 +471,7 @@ smb3_create_mf_symlink(unsigned int xid, oparms.disposition = FILE_CREATE; oparms.fid = &fid; oparms.reconnect = false; + oparms.mode = 0644;
rc = SMB2_open(xid, &oparms, utf16_path, &oplock, NULL, NULL, NULL, NULL);
From: Konrad Dybcio konrad.dybcio@linaro.org
commit 13ef096e342b00e30b95a90c6c13eee1f0bec4c5 upstream.
So far the adreno quirks have all been assigned with an OR operator, which is problematic, because they were assigned consecutive integer values, which makes checking them with an AND operator kind of no bueno..
Switch to using BIT(n) so that only the quirks that the programmer chose are taken into account when evaluating info->quirks & ADRENO_QUIRK_...
Fixes: 370063ee427a ("drm/msm/adreno: Add A540 support") Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Reviewed-by: Marijn Suijten marijn.suijten@somainline.org Reviewed-by: Rob Clark robdclark@gmail.com Signed-off-by: Konrad Dybcio konrad.dybcio@linaro.org Reviewed-by: Akhil P Oommen quic_akhilpo@quicinc.com Patchwork: https://patchwork.freedesktop.org/patch/516456/ Link: https://lore.kernel.org/r/20230102100201.77286-1-konrad.dybcio@linaro.org Signed-off-by: Rob Clark robdclark@chromium.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/msm/adreno/adreno_gpu.h | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-)
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -28,11 +28,9 @@ enum { ADRENO_FW_MAX, };
-enum adreno_quirks { - ADRENO_QUIRK_TWO_PASS_USE_WFI = 1, - ADRENO_QUIRK_FAULT_DETECT_MASK = 2, - ADRENO_QUIRK_LMLOADKILL_DISABLE = 3, -}; +#define ADRENO_QUIRK_TWO_PASS_USE_WFI BIT(0) +#define ADRENO_QUIRK_FAULT_DETECT_MASK BIT(1) +#define ADRENO_QUIRK_LMLOADKILL_DISABLE BIT(2)
struct adreno_rev { uint8_t core; @@ -62,7 +60,7 @@ struct adreno_info { const char *name; const char *fw[ADRENO_FW_MAX]; uint32_t gmem; - enum adreno_quirks quirks; + u64 quirks; struct msm_gpu *(*init)(struct drm_device *dev); const char *zapfw; u32 inactive_period;
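A minimal standalone sketch of the problem with the old consecutive values (3 == 1 | 2) versus the BIT(n) scheme:

  #include <stdio.h>

  #define BIT(n)	(1UL << (n))

  int main(void)
  {
  	/* Old scheme: consecutive integers used as if they were flag bits. */
  	enum { OLD_TWO_PASS = 1, OLD_FAULT_DETECT = 2, OLD_LMLOADKILL = 3 };
  	unsigned long quirks = OLD_LMLOADKILL;	/* value 3 == 1 | 2 */

  	/* A GPU that only has the LMLOADKILL quirk appears to have the other
  	 * two as well, because 3 shares bits with 1 and 2. */
  	printf("old: TWO_PASS? %d FAULT_DETECT? %d\n",
  	       !!(quirks & OLD_TWO_PASS), !!(quirks & OLD_FAULT_DETECT));

  	/* Fixed scheme: one bit per quirk, so the checks no longer overlap. */
  	quirks = BIT(2);	/* ADRENO_QUIRK_LMLOADKILL_DISABLE */
  	printf("new: TWO_PASS? %d FAULT_DETECT? %d\n",
  	       !!(quirks & BIT(0)), !!(quirks & BIT(1)));
  	return 0;
  }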
From: Kuogee Hsieh quic_khsieh@quicinc.com
commit 1cba0d150fa102439114a91b3e215909efc9f169 upstream.
There are three possible interrupt sources handled by the DP controller: HPD status, controller state changes, and AUX read/write transactions. At every irq, the DP controller has to check the isr status of every interrupt source and service the interrupt if its isr status bits show that interrupts are pending. There is a potential race condition in the current aux isr handler implementation because it always completes dp_aux_cmd_fifo_tx() even when the irq is not for an aux read or write transaction. This may cause an aux read transaction to return prematurely if the host is still waiting for the sink to finish transferring data when the irq happens, leaving unexpected data in the host's receive buffer. This patch fixes the problem by checking the aux isr and returning immediately from the aux isr handler if no isr status bits are set.
Currently there is a bug report regarding eDP EDID corruption during system boot. Lengthy debugging found that the VIDEO_READY interrupt was continuously firing during boot, which caused dp_aux_isr() to complete dp_aux_cmd_fifo_tx() prematurely and retrieve data from the aux hardware buffer before it contained the complete transfer from the sink. This caused the EDID corruption.

The following signature appears in the kernel logs when the problem happens:
EDID has corrupt header
panel-simple-dp-aux aux-aea0000.edp: Couldn't identify panel via EDID
Changes in v2:
-- do complete if (ret == IRQ_HANDLED) at dp_aux_isr()
-- add more commit text
Changes in v3:
-- add Stephen suggested
-- dp_aux_isr() return IRQ_XXX back to caller
-- dp_ctrl_isr() return IRQ_XXX back to caller
Changes in v4:
-- split into two patches

Changes in v5:
-- delete empty line between tags

Changes in v6:
-- remove extra "that" and fixed line more than 75 char at commit text
Fixes: c943b4948b58 ("drm/msm/dp: add displayPort driver support") Signed-off-by: Kuogee Hsieh quic_khsieh@quicinc.com Tested-by: Douglas Anderson dianders@chromium.org Reviewed-by: Abhinav Kumar quic_abhinavk@quicinc.com Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Patchwork: https://patchwork.freedesktop.org/patch/516121/ Link: https://lore.kernel.org/r/1672193785-11003-2-git-send-email-quic_khsieh@quic... Signed-off-by: Abhinav Kumar quic_abhinavk@quicinc.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/msm/dp/dp_aux.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/drivers/gpu/drm/msm/dp/dp_aux.c +++ b/drivers/gpu/drm/msm/dp/dp_aux.c @@ -423,6 +423,10 @@ void dp_aux_isr(struct drm_dp_aux *dp_au
aux->isr = dp_catalog_aux_get_irq(aux->catalog);
+ /* no interrupts pending, return immediately */ + if (!isr) + return; + if (!aux->cmd_busy) return;
From: Hans de Goede hdegoede@redhat.com
commit ad75bd85b1db69c97eefea07b375567821f6ef58 upstream.
The 0x153 version of the kbd backlight control SNC handle has no separate address to probe if the backlight is there.
This turns the probe call into a set keyboard backlight call with a value of 0 turning off the keyboard backlight.
Skip probing when there is no separate probe address to avoid this.
Link: https://bugzilla.redhat.com/show_bug.cgi?id=1583752 Fixes: 800f20170dcf ("Keyboard backlight control for some Vaio Fit models") Signed-off-by: Hans de Goede hdegoede@redhat.com Reviewed-by: Mattia Dongili malattia@linux.it Link: https://lore.kernel.org/r/20221213122943.11123-1-hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/platform/x86/sony-laptop.c | 21 ++++++++++++++------- 1 file changed, 14 insertions(+), 7 deletions(-)
--- a/drivers/platform/x86/sony-laptop.c +++ b/drivers/platform/x86/sony-laptop.c @@ -1892,14 +1892,21 @@ static int sony_nc_kbd_backlight_setup(s break; }
- ret = sony_call_snc_handle(handle, probe_base, &result); - if (ret) - return ret; + /* + * Only probe if there is a separate probe_base, otherwise the probe call + * is equivalent to __sony_nc_kbd_backlight_mode_set(0), resulting in + * the keyboard backlight being turned off. + */ + if (probe_base) { + ret = sony_call_snc_handle(handle, probe_base, &result); + if (ret) + return ret;
- if ((handle == 0x0137 && !(result & 0x02)) || - !(result & 0x01)) { - dprintk("no backlight keyboard found\n"); - return 0; + if ((handle == 0x0137 && !(result & 0x02)) || + !(result & 0x01)) { + dprintk("no backlight keyboard found\n"); + return 0; + } }
kbdbl_ctl = kzalloc(sizeof(*kbdbl_ctl), GFP_KERNEL);
From: Yang Yingliang yangyingliang@huawei.com
commit b93fb4405fcb5112c5739c5349afb52ec7f15c07 upstream.
As the comment of pci_get_domain_bus_and_slot() says, it returns a PCI device with its refcount incremented; when finished using it, the caller must decrement the reference count by calling pci_dev_put().

In ixgbe_get_first_secondary_devfn() and ixgbe_x550em_a_has_mii(), pci_dev_put() is called to avoid the leak.
Fixes: 8fa10ef01260 ("ixgbe: register a mdiobus") Signed-off-by: Yang Yingliang yangyingliang@huawei.com Tested-by: Gurucharan G gurucharanx.g@intel.com (A Contingent worker at Intel) Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-)
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c @@ -851,9 +851,11 @@ static struct pci_dev *ixgbe_get_first_s rp_pdev = pci_get_domain_bus_and_slot(0, 0, devfn); if (rp_pdev && rp_pdev->subordinate) { bus = rp_pdev->subordinate->number; + pci_dev_put(rp_pdev); return pci_get_domain_bus_and_slot(0, bus, 0); }
+ pci_dev_put(rp_pdev); return NULL; }
@@ -870,6 +872,7 @@ static bool ixgbe_x550em_a_has_mii(struc struct ixgbe_adapter *adapter = hw->back; struct pci_dev *pdev = adapter->pdev; struct pci_dev *func0_pdev; + bool has_mii = false;
/* For the C3000 family of SoCs (x550em_a) the internal ixgbe devices * are always downstream of root ports @ 0000:00:16.0 & 0000:00:17.0 @@ -880,15 +883,16 @@ static bool ixgbe_x550em_a_has_mii(struc func0_pdev = ixgbe_get_first_secondary_devfn(PCI_DEVFN(0x16, 0)); if (func0_pdev) { if (func0_pdev == pdev) - return true; - else - return false; + has_mii = true; + goto out; } func0_pdev = ixgbe_get_first_secondary_devfn(PCI_DEVFN(0x17, 0)); if (func0_pdev == pdev) - return true; + has_mii = true;
- return false; +out: + pci_dev_put(func0_pdev); + return has_mii; }
/**
From: Herbert Xu herbert@gondor.apana.org.au
commit cb3e9864cdbe35ff6378966660edbcbac955fe17 upstream.
The total cork length created by ip6_append_data includes extension headers, so we must exclude them when comparing them against the IPV6_CHECKSUM offset which does not include extension headers.
Reported-by: Kyle Zeng zengyhkyle@gmail.com Fixes: 357b40a18b04 ("[IPV6]: IPV6_CHECKSUM socket option can corrupt kernel memory") Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/ipv6/raw.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/net/ipv6/raw.c +++ b/net/ipv6/raw.c @@ -539,6 +539,7 @@ csum_copy_err: static int rawv6_push_pending_frames(struct sock *sk, struct flowi6 *fl6, struct raw6_sock *rp) { + struct ipv6_txoptions *opt; struct sk_buff *skb; int err = 0; int offset; @@ -556,6 +557,9 @@ static int rawv6_push_pending_frames(str
offset = rp->offset; total_len = inet_sk(sk)->cork.base.length; + opt = inet6_sk(sk)->cork.opt; + total_len -= opt ? opt->opt_flen : 0; + if (offset >= total_len - 1) { err = -EINVAL; ip6_flush_pending_frames(sk);
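A small standalone sketch with made-up sizes shows why the extension header length has to be deducted before the offset check:

  #include <stdio.h>

  int main(void)
  {
  	/* Made-up sizes: 8 bytes of extension headers ahead of an 8-byte
  	 * payload, and an IPV6_CHECKSUM offset pointing past the payload. */
  	unsigned int payload_len = 8;
  	unsigned int opt_flen    = 8;	/* extension header bytes in the cork */
  	unsigned int total_len   = payload_len + opt_flen;
  	unsigned int csum_offset = 10;	/* relative to the payload only */

  	/* Old check: compares the payload-relative offset against a cork
  	 * length that still includes the extension headers. */
  	printf("old check: %s\n",
  	       csum_offset >= total_len - 1 ? "rejected" : "accepted (bad)");

  	/* Fixed check: deduct opt_flen first, as rawv6_push_pending_frames()
  	 * now does, so the out-of-range offset is rejected. */
  	total_len -= opt_flen;
  	printf("new check: %s\n",
  	       csum_offset >= total_len - 1 ? "rejected (good)" : "accepted");
  	return 0;
  }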
From: Qiang Yu quic_qianyu@quicinc.com
[ Upstream commit 869a99907faea6d1835b0bd0d0422ae3519c6ea9 ]
There is a race condition where mhi_prepare_channel() updates the read and write pointers as the base address and in parallel, if an M0 transition occurs, the tasklet goes ahead and rings doorbells for all channels with a delta in TRE rings assuming they are already enabled. This causes a null pointer access. Fix it by adding a channel enabled check before ringing channel doorbells.
Cc: stable@vger.kernel.org # 5.19 Fixes: a6e2e3522f29 "bus: mhi: core: Add support for PM state transitions" Signed-off-by: Qiang Yu quic_qianyu@quicinc.com Reviewed-by: Manivannan Sadhasivam mani@kernel.org Link: https://lore.kernel.org/r/1665889532-13634-1-git-send-email-quic_qianyu@quic... [mani: CCed stable list] Signed-off-by: Manivannan Sadhasivam manivannan.sadhasivam@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/bus/mhi/core/pm.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c index 044dcdd723a7..7d69b740b9f9 100644 --- a/drivers/bus/mhi/core/pm.c +++ b/drivers/bus/mhi/core/pm.c @@ -298,7 +298,8 @@ int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl) read_lock_irq(&mhi_chan->lock);
/* Only ring DB if ring is not empty */ - if (tre_ring->base && tre_ring->wp != tre_ring->rp) + if (tre_ring->base && tre_ring->wp != tre_ring->rp && + mhi_chan->ch_state == MHI_CH_STATE_ENABLED) mhi_ring_chan_db(mhi_cntrl, mhi_chan); read_unlock_irq(&mhi_chan->lock); }
From: Suravee Suthikulpanit suravee.suthikulpanit@amd.com
[ Upstream commit bbe3a106580c21bc883fb0c9fa3da01534392fe8 ]
By default, the PCI segment is zero and can be omitted. To support systems with a non-zero PCI segment ID, modify the parsing functions to allow a PCI segment ID.
Co-developed-by: Vasant Hegde vasant.hegde@amd.com Signed-off-by: Vasant Hegde vasant.hegde@amd.com Signed-off-by: Suravee Suthikulpanit suravee.suthikulpanit@amd.com Link: https://lore.kernel.org/r/20220706113825.25582-33-vasant.hegde@amd.com Signed-off-by: Joerg Roedel jroedel@suse.de Stable-dep-of: 1198d2316dc4 ("iommu/amd: Fix ill-formed ivrs_ioapic, ivrs_hpet and ivrs_acpihid options") Signed-off-by: Sasha Levin sashal@kernel.org --- .../admin-guide/kernel-parameters.txt | 34 ++++++++++---- drivers/iommu/amd/init.c | 44 ++++++++++++------- 2 files changed, 52 insertions(+), 26 deletions(-)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index f577c29f2093..ce93bb358bc1 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -2103,23 +2103,39 @@
ivrs_ioapic [HW,X86-64] Provide an override to the IOAPIC-ID<->DEVICE-ID - mapping provided in the IVRS ACPI table. For - example, to map IOAPIC-ID decimal 10 to - PCI device 00:14.0 write the parameter as: + mapping provided in the IVRS ACPI table. + By default, PCI segment is 0, and can be omitted. + For example: + * To map IOAPIC-ID decimal 10 to PCI device 00:14.0 + write the parameter as: ivrs_ioapic[10]=00:14.0 + * To map IOAPIC-ID decimal 10 to PCI segment 0x1 and + PCI device 00:14.0 write the parameter as: + ivrs_ioapic[10]=0001:00:14.0
ivrs_hpet [HW,X86-64] Provide an override to the HPET-ID<->DEVICE-ID - mapping provided in the IVRS ACPI table. For - example, to map HPET-ID decimal 0 to - PCI device 00:14.0 write the parameter as: + mapping provided in the IVRS ACPI table. + By default, PCI segment is 0, and can be omitted. + For example: + * To map HPET-ID decimal 0 to PCI device 00:14.0 + write the parameter as: ivrs_hpet[0]=00:14.0 + * To map HPET-ID decimal 10 to PCI segment 0x1 and + PCI device 00:14.0 write the parameter as: + ivrs_ioapic[10]=0001:00:14.0
ivrs_acpihid [HW,X86-64] Provide an override to the ACPI-HID:UID<->DEVICE-ID - mapping provided in the IVRS ACPI table. For - example, to map UART-HID:UID AMD0020:0 to - PCI device 00:14.5 write the parameter as: + mapping provided in the IVRS ACPI table. + + For example, to map UART-HID:UID AMD0020:0 to + PCI segment 0x1 and PCI device ID 00:14.5, + write the parameter as: + ivrs_acpihid[0001:00:14.5]=AMD0020:0 + + By default, PCI segment is 0, and can be omitted. + For example, PCI device 00:14.5 write the parameter as: ivrs_acpihid[00:14.5]=AMD0020:0
js= [HW,JOY] Analog joystick diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c index 310ab24d003a..85370f305aa8 100644 --- a/drivers/iommu/amd/init.c +++ b/drivers/iommu/amd/init.c @@ -85,6 +85,10 @@ #define ACPI_DEVFLAG_ATSDIS 0x10000000
#define LOOP_TIMEOUT 2000000 + +#define IVRS_GET_SBDF_ID(seg, bus, dev, fd) (((seg & 0xffff) << 16) | ((bus & 0xff) << 8) \ + | ((dev & 0x1f) << 3) | (fn & 0x7)) + /* * ACPI table definitions * @@ -3046,15 +3050,17 @@ static int __init parse_amd_iommu_options(char *str)
static int __init parse_ivrs_ioapic(char *str) { - unsigned int bus, dev, fn; + u32 seg = 0, bus, dev, fn; int ret, id, i; - u16 devid; + u32 devid;
ret = sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn); - if (ret != 4) { - pr_err("Invalid command line: ivrs_ioapic%s\n", str); - return 1; + ret = sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn); + if (ret != 5) { + pr_err("Invalid command line: ivrs_ioapic%s\n", str); + return 1; + } }
if (early_ioapic_map_size == EARLY_MAP_SIZE) { @@ -3063,7 +3069,7 @@ static int __init parse_ivrs_ioapic(char *str) return 1; }
- devid = ((bus & 0xff) << 8) | ((dev & 0x1f) << 3) | (fn & 0x7); + devid = IVRS_GET_SBDF_ID(seg, bus, dev, fn);
cmdline_maps = true; i = early_ioapic_map_size++; @@ -3076,15 +3082,17 @@ static int __init parse_ivrs_ioapic(char *str)
static int __init parse_ivrs_hpet(char *str) { - unsigned int bus, dev, fn; + u32 seg = 0, bus, dev, fn; int ret, id, i; - u16 devid; + u32 devid;
ret = sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn); - if (ret != 4) { - pr_err("Invalid command line: ivrs_hpet%s\n", str); - return 1; + ret = sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn); + if (ret != 5) { + pr_err("Invalid command line: ivrs_hpet%s\n", str); + return 1; + } }
if (early_hpet_map_size == EARLY_MAP_SIZE) { @@ -3093,7 +3101,7 @@ static int __init parse_ivrs_hpet(char *str) return 1; }
- devid = ((bus & 0xff) << 8) | ((dev & 0x1f) << 3) | (fn & 0x7); + devid = IVRS_GET_SBDF_ID(seg, bus, dev, fn);
cmdline_maps = true; i = early_hpet_map_size++; @@ -3106,15 +3114,18 @@ static int __init parse_ivrs_hpet(char *str)
static int __init parse_ivrs_acpihid(char *str) { - u32 bus, dev, fn; + u32 seg = 0, bus, dev, fn; char *hid, *uid, *p; char acpiid[ACPIHID_UID_LEN + ACPIHID_HID_LEN] = {0}; int ret, i;
ret = sscanf(str, "[%x:%x.%x]=%s", &bus, &dev, &fn, acpiid); if (ret != 4) { - pr_err("Invalid command line: ivrs_acpihid(%s)\n", str); - return 1; + ret = sscanf(str, "[%x:%x:%x.%x]=%s", &seg, &bus, &dev, &fn, acpiid); + if (ret != 5) { + pr_err("Invalid command line: ivrs_acpihid(%s)\n", str); + return 1; + } }
p = acpiid; @@ -3136,8 +3147,7 @@ static int __init parse_ivrs_acpihid(char *str) i = early_acpihid_map_size++; memcpy(early_acpihid_map[i].hid, hid, strlen(hid)); memcpy(early_acpihid_map[i].uid, uid, strlen(uid)); - early_acpihid_map[i].devid = - ((bus & 0xff) << 8) | ((dev & 0x1f) << 3) | (fn & 0x7); + early_acpihid_map[i].devid = IVRS_GET_SBDF_ID(seg, bus, dev, fn); early_acpihid_map[i].cmd_line = true;
return 1;
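A standalone sketch of the extended parsing added here, reusing the patch's sscanf format and devid packing with the documentation's example ivrs_ioapic[10]=0001:00:14.0 (segment 0x1, device 00:14.0):

  #include <stdio.h>

  /* Same packing as the patch's IVRS_GET_SBDF_ID() macro: 16-bit segment,
   * 8-bit bus, 5-bit device, 3-bit function. */
  #define IVRS_GET_SBDF_ID(seg, bus, dev, fn) \
  	(((seg & 0xffff) << 16) | ((bus & 0xff) << 8) | ((dev & 0x1f) << 3) | (fn & 0x7))

  int main(void)
  {
  	unsigned int seg = 0, bus, dev, fn;
  	int id;
  	const char *str = "[10]=0001:00:14.0";

  	/* Five-field form with a PCI segment; the four-field form without a
  	 * segment still parses because seg stays 0 when it is omitted. */
  	if (sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5)
  		printf("ioapic %d -> devid %#x\n", id,
  		       IVRS_GET_SBDF_ID(seg, bus, dev, fn));	/* 0x100a0 */
  	return 0;
  }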
From: Kim Phillips kim.phillips@amd.com
[ Upstream commit 1198d2316dc4265a97d0e8445a22c7a6d17580a4 ]
Currently, these options cause the following libkmod error:
libkmod: ERROR ../libkmod/libkmod-config.c:489 kcmdline_parse_result: \
Ignoring bad option on kernel command line while parsing module \
name: 'ivrs_xxxx[XX:XX'
Fix by introducing a new parameter format for these options and throw a warning for the deprecated format.
Users are still allowed to omit the PCI Segment if zero.
Adding a Link: to the reason why we're modding the syntax parsing in the driver and not in libkmod.
Fixes: ca3bf5d47cec ("iommu/amd: Introduces ivrs_acpihid kernel parameter") Cc: stable@vger.kernel.org Link: https://lore.kernel.org/linux-modules/20200310082308.14318-2-lucas.demarchi@... Reported-by: Kim Phillips kim.phillips@amd.com Co-developed-by: Suravee Suthikulpanit suravee.suthikulpanit@amd.com Signed-off-by: Suravee Suthikulpanit suravee.suthikulpanit@amd.com Signed-off-by: Kim Phillips kim.phillips@amd.com Link: https://lore.kernel.org/r/20220919155638.391481-2-kim.phillips@amd.com Signed-off-by: Joerg Roedel jroedel@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- .../admin-guide/kernel-parameters.txt | 27 +++++-- drivers/iommu/amd/init.c | 79 +++++++++++++------ 2 files changed, 76 insertions(+), 30 deletions(-)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index ce93bb358bc1..eb437d659f2c 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -2105,7 +2105,13 @@ Provide an override to the IOAPIC-ID<->DEVICE-ID mapping provided in the IVRS ACPI table. By default, PCI segment is 0, and can be omitted. - For example: + + For example, to map IOAPIC-ID decimal 10 to + PCI segment 0x1 and PCI device 00:14.0, + write the parameter as: + ivrs_ioapic=10@0001:00:14.0 + + Deprecated formats: * To map IOAPIC-ID decimal 10 to PCI device 00:14.0 write the parameter as: ivrs_ioapic[10]=00:14.0 @@ -2117,7 +2123,13 @@ Provide an override to the HPET-ID<->DEVICE-ID mapping provided in the IVRS ACPI table. By default, PCI segment is 0, and can be omitted. - For example: + + For example, to map HPET-ID decimal 10 to + PCI segment 0x1 and PCI device 00:14.0, + write the parameter as: + ivrs_hpet=10@0001:00:14.0 + + Deprecated formats: * To map HPET-ID decimal 0 to PCI device 00:14.0 write the parameter as: ivrs_hpet[0]=00:14.0 @@ -2128,15 +2140,20 @@ ivrs_acpihid [HW,X86-64] Provide an override to the ACPI-HID:UID<->DEVICE-ID mapping provided in the IVRS ACPI table. + By default, PCI segment is 0, and can be omitted.
For example, to map UART-HID:UID AMD0020:0 to PCI segment 0x1 and PCI device ID 00:14.5, write the parameter as: - ivrs_acpihid[0001:00:14.5]=AMD0020:0 + ivrs_acpihid=AMD0020:0@0001:00:14.5
- By default, PCI segment is 0, and can be omitted. - For example, PCI device 00:14.5 write the parameter as: + Deprecated formats: + * To map UART-HID:UID AMD0020:0 to PCI segment is 0, + PCI device ID 00:14.5, write the parameter as: ivrs_acpihid[00:14.5]=AMD0020:0 + * To map UART-HID:UID AMD0020:0 to PCI segment 0x1 and + PCI device ID 00:14.5, write the parameter as: + ivrs_acpihid[0001:00:14.5]=AMD0020:0
js= [HW,JOY] Analog joystick See Documentation/input/joydev/joystick.rst. diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c index 85370f305aa8..ce822347f747 100644 --- a/drivers/iommu/amd/init.c +++ b/drivers/iommu/amd/init.c @@ -3051,18 +3051,24 @@ static int __init parse_amd_iommu_options(char *str) static int __init parse_ivrs_ioapic(char *str) { u32 seg = 0, bus, dev, fn; - int ret, id, i; + int id, i; u32 devid;
- ret = sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn); - if (ret != 4) { - ret = sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn); - if (ret != 5) { - pr_err("Invalid command line: ivrs_ioapic%s\n", str); - return 1; - } + if (sscanf(str, "=%d@%x:%x.%x", &id, &bus, &dev, &fn) == 4 || + sscanf(str, "=%d@%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5) + goto found; + + if (sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn) == 4 || + sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5) { + pr_warn("ivrs_ioapic%s option format deprecated; use ivrs_ioapic=%d@%04x:%02x:%02x.%d instead\n", + str, id, seg, bus, dev, fn); + goto found; }
+ pr_err("Invalid command line: ivrs_ioapic%s\n", str); + return 1; + +found: if (early_ioapic_map_size == EARLY_MAP_SIZE) { pr_err("Early IOAPIC map overflow - ignoring ivrs_ioapic%s\n", str); @@ -3083,18 +3089,24 @@ static int __init parse_ivrs_ioapic(char *str) static int __init parse_ivrs_hpet(char *str) { u32 seg = 0, bus, dev, fn; - int ret, id, i; + int id, i; u32 devid;
- ret = sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn); - if (ret != 4) { - ret = sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn); - if (ret != 5) { - pr_err("Invalid command line: ivrs_hpet%s\n", str); - return 1; - } + if (sscanf(str, "=%d@%x:%x.%x", &id, &bus, &dev, &fn) == 4 || + sscanf(str, "=%d@%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5) + goto found; + + if (sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn) == 4 || + sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5) { + pr_warn("ivrs_hpet%s option format deprecated; use ivrs_hpet=%d@%04x:%02x:%02x.%d instead\n", + str, id, seg, bus, dev, fn); + goto found; }
+ pr_err("Invalid command line: ivrs_hpet%s\n", str); + return 1; + +found: if (early_hpet_map_size == EARLY_MAP_SIZE) { pr_err("Early HPET map overflow - ignoring ivrs_hpet%s\n", str); @@ -3115,19 +3127,36 @@ static int __init parse_ivrs_hpet(char *str) static int __init parse_ivrs_acpihid(char *str) { u32 seg = 0, bus, dev, fn; - char *hid, *uid, *p; + char *hid, *uid, *p, *addr; char acpiid[ACPIHID_UID_LEN + ACPIHID_HID_LEN] = {0}; - int ret, i; - - ret = sscanf(str, "[%x:%x.%x]=%s", &bus, &dev, &fn, acpiid); - if (ret != 4) { - ret = sscanf(str, "[%x:%x:%x.%x]=%s", &seg, &bus, &dev, &fn, acpiid); - if (ret != 5) { - pr_err("Invalid command line: ivrs_acpihid(%s)\n", str); - return 1; + int i; + + addr = strchr(str, '@'); + if (!addr) { + if (sscanf(str, "[%x:%x.%x]=%s", &bus, &dev, &fn, acpiid) == 4 || + sscanf(str, "[%x:%x:%x.%x]=%s", &seg, &bus, &dev, &fn, acpiid) == 5) { + pr_warn("ivrs_acpihid%s option format deprecated; use ivrs_acpihid=%s@%04x:%02x:%02x.%d instead\n", + str, acpiid, seg, bus, dev, fn); + goto found; } + goto not_found; }
+ /* We have the '@', make it the terminator to get just the acpiid */ + *addr++ = 0; + + if (sscanf(str, "=%s", acpiid) != 1) + goto not_found; + + if (sscanf(addr, "%x:%x.%x", &bus, &dev, &fn) == 3 || + sscanf(addr, "%x:%x:%x.%x", &seg, &bus, &dev, &fn) == 4) + goto found; + +not_found: + pr_err("Invalid command line: ivrs_acpihid%s\n", str); + return 1; + +found: p = acpiid; hid = strsep(&p, ":"); uid = p;
From: Marek Vasut marex@denx.de
[ Upstream commit 39772efd98adecbd5b8c6096d465d2fcbafbde6a ]
Add the pixel clock for the second LCDIFv3 interface. Both LCDIFv3 interfaces use the same set of parent clocks, so deduplicate imx8mp_media_disp1_pix_sels into a common imx8mp_media_disp_pix_sels and use it for both.
Signed-off-by: Marek Vasut marex@denx.de Cc: Abel Vesa abel.vesa@nxp.com Cc: Fabio Estevam festevam@gmail.com Cc: NXP Linux Team linux-imx@nxp.com Cc: Peng Fan peng.fan@nxp.com Cc: Shawn Guo shawnguo@kernel.org Reviewed-by: Abel Vesa abel.vesa@nxp.com Link: https://lore.kernel.org/r/20220313123949.207284-1-marex@denx.de Signed-off-by: Abel Vesa abel.vesa@nxp.com Stable-dep-of: 5c1f7f109094 ("dt-bindings: clocks: imx8mp: Add ID for usb suspend clock") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/clk/imx/clk-imx8mp.c | 5 +++-- include/dt-bindings/clock/imx8mp-clock.h | 4 +++- 2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c index 36e8d619e334..2af39c8240fa 100644 --- a/drivers/clk/imx/clk-imx8mp.c +++ b/drivers/clk/imx/clk-imx8mp.c @@ -362,7 +362,7 @@ static const char * const imx8mp_media_mipi_phy1_ref_sels[] = {"osc_24m", "sys_p "clk_ext2", "audio_pll2_out", "video_pll1_out", };
-static const char * const imx8mp_media_disp1_pix_sels[] = {"osc_24m", "video_pll1_out", "audio_pll2_out", +static const char * const imx8mp_media_disp_pix_sels[] = {"osc_24m", "video_pll1_out", "audio_pll2_out", "audio_pll1_out", "sys_pll1_800m", "sys_pll2_1000m", "sys_pll3_out", "clk_ext4", };
@@ -566,6 +566,7 @@ static int imx8mp_clocks_probe(struct platform_device *pdev) hws[IMX8MP_CLK_AHB] = imx8m_clk_hw_composite_bus_critical("ahb_root", imx8mp_ahb_sels, ccm_base + 0x9000); hws[IMX8MP_CLK_AUDIO_AHB] = imx8m_clk_hw_composite_bus("audio_ahb", imx8mp_audio_ahb_sels, ccm_base + 0x9100); hws[IMX8MP_CLK_MIPI_DSI_ESC_RX] = imx8m_clk_hw_composite_bus("mipi_dsi_esc_rx", imx8mp_mipi_dsi_esc_rx_sels, ccm_base + 0x9200); + hws[IMX8MP_CLK_MEDIA_DISP2_PIX] = imx8m_clk_hw_composite("media_disp2_pix", imx8mp_media_disp_pix_sels, ccm_base + 0x9300);
hws[IMX8MP_CLK_IPG_ROOT] = imx_clk_hw_divider2("ipg_root", "ahb_root", ccm_base + 0x9080, 0, 1); hws[IMX8MP_CLK_IPG_AUDIO_ROOT] = imx_clk_hw_divider2("ipg_audio_root", "audio_ahb", ccm_base + 0x9180, 0, 1); @@ -630,7 +631,7 @@ static int imx8mp_clocks_probe(struct platform_device *pdev) hws[IMX8MP_CLK_USDHC3] = imx8m_clk_hw_composite("usdhc3", imx8mp_usdhc3_sels, ccm_base + 0xbc80); hws[IMX8MP_CLK_MEDIA_CAM1_PIX] = imx8m_clk_hw_composite("media_cam1_pix", imx8mp_media_cam1_pix_sels, ccm_base + 0xbd00); hws[IMX8MP_CLK_MEDIA_MIPI_PHY1_REF] = imx8m_clk_hw_composite("media_mipi_phy1_ref", imx8mp_media_mipi_phy1_ref_sels, ccm_base + 0xbd80); - hws[IMX8MP_CLK_MEDIA_DISP1_PIX] = imx8m_clk_hw_composite("media_disp1_pix", imx8mp_media_disp1_pix_sels, ccm_base + 0xbe00); + hws[IMX8MP_CLK_MEDIA_DISP1_PIX] = imx8m_clk_hw_composite("media_disp1_pix", imx8mp_media_disp_pix_sels, ccm_base + 0xbe00); hws[IMX8MP_CLK_MEDIA_CAM2_PIX] = imx8m_clk_hw_composite("media_cam2_pix", imx8mp_media_cam2_pix_sels, ccm_base + 0xbe80); hws[IMX8MP_CLK_MEDIA_LDB] = imx8m_clk_hw_composite("media_ldb", imx8mp_media_ldb_sels, ccm_base + 0xbf00); hws[IMX8MP_CLK_MEMREPAIR] = imx8m_clk_hw_composite_critical("mem_repair", imx8mp_memrepair_sels, ccm_base + 0xbf80); diff --git a/include/dt-bindings/clock/imx8mp-clock.h b/include/dt-bindings/clock/imx8mp-clock.h index e8d68fbb6e3f..4a621fddfcd3 100644 --- a/include/dt-bindings/clock/imx8mp-clock.h +++ b/include/dt-bindings/clock/imx8mp-clock.h @@ -322,7 +322,9 @@ #define IMX8MP_CLK_HSIO_AXI 311 #define IMX8MP_CLK_MEDIA_ISP 312
-#define IMX8MP_CLK_END 313 +#define IMX8MP_CLK_MEDIA_DISP2_PIX 313 + +#define IMX8MP_CLK_END 314
#define IMX8MP_CLK_AUDIOMIX_SAI1_IPG 0 #define IMX8MP_CLK_AUDIOMIX_SAI1_MCLK1 1
From: Lucas Stach l.stach@pengutronix.de
[ Upstream commit 43896f56b59eeaf08687fa976257ae7083d01b41 ]
clkout1 and clkout2 allow clocks to be supplied from the SoC to the board, which some board designs use to provide reference clocks.
Signed-off-by: Lucas Stach l.stach@pengutronix.de Reviewed-by: Abel Vesa abel.vesa@nxp.com Link: https://lore.kernel.org/r/20220427162131.3127303-1-l.stach@pengutronix.de Signed-off-by: Abel Vesa abel.vesa@nxp.com Stable-dep-of: 5c1f7f109094 ("dt-bindings: clocks: imx8mp: Add ID for usb suspend clock") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/clk/imx/clk-imx8mp.c | 14 ++++++++++++++ include/dt-bindings/clock/imx8mp-clock.h | 9 +++++++-- 2 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c index 2af39c8240fa..187044b98bde 100644 --- a/drivers/clk/imx/clk-imx8mp.c +++ b/drivers/clk/imx/clk-imx8mp.c @@ -411,6 +411,11 @@ static const char * const imx8mp_sai7_sels[] = {"osc_24m", "audio_pll1_out", "au
static const char * const imx8mp_dram_core_sels[] = {"dram_pll_out", "dram_alt_root", };
+static const char * const imx8mp_clkout_sels[] = {"audio_pll1_out", "audio_pll2_out", "video_pll1_out", + "dummy", "dummy", "gpu_pll_out", "vpu_pll_out", + "arm_pll_out", "sys_pll1", "sys_pll2", "sys_pll3", + "dummy", "dummy", "osc_24m", "dummy", "osc_32k"}; + static struct clk_hw **hws; static struct clk_hw_onecell_data *clk_hw_data;
@@ -532,6 +537,15 @@ static int imx8mp_clocks_probe(struct platform_device *pdev) hws[IMX8MP_SYS_PLL2_500M] = imx_clk_hw_fixed_factor("sys_pll2_500m", "sys_pll2_500m_cg", 1, 2); hws[IMX8MP_SYS_PLL2_1000M] = imx_clk_hw_fixed_factor("sys_pll2_1000m", "sys_pll2_out", 1, 1);
+ hws[IMX8MP_CLK_CLKOUT1_SEL] = imx_clk_hw_mux2("clkout1_sel", anatop_base + 0x128, 4, 4, + imx8mp_clkout_sels, ARRAY_SIZE(imx8mp_clkout_sels)); + hws[IMX8MP_CLK_CLKOUT1_DIV] = imx_clk_hw_divider("clkout1_div", "clkout1_sel", anatop_base + 0x128, 0, 4); + hws[IMX8MP_CLK_CLKOUT1] = imx_clk_hw_gate("clkout1", "clkout1_div", anatop_base + 0x128, 8); + hws[IMX8MP_CLK_CLKOUT2_SEL] = imx_clk_hw_mux2("clkout2_sel", anatop_base + 0x128, 20, 4, + imx8mp_clkout_sels, ARRAY_SIZE(imx8mp_clkout_sels)); + hws[IMX8MP_CLK_CLKOUT2_DIV] = imx_clk_hw_divider("clkout2_div", "clkout2_sel", anatop_base + 0x128, 16, 4); + hws[IMX8MP_CLK_CLKOUT2] = imx_clk_hw_gate("clkout2", "clkout2_div", anatop_base + 0x128, 24); + hws[IMX8MP_CLK_A53_DIV] = imx8m_clk_hw_composite_core("arm_a53_div", imx8mp_a53_sels, ccm_base + 0x8000); hws[IMX8MP_CLK_A53_SRC] = hws[IMX8MP_CLK_A53_DIV]; hws[IMX8MP_CLK_A53_CG] = hws[IMX8MP_CLK_A53_DIV]; diff --git a/include/dt-bindings/clock/imx8mp-clock.h b/include/dt-bindings/clock/imx8mp-clock.h index 4a621fddfcd3..0ba16b76eb2f 100644 --- a/include/dt-bindings/clock/imx8mp-clock.h +++ b/include/dt-bindings/clock/imx8mp-clock.h @@ -321,10 +321,15 @@ #define IMX8MP_CLK_AUDIO_AXI 310 #define IMX8MP_CLK_HSIO_AXI 311 #define IMX8MP_CLK_MEDIA_ISP 312 - #define IMX8MP_CLK_MEDIA_DISP2_PIX 313 +#define IMX8MP_CLK_CLKOUT1_SEL 314 +#define IMX8MP_CLK_CLKOUT1_DIV 315 +#define IMX8MP_CLK_CLKOUT1 316 +#define IMX8MP_CLK_CLKOUT2_SEL 317 +#define IMX8MP_CLK_CLKOUT2_DIV 318 +#define IMX8MP_CLK_CLKOUT2 319
-#define IMX8MP_CLK_END 314 +#define IMX8MP_CLK_END 320
#define IMX8MP_CLK_AUDIOMIX_SAI1_IPG 0 #define IMX8MP_CLK_AUDIOMIX_SAI1_MCLK1 1
From: Li Jun jun.li@nxp.com
[ Upstream commit 5c1f7f1090947d494c30042123e0ec846f696336 ]
usb suspend clock has a gate shared with usb_root_clk.
Fixes: 9c140d9926761 ("clk: imx: Add support for i.MX8MP clock driver") Cc: stable@vger.kernel.org # v5.19+ Acked-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org Tested-by: Alexander Stein alexander.stein@ew.tq-group.com Signed-off-by: Li Jun jun.li@nxp.com Signed-off-by: Abel Vesa abel.vesa@linaro.org Link: https://lore.kernel.org/r/1664549663-20364-1-git-send-email-jun.li@nxp.com Signed-off-by: Sasha Levin sashal@kernel.org --- include/dt-bindings/clock/imx8mp-clock.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/include/dt-bindings/clock/imx8mp-clock.h b/include/dt-bindings/clock/imx8mp-clock.h index 0ba16b76eb2f..d7e513243dd2 100644 --- a/include/dt-bindings/clock/imx8mp-clock.h +++ b/include/dt-bindings/clock/imx8mp-clock.h @@ -328,8 +328,9 @@ #define IMX8MP_CLK_CLKOUT2_SEL 317 #define IMX8MP_CLK_CLKOUT2_DIV 318 #define IMX8MP_CLK_CLKOUT2 319 +#define IMX8MP_CLK_USB_SUSP 320
-#define IMX8MP_CLK_END 320 +#define IMX8MP_CLK_END 321
#define IMX8MP_CLK_AUDIOMIX_SAI1_IPG 0 #define IMX8MP_CLK_AUDIOMIX_SAI1_MCLK1 1
From: Li Jun jun.li@nxp.com
[ Upstream commit ed1f4ccfe947a3e1018a3bd7325134574c7ff9b3 ]
The 32K USB suspend clock gate is shared with usb_root_clk. The shared gate was originally defined only for the USB suspend clock, which is kept running while the system is active or while it sleeps with USB wakeup enabled, so the USB root clock was fine with that arrangement. Commit cf7f3f4fa9e5 ("clk: imx8mp: fix usb_root_clk parent") changed the gate to serve the USB root clock instead, but the root clock is turned off while USB is suspended, so the suspend clock gets gated as well and some USB functionality stops working. Define this clock as a shared clock gate so that it matches the real hardware behaviour.
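A minimal sketch of the shared-gate idea, not the actual clk framework code: both clocks share one enable count, and the single hardware gate bit is only touched on the 0<->1 transitions, so the suspend clock stays ungated as long as either user is still enabled. In the driver, imx_clk_hw_gate2_shared2() with the common share_count_usb plays this role for usb_root_clk and usb_suspend_clk.

  /* illustrative model of one gate bit shared by two clocks via a count */
  struct shared_gate {
      unsigned int share_count;   /* number of users currently enabled */
      int hw_enabled;             /* state of the single real gate bit */
  };

  static void shared_gate_enable(struct shared_gate *g)
  {
      if (g->share_count++ == 0)
          g->hw_enabled = 1;      /* only the first user ungates the hardware */
  }

  static void shared_gate_disable(struct shared_gate *g)
  {
      if (g->share_count == 0)
          return;                 /* unbalanced disable, ignore */
      if (--g->share_count == 0)
          g->hw_enabled = 0;      /* only the last user gates the hardware */
  }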
Fixes: 9c140d9926761 ("clk: imx: Add support for i.MX8MP clock driver") Cc: stable@vger.kernel.org # v5.19+ Tested-by: Alexander Stein alexander.stein@ew.tq-group.com Signed-off-by: Li Jun jun.li@nxp.com Signed-off-by: Abel Vesa abel.vesa@linaro.org Link: https://lore.kernel.org/r/1664549663-20364-2-git-send-email-jun.li@nxp.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/clk/imx/clk-imx8mp.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c index 187044b98bde..72592e35836b 100644 --- a/drivers/clk/imx/clk-imx8mp.c +++ b/drivers/clk/imx/clk-imx8mp.c @@ -17,6 +17,7 @@
static u32 share_count_nand; static u32 share_count_media; +static u32 share_count_usb;
static const char * const pll_ref_sels[] = { "osc_24m", "dummy", "dummy", "dummy", }; static const char * const audio_pll1_bypass_sels[] = {"audio_pll1", "audio_pll1_ref_sel", }; @@ -706,7 +707,8 @@ static int imx8mp_clocks_probe(struct platform_device *pdev) hws[IMX8MP_CLK_UART2_ROOT] = imx_clk_hw_gate4("uart2_root_clk", "uart2", ccm_base + 0x44a0, 0); hws[IMX8MP_CLK_UART3_ROOT] = imx_clk_hw_gate4("uart3_root_clk", "uart3", ccm_base + 0x44b0, 0); hws[IMX8MP_CLK_UART4_ROOT] = imx_clk_hw_gate4("uart4_root_clk", "uart4", ccm_base + 0x44c0, 0); - hws[IMX8MP_CLK_USB_ROOT] = imx_clk_hw_gate4("usb_root_clk", "hsio_axi", ccm_base + 0x44d0, 0); + hws[IMX8MP_CLK_USB_ROOT] = imx_clk_hw_gate2_shared2("usb_root_clk", "hsio_axi", ccm_base + 0x44d0, 0, &share_count_usb); + hws[IMX8MP_CLK_USB_SUSP] = imx_clk_hw_gate2_shared2("usb_suspend_clk", "osc_32k", ccm_base + 0x44d0, 0, &share_count_usb); hws[IMX8MP_CLK_USB_PHY_ROOT] = imx_clk_hw_gate4("usb_phy_root_clk", "usb_phy_ref", ccm_base + 0x44f0, 0); hws[IMX8MP_CLK_USDHC1_ROOT] = imx_clk_hw_gate4("usdhc1_root_clk", "usdhc1", ccm_base + 0x4510, 0); hws[IMX8MP_CLK_USDHC2_ROOT] = imx_clk_hw_gate4("usdhc2_root_clk", "usdhc2", ccm_base + 0x4520, 0);
From: Mathias Nyman mathias.nyman@linux.intel.com
[ Upstream commit ab58f3bb6aaaf98ba81d5c627ac25c08ff4ed4f1 ]
When handling transfer events, the event is passed along the handling call path and parsed again in several places.

The event contains the slot_id and endpoint index, from which the driver's endpoint structure can be found. There was, however, no way to get the endpoint index or the parent USB device back from that endpoint structure.

A lot of extra event parsing, and thus some DMA double-fetch cases, as well as excess variables and code, can be avoided by adding the endpoint index and a parent USB virt device pointer to the endpoint structure.
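As a self-contained sketch of the pattern (simplified stand-in types, not the real xhci structures): every endpoint records its own index and a back-pointer to its parent device when the device is allocated, so later handlers that already hold the endpoint never need to re-derive that context from the event TRB.

  struct virt_dev;

  struct virt_ep {
      struct virt_dev *vdev;      /* parent device (back-pointer) */
      unsigned int ep_index;      /* position inside vdev->eps[]  */
  };

  struct virt_dev {
      unsigned int flags;
      struct virt_ep eps[31];
  };

  /* done once when the device is allocated */
  static void init_endpoints(struct virt_dev *dev)
  {
      unsigned int i;

      for (i = 0; i < 31; i++) {
          dev->eps[i].ep_index = i;
          dev->eps[i].vdev = dev;
      }
  }

  /* a handler can now derive everything from the endpoint alone */
  static int endpoint_is_default_control(const struct virt_ep *ep)
  {
      return ep->ep_index == 0 && !(ep->vdev->flags & 0x1);
  }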
Signed-off-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20210129130044.206855-2-mathias.nyman@linux.intel.... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Stable-dep-of: a1575120972e ("xhci: Prevent infinite loop in transaction errors recovery for streams") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/host/xhci-mem.c | 2 ++ drivers/usb/host/xhci-ring.c | 28 ++++++++-------------------- drivers/usb/host/xhci.h | 2 ++ 3 files changed, 12 insertions(+), 20 deletions(-)
diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c index d1a42300ae58..002e4948993d 100644 --- a/drivers/usb/host/xhci-mem.c +++ b/drivers/usb/host/xhci-mem.c @@ -1021,6 +1021,8 @@ int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id,
/* Initialize the cancellation list and watchdog timers for each ep */ for (i = 0; i < 31; i++) { + dev->eps[i].ep_index = i; + dev->eps[i].vdev = dev; xhci_init_endpoint_timer(xhci, &dev->eps[i]); INIT_LIST_HEAD(&dev->eps[i].cancelled_td_list); INIT_LIST_HEAD(&dev->eps[i].bw_endpoint_list); diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c index fa3a7ac15f82..5cee7150376d 100644 --- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -1936,7 +1936,7 @@ static void xhci_cleanup_halted_endpoint(struct xhci_hcd *xhci, * Avoid resetting endpoint if link is inactive. Can cause host hang. * Device will be reset soon to recover the link so don't do anything */ - if (xhci->devs[slot_id]->flags & VDEV_PORT_ERROR) + if (ep->vdev->flags & VDEV_PORT_ERROR) return;
command = xhci_alloc_command(xhci, false, GFP_ATOMIC); @@ -2045,18 +2045,14 @@ static int finish_td(struct xhci_hcd *xhci, struct xhci_td *td, struct xhci_transfer_event *event, struct xhci_virt_ep *ep, int *status) { - struct xhci_virt_device *xdev; struct xhci_ep_ctx *ep_ctx; struct xhci_ring *ep_ring; unsigned int slot_id; u32 trb_comp_code; - int ep_index;
slot_id = TRB_TO_SLOT_ID(le32_to_cpu(event->flags)); - xdev = xhci->devs[slot_id]; - ep_index = TRB_TO_EP_ID(le32_to_cpu(event->flags)) - 1; ep_ring = xhci_dma_to_transfer_ring(ep, le64_to_cpu(event->buffer)); - ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index); + ep_ctx = xhci_get_ep_ctx(xhci, ep->vdev->out_ctx, ep->ep_index); trb_comp_code = GET_COMP_CODE(le32_to_cpu(event->transfer_len));
if (trb_comp_code == COMP_STOPPED_LENGTH_INVALID || @@ -2081,9 +2077,9 @@ static int finish_td(struct xhci_hcd *xhci, struct xhci_td *td, * stall later. Hub TT buffer should only be cleared for FS/LS * devices behind HS hubs for functional stalls. */ - if ((ep_index != 0) || (trb_comp_code != COMP_STALL_ERROR)) + if ((ep->ep_index != 0) || (trb_comp_code != COMP_STALL_ERROR)) xhci_clear_hub_tt_buffer(xhci, td, ep); - xhci_cleanup_halted_endpoint(xhci, slot_id, ep_index, + xhci_cleanup_halted_endpoint(xhci, slot_id, ep->ep_index, ep_ring->stream_id, td, EP_HARD_RESET); } else { /* Update ring dequeue pointer */ @@ -2117,19 +2113,13 @@ static int process_ctrl_td(struct xhci_hcd *xhci, struct xhci_td *td, union xhci_trb *ep_trb, struct xhci_transfer_event *event, struct xhci_virt_ep *ep, int *status) { - struct xhci_virt_device *xdev; - unsigned int slot_id; - int ep_index; struct xhci_ep_ctx *ep_ctx; u32 trb_comp_code; u32 remaining, requested; u32 trb_type;
trb_type = TRB_FIELD_TO_TYPE(le32_to_cpu(ep_trb->generic.field[3])); - slot_id = TRB_TO_SLOT_ID(le32_to_cpu(event->flags)); - xdev = xhci->devs[slot_id]; - ep_index = TRB_TO_EP_ID(le32_to_cpu(event->flags)) - 1; - ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index); + ep_ctx = xhci_get_ep_ctx(xhci, ep->vdev->out_ctx, ep->ep_index); trb_comp_code = GET_COMP_CODE(le32_to_cpu(event->transfer_len)); requested = td->urb->transfer_buffer_length; remaining = EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)); @@ -2177,7 +2167,7 @@ static int process_ctrl_td(struct xhci_hcd *xhci, struct xhci_td *td, ep_ctx, trb_comp_code)) break; xhci_dbg(xhci, "TRB error %u, halted endpoint index = %u\n", - trb_comp_code, ep_index); + trb_comp_code, ep->ep_index); fallthrough; case COMP_STALL_ERROR: /* Did we transfer part of the data (middle) phase? */ @@ -2339,11 +2329,9 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td, u32 trb_comp_code; u32 remaining, requested, ep_trb_len; unsigned int slot_id; - int ep_index;
slot_id = TRB_TO_SLOT_ID(le32_to_cpu(event->flags)); - slot_ctx = xhci_get_slot_ctx(xhci, xhci->devs[slot_id]->out_ctx); - ep_index = TRB_TO_EP_ID(le32_to_cpu(event->flags)) - 1; + slot_ctx = xhci_get_slot_ctx(xhci, ep->vdev->out_ctx); ep_ring = xhci_dma_to_transfer_ring(ep, le64_to_cpu(event->buffer)); trb_comp_code = GET_COMP_CODE(le32_to_cpu(event->transfer_len)); remaining = EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)); @@ -2382,7 +2370,7 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td, le32_to_cpu(slot_ctx->tt_info) & TT_SLOT) break; *status = 0; - xhci_cleanup_halted_endpoint(xhci, slot_id, ep_index, + xhci_cleanup_halted_endpoint(xhci, slot_id, ep->ep_index, ep_ring->stream_id, td, EP_SOFT_RESET); return 0; default: diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h index 059050f13522..5fbd159f6fa5 100644 --- a/drivers/usb/host/xhci.h +++ b/drivers/usb/host/xhci.h @@ -924,6 +924,8 @@ struct xhci_bw_info { #define SS_BW_RESERVED 10
struct xhci_virt_ep { + struct xhci_virt_device *vdev; /* parent */ + unsigned int ep_index; struct xhci_ring *ring; /* Related to endpoints that are configured to use stream IDs only */ struct xhci_stream_info *stream_info;
From: Mathias Nyman mathias.nyman@linux.intel.com
[ Upstream commit d4dff8043ea5b93a30cb9b19d4407bd506a6877a ]
Isochronous endpoints do not support streams, meaning that there is only one ring per endpoint.

Avoid double-fetching the transfer event DMA address to get the ring. This also makes passing the event to skip_isoc_td() unnecessary.
Signed-off-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20210129130044.206855-3-mathias.nyman@linux.intel.... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Stable-dep-of: a1575120972e ("xhci: Prevent infinite loop in transaction errors recovery for streams") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/host/xhci-ring.c | 17 ++++++----------- 1 file changed, 6 insertions(+), 11 deletions(-)
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c index 5cee7150376d..6d34c56376e5 100644 --- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -2209,7 +2209,6 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td, union xhci_trb *ep_trb, struct xhci_transfer_event *event, struct xhci_virt_ep *ep, int *status) { - struct xhci_ring *ep_ring; struct urb_priv *urb_priv; int idx; struct usb_iso_packet_descriptor *frame; @@ -2218,7 +2217,6 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td, u32 remaining, requested, ep_trb_len; int short_framestatus;
- ep_ring = xhci_dma_to_transfer_ring(ep, le64_to_cpu(event->buffer)); trb_comp_code = GET_COMP_CODE(le32_to_cpu(event->transfer_len)); urb_priv = td->urb->hcpriv; idx = urb_priv->num_tds_done; @@ -2279,7 +2277,7 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td, }
if (sum_trbs_for_length) - frame->actual_length = sum_trb_lengths(xhci, ep_ring, ep_trb) + + frame->actual_length = sum_trb_lengths(xhci, ep->ring, ep_trb) + ep_trb_len - remaining; else frame->actual_length = requested; @@ -2290,15 +2288,12 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td, }
static int skip_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td, - struct xhci_transfer_event *event, struct xhci_virt_ep *ep, int *status) { - struct xhci_ring *ep_ring; struct urb_priv *urb_priv; struct usb_iso_packet_descriptor *frame; int idx;
- ep_ring = xhci_dma_to_transfer_ring(ep, le64_to_cpu(event->buffer)); urb_priv = td->urb->hcpriv; idx = urb_priv->num_tds_done; frame = &td->urb->iso_frame_desc[idx]; @@ -2310,11 +2305,11 @@ static int skip_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td, frame->actual_length = 0;
/* Update ring dequeue pointer */ - while (ep_ring->dequeue != td->last_trb) - inc_deq(xhci, ep_ring); - inc_deq(xhci, ep_ring); + while (ep->ring->dequeue != td->last_trb) + inc_deq(xhci, ep->ring); + inc_deq(xhci, ep->ring);
- return xhci_td_cleanup(xhci, td, ep_ring, status); + return xhci_td_cleanup(xhci, td, ep->ring, status); }
/* @@ -2693,7 +2688,7 @@ static int handle_tx_event(struct xhci_hcd *xhci, return -ESHUTDOWN; }
- skip_isoc_td(xhci, td, event, ep, &status); + skip_isoc_td(xhci, td, ep, &status); goto cleanup; } if (trb_comp_code == COMP_SHORT_PACKET)
From: Mathias Nyman mathias.nyman@linux.intel.com
[ Upstream commit d70f4231b81eeb6dd78bd913ff42729b524eec51 ]
Instead of passing the slot id and endpoint index to cleanup_halted_endpoint(), pass the endpoint structure pointer, as it is already known.

This avoids digging out the endpoint structure again from the slot id and endpoint index, and passing those values along the call chain for that purpose only.
Add slot_id to the virt_dev structure so that it can easily be found from a virt_dev, or its child, the virt_ep endpoint structure.
Signed-off-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20210129130044.206855-4-mathias.nyman@linux.intel.... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Stable-dep-of: a1575120972e ("xhci: Prevent infinite loop in transaction errors recovery for streams") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/host/xhci-mem.c | 2 ++ drivers/usb/host/xhci-ring.c | 35 ++++++++++++++--------------------- drivers/usb/host/xhci.h | 1 + 3 files changed, 17 insertions(+), 21 deletions(-)
diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c index 002e4948993d..a8a9addb4d25 100644 --- a/drivers/usb/host/xhci-mem.c +++ b/drivers/usb/host/xhci-mem.c @@ -1003,6 +1003,8 @@ int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id, if (!dev) return 0;
+ dev->slot_id = slot_id; + /* Allocate the (output) device context that will be used in the HC. */ dev->out_ctx = xhci_alloc_container_ctx(xhci, XHCI_CTX_TYPE_DEVICE, flags); if (!dev->out_ctx) diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c index 6d34c56376e5..a0d210d3a3c6 100644 --- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -1925,13 +1925,12 @@ static void xhci_clear_hub_tt_buffer(struct xhci_hcd *xhci, struct xhci_td *td, }
static void xhci_cleanup_halted_endpoint(struct xhci_hcd *xhci, - unsigned int slot_id, unsigned int ep_index, - unsigned int stream_id, struct xhci_td *td, - enum xhci_ep_reset_type reset_type) + struct xhci_virt_ep *ep, unsigned int stream_id, + struct xhci_td *td, + enum xhci_ep_reset_type reset_type) { - struct xhci_virt_ep *ep = &xhci->devs[slot_id]->eps[ep_index]; struct xhci_command *command; - + unsigned int slot_id = ep->vdev->slot_id; /* * Avoid resetting endpoint if link is inactive. Can cause host hang. * Device will be reset soon to recover the link so don't do anything @@ -1945,11 +1944,11 @@ static void xhci_cleanup_halted_endpoint(struct xhci_hcd *xhci,
ep->ep_state |= EP_HALTED;
- xhci_queue_reset_ep(xhci, command, slot_id, ep_index, reset_type); + xhci_queue_reset_ep(xhci, command, slot_id, ep->ep_index, reset_type);
if (reset_type == EP_HARD_RESET) { ep->ep_state |= EP_HARD_CLEAR_TOGGLE; - xhci_cleanup_stalled_ring(xhci, slot_id, ep_index, stream_id, + xhci_cleanup_stalled_ring(xhci, slot_id, ep->ep_index, stream_id, td); } xhci_ring_cmd_db(xhci); @@ -2047,10 +2046,8 @@ static int finish_td(struct xhci_hcd *xhci, struct xhci_td *td, { struct xhci_ep_ctx *ep_ctx; struct xhci_ring *ep_ring; - unsigned int slot_id; u32 trb_comp_code;
- slot_id = TRB_TO_SLOT_ID(le32_to_cpu(event->flags)); ep_ring = xhci_dma_to_transfer_ring(ep, le64_to_cpu(event->buffer)); ep_ctx = xhci_get_ep_ctx(xhci, ep->vdev->out_ctx, ep->ep_index); trb_comp_code = GET_COMP_CODE(le32_to_cpu(event->transfer_len)); @@ -2079,8 +2076,8 @@ static int finish_td(struct xhci_hcd *xhci, struct xhci_td *td, */ if ((ep->ep_index != 0) || (trb_comp_code != COMP_STALL_ERROR)) xhci_clear_hub_tt_buffer(xhci, td, ep); - xhci_cleanup_halted_endpoint(xhci, slot_id, ep->ep_index, - ep_ring->stream_id, td, EP_HARD_RESET); + xhci_cleanup_halted_endpoint(xhci, ep, ep_ring->stream_id, td, + EP_HARD_RESET); } else { /* Update ring dequeue pointer */ while (ep_ring->dequeue != td->last_trb) @@ -2323,9 +2320,7 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td, struct xhci_ring *ep_ring; u32 trb_comp_code; u32 remaining, requested, ep_trb_len; - unsigned int slot_id;
- slot_id = TRB_TO_SLOT_ID(le32_to_cpu(event->flags)); slot_ctx = xhci_get_slot_ctx(xhci, ep->vdev->out_ctx); ep_ring = xhci_dma_to_transfer_ring(ep, le64_to_cpu(event->buffer)); trb_comp_code = GET_COMP_CODE(le32_to_cpu(event->transfer_len)); @@ -2365,8 +2360,8 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td, le32_to_cpu(slot_ctx->tt_info) & TT_SLOT) break; *status = 0; - xhci_cleanup_halted_endpoint(xhci, slot_id, ep->ep_index, - ep_ring->stream_id, td, EP_SOFT_RESET); + xhci_cleanup_halted_endpoint(xhci, ep, ep_ring->stream_id, td, + EP_SOFT_RESET); return 0; default: /* do nothing */ @@ -2441,8 +2436,8 @@ static int handle_tx_event(struct xhci_hcd *xhci, case COMP_USB_TRANSACTION_ERROR: case COMP_INVALID_STREAM_TYPE_ERROR: case COMP_INVALID_STREAM_ID_ERROR: - xhci_cleanup_halted_endpoint(xhci, slot_id, ep_index, 0, - NULL, EP_SOFT_RESET); + xhci_cleanup_halted_endpoint(xhci, ep, 0, NULL, + EP_SOFT_RESET); goto cleanup; case COMP_RING_UNDERRUN: case COMP_RING_OVERRUN: @@ -2625,8 +2620,7 @@ static int handle_tx_event(struct xhci_hcd *xhci, if (trb_comp_code == COMP_STALL_ERROR || xhci_requires_manual_halt_cleanup(xhci, ep_ctx, trb_comp_code)) { - xhci_cleanup_halted_endpoint(xhci, slot_id, - ep_index, + xhci_cleanup_halted_endpoint(xhci, ep, ep_ring->stream_id, NULL, EP_HARD_RESET); @@ -2720,8 +2714,7 @@ static int handle_tx_event(struct xhci_hcd *xhci, if (trb_comp_code == COMP_STALL_ERROR || xhci_requires_manual_halt_cleanup(xhci, ep_ctx, trb_comp_code)) - xhci_cleanup_halted_endpoint(xhci, slot_id, - ep_index, + xhci_cleanup_halted_endpoint(xhci, ep, ep_ring->stream_id, td, EP_HARD_RESET); goto cleanup; diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h index 5fbd159f6fa5..9cbf106fb3ee 100644 --- a/drivers/usb/host/xhci.h +++ b/drivers/usb/host/xhci.h @@ -1004,6 +1004,7 @@ struct xhci_interval_bw_table { #define EP_CTX_PER_DEV 31
struct xhci_virt_device { + int slot_id; struct usb_device *udev; /* * Commands to the hardware are passed an "input context" that
From: Mathias Nyman mathias.nyman@linux.intel.com
[ Upstream commit d8ac95001bea683d2088acb3e61613a27b8d2d5f ]
Create a separate helper function to issue reset endpoint commands to clear halted endpoints.

This is useful for cases where a halted endpoint is discovered while completing another command, and the endpoint halt needs to be cleared with an endpoint reset first.
No functional changes
Signed-off-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20210129130044.206855-16-mathias.nyman@linux.intel... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Stable-dep-of: a1575120972e ("xhci: Prevent infinite loop in transaction errors recovery for streams") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/host/xhci-ring.c | 31 +++++++++++++++++++++++++------ 1 file changed, 25 insertions(+), 6 deletions(-)
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c index a0d210d3a3c6..b4a510450ac2 100644 --- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -773,6 +773,26 @@ static void xhci_unmap_td_bounce_buffer(struct xhci_hcd *xhci, seg->bounce_offs = 0; }
+static int xhci_reset_halted_ep(struct xhci_hcd *xhci, unsigned int slot_id, + unsigned int ep_index, enum xhci_ep_reset_type reset_type) +{ + struct xhci_command *command; + int ret = 0; + + command = xhci_alloc_command(xhci, false, GFP_ATOMIC); + if (!command) { + ret = -ENOMEM; + goto done; + } + + ret = xhci_queue_reset_ep(xhci, command, slot_id, ep_index, reset_type); +done: + if (ret) + xhci_err(xhci, "ERROR queuing reset endpoint for slot %d ep_index %d, %d\n", + slot_id, ep_index, ret); + return ret; +} + /* * When we get a command completion for a Stop Endpoint Command, we need to * unlink any cancelled TDs from the ring. There are two ways to do that: @@ -1929,8 +1949,9 @@ static void xhci_cleanup_halted_endpoint(struct xhci_hcd *xhci, struct xhci_td *td, enum xhci_ep_reset_type reset_type) { - struct xhci_command *command; unsigned int slot_id = ep->vdev->slot_id; + int err; + /* * Avoid resetting endpoint if link is inactive. Can cause host hang. * Device will be reset soon to recover the link so don't do anything @@ -1938,13 +1959,11 @@ static void xhci_cleanup_halted_endpoint(struct xhci_hcd *xhci, if (ep->vdev->flags & VDEV_PORT_ERROR) return;
- command = xhci_alloc_command(xhci, false, GFP_ATOMIC); - if (!command) - return; - ep->ep_state |= EP_HALTED;
- xhci_queue_reset_ep(xhci, command, slot_id, ep->ep_index, reset_type); + err = xhci_reset_halted_ep(xhci, slot_id, ep->ep_index, reset_type); + if (err) + return;
if (reset_type == EP_HARD_RESET) { ep->ep_state |= EP_HARD_CLEAR_TOGGLE;
From: Mathias Nyman mathias.nyman@linux.intel.com
[ Upstream commit 69eaf9e79fa7c7ff4b1eb626493ce5a81e467520 ]
No functional changes
Signed-off-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20210129130044.206855-17-mathias.nyman@linux.intel... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Stable-dep-of: a1575120972e ("xhci: Prevent infinite loop in transaction errors recovery for streams") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/host/xhci-ring.c | 92 ++++++++++++++++++------------------ 1 file changed, 46 insertions(+), 46 deletions(-)
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c index b4a510450ac2..d523cbe7fcf1 100644 --- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -773,6 +773,52 @@ static void xhci_unmap_td_bounce_buffer(struct xhci_hcd *xhci, seg->bounce_offs = 0; }
+static int xhci_td_cleanup(struct xhci_hcd *xhci, struct xhci_td *td, + struct xhci_ring *ep_ring, int *status) +{ + struct urb *urb = NULL; + + /* Clean up the endpoint's TD list */ + urb = td->urb; + + /* if a bounce buffer was used to align this td then unmap it */ + xhci_unmap_td_bounce_buffer(xhci, ep_ring, td); + + /* Do one last check of the actual transfer length. + * If the host controller said we transferred more data than the buffer + * length, urb->actual_length will be a very big number (since it's + * unsigned). Play it safe and say we didn't transfer anything. + */ + if (urb->actual_length > urb->transfer_buffer_length) { + xhci_warn(xhci, "URB req %u and actual %u transfer length mismatch\n", + urb->transfer_buffer_length, urb->actual_length); + urb->actual_length = 0; + *status = 0; + } + list_del_init(&td->td_list); + /* Was this TD slated to be cancelled but completed anyway? */ + if (!list_empty(&td->cancelled_td_list)) + list_del_init(&td->cancelled_td_list); + + inc_td_cnt(urb); + /* Giveback the urb when all the tds are completed */ + if (last_td_in_urb(td)) { + if ((urb->actual_length != urb->transfer_buffer_length && + (urb->transfer_flags & URB_SHORT_NOT_OK)) || + (*status != 0 && !usb_endpoint_xfer_isoc(&urb->ep->desc))) + xhci_dbg(xhci, "Giveback URB %p, len = %d, expected = %d, status = %d\n", + urb, urb->actual_length, + urb->transfer_buffer_length, *status); + + /* set isoc urb status to 0 just as EHCI, UHCI, and OHCI */ + if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) + *status = 0; + xhci_giveback_urb_in_irq(xhci, td, *status); + } + + return 0; +} + static int xhci_reset_halted_ep(struct xhci_hcd *xhci, unsigned int slot_id, unsigned int ep_index, enum xhci_ep_reset_type reset_type) { @@ -2013,52 +2059,6 @@ int xhci_is_vendor_info_code(struct xhci_hcd *xhci, unsigned int trb_comp_code) return 0; }
-static int xhci_td_cleanup(struct xhci_hcd *xhci, struct xhci_td *td, - struct xhci_ring *ep_ring, int *status) -{ - struct urb *urb = NULL; - - /* Clean up the endpoint's TD list */ - urb = td->urb; - - /* if a bounce buffer was used to align this td then unmap it */ - xhci_unmap_td_bounce_buffer(xhci, ep_ring, td); - - /* Do one last check of the actual transfer length. - * If the host controller said we transferred more data than the buffer - * length, urb->actual_length will be a very big number (since it's - * unsigned). Play it safe and say we didn't transfer anything. - */ - if (urb->actual_length > urb->transfer_buffer_length) { - xhci_warn(xhci, "URB req %u and actual %u transfer length mismatch\n", - urb->transfer_buffer_length, urb->actual_length); - urb->actual_length = 0; - *status = 0; - } - list_del_init(&td->td_list); - /* Was this TD slated to be cancelled but completed anyway? */ - if (!list_empty(&td->cancelled_td_list)) - list_del_init(&td->cancelled_td_list); - - inc_td_cnt(urb); - /* Giveback the urb when all the tds are completed */ - if (last_td_in_urb(td)) { - if ((urb->actual_length != urb->transfer_buffer_length && - (urb->transfer_flags & URB_SHORT_NOT_OK)) || - (*status != 0 && !usb_endpoint_xfer_isoc(&urb->ep->desc))) - xhci_dbg(xhci, "Giveback URB %p, len = %d, expected = %d, status = %d\n", - urb, urb->actual_length, - urb->transfer_buffer_length, *status); - - /* set isoc urb status to 0 just as EHCI, UHCI, and OHCI */ - if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) - *status = 0; - xhci_giveback_urb_in_irq(xhci, td, *status); - } - - return 0; -} - static int finish_td(struct xhci_hcd *xhci, struct xhci_td *td, struct xhci_transfer_event *event, struct xhci_virt_ep *ep, int *status)
From: Mathias Nyman mathias.nyman@linux.intel.com
[ Upstream commit a6ccd1fd4bd4fca37eaa3d76bef940d6332919bc ]
In cases where the TD can't be given back in the current handler, we want to be able to store its status until it is time to return the TD.
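A rough, simplified sketch of the idea (toy types, not the real xhci structures): the completion status is parked on the TD when the event is handled, so a later giveback path can use it without the status having to be threaded through every function as a pointer argument.

  #include <errno.h>
  #include <stdio.h>

  struct td {
      int status;          /* result recorded while handling the event */
      int given_back;
  };

  static void finish_urb(int status)           /* stand-in for the real giveback */
  {
      printf("URB completed with status %d\n", status);
  }

  static void handle_event_for_td(struct td *td, int comp_code)
  {
      /* the TD may not be returnable right here, so park the result on it */
      td->status = comp_code ? -EPROTO : 0;
  }

  static void giveback_td(struct td *td)
  {
      if (!td->given_back) {
          td->given_back = 1;
          finish_urb(td->status);              /* stored status used later */
      }
  }

  int main(void)
  {
      struct td td = { 0, 0 };

      handle_event_for_td(&td, 4 /* e.g. a transaction error */);
      giveback_td(&td);
      return 0;
  }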
Signed-off-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20210129130044.206855-19-mathias.nyman@linux.intel... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Stable-dep-of: a1575120972e ("xhci: Prevent infinite loop in transaction errors recovery for streams") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/host/xhci-ring.c | 56 +++++++++++++++++++----------------- drivers/usb/host/xhci.h | 1 + 2 files changed, 30 insertions(+), 27 deletions(-)
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c index d523cbe7fcf1..2bd407ff9f0b 100644 --- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -774,7 +774,7 @@ static void xhci_unmap_td_bounce_buffer(struct xhci_hcd *xhci, }
static int xhci_td_cleanup(struct xhci_hcd *xhci, struct xhci_td *td, - struct xhci_ring *ep_ring, int *status) + struct xhci_ring *ep_ring, int status) { struct urb *urb = NULL;
@@ -793,7 +793,7 @@ static int xhci_td_cleanup(struct xhci_hcd *xhci, struct xhci_td *td, xhci_warn(xhci, "URB req %u and actual %u transfer length mismatch\n", urb->transfer_buffer_length, urb->actual_length); urb->actual_length = 0; - *status = 0; + status = 0; } list_del_init(&td->td_list); /* Was this TD slated to be cancelled but completed anyway? */ @@ -805,15 +805,15 @@ static int xhci_td_cleanup(struct xhci_hcd *xhci, struct xhci_td *td, if (last_td_in_urb(td)) { if ((urb->actual_length != urb->transfer_buffer_length && (urb->transfer_flags & URB_SHORT_NOT_OK)) || - (*status != 0 && !usb_endpoint_xfer_isoc(&urb->ep->desc))) + (status != 0 && !usb_endpoint_xfer_isoc(&urb->ep->desc))) xhci_dbg(xhci, "Giveback URB %p, len = %d, expected = %d, status = %d\n", urb, urb->actual_length, - urb->transfer_buffer_length, *status); + urb->transfer_buffer_length, status);
/* set isoc urb status to 0 just as EHCI, UHCI, and OHCI */ if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) - *status = 0; - xhci_giveback_urb_in_irq(xhci, td, *status); + status = 0; + xhci_giveback_urb_in_irq(xhci, td, status); }
return 0; @@ -2060,8 +2060,7 @@ int xhci_is_vendor_info_code(struct xhci_hcd *xhci, unsigned int trb_comp_code) }
static int finish_td(struct xhci_hcd *xhci, struct xhci_td *td, - struct xhci_transfer_event *event, - struct xhci_virt_ep *ep, int *status) + struct xhci_transfer_event *event, struct xhci_virt_ep *ep) { struct xhci_ep_ctx *ep_ctx; struct xhci_ring *ep_ring; @@ -2104,7 +2103,7 @@ static int finish_td(struct xhci_hcd *xhci, struct xhci_td *td, inc_deq(xhci, ep_ring); }
- return xhci_td_cleanup(xhci, td, ep_ring, status); + return xhci_td_cleanup(xhci, td, ep_ring, td->status); }
/* sum trb lengths from ring dequeue up to stop_trb, _excluding_ stop_trb */ @@ -2127,7 +2126,7 @@ static int sum_trb_lengths(struct xhci_hcd *xhci, struct xhci_ring *ring, */ static int process_ctrl_td(struct xhci_hcd *xhci, struct xhci_td *td, union xhci_trb *ep_trb, struct xhci_transfer_event *event, - struct xhci_virt_ep *ep, int *status) + struct xhci_virt_ep *ep) { struct xhci_ep_ctx *ep_ctx; u32 trb_comp_code; @@ -2145,13 +2144,13 @@ static int process_ctrl_td(struct xhci_hcd *xhci, struct xhci_td *td, if (trb_type != TRB_STATUS) { xhci_warn(xhci, "WARN: Success on ctrl %s TRB without IOC set?\n", (trb_type == TRB_DATA) ? "data" : "setup"); - *status = -ESHUTDOWN; + td->status = -ESHUTDOWN; break; } - *status = 0; + td->status = 0; break; case COMP_SHORT_PACKET: - *status = 0; + td->status = 0; break; case COMP_STOPPED_SHORT_PACKET: if (trb_type == TRB_DATA || trb_type == TRB_NORMAL) @@ -2215,7 +2214,7 @@ static int process_ctrl_td(struct xhci_hcd *xhci, struct xhci_td *td, td->urb->actual_length = requested;
finish_td: - return finish_td(xhci, td, event, ep, status); + return finish_td(xhci, td, event, ep); }
/* @@ -2223,7 +2222,7 @@ static int process_ctrl_td(struct xhci_hcd *xhci, struct xhci_td *td, */ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td, union xhci_trb *ep_trb, struct xhci_transfer_event *event, - struct xhci_virt_ep *ep, int *status) + struct xhci_virt_ep *ep) { struct urb_priv *urb_priv; int idx; @@ -2300,11 +2299,11 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
td->urb->actual_length += frame->actual_length;
- return finish_td(xhci, td, event, ep, status); + return finish_td(xhci, td, event, ep); }
static int skip_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td, - struct xhci_virt_ep *ep, int *status) + struct xhci_virt_ep *ep, int status) { struct urb_priv *urb_priv; struct usb_iso_packet_descriptor *frame; @@ -2333,7 +2332,7 @@ static int skip_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td, */ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td, union xhci_trb *ep_trb, struct xhci_transfer_event *event, - struct xhci_virt_ep *ep, int *status) + struct xhci_virt_ep *ep) { struct xhci_slot_ctx *slot_ctx; struct xhci_ring *ep_ring; @@ -2357,13 +2356,13 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td, td->urb->ep->desc.bEndpointAddress, requested, remaining); } - *status = 0; + td->status = 0; break; case COMP_SHORT_PACKET: xhci_dbg(xhci, "ep %#x - asked for %d bytes, %d bytes untransferred\n", td->urb->ep->desc.bEndpointAddress, requested, remaining); - *status = 0; + td->status = 0; break; case COMP_STOPPED_SHORT_PACKET: td->urb->actual_length = remaining; @@ -2378,7 +2377,8 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td, (ep_ring->err_count++ > MAX_SOFT_RETRY) || le32_to_cpu(slot_ctx->tt_info) & TT_SLOT) break; - *status = 0; + + td->status = 0; xhci_cleanup_halted_endpoint(xhci, ep, ep_ring->stream_id, td, EP_SOFT_RESET); return 0; @@ -2399,7 +2399,7 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td, remaining); td->urb->actual_length = 0; } - return finish_td(xhci, td, event, ep, status); + return finish_td(xhci, td, event, ep); }
/* @@ -2701,7 +2701,7 @@ static int handle_tx_event(struct xhci_hcd *xhci, return -ESHUTDOWN; }
- skip_isoc_td(xhci, td, ep, &status); + skip_isoc_td(xhci, td, ep, status); goto cleanup; } if (trb_comp_code == COMP_SHORT_PACKET) @@ -2729,6 +2729,7 @@ static int handle_tx_event(struct xhci_hcd *xhci, * endpoint. Otherwise, the endpoint remains stalled * indefinitely. */ + if (trb_is_noop(ep_trb)) { if (trb_comp_code == COMP_STALL_ERROR || xhci_requires_manual_halt_cleanup(xhci, ep_ctx, @@ -2739,14 +2740,15 @@ static int handle_tx_event(struct xhci_hcd *xhci, goto cleanup; }
+ td->status = status; + /* update the urb's actual_length and give back to the core */ if (usb_endpoint_xfer_control(&td->urb->ep->desc)) - process_ctrl_td(xhci, td, ep_trb, event, ep, &status); + process_ctrl_td(xhci, td, ep_trb, event, ep); else if (usb_endpoint_xfer_isoc(&td->urb->ep->desc)) - process_isoc_td(xhci, td, ep_trb, event, ep, &status); + process_isoc_td(xhci, td, ep_trb, event, ep); else - process_bulk_intr_td(xhci, td, ep_trb, event, ep, - &status); + process_bulk_intr_td(xhci, td, ep_trb, event, ep); cleanup: handling_skipped_tds = ep->skip && trb_comp_code != COMP_MISSED_SERVICE_ERROR && diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h index 9cbf106fb3ee..af3759928a95 100644 --- a/drivers/usb/host/xhci.h +++ b/drivers/usb/host/xhci.h @@ -1544,6 +1544,7 @@ struct xhci_segment { struct xhci_td { struct list_head td_list; struct list_head cancelled_td_list; + int status; struct urb *urb; struct xhci_segment *start_seg; union xhci_trb *first_trb;
From: Mathias Nyman mathias.nyman@linux.intel.com
[ Upstream commit 7c6c334e6fc8cd99e780fd74cd29687886a81862 ]
Halted endpoints can be discovered both when handling transfer events and when handling command completion events. Move the code that handles halted endpoints before both of those event handlers.

Rename the function to xhci_handle_halted_endpoint() to better describe what it does. Try to reserve the word "cleanup" in function names for last-stage cleanup activities.
No functional changes
Signed-off-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20210129130044.206855-21-mathias.nyman@linux.intel... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Stable-dep-of: a1575120972e ("xhci: Prevent infinite loop in transaction errors recovery for streams") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/host/xhci-ring.c | 84 ++++++++++++++++++------------------ 1 file changed, 43 insertions(+), 41 deletions(-)
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c index 2bd407ff9f0b..edb712081815 100644 --- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -839,6 +839,35 @@ static int xhci_reset_halted_ep(struct xhci_hcd *xhci, unsigned int slot_id, return ret; }
+static void xhci_handle_halted_endpoint(struct xhci_hcd *xhci, + struct xhci_virt_ep *ep, unsigned int stream_id, + struct xhci_td *td, + enum xhci_ep_reset_type reset_type) +{ + unsigned int slot_id = ep->vdev->slot_id; + int err; + + /* + * Avoid resetting endpoint if link is inactive. Can cause host hang. + * Device will be reset soon to recover the link so don't do anything + */ + if (ep->vdev->flags & VDEV_PORT_ERROR) + return; + + ep->ep_state |= EP_HALTED; + + err = xhci_reset_halted_ep(xhci, slot_id, ep->ep_index, reset_type); + if (err) + return; + + if (reset_type == EP_HARD_RESET) { + ep->ep_state |= EP_HARD_CLEAR_TOGGLE; + xhci_cleanup_stalled_ring(xhci, slot_id, ep->ep_index, stream_id, + td); + } + xhci_ring_cmd_db(xhci); +} + /* * When we get a command completion for a Stop Endpoint Command, we need to * unlink any cancelled TDs from the ring. There are two ways to do that: @@ -1990,35 +2019,6 @@ static void xhci_clear_hub_tt_buffer(struct xhci_hcd *xhci, struct xhci_td *td, } }
-static void xhci_cleanup_halted_endpoint(struct xhci_hcd *xhci, - struct xhci_virt_ep *ep, unsigned int stream_id, - struct xhci_td *td, - enum xhci_ep_reset_type reset_type) -{ - unsigned int slot_id = ep->vdev->slot_id; - int err; - - /* - * Avoid resetting endpoint if link is inactive. Can cause host hang. - * Device will be reset soon to recover the link so don't do anything - */ - if (ep->vdev->flags & VDEV_PORT_ERROR) - return; - - ep->ep_state |= EP_HALTED; - - err = xhci_reset_halted_ep(xhci, slot_id, ep->ep_index, reset_type); - if (err) - return; - - if (reset_type == EP_HARD_RESET) { - ep->ep_state |= EP_HARD_CLEAR_TOGGLE; - xhci_cleanup_stalled_ring(xhci, slot_id, ep->ep_index, stream_id, - td); - } - xhci_ring_cmd_db(xhci); -} - /* Check if an error has halted the endpoint ring. The class driver will * cleanup the halt for a non-default control endpoint if we indicate a stall. * However, a babble and other errors also halt the endpoint ring, and the class @@ -2094,7 +2094,8 @@ static int finish_td(struct xhci_hcd *xhci, struct xhci_td *td, */ if ((ep->ep_index != 0) || (trb_comp_code != COMP_STALL_ERROR)) xhci_clear_hub_tt_buffer(xhci, td, ep); - xhci_cleanup_halted_endpoint(xhci, ep, ep_ring->stream_id, td, + + xhci_handle_halted_endpoint(xhci, ep, ep_ring->stream_id, td, EP_HARD_RESET); } else { /* Update ring dequeue pointer */ @@ -2379,8 +2380,9 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td, break;
td->status = 0; - xhci_cleanup_halted_endpoint(xhci, ep, ep_ring->stream_id, td, - EP_SOFT_RESET); + + xhci_handle_halted_endpoint(xhci, ep, ep_ring->stream_id, td, + EP_SOFT_RESET); return 0; default: /* do nothing */ @@ -2455,8 +2457,8 @@ static int handle_tx_event(struct xhci_hcd *xhci, case COMP_USB_TRANSACTION_ERROR: case COMP_INVALID_STREAM_TYPE_ERROR: case COMP_INVALID_STREAM_ID_ERROR: - xhci_cleanup_halted_endpoint(xhci, ep, 0, NULL, - EP_SOFT_RESET); + xhci_handle_halted_endpoint(xhci, ep, 0, NULL, + EP_SOFT_RESET); goto cleanup; case COMP_RING_UNDERRUN: case COMP_RING_OVERRUN: @@ -2639,10 +2641,10 @@ static int handle_tx_event(struct xhci_hcd *xhci, if (trb_comp_code == COMP_STALL_ERROR || xhci_requires_manual_halt_cleanup(xhci, ep_ctx, trb_comp_code)) { - xhci_cleanup_halted_endpoint(xhci, ep, - ep_ring->stream_id, - NULL, - EP_HARD_RESET); + xhci_handle_halted_endpoint(xhci, ep, + ep_ring->stream_id, + NULL, + EP_HARD_RESET); } goto cleanup; } @@ -2734,9 +2736,9 @@ static int handle_tx_event(struct xhci_hcd *xhci, if (trb_comp_code == COMP_STALL_ERROR || xhci_requires_manual_halt_cleanup(xhci, ep_ctx, trb_comp_code)) - xhci_cleanup_halted_endpoint(xhci, ep, - ep_ring->stream_id, - td, EP_HARD_RESET); + xhci_handle_halted_endpoint(xhci, ep, + ep_ring->stream_id, + td, EP_HARD_RESET); goto cleanup; }
From: Mathias Nyman mathias.nyman@linux.intel.com
[ Upstream commit a1575120972ecd7baa6af6a69e4e7ea9213bde7c ]
Make sure to also limit the number of soft reset retries for transaction errors on streams in cases where the transaction error event doesn't point to any specific TRB.
In these cases we don't know the TRB or stream ring, but we do know which endpoint had the error.
To keep error counting simple and functional, move the current err_count from the ring structure to the endpoint structure.
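A small self-contained sketch of the retry accounting after the move (MAX_SOFT_RETRY and the types below are simplified stand-ins for the driver's): since the counter lives in the endpoint, a transaction error that cannot be tied to a particular TRB or stream ring can still be counted, and the endpoint escalates to a hard reset once the soft retries are used up.

  #include <stdio.h>

  #define MAX_SOFT_RETRY 3                 /* stand-in for the driver's limit */

  enum reset_type { EP_SOFT_RESET, EP_HARD_RESET };

  struct endpoint {
      unsigned int err_count;              /* moved here from the ring structure */
  };

  static enum reset_type on_transaction_error(struct endpoint *ep)
  {
      /* no TRB or stream ring needed: the count is per endpoint */
      if (ep->err_count++ > MAX_SOFT_RETRY)
          return EP_HARD_RESET;
      return EP_SOFT_RESET;
  }

  int main(void)
  {
      struct endpoint ep = { 0 };
      int i;

      for (i = 0; i < 6; i++)
          printf("error %d -> %s reset\n", i,
                 on_transaction_error(&ep) == EP_HARD_RESET ? "hard" : "soft");
      return 0;
  }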
Cc: stable@vger.kernel.org Signed-off-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20221130091944.2171610-6-mathias.nyman@linux.intel... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/host/xhci-ring.c | 14 ++++++++++---- drivers/usb/host/xhci.h | 2 +- 2 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c index edb712081815..ead42fc3e16d 100644 --- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -2349,7 +2349,7 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td,
switch (trb_comp_code) { case COMP_SUCCESS: - ep_ring->err_count = 0; + ep->err_count = 0; /* handle success with untransferred data as short packet */ if (ep_trb != td->last_trb || remaining) { xhci_warn(xhci, "WARN Successful completion on short TX\n"); @@ -2375,7 +2375,7 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td, break; case COMP_USB_TRANSACTION_ERROR: if (xhci->quirks & XHCI_NO_SOFT_RETRY || - (ep_ring->err_count++ > MAX_SOFT_RETRY) || + (ep->err_count++ > MAX_SOFT_RETRY) || le32_to_cpu(slot_ctx->tt_info) & TT_SLOT) break;
@@ -2457,8 +2457,14 @@ static int handle_tx_event(struct xhci_hcd *xhci, case COMP_USB_TRANSACTION_ERROR: case COMP_INVALID_STREAM_TYPE_ERROR: case COMP_INVALID_STREAM_ID_ERROR: - xhci_handle_halted_endpoint(xhci, ep, 0, NULL, - EP_SOFT_RESET); + xhci_dbg(xhci, "Stream transaction error ep %u no id\n", + ep_index); + if (ep->err_count++ > MAX_SOFT_RETRY) + xhci_handle_halted_endpoint(xhci, ep, 0, NULL, + EP_HARD_RESET); + else + xhci_handle_halted_endpoint(xhci, ep, 0, NULL, + EP_SOFT_RESET); goto cleanup; case COMP_RING_UNDERRUN: case COMP_RING_OVERRUN: diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h index af3759928a95..ac09b171b783 100644 --- a/drivers/usb/host/xhci.h +++ b/drivers/usb/host/xhci.h @@ -933,6 +933,7 @@ struct xhci_virt_ep { * have to restore the device state to the previous state */ struct xhci_ring *new_ring; + unsigned int err_count; unsigned int ep_state; #define SET_DEQ_PENDING (1 << 0) #define EP_HALTED (1 << 1) /* For stall handling */ @@ -1616,7 +1617,6 @@ struct xhci_ring { * if we own the TRB (if we are the consumer). See section 4.9.1. */ u32 cycle_state; - unsigned int err_count; unsigned int stream_id; unsigned int num_segs; unsigned int num_trbs_free;
From: Ferry Toth ftoth@exalondelft.nl
[ Upstream commit 8a7b31d545d3a15f0e6f5984ae16f0ca4fd76aac ]
Since commit 0f0101719138 ("usb: dwc3: Don't switch OTG -> peripheral if extcon is present"), Dual Role support on the Intel Merrifield platform has been broken due to the rearranged call to dwc3_get_extcon().

The breakage appears to be caused by the first test write in ulpi_read_id() failing with -ETIMEDOUT. Currently, ulpi_read_id() expects to discover the phy via DT when the test write fails and returns 0 in that case, even if DT does not provide the phy. As a result, the USB probe completes without a phy.

Make ulpi_read_id() return -ETIMEDOUT to its caller if the first test write fails. The caller should then handle it appropriately. A follow-up patch will make dwc3_core_init() set -EPROBE_DEFER in this case and bail out.
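The dwc3 side is not part of this patch; purely as an illustration of the caller-side handling the follow-up is expected to add in dwc3_core_init() (hypothetical sketch, not the actual dwc3 change):

  ret = dwc3_core_ulpi_init(dwc);
  if (ret == -ETIMEDOUT) {
      /* ULPI phy interface not responding yet: retry the whole probe later */
      ret = -EPROBE_DEFER;
  }
  if (ret)
      return ret;  /* the real code would unwind earlier init steps first */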
Fixes: ef6a7bcfb01c ("usb: ulpi: Support device discovery via DT") Cc: stable@vger.kernel.org Acked-by: Heikki Krogerus heikki.krogerus@linux.intel.com Signed-off-by: Ferry Toth ftoth@exalondelft.nl Link: https://lore.kernel.org/r/20221205201527.13525-2-ftoth@exalondelft.nl Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/common/ulpi.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/usb/common/ulpi.c b/drivers/usb/common/ulpi.c index 3c705f1bead8..1f077f2ea40f 100644 --- a/drivers/usb/common/ulpi.c +++ b/drivers/usb/common/ulpi.c @@ -208,7 +208,7 @@ static int ulpi_read_id(struct ulpi *ulpi) /* Test the interface */ ret = ulpi_write(ulpi, ULPI_SCRATCH, 0xaa); if (ret < 0) - goto err; + return ret;
ret = ulpi_read(ulpi, ULPI_SCRATCH); if (ret < 0)
From: Ye Bin yebin10@huawei.com
[ Upstream commit 7ea71af94eaaaf6d9aed24bc94a05b977a741cb9 ]
Syzbot found the following issue:
=====================================================
BUG: KMSAN: uninit-value in ext4_evict_inode+0xdd/0x26b0 fs/ext4/inode.c:180
 ext4_evict_inode+0xdd/0x26b0 fs/ext4/inode.c:180
 evict+0x365/0x9a0 fs/inode.c:664
 iput_final fs/inode.c:1747 [inline]
 iput+0x985/0xdd0 fs/inode.c:1773
 __ext4_new_inode+0xe54/0x7ec0 fs/ext4/ialloc.c:1361
 ext4_mknod+0x376/0x840 fs/ext4/namei.c:2844
 vfs_mknod+0x79d/0x830 fs/namei.c:3914
 do_mknodat+0x47d/0xaa0
 __do_sys_mknodat fs/namei.c:3992 [inline]
 __se_sys_mknodat fs/namei.c:3989 [inline]
 __ia32_sys_mknodat+0xeb/0x150 fs/namei.c:3989
 do_syscall_32_irqs_on arch/x86/entry/common.c:112 [inline]
 __do_fast_syscall_32+0xa2/0x100 arch/x86/entry/common.c:178
 do_fast_syscall_32+0x33/0x70 arch/x86/entry/common.c:203
 do_SYSENTER_32+0x1b/0x20 arch/x86/entry/common.c:246
 entry_SYSENTER_compat_after_hwframe+0x70/0x82

Uninit was created at:
 __alloc_pages+0x9f1/0xe80 mm/page_alloc.c:5578
 alloc_pages+0xaae/0xd80 mm/mempolicy.c:2285
 alloc_slab_page mm/slub.c:1794 [inline]
 allocate_slab+0x1b5/0x1010 mm/slub.c:1939
 new_slab mm/slub.c:1992 [inline]
 ___slab_alloc+0x10c3/0x2d60 mm/slub.c:3180
 __slab_alloc mm/slub.c:3279 [inline]
 slab_alloc_node mm/slub.c:3364 [inline]
 slab_alloc mm/slub.c:3406 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3413 [inline]
 kmem_cache_alloc_lru+0x6f3/0xb30 mm/slub.c:3429
 alloc_inode_sb include/linux/fs.h:3117 [inline]
 ext4_alloc_inode+0x5f/0x860 fs/ext4/super.c:1321
 alloc_inode+0x83/0x440 fs/inode.c:259
 new_inode_pseudo fs/inode.c:1018 [inline]
 new_inode+0x3b/0x430 fs/inode.c:1046
 __ext4_new_inode+0x2a7/0x7ec0 fs/ext4/ialloc.c:959
 ext4_mkdir+0x4d5/0x1560 fs/ext4/namei.c:2992
 vfs_mkdir+0x62a/0x870 fs/namei.c:4035
 do_mkdirat+0x466/0x7b0 fs/namei.c:4060
 __do_sys_mkdirat fs/namei.c:4075 [inline]
 __se_sys_mkdirat fs/namei.c:4073 [inline]
 __ia32_sys_mkdirat+0xc4/0x120 fs/namei.c:4073
 do_syscall_32_irqs_on arch/x86/entry/common.c:112 [inline]
 __do_fast_syscall_32+0xa2/0x100 arch/x86/entry/common.c:178
 do_fast_syscall_32+0x33/0x70 arch/x86/entry/common.c:203
 do_SYSENTER_32+0x1b/0x20 arch/x86/entry/common.c:246
 entry_SYSENTER_compat_after_hwframe+0x70/0x82

CPU: 1 PID: 4625 Comm: syz-executor.2 Not tainted 6.1.0-rc4-syzkaller-62821-gcb231e2f67ec #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
=====================================================
Currently, 'ext4_alloc_inode()' does not initialize 'ei->i_flags'. If creating a new inode fails in '__ext4_new_inode()' before 'ei->i_flags' is set, iput() is called on the half-initialized inode. Since commit 6bc0d63dad7f, 'ext4_evict_inode()' accesses 'ei->i_flags', which then leads to a read of an uninitialized value. To solve this, initialize 'ei->i_flags' in 'ext4_alloc_inode()'.
Reported-by: syzbot+57b25da729eb0b88177d@syzkaller.appspotmail.com Signed-off-by: Ye Bin yebin10@huawei.com Fixes: 6bc0d63dad7f ("ext4: remove EA inode entry from mbcache on inode eviction") Reviewed-by: Jan Kara jack@suse.cz Reviewed-by: Eric Biggers ebiggers@google.com Link: https://lore.kernel.org/r/20221117073603.2598882-1-yebin@huaweicloud.com Signed-off-by: Theodore Ts'o tytso@mit.edu Cc: stable@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ext4/super.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/fs/ext4/super.c b/fs/ext4/super.c index 935589579b8f..e940fb07ef2e 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -1279,6 +1279,7 @@ static struct inode *ext4_alloc_inode(struct super_block *sb) return NULL;
inode_set_iversion(&ei->vfs_inode, 1); + ei->i_flags = 0; spin_lock_init(&ei->i_raw_lock); INIT_LIST_HEAD(&ei->i_prealloc_list); atomic_set(&ei->i_prealloc_active, 0);
From: Nicolas Dichtel nicolas.dichtel@6wind.com
commit 93ec1320b0170d7a207eda2d119c669b673401ed upstream.
As stated in the comment above xfrm_nlmsg_multicast(), rcu read lock must be held before calling this function.
Reported-by: syzbot+3d9866419b4aa8f985d6@syzkaller.appspotmail.com Fixes: 703b94b93c19 ("xfrm: notify default policy on update") Signed-off-by: Nicolas Dichtel nicolas.dichtel@6wind.com Signed-off-by: Steffen Klassert steffen.klassert@secunet.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/xfrm/xfrm_user.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
--- a/net/xfrm/xfrm_user.c +++ b/net/xfrm/xfrm_user.c @@ -1920,6 +1920,7 @@ static int xfrm_notify_userpolicy(struct int len = NLMSG_ALIGN(sizeof(*up)); struct nlmsghdr *nlh; struct sk_buff *skb; + int err;
skb = nlmsg_new(len, GFP_ATOMIC); if (skb == NULL) @@ -1938,7 +1939,11 @@ static int xfrm_notify_userpolicy(struct
nlmsg_end(skb, nlh);
- return xfrm_nlmsg_multicast(net, skb, 0, XFRMNLGRP_POLICY); + rcu_read_lock(); + err = xfrm_nlmsg_multicast(net, skb, 0, XFRMNLGRP_POLICY); + rcu_read_unlock(); + + return err; }
static bool xfrm_userpolicy_is_valid(__u8 policy)
From: Gavrilov Ilia Ilia.Gavrilov@infotecs.ru
commit 9ea4b476cea1b7d461d16dda25ca3c7e616e2d15 upstream.
When first_ip is 0, last_ip is 0xFFFFFFFF, and netmask is 31, the value of the arithmetic expression 2 << (netmask - mask_bits - 1) overflows because the operands are not cast to a larger data type before the arithmetic is performed.
Note that it's harmless since the value will be checked at the next step.
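For illustration, the standalone user-space snippet below (not kernel code) reproduces the arithmetic with the values from the commit message: netmask 31 and mask_bits 0 give a shift count of 30, so "2 << 30" is evaluated in plain int and overflows, while the widened "2UL << 30" yields the intended 2^31 (2147483648).

#include <stdio.h>

int main(void)
{
	unsigned int netmask = 31, mask_bits = 0;

	/* int arithmetic: 2 << 30 exceeds INT_MAX (undefined behaviour,
	 * typically wraps to a negative value) */
	long long bad = 2 << (netmask - mask_bits - 1);

	/* widened before the shift, as in the fix: yields 2147483648 */
	long long good = 2UL << (netmask - mask_bits - 1);

	printf("bad=%lld good=%lld\n", bad, good);
	return 0;
}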
Found by InfoTeCS on behalf of Linux Verification Center (linuxtesting.org) with SVACE.
Fixes: b9fed748185a ("netfilter: ipset: Check and reject crazy /0 input parameters") Signed-off-by: Ilia.Gavrilov Ilia.Gavrilov@infotecs.ru Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/netfilter/ipset/ip_set_bitmap_ip.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/net/netfilter/ipset/ip_set_bitmap_ip.c +++ b/net/netfilter/ipset/ip_set_bitmap_ip.c @@ -308,8 +308,8 @@ bitmap_ip_create(struct net *net, struct return -IPSET_ERR_BITMAP_RANGE;
pr_debug("mask_bits %u, netmask %u\n", mask_bits, netmask); - hosts = 2 << (32 - netmask - 1); - elements = 2 << (netmask - mask_bits - 1); + hosts = 2U << (32 - netmask - 1); + elements = 2UL << (netmask - mask_bits - 1); } if (elements > IPSET_BITMAP_MAX_RANGE + 1) return -IPSET_ERR_BITMAP_RANGE_SIZE;
From: Kajol Jain kjain@linux.ibm.com
commit 76d588dddc459fefa1da96e0a081a397c5c8e216 upstream.
Current imc-pmu code triggers a WARNING with CONFIG_DEBUG_ATOMIC_SLEEP and CONFIG_PROVE_LOCKING enabled, while running a thread_imc event.
Command to trigger the warning: # perf stat -e thread_imc/CPM_CS_FROM_L4_MEM_X_DPTEG/ sleep 5
Performance counter stats for 'sleep 5':
0 thread_imc/CPM_CS_FROM_L4_MEM_X_DPTEG/
5.002117947 seconds time elapsed
0.000131000 seconds user 0.001063000 seconds sys
Below is snippet of the warning in dmesg:
BUG: sleeping function called from invalid context at kernel/locking/mutex.c:580
in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 2869, name: perf-exec
preempt_count: 2, expected: 0
4 locks held by perf-exec/2869:
 #0: c00000004325c540 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: bprm_execve+0x64/0xa90
 #1: c00000004325c5d8 (&sig->exec_update_lock){++++}-{3:3}, at: begin_new_exec+0x460/0xef0
 #2: c0000003fa99d4e0 (&cpuctx_lock){-...}-{2:2}, at: perf_event_exec+0x290/0x510
 #3: c000000017ab8418 (&ctx->lock){....}-{2:2}, at: perf_event_exec+0x29c/0x510
irq event stamp: 4806
hardirqs last enabled at (4805): [<c000000000f65b94>] _raw_spin_unlock_irqrestore+0x94/0xd0
hardirqs last disabled at (4806): [<c0000000003fae44>] perf_event_exec+0x394/0x510
softirqs last enabled at (0): [<c00000000013c404>] copy_process+0xc34/0x1ff0
softirqs last disabled at (0): [<0000000000000000>] 0x0
CPU: 36 PID: 2869 Comm: perf-exec Not tainted 6.2.0-rc2-00011-g1247637727f2 #61
Hardware name: 8375-42A POWER9 0x4e1202 opal:v7.0-16-g9b85f7d961 PowerNV
Call Trace:
 dump_stack_lvl+0x98/0xe0 (unreliable)
 __might_resched+0x2f8/0x310
 __mutex_lock+0x6c/0x13f0
 thread_imc_event_add+0xf4/0x1b0
 event_sched_in+0xe0/0x210
 merge_sched_in+0x1f0/0x600
 visit_groups_merge.isra.92.constprop.166+0x2bc/0x6c0
 ctx_flexible_sched_in+0xcc/0x140
 ctx_sched_in+0x20c/0x2a0
 ctx_resched+0x104/0x1c0
 perf_event_exec+0x340/0x510
 begin_new_exec+0x730/0xef0
 load_elf_binary+0x3f8/0x1e10
 ...
do not call blocking ops when !TASK_RUNNING; state=2001 set at [<00000000fd63e7cf>] do_nanosleep+0x60/0x1a0
WARNING: CPU: 36 PID: 2869 at kernel/sched/core.c:9912 __might_sleep+0x9c/0xb0
CPU: 36 PID: 2869 Comm: sleep Tainted: G W 6.2.0-rc2-00011-g1247637727f2 #61
Hardware name: 8375-42A POWER9 0x4e1202 opal:v7.0-16-g9b85f7d961 PowerNV
NIP: c000000000194a1c LR: c000000000194a18 CTR: c000000000a78670
REGS: c00000004d2134e0 TRAP: 0700 Tainted: G W (6.2.0-rc2-00011-g1247637727f2)
MSR: 9000000000021033 <SF,HV,ME,IR,DR,RI,LE> CR: 48002824 XER: 00000000
CFAR: c00000000013fb64 IRQMASK: 1
The above warning is triggered because the current imc-pmu code takes a mutex in sections where interrupts are disabled. mutex_lock() internally calls __might_resched(), which checks whether IRQs are disabled and, if they are, triggers the warning.
Fix the issue by changing the mutex lock to spinlock.
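A minimal sketch of the locking rule involved (illustration only; the real change is the diff below): perf invokes the PMU ->add()/->del() callbacks with interrupts disabled, so only a non-sleeping lock such as a spinlock may be taken there. struct example_ref and example_event_add() are made-up names standing in for the driver's reference-count structure.

#include <linux/spinlock.h>

/* Illustrative stand-in for struct imc_pmu_ref, not the real structure */
struct example_ref {
	spinlock_t lock;	/* was: struct mutex lock */
	int refc;
};

static void example_event_add(struct example_ref *ref)
{
	/* spin_lock() never sleeps, so it is safe with IRQs disabled;
	 * mutex_lock() may sleep and trips CONFIG_DEBUG_ATOMIC_SLEEP here. */
	spin_lock(&ref->lock);
	ref->refc++;
	spin_unlock(&ref->lock);
}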
Fixes: 8f95faaac56c ("powerpc/powernv: Detect and create IMC device") Reported-by: Michael Petlan mpetlan@redhat.com Reported-by: Peter Zijlstra peterz@infradead.org Signed-off-by: Kajol Jain kjain@linux.ibm.com [mpe: Fix comments, trim oops in change log, add reported-by tags] Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20230106065157.182648-1-kjain@linux.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/powerpc/include/asm/imc-pmu.h | 2 arch/powerpc/perf/imc-pmu.c | 136 +++++++++++++++++-------------------- 2 files changed, 67 insertions(+), 71 deletions(-)
--- a/arch/powerpc/include/asm/imc-pmu.h +++ b/arch/powerpc/include/asm/imc-pmu.h @@ -137,7 +137,7 @@ struct imc_pmu { * are inited. */ struct imc_pmu_ref { - struct mutex lock; + spinlock_t lock; unsigned int id; int refc; }; --- a/arch/powerpc/perf/imc-pmu.c +++ b/arch/powerpc/perf/imc-pmu.c @@ -13,6 +13,7 @@ #include <asm/cputhreads.h> #include <asm/smp.h> #include <linux/string.h> +#include <linux/spinlock.h>
/* Nest IMC data structures and variables */
@@ -20,7 +21,7 @@ * Used to avoid races in counting the nest-pmu units during hotplug * register and unregister */ -static DEFINE_MUTEX(nest_init_lock); +static DEFINE_SPINLOCK(nest_init_lock); static DEFINE_PER_CPU(struct imc_pmu_ref *, local_nest_imc_refc); static struct imc_pmu **per_nest_pmu_arr; static cpumask_t nest_imc_cpumask; @@ -49,7 +50,7 @@ static int trace_imc_mem_size; * core and trace-imc */ static struct imc_pmu_ref imc_global_refc = { - .lock = __MUTEX_INITIALIZER(imc_global_refc.lock), + .lock = __SPIN_LOCK_INITIALIZER(imc_global_refc.lock), .id = 0, .refc = 0, }; @@ -393,7 +394,7 @@ static int ppc_nest_imc_cpu_offline(unsi get_hard_smp_processor_id(cpu)); /* * If this is the last cpu in this chip then, skip the reference - * count mutex lock and make the reference count on this chip zero. + * count lock and make the reference count on this chip zero. */ ref = get_nest_pmu_ref(cpu); if (!ref) @@ -455,15 +456,15 @@ static void nest_imc_counters_release(st /* * See if we need to disable the nest PMU. * If no events are currently in use, then we have to take a - * mutex to ensure that we don't race with another task doing + * lock to ensure that we don't race with another task doing * enable or disable the nest counters. */ ref = get_nest_pmu_ref(event->cpu); if (!ref) return;
- /* Take the mutex lock for this node and then decrement the reference count */ - mutex_lock(&ref->lock); + /* Take the lock for this node and then decrement the reference count */ + spin_lock(&ref->lock); if (ref->refc == 0) { /* * The scenario where this is true is, when perf session is @@ -475,7 +476,7 @@ static void nest_imc_counters_release(st * an OPAL call to disable the engine in that node. * */ - mutex_unlock(&ref->lock); + spin_unlock(&ref->lock); return; } ref->refc--; @@ -483,7 +484,7 @@ static void nest_imc_counters_release(st rc = opal_imc_counters_stop(OPAL_IMC_COUNTERS_NEST, get_hard_smp_processor_id(event->cpu)); if (rc) { - mutex_unlock(&ref->lock); + spin_unlock(&ref->lock); pr_err("nest-imc: Unable to stop the counters for core %d\n", node_id); return; } @@ -491,7 +492,7 @@ static void nest_imc_counters_release(st WARN(1, "nest-imc: Invalid event reference count\n"); ref->refc = 0; } - mutex_unlock(&ref->lock); + spin_unlock(&ref->lock); }
static int nest_imc_event_init(struct perf_event *event) @@ -550,26 +551,25 @@ static int nest_imc_event_init(struct pe
/* * Get the imc_pmu_ref struct for this node. - * Take the mutex lock and then increment the count of nest pmu events - * inited. + * Take the lock and then increment the count of nest pmu events inited. */ ref = get_nest_pmu_ref(event->cpu); if (!ref) return -EINVAL;
- mutex_lock(&ref->lock); + spin_lock(&ref->lock); if (ref->refc == 0) { rc = opal_imc_counters_start(OPAL_IMC_COUNTERS_NEST, get_hard_smp_processor_id(event->cpu)); if (rc) { - mutex_unlock(&ref->lock); + spin_unlock(&ref->lock); pr_err("nest-imc: Unable to start the counters for node %d\n", node_id); return rc; } } ++ref->refc; - mutex_unlock(&ref->lock); + spin_unlock(&ref->lock);
event->destroy = nest_imc_counters_release; return 0; @@ -605,9 +605,8 @@ static int core_imc_mem_init(int cpu, in return -ENOMEM; mem_info->vbase = page_address(page);
- /* Init the mutex */ core_imc_refc[core_id].id = core_id; - mutex_init(&core_imc_refc[core_id].lock); + spin_lock_init(&core_imc_refc[core_id].lock);
rc = opal_imc_counters_init(OPAL_IMC_COUNTERS_CORE, __pa((void *)mem_info->vbase), @@ -696,9 +695,8 @@ static int ppc_core_imc_cpu_offline(unsi perf_pmu_migrate_context(&core_imc_pmu->pmu, cpu, ncpu); } else { /* - * If this is the last cpu in this core then, skip taking refernce - * count mutex lock for this core and directly zero "refc" for - * this core. + * If this is the last cpu in this core then skip taking reference + * count lock for this core and directly zero "refc" for this core. */ opal_imc_counters_stop(OPAL_IMC_COUNTERS_CORE, get_hard_smp_processor_id(cpu)); @@ -713,11 +711,11 @@ static int ppc_core_imc_cpu_offline(unsi * last cpu in this core and core-imc event running * in this cpu. */ - mutex_lock(&imc_global_refc.lock); + spin_lock(&imc_global_refc.lock); if (imc_global_refc.id == IMC_DOMAIN_CORE) imc_global_refc.refc--;
- mutex_unlock(&imc_global_refc.lock); + spin_unlock(&imc_global_refc.lock); } return 0; } @@ -732,7 +730,7 @@ static int core_imc_pmu_cpumask_init(voi
static void reset_global_refc(struct perf_event *event) { - mutex_lock(&imc_global_refc.lock); + spin_lock(&imc_global_refc.lock); imc_global_refc.refc--;
/* @@ -744,7 +742,7 @@ static void reset_global_refc(struct per imc_global_refc.refc = 0; imc_global_refc.id = 0; } - mutex_unlock(&imc_global_refc.lock); + spin_unlock(&imc_global_refc.lock); }
static void core_imc_counters_release(struct perf_event *event) @@ -757,17 +755,17 @@ static void core_imc_counters_release(st /* * See if we need to disable the IMC PMU. * If no events are currently in use, then we have to take a - * mutex to ensure that we don't race with another task doing + * lock to ensure that we don't race with another task doing * enable or disable the core counters. */ core_id = event->cpu / threads_per_core;
- /* Take the mutex lock and decrement the refernce count for this core */ + /* Take the lock and decrement the refernce count for this core */ ref = &core_imc_refc[core_id]; if (!ref) return;
- mutex_lock(&ref->lock); + spin_lock(&ref->lock); if (ref->refc == 0) { /* * The scenario where this is true is, when perf session is @@ -779,7 +777,7 @@ static void core_imc_counters_release(st * an OPAL call to disable the engine in that core. * */ - mutex_unlock(&ref->lock); + spin_unlock(&ref->lock); return; } ref->refc--; @@ -787,7 +785,7 @@ static void core_imc_counters_release(st rc = opal_imc_counters_stop(OPAL_IMC_COUNTERS_CORE, get_hard_smp_processor_id(event->cpu)); if (rc) { - mutex_unlock(&ref->lock); + spin_unlock(&ref->lock); pr_err("IMC: Unable to stop the counters for core %d\n", core_id); return; } @@ -795,7 +793,7 @@ static void core_imc_counters_release(st WARN(1, "core-imc: Invalid event reference count\n"); ref->refc = 0; } - mutex_unlock(&ref->lock); + spin_unlock(&ref->lock);
reset_global_refc(event); } @@ -833,7 +831,6 @@ static int core_imc_event_init(struct pe if ((!pcmi->vbase)) return -ENODEV;
- /* Get the core_imc mutex for this core */ ref = &core_imc_refc[core_id]; if (!ref) return -EINVAL; @@ -841,22 +838,22 @@ static int core_imc_event_init(struct pe /* * Core pmu units are enabled only when it is used. * See if this is triggered for the first time. - * If yes, take the mutex lock and enable the core counters. + * If yes, take the lock and enable the core counters. * If not, just increment the count in core_imc_refc struct. */ - mutex_lock(&ref->lock); + spin_lock(&ref->lock); if (ref->refc == 0) { rc = opal_imc_counters_start(OPAL_IMC_COUNTERS_CORE, get_hard_smp_processor_id(event->cpu)); if (rc) { - mutex_unlock(&ref->lock); + spin_unlock(&ref->lock); pr_err("core-imc: Unable to start the counters for core %d\n", core_id); return rc; } } ++ref->refc; - mutex_unlock(&ref->lock); + spin_unlock(&ref->lock);
/* * Since the system can run either in accumulation or trace-mode @@ -867,7 +864,7 @@ static int core_imc_event_init(struct pe * to know whether any other trace/thread imc * events are running. */ - mutex_lock(&imc_global_refc.lock); + spin_lock(&imc_global_refc.lock); if (imc_global_refc.id == 0 || imc_global_refc.id == IMC_DOMAIN_CORE) { /* * No other trace/thread imc events are running in @@ -876,10 +873,10 @@ static int core_imc_event_init(struct pe imc_global_refc.id = IMC_DOMAIN_CORE; imc_global_refc.refc++; } else { - mutex_unlock(&imc_global_refc.lock); + spin_unlock(&imc_global_refc.lock); return -EBUSY; } - mutex_unlock(&imc_global_refc.lock); + spin_unlock(&imc_global_refc.lock);
event->hw.event_base = (u64)pcmi->vbase + (config & IMC_EVENT_OFFSET_MASK); event->destroy = core_imc_counters_release; @@ -951,10 +948,10 @@ static int ppc_thread_imc_cpu_offline(un mtspr(SPRN_LDBAR, (mfspr(SPRN_LDBAR) & (~(1UL << 63))));
/* Reduce the refc if thread-imc event running on this cpu */ - mutex_lock(&imc_global_refc.lock); + spin_lock(&imc_global_refc.lock); if (imc_global_refc.id == IMC_DOMAIN_THREAD) imc_global_refc.refc--; - mutex_unlock(&imc_global_refc.lock); + spin_unlock(&imc_global_refc.lock);
return 0; } @@ -994,7 +991,7 @@ static int thread_imc_event_init(struct if (!target) return -EINVAL;
- mutex_lock(&imc_global_refc.lock); + spin_lock(&imc_global_refc.lock); /* * Check if any other trace/core imc events are running in the * system, if not set the global id to thread-imc. @@ -1003,10 +1000,10 @@ static int thread_imc_event_init(struct imc_global_refc.id = IMC_DOMAIN_THREAD; imc_global_refc.refc++; } else { - mutex_unlock(&imc_global_refc.lock); + spin_unlock(&imc_global_refc.lock); return -EBUSY; } - mutex_unlock(&imc_global_refc.lock); + spin_unlock(&imc_global_refc.lock);
event->pmu->task_ctx_nr = perf_sw_context; event->destroy = reset_global_refc; @@ -1128,25 +1125,25 @@ static int thread_imc_event_add(struct p /* * imc pmus are enabled only when it is used. * See if this is triggered for the first time. - * If yes, take the mutex lock and enable the counters. + * If yes, take the lock and enable the counters. * If not, just increment the count in ref count struct. */ ref = &core_imc_refc[core_id]; if (!ref) return -EINVAL;
- mutex_lock(&ref->lock); + spin_lock(&ref->lock); if (ref->refc == 0) { if (opal_imc_counters_start(OPAL_IMC_COUNTERS_CORE, get_hard_smp_processor_id(smp_processor_id()))) { - mutex_unlock(&ref->lock); + spin_unlock(&ref->lock); pr_err("thread-imc: Unable to start the counter\ for core %d\n", core_id); return -EINVAL; } } ++ref->refc; - mutex_unlock(&ref->lock); + spin_unlock(&ref->lock); return 0; }
@@ -1163,12 +1160,12 @@ static void thread_imc_event_del(struct return; }
- mutex_lock(&ref->lock); + spin_lock(&ref->lock); ref->refc--; if (ref->refc == 0) { if (opal_imc_counters_stop(OPAL_IMC_COUNTERS_CORE, get_hard_smp_processor_id(smp_processor_id()))) { - mutex_unlock(&ref->lock); + spin_unlock(&ref->lock); pr_err("thread-imc: Unable to stop the counters\ for core %d\n", core_id); return; @@ -1176,7 +1173,7 @@ static void thread_imc_event_del(struct } else if (ref->refc < 0) { ref->refc = 0; } - mutex_unlock(&ref->lock); + spin_unlock(&ref->lock);
/* Set bit 0 of LDBAR to zero, to stop posting updates to memory */ mtspr(SPRN_LDBAR, (mfspr(SPRN_LDBAR) & (~(1UL << 63)))); @@ -1217,9 +1214,8 @@ static int trace_imc_mem_alloc(int cpu_i } }
- /* Init the mutex, if not already */ trace_imc_refc[core_id].id = core_id; - mutex_init(&trace_imc_refc[core_id].lock); + spin_lock_init(&trace_imc_refc[core_id].lock);
mtspr(SPRN_LDBAR, 0); return 0; @@ -1239,10 +1235,10 @@ static int ppc_trace_imc_cpu_offline(uns * Reduce the refc if any trace-imc event running * on this cpu. */ - mutex_lock(&imc_global_refc.lock); + spin_lock(&imc_global_refc.lock); if (imc_global_refc.id == IMC_DOMAIN_TRACE) imc_global_refc.refc--; - mutex_unlock(&imc_global_refc.lock); + spin_unlock(&imc_global_refc.lock);
return 0; } @@ -1364,17 +1360,17 @@ static int trace_imc_event_add(struct pe }
mtspr(SPRN_LDBAR, ldbar_value); - mutex_lock(&ref->lock); + spin_lock(&ref->lock); if (ref->refc == 0) { if (opal_imc_counters_start(OPAL_IMC_COUNTERS_TRACE, get_hard_smp_processor_id(smp_processor_id()))) { - mutex_unlock(&ref->lock); + spin_unlock(&ref->lock); pr_err("trace-imc: Unable to start the counters for core %d\n", core_id); return -EINVAL; } } ++ref->refc; - mutex_unlock(&ref->lock); + spin_unlock(&ref->lock); return 0; }
@@ -1407,19 +1403,19 @@ static void trace_imc_event_del(struct p return; }
- mutex_lock(&ref->lock); + spin_lock(&ref->lock); ref->refc--; if (ref->refc == 0) { if (opal_imc_counters_stop(OPAL_IMC_COUNTERS_TRACE, get_hard_smp_processor_id(smp_processor_id()))) { - mutex_unlock(&ref->lock); + spin_unlock(&ref->lock); pr_err("trace-imc: Unable to stop the counters for core %d\n", core_id); return; } } else if (ref->refc < 0) { ref->refc = 0; } - mutex_unlock(&ref->lock); + spin_unlock(&ref->lock);
trace_imc_event_stop(event, flags); } @@ -1441,7 +1437,7 @@ static int trace_imc_event_init(struct p * no other thread is running any core/thread imc * events */ - mutex_lock(&imc_global_refc.lock); + spin_lock(&imc_global_refc.lock); if (imc_global_refc.id == 0 || imc_global_refc.id == IMC_DOMAIN_TRACE) { /* * No core/thread imc events are running in the @@ -1450,10 +1446,10 @@ static int trace_imc_event_init(struct p imc_global_refc.id = IMC_DOMAIN_TRACE; imc_global_refc.refc++; } else { - mutex_unlock(&imc_global_refc.lock); + spin_unlock(&imc_global_refc.lock); return -EBUSY; } - mutex_unlock(&imc_global_refc.lock); + spin_unlock(&imc_global_refc.lock);
event->hw.idx = -1;
@@ -1525,10 +1521,10 @@ static int init_nest_pmu_ref(void) i = 0; for_each_node(nid) { /* - * Mutex lock to avoid races while tracking the number of + * Take the lock to avoid races while tracking the number of * sessions using the chip's nest pmu units. */ - mutex_init(&nest_imc_refc[i].lock); + spin_lock_init(&nest_imc_refc[i].lock);
/* * Loop to init the "id" with the node_id. Variable "i" initialized to @@ -1625,7 +1621,7 @@ static void imc_common_mem_free(struct i static void imc_common_cpuhp_mem_free(struct imc_pmu *pmu_ptr) { if (pmu_ptr->domain == IMC_DOMAIN_NEST) { - mutex_lock(&nest_init_lock); + spin_lock(&nest_init_lock); if (nest_pmus == 1) { cpuhp_remove_state(CPUHP_AP_PERF_POWERPC_NEST_IMC_ONLINE); kfree(nest_imc_refc); @@ -1635,7 +1631,7 @@ static void imc_common_cpuhp_mem_free(st
if (nest_pmus > 0) nest_pmus--; - mutex_unlock(&nest_init_lock); + spin_unlock(&nest_init_lock); }
/* Free core_imc memory */ @@ -1792,11 +1788,11 @@ int init_imc_pmu(struct device_node *par * rest. To handle the cpuhotplug callback unregister, we track * the number of nest pmus in "nest_pmus". */ - mutex_lock(&nest_init_lock); + spin_lock(&nest_init_lock); if (nest_pmus == 0) { ret = init_nest_pmu_ref(); if (ret) { - mutex_unlock(&nest_init_lock); + spin_unlock(&nest_init_lock); kfree(per_nest_pmu_arr); per_nest_pmu_arr = NULL; goto err_free_mem; @@ -1804,7 +1800,7 @@ int init_imc_pmu(struct device_node *par /* Register for cpu hotplug notification. */ ret = nest_pmu_cpumask_init(); if (ret) { - mutex_unlock(&nest_init_lock); + spin_unlock(&nest_init_lock); kfree(nest_imc_refc); kfree(per_nest_pmu_arr); per_nest_pmu_arr = NULL; @@ -1812,7 +1808,7 @@ int init_imc_pmu(struct device_node *par } } nest_pmus++; - mutex_unlock(&nest_init_lock); + spin_unlock(&nest_init_lock); break; case IMC_DOMAIN_CORE: ret = core_imc_pmu_cpumask_init();
From: Peter Zijlstra peterz@infradead.org
commit 7c6dd961d0c8e7e8f9fdc65071fb09ece702e18d upstream.
With 'GNU assembler (GNU Binutils for Debian) 2.39.90.20221231' the build now reports:
arch/x86/realmode/rm/../../boot/bioscall.S: Assembler messages:
arch/x86/realmode/rm/../../boot/bioscall.S:35: Warning: found `movsd'; assuming `movsl' was meant
arch/x86/realmode/rm/../../boot/bioscall.S:70: Warning: found `movsd'; assuming `movsl' was meant
arch/x86/boot/bioscall.S: Assembler messages:
arch/x86/boot/bioscall.S:35: Warning: found `movsd'; assuming `movsl' was meant
arch/x86/boot/bioscall.S:70: Warning: found `movsd'; assuming `movsl' was meant
Which is due to:
PR gas/29525
Note that with the dropped CMPSD and MOVSD Intel Syntax string insn templates taking operands, mixed IsString/non-IsString template groups (with memory operands) cannot occur anymore. With that maybe_adjust_templates() becomes unnecessary (and is hence being removed).
More details: https://sourceware.org/bugzilla/show_bug.cgi?id=29525
Borislav Petkov further explains:
" the particular problem here is is that the 'd' suffix is "conflicting" in the sense that you can have SSE mnemonics like movsD %xmm... and the same thing also for string ops (which is the case here) so apparently the agreement in binutils land is to use the always accepted suffixes 'l' or 'q' and phase out 'd' slowly... "
Fixes: 7a734e7dd93b ("x86, setup: "glove box" BIOS calls -- infrastructure") Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Ingo Molnar mingo@kernel.org Acked-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/Y71I3Ex2pvIxMpsP@hirez.programming.kicks-ass.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/boot/bioscall.S | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/x86/boot/bioscall.S +++ b/arch/x86/boot/bioscall.S @@ -32,7 +32,7 @@ intcall: movw %dx, %si movw %sp, %di movw $11, %cx - rep; movsd + rep; movsl
/* Pop full state from the stack */ popal @@ -67,7 +67,7 @@ intcall: jz 4f movw %sp, %si movw $11, %cx - rep; movsd + rep; movsl 4: addw $44, %sp
/* Restore state and return */
From: Eliav Farber farbere@amazon.com
commit e84077437902ec99eba0a6b516df772653f142c7 upstream.
Fix the period calculation for the case where the user sets a value of 1000. The input to round_jiffies_relative() should be in jiffies, not in milliseconds.
[ bp: Use the same code pattern as in edac_device_workq_setup() for clarity. ]
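As a hedged illustration of the unit fix (the actual change is the diff below): the period must be converted from milliseconds to jiffies first, and only the jiffies value may be passed to round_jiffies_relative(). With HZ == 250, for instance, msecs_to_jiffies(1000) is 250 jiffies, whereas feeding the raw 1000 to round_jiffies_relative() would be treated as 1000 jiffies (about 4 seconds). example_poll_delay() below is a made-up helper, not a function from the driver.

#include <linux/jiffies.h>

/* Sketch of the corrected conversion order (hypothetical helper) */
static unsigned long example_poll_delay(unsigned long msec)
{
	unsigned long delay = msecs_to_jiffies(msec);

	/* Align the common 1 second period to a jiffy boundary */
	if (msec == 1000)
		delay = round_jiffies_relative(delay);

	return delay;
}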
Fixes: c4cf3b454eca ("EDAC: Rework workqueue handling") Signed-off-by: Eliav Farber farbere@amazon.com Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Cc: stable@kernel.org Link: https://lore.kernel.org/r/20221020124458.22153-1-farbere@amazon.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/edac/edac_device.c | 17 ++++++++--------- drivers/edac/edac_module.h | 2 +- 2 files changed, 9 insertions(+), 10 deletions(-)
--- a/drivers/edac/edac_device.c +++ b/drivers/edac/edac_device.c @@ -424,17 +424,16 @@ static void edac_device_workq_teardown(s * Then restart the workq on the new delay */ void edac_device_reset_delay_period(struct edac_device_ctl_info *edac_dev, - unsigned long value) + unsigned long msec) { - unsigned long jiffs = msecs_to_jiffies(value); + edac_dev->poll_msec = msec; + edac_dev->delay = msecs_to_jiffies(msec);
- if (value == 1000) - jiffs = round_jiffies_relative(value); - - edac_dev->poll_msec = value; - edac_dev->delay = jiffs; - - edac_mod_work(&edac_dev->work, jiffs); + /* See comment in edac_device_workq_setup() above */ + if (edac_dev->poll_msec == 1000) + edac_mod_work(&edac_dev->work, round_jiffies_relative(edac_dev->delay)); + else + edac_mod_work(&edac_dev->work, edac_dev->delay); }
int edac_device_alloc_index(void) --- a/drivers/edac/edac_module.h +++ b/drivers/edac/edac_module.h @@ -56,7 +56,7 @@ bool edac_stop_work(struct delayed_work bool edac_mod_work(struct delayed_work *work, unsigned long delay);
extern void edac_device_reset_delay_period(struct edac_device_ctl_info - *edac_dev, unsigned long value); + *edac_dev, unsigned long msec); extern void edac_mc_reset_delay_period(unsigned long value);
extern void *edac_align_ptr(void **p, unsigned size, int n_elems);
From: Ricardo Ribalda ribalda@chromium.org
[ Upstream commit 02228f6aa6a64d588bc31e3267d05ff184d772eb ]
If the system does not come from reset (like when it is kexec()), the regulator might have an IRQ waiting for us.
If we enable the IRQ handler before its structures are ready, we crash.
This patch fixes:
[ 1.141839] Unable to handle kernel read from unreadable memory at virtual address 0000000000000078
[ 1.316096] Call trace:
[ 1.316101]  blocking_notifier_call_chain+0x20/0xa8
[ 1.322757] cpu cpu0: dummy supplies not allowed for exclusive requests
[ 1.327823]  regulator_notifier_call_chain+0x1c/0x2c
[ 1.327825]  da9211_irq_handler+0x68/0xf8
[ 1.327829]  irq_thread+0x11c/0x234
[ 1.327833]  kthread+0x13c/0x154
Signed-off-by: Ricardo Ribalda ribalda@chromium.org Reviewed-by: Adam Ward DLG-Adam.Ward.opensource@dm.renesas.com Link: https://lore.kernel.org/r/20221124-da9211-v2-0-1779e3c5d491@chromium.org Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/regulator/da9211-regulator.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/regulator/da9211-regulator.c b/drivers/regulator/da9211-regulator.c index e01b32d1fa17..00828f5baa97 100644 --- a/drivers/regulator/da9211-regulator.c +++ b/drivers/regulator/da9211-regulator.c @@ -498,6 +498,12 @@ static int da9211_i2c_probe(struct i2c_client *i2c)
chip->chip_irq = i2c->irq;
+ ret = da9211_regulator_init(chip); + if (ret < 0) { + dev_err(chip->dev, "Failed to initialize regulator: %d\n", ret); + return ret; + } + if (chip->chip_irq != 0) { ret = devm_request_threaded_irq(chip->dev, chip->chip_irq, NULL, da9211_irq_handler, @@ -512,11 +518,6 @@ static int da9211_i2c_probe(struct i2c_client *i2c) dev_warn(chip->dev, "No IRQ configured\n"); }
- ret = da9211_regulator_init(chip); - - if (ret < 0) - dev_err(chip->dev, "Failed to initialize regulator: %d\n", ret); - return ret; }
From: Emanuele Ghidoli emanuele.ghidoli@toradex.com
[ Upstream commit 472a6309c6467af89dbf660a8310369cc9cb041f ]
Restore the volume after charge pump and PGA activation to ensure that volume settings are correctly applied when re-enabling the codec from the SND_SOC_BIAS_OFF state. The CLASS_W, CHARGE_PUMP and POWER_MANAGEMENT_2 register configuration affects how the volume registers are applied and must be set up first.
Fixes: a91eb199e4dc ("ASoC: Initial WM8904 CODEC driver") Link: https://lore.kernel.org/all/c7864c35-738c-a867-a6a6-ddf9f98df7e7@gmail.com/ Signed-off-by: Emanuele Ghidoli emanuele.ghidoli@toradex.com Signed-off-by: Francesco Dolcini francesco.dolcini@toradex.com Acked-by: Charles Keepax ckeepax@opensource.cirrus.com Link: https://lore.kernel.org/r/20221223080247.7258-1-francesco@dolcini.it Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/codecs/wm8904.c | 7 +++++++ 1 file changed, 7 insertions(+)
diff --git a/sound/soc/codecs/wm8904.c b/sound/soc/codecs/wm8904.c index 1c360bae5652..cc96c9bdff41 100644 --- a/sound/soc/codecs/wm8904.c +++ b/sound/soc/codecs/wm8904.c @@ -697,6 +697,7 @@ static int out_pga_event(struct snd_soc_dapm_widget *w, int dcs_mask; int dcs_l, dcs_r; int dcs_l_reg, dcs_r_reg; + int an_out_reg; int timeout; int pwr_reg;
@@ -712,6 +713,7 @@ static int out_pga_event(struct snd_soc_dapm_widget *w, dcs_mask = WM8904_DCS_ENA_CHAN_0 | WM8904_DCS_ENA_CHAN_1; dcs_r_reg = WM8904_DC_SERVO_8; dcs_l_reg = WM8904_DC_SERVO_9; + an_out_reg = WM8904_ANALOGUE_OUT1_LEFT; dcs_l = 0; dcs_r = 1; break; @@ -720,6 +722,7 @@ static int out_pga_event(struct snd_soc_dapm_widget *w, dcs_mask = WM8904_DCS_ENA_CHAN_2 | WM8904_DCS_ENA_CHAN_3; dcs_r_reg = WM8904_DC_SERVO_6; dcs_l_reg = WM8904_DC_SERVO_7; + an_out_reg = WM8904_ANALOGUE_OUT2_LEFT; dcs_l = 2; dcs_r = 3; break; @@ -792,6 +795,10 @@ static int out_pga_event(struct snd_soc_dapm_widget *w, snd_soc_component_update_bits(component, reg, WM8904_HPL_ENA_OUTP | WM8904_HPR_ENA_OUTP, WM8904_HPL_ENA_OUTP | WM8904_HPR_ENA_OUTP); + + /* Update volume, requires PGA to be powered */ + val = snd_soc_component_read(component, an_out_reg); + snd_soc_component_write(component, an_out_reg, val); break;
case SND_SOC_DAPM_POST_PMU:
From: Tung Nguyen tung.q.nguyen@dektech.com.au
[ Upstream commit c244c092f1ed2acfb5af3d3da81e22367d3dd733 ]
This unexpected behavior is observed:
node 1                      | node 2
------                      | ------
link is established         | link is established
reboot                      | link is reset
up                          | send discovery message
receive discovery message   |
link is established         | link is established
send discovery message      |
                            | receive discovery message
                            | link is reset (unexpected)
                            | send reset message
link is reset               |
It is due to delayed re-discovery as described in function tipc_node_check_dest(): "this link endpoint has already reset and re-established contact with the peer, before receiving a discovery message from that node."
However, commit 598411d70f85 changed the condition for calling tipc_node_link_down(), which used to be the acceptance of a new media address.
This commit fixes this by restoring the old and correct behavior.
Fixes: 598411d70f85 ("tipc: make resetting of links non-atomic") Acked-by: Jon Maloy jmaloy@redhat.com Signed-off-by: Tung Nguyen tung.q.nguyen@dektech.com.au Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/tipc/node.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/net/tipc/node.c b/net/tipc/node.c index 7589f2ac6fd0..38f61dccb855 100644 --- a/net/tipc/node.c +++ b/net/tipc/node.c @@ -1152,8 +1152,9 @@ void tipc_node_check_dest(struct net *net, u32 addr, bool addr_match = false; bool sign_match = false; bool link_up = false; + bool link_is_reset = false; bool accept_addr = false; - bool reset = true; + bool reset = false; char *if_name; unsigned long intv; u16 session; @@ -1173,14 +1174,14 @@ void tipc_node_check_dest(struct net *net, u32 addr, /* Prepare to validate requesting node's signature and media address */ l = le->link; link_up = l && tipc_link_is_up(l); + link_is_reset = l && tipc_link_is_reset(l); addr_match = l && !memcmp(&le->maddr, maddr, sizeof(*maddr)); sign_match = (signature == n->signature);
/* These three flags give us eight permutations: */
if (sign_match && addr_match && link_up) { - /* All is fine. Do nothing. */ - reset = false; + /* All is fine. Ignore requests. */ /* Peer node is not a container/local namespace */ if (!n->peer_hash_mix) n->peer_hash_mix = hash_mixes; @@ -1205,6 +1206,7 @@ void tipc_node_check_dest(struct net *net, u32 addr, */ accept_addr = true; *respond = true; + reset = true; } else if (!sign_match && addr_match && link_up) { /* Peer node rebooted. Two possibilities: * - Delayed re-discovery; this link endpoint has already @@ -1236,6 +1238,7 @@ void tipc_node_check_dest(struct net *net, u32 addr, n->signature = signature; accept_addr = true; *respond = true; + reset = true; }
if (!accept_addr) @@ -1264,6 +1267,7 @@ void tipc_node_check_dest(struct net *net, u32 addr, tipc_link_fsm_evt(l, LINK_RESET_EVT); if (n->state == NODE_FAILINGOVER) tipc_link_fsm_evt(l, LINK_FAILOVER_BEGIN_EVT); + link_is_reset = tipc_link_is_reset(l); le->link = l; n->link_cnt++; tipc_node_calculate_timer(n, l); @@ -1276,7 +1280,7 @@ void tipc_node_check_dest(struct net *net, u32 addr, memcpy(&le->maddr, maddr, sizeof(*maddr)); exit: tipc_node_write_unlock(n); - if (reset && l && !tipc_link_is_reset(l)) + if (reset && !link_is_reset) tipc_node_link_down(n, b->identity, false); tipc_node_put(n); }
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit cdd41e878526797df29903fe592d6a26b096ac7d ]
Since multiple blocks of the same type are present in 98xx, modify the functions that get and update the resource count so that they work with the individual block address instead of the block type.
Reviewed-by: Jesse Brandeburg jesse.brandeburg@intel.com Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Sunil Goutham sgoutham@marvell.com Signed-off-by: Rakesh Babu rsaladi2@marvell.com Signed-off-by: Jakub Kicinski kuba@kernel.org Stable-dep-of: b4e9b8763e41 ("octeontx2-af: Fix LMAC config in cgx_lmac_rx_tx_enable") Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/marvell/octeontx2/af/rvu.c | 73 +++++++++++++------ .../net/ethernet/marvell/octeontx2/af/rvu.h | 2 + 2 files changed, 53 insertions(+), 22 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c index c26652436c53..9bab525ecd86 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c @@ -316,31 +316,36 @@ static void rvu_update_rsrc_map(struct rvu *rvu, struct rvu_pfvf *pfvf,
block->fn_map[lf] = attach ? pcifunc : 0;
- switch (block->type) { - case BLKTYPE_NPA: + switch (block->addr) { + case BLKADDR_NPA: pfvf->npalf = attach ? true : false; num_lfs = pfvf->npalf; break; - case BLKTYPE_NIX: + case BLKADDR_NIX0: + case BLKADDR_NIX1: pfvf->nixlf = attach ? true : false; num_lfs = pfvf->nixlf; break; - case BLKTYPE_SSO: + case BLKADDR_SSO: attach ? pfvf->sso++ : pfvf->sso--; num_lfs = pfvf->sso; break; - case BLKTYPE_SSOW: + case BLKADDR_SSOW: attach ? pfvf->ssow++ : pfvf->ssow--; num_lfs = pfvf->ssow; break; - case BLKTYPE_TIM: + case BLKADDR_TIM: attach ? pfvf->timlfs++ : pfvf->timlfs--; num_lfs = pfvf->timlfs; break; - case BLKTYPE_CPT: + case BLKADDR_CPT0: attach ? pfvf->cptlfs++ : pfvf->cptlfs--; num_lfs = pfvf->cptlfs; break; + case BLKADDR_CPT1: + attach ? pfvf->cpt1_lfs++ : pfvf->cpt1_lfs--; + num_lfs = pfvf->cpt1_lfs; + break; }
reg = is_pf ? block->pf_lfcnt_reg : block->vf_lfcnt_reg; @@ -1035,7 +1040,30 @@ int rvu_mbox_handler_ready(struct rvu *rvu, struct msg_req *req, /* Get current count of a RVU block's LF/slots * provisioned to a given RVU func. */ -static u16 rvu_get_rsrc_mapcount(struct rvu_pfvf *pfvf, int blktype) +u16 rvu_get_rsrc_mapcount(struct rvu_pfvf *pfvf, int blkaddr) +{ + switch (blkaddr) { + case BLKADDR_NPA: + return pfvf->npalf ? 1 : 0; + case BLKADDR_NIX0: + case BLKADDR_NIX1: + return pfvf->nixlf ? 1 : 0; + case BLKADDR_SSO: + return pfvf->sso; + case BLKADDR_SSOW: + return pfvf->ssow; + case BLKADDR_TIM: + return pfvf->timlfs; + case BLKADDR_CPT0: + return pfvf->cptlfs; + case BLKADDR_CPT1: + return pfvf->cpt1_lfs; + } + return 0; +} + +/* Return true if LFs of block type are attached to pcifunc */ +static bool is_blktype_attached(struct rvu_pfvf *pfvf, int blktype) { switch (blktype) { case BLKTYPE_NPA: @@ -1043,15 +1071,16 @@ static u16 rvu_get_rsrc_mapcount(struct rvu_pfvf *pfvf, int blktype) case BLKTYPE_NIX: return pfvf->nixlf ? 1 : 0; case BLKTYPE_SSO: - return pfvf->sso; + return !!pfvf->sso; case BLKTYPE_SSOW: - return pfvf->ssow; + return !!pfvf->ssow; case BLKTYPE_TIM: - return pfvf->timlfs; + return !!pfvf->timlfs; case BLKTYPE_CPT: - return pfvf->cptlfs; + return pfvf->cptlfs || pfvf->cpt1_lfs; } - return 0; + + return false; }
bool is_pffunc_map_valid(struct rvu *rvu, u16 pcifunc, int blktype) @@ -1064,7 +1093,7 @@ bool is_pffunc_map_valid(struct rvu *rvu, u16 pcifunc, int blktype) pfvf = rvu_get_pfvf(rvu, pcifunc);
/* Check if this PFFUNC has a LF of type blktype attached */ - if (!rvu_get_rsrc_mapcount(pfvf, blktype)) + if (!is_blktype_attached(pfvf, blktype)) return false;
return true; @@ -1105,7 +1134,7 @@ static void rvu_detach_block(struct rvu *rvu, int pcifunc, int blktype)
block = &hw->block[blkaddr];
- num_lfs = rvu_get_rsrc_mapcount(pfvf, block->type); + num_lfs = rvu_get_rsrc_mapcount(pfvf, block->addr); if (!num_lfs) return;
@@ -1226,7 +1255,7 @@ static int rvu_check_rsrc_availability(struct rvu *rvu, int free_lfs, mappedlfs;
/* Only one NPA LF can be attached */ - if (req->npalf && !rvu_get_rsrc_mapcount(pfvf, BLKTYPE_NPA)) { + if (req->npalf && !is_blktype_attached(pfvf, BLKTYPE_NPA)) { block = &hw->block[BLKADDR_NPA]; free_lfs = rvu_rsrc_free_count(&block->lf); if (!free_lfs) @@ -1239,7 +1268,7 @@ static int rvu_check_rsrc_availability(struct rvu *rvu, }
/* Only one NIX LF can be attached */ - if (req->nixlf && !rvu_get_rsrc_mapcount(pfvf, BLKTYPE_NIX)) { + if (req->nixlf && !is_blktype_attached(pfvf, BLKTYPE_NIX)) { block = &hw->block[BLKADDR_NIX0]; free_lfs = rvu_rsrc_free_count(&block->lf); if (!free_lfs) @@ -1260,7 +1289,7 @@ static int rvu_check_rsrc_availability(struct rvu *rvu, pcifunc, req->sso, block->lf.max); return -EINVAL; } - mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->type); + mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->addr); free_lfs = rvu_rsrc_free_count(&block->lf); /* Check if additional resources are available */ if (req->sso > mappedlfs && @@ -1276,7 +1305,7 @@ static int rvu_check_rsrc_availability(struct rvu *rvu, pcifunc, req->sso, block->lf.max); return -EINVAL; } - mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->type); + mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->addr); free_lfs = rvu_rsrc_free_count(&block->lf); if (req->ssow > mappedlfs && ((req->ssow - mappedlfs) > free_lfs)) @@ -1291,7 +1320,7 @@ static int rvu_check_rsrc_availability(struct rvu *rvu, pcifunc, req->timlfs, block->lf.max); return -EINVAL; } - mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->type); + mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->addr); free_lfs = rvu_rsrc_free_count(&block->lf); if (req->timlfs > mappedlfs && ((req->timlfs - mappedlfs) > free_lfs)) @@ -1306,7 +1335,7 @@ static int rvu_check_rsrc_availability(struct rvu *rvu, pcifunc, req->cptlfs, block->lf.max); return -EINVAL; } - mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->type); + mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->addr); free_lfs = rvu_rsrc_free_count(&block->lf); if (req->cptlfs > mappedlfs && ((req->cptlfs - mappedlfs) > free_lfs)) @@ -1942,7 +1971,7 @@ static void rvu_blklf_teardown(struct rvu *rvu, u16 pcifunc, u8 blkaddr)
block = &rvu->hw->block[blkaddr]; num_lfs = rvu_get_rsrc_mapcount(rvu_get_pfvf(rvu, pcifunc), - block->type); + block->addr); if (!num_lfs) return; for (slot = 0; slot < num_lfs; slot++) { diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index 90eed3160915..0cb5093744fe 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -137,6 +137,7 @@ struct rvu_pfvf { u16 ssow; u16 cptlfs; u16 timlfs; + u16 cpt1_lfs; u8 cgx_lmac;
/* Block LF's MSIX vector info */ @@ -420,6 +421,7 @@ void rvu_free_rsrc(struct rsrc_bmap *rsrc, int id); int rvu_rsrc_free_count(struct rsrc_bmap *rsrc); int rvu_alloc_rsrc_contig(struct rsrc_bmap *rsrc, int nrsrc); bool rvu_rsrc_check_contig(struct rsrc_bmap *rsrc, int nrsrc); +u16 rvu_get_rsrc_mapcount(struct rvu_pfvf *pfvf, int blkaddr); int rvu_get_pf(u16 pcifunc); struct rvu_pfvf *rvu_get_pfvf(struct rvu *rvu, int pcifunc); void rvu_get_pf_numvfs(struct rvu *rvu, int pf, int *numvfs, int *hwvf);
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit c5a73b632b901c4b07d156bb8a8a2c5517678f35 ]
Firmware configures the NIX block mapping for all CGXs to achieve maximum throughput. This patch reads that configuration and creates a mapping between RVU PFs and NIX blocks. For LBK VFs, NIX0 is assigned to even-numbered VFs and NIX1 to odd-numbered VFs.
Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Sunil Goutham sgoutham@marvell.com Signed-off-by: Rakesh Babu rsaladi2@marvell.com Signed-off-by: Jakub Kicinski kuba@kernel.org Stable-dep-of: b4e9b8763e41 ("octeontx2-af: Fix LMAC config in cgx_lmac_rx_tx_enable") Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/marvell/octeontx2/af/cgx.c | 13 +++- .../net/ethernet/marvell/octeontx2/af/cgx.h | 5 ++ .../net/ethernet/marvell/octeontx2/af/rvu.c | 61 ++++++++++++++++++- .../net/ethernet/marvell/octeontx2/af/rvu.h | 2 + .../ethernet/marvell/octeontx2/af/rvu_cgx.c | 15 +++++ .../ethernet/marvell/octeontx2/af/rvu_nix.c | 21 +++++-- 6 files changed, 107 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c index fc27a40202c6..1a8f5a039d50 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c @@ -145,6 +145,16 @@ int cgx_get_cgxid(void *cgxd) return cgx->cgx_id; }
+u8 cgx_lmac_get_p2x(int cgx_id, int lmac_id) +{ + struct cgx *cgx_dev = cgx_get_pdata(cgx_id); + u64 cfg; + + cfg = cgx_read(cgx_dev, lmac_id, CGXX_CMRX_CFG); + + return (cfg & CMR_P2X_SEL_MASK) >> CMR_P2X_SEL_SHIFT; +} + /* Ensure the required lock for event queue(where asynchronous events are * posted) is acquired before calling this API. Else an asynchronous event(with * latest link status) can reach the destination before this function returns @@ -814,8 +824,7 @@ static int cgx_lmac_verify_fwi_version(struct cgx *cgx) minor_ver = FIELD_GET(RESP_MINOR_VER, resp); dev_dbg(dev, "Firmware command interface version = %d.%d\n", major_ver, minor_ver); - if (major_ver != CGX_FIRMWARE_MAJOR_VER || - minor_ver != CGX_FIRMWARE_MINOR_VER) + if (major_ver != CGX_FIRMWARE_MAJOR_VER) return -EIO; else return 0; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h index 27ca3291682b..bcfc3e5f66bb 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h @@ -27,6 +27,10 @@
/* Registers */ #define CGXX_CMRX_CFG 0x00 +#define CMR_P2X_SEL_MASK GENMASK_ULL(61, 59) +#define CMR_P2X_SEL_SHIFT 59ULL +#define CMR_P2X_SEL_NIX0 1ULL +#define CMR_P2X_SEL_NIX1 2ULL #define CMR_EN BIT_ULL(55) #define DATA_PKT_TX_EN BIT_ULL(53) #define DATA_PKT_RX_EN BIT_ULL(54) @@ -142,5 +146,6 @@ int cgx_lmac_get_pause_frm(void *cgxd, int lmac_id, int cgx_lmac_set_pause_frm(void *cgxd, int lmac_id, u8 tx_pause, u8 rx_pause); void cgx_lmac_ptp_config(void *cgxd, int lmac_id, bool enable); +u8 cgx_lmac_get_p2x(int cgx_id, int lmac_id);
#endif /* CGX_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c index 9bab525ecd86..acbc67074f59 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c @@ -1208,6 +1208,58 @@ int rvu_mbox_handler_detach_resources(struct rvu *rvu, return rvu_detach_rsrcs(rvu, detach, detach->hdr.pcifunc); }
+static int rvu_get_nix_blkaddr(struct rvu *rvu, u16 pcifunc) +{ + struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc); + int blkaddr = BLKADDR_NIX0, vf; + struct rvu_pfvf *pf; + + /* All CGX mapped PFs are set with assigned NIX block during init */ + if (is_pf_cgxmapped(rvu, rvu_get_pf(pcifunc))) { + pf = rvu_get_pfvf(rvu, pcifunc & ~RVU_PFVF_FUNC_MASK); + blkaddr = pf->nix_blkaddr; + } else if (is_afvf(pcifunc)) { + vf = pcifunc - 1; + /* Assign NIX based on VF number. All even numbered VFs get + * NIX0 and odd numbered gets NIX1 + */ + blkaddr = (vf & 1) ? BLKADDR_NIX1 : BLKADDR_NIX0; + /* NIX1 is not present on all silicons */ + if (!is_block_implemented(rvu->hw, BLKADDR_NIX1)) + blkaddr = BLKADDR_NIX0; + } + + switch (blkaddr) { + case BLKADDR_NIX1: + pfvf->nix_blkaddr = BLKADDR_NIX1; + break; + case BLKADDR_NIX0: + default: + pfvf->nix_blkaddr = BLKADDR_NIX0; + break; + } + + return pfvf->nix_blkaddr; +} + +static int rvu_get_attach_blkaddr(struct rvu *rvu, int blktype, u16 pcifunc) +{ + int blkaddr; + + switch (blktype) { + case BLKTYPE_NIX: + blkaddr = rvu_get_nix_blkaddr(rvu, pcifunc); + break; + default: + return rvu_get_blkaddr(rvu, blktype, 0); + }; + + if (is_block_implemented(rvu->hw, blkaddr)) + return blkaddr; + + return -ENODEV; +} + static void rvu_attach_block(struct rvu *rvu, int pcifunc, int blktype, int num_lfs) { @@ -1221,7 +1273,7 @@ static void rvu_attach_block(struct rvu *rvu, int pcifunc, if (!num_lfs) return;
- blkaddr = rvu_get_blkaddr(rvu, blktype, 0); + blkaddr = rvu_get_attach_blkaddr(rvu, blktype, pcifunc); if (blkaddr < 0) return;
@@ -1250,9 +1302,9 @@ static int rvu_check_rsrc_availability(struct rvu *rvu, struct rsrc_attach *req, u16 pcifunc) { struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc); + int free_lfs, mappedlfs, blkaddr; struct rvu_hwinfo *hw = rvu->hw; struct rvu_block *block; - int free_lfs, mappedlfs;
/* Only one NPA LF can be attached */ if (req->npalf && !is_blktype_attached(pfvf, BLKTYPE_NPA)) { @@ -1269,7 +1321,10 @@ static int rvu_check_rsrc_availability(struct rvu *rvu,
/* Only one NIX LF can be attached */ if (req->nixlf && !is_blktype_attached(pfvf, BLKTYPE_NIX)) { - block = &hw->block[BLKADDR_NIX0]; + blkaddr = rvu_get_attach_blkaddr(rvu, BLKTYPE_NIX, pcifunc); + if (blkaddr < 0) + return blkaddr; + block = &hw->block[blkaddr]; free_lfs = rvu_rsrc_free_count(&block->lf); if (!free_lfs) goto fail; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index 0cb5093744fe..fc6d785b98dd 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -183,6 +183,8 @@ struct rvu_pfvf {
bool cgx_in_use; /* this PF/VF using CGX? */ int cgx_users; /* number of cgx users - used only by PFs */ + + u8 nix_blkaddr; /* BLKADDR_NIX0/1 assigned to this PF */ };
struct nix_txsch { diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c index f4ecc755eaff..6c6b411e78fd 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c @@ -74,6 +74,20 @@ void *rvu_cgx_pdata(u8 cgx_id, struct rvu *rvu) return rvu->cgx_idmap[cgx_id]; }
+/* Based on P2X connectivity find mapped NIX block for a PF */ +static void rvu_map_cgx_nix_block(struct rvu *rvu, int pf, + int cgx_id, int lmac_id) +{ + struct rvu_pfvf *pfvf = &rvu->pf[pf]; + u8 p2x; + + p2x = cgx_lmac_get_p2x(cgx_id, lmac_id); + /* Firmware sets P2X_SELECT as either NIX0 or NIX1 */ + pfvf->nix_blkaddr = BLKADDR_NIX0; + if (p2x == CMR_P2X_SEL_NIX1) + pfvf->nix_blkaddr = BLKADDR_NIX1; +} + static int rvu_map_cgx_lmac_pf(struct rvu *rvu) { struct npc_pkind *pkind = &rvu->hw->pkind; @@ -117,6 +131,7 @@ static int rvu_map_cgx_lmac_pf(struct rvu *rvu) rvu->cgxlmac2pf_map[CGX_OFFSET(cgx) + lmac] = 1 << pf; free_pkind = rvu_alloc_rsrc(&pkind->rsrc); pkind->pfchan_map[free_pkind] = ((pf) & 0x3F) << 16; + rvu_map_cgx_nix_block(rvu, pf, cgx, lmac); rvu->cgx_mapped_pfs++; } } diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c index f6a3cf3e6f23..9886a30e9723 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c @@ -187,8 +187,8 @@ static bool is_valid_txschq(struct rvu *rvu, int blkaddr, static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf) { struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc); + int pkind, pf, vf, lbkid; u8 cgx_id, lmac_id; - int pkind, pf, vf; int err;
pf = rvu_get_pf(pcifunc); @@ -221,13 +221,24 @@ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf) case NIX_INTF_TYPE_LBK: vf = (pcifunc & RVU_PFVF_FUNC_MASK) - 1;
+ /* If NIX1 block is present on the silicon then NIXes are + * assigned alternatively for lbk interfaces. NIX0 should + * send packets on lbk link 1 channels and NIX1 should send + * on lbk link 0 channels for the communication between + * NIX0 and NIX1. + */ + lbkid = 0; + if (rvu->hw->lbk_links > 1) + lbkid = vf & 0x1 ? 0 : 1; + /* Note that AF's VFs work in pairs and talk over consecutive * loopback channels.Therefore if odd number of AF VFs are * enabled then the last VF remains with no pair. */ - pfvf->rx_chan_base = NIX_CHAN_LBK_CHX(0, vf); - pfvf->tx_chan_base = vf & 0x1 ? NIX_CHAN_LBK_CHX(0, vf - 1) : - NIX_CHAN_LBK_CHX(0, vf + 1); + pfvf->rx_chan_base = NIX_CHAN_LBK_CHX(lbkid, vf); + pfvf->tx_chan_base = vf & 0x1 ? + NIX_CHAN_LBK_CHX(lbkid, vf - 1) : + NIX_CHAN_LBK_CHX(lbkid, vf + 1); pfvf->rx_chan_cnt = 1; pfvf->tx_chan_cnt = 1; rvu_npc_install_promisc_entry(rvu, pcifunc, nixlf, @@ -3157,7 +3168,7 @@ int rvu_nix_init(struct rvu *rvu) hw->cgx = (cfg >> 12) & 0xF; hw->lmac_per_cgx = (cfg >> 8) & 0xF; hw->cgx_links = hw->cgx * hw->lmac_per_cgx; - hw->lbk_links = 1; + hw->lbk_links = (cfg >> 24) & 0xF; hw->sdp_links = 1;
/* Initialize admin queue */
From: Angela Czubak aczubak@marvell.com
[ Upstream commit b4e9b8763e417db31c7088103cc557d55cb7a8f5 ]
A PF netdev can request the AF to enable or disable reception and transmission on its assigned CGX::LMAC. Instead of enabling or disabling only 'reception and transmission', the current code also enables/disables the LMAC itself. This patch fixes that.
Fixes: 1435f66a28b4 ("octeontx2-af: CGX Rx/Tx enable/disable mbox handlers") Signed-off-by: Angela Czubak aczubak@marvell.com Signed-off-by: Hariprasad Kelam hkelam@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Link: https://lore.kernel.org/r/20230105160107.17638-1-hkelam@marvell.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/af/cgx.c | 4 ++-- drivers/net/ethernet/marvell/octeontx2/af/cgx.h | 1 - 2 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c index 1a8f5a039d50..c0a0a31272cc 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c @@ -350,9 +350,9 @@ int cgx_lmac_rx_tx_enable(void *cgxd, int lmac_id, bool enable)
cfg = cgx_read(cgx, lmac_id, CGXX_CMRX_CFG); if (enable) - cfg |= CMR_EN | DATA_PKT_RX_EN | DATA_PKT_TX_EN; + cfg |= DATA_PKT_RX_EN | DATA_PKT_TX_EN; else - cfg &= ~(CMR_EN | DATA_PKT_RX_EN | DATA_PKT_TX_EN); + cfg &= ~(DATA_PKT_RX_EN | DATA_PKT_TX_EN); cgx_write(cgx, lmac_id, CGXX_CMRX_CFG, cfg); return 0; } diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h index bcfc3e5f66bb..e176a6c654ef 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h @@ -31,7 +31,6 @@ #define CMR_P2X_SEL_SHIFT 59ULL #define CMR_P2X_SEL_NIX0 1ULL #define CMR_P2X_SEL_NIX1 2ULL -#define CMR_EN BIT_ULL(55) #define DATA_PKT_TX_EN BIT_ULL(53) #define DATA_PKT_RX_EN BIT_ULL(54) #define CGX_LMAC_TYPE_SHIFT 40
From: Roger Pau Monne roger.pau@citrix.com
[ Upstream commit c0dccad87cf68fc6012aec7567e354353097ec1a ]
The currently lockless access to the xen console list in vtermno_to_xencons() is incorrect, as additions and removals from the list can happen anytime, and as such the traversal of the list to get the private console data for a given termno needs to happen with the lock held. Note users that modify the list already do so with the lock taken.
Switch the current lock takers to the _irq{save,restore} helpers so that interrupts are disabled inside the locked region, since vtermno_to_xencons() can be called in contexts that already have interrupts disabled. I haven't checked whether the existing users could instead use the plain _irq variants; using _irq{save,restore} upfront is the safer choice.
While there switch from using list_for_each_entry_safe to list_for_each_entry: the current entry cursor won't be removed as part of the code in the loop body, so using the _safe variant is pointless.
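A minimal sketch of why the flags-based variants are chosen (illustration only; the actual conversion is the diff below): spin_unlock_irq() would unconditionally re-enable interrupts, while spin_unlock_irqrestore() restores whatever interrupt state the caller had. The lock and function names here are made up.

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);

static void example_locked_section(void)
{
	unsigned long flags;

	spin_lock_irqsave(&example_lock, flags);	/* saves current IRQ state */
	/* ... walk or modify the protected list ... */
	spin_unlock_irqrestore(&example_lock, flags);	/* restores saved state */
}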
Fixes: 02e19f9c7cac ('hvc_xen: implement multiconsole support') Signed-off-by: Roger Pau Monné roger.pau@citrix.com Reviewed-by: Stefano Stabellini sstabellini@kernel.org Link: https://lore.kernel.org/r/20221130163611.14686-1-roger.pau@citrix.com Signed-off-by: Juergen Gross jgross@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/tty/hvc/hvc_xen.c | 46 ++++++++++++++++++++++++--------------- 1 file changed, 29 insertions(+), 17 deletions(-)
diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c index 7948660e042f..6f387a4fd96a 100644 --- a/drivers/tty/hvc/hvc_xen.c +++ b/drivers/tty/hvc/hvc_xen.c @@ -52,17 +52,22 @@ static DEFINE_SPINLOCK(xencons_lock);
static struct xencons_info *vtermno_to_xencons(int vtermno) { - struct xencons_info *entry, *n, *ret = NULL; + struct xencons_info *entry, *ret = NULL; + unsigned long flags;
- if (list_empty(&xenconsoles)) - return NULL; + spin_lock_irqsave(&xencons_lock, flags); + if (list_empty(&xenconsoles)) { + spin_unlock_irqrestore(&xencons_lock, flags); + return NULL; + }
- list_for_each_entry_safe(entry, n, &xenconsoles, list) { + list_for_each_entry(entry, &xenconsoles, list) { if (entry->vtermno == vtermno) { ret = entry; break; } } + spin_unlock_irqrestore(&xencons_lock, flags);
return ret; } @@ -223,7 +228,7 @@ static int xen_hvm_console_init(void) { int r; uint64_t v = 0; - unsigned long gfn; + unsigned long gfn, flags; struct xencons_info *info;
if (!xen_hvm_domain()) @@ -258,9 +263,9 @@ static int xen_hvm_console_init(void) goto err; info->vtermno = HVC_COOKIE;
- spin_lock(&xencons_lock); + spin_lock_irqsave(&xencons_lock, flags); list_add_tail(&info->list, &xenconsoles); - spin_unlock(&xencons_lock); + spin_unlock_irqrestore(&xencons_lock, flags);
return 0; err: @@ -283,6 +288,7 @@ static int xencons_info_pv_init(struct xencons_info *info, int vtermno) static int xen_pv_console_init(void) { struct xencons_info *info; + unsigned long flags;
if (!xen_pv_domain()) return -ENODEV; @@ -299,9 +305,9 @@ static int xen_pv_console_init(void) /* already configured */ return 0; } - spin_lock(&xencons_lock); + spin_lock_irqsave(&xencons_lock, flags); xencons_info_pv_init(info, HVC_COOKIE); - spin_unlock(&xencons_lock); + spin_unlock_irqrestore(&xencons_lock, flags);
return 0; } @@ -309,6 +315,7 @@ static int xen_pv_console_init(void) static int xen_initial_domain_console_init(void) { struct xencons_info *info; + unsigned long flags;
if (!xen_initial_domain()) return -ENODEV; @@ -323,9 +330,9 @@ static int xen_initial_domain_console_init(void) info->irq = bind_virq_to_irq(VIRQ_CONSOLE, 0, false); info->vtermno = HVC_COOKIE;
- spin_lock(&xencons_lock); + spin_lock_irqsave(&xencons_lock, flags); list_add_tail(&info->list, &xenconsoles); - spin_unlock(&xencons_lock); + spin_unlock_irqrestore(&xencons_lock, flags);
return 0; } @@ -380,10 +387,12 @@ static void xencons_free(struct xencons_info *info)
static int xen_console_remove(struct xencons_info *info) { + unsigned long flags; + xencons_disconnect_backend(info); - spin_lock(&xencons_lock); + spin_lock_irqsave(&xencons_lock, flags); list_del(&info->list); - spin_unlock(&xencons_lock); + spin_unlock_irqrestore(&xencons_lock, flags); if (info->xbdev != NULL) xencons_free(info); else { @@ -464,6 +473,7 @@ static int xencons_probe(struct xenbus_device *dev, { int ret, devid; struct xencons_info *info; + unsigned long flags;
devid = dev->nodename[strlen(dev->nodename) - 1] - '0'; if (devid == 0) @@ -482,9 +492,9 @@ static int xencons_probe(struct xenbus_device *dev, ret = xencons_connect_backend(dev, info); if (ret < 0) goto error; - spin_lock(&xencons_lock); + spin_lock_irqsave(&xencons_lock, flags); list_add_tail(&info->list, &xenconsoles); - spin_unlock(&xencons_lock); + spin_unlock_irqrestore(&xencons_lock, flags);
return 0;
@@ -583,10 +593,12 @@ static int __init xen_hvc_init(void)
info->hvc = hvc_alloc(HVC_COOKIE, info->irq, ops, 256); if (IS_ERR(info->hvc)) { + unsigned long flags; + r = PTR_ERR(info->hvc); - spin_lock(&xencons_lock); + spin_lock_irqsave(&xencons_lock, flags); list_del(&info->list); - spin_unlock(&xencons_lock); + spin_unlock_irqrestore(&xencons_lock, flags); if (info->irq) unbind_from_irqhandler(info->irq, NULL); kfree(info);
From: Minsuk Kang linuxlovemin@yonsei.ac.kr
[ Upstream commit 9dab880d675b9d0dd56c6428e4e8352a3339371d ]
Fix a use-after-free that occurs in the hcd when the in_urb sent from pn533_usb_send_frame() completes earlier than the out_urb. Its callback, pn533_send_async_complete(), frees the skb data that is still in use as the transfer buffer of the out_urb. Wait for the out_urb's callback to run before sending the in_urb. To modify the callback of the out_urb alone, give out_urb and ack_urb separate completion functions.
Found by a modified version of syzkaller.
BUG: KASAN: use-after-free in dummy_timer Call Trace: memcpy (mm/kasan/shadow.c:65) dummy_perform_transfer (drivers/usb/gadget/udc/dummy_hcd.c:1352) transfer (drivers/usb/gadget/udc/dummy_hcd.c:1453) dummy_timer (drivers/usb/gadget/udc/dummy_hcd.c:1972) arch_static_branch (arch/x86/include/asm/jump_label.h:27) static_key_false (include/linux/jump_label.h:207) timer_expire_exit (include/trace/events/timer.h:127) call_timer_fn (kernel/time/timer.c:1475) expire_timers (kernel/time/timer.c:1519) __run_timers (kernel/time/timer.c:1790) run_timer_softirq (kernel/time/timer.c:1803)
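A minimal sketch of the completion-based wait the patch introduces (illustrative only; the demo_* names are hypothetical, not the pn533 code): the out URB gets its own completion callback, and the submitter blocks until that callback has run before the transfer buffer may be freed or the next URB submitted.

	#include <linux/completion.h>
	#include <linux/usb.h>

	struct demo_out_arg {
		struct completion done;
	};

	/* Completion callback installed on the out URB (via usb_fill_bulk_urb()). */
	static void demo_out_complete(struct urb *urb)
	{
		struct demo_out_arg *arg = urb->context;

		complete(&arg->done);
	}

	static int demo_send_frame(struct urb *out_urb)
	{
		struct demo_out_arg arg;
		void *saved_ctx = out_urb->context;
		int rc;

		init_completion(&arg.done);
		out_urb->context = &arg;

		rc = usb_submit_urb(out_urb, GFP_KERNEL);
		if (rc) {
			out_urb->context = saved_ctx;
			return rc;
		}

		/* The transfer buffer must stay alive until the out URB has
		 * completed; only then is it safe to free the skb data or
		 * submit the response URB.
		 */
		wait_for_completion(&arg.done);
		out_urb->context = saved_ctx;

		return 0;
	}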
Fixes: c46ee38620a2 ("NFC: pn533: add NXP pn533 nfc device driver") Signed-off-by: Minsuk Kang linuxlovemin@yonsei.ac.kr Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nfc/pn533/usb.c | 44 ++++++++++++++++++++++++++++++++++++++--- 1 file changed, 41 insertions(+), 3 deletions(-)
diff --git a/drivers/nfc/pn533/usb.c b/drivers/nfc/pn533/usb.c index 84f2983bf384..57b07446bb76 100644 --- a/drivers/nfc/pn533/usb.c +++ b/drivers/nfc/pn533/usb.c @@ -153,10 +153,17 @@ static int pn533_usb_send_ack(struct pn533 *dev, gfp_t flags) return usb_submit_urb(phy->ack_urb, flags); }
+struct pn533_out_arg { + struct pn533_usb_phy *phy; + struct completion done; +}; + static int pn533_usb_send_frame(struct pn533 *dev, struct sk_buff *out) { struct pn533_usb_phy *phy = dev->phy; + struct pn533_out_arg arg; + void *cntx; int rc;
if (phy->priv == NULL) @@ -168,10 +175,17 @@ static int pn533_usb_send_frame(struct pn533 *dev, print_hex_dump_debug("PN533 TX: ", DUMP_PREFIX_NONE, 16, 1, out->data, out->len, false);
+ init_completion(&arg.done); + cntx = phy->out_urb->context; + phy->out_urb->context = &arg; + rc = usb_submit_urb(phy->out_urb, GFP_KERNEL); if (rc) return rc;
+ wait_for_completion(&arg.done); + phy->out_urb->context = cntx; + if (dev->protocol_type == PN533_PROTO_REQ_RESP) { /* request for response for sent packet directly */ rc = pn533_submit_urb_for_response(phy, GFP_KERNEL); @@ -412,7 +426,31 @@ static int pn533_acr122_poweron_rdr(struct pn533_usb_phy *phy) return arg.rc; }
-static void pn533_send_complete(struct urb *urb) +static void pn533_out_complete(struct urb *urb) +{ + struct pn533_out_arg *arg = urb->context; + struct pn533_usb_phy *phy = arg->phy; + + switch (urb->status) { + case 0: + break; /* success */ + case -ECONNRESET: + case -ENOENT: + dev_dbg(&phy->udev->dev, + "The urb has been stopped (status %d)\n", + urb->status); + break; + case -ESHUTDOWN: + default: + nfc_err(&phy->udev->dev, + "Urb failure (status %d)\n", + urb->status); + } + + complete(&arg->done); +} + +static void pn533_ack_complete(struct urb *urb) { struct pn533_usb_phy *phy = urb->context;
@@ -500,10 +538,10 @@ static int pn533_usb_probe(struct usb_interface *interface,
usb_fill_bulk_urb(phy->out_urb, phy->udev, usb_sndbulkpipe(phy->udev, out_endpoint), - NULL, 0, pn533_send_complete, phy); + NULL, 0, pn533_out_complete, phy); usb_fill_bulk_urb(phy->ack_urb, phy->udev, usb_sndbulkpipe(phy->udev, out_endpoint), - NULL, 0, pn533_send_complete, phy); + NULL, 0, pn533_ack_complete, phy);
switch (id->driver_info) { case PN533_DEVICE_STD:
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit 9e17f99220d111ea031b44153fdfe364b0024ff2 ]
The 'TCA_MPLS_LABEL' attribute is of 'NLA_U32' type, but has a validation type of 'NLA_VALIDATE_FUNCTION'. This is an invalid combination according to the comment above 'struct nla_policy':
" Meaning of `validate' field, use via NLA_POLICY_VALIDATE_FN: NLA_BINARY Validation function called for the attribute. All other Unused - but note that it's a union "
This can trigger the warning [1] in nla_get_range_unsigned() when validation of the attribute fails. Despite being of 'NLA_U32' type, the associated 'min'/'max' fields in the policy are negative as they are aliased by the 'validate' field.
Fix by changing the attribute type to 'NLA_BINARY' which is consistent with the above comment and all other users of NLA_POLICY_VALIDATE_FN(). As a result, move the length validation to the validation function.
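In minimal form, the pattern the fix adopts looks like the sketch below (the demo_* names are hypothetical, not the act_mpls code): with NLA_BINARY the core no longer enforces a fixed length, so the validation callback must check nla_len() before dereferencing the payload.

	#include <net/netlink.h>

	enum {
		DEMO_ATTR_UNSPEC,
		DEMO_ATTR_LABEL,
		__DEMO_ATTR_MAX,
	};
	#define DEMO_ATTR_MAX (__DEMO_ATTR_MAX - 1)

	static int demo_valid_label(const struct nlattr *attr,
				    struct netlink_ext_ack *extack)
	{
		const u32 *label;

		/* NLA_BINARY does not imply a fixed size; check it here. */
		if (nla_len(attr) != sizeof(*label)) {
			NL_SET_ERR_MSG(extack, "Invalid attribute length");
			return -EINVAL;
		}

		label = nla_data(attr);
		if (*label > 0xfffff) {	/* example range check */
			NL_SET_ERR_MSG(extack, "Value out of range");
			return -EINVAL;
		}

		return 0;
	}

	static const struct nla_policy demo_policy[DEMO_ATTR_MAX + 1] = {
		[DEMO_ATTR_LABEL] = NLA_POLICY_VALIDATE_FN(NLA_BINARY,
							   demo_valid_label),
	};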
No regressions in MPLS tests:
 # ./tdc.py -f tc-tests/actions/mpls.json
 [...]
 # echo $?
 0
[1] WARNING: CPU: 0 PID: 17743 at lib/nlattr.c:118 nla_get_range_unsigned+0x1d8/0x1e0 lib/nlattr.c:117 Modules linked in: CPU: 0 PID: 17743 Comm: syz-executor.0 Not tainted 6.1.0-rc8 #3 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-48-gd9c812dda519-prebuilt.qemu.org 04/01/2014 RIP: 0010:nla_get_range_unsigned+0x1d8/0x1e0 lib/nlattr.c:117 [...] Call Trace: <TASK> __netlink_policy_dump_write_attr+0x23d/0x990 net/netlink/policy.c:310 netlink_policy_dump_write_attr+0x22/0x30 net/netlink/policy.c:411 netlink_ack_tlv_fill net/netlink/af_netlink.c:2454 [inline] netlink_ack+0x546/0x760 net/netlink/af_netlink.c:2506 netlink_rcv_skb+0x1b7/0x240 net/netlink/af_netlink.c:2546 rtnetlink_rcv+0x18/0x20 net/core/rtnetlink.c:6109 netlink_unicast_kernel net/netlink/af_netlink.c:1319 [inline] netlink_unicast+0x5e9/0x6b0 net/netlink/af_netlink.c:1345 netlink_sendmsg+0x739/0x860 net/netlink/af_netlink.c:1921 sock_sendmsg_nosec net/socket.c:714 [inline] sock_sendmsg net/socket.c:734 [inline] ____sys_sendmsg+0x38f/0x500 net/socket.c:2482 ___sys_sendmsg net/socket.c:2536 [inline] __sys_sendmsg+0x197/0x230 net/socket.c:2565 __do_sys_sendmsg net/socket.c:2574 [inline] __se_sys_sendmsg net/socket.c:2572 [inline] __x64_sys_sendmsg+0x42/0x50 net/socket.c:2572 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x63/0xcd
Link: https://lore.kernel.org/netdev/CAO4mrfdmjvRUNbDyP0R03_DrD_eFCLCguz6OxZ2TYRSv... Fixes: 2a2ea50870ba ("net: sched: add mpls manipulation actions to TC") Reported-by: Wei Chen harperchen1110@gmail.com Tested-by: Wei Chen harperchen1110@gmail.com Signed-off-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Alexander Duyck alexanderduyck@fb.com Link: https://lore.kernel.org/r/20230107171004.608436-1-idosch@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/sched/act_mpls.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/net/sched/act_mpls.c b/net/sched/act_mpls.c index d1486ea496a2..09799412b248 100644 --- a/net/sched/act_mpls.c +++ b/net/sched/act_mpls.c @@ -133,6 +133,11 @@ static int valid_label(const struct nlattr *attr, { const u32 *label = nla_data(attr);
+ if (nla_len(attr) != sizeof(*label)) { + NL_SET_ERR_MSG_MOD(extack, "Invalid MPLS label length"); + return -EINVAL; + } + if (*label & ~MPLS_LABEL_MASK || *label == MPLS_LABEL_IMPLNULL) { NL_SET_ERR_MSG_MOD(extack, "MPLS label out of range"); return -EINVAL; @@ -144,7 +149,8 @@ static int valid_label(const struct nlattr *attr, static const struct nla_policy mpls_policy[TCA_MPLS_MAX + 1] = { [TCA_MPLS_PARMS] = NLA_POLICY_EXACT_LEN(sizeof(struct tc_mpls)), [TCA_MPLS_PROTO] = { .type = NLA_U16 }, - [TCA_MPLS_LABEL] = NLA_POLICY_VALIDATE_FN(NLA_U32, valid_label), + [TCA_MPLS_LABEL] = NLA_POLICY_VALIDATE_FN(NLA_BINARY, + valid_label), [TCA_MPLS_TC] = NLA_POLICY_RANGE(NLA_U8, 0, 7), [TCA_MPLS_TTL] = NLA_POLICY_MIN(NLA_U8, 1), [TCA_MPLS_BOS] = NLA_POLICY_RANGE(NLA_U8, 0, 1),
From: Rahul Rameshbabu rrameshbabu@nvidia.com
[ Upstream commit fe91d57277eef8bb4aca05acfa337b4a51d0bba4 ]
.max_adj of ptp_clock_info acts as an absolute value for the amount in ppb that can be set for a single call of .adjfine. This means that a single call to .adjfine cannot be greater than .max_adj or less than -(.max_adj). Provide the correct maximum frequency adjustment value supported by the devices.
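To illustrate the user-visible effect, here is a hedged userspace sketch (not part of the patch; the /dev/ptp0 path and the FD_TO_CLOCKID definition are assumptions, and a 64-bit long is assumed for the scaled-ppm value): a frequency adjustment whose magnitude exceeds the advertised max_adj is expected to be rejected by the PTP core.

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/timex.h>
	#include <time.h>
	#include <unistd.h>

	#define CLOCKFD			3
	#define FD_TO_CLOCKID(fd)	((~(clockid_t)(fd) << 3) | CLOCKFD)

	int main(void)
	{
		struct timex tx;
		long long ppb = 60000000;	/* 60,000 ppm: above a 50,000,000 ppb max_adj */
		int fd = open("/dev/ptp0", O_RDWR);	/* assumed PHC char device */

		if (fd < 0) {
			perror("open");
			return 1;
		}

		memset(&tx, 0, sizeof(tx));
		tx.modes = ADJ_FREQUENCY;
		/* tx.freq is in "scaled ppm" (ppm << 16); 1 ppm == 1000 ppb. */
		tx.freq = (ppb / 1000) << 16;

		/* Requests beyond max_adj are expected to fail (ERANGE). */
		if (clock_adjtime(FD_TO_CLOCKID(fd), &tx) < 0)
			perror("clock_adjtime");

		close(fd);
		return 0;
	}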
Fixes: 3d8c38af1493 ("net/mlx5e: Add PTP Hardware Clock (PHC) support") Signed-off-by: Rahul Rameshbabu rrameshbabu@nvidia.com Reviewed-by: Gal Pressman gal@nvidia.com Reviewed-by: Tariq Toukan tariqt@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c index c70c1f0ca0c1..44a434b1178b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c @@ -440,7 +440,7 @@ static int mlx5_ptp_verify(struct ptp_clock_info *ptp, unsigned int pin, static const struct ptp_clock_info mlx5_ptp_clock_info = { .owner = THIS_MODULE, .name = "mlx5_ptp", - .max_adj = 100000000, + .max_adj = 50000000, .n_alarm = 0, .n_ext_ts = 0, .n_per_out = 0,
From: Gavin Li gavinl@nvidia.com
[ Upstream commit d515d63cae2cd186acf40deaa8ef33067bb7f637 ]
Previously, encap rules with the gbp option would be offloaded by mistake, but the driver does not support offloading the gbp option.
To fix this issue, check whether the encap rule has the gbp option and do not offload the rule if it does.
Fixes: d8f9dfae49ce ("net: sched: allow flower to match vxlan options") Signed-off-by: Gavin Li gavinl@nvidia.com Reviewed-by: Maor Dickman maord@nvidia.com Signed-off-by: Saeed Mahameed saeedm@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c index 038a0f1cecec..e44281ae570d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c @@ -88,6 +88,8 @@ static int mlx5e_gen_ip_tunnel_header_vxlan(char buf[], struct udphdr *udp = (struct udphdr *)(buf); struct vxlanhdr *vxh;
+ if (tun_key->tun_flags & TUNNEL_VXLAN_OPT) + return -EOPNOTSUPP; vxh = (struct vxlanhdr *)((char *)udp + sizeof(struct udphdr)); *ip_proto = IPPROTO_UDP;
From: Aaron Thompson dev@aaront.org
[ Upstream commit 115d9d77bb0f9152c60b6e8646369fa7f6167593 ]
If CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, memblock_free_pages() only releases pages to the buddy allocator if they are not in the deferred range. This is correct for free pages (as defined by for_each_free_mem_pfn_range_in_zone()) because free pages in the deferred range will be initialized and released as part of the deferred init process. memblock_free_pages() is called by memblock_free_late(), which is used to free reserved ranges after memblock_free_all() has run. All pages in reserved ranges have been initialized at that point, and accordingly, those pages are not touched by the deferred init process. This means that currently, if the pages that memblock_free_late() intends to release are in the deferred range, they will never be released to the buddy allocator. They will forever be reserved.
In addition, memblock_free_pages() calls kmsan_memblock_free_pages(), which is also correct for free pages but is not correct for reserved pages. KMSAN metadata for reserved pages is initialized by kmsan_init_shadow(), which runs shortly before memblock_free_all().
For both of these reasons, memblock_free_pages() should only be called for free pages, and memblock_free_late() should call __free_pages_core() directly instead.
One case where this issue can occur in the wild is EFI boot on x86_64. The x86 EFI code reserves all EFI boot services memory ranges via memblock_reserve() and frees them later via memblock_free_late() (efi_reserve_boot_services() and efi_free_boot_services(), respectively). If any of those ranges happens to fall within the deferred init range, the pages will not be released and that memory will be unavailable.
For example, on an Amazon EC2 t3.micro VM (1 GB) booting via EFI:
v6.2-rc2:
  # grep -E 'Node|spanned|present|managed' /proc/zoneinfo
  Node 0, zone      DMA
          spanned  4095
          present  3999
          managed  3840
  Node 0, zone    DMA32
          spanned  246652
          present  245868
          managed  178867

v6.2-rc2 + patch:
  # grep -E 'Node|spanned|present|managed' /proc/zoneinfo
  Node 0, zone      DMA
          spanned  4095
          present  3999
          managed  3840
  Node 0, zone    DMA32
          spanned  246652
          present  245868
          managed  222816   # +43,949 pages
Fixes: 3a80a7fa7989 ("mm: meminit: initialise a subset of struct pages if CONFIG_DEFERRED_STRUCT_PAGE_INIT is set") Signed-off-by: Aaron Thompson dev@aaront.org Link: https://lore.kernel.org/r/01010185892de53e-e379acfb-7044-4b24-b30a-e2657c1ba... Signed-off-by: Mike Rapoport (IBM) rppt@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- mm/memblock.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/mm/memblock.c b/mm/memblock.c index f72d53957033..f6a4dffb9a88 100644 --- a/mm/memblock.c +++ b/mm/memblock.c @@ -1597,7 +1597,13 @@ void __init __memblock_free_late(phys_addr_t base, phys_addr_t size) end = PFN_DOWN(base + size);
for (; cursor < end; cursor++) { - memblock_free_pages(pfn_to_page(cursor), cursor, 0); + /* + * Reserved pages are always initialized by the end of + * memblock_free_all() (by memmap_init() and, if deferred + * initialization is enabled, memmap_init_reserved_pages()), so + * these pages can be released directly to the buddy allocator. + */ + __free_pages_core(pfn_to_page(cursor), 0); totalram_pages_inc(); } }
From: Yong Wu yong.wu@mediatek.com
[ Upstream commit ac304c070c54413efabf29f9e73c54576d329774 ]
The original code lacks error handling. This patch adds it.
Signed-off-by: Yong Wu yong.wu@mediatek.com Link: https://lore.kernel.org/r/20210412064843.11614-2-yong.wu@mediatek.com Signed-off-by: Joerg Roedel jroedel@suse.de Stable-dep-of: 142e821f68cf ("iommu/mediatek-v1: Fix an error handling path in mtk_iommu_v1_probe()") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/iommu/mtk_iommu_v1.c | 22 ++++++++++++++++++---- 1 file changed, 18 insertions(+), 4 deletions(-)
diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c index 82ddfe9170d4..4ed8bc755f5c 100644 --- a/drivers/iommu/mtk_iommu_v1.c +++ b/drivers/iommu/mtk_iommu_v1.c @@ -624,12 +624,26 @@ static int mtk_iommu_probe(struct platform_device *pdev)
ret = iommu_device_register(&data->iommu); if (ret) - return ret; + goto out_sysfs_remove;
- if (!iommu_present(&platform_bus_type)) - bus_set_iommu(&platform_bus_type, &mtk_iommu_ops); + if (!iommu_present(&platform_bus_type)) { + ret = bus_set_iommu(&platform_bus_type, &mtk_iommu_ops); + if (ret) + goto out_dev_unreg; + }
- return component_master_add_with_match(dev, &mtk_iommu_com_ops, match); + ret = component_master_add_with_match(dev, &mtk_iommu_com_ops, match); + if (ret) + goto out_bus_set_null; + return ret; + +out_bus_set_null: + bus_set_iommu(&platform_bus_type, NULL); +out_dev_unreg: + iommu_device_unregister(&data->iommu); +out_sysfs_remove: + iommu_device_sysfs_remove(&data->iommu); + return ret; }
static int mtk_iommu_remove(struct platform_device *pdev)
From: Christophe JAILLET christophe.jaillet@wanadoo.fr
[ Upstream commit 142e821f68cf5da79ce722cb9c1323afae30e185 ]
A clk, prepared and enabled in mtk_iommu_v1_hw_init(), is not released in the error handling path of mtk_iommu_v1_probe().
Add the corresponding clk_disable_unprepare(), as already done in the remove function.
Fixes: b17336c55d89 ("iommu/mediatek: add support for mtk iommu generation one HW") Signed-off-by: Christophe JAILLET christophe.jaillet@wanadoo.fr Reviewed-by: Yong Wu yong.wu@mediatek.com Reviewed-by: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com Reviewed-by: Matthias Brugger matthias.bgg@gmail.com Link: https://lore.kernel.org/r/593e7b7d97c6e064b29716b091a9d4fd122241fb.167147316... Signed-off-by: Joerg Roedel jroedel@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/iommu/mtk_iommu_v1.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c index 4ed8bc755f5c..2abbdd71d8d9 100644 --- a/drivers/iommu/mtk_iommu_v1.c +++ b/drivers/iommu/mtk_iommu_v1.c @@ -618,7 +618,7 @@ static int mtk_iommu_probe(struct platform_device *pdev) ret = iommu_device_sysfs_add(&data->iommu, &pdev->dev, NULL, dev_name(&pdev->dev)); if (ret) - return ret; + goto out_clk_unprepare;
iommu_device_set_ops(&data->iommu, &mtk_iommu_ops);
@@ -643,6 +643,8 @@ static int mtk_iommu_probe(struct platform_device *pdev) iommu_device_unregister(&data->iommu); out_sysfs_remove: iommu_device_sysfs_remove(&data->iommu); +out_clk_unprepare: + clk_disable_unprepare(data->bclk); return ret; }
From: Paolo Bonzini pbonzini@redhat.com
[ Upstream commit cde363ab7ca7aea7a853851cd6a6745a9e1aaf5e ]
Add a section to document all the different ways in which the KVM API sucks.
I am sure there are way more; this gives people a place to vent so that userspace authors are aware.
Signed-off-by: Paolo Bonzini pbonzini@redhat.com Message-Id: 20220322110712.222449-4-pbonzini@redhat.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- Documentation/virt/kvm/api.rst | 46 ++++++++++++++++++++++++++++++++++ 1 file changed, 46 insertions(+)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index cd8a58556804..d807994360d4 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -6398,3 +6398,49 @@ When enabled, KVM will disable paravirtual features provided to the guest according to the bits in the KVM_CPUID_FEATURES CPUID leaf (0x40000001). Otherwise, a guest may use the paravirtual features regardless of what has actually been exposed through the CPUID leaf. + +9. Known KVM API problems +========================= + +In some cases, KVM's API has some inconsistencies or common pitfalls +that userspace need to be aware of. This section details some of +these issues. + +Most of them are architecture specific, so the section is split by +architecture. + +9.1. x86 +-------- + +``KVM_GET_SUPPORTED_CPUID`` issues +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +In general, ``KVM_GET_SUPPORTED_CPUID`` is designed so that it is possible +to take its result and pass it directly to ``KVM_SET_CPUID2``. This section +documents some cases in which that requires some care. + +Local APIC features +~~~~~~~~~~~~~~~~~~~ + +CPU[EAX=1]:ECX[21] (X2APIC) is reported by ``KVM_GET_SUPPORTED_CPUID``, +but it can only be enabled if ``KVM_CREATE_IRQCHIP`` or +``KVM_ENABLE_CAP(KVM_CAP_IRQCHIP_SPLIT)`` are used to enable in-kernel emulation of +the local APIC. + +The same is true for the ``KVM_FEATURE_PV_UNHALT`` paravirtualized feature. + +CPU[EAX=1]:ECX[24] (TSC_DEADLINE) is not reported by ``KVM_GET_SUPPORTED_CPUID``. +It can be enabled if ``KVM_CAP_TSC_DEADLINE_TIMER`` is present and the kernel +has enabled in-kernel emulation of the local APIC. + +Obsolete ioctls and capabilities +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +KVM_CAP_DISABLE_QUIRKS does not let userspace know which quirks are actually +available. Use ``KVM_CHECK_EXTENSION(KVM_CAP_DISABLE_QUIRKS2)`` instead if +available. + +Ordering of KVM_GET_*/KVM_SET_* ioctls +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +TBD
From: Paolo Bonzini pbonzini@redhat.com
[ Upstream commit 45e966fcca03ecdcccac7cb236e16eea38cc18af ]
Passing the host topology to the guest is almost certainly wrong and will confuse the scheduler. In addition, several fields of these CPUID leaves vary on each processor; it is simply impossible to return the right values from KVM_GET_SUPPORTED_CPUID in such a way that they can be passed to KVM_SET_CPUID2.
The values that will most likely prevent confusion are all zeroes. Userspace will have to override it anyway if it wishes to present a specific topology to the guest.
Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- Documentation/virt/kvm/api.rst | 14 ++++++++++++++ arch/x86/kvm/cpuid.c | 32 ++++++++++++++++---------------- 2 files changed, 30 insertions(+), 16 deletions(-)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index d807994360d4..2b4b64797191 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -6433,6 +6433,20 @@ CPU[EAX=1]:ECX[24] (TSC_DEADLINE) is not reported by ``KVM_GET_SUPPORTED_CPUID`` It can be enabled if ``KVM_CAP_TSC_DEADLINE_TIMER`` is present and the kernel has enabled in-kernel emulation of the local APIC.
+CPU topology +~~~~~~~~~~~~ + +Several CPUID values include topology information for the host CPU: +0x0b and 0x1f for Intel systems, 0x8000001e for AMD systems. Different +versions of KVM return different values for this information and userspace +should not rely on it. Currently they return all zeroes. + +If userspace wishes to set up a guest topology, it should be careful that +the values of these three leaves differ for each CPU. In particular, +the APIC ID is found in EDX for all subleaves of 0x0b and 0x1f, and in EAX +for 0x8000001e; the latter also encodes the core id and node id in bits +7:0 of EBX and ECX respectively. + Obsolete ioctls and capabilities ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c index 06a776fdb90c..de4b171cb76b 100644 --- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -511,16 +511,22 @@ struct kvm_cpuid_array { int nent; };
+static struct kvm_cpuid_entry2 *get_next_cpuid(struct kvm_cpuid_array *array) +{ + if (array->nent >= array->maxnent) + return NULL; + + return &array->entries[array->nent++]; +} + static struct kvm_cpuid_entry2 *do_host_cpuid(struct kvm_cpuid_array *array, u32 function, u32 index) { - struct kvm_cpuid_entry2 *entry; + struct kvm_cpuid_entry2 *entry = get_next_cpuid(array);
- if (array->nent >= array->maxnent) + if (!entry) return NULL;
- entry = &array->entries[array->nent++]; - entry->function = function; entry->index = index; entry->flags = 0; @@ -698,22 +704,13 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function) entry->edx = edx.full; break; } - /* - * Per Intel's SDM, the 0x1f is a superset of 0xb, - * thus they can be handled by common code. - */ case 0x1f: case 0xb: /* - * Populate entries until the level type (ECX[15:8]) of the - * previous entry is zero. Note, CPUID EAX.{0x1f,0xb}.0 is - * the starting entry, filled by the primary do_host_cpuid(). + * No topology; a valid topology is indicated by the presence + * of subleaf 1. */ - for (i = 1; entry->ecx & 0xff00; ++i) { - entry = do_host_cpuid(array, function, i); - if (!entry) - goto out; - } + entry->eax = entry->ebx = entry->ecx = 0; break; case 0xd: entry->eax &= supported_xcr0; @@ -866,6 +863,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function) entry->ebx = entry->ecx = entry->edx = 0; break; case 0x8000001e: + /* Do not return host topology information. */ + entry->eax = entry->ebx = entry->ecx = 0; + entry->edx = 0; /* reserved */ break; /* Support memory encryption cpuid if host supports it */ case 0x8000001F:
From: Reinette Chatre reinette.chatre@intel.com
[ Upstream commit e0ad6dc8969f790f14bddcfd7ea284b7e5f88a16 ]
James reported in [1] that there could be two tasks running on the same CPU with task_struct->on_cpu set. Using task_struct->on_cpu to test whether a task is running on a CPU may thus match the old task for a CPU while the scheduler is running, and IPI it unnecessarily.
task_curr() is the correct helper to use. While doing so, turn the #ifdef check of the CONFIG_SMP symbol into a C conditional that determines whether this helper should be used, so that the code is always checked for correctness by the compiler.
[1] https://lore.kernel.org/lkml/a782d2f3-d2f6-795f-f4b1-9462205fd581@arm.com
Reported-by: James Morse james.morse@arm.com Signed-off-by: Reinette Chatre reinette.chatre@intel.com Signed-off-by: Borislav Petkov bp@suse.de Link: https://lkml.kernel.org/r/e9e68ce1441a73401e08b641cc3b9a3cf13fe6d4.160824314... Stable-dep-of: fe1f0714385f ("x86/resctrl: Fix task CLOSID/RMID update race") Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kernel/cpu/resctrl/rdtgroup.c | 14 +++++--------- 1 file changed, 5 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c index 5a59e3315b34..f2ad3c117426 100644 --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c @@ -2313,19 +2313,15 @@ static void rdt_move_group_tasks(struct rdtgroup *from, struct rdtgroup *to, t->closid = to->closid; t->rmid = to->mon.rmid;
-#ifdef CONFIG_SMP /* - * This is safe on x86 w/o barriers as the ordering - * of writing to task_cpu() and t->on_cpu is - * reverse to the reading here. The detection is - * inaccurate as tasks might move or schedule - * before the smp function call takes place. In - * such a case the function call is pointless, but + * If the task is on a CPU, set the CPU in the mask. + * The detection is inaccurate as tasks might move or + * schedule before the smp function call takes place. + * In such a case the function call is pointless, but * there is no other side effect. */ - if (mask && t->on_cpu) + if (IS_ENABLED(CONFIG_SMP) && mask && task_curr(t)) cpumask_set_cpu(task_cpu(t), mask); -#endif } } read_unlock(&tasklist_lock);
From: Peter Newman peternewman@google.com
[ Upstream commit fe1f0714385fbcf76b0cbceb02b7277d842014fc ]
When the user moves a running task to a new rdtgroup using the task's file interface or by deleting its rdtgroup, the resulting change in CLOSID/RMID must be immediately propagated to the PQR_ASSOC MSR on the task(s) CPUs.
x86 allows reordering loads with prior stores, so if the task starts running between a task_curr() check that the CPU hoisted above the CLOSID/RMID stores and the stores themselves, then it can keep running with the old CLOSID/RMID until it is switched again, because __rdtgroup_move_task() failed to determine that it needs to be interrupted to obtain the new CLOSID/RMID.
Refer to the diagram below:
CPU 0                                    CPU 1
-----                                    -----
__rdtgroup_move_task():
  curr <- t1->cpu->rq->curr
                                         __schedule():
                                           rq->curr <- t1
                                         resctrl_sched_in():
                                           t1->{closid,rmid} -> {1,1}
  t1->{closid,rmid} <- {2,2}
  if (curr == t1) // false
   IPI(t1->cpu)
A similar race impacts rdt_move_group_tasks(), which updates tasks in a deleted rdtgroup.
In both cases, use smp_mb() to order the task_struct::{closid,rmid} stores before the loads in task_curr(). In particular, in the rdt_move_group_tasks() case, simply execute an smp_mb() on every iteration with a matching task.
It is possible to use a single smp_mb() in rdt_move_group_tasks(), but this would require two passes and a means of remembering which task_structs were updated in the first loop. However, benchmarking results below showed too little performance impact in the simple approach to justify implementing the two-pass approach.
Times below were collected using `perf stat` to measure the time to remove a group containing a 1600-task, parallel workload.
CPU: Intel(R) Xeon(R) Platinum P-8136 CPU @ 2.00GHz (112 threads)
 # mkdir /sys/fs/resctrl/test
 # echo $$ > /sys/fs/resctrl/test/tasks
 # perf bench sched messaging -g 40 -l 100000
task-clock time ranges collected using:
# perf stat rmdir /sys/fs/resctrl/test
Baseline:                      1.54 - 1.60 ms
smp_mb() every matching task:  1.57 - 1.67 ms
[ bp: Massage commit message. ]
Fixes: ae28d1aae48a ("x86/resctrl: Use an IPI instead of task_work_add() to update PQR_ASSOC MSR") Fixes: 0efc89be9471 ("x86/intel_rdt: Update task closid immediately on CPU in rmdir and unmount") Signed-off-by: Peter Newman peternewman@google.com Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Reviewed-by: Reinette Chatre reinette.chatre@intel.com Reviewed-by: Babu Moger babu.moger@amd.com Cc: stable@kernel.org Link: https://lore.kernel.org/r/20221220161123.432120-1-peternewman@google.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kernel/cpu/resctrl/rdtgroup.c | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c index f2ad3c117426..ff26de11b3f1 100644 --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c @@ -577,8 +577,10 @@ static int __rdtgroup_move_task(struct task_struct *tsk, /* * Ensure the task's closid and rmid are written before determining if * the task is current that will decide if it will be interrupted. + * This pairs with the full barrier between the rq->curr update and + * resctrl_sched_in() during context switch. */ - barrier(); + smp_mb();
/* * By now, the task's closid and rmid are set. If the task is current @@ -2313,6 +2315,14 @@ static void rdt_move_group_tasks(struct rdtgroup *from, struct rdtgroup *to, t->closid = to->closid; t->rmid = to->mon.rmid;
+ /* + * Order the closid/rmid stores above before the loads + * in task_curr(). This pairs with the full barrier + * between the rq->curr update and resctrl_sched_in() + * during context switch. + */ + smp_mb(); + /* * If the task is on a CPU, set the CPU in the mask. * The detection is inaccurate as tasks might move or
From: Mark Rutland mark.rutland@arm.com
[ Upstream commit 8e6082e94aac6d0338883b5953631b662a5a9188 ]
The code for the atomic ops is formatted inconsistently, and while this is not a functional problem it is rather distracting when working on them.
Some ops have consistent indentation, e.g.
| #define ATOMIC_OP_ADD_RETURN(name, mb, cl...) \ | static inline int __lse_atomic_add_return##name(int i, atomic_t *v) \ | { \ | u32 tmp; \ | \ | asm volatile( \ | __LSE_PREAMBLE \ | " ldadd" #mb " %w[i], %w[tmp], %[v]\n" \ | " add %w[i], %w[i], %w[tmp]" \ | : [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp) \ | : "r" (v) \ | : cl); \ | \ | return i; \ | }
While others have negative indentation for some lines, and/or have misaligned trailing backslashes, e.g.
| static inline void __lse_atomic_##op(int i, atomic_t *v) \ | { \ | asm volatile( \ | __LSE_PREAMBLE \ | " " #asm_op " %w[i], %[v]\n" \ | : [i] "+r" (i), [v] "+Q" (v->counter) \ | : "r" (v)); \ | }
This patch makes the indentation consistent and also aligns the trailing backslashes. This makes the code easier to read for those (like myself) who are easily distracted by these inconsistencies.
This is intended as a cleanup. There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland mark.rutland@arm.com Cc: Boqun Feng boqun.feng@gmail.com Cc: Peter Zijlstra peterz@infradead.org Cc: Will Deacon will@kernel.org Acked-by: Will Deacon will@kernel.org Acked-by: Peter Zijlstra (Intel) peterz@infradead.org Link: https://lore.kernel.org/r/20211210151410.2782645-2-mark.rutland@arm.com Signed-off-by: Catalin Marinas catalin.marinas@arm.com Stable-dep-of: 031af50045ea ("arm64: cmpxchg_double*: hazard against entire exchange variable") Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/include/asm/atomic_ll_sc.h | 86 +++++++++++++-------------- arch/arm64/include/asm/atomic_lse.h | 14 ++--- 2 files changed, 50 insertions(+), 50 deletions(-)
diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h index 13869b76b58c..fe0db8d416fb 100644 --- a/arch/arm64/include/asm/atomic_ll_sc.h +++ b/arch/arm64/include/asm/atomic_ll_sc.h @@ -44,11 +44,11 @@ __ll_sc_atomic_##op(int i, atomic_t *v) \ \ asm volatile("// atomic_" #op "\n" \ __LL_SC_FALLBACK( \ -" prfm pstl1strm, %2\n" \ -"1: ldxr %w0, %2\n" \ -" " #asm_op " %w0, %w0, %w3\n" \ -" stxr %w1, %w0, %2\n" \ -" cbnz %w1, 1b\n") \ + " prfm pstl1strm, %2\n" \ + "1: ldxr %w0, %2\n" \ + " " #asm_op " %w0, %w0, %w3\n" \ + " stxr %w1, %w0, %2\n" \ + " cbnz %w1, 1b\n") \ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ : __stringify(constraint) "r" (i)); \ } @@ -62,12 +62,12 @@ __ll_sc_atomic_##op##_return##name(int i, atomic_t *v) \ \ asm volatile("// atomic_" #op "_return" #name "\n" \ __LL_SC_FALLBACK( \ -" prfm pstl1strm, %2\n" \ -"1: ld" #acq "xr %w0, %2\n" \ -" " #asm_op " %w0, %w0, %w3\n" \ -" st" #rel "xr %w1, %w0, %2\n" \ -" cbnz %w1, 1b\n" \ -" " #mb ) \ + " prfm pstl1strm, %2\n" \ + "1: ld" #acq "xr %w0, %2\n" \ + " " #asm_op " %w0, %w0, %w3\n" \ + " st" #rel "xr %w1, %w0, %2\n" \ + " cbnz %w1, 1b\n" \ + " " #mb ) \ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ : __stringify(constraint) "r" (i) \ : cl); \ @@ -84,12 +84,12 @@ __ll_sc_atomic_fetch_##op##name(int i, atomic_t *v) \ \ asm volatile("// atomic_fetch_" #op #name "\n" \ __LL_SC_FALLBACK( \ -" prfm pstl1strm, %3\n" \ -"1: ld" #acq "xr %w0, %3\n" \ -" " #asm_op " %w1, %w0, %w4\n" \ -" st" #rel "xr %w2, %w1, %3\n" \ -" cbnz %w2, 1b\n" \ -" " #mb ) \ + " prfm pstl1strm, %3\n" \ + "1: ld" #acq "xr %w0, %3\n" \ + " " #asm_op " %w1, %w0, %w4\n" \ + " st" #rel "xr %w2, %w1, %3\n" \ + " cbnz %w2, 1b\n" \ + " " #mb ) \ : "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter) \ : __stringify(constraint) "r" (i) \ : cl); \ @@ -143,11 +143,11 @@ __ll_sc_atomic64_##op(s64 i, atomic64_t *v) \ \ asm volatile("// atomic64_" #op "\n" \ __LL_SC_FALLBACK( \ -" prfm pstl1strm, %2\n" \ -"1: ldxr %0, %2\n" \ -" " #asm_op " %0, %0, %3\n" \ -" stxr %w1, %0, %2\n" \ -" cbnz %w1, 1b") \ + " prfm pstl1strm, %2\n" \ + "1: ldxr %0, %2\n" \ + " " #asm_op " %0, %0, %3\n" \ + " stxr %w1, %0, %2\n" \ + " cbnz %w1, 1b") \ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ : __stringify(constraint) "r" (i)); \ } @@ -161,12 +161,12 @@ __ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v) \ \ asm volatile("// atomic64_" #op "_return" #name "\n" \ __LL_SC_FALLBACK( \ -" prfm pstl1strm, %2\n" \ -"1: ld" #acq "xr %0, %2\n" \ -" " #asm_op " %0, %0, %3\n" \ -" st" #rel "xr %w1, %0, %2\n" \ -" cbnz %w1, 1b\n" \ -" " #mb ) \ + " prfm pstl1strm, %2\n" \ + "1: ld" #acq "xr %0, %2\n" \ + " " #asm_op " %0, %0, %3\n" \ + " st" #rel "xr %w1, %0, %2\n" \ + " cbnz %w1, 1b\n" \ + " " #mb ) \ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ : __stringify(constraint) "r" (i) \ : cl); \ @@ -176,19 +176,19 @@ __ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v) \
#define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint)\ static inline long \ -__ll_sc_atomic64_fetch_##op##name(s64 i, atomic64_t *v) \ +__ll_sc_atomic64_fetch_##op##name(s64 i, atomic64_t *v) \ { \ s64 result, val; \ unsigned long tmp; \ \ asm volatile("// atomic64_fetch_" #op #name "\n" \ __LL_SC_FALLBACK( \ -" prfm pstl1strm, %3\n" \ -"1: ld" #acq "xr %0, %3\n" \ -" " #asm_op " %1, %0, %4\n" \ -" st" #rel "xr %w2, %1, %3\n" \ -" cbnz %w2, 1b\n" \ -" " #mb ) \ + " prfm pstl1strm, %3\n" \ + "1: ld" #acq "xr %0, %3\n" \ + " " #asm_op " %1, %0, %4\n" \ + " st" #rel "xr %w2, %1, %3\n" \ + " cbnz %w2, 1b\n" \ + " " #mb ) \ : "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter) \ : __stringify(constraint) "r" (i) \ : cl); \ @@ -241,14 +241,14 @@ __ll_sc_atomic64_dec_if_positive(atomic64_t *v)
asm volatile("// atomic64_dec_if_positive\n" __LL_SC_FALLBACK( -" prfm pstl1strm, %2\n" -"1: ldxr %0, %2\n" -" subs %0, %0, #1\n" -" b.lt 2f\n" -" stlxr %w1, %0, %2\n" -" cbnz %w1, 1b\n" -" dmb ish\n" -"2:") + " prfm pstl1strm, %2\n" + "1: ldxr %0, %2\n" + " subs %0, %0, #1\n" + " b.lt 2f\n" + " stlxr %w1, %0, %2\n" + " cbnz %w1, 1b\n" + " dmb ish\n" + "2:") : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) : : "cc", "memory"); diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h index da3280f639cd..ab661375835e 100644 --- a/arch/arm64/include/asm/atomic_lse.h +++ b/arch/arm64/include/asm/atomic_lse.h @@ -11,11 +11,11 @@ #define __ASM_ATOMIC_LSE_H
#define ATOMIC_OP(op, asm_op) \ -static inline void __lse_atomic_##op(int i, atomic_t *v) \ +static inline void __lse_atomic_##op(int i, atomic_t *v) \ { \ asm volatile( \ __LSE_PREAMBLE \ -" " #asm_op " %w[i], %[v]\n" \ + " " #asm_op " %w[i], %[v]\n" \ : [i] "+r" (i), [v] "+Q" (v->counter) \ : "r" (v)); \ } @@ -32,7 +32,7 @@ static inline int __lse_atomic_fetch_##op##name(int i, atomic_t *v) \ { \ asm volatile( \ __LSE_PREAMBLE \ -" " #asm_op #mb " %w[i], %w[i], %[v]" \ + " " #asm_op #mb " %w[i], %w[i], %[v]" \ : [i] "+r" (i), [v] "+Q" (v->counter) \ : "r" (v) \ : cl); \ @@ -130,7 +130,7 @@ static inline int __lse_atomic_sub_return##name(int i, atomic_t *v) \ " add %w[i], %w[i], %w[tmp]" \ : [i] "+&r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp) \ : "r" (v) \ - : cl); \ + : cl); \ \ return i; \ } @@ -168,7 +168,7 @@ static inline void __lse_atomic64_##op(s64 i, atomic64_t *v) \ { \ asm volatile( \ __LSE_PREAMBLE \ -" " #asm_op " %[i], %[v]\n" \ + " " #asm_op " %[i], %[v]\n" \ : [i] "+r" (i), [v] "+Q" (v->counter) \ : "r" (v)); \ } @@ -185,7 +185,7 @@ static inline long __lse_atomic64_fetch_##op##name(s64 i, atomic64_t *v)\ { \ asm volatile( \ __LSE_PREAMBLE \ -" " #asm_op #mb " %[i], %[i], %[v]" \ + " " #asm_op #mb " %[i], %[i], %[v]" \ : [i] "+r" (i), [v] "+Q" (v->counter) \ : "r" (v) \ : cl); \ @@ -272,7 +272,7 @@ static inline void __lse_atomic64_sub(s64 i, atomic64_t *v) }
#define ATOMIC64_OP_SUB_RETURN(name, mb, cl...) \ -static inline long __lse_atomic64_sub_return##name(s64 i, atomic64_t *v) \ +static inline long __lse_atomic64_sub_return##name(s64 i, atomic64_t *v)\ { \ unsigned long tmp; \ \
From: Mark Rutland mark.rutland@arm.com
[ Upstream commit b2c3ccbd0011bb3b51d0fec24cb3a5812b1ec8ea ]
When CONFIG_ARM64_LSE_ATOMICS=y, each use of an LL/SC atomic results in a fragment of code being generated in a subsection without a clear association with its caller. A trampoline in the caller branches to the LL/SC atomic with a direct branch, and the atomic directly branches back into its trampoline.
This breaks backtracing, as any PC within the out-of-line fragment will be symbolized as an offset from the nearest prior symbol (which may not be the function using the atomic), and since the atomic returns with a direct branch, the caller's PC may be missing from the backtrace.
For example, with secondary_start_kernel() hacked to contain atomic_inc(NULL), the resulting exception can be reported as being taken from cpus_are_stuck_in_kernel():
| Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000 | Mem abort info: | ESR = 0x0000000096000004 | EC = 0x25: DABT (current EL), IL = 32 bits | SET = 0, FnV = 0 | EA = 0, S1PTW = 0 | FSC = 0x04: level 0 translation fault | Data abort info: | ISV = 0, ISS = 0x00000004 | CM = 0, WnR = 0 | [0000000000000000] user address but active_mm is swapper | Internal error: Oops: 96000004 [#1] PREEMPT SMP | Modules linked in: | CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.19.0-11219-geb555cb5b794-dirty #3 | Hardware name: linux,dummy-virt (DT) | pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--) | pc : cpus_are_stuck_in_kernel+0xa4/0x120 | lr : secondary_start_kernel+0x164/0x170 | sp : ffff80000a4cbe90 | x29: ffff80000a4cbe90 x28: 0000000000000000 x27: 0000000000000000 | x26: 0000000000000000 x25: 0000000000000000 x24: 0000000000000000 | x23: 0000000000000000 x22: 0000000000000000 x21: 0000000000000000 | x20: 0000000000000001 x19: 0000000000000001 x18: 0000000000000008 | x17: 3030383832343030 x16: 3030303030307830 x15: ffff80000a4cbab0 | x14: 0000000000000001 x13: 5d31666130663133 x12: 3478305b20313030 | x11: 3030303030303078 x10: 3020726f73736563 x9 : 726f737365636f72 | x8 : ffff800009ff2ef0 x7 : 0000000000000003 x6 : 0000000000000000 | x5 : 0000000000000000 x4 : 0000000000000000 x3 : 0000000000000100 | x2 : 0000000000000000 x1 : ffff0000029bd880 x0 : 0000000000000000 | Call trace: | cpus_are_stuck_in_kernel+0xa4/0x120 | __secondary_switched+0xb0/0xb4 | Code: 35ffffa3 17fffc6c d53cd040 f9800011 (885f7c01) | ---[ end trace 0000000000000000 ]---
This is confusing and hinders debugging, and will be problematic for CONFIG_LIVEPATCH as these cases cannot be unwound reliably.
This is very similar to recent issues with out-of-line exception fixups, which were removed in commits:
35d67794b8828333 ("arm64: lib: __arch_clear_user(): fold fixups into body") 4012e0e22739eef9 ("arm64: lib: __arch_copy_from_user(): fold fixups into body") 139f9ab73d60cf76 ("arm64: lib: __arch_copy_to_user(): fold fixups into body")
When the trampolines were introduced in commit:
addfc38672c73efd ("arm64: atomics: avoid out-of-line ll/sc atomics")
The rationale was to improve icache performance by grouping the LL/SC atomics together. This has never been measured, and this theoretical benefit is outweighed by other factors:
* As the subsections are collapsed into sections at object file granularity, these are spread out throughout the kernel and can share cachelines with unrelated code regardless.
* GCC 12.1.0 has been observed to place the trampoline out-of-line in specialised __ll_sc_*() functions, introducing more branching than was intended.
* Removing the trampolines has been observed to shrink a defconfig kernel Image by 64KiB when building with GCC 12.1.0.
This patch removes the LL/SC trampolines, meaning that the LL/SC atomics will be inlined into their callers (or placed in out-of line functions using regular BL/RET pairs). When CONFIG_ARM64_LSE_ATOMICS=y, the LL/SC atomics are always called in an unlikely branch, and will be placed in a cold portion of the function, so this should have minimal impact to the hot paths.
Other than the improved backtracing, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland mark.rutland@arm.com Cc: Will Deacon will@kernel.org Link: https://lore.kernel.org/r/20220817155914.3975112-2-mark.rutland@arm.com Signed-off-by: Catalin Marinas catalin.marinas@arm.com Stable-dep-of: 031af50045ea ("arm64: cmpxchg_double*: hazard against entire exchange variable") Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/include/asm/atomic_ll_sc.h | 40 ++++++--------------------- 1 file changed, 9 insertions(+), 31 deletions(-)
diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h index fe0db8d416fb..906e2d8c254c 100644 --- a/arch/arm64/include/asm/atomic_ll_sc.h +++ b/arch/arm64/include/asm/atomic_ll_sc.h @@ -12,19 +12,6 @@
#include <linux/stringify.h>
-#ifdef CONFIG_ARM64_LSE_ATOMICS -#define __LL_SC_FALLBACK(asm_ops) \ -" b 3f\n" \ -" .subsection 1\n" \ -"3:\n" \ -asm_ops "\n" \ -" b 4f\n" \ -" .previous\n" \ -"4:\n" -#else -#define __LL_SC_FALLBACK(asm_ops) asm_ops -#endif - #ifndef CONFIG_CC_HAS_K_CONSTRAINT #define K #endif @@ -43,12 +30,11 @@ __ll_sc_atomic_##op(int i, atomic_t *v) \ int result; \ \ asm volatile("// atomic_" #op "\n" \ - __LL_SC_FALLBACK( \ " prfm pstl1strm, %2\n" \ "1: ldxr %w0, %2\n" \ " " #asm_op " %w0, %w0, %w3\n" \ " stxr %w1, %w0, %2\n" \ - " cbnz %w1, 1b\n") \ + " cbnz %w1, 1b\n" \ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ : __stringify(constraint) "r" (i)); \ } @@ -61,13 +47,12 @@ __ll_sc_atomic_##op##_return##name(int i, atomic_t *v) \ int result; \ \ asm volatile("// atomic_" #op "_return" #name "\n" \ - __LL_SC_FALLBACK( \ " prfm pstl1strm, %2\n" \ "1: ld" #acq "xr %w0, %2\n" \ " " #asm_op " %w0, %w0, %w3\n" \ " st" #rel "xr %w1, %w0, %2\n" \ " cbnz %w1, 1b\n" \ - " " #mb ) \ + " " #mb \ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ : __stringify(constraint) "r" (i) \ : cl); \ @@ -83,13 +68,12 @@ __ll_sc_atomic_fetch_##op##name(int i, atomic_t *v) \ int val, result; \ \ asm volatile("// atomic_fetch_" #op #name "\n" \ - __LL_SC_FALLBACK( \ " prfm pstl1strm, %3\n" \ "1: ld" #acq "xr %w0, %3\n" \ " " #asm_op " %w1, %w0, %w4\n" \ " st" #rel "xr %w2, %w1, %3\n" \ " cbnz %w2, 1b\n" \ - " " #mb ) \ + " " #mb \ : "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter) \ : __stringify(constraint) "r" (i) \ : cl); \ @@ -142,12 +126,11 @@ __ll_sc_atomic64_##op(s64 i, atomic64_t *v) \ unsigned long tmp; \ \ asm volatile("// atomic64_" #op "\n" \ - __LL_SC_FALLBACK( \ " prfm pstl1strm, %2\n" \ "1: ldxr %0, %2\n" \ " " #asm_op " %0, %0, %3\n" \ " stxr %w1, %0, %2\n" \ - " cbnz %w1, 1b") \ + " cbnz %w1, 1b" \ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ : __stringify(constraint) "r" (i)); \ } @@ -160,13 +143,12 @@ __ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v) \ unsigned long tmp; \ \ asm volatile("// atomic64_" #op "_return" #name "\n" \ - __LL_SC_FALLBACK( \ " prfm pstl1strm, %2\n" \ "1: ld" #acq "xr %0, %2\n" \ " " #asm_op " %0, %0, %3\n" \ " st" #rel "xr %w1, %0, %2\n" \ " cbnz %w1, 1b\n" \ - " " #mb ) \ + " " #mb \ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ : __stringify(constraint) "r" (i) \ : cl); \ @@ -182,13 +164,12 @@ __ll_sc_atomic64_fetch_##op##name(s64 i, atomic64_t *v) \ unsigned long tmp; \ \ asm volatile("// atomic64_fetch_" #op #name "\n" \ - __LL_SC_FALLBACK( \ " prfm pstl1strm, %3\n" \ "1: ld" #acq "xr %0, %3\n" \ " " #asm_op " %1, %0, %4\n" \ " st" #rel "xr %w2, %1, %3\n" \ " cbnz %w2, 1b\n" \ - " " #mb ) \ + " " #mb \ : "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter) \ : __stringify(constraint) "r" (i) \ : cl); \ @@ -240,7 +221,6 @@ __ll_sc_atomic64_dec_if_positive(atomic64_t *v) unsigned long tmp;
asm volatile("// atomic64_dec_if_positive\n" - __LL_SC_FALLBACK( " prfm pstl1strm, %2\n" "1: ldxr %0, %2\n" " subs %0, %0, #1\n" @@ -248,7 +228,7 @@ __ll_sc_atomic64_dec_if_positive(atomic64_t *v) " stlxr %w1, %0, %2\n" " cbnz %w1, 1b\n" " dmb ish\n" - "2:") + "2:" : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) : : "cc", "memory"); @@ -274,7 +254,6 @@ __ll_sc__cmpxchg_case_##name##sz(volatile void *ptr, \ old = (u##sz)old; \ \ asm volatile( \ - __LL_SC_FALLBACK( \ " prfm pstl1strm, %[v]\n" \ "1: ld" #acq "xr" #sfx "\t%" #w "[oldval], %[v]\n" \ " eor %" #w "[tmp], %" #w "[oldval], %" #w "[old]\n" \ @@ -282,7 +261,7 @@ __ll_sc__cmpxchg_case_##name##sz(volatile void *ptr, \ " st" #rel "xr" #sfx "\t%w[tmp], %" #w "[new], %[v]\n" \ " cbnz %w[tmp], 1b\n" \ " " #mb "\n" \ - "2:") \ + "2:" \ : [tmp] "=&r" (tmp), [oldval] "=&r" (oldval), \ [v] "+Q" (*(u##sz *)ptr) \ : [old] __stringify(constraint) "r" (old), [new] "r" (new) \ @@ -326,7 +305,6 @@ __ll_sc__cmpxchg_double##name(unsigned long old1, \ unsigned long tmp, ret; \ \ asm volatile("// __cmpxchg_double" #name "\n" \ - __LL_SC_FALLBACK( \ " prfm pstl1strm, %2\n" \ "1: ldxp %0, %1, %2\n" \ " eor %0, %0, %3\n" \ @@ -336,7 +314,7 @@ __ll_sc__cmpxchg_double##name(unsigned long old1, \ " st" #rel "xp %w0, %5, %6, %2\n" \ " cbnz %w0, 1b\n" \ " " #mb "\n" \ - "2:") \ + "2:" \ : "=&r" (tmp), "=&r" (ret), "+Q" (*(unsigned long *)ptr) \ : "r" (old1), "r" (old2), "r" (new1), "r" (new2) \ : cl); \
From: Mark Rutland mark.rutland@arm.com
[ Upstream commit 031af50045ea97ed4386eb3751ca2c134d0fc911 ]
The inline assembly for arm64's cmpxchg_double*() implementations use a +Q constraint to hazard against other accesses to the memory location being exchanged. However, the pointer passed to the constraint is a pointer to unsigned long, and thus the hazard only applies to the first 8 bytes of the location.
GCC can take advantage of this, assuming that other portions of the location are unchanged, leading to a number of potential problems.
This is similar to what we fixed back in commit:
fee960bed5e857eb ("arm64: xchg: hazard against entire exchange variable")
... but we forgot to adjust cmpxchg_double*() similarly at the same time.
The same problem applies, as demonstrated with the following test:
| struct big {
|         u64 lo, hi;
| } __aligned(128);
|
| unsigned long foo(struct big *b)
| {
|         u64 hi_old, hi_new;
|
|         hi_old = b->hi;
|         cmpxchg_double_local(&b->lo, &b->hi, 0x12, 0x34, 0x56, 0x78);
|         hi_new = b->hi;
|
|         return hi_old ^ hi_new;
| }
... which GCC 12.1.0 compiles as:
| 0000000000000000 <foo>: | 0: d503233f paciasp | 4: aa0003e4 mov x4, x0 | 8: 1400000e b 40 <foo+0x40> | c: d2800240 mov x0, #0x12 // #18 | 10: d2800681 mov x1, #0x34 // #52 | 14: aa0003e5 mov x5, x0 | 18: aa0103e6 mov x6, x1 | 1c: d2800ac2 mov x2, #0x56 // #86 | 20: d2800f03 mov x3, #0x78 // #120 | 24: 48207c82 casp x0, x1, x2, x3, [x4] | 28: ca050000 eor x0, x0, x5 | 2c: ca060021 eor x1, x1, x6 | 30: aa010000 orr x0, x0, x1 | 34: d2800000 mov x0, #0x0 // #0 <--- BANG | 38: d50323bf autiasp | 3c: d65f03c0 ret | 40: d2800240 mov x0, #0x12 // #18 | 44: d2800681 mov x1, #0x34 // #52 | 48: d2800ac2 mov x2, #0x56 // #86 | 4c: d2800f03 mov x3, #0x78 // #120 | 50: f9800091 prfm pstl1strm, [x4] | 54: c87f1885 ldxp x5, x6, [x4] | 58: ca0000a5 eor x5, x5, x0 | 5c: ca0100c6 eor x6, x6, x1 | 60: aa0600a6 orr x6, x5, x6 | 64: b5000066 cbnz x6, 70 <foo+0x70> | 68: c8250c82 stxp w5, x2, x3, [x4] | 6c: 35ffff45 cbnz w5, 54 <foo+0x54> | 70: d2800000 mov x0, #0x0 // #0 <--- BANG | 74: d50323bf autiasp | 78: d65f03c0 ret
Notice that at the lines with "BANG" comments, GCC has assumed that the higher 8 bytes are unchanged by the cmpxchg_double() call, and that `hi_old ^ hi_new` can be reduced to a constant zero, for both LSE and LL/SC versions of cmpxchg_double().
This patch fixes the issue by passing a pointer to __uint128_t into the +Q constraint, ensuring that the compiler hazards against the entire 16 bytes being modified.
With this change, GCC 12.1.0 compiles the above test as:
| 0000000000000000 <foo>: | 0: f9400407 ldr x7, [x0, #8] | 4: d503233f paciasp | 8: aa0003e4 mov x4, x0 | c: 1400000f b 48 <foo+0x48> | 10: d2800240 mov x0, #0x12 // #18 | 14: d2800681 mov x1, #0x34 // #52 | 18: aa0003e5 mov x5, x0 | 1c: aa0103e6 mov x6, x1 | 20: d2800ac2 mov x2, #0x56 // #86 | 24: d2800f03 mov x3, #0x78 // #120 | 28: 48207c82 casp x0, x1, x2, x3, [x4] | 2c: ca050000 eor x0, x0, x5 | 30: ca060021 eor x1, x1, x6 | 34: aa010000 orr x0, x0, x1 | 38: f9400480 ldr x0, [x4, #8] | 3c: d50323bf autiasp | 40: ca0000e0 eor x0, x7, x0 | 44: d65f03c0 ret | 48: d2800240 mov x0, #0x12 // #18 | 4c: d2800681 mov x1, #0x34 // #52 | 50: d2800ac2 mov x2, #0x56 // #86 | 54: d2800f03 mov x3, #0x78 // #120 | 58: f9800091 prfm pstl1strm, [x4] | 5c: c87f1885 ldxp x5, x6, [x4] | 60: ca0000a5 eor x5, x5, x0 | 64: ca0100c6 eor x6, x6, x1 | 68: aa0600a6 orr x6, x5, x6 | 6c: b5000066 cbnz x6, 78 <foo+0x78> | 70: c8250c82 stxp w5, x2, x3, [x4] | 74: 35ffff45 cbnz w5, 5c <foo+0x5c> | 78: f9400480 ldr x0, [x4, #8] | 7c: d50323bf autiasp | 80: ca0000e0 eor x0, x7, x0 | 84: d65f03c0 ret
... sampling the high 8 bytes before and after the cmpxchg, and performing an EOR, as we'd expect.
For backporting, I've tested this atop linux-4.9.y with GCC 5.5.0. Note that linux-4.9.y is the oldest currently supported stable release, and mandates GCC 5.1+. Unfortunately I couldn't get a GCC 5.1 binary to run on my machines due to library incompatibilities.
I've also used a standalone test to check that we can use a __uint128_t pointer in a +Q constraint at least as far back as GCC 4.8.5 and LLVM 3.9.1.
Fixes: 5284e1b4bc8a ("arm64: xchg: Implement cmpxchg_double") Fixes: e9a4b795652f ("arm64: cmpxchg_dbl: patch in lse instructions when supported by the CPU") Reported-by: Boqun Feng boqun.feng@gmail.com Link: https://lore.kernel.org/lkml/Y6DEfQXymYVgL3oJ@boqun-archlinux/ Reported-by: Peter Zijlstra peterz@infradead.org Link: https://lore.kernel.org/lkml/Y6GXoO4qmH9OIZ5Q@hirez.programming.kicks-ass.ne... Signed-off-by: Mark Rutland mark.rutland@arm.com Cc: stable@vger.kernel.org Cc: Arnd Bergmann arnd@arndb.de Cc: Catalin Marinas catalin.marinas@arm.com Cc: Steve Capper steve.capper@arm.com Cc: Will Deacon will@kernel.org Link: https://lore.kernel.org/r/20230104151626.3262137-1-mark.rutland@arm.com Signed-off-by: Will Deacon will@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/include/asm/atomic_ll_sc.h | 2 +- arch/arm64/include/asm/atomic_lse.h | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h index 906e2d8c254c..abd302e521c0 100644 --- a/arch/arm64/include/asm/atomic_ll_sc.h +++ b/arch/arm64/include/asm/atomic_ll_sc.h @@ -315,7 +315,7 @@ __ll_sc__cmpxchg_double##name(unsigned long old1, \ " cbnz %w0, 1b\n" \ " " #mb "\n" \ "2:" \ - : "=&r" (tmp), "=&r" (ret), "+Q" (*(unsigned long *)ptr) \ + : "=&r" (tmp), "=&r" (ret), "+Q" (*(__uint128_t *)ptr) \ : "r" (old1), "r" (old2), "r" (new1), "r" (new2) \ : cl); \ \ diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h index ab661375835e..28e96118c1e5 100644 --- a/arch/arm64/include/asm/atomic_lse.h +++ b/arch/arm64/include/asm/atomic_lse.h @@ -403,7 +403,7 @@ __lse__cmpxchg_double##name(unsigned long old1, \ " eor %[old2], %[old2], %[oldval2]\n" \ " orr %[old1], %[old1], %[old2]" \ : [old1] "+&r" (x0), [old2] "+&r" (x1), \ - [v] "+Q" (*(unsigned long *)ptr) \ + [v] "+Q" (*(__uint128_t *)ptr) \ : [new1] "r" (x2), [new2] "r" (x3), [ptr] "r" (x4), \ [oldval1] "r" (oldval1), [oldval2] "r" (oldval2) \ : cl); \
From: Johan Hovold johan+linaro@kernel.org
[ Upstream commit 703c13fe3c9af557d312f5895ed6a5fda2711104 ]
In cases where runtime services are not supported or have been disabled, the runtime services workqueue will never have been allocated.
In the unlikely event that EFI initialisation fails, do not try to destroy the workqueue unconditionally, so that a NULL pointer is not dereferenced.
Fixes: 98086df8b70c ("efi: add missed destroy_workqueue when efisubsys_init fails") Cc: stable@vger.kernel.org Cc: Li Heng liheng40@huawei.com Signed-off-by: Johan Hovold johan+linaro@kernel.org Signed-off-by: Ard Biesheuvel ardb@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/firmware/efi/efi.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
index ba03f5a4b30c..a2765d668856 100644
--- a/drivers/firmware/efi/efi.c
+++ b/drivers/firmware/efi/efi.c
@@ -385,8 +385,8 @@ static int __init efisubsys_init(void)
 	efi_kobj = kobject_create_and_add("efi", firmware_kobj);
 	if (!efi_kobj) {
 		pr_err("efi: Firmware registration failed.\n");
-		destroy_workqueue(efi_rts_wq);
-		return -ENOMEM;
+		error = -ENOMEM;
+		goto err_destroy_wq;
 	}
 
 	if (efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE |
@@ -429,7 +429,10 @@ static int __init efisubsys_init(void)
 	generic_ops_unregister();
 err_put:
 	kobject_put(efi_kobj);
-	destroy_workqueue(efi_rts_wq);
+err_destroy_wq:
+	if (efi_rts_wq)
+		destroy_workqueue(efi_rts_wq);
+
 	return error;
 }
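[Editorial note: the error path must only tear down what this init path actually set up. A minimal, self-contained illustration of that guard, in plain C with made-up names rather than the kernel code itself:]

#include <stdlib.h>

/* Stand-in for efi_rts_wq: legitimately stays NULL when runtime
 * services are unsupported or disabled. */
static void *rts_wq;

static int subsys_init(int runtime_supported)
{
	int error;

	if (runtime_supported)
		rts_wq = malloc(64);		/* "create the workqueue" */

	/* ... pretend a later registration step fails ... */
	error = -1;

	if (rts_wq)				/* guard: may never have been allocated */
		free(rts_wq);			/* "destroy the workqueue" */
	return error;
}

int main(void)
{
	/* Must not crash even though the workqueue was never created. */
	return subsys_init(0) == -1 ? 0 : 1;
}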
From: Rob Clark robdclark@chromium.org
[ Upstream commit 52531258318ed59a2dc5a43df2eaf0eb1d65438e ]
Userspace can guess the handle value and try to race GEM object creation with handle close, resulting in a use-after-free if we dereference the object after dropping the handle's reference. For that reason, dropping the handle's reference must be done *after* we are done dereferencing the object.
Signed-off-by: Rob Clark robdclark@chromium.org
Reviewed-by: Chia-I Wu olvaffe@gmail.com
Fixes: 62fb7a5e1096 ("virtio-gpu: add 3d/virgl support")
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Osipenko dmitry.osipenko@collabora.com
Link: https://patchwork.freedesktop.org/patch/msgid/20221216233355.542197-2-robdcl...
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/gpu/drm/virtio/virtgpu_ioctl.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 33b8ebab178a..36efa273155d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -279,10 +279,18 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
 		drm_gem_object_release(obj);
 		return ret;
 	}
-	drm_gem_object_put(obj);
 
 	rc->res_handle = qobj->hw_res_handle; /* similiar to a VM address */
 	rc->bo_handle = handle;
+
+	/*
+	 * The handle owns the reference now.  But we must drop our
+	 * remaining reference *after* we no longer need to dereference
+	 * the obj.  Otherwise userspace could guess the handle and
+	 * race closing it from another thread.
+	 */
+	drm_gem_object_put(obj);
+
 	return 0;
 }
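[Editorial note: the ordering rule the patch enforces (dereference first, drop the local reference last) can be illustrated with a tiny reference-counting sketch. The types and helpers below are invented for illustration; they are not the virtio-gpu driver's API.]

#include <stdlib.h>

struct gobj {
	int refcount;
	int hw_res_handle;
};

/* Drops one reference and may free the object, e.g. when another
 * thread has already closed the userspace handle holding the other
 * reference. */
static void gobj_put(struct gobj *o)
{
	if (--o->refcount == 0)
		free(o);
}

static int publish(struct gobj *o, int *out_res, int handle, int *out_bo)
{
	/*
	 * Wrong order would be: gobj_put(o); *out_res = o->hw_res_handle;
	 * That dereferences o after our reference is gone, which is a
	 * use-after-free if the handle is raced closed.  Instead:
	 */
	*out_res = o->hw_res_handle;	/* last dereference of o */
	*out_bo = handle;
	gobj_put(o);			/* only now drop our reference */
	return 0;
}

int main(void)
{
	struct gobj *o = calloc(1, sizeof(*o));
	int res, bo;

	o->refcount = 1;
	o->hw_res_handle = 7;
	return publish(o, &res, 3, &bo) == 0 && res == 7 ? 0 : 1;
}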
From: Jens Axboe axboe@kernel.dk
commit af82425c6a2d2f347c79b63ce74fca6dc6be157f upstream.
If we cancel the task_work, the worker will never come into existence. As this is the last reference to it, ensure that it gets freed appropriately.
Cc: stable@vger.kernel.org
Reported-by: 진호 wnwlsgh98@gmail.com
Signed-off-by: Jens Axboe axboe@kernel.dk
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 io_uring/io-wq.c | 1 +
 1 file changed, 1 insertion(+)
--- a/io_uring/io-wq.c
+++ b/io_uring/io-wq.c
@@ -1217,6 +1217,7 @@ static void io_wq_cancel_tw_create(struc
 
 		worker = container_of(cb, struct io_worker, create_work);
 		io_worker_cancel_cb(worker);
+		kfree(worker);
 	}
 }
From: Jens Axboe axboe@kernel.dk
commit e6db6f9398dadcbc06318a133d4c44a2d3844e61 upstream.
We have two types of task_work based creation: one uses an existing worker to set up a new one (e.g. when going to sleep and we have no free workers), and the other allocates a new worker. Only the latter should be freed when we cancel task_work creation for a new worker.
Fixes: af82425c6a2d ("io_uring/io-wq: free worker if task_work creation is canceled")
Reported-by: syzbot+d56ec896af3637bdb7e4@syzkaller.appspotmail.com
Signed-off-by: Jens Axboe axboe@kernel.dk
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 io_uring/io-wq.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
--- a/io_uring/io-wq.c
+++ b/io_uring/io-wq.c
@@ -1217,7 +1217,12 @@ static void io_wq_cancel_tw_create(struc
 
 		worker = container_of(cb, struct io_worker, create_work);
 		io_worker_cancel_cb(worker);
-		kfree(worker);
+		/*
+		 * Only the worker continuation helper has worker allocated and
+		 * hence needs freeing.
+		 */
+		if (cb->func == create_worker_cont)
+			kfree(worker);
 	}
 }
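[Editorial note: the fix keys the cleanup on which callback queued the task_work, because only the "allocate a brand-new worker" path owns the allocation. A generic sketch of that pattern in plain C follows; the names are invented and this is not the io_uring code.]

#include <stddef.h>
#include <stdlib.h>

struct cbk;
typedef void (*cb_fn)(struct cbk *cb);

struct cbk {
	cb_fn func;		/* identifies which creation path queued it */
};

struct worker {
	struct cbk create_work;	/* embedded callback, as in io-wq */
};

static void create_worker_cont(struct cbk *cb) { (void)cb; }	/* brand-new worker (heap) */
static void create_worker_cb(struct cbk *cb)   { (void)cb; }	/* reuses an existing worker */

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static void cancel_tw_create(struct cbk *cb)
{
	struct worker *worker = container_of(cb, struct worker, create_work);

	/* Free only when this cancel path owns the allocation; freeing on
	 * the other path would free a worker that is still in use. */
	if (cb->func == create_worker_cont)
		free(worker);
}

int main(void)
{
	struct worker existing = { .create_work = { .func = create_worker_cb } };
	struct worker *fresh = calloc(1, sizeof(*fresh));

	fresh->create_work.func = create_worker_cont;

	cancel_tw_create(&existing.create_work);	/* not freed: not owned */
	cancel_tw_create(&fresh->create_work);		/* freed: allocated for creation */
	return 0;
}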
From: Ferry Toth ftoth@exalondelft.nl
commit b659b613cea2ae39746ca8bd2b69d1985dd9d770 upstream.
This reverts commit 8a7b31d545d3a15f0e6f5984ae16f0ca4fd76aac.
This patch results in qemu test failures, specifically on the xilinx-zynq-a9 machine with the zynq-zc702 and zynq-zed devicetree files, when trying to boot from a USB drive.
Link: https://lore.kernel.org/lkml/20221220194334.GA942039@roeck-us.net/
Fixes: 8a7b31d545d3 ("usb: ulpi: defer ulpi_register on ulpi_read_id timeout")
Cc: stable@vger.kernel.org
Reported-by: Guenter Roeck linux@roeck-us.net
Signed-off-by: Ferry Toth ftoth@exalondelft.nl
Link: https://lore.kernel.org/r/20221222205302.45761-1-ftoth@exalondelft.nl
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/usb/common/ulpi.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/usb/common/ulpi.c
+++ b/drivers/usb/common/ulpi.c
@@ -208,7 +208,7 @@ static int ulpi_read_id(struct ulpi *ulp
 	/* Test the interface */
 	ret = ulpi_write(ulpi, ULPI_SCRATCH, 0xaa);
 	if (ret < 0)
-		return ret;
+		goto err;
 
 	ret = ulpi_read(ulpi, ULPI_SCRATCH);
 	if (ret < 0)
Hello!
On Mon, 16 Jan 2023 at 10:06, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 5.10.164 release. There are 64 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 18 Jan 2023 15:47:28 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.10.164-rc... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.10.y and the diffstat can be found below.
thanks,
greg k-h
Preliminarily,
| /builds/linux/drivers/gpu/drm/msm/dp/dp_aux.c: In function 'dp_aux_isr':
| /builds/linux/drivers/gpu/drm/msm/dp/dp_aux.c:427:14: error: 'isr' undeclared (first use in this function); did you mean 'idr'?
|   427 |         if (!isr)
|       |              ^~~
|       |              idr
It's currently failing for arm, arm64, (not i386) and x86, with GCC 8, 10, 11, 12; Clang 15 and nightly. We'll test the extended set of architectures and update momentarily.
Greetings!
Daniel Díaz daniel.diaz@linaro.org
Hi!
On Mon, 16 Jan 2023 at 10:06, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 5.10.164 release. There are 64 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 18 Jan 2023 15:47:28 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.10.164-rc... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.10.y and the diffstat can be found below.
thanks,
greg k-h
Preliminarily,
| /builds/linux/drivers/gpu/drm/msm/dp/dp_aux.c: In function 'dp_aux_isr':
| /builds/linux/drivers/gpu/drm/msm/dp/dp_aux.c:427:14: error: 'isr' undeclared (first use in this function); did you mean 'idr'?
|   427 |         if (!isr)
|       |              ^~~
|       |              idr
It's currently failing for arm, arm64, (not i386) and x86, with GCC 8, 10, 11, 12; Clang 15 and nightly. We'll test the extended set of architectures and update momentarily.
CIP testing sees the same build problem:
https://gitlab.com/cip-project/cip-testing/linux-stable-rc-ci/-/pipelines/74...
  CC [M]  drivers/gpu/drm/msm/dp/dp_display.o
drivers/gpu/drm/msm/dp/dp_aux.c: In function 'dp_aux_isr':
drivers/gpu/drm/msm/dp/dp_aux.c:427:7: error: 'isr' undeclared (first use in this function); did you mean 'idr'?
  427 |  if (!isr)
      |       ^~~
      |       idr
drivers/gpu/drm/msm/dp/dp_aux.c:427:7: note: each undeclared identifier is reported only once for each function it appears in
make[4]: *** [scripts/Makefile.build:286: drivers/gpu/drm/msm/dp/dp_aux.o] Error 1
make[4]: *** Waiting for unfinished jobs....
Best regards, Pavel
On Mon, Jan 16, 2023 at 12:58:35PM -0600, Daniel Díaz wrote:
Hello!
On Mon, 16 Jan 2023 at 10:06, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 5.10.164 release. There are 64 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 18 Jan 2023 15:47:28 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.10.164-rc... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.10.y and the diffstat can be found below.
thanks,
greg k-h
Preliminarily,
| /builds/linux/drivers/gpu/drm/msm/dp/dp_aux.c: In function 'dp_aux_isr':
| /builds/linux/drivers/gpu/drm/msm/dp/dp_aux.c:427:14: error: 'isr' undeclared (first use in this function); did you mean 'idr'?
|   427 |         if (!isr)
|       |              ^~~
|       |              idr
It's currently failing for arm, arm64, (not i386) and x86, with GCC 8, 10, 11, 12; Clang 15 and nightly. We'll test the extended set of architectures and update momentarily.
Thanks for the report, now fixed up in my tree, I'll push out a new -rc2 later today with it.
greg k-h
On 1/16/23 08:51, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.10.164 release. There are 64 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 18 Jan 2023 15:47:28 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.10.164-rc... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.10.y and the diffstat can be found below.
thanks,
greg k-h
Compiled and booted on my test system. No dmesg regressions.
Tested-by: Shuah Khan skhan@linuxfoundation.org
thanks, -- Shuah
Hi Greg,
On Mon, Jan 16, 2023 at 04:51:07PM +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.10.164 release. There are 64 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 18 Jan 2023 15:47:28 +0000. Anything received after that time might be too late.
Build test (gcc version 11.3.1 20221127):
mips: 63 configs -> no failure
arm: 104 configs -> 1 failure
arm64: 3 configs -> 1 failure
x86_64: 4 configs -> no failure
alpha allmodconfig -> no failure
powerpc allmodconfig -> no failure
riscv allmodconfig -> no failure
s390 allmodconfig -> no failure
xtensa allmodconfig -> no failure
Note: arm and arm64 builds fail with the error:
drivers/gpu/drm/msm/dp/dp_aux.c: In function 'dp_aux_isr':
drivers/gpu/drm/msm/dp/dp_aux.c:427:14: error: 'isr' undeclared (first use in this function); did you mean 'idr'?
  427 |         if (!isr)
      |              ^~~
Boot test:
x86_64: Booted on my test laptop. No regression.
x86_64: Booted on qemu. No regression. [1]
arm64: Booted on rpi4b (4GB model). No regression. [2]
[1]. https://openqa.qa.codethink.co.uk/tests/2661 [2]. https://openqa.qa.codethink.co.uk/tests/2667
Tested-by: Sudip Mukherjee sudip.mukherjee@codethink.co.uk
On Tue, Jan 17, 2023 at 12:35:20PM +0000, Sudip Mukherjee wrote:
Hi Greg,
On Mon, Jan 16, 2023 at 04:51:07PM +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.10.164 release. There are 64 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 18 Jan 2023 15:47:28 +0000. Anything received after that time might be too late.
Build test (gcc version 11.3.1 20221127):
mips: 63 configs -> no failure
arm: 104 configs -> 1 failure
arm64: 3 configs -> 1 failure
x86_64: 4 configs -> no failure
alpha allmodconfig -> no failure
powerpc allmodconfig -> no failure
riscv allmodconfig -> no failure
s390 allmodconfig -> no failure
xtensa allmodconfig -> no failure
Note: arm and arm64 builds fail with the error:
drivers/gpu/drm/msm/dp/dp_aux.c: In function 'dp_aux_isr':
drivers/gpu/drm/msm/dp/dp_aux.c:427:14: error: 'isr' undeclared (first use in this function); did you mean 'idr'?
  427 |         if (!isr)
      |              ^~~
Should now be fixed in -rc2.
thanks,
greg k-h