This is the start of the stable review cycle for the 5.5.18 release. There are 257 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Sat, 18 Apr 2020 13:11:20 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.5.18-rc1.... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.5.y and the diffstat can be found below.
thanks,
greg k-h
------------- Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 5.5.18-rc1
Faiz Abbas faiz_abbas@ti.com mmc: sdhci: Refactor sdhci_set_timeout()
Faiz Abbas faiz_abbas@ti.com mmc: sdhci: Convert sdhci_set_timeout_irq() to non-static
Christophe Leroy christophe.leroy@c-s.fr powerpc/kasan: Fix kasan_remap_early_shadow_ro()
Peter Zijlstra peterz@infradead.org perf/core: Remove 'struct sched_in_data'
Peter Zijlstra peterz@infradead.org perf/core: Fix event cgroup tracking
Peter Zijlstra peterz@infradead.org perf/core: Unify {pinned,flexible}_sched_in()
Trond Myklebust trond.myklebust@hammerspace.com NFS: finish_automount() requires us to hold 2 refs to the mount record
Imre Deak imre.deak@intel.com drm/i915/icl+: Don't enable DDI IO power on a TypeC port in TBT mode
Prike Liang Prike.Liang@amd.com drm/amdgpu: fix gfx hang during suspend with video playback (v2)
Lyude Paul lyude@redhat.com drm/dp_mst: Fix clearing payload state on topology disable
Sasha Levin sashal@kernel.org Revert "drm/dp_mst: Remove VCPI while disabling topology mgr"
Torsten Duwe duwe@lst.de drm/bridge: analogix-anx78xx: Fix drm_dp_link helper removal
Mark Brown broonie@kernel.org arm64: Always force a branch protection mode when the compiler has one
Sreekanth Reddy sreekanth.reddy@broadcom.com scsi: mpt3sas: Fix kernel panic observed on soft HBA unplug
Michael Ellerman mpe@ellerman.id.au powerpc/64: Prevent stack protection in early boot
Christophe Leroy christophe.leroy@c-s.fr powerpc/kprobes: Ignore traps that happened in real mode
Cédric Le Goater clg@kaod.org powerpc/xive: Fix xmon support on the PowerNV platform
Daniel Axtens dja@axtens.net powerpc/64: Setup a paca before parsing device tree etc.
Cédric Le Goater clg@kaod.org powerpc/xive: Use XIVE_BAD_IRQ instead of zero to catch non configured IPIs
Aneesh Kumar K.V aneesh.kumar@linux.ibm.com powerpc/hash64/devmap: Use H_PAGE_THP_HUGE when setting up huge devmap PTE entries
Laurentiu Tudor laurentiu.tudor@nxp.com powerpc/fsl_booke: Avoid creating duplicate tlb1 entry
Michael Ellerman mpe@ellerman.id.au powerpc/64/tm: Don't let userspace set regs->trap via sigreturn
Clement Courbet courbet@google.com powerpc: Make setjmp/longjmp signature standard
Juergen Gross jgross@suse.com xen/blkfront: fix memory allocation flags in blkfront_setup_indirect()
Wen Yang wenyang@linux.alibaba.com ipmi: fix hung processes in __get_guid()
Kai-Heng Feng kai.heng.feng@canonical.com libata: Return correct status in sata_pmp_eh_recover_pm() when ATA_DFLAG_DETACH is set
Simon Gander simon@tuxera.com hfsplus: fix crash and filesystem corruption when deleting files
Oliver O'Halloran oohall@gmail.com cpufreq: powernv: Fix use-after-free
Eric Biggers ebiggers@google.com kmod: make request_module() return an error when autoloading is disabled
Paul Cercueil paul@crapouillou.net clk: ingenic/TCU: Fix round_rate returning error
Paul Cercueil paul@crapouillou.net clk: ingenic/jz4770: Exit with error if CGU init failed
Masami Hiramatsu mhiramat@kernel.org ftrace/kprobe: Show the maxactive number on kprobe_events
Hans de Goede hdegoede@redhat.com Input: i8042 - add Acer Aspire 5738z to nomux list
Michael Mueller mimu@linux.ibm.com s390/diag: fix display of diagnose call statistics
Sam Lunt samueljlunt@gmail.com perf tools: Support Python 3.8+ in Makefile
Changwei Ge chge@linux.alibaba.com ocfs2: no need try to truncate file beyond i_size
Eric Biggers ebiggers@google.com fs/filesystems.c: downgrade user-reachable WARN_ONCE() to pr_warn_once()
Chris Wilson chris@chris-wilson.co.uk drm/i915/gt: Treat idling as a RPS downclock event
Qian Cai cai@lca.pw ext4: fix a data race at inode->i_blocks
Trond Myklebust trond.myklebust@hammerspace.com NFS: Fix a page leak in nfs_destroy_unlinked_subrequests()
Trond Myklebust trond.myklebust@hammerspace.com NFS: Fix use-after-free issues in nfs_pageio_add_request()
J. Bruce Fields bfields@redhat.com nfsd: fsnotify on rmdir under nfsd/clients/
Hans de Goede hdegoede@redhat.com drm/vboxvideo: Add missing remove_conflicting_pci_framebuffers call, v2
Libor Pechacek lpechacek@suse.cz powerpc/pseries: Avoid NULL pointer dereference when drmem is unavailable
Marek Szyprowski m.szyprowski@samsung.com drm/prime: fix extracting of the DMA addresses from a scatterlist
Michael Strauss michael.strauss@amd.com drm/amd/display: Check for null fclk voltage when parsing clock table
Aaron Liu aaron.liu@amd.com drm/amdgpu: unify fw_write_wait for new gfx9 asics
Yuxian Dai Yuxian.Dai@amd.com drm/amdgpu/powerplay: using the FCLK DPM table to set the MCLK
Chris Wilson chris@chris-wilson.co.uk drm: Remove PageReserved manipulation from drm_pci_alloc
Christian Gmeiner christian.gmeiner@gmail.com drm/etnaviv: rework perfmon query infrastructure
Chris Wilson chris@chris-wilson.co.uk drm/i915/gem: Flush all the reloc_gpu batch
Eric Auger eric.auger@redhat.com vfio: platform: Switch to platform_get_irq_optional()
Michael Ellerman mpe@ellerman.id.au selftests/powerpc: Fix try-run when source tree is not writable
Christophe Leroy christophe.leroy@c-s.fr selftests/powerpc: Add tlbie_test in .gitignore
Christophe Leroy christophe.leroy@c-s.fr selftests/vm: fix map_hugetlb length used for testing read and write
Michal Hocko mhocko@suse.com selftests: vm: drop dependencies on page flags from mlock2 tests
Fredrik Strupe fredrik@strupe.net arm64: armv8_deprecated: Fix undef_hook mask for thumb setend
Dave Gerlach d-gerlach@ti.com arm64: dts: ti: k3-am65: Add clocks to dwc3 nodes
Marek Szyprowski m.szyprowski@samsung.com ARM: dts: exynos: Fix polarity of the LCD SPI bus on UniversalC210 board
James Smart jsmart2021@gmail.com scsi: lpfc: Fix broken Credit Recovery after driver load
James Smart jsmart2021@gmail.com scsi: lpfc: Fix lpfc_io_buf resource leak in lpfc_get_scsi_buf_s4 error path
Stanley Chu stanley.chu@mediatek.com scsi: ufs: fix Auto-Hibern8 error detection
Steffen Maier maier@linux.ibm.com scsi: zfcp: fix missing erp_lock in port recovery trigger for point-to-point
Gilad Ben-Yossef gilad@benyossef.com crypto: ccree - dec auth tag size from cryptlen map
Gilad Ben-Yossef gilad@benyossef.com crypto: ccree - only try to map auth tag if needed
Gilad Ben-Yossef gilad@benyossef.com crypto: ccree - protect against empty or NULL scatterlists
Andrei Botila andrei.botila@nxp.com crypto: caam - update xts sector size for large input length
Horia Geantă horia.geanta@nxp.com crypto: caam/qi2 - fix chacha20 data size error
Matthew Wilcox (Oracle) willy@infradead.org xarray: Fix early termination of xas_for_each_marked
Matthew Wilcox (Oracle) willy@infradead.org XArray: Fix xas_pause for large multi-index entries
Nikos Tsironis ntsironis@arrikto.com dm clone metadata: Fix return type of dm_clone_nr_of_hydrated_regions()
Nikos Tsironis ntsironis@arrikto.com dm clone: Add missing casts to prevent overflows and data corruption
Nikos Tsironis ntsironis@arrikto.com dm clone: Add overflow check for number of regions
Nikos Tsironis ntsironis@arrikto.com dm clone: Fix handling of partial region discards
Bob Liu bob.liu@oracle.com dm zoned: remove duplicate nr_rnd_zones increase in dmz_init_zone()
Shetty, Harshini X (EXT-Sony Mobile) Harshini.X.Shetty@sony.com dm verity fec: fix memory leak in verity_fec_dtr
Mikulas Patocka mpatocka@redhat.com dm integrity: fix a crash with unusually large tag size
Mikulas Patocka mpatocka@redhat.com dm writecache: add cond_resched to avoid CPU hangs
Jakub Kicinski kuba@kernel.org mm, memcg: do not high throttle allocators based on wraparound
Maxime Ripard maxime@cerno.tech arm64: dts: allwinner: h5: Fix PMU compatible
Scott Wood swood@redhat.com sched/core: Remove duplicate assignment in sched_tick_remote()
Maxime Ripard maxime@cerno.tech arm64: dts: allwinner: h6: Fix PMU compatible
Subash Abhinov Kasiviswanathan subashab@codeaurora.org net: qualcomm: rmnet: Allow configuration updates to existing devices
Anssi Hannula anssi.hannula@bitwise.fi tools: gpio: Fix out-of-tree build regression
Yangbo Lu yangbo.lu@nxp.com mmc: sdhci-of-esdhc: fix esdhc_reset() for different controller versions
Jens Axboe axboe@kernel.dk io_uring: honor original task RLIMIT_FSIZE
Rosioru Dragos dragos.rosioru@nxp.com crypto: mxs-dcp - fix scatterlist linearization for hash
Dan Carpenter dan.carpenter@oracle.com crypto: rng - Fix a refcounting bug in crypto_rng_reset()
Nikita Shubin NShubin@topcon.com remoteproc: Fix NULL pointer dereference in rproc_virtio_notify
Sibi Sankar sibis@codeaurora.org remoteproc: qcom_q6v5_mss: Reload the mba region on coredump
Bjorn Andersson bjorn.andersson@linaro.org remoteproc: qcom_q6v5_mss: Don't reassign mpss region on shutdown
Josef Bacik josef@toxicpanda.com btrfs: use nofs allocations for running delayed items
Robbie Ko robbieko@synology.com btrfs: fix missing semaphore unlock in btrfs_sync_file
Josef Bacik josef@toxicpanda.com btrfs: unset reloc control if we fail to recover
Filipe Manana fdmanana@suse.com btrfs: fix missing file extent item for hole after ranged fsync
Josef Bacik josef@toxicpanda.com btrfs: drop block from cache on error in relocation
Josef Bacik josef@toxicpanda.com btrfs: set update the uuid generation as soon as possible
Josef Bacik josef@toxicpanda.com btrfs: reloc: clean dirty subvols if we fail to start a transaction
Filipe Manana fdmanana@suse.com Btrfs: fix crash during unmount due to race with delayed inode workers
Qu Wenruo wqu@suse.com btrfs: Don't submit any btree write bio if the fs has errors
Piotr Sroka piotrs@cadence.com mtd: rawnand: cadence: reinit completion before executing a new command
Piotr Sroka piotrs@cadence.com mtd: rawnand: cadence: change bad block marker size
Piotr Sroka piotrs@cadence.com mtd: rawnand: cadence: fix the calculation of the avaialble OOB size
Frieder Schrempf frieder.schrempf@kontron.de mtd: spinand: Do not erase the block before writing a bad block marker
Frieder Schrempf frieder.schrempf@kontron.de mtd: spinand: Stop using spinand->oobbuf for buffering bad block markers
Yilu Lin linyilu@huawei.com CIFS: Fix bug which the return value by asynchronous read is error
Steve French stfrench@microsoft.com smb3: fix performance regression with setting mtime
Vitaly Kuznetsov vkuznets@redhat.com KVM: VMX: fix crash cleanup when KVM wasn't used
Sean Christopherson sean.j.christopherson@intel.com KVM: VMX: Add a trampoline to fix VMREAD error handling
Sean Christopherson sean.j.christopherson@intel.com KVM: x86: Gracefully handle __vmalloc() failure during VM allocation
Sean Christopherson sean.j.christopherson@intel.com KVM: VMX: Always VMCLEAR in-use VMCSes during crash with kexec support
Sean Christopherson sean.j.christopherson@intel.com KVM: x86: Allocate new rmap and large page tracking when moving memslot
David Hildenbrand david@redhat.com KVM: s390: vsie: Fix delivery of addressing exceptions
David Hildenbrand david@redhat.com KVM: s390: vsie: Fix region 1 ASCE sanity shadow address checks
Sean Christopherson sean.j.christopherson@intel.com KVM: nVMX: Properly handle userspace interrupt window request
Fabiano Rosas farosas@linux.ibm.com KVM: PPC: Book3S HV: Skip kvmppc_uvmem_free if Ultravisor is not supported
Kristian Klausen kristian@klausen.dk platform/x86: asus-wmi: Support laptops where the first battery is named BATT
Thomas Gleixner tglx@linutronix.de x86/entry/32: Add missing ASM_CLAC to general_protection entry
Hans de Goede hdegoede@redhat.com x86/tsc_msr: Make MSR derived TSC frequency more accurate
Hans de Goede hdegoede@redhat.com x86/tsc_msr: Fix MSR_FSB_FREQ mask for Cherry Trail devices
Hans de Goede hdegoede@redhat.com x86/tsc_msr: Use named struct initializers
Eric W. Biederman ebiederm@xmission.com signal: Extend exec_id to 64bits
Remi Pommarel repk@triplefau.lt ath9k: Handle txpower changes even when TPC is disabled
Neeraj Upadhyay neeraju@codeaurora.org PM: sleep: wakeup: Skip wakeup_source_sysfs_remove() if device is not there
Ulf Hansson ulf.hansson@linaro.org PM / Domains: Allow no domain-idle-states DT property in genpd when parsing
Gustavo A. R. Silva gustavo@embeddedor.com MIPS: OCTEON: irq: Fix potential NULL pointer dereference
Huacai Chen chenhc@lemote.com MIPS/tlbex: Fix LDDIR usage in setup_pw() for Loongson-3
Vasily Averin vvs@virtuozzo.com pstore: pstore_ftrace_seq_next should increase position index
Jens Axboe axboe@kernel.dk io_uring: remove bogus RLIMIT_NOFILE check in file registration
Sungbo Eo mans0n@gorani.run irqchip/versatile-fpga: Apply clear-mask earlier
Thomas Gleixner tglx@linutronix.de genirq/debugfs: Add missing sanity checks to interrupt injection
Thomas Gleixner tglx@linutronix.de cpu/hotplug: Ignore pm_wakeup_pending() for disable_nonboot_cpus()
Paul E. McKenney paulmck@kernel.org rcu: Make rcu_barrier() account for offline no-CBs CPUs
Ludovic Barre ludovic.barre@st.com mmc: mmci_sdmmc: Fix clear busyd0end irq flag
Yang Xu xuyang2018.jy@cn.fujitsu.com KEYS: reaching the keys quotas correctly
Vasily Averin vvs@virtuozzo.com tpm: tpm2_bios_measurements_next should increase position index
Vasily Averin vvs@virtuozzo.com tpm: tpm1_bios_measurements_next should increase position index
Matthew Garrett matthewgarrett@google.com tpm: Don't make log failures fatal
Vincent Guittot vincent.guittot@linaro.org sched/fair: Fix enqueue_task_fair warning
Gao Xiang xiang@kernel.org erofs: correct the remaining shrink objects
Kishon Vijay Abraham I kishon@ti.com PCI: endpoint: Fix for concurrent memory allocation in OB address region
Bjorn Andersson bjorn.andersson@linaro.org PCI: qcom: Fix the fixup of PCI_VENDOR_ID_QCOM
Sean V Kelley sean.v.kelley@linux.intel.com PCI: Add boot interrupt quirk mechanism for Xeon chipsets
Yicong Yang yangyicong@hisilicon.com PCI/ASPM: Clear the correct bits when enabling L1 substates
Lukas Wunner lukas@wunner.de PCI: pciehp: Fix indefinite wait on sysfs requests
Tom Lendacky thomas.lendacky@amd.com efi/x86: Add TPM related EFI tables to unencrypted mapping checks
James Smart jsmart2021@gmail.com nvme-fc: Revert "add module to ops template to allow module references"
Sagi Grimberg sagi@grimberg.me nvmet-tcp: fix maxh2cdata icresp parameter
Martin Blumenstingl martin.blumenstingl@googlemail.com thermal: devfreq_cooling: inline all stubs for CONFIG_DEVFREQ_THERMAL=n
Rafael J. Wysocki rafael.j.wysocki@intel.com ACPI: PM: s2idle: Refine active GPEs check
Rafael J. Wysocki rafael.j.wysocki@intel.com ACPICA: Allow acpi_any_gpe_status_set() to skip one GPE
Jan Engelhardt jengelh@inai.de acpi/x86: ignore unspecified bit positions in the ACPI global lock field
Rafael J. Wysocki rafael.j.wysocki@intel.com ACPI: EC: Avoid printing confusing messages in acpi_ec_setup()
Sven Schnelle svens@linux.ibm.com seccomp: Add missing compat_ioctl for notify
Benoit Parrot bparrot@ti.com media: ti-vpe: cal: fix a kernel oops when unloading module
Benoit Parrot bparrot@ti.com media: ti-vpe: cal: fix disable_irqs to only the intended target
Andrzej Pietrasiewicz andrzej.p@collabora.com media: hantro: Read be32 words starting at every fourth byte
Stanimir Varbanov stanimir.varbanov@linaro.org media: venus: firmware: Ignore secure call error on first resume
Stanimir Varbanov stanimir.varbanov@linaro.org media: venus: cache vb payload to be used by clock scaling
Takashi Iwai tiwai@suse.de ALSA: hda/realtek - Add quirk for MSI GL63
Hans de Goede hdegoede@redhat.com ALSA: hda/realtek - Add quirk for Lenovo Carbon X1 8th gen
Thomas Hebb tommyhebb@gmail.com ALSA: hda/realtek - Remove now-unnecessary XPS 13 headphone noise fixups
Thomas Hebb tommyhebb@gmail.com ALSA: hda/realtek - Set principled PC Beep configuration for ALC256
Thomas Hebb tommyhebb@gmail.com ALSA: doc: Document PC Beep Hidden Register on Realtek ALC256
Hui Wang hui.wang@canonical.com ALSA: hda/realtek - a fake key event is triggered by running shutup
Kai-Heng Feng kai.heng.feng@canonical.com ALSA: hda/realtek: Enable mute LED on an HP system
Takashi Iwai tiwai@suse.de ALSA: pcm: oss: Fix regression by buffer overflow fix
Takashi Iwai tiwai@suse.de ALSA: ice1724: Fix invalid access for enumerated ctl items
Takashi Iwai tiwai@suse.de ALSA: hda: Fix potential access overflow in beep helper
Takashi Iwai tiwai@suse.de ALSA: hda: Add driver blacklist
Takashi Iwai tiwai@suse.de ALSA: usb-audio: Add mixer workaround for TRX40 and co
Thinh Nguyen Thinh.Nguyen@synopsys.com usb: gadget: composite: Inform controller driver of self-powered
Sriharsha Allenki sallenki@codeaurora.org usb: gadget: f_fs: Fix use after free issue as part of queue failure
이경택 gt82.lee@samsung.com ASoC: topology: use name_prefix for new kcontrol
이경택 gt82.lee@samsung.com ASoC: dpcm: allow start or stop during pause for backend
이경택 gt82.lee@samsung.com ASoC: dapm: connect virtual mux with default value
이경택 gt82.lee@samsung.com ASoC: fix regwmask
Josef Bacik josef@toxicpanda.com btrfs: track reloc roots based on their commit root bytenr
Josef Bacik josef@toxicpanda.com btrfs: restart relocate_tree_blocks properly
Josef Bacik josef@toxicpanda.com btrfs: remove a BUG_ON() from merge_reloc_roots()
Qu Wenruo wqu@suse.com btrfs: qgroup: ensure qgroup_rescan_running is only set when the worker is at least queued
Zhiqiang Liu liuzhiqiang26@huawei.com block, bfq: fix use-after-free in bfq_idle_slice_timer_body
Boqun Feng boqun.feng@gmail.com locking/lockdep: Avoid recursion in lockdep_count_{for,back}ward_deps()
Vladimir Oltean vladimir.oltean@nxp.com spi: spi-fsl-dspi: Replace interruptible wait queue with a simple completion
Junyong Sun sunjy516@gmail.com firmware: fix a double abort case with fw_load_sysfs_fallback
Guoqing Jiang guoqing.jiang@cloud.ionos.com md: check arrays is suspended in mddev_detach before call quiesce operations
Marc Zyngier maz@kernel.org irqchip/gic-v4: Provide irq_retrigger to avoid circular locking dependency
Neil Armstrong narmstrong@baylibre.com usb: dwc3: core: add support for disabling SS instances in park mode
Dongchun Zhu dongchun.zhu@mediatek.com media: i2c: ov5695: Fix power on and off sequences
Hsin-Yi Wang hsinyi@chromium.org media: mtk-vpu: avoid unaligned access to DTCM buffer.
Alexey Dobriyan adobriyan@gmail.com block, zoned: fix integer overflow with BLKRESETZONE et al
Sahitya Tummala stummala@codeaurora.org block: Fix use-after-free issue accessing struct io_cq
Alexander Sverdlin alexander.sverdlin@nokia.com genirq/irqdomain: Check pointer in irq_domain_alloc_irqs_hierarchy()
Ard Biesheuvel ardb@kernel.org efi/x86: Ignore the memory attributes table on i386
Arvind Sankar nivedita@alum.mit.edu x86/boot: Use unsigned comparison for addresses
Peng Fan peng.fan@nxp.com cpufreq: imx6q: fix error handling
Bob Peterson rpeterso@redhat.com gfs2: Don't demote a glock until its revokes are written
Bob Peterson rpeterso@redhat.com gfs2: Do log_flush in gfs2_ail_empty_gl even if ail list is empty
chenqiwu chenqiwu@xiaomi.com pstore/platform: fix potential mem leak if pstore_init_fs failed
John Garry john.garry@huawei.com libata: Remove extra scsi_host_put() in ata_scsi_add_hosts()
Matt Ranostay matt.ranostay@konsulko.com media: i2c: video-i2c: fix build errors due to 'imply hwmon'
Paolo Valente paolo.valente@linaro.org block, bfq: move forward the getting of an extra ref in bfq_bfqq_move
Logan Gunthorpe logang@deltatee.com PCI/switchtec: Fix init_completion race condition with poll_wait()
Andy Lutomirski luto@kernel.org selftests/x86/ptrace_syscall_32: Fix no-vDSO segfault
Tao Zhou ouwen210@hotmail.com sched/fair: Fix condition of avg_load calculation
Michael Wang yun.wang@linux.alibaba.com sched: Avoid scale real weight down to zero
Michael Tretter m.tretter@pengutronix.de media: allegro: fix type of gop_length in channel_create message
Ahmed S. Darwish a.darwish@linutronix.de time/sched_clock: Expire timer in hardirq context
Sungbo Eo mans0n@gorani.run irqchip/versatile-fpga: Handle chained IRQs properly
Vladimir Oltean vladimir.oltean@nxp.com spi: spi-fsl-dspi: Avoid NULL pointer in dspi_slave_abort for non-DMA mode
Taehee Yoo ap420073@gmail.com debugfs: Check module state before warning in {full/open}_proxy_open()
Konstantin Khlebnikov khlebnikov@yandex-team.ru block: keep bdi->io_pages in sync with max_sectors_kb for stacked devices
Thomas Hellstrom thellstrom@vmware.com dma-mapping: Fix dma_pgprot() for unencrypted coherent pages
Thomas Hellstrom thellstrom@vmware.com x86: Don't let pgprot_modify() change the page encryption bit
Rafael J. Wysocki rafael.j.wysocki@intel.com ACPI: EC: Do not clear boot_ec_is_ecdt in acpi_ec_add()
Mathias Nyman mathias.nyman@linux.intel.com xhci: bail out early if driver can't accress host in resume
Laurent Pinchart laurent.pinchart@ideasonboard.com media: imx: imx7-media-csi: Fix video field handling
Laurent Pinchart laurent.pinchart@ideasonboard.com media: imx: imx7_mipi_csis: Power off the source when stopping streaming
Alexey Dobriyan adobriyan@gmail.com null_blk: fix spurious IO errors after failed past-wp access
Bart Van Assche bvanassche@acm.org null_blk: Handle null_add_dev() failures properly
Bart Van Assche bvanassche@acm.org blk-mq: Fix a recently introduced regression in blk_mq_realloc_hw_ctxs()
Bart Van Assche bvanassche@acm.org null_blk: Fix the null_add_dev() error path
Chris Wilson chris@chris-wilson.co.uk sched/vtime: Prevent unstable evaluation of WARN(vtime->state)
James Morse james.morse@arm.com firmware: arm_sdei: fix double-lock on hibernate with shared events
Stephan Gerhold stephan@gerhold.net media: venus: hfi_parser: Ignore HEVC encoding for V1
Dafna Hirschfeld dafna.hirschfeld@collabora.com media: vimc: streamer: fix memory leak in vimc subdevs if kthread_run fails
Ajay Singh ajay.kathat@microchip.com staging: wilc1000: avoid double unlocking of 'wilc->hif_cs' mutex
Ajay Gupta ajayg@nvidia.com usb: ucsi: ccg: disable runtime pm during fw flashing
Robert Richter rrichter@marvell.com EDAC/mc: Report "unknown memory" on too many DIMM labels found
Christoph Niedermaier cniedermaier@dh-electronics.com cpufreq: imx6q: Fixes unwanted cpu overclocking on i.MX6ULL
Mohammad Rasim mohammad.rasim96@gmail.com media: rc: add keymap for Videostrong KII Pro
Chris Packham chris.packham@alliedtelesis.co.nz i2c: pca-platform: Use platform_irq_get_optional
Alain Volmat avolmat@me.com i2c: st: fix missing struct parameter description
Xu Wang vulab@iscas.ac.cn qlcnic: Fix bad kzalloc null test
Ilan Peer ilan.peer@intel.com cfg80211: Do not warn on same channel at the end of CSA
Yintian Tao yttao@amd.com drm/scheduler: fix rare NULL ptr race
Raju Rangoju rajur@chelsio.com cxgb4/ptp: pass the sign of offset delta in FW CMD
Alan Maguire alan.maguire@oracle.com selftests/net: add definition for SOL_DCCP to fix compilation errors for old libc
Luo bin luobin9@huawei.com hinic: fix wrong value of MIN_SKB_LEN
Luo bin luobin9@huawei.com hinic: fix wrong para of wait_for_completion_timeout
Luo bin luobin9@huawei.com hinic: fix out-of-order excution in arm cpu
Luo bin luobin9@huawei.com hinic: fix the bug of clearing event queue
Luo bin luobin9@huawei.com hinic: fix a bug of waitting for IO stopped
Greentime Hu greentime.hu@sifive.com riscv: uaccess should be used in nommu mode
Tony Lindgren tony@atomide.com ARM: dts: omap4-droid4: Fix lost touchscreen interrupts
Zheng Wei wei.zheng@vivo.com net: vxge: fix wrong __VA_ARGS__ usage
Markus Fuchs mklntf@gmail.com net: stmmac: platform: Fix misleading interrupt error msg
David Howells dhowells@redhat.com rxrpc: Fix call interruptibility handling
David Howells dhowells@redhat.com rxrpc: Abstract out the calculation of whether there's Tx space
Grigore Popescu grigore.popescu@nxp.com soc: fsl: dpio: register dpio irq handlers after dpio create
Nick Reitemeyer nick.reitemeyer@web.de Input: tm2-touchkey - add support for Coreriver TC360 variant
Ilan Peer ilan.peer@intel.com iwlwifi: mvm: Fix rate scale NSS configuration
Avraham Stern avraham.stern@intel.com iwlwifi: mvm: take the required lock when clearing time event data
Yonghong Song yhs@fb.com bpf: Fix deadlock with rq_lock in bpf_send_signal()
Tony Lindgren tony@atomide.com ARM: dts: Fix dm814x Ethernet by changing to use rgmii-id mode
Ondrej Jirman megous@megous.com bus: sunxi-rsb: Return correct data when mixing 16-bit and 8-bit reads
Ondrej Jirman megous@megous.com ARM: dts: sun8i-a83t-tbs-a711: HM5065 doesn't like such a high voltage
-------------
Diffstat:
Documentation/sound/hd-audio/index.rst | 1 + Documentation/sound/hd-audio/models.rst | 2 - Documentation/sound/hd-audio/realtek-pc-beep.rst | 129 ++++++++++++ Makefile | 4 +- arch/arm/boot/dts/dm8148-evm.dts | 4 +- arch/arm/boot/dts/dm8148-t410.dts | 4 +- arch/arm/boot/dts/dra62x-j5eco-evm.dts | 4 +- arch/arm/boot/dts/exynos4210-universal_c210.dts | 4 +- arch/arm/boot/dts/motorola-mapphone-common.dtsi | 2 +- arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts | 4 +- arch/arm64/Makefile | 4 + arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi | 3 +- arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi | 3 +- arch/arm64/boot/dts/ti/k3-am65-main.dtsi | 2 + arch/arm64/kernel/armv8_deprecated.c | 2 +- arch/mips/cavium-octeon/octeon-irq.c | 3 + arch/mips/mm/tlbex.c | 5 +- arch/powerpc/include/asm/book3s/64/hash-4k.h | 6 + arch/powerpc/include/asm/book3s/64/hash-64k.h | 8 +- arch/powerpc/include/asm/book3s/64/pgtable.h | 4 +- arch/powerpc/include/asm/book3s/64/radix.h | 5 + arch/powerpc/include/asm/drmem.h | 4 +- arch/powerpc/include/asm/setjmp.h | 6 +- arch/powerpc/kernel/dt_cpu_ftrs.c | 1 - arch/powerpc/kernel/kprobes.c | 3 + arch/powerpc/kernel/paca.c | 14 +- arch/powerpc/kernel/setup.h | 6 + arch/powerpc/kernel/setup_64.c | 32 ++- arch/powerpc/kernel/signal_64.c | 4 +- arch/powerpc/kexec/Makefile | 3 - arch/powerpc/kvm/book3s_hv_uvmem.c | 3 + arch/powerpc/mm/kasan/kasan_init_32.c | 2 +- arch/powerpc/mm/nohash/tlb_low.S | 12 +- arch/powerpc/platforms/pseries/hotplug-memory.c | 8 +- arch/powerpc/sysdev/xive/common.c | 16 +- arch/powerpc/sysdev/xive/native.c | 4 +- arch/powerpc/sysdev/xive/spapr.c | 4 +- arch/powerpc/sysdev/xive/xive-internal.h | 7 + arch/powerpc/xmon/Makefile | 3 - arch/riscv/Kconfig | 1 - arch/riscv/include/asm/uaccess.h | 36 ++-- arch/riscv/lib/Makefile | 2 +- arch/s390/kernel/diag.c | 2 +- arch/s390/kvm/vsie.c | 1 + arch/s390/mm/gmap.c | 6 +- arch/x86/boot/compressed/head_32.S | 2 +- arch/x86/boot/compressed/head_64.S | 4 +- arch/x86/entry/entry_32.S | 1 + arch/x86/include/asm/kvm_host.h | 2 +- arch/x86/include/asm/pgtable.h | 7 +- arch/x86/include/asm/pgtable_types.h | 2 +- arch/x86/kernel/acpi/boot.c | 2 +- arch/x86/kernel/tsc_msr.c | 128 +++++++++-- arch/x86/kvm/svm.c | 4 + arch/x86/kvm/vmx/nested.c | 18 +- arch/x86/kvm/vmx/ops.h | 28 ++- arch/x86/kvm/vmx/vmenter.S | 58 +++++ arch/x86/kvm/vmx/vmx.c | 92 +++----- arch/x86/kvm/x86.c | 21 +- arch/x86/platform/efi/efi.c | 2 + block/bfq-cgroup.c | 14 +- block/bfq-iosched.c | 16 +- block/blk-ioc.c | 7 + block/blk-mq.c | 1 - block/blk-settings.c | 3 + block/blk-zoned.c | 2 +- crypto/rng.c | 8 +- drivers/acpi/acpica/achware.h | 2 +- drivers/acpi/acpica/evxfgpe.c | 17 +- drivers/acpi/acpica/hwgpe.c | 47 ++++- drivers/acpi/ec.c | 26 ++- drivers/acpi/internal.h | 1 + drivers/acpi/sleep.c | 19 +- drivers/ata/libata-pmp.c | 1 + drivers/ata/libata-scsi.c | 9 +- drivers/base/firmware_loader/fallback.c | 2 +- drivers/base/power/domain.c | 2 +- drivers/base/power/wakeup.c | 4 +- drivers/block/null_blk_main.c | 10 +- drivers/block/xen-blkfront.c | 17 +- drivers/bus/sunxi-rsb.c | 2 +- drivers/char/ipmi/ipmi_msghandler.c | 4 +- drivers/char/tpm/eventlog/common.c | 12 +- drivers/char/tpm/eventlog/tpm1.c | 2 +- drivers/char/tpm/eventlog/tpm2.c | 2 +- drivers/char/tpm/tpm-chip.c | 4 +- drivers/char/tpm/tpm.h | 2 +- drivers/clk/ingenic/jz4770-cgu.c | 4 +- drivers/clk/ingenic/tcu.c | 2 +- drivers/cpufreq/imx6q-cpufreq.c | 12 +- drivers/cpufreq/powernv-cpufreq.c | 6 + drivers/crypto/caam/caamalg_desc.c | 30 ++- drivers/crypto/ccree/cc_buffer_mgr.c | 76 ++++--- 
drivers/crypto/ccree/cc_buffer_mgr.h | 1 + drivers/crypto/mxs-dcp.c | 58 +++-- drivers/edac/edac_mc.c | 21 +- drivers/firmware/arm_sdei.c | 32 ++- drivers/firmware/efi/efi.c | 2 +- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 5 +- drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 2 + .../drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 2 +- drivers/gpu/drm/amd/powerplay/renoir_ppt.c | 6 + drivers/gpu/drm/amd/powerplay/renoir_ppt.h | 2 +- drivers/gpu/drm/bridge/analogix-anx78xx.c | 5 +- drivers/gpu/drm/drm_dp_mst_topology.c | 19 +- drivers/gpu/drm/drm_pci.c | 23 +- drivers/gpu/drm/drm_prime.c | 37 ++-- drivers/gpu/drm/etnaviv/etnaviv_perfmon.c | 59 +++++- drivers/gpu/drm/i915/display/intel_ddi.c | 6 +- drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 8 +- drivers/gpu/drm/i915/gt/intel_rps.c | 13 ++ drivers/gpu/drm/scheduler/sched_main.c | 2 + drivers/gpu/drm/vboxvideo/vbox_drv.c | 4 + drivers/i2c/busses/i2c-pca-platform.c | 2 +- drivers/i2c/busses/i2c-st.c | 1 + drivers/input/keyboard/tm2-touchkey.c | 11 + drivers/input/serio/i8042-x86ia64io.h | 11 + drivers/irqchip/irq-gic-v3-its.c | 6 + drivers/irqchip/irq-versatile-fpga.c | 18 +- drivers/md/dm-clone-metadata.c | 15 +- drivers/md/dm-clone-metadata.h | 2 +- drivers/md/dm-clone-target.c | 66 ++++-- drivers/md/dm-integrity.c | 4 +- drivers/md/dm-verity-fec.c | 1 + drivers/md/dm-writecache.c | 6 +- drivers/md/dm-zoned-metadata.c | 1 - drivers/md/md.c | 2 +- drivers/media/i2c/ov5695.c | 49 +++-- drivers/media/i2c/video-i2c.c | 2 +- drivers/media/platform/mtk-mdp/mtk_mdp_vpu.c | 9 +- drivers/media/platform/mtk-vcodec/vdec_vpu_if.c | 6 +- drivers/media/platform/mtk-vcodec/venc_vpu_if.c | 12 +- drivers/media/platform/mtk-vpu/mtk_vpu.c | 45 ++-- drivers/media/platform/mtk-vpu/mtk_vpu.h | 2 +- drivers/media/platform/qcom/venus/core.h | 1 + drivers/media/platform/qcom/venus/firmware.c | 10 +- drivers/media/platform/qcom/venus/helpers.c | 20 +- drivers/media/platform/qcom/venus/hfi_parser.c | 1 + drivers/media/platform/ti-vpe/cal.c | 29 +-- drivers/media/platform/vimc/vimc-streamer.c | 9 +- drivers/media/rc/keymaps/Makefile | 1 + drivers/media/rc/keymaps/rc-videostrong-kii-pro.c | 83 ++++++++ drivers/mmc/host/mmci_stm32_sdmmc.c | 4 +- drivers/mmc/host/sdhci-of-esdhc.c | 43 +++- drivers/mmc/host/sdhci.c | 41 ++-- drivers/mmc/host/sdhci.h | 2 + drivers/mtd/nand/raw/cadence-nand-controller.c | 13 +- drivers/mtd/nand/spi/core.c | 17 +- drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c | 3 + drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c | 5 +- drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c | 51 +---- drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c | 26 ++- drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c | 5 +- drivers/net/ethernet/huawei/hinic/hinic_rx.c | 3 + drivers/net/ethernet/huawei/hinic/hinic_tx.c | 4 +- drivers/net/ethernet/neterion/vxge/vxge-config.h | 2 +- drivers/net/ethernet/neterion/vxge/vxge-main.h | 14 +- .../net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c | 2 +- drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c | 31 +-- .../net/ethernet/stmicro/stmmac/stmmac_platform.c | 14 +- drivers/net/wireless/ath/ath9k/main.c | 3 + drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c | 29 ++- .../net/wireless/intel/iwlwifi/mvm/time-event.c | 4 + drivers/nvme/host/fc.c | 14 +- drivers/nvme/target/fcloop.c | 1 - drivers/nvme/target/tcp.c | 2 +- drivers/pci/controller/dwc/pcie-qcom.c | 8 +- drivers/pci/endpoint/pci-epc-mem.c | 10 +- drivers/pci/hotplug/pciehp_hpc.c | 14 +- drivers/pci/pcie/aspm.c | 4 +- drivers/pci/quirks.c | 80 ++++++- 
drivers/pci/switch/switchtec.c | 2 +- drivers/platform/x86/asus-wmi.c | 5 +- drivers/remoteproc/qcom_q6v5_mss.c | 54 +++-- drivers/remoteproc/remoteproc_virtio.c | 7 + drivers/s390/scsi/zfcp_erp.c | 2 +- drivers/scsi/lpfc/lpfc.h | 1 + drivers/scsi/lpfc/lpfc_hbadisc.c | 59 ++++-- drivers/scsi/lpfc/lpfc_nvme.c | 2 - drivers/scsi/lpfc/lpfc_scsi.c | 4 +- drivers/scsi/mpt3sas/mpt3sas_scsih.c | 8 +- drivers/scsi/qla2xxx/qla_nvme.c | 1 - drivers/scsi/ufs/ufshcd.c | 3 +- drivers/scsi/ufs/ufshcd.h | 6 + drivers/soc/fsl/dpio/dpio-driver.c | 8 +- drivers/spi/spi-fsl-dspi.c | 25 +-- drivers/staging/media/allegro-dvt/allegro-core.c | 5 +- drivers/staging/media/hantro/hantro_h1_jpeg_enc.c | 9 +- .../staging/media/hantro/rk3399_vpu_hw_jpeg_enc.c | 9 +- drivers/staging/media/imx/imx7-media-csi.c | 4 + drivers/staging/media/imx/imx7-mipi-csis.c | 2 +- drivers/staging/wilc1000/wlan.c | 1 - drivers/usb/dwc3/core.c | 5 + drivers/usb/dwc3/core.h | 4 + drivers/usb/gadget/composite.c | 9 + drivers/usb/gadget/function/f_fs.c | 1 + drivers/usb/host/xhci.c | 4 +- drivers/usb/typec/ucsi/ucsi_ccg.c | 2 + drivers/vfio/platform/vfio_platform.c | 2 +- fs/afs/rxrpc.c | 3 +- fs/btrfs/async-thread.c | 8 + fs/btrfs/async-thread.h | 1 + fs/btrfs/delayed-inode.c | 13 ++ fs/btrfs/disk-io.c | 27 ++- fs/btrfs/extent_io.c | 35 +++- fs/btrfs/file.c | 11 + fs/btrfs/qgroup.c | 11 +- fs/btrfs/relocation.c | 62 +++--- fs/cifs/file.c | 2 +- fs/cifs/inode.c | 23 +- fs/debugfs/file.c | 18 +- fs/erofs/utils.c | 2 +- fs/exec.c | 2 +- fs/ext4/inode.c | 2 +- fs/filesystems.c | 4 +- fs/gfs2/glock.c | 3 + fs/gfs2/glops.c | 27 ++- fs/gfs2/log.c | 2 +- fs/gfs2/log.h | 1 + fs/hfsplus/attributes.c | 4 + fs/io_uring.c | 17 +- fs/nfs/namespace.c | 12 +- fs/nfs/pagelist.c | 48 ++--- fs/nfs/write.c | 1 + fs/nfsd/nfsctl.c | 1 + fs/ocfs2/alloc.c | 4 + fs/pstore/inode.c | 5 +- fs/pstore/platform.c | 4 +- include/acpi/acpixf.h | 2 +- include/linux/cpu.h | 12 +- include/linux/devfreq_cooling.h | 2 +- include/linux/iocontext.h | 1 + include/linux/nvme-fc-driver.h | 4 - include/linux/pci-epc.h | 3 + include/linux/sched.h | 4 +- include/linux/xarray.h | 6 +- include/media/rc-map.h | 1 + include/net/af_rxrpc.h | 8 +- include/trace/events/rcu.h | 1 + kernel/cpu.c | 4 +- kernel/dma/mapping.c | 2 + kernel/events/core.c | 150 ++++++------- kernel/irq/debugfs.c | 11 +- kernel/irq/irqdomain.c | 10 +- kernel/kmod.c | 4 +- kernel/locking/lockdep.c | 4 + kernel/rcu/tree.c | 36 ++-- kernel/sched/core.c | 1 - kernel/sched/cputime.c | 41 ++-- kernel/sched/fair.c | 29 ++- kernel/sched/sched.h | 8 +- kernel/seccomp.c | 1 + kernel/signal.c | 2 +- kernel/time/sched_clock.c | 9 +- kernel/trace/bpf_trace.c | 2 +- kernel/trace/trace_kprobe.c | 2 + lib/test_xarray.c | 37 ++++ lib/xarray.c | 4 +- mm/memcontrol.c | 3 + net/rxrpc/af_rxrpc.c | 4 +- net/rxrpc/ar-internal.h | 4 +- net/rxrpc/call_object.c | 3 +- net/rxrpc/conn_client.c | 13 +- net/rxrpc/sendmsg.c | 71 +++++-- net/wireless/scan.c | 6 +- security/keys/key.c | 2 +- security/keys/keyctl.c | 4 +- sound/core/oss/pcm_plugin.c | 32 ++- sound/pci/hda/hda_beep.c | 6 +- sound/pci/hda/hda_intel.c | 16 ++ sound/pci/hda/patch_realtek.c | 233 ++++++++++++--------- sound/pci/ice1712/prodigy_hifi.c | 4 +- sound/soc/soc-dapm.c | 8 +- sound/soc/soc-ops.c | 4 +- sound/soc/soc-pcm.c | 6 +- sound/soc/soc-topology.c | 2 +- sound/usb/mixer_maps.c | 28 +++ tools/gpio/Makefile | 2 +- tools/perf/Makefile.config | 11 +- tools/testing/radix-tree/Makefile | 4 +- tools/testing/radix-tree/iteration_check_2.c | 87 ++++++++ 
tools/testing/radix-tree/main.c | 1 + tools/testing/radix-tree/test.h | 1 + tools/testing/selftests/net/reuseport_addr_any.c | 4 + tools/testing/selftests/powerpc/mm/.gitignore | 1 + tools/testing/selftests/powerpc/pmu/ebb/Makefile | 1 + tools/testing/selftests/vm/map_hugetlb.c | 14 +- tools/testing/selftests/vm/mlock2-tests.c | 233 ++++----------------- tools/testing/selftests/x86/ptrace_syscall.c | 8 +- 289 files changed, 2710 insertions(+), 1405 deletions(-)
From: Ondrej Jirman megous@megous.com
[ Upstream commit a40550952c000667b20082d58077bc647da6c890 ]
Lowering the voltage solves the quick image degradation over time (minutes), that was probably caused by overheating.
Signed-off-by: Ondrej Jirman megous@megous.com
Signed-off-by: Maxime Ripard maxime@cerno.tech
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts b/arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts
index f781d330cff50..e8b3669e0e5d8 100644
--- a/arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts
+++ b/arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts
@@ -374,8 +374,8 @@
 };
 
 &reg_dldo3 {
-	regulator-min-microvolt = <2800000>;
-	regulator-max-microvolt = <2800000>;
+	regulator-min-microvolt = <1800000>;
+	regulator-max-microvolt = <1800000>;
 	regulator-name = "vdd-csi";
 };
From: Ondrej Jirman megous@megous.com
[ Upstream commit a43ab30dcd4a1abcdd0d2461bf1cf7c0817f6cd3 ]
When doing a 16-bit read that returns data in the MSB byte, the RSB_DATA register will keep the MSB byte unchanged when doing the following 8-bit read. sunxi_rsb_read() will then return a result that contains high byte from 16-bit read mixed with the 8-bit result.
The consequence is that after this happens the PMIC's regmap will look like this: (0x33 is the high byte from the 16-bit read)
% cat /sys/kernel/debug/regmap/sunxi-rsb-3a3/registers 00: 33 01: 33 02: 33 03: 33 04: 33 05: 33 06: 33 07: 33 08: 33 09: 33 0a: 33 0b: 33 0c: 33 0d: 33 0e: 33 [snip]
Fix this by masking the result of the read with the correct mask based on the size of the read. There are no 16-bit users in the mainline kernel, so this doesn't need to get into the stable tree.
Signed-off-by: Ondrej Jirman megous@megous.com
Acked-by: Chen-Yu Tsai wens@csie.org
Signed-off-by: Maxime Ripard maxime@cerno.tech
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/bus/sunxi-rsb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/bus/sunxi-rsb.c b/drivers/bus/sunxi-rsb.c
index be79d6c6a4e45..1bb00a959c67f 100644
--- a/drivers/bus/sunxi-rsb.c
+++ b/drivers/bus/sunxi-rsb.c
@@ -345,7 +345,7 @@ static int sunxi_rsb_read(struct sunxi_rsb *rsb, u8 rtaddr, u8 addr,
 	if (ret)
 		goto unlock;
 
-	*buf = readl(rsb->regs + RSB_DATA);
+	*buf = readl(rsb->regs + RSB_DATA) & GENMASK(len * 8 - 1, 0);
 
 unlock:
 	mutex_unlock(&rsb->lock);
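A quick note for reviewers unfamiliar with GENMASK(): the snippet below only
illustrates what the new masking evaluates to for the transfer widths
sunxi_rsb_read() accepts; it is not part of the patch.

	/* Illustration only: truncate the 32-bit RSB_DATA register value to
	 * the requested transfer width 'len' (in bytes).
	 *
	 *   len == 1: GENMASK(7, 0)  == 0x000000ff -> keep the low byte
	 *   len == 2: GENMASK(15, 0) == 0x0000ffff -> keep the low 16 bits
	 *   len == 4: GENMASK(31, 0) == 0xffffffff -> keep everything
	 */
	u32 raw = readl(rsb->regs + RSB_DATA);

	*buf = raw & GENMASK(len * 8 - 1, 0);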
From: Tony Lindgren tony@atomide.com
[ Upstream commit b46b2b7ba6e104d265ab705914859ec0db7a98c5 ]
Commit cd28d1d6e52e ("net: phy: at803x: Disable phy delay for RGMII mode") caused a regression for dm814x boards where NFSroot would no longer work.
Let's fix the issue by configuring "rgmii-id" mode, since the internal delays are needed and they are no longer enabled by the PHY driver in "rgmii" mode.
Signed-off-by: Tony Lindgren tony@atomide.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/arm/boot/dts/dm8148-evm.dts       | 4 ++--
 arch/arm/boot/dts/dm8148-t410.dts      | 4 ++--
 arch/arm/boot/dts/dra62x-j5eco-evm.dts | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm/boot/dts/dm8148-evm.dts b/arch/arm/boot/dts/dm8148-evm.dts
index 3931fb068ff09..91d1018ab75fc 100644
--- a/arch/arm/boot/dts/dm8148-evm.dts
+++ b/arch/arm/boot/dts/dm8148-evm.dts
@@ -24,12 +24,12 @@
 
 &cpsw_emac0 {
 	phy-handle = <&ethphy0>;
-	phy-mode = "rgmii";
+	phy-mode = "rgmii-id";
 };
 
 &cpsw_emac1 {
 	phy-handle = <&ethphy1>;
-	phy-mode = "rgmii";
+	phy-mode = "rgmii-id";
 };
 
 &davinci_mdio {
diff --git a/arch/arm/boot/dts/dm8148-t410.dts b/arch/arm/boot/dts/dm8148-t410.dts
index 9e43d5ec0bb2f..79ccdd4470f4c 100644
--- a/arch/arm/boot/dts/dm8148-t410.dts
+++ b/arch/arm/boot/dts/dm8148-t410.dts
@@ -33,12 +33,12 @@
 
 &cpsw_emac0 {
 	phy-handle = <&ethphy0>;
-	phy-mode = "rgmii";
+	phy-mode = "rgmii-id";
 };
 
 &cpsw_emac1 {
 	phy-handle = <&ethphy1>;
-	phy-mode = "rgmii";
+	phy-mode = "rgmii-id";
 };
 
 &davinci_mdio {
diff --git a/arch/arm/boot/dts/dra62x-j5eco-evm.dts b/arch/arm/boot/dts/dra62x-j5eco-evm.dts
index 861ab90a3f3aa..c16e183822bee 100644
--- a/arch/arm/boot/dts/dra62x-j5eco-evm.dts
+++ b/arch/arm/boot/dts/dra62x-j5eco-evm.dts
@@ -24,12 +24,12 @@
 
 &cpsw_emac0 {
 	phy-handle = <&ethphy0>;
-	phy-mode = "rgmii";
+	phy-mode = "rgmii-id";
 };
 
 &cpsw_emac1 {
 	phy-handle = <&ethphy1>;
-	phy-mode = "rgmii";
+	phy-mode = "rgmii-id";
 };
 
 &davinci_mdio {
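For context on why the phy-mode string matters here: since commit cd28d1d6e52e the
PHY driver only enables its internal clock delays for the "-id"/"-rxid"/"-txid"
variants. Roughly (a paraphrased C sketch of that decision, not the literal at803x
code; phydev->interface and the PHY_INTERFACE_MODE_* constants are the generic
phylib names):

	switch (phydev->interface) {
	case PHY_INTERFACE_MODE_RGMII_ID:
		/* PHY inserts both RX and TX clock delays */
		break;
	case PHY_INTERFACE_MODE_RGMII_RXID:
		/* PHY inserts the RX clock delay only */
		break;
	case PHY_INTERFACE_MODE_RGMII_TXID:
		/* PHY inserts the TX clock delay only */
		break;
	default:
		/* plain "rgmii": no internal delays; the board or MAC must
		 * provide them, which dm814x does not - hence the regression
		 */
		break;
	}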
From: Yonghong Song yhs@fb.com
[ Upstream commit 1bc7896e9ef44fd77858b3ef0b8a6840be3a4494 ]
When experimenting with bpf_send_signal() helper in our production environment (5.2 based), we experienced a deadlock in NMI mode:
   #5 [ffffc9002219f770] queued_spin_lock_slowpath at ffffffff8110be24
   #6 [ffffc9002219f770] _raw_spin_lock_irqsave at ffffffff81a43012
   #7 [ffffc9002219f780] try_to_wake_up at ffffffff810e7ecd
   #8 [ffffc9002219f7e0] signal_wake_up_state at ffffffff810c7b55
   #9 [ffffc9002219f7f0] __send_signal at ffffffff810c8602
  #10 [ffffc9002219f830] do_send_sig_info at ffffffff810ca31a
  #11 [ffffc9002219f868] bpf_send_signal at ffffffff8119d227
  #12 [ffffc9002219f988] bpf_overflow_handler at ffffffff811d4140
  #13 [ffffc9002219f9e0] __perf_event_overflow at ffffffff811d68cf
  #14 [ffffc9002219fa10] perf_swevent_overflow at ffffffff811d6a09
  #15 [ffffc9002219fa38] ___perf_sw_event at ffffffff811e0f47
  #16 [ffffc9002219fc30] __schedule at ffffffff81a3e04d
  #17 [ffffc9002219fc90] schedule at ffffffff81a3e219
  #18 [ffffc9002219fca0] futex_wait_queue_me at ffffffff8113d1b9
  #19 [ffffc9002219fcd8] futex_wait at ffffffff8113e529
  #20 [ffffc9002219fdf0] do_futex at ffffffff8113ffbc
  #21 [ffffc9002219fec0] __x64_sys_futex at ffffffff81140d1c
  #22 [ffffc9002219ff38] do_syscall_64 at ffffffff81002602
  #23 [ffffc9002219ff50] entry_SYSCALL_64_after_hwframe at ffffffff81c00068
The above call stack is actually very similar to an issue reported by Commit eac9153f2b58 ("bpf/stackmap: Fix deadlock with rq_lock in bpf_get_stack()") by Song Liu. The only difference is bpf_send_signal() helper instead of bpf_get_stack() helper.
The above deadlock is triggered with a perf_sw_event. Similar to
Commit eac9153f2b58, the below almost identical reproducer used the
tracepoint sched/sched_switch so the issue can be easily caught.

  /* stress_test.c */
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/mman.h>
  #include <pthread.h>
  #include <sys/types.h>
  #include <sys/stat.h>
  #include <fcntl.h>

  #define THREAD_COUNT 1000
  char *filename;
  void *worker(void *p)
  {
          void *ptr;
          int fd;
          char *pptr;

          fd = open(filename, O_RDONLY);
          if (fd < 0)
                  return NULL;
          while (1) {
                  struct timespec ts = {0, 1000 + rand() % 2000};

                  ptr = mmap(NULL, 4096 * 64, PROT_READ, MAP_PRIVATE, fd, 0);
                  usleep(1);
                  if (ptr == MAP_FAILED) {
                          printf("failed to mmap\n");
                          break;
                  }
                  munmap(ptr, 4096 * 64);
                  usleep(1);
                  pptr = malloc(1);
                  usleep(1);
                  pptr[0] = 1;
                  usleep(1);
                  free(pptr);
                  usleep(1);
                  nanosleep(&ts, NULL);
          }
          close(fd);
          return NULL;
  }

  int main(int argc, char *argv[])
  {
          void *ptr;
          int i;
          pthread_t threads[THREAD_COUNT];

          if (argc < 2)
                  return 0;

          filename = argv[1];

          for (i = 0; i < THREAD_COUNT; i++) {
                  if (pthread_create(threads + i, NULL, worker, NULL)) {
                          fprintf(stderr, "Error creating thread\n");
                          return 0;
                  }
          }

          for (i = 0; i < THREAD_COUNT; i++)
                  pthread_join(threads[i], NULL);
          return 0;
  }

and the following command:
  1. run `stress_test /bin/ls` in one window
  2. hack bcc trace.py with the following change:
       # --- a/tools/trace.py
       # +++ b/tools/trace.py
       @@ -513,6 +513,7 @@ BPF_PERF_OUTPUT(%s);
                __data.tgid = __tgid;
                __data.pid = __pid;
                bpf_get_current_comm(&__data.comm, sizeof(__data.comm));
       +        bpf_send_signal(10);
                %s
                %s
                %s.perf_submit(%s, &__data, sizeof(__data));
  3. in a different window run
       ./trace.py -p $(pidof stress_test) t:sched:sched_switch
The deadlock can be reproduced in our production system.
Similar to Song's fix, the fix here is to delay sending the signal if irqs are disabled, to avoid deadlocks involving rq_lock. With this change, my above stress test in our production system won't cause a deadlock any more.
I also implemented a scaled-down version of the reproducer in the selftest (a subsequent commit). With latest bpf-next, it complains about the following potential deadlock.

  [ 32.832450] -> #1 (&p->pi_lock){-.-.}:
  [ 32.833100]        _raw_spin_lock_irqsave+0x44/0x80
  [ 32.833696]        task_rq_lock+0x2c/0xa0
  [ 32.834182]        task_sched_runtime+0x59/0xd0
  [ 32.834721]        thread_group_cputime+0x250/0x270
  [ 32.835304]        thread_group_cputime_adjusted+0x2e/0x70
  [ 32.835959]        do_task_stat+0x8a7/0xb80
  [ 32.836461]        proc_single_show+0x51/0xb0
  ...
  [ 32.839512] -> #0 (&(&sighand->siglock)->rlock){....}:
  [ 32.840275]        __lock_acquire+0x1358/0x1a20
  [ 32.840826]        lock_acquire+0xc7/0x1d0
  [ 32.841309]        _raw_spin_lock_irqsave+0x44/0x80
  [ 32.841916]        __lock_task_sighand+0x79/0x160
  [ 32.842465]        do_send_sig_info+0x35/0x90
  [ 32.842977]        bpf_send_signal+0xa/0x10
  [ 32.843464]        bpf_prog_bc13ed9e4d3163e3_send_signal_tp_sched+0x465/0x1000
  [ 32.844301]        trace_call_bpf+0x115/0x270
  [ 32.844809]        perf_trace_run_bpf_submit+0x4a/0xc0
  [ 32.845411]        perf_trace_sched_switch+0x10f/0x180
  [ 32.846014]        __schedule+0x45d/0x880
  [ 32.846483]        schedule+0x5f/0xd0
  ...
  [ 32.853148] Chain exists of:
  [ 32.853148]   &(&sighand->siglock)->rlock --> &p->pi_lock --> &rq->lock
  [ 32.853148]
  [ 32.854451]  Possible unsafe locking scenario:
  [ 32.854451]
  [ 32.855173]        CPU0                    CPU1
  [ 32.855745]        ----                    ----
  [ 32.856278]   lock(&rq->lock);
  [ 32.856671]                                lock(&p->pi_lock);
  [ 32.857332]                                lock(&rq->lock);
  [ 32.857999]   lock(&(&sighand->siglock)->rlock);
Deadlock happens on CPU0 when it tries to acquire &sighand->siglock but it has been held by CPU1 and CPU1 tries to grab &rq->lock and cannot get it.
This is not exactly the callstack in our production environment, but the symptom is similar: both locks are acquired with spin_lock_irqsave(), and both involve rq_lock. The fix to delay sending the signal when irqs are disabled also fixed this issue.
Signed-off-by: Yonghong Song yhs@fb.com
Signed-off-by: Alexei Starovoitov ast@kernel.org
Cc: Song Liu songliubraving@fb.com
Link: https://lore.kernel.org/bpf/20200304191104.2796501-1-yhs@fb.com
Signed-off-by: Sasha Levin sashal@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 kernel/trace/bpf_trace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -731,7 +731,7 @@ BPF_CALL_1(bpf_send_signal, u32, sig)
 	if (unlikely(!nmi_uaccess_okay()))
 		return -EPERM;
 
-	if (in_nmi()) {
+	if (irqs_disabled()) {
 		/* Do an early check on signal validity. Otherwise,
 		 * the error is lost in deferred irq_work.
 		 */
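A note for reviewers: the comment in the hunk above refers to the helper's existing
irq_work deferral path. As a reminder of that pattern (an illustrative sketch quoted
from memory, not a hunk from this patch), the slow path looks roughly like this; the
patch only widens the "must defer" condition from in_nmi() to irqs_disabled():

	/* Illustrative sketch of the deferral pattern used by the helper:
	 * when the signal cannot be sent from the current context, the
	 * request is queued as irq_work and sent from that callback instead.
	 */
	struct send_signal_irq_work {
		struct irq_work irq_work;
		struct task_struct *task;
		u32 sig;
	};

	static void do_bpf_send_signal(struct irq_work *entry)
	{
		struct send_signal_irq_work *work;

		work = container_of(entry, struct send_signal_irq_work, irq_work);
		group_send_sig_info(work->sig, SEND_SIG_PRIV, work->task,
				    PIDTYPE_TGID);
	}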
From: Avraham Stern avraham.stern@intel.com
[ Upstream commit 089e5016d7eb063712063670e6da7c1a4de1a5c1 ]
When receiving a session protection end notification, the time event data is cleared without holding the required lock. Fix it.
Signed-off-by: Avraham Stern avraham.stern@intel.com
Signed-off-by: Luca Coelho luciano.coelho@intel.com
Link: https://lore.kernel.org/r/iwlwifi.20200306151128.a49846a634e4.Id1ada7c5a964f...
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/wireless/intel/iwlwifi/mvm/time-event.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
index c0b420fe5e48f..1babc4bb5194b 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
@@ -785,7 +785,9 @@ void iwl_mvm_rx_session_protect_notif(struct iwl_mvm *mvm,
 	if (!le32_to_cpu(notif->status)) {
 		iwl_mvm_te_check_disconnect(mvm, vif,
 					    "Session protection failure");
+		spin_lock_bh(&mvm->time_event_lock);
 		iwl_mvm_te_clear_data(mvm, te_data);
+		spin_unlock_bh(&mvm->time_event_lock);
 	}
 
 	if (le32_to_cpu(notif->start)) {
@@ -801,7 +803,9 @@ void iwl_mvm_rx_session_protect_notif(struct iwl_mvm *mvm,
 			 */
 			iwl_mvm_te_check_disconnect(mvm, vif,
 						    "No beacon heard and the session protection is over already...");
+			spin_lock_bh(&mvm->time_event_lock);
 			iwl_mvm_te_clear_data(mvm, te_data);
+			spin_unlock_bh(&mvm->time_event_lock);
 		}
 
 		goto out_unlock;
From: Ilan Peer ilan.peer@intel.com
[ Upstream commit ce19801ba75a902ab515dda03b57738c708d0781 ]
The TLC configuration did not take into consideration the station's SMPS configuration, and thus configured rates for 2 NSS even if static SMPS was reported by the station. Fix this.
Signed-off-by: Ilan Peer ilan.peer@intel.com
Signed-off-by: Luca Coelho luciano.coelho@intel.com
Link: https://lore.kernel.org/r/iwlwifi.20200306151129.b4f940d13eca.Ieebfa889d0820...
Signed-off-by: Sasha Levin sashal@kernel.org
---
 .../net/wireless/intel/iwlwifi/mvm/rs-fw.c | 29 ++++++++++++++-----
 1 file changed, 21 insertions(+), 8 deletions(-)

diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c
index 80ef238a84884..ca99a9c4f70ef 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c
@@ -6,7 +6,7 @@
  * GPL LICENSE SUMMARY
  *
  * Copyright(c) 2017 Intel Deutschland GmbH
- * Copyright(c) 2018 - 2019 Intel Corporation
+ * Copyright(c) 2018 - 2020 Intel Corporation
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of version 2 of the GNU General Public License as
@@ -27,7 +27,7 @@
  * BSD LICENSE
  *
  * Copyright(c) 2017 Intel Deutschland GmbH
- * Copyright(c) 2018 - 2019 Intel Corporation
+ * Copyright(c) 2018 - 2020 Intel Corporation
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -195,11 +195,13 @@ rs_fw_vht_set_enabled_rates(const struct ieee80211_sta *sta,
 {
 	u16 supp;
 	int i, highest_mcs;
+	u8 nss = sta->rx_nss;
 
-	for (i = 0; i < sta->rx_nss; i++) {
-		if (i == IWL_TLC_NSS_MAX)
-			break;
+	/* the station support only a single receive chain */
+	if (sta->smps_mode == IEEE80211_SMPS_STATIC)
+		nss = 1;
 
+	for (i = 0; i < nss && i < IWL_TLC_NSS_MAX; i++) {
 		highest_mcs = rs_fw_vht_highest_rx_mcs_index(vht_cap, i + 1);
 		if (!highest_mcs)
 			continue;
@@ -245,8 +247,13 @@ rs_fw_he_set_enabled_rates(const struct ieee80211_sta *sta,
 	u16 tx_mcs_160 =
 		le16_to_cpu(sband->iftype_data->he_cap.he_mcs_nss_supp.tx_mcs_160);
 	int i;
+	u8 nss = sta->rx_nss;
 
-	for (i = 0; i < sta->rx_nss && i < IWL_TLC_NSS_MAX; i++) {
+	/* the station support only a single receive chain */
+	if (sta->smps_mode == IEEE80211_SMPS_STATIC)
+		nss = 1;
+
+	for (i = 0; i < nss && i < IWL_TLC_NSS_MAX; i++) {
 		u16 _mcs_160 = (mcs_160 >> (2 * i)) & 0x3;
 		u16 _mcs_80 = (mcs_80 >> (2 * i)) & 0x3;
 		u16 _tx_mcs_160 = (tx_mcs_160 >> (2 * i)) & 0x3;
@@ -307,8 +314,14 @@ static void rs_fw_set_supp_rates(struct ieee80211_sta *sta,
 		cmd->mode = IWL_TLC_MNG_MODE_HT;
 		cmd->ht_rates[IWL_TLC_NSS_1][IWL_TLC_HT_BW_NONE_160] =
 			cpu_to_le16(ht_cap->mcs.rx_mask[0]);
-		cmd->ht_rates[IWL_TLC_NSS_2][IWL_TLC_HT_BW_NONE_160] =
-			cpu_to_le16(ht_cap->mcs.rx_mask[1]);
+
+		/* the station support only a single receive chain */
+		if (sta->smps_mode == IEEE80211_SMPS_STATIC)
+			cmd->ht_rates[IWL_TLC_NSS_2][IWL_TLC_HT_BW_NONE_160] =
+				0;
+		else
+			cmd->ht_rates[IWL_TLC_NSS_2][IWL_TLC_HT_BW_NONE_160] =
+				cpu_to_le16(ht_cap->mcs.rx_mask[1]);
 	}
 }
From: Nick Reitemeyer nick.reitemeyer@web.de
[ Upstream commit da3289044833769188c0da945d2cec90af35e87e ]
The Coreriver TouchCore 360 is like the midas board touchkey, but it is using a fixed regulator.
Signed-off-by: Nick Reitemeyer nick.reitemeyer@web.de
Link: https://lore.kernel.org/r/20200121141525.3404-3-nick.reitemeyer@web.de
Signed-off-by: Dmitry Torokhov dmitry.torokhov@gmail.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/input/keyboard/tm2-touchkey.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/input/keyboard/tm2-touchkey.c b/drivers/input/keyboard/tm2-touchkey.c
index 14b55bacdd0f1..fb078e049413f 100644
--- a/drivers/input/keyboard/tm2-touchkey.c
+++ b/drivers/input/keyboard/tm2-touchkey.c
@@ -75,6 +75,14 @@ static struct touchkey_variant aries_touchkey_variant = {
 	.cmd_led_off = ARIES_TOUCHKEY_CMD_LED_OFF,
 };
 
+static const struct touchkey_variant tc360_touchkey_variant = {
+	.keycode_reg = 0x00,
+	.base_reg = 0x00,
+	.fixed_regulator = true,
+	.cmd_led_on = TM2_TOUCHKEY_CMD_LED_ON,
+	.cmd_led_off = TM2_TOUCHKEY_CMD_LED_OFF,
+};
+
 static int tm2_touchkey_led_brightness_set(struct led_classdev *led_dev,
 					   enum led_brightness brightness)
 {
@@ -327,6 +335,9 @@ static const struct of_device_id tm2_touchkey_of_match[] = {
 	}, {
 		.compatible = "cypress,aries-touchkey",
 		.data = &aries_touchkey_variant,
+	}, {
+		.compatible = "coreriver,tc360-touchkey",
+		.data = &tc360_touchkey_variant,
 	}, { },
 };
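For reviewers skimming the context: the new of_device_id entry is what makes the
driver pick up the TC360 variant at probe time. A minimal sketch of that lookup
(quoted from memory, not a hunk from this patch; the variable names are illustrative):

	/* The probe path resolves the matched of_device_id back to the
	 * variant description, so TC360 boards get fixed_regulator = true.
	 */
	touchkey->variant = of_device_get_match_data(&client->dev);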
From: Grigore Popescu grigore.popescu@nxp.com
[ Upstream commit fe8fe7723a3a824790bda681b40efd767e2251a7 ]
The dpio irqs must be registered when you can actually receive interrupts, i.e. when the dpios are created. The kernel runs into NULL pointer dereference errors followed by a kernel panic [1] because the dpio irqs are enabled before the dpio is created.
[1] Unable to handle kernel NULL pointer dereference at virtual address 0040
    fsl_mc_dpio dpio.14: probed
    fsl_mc_dpio dpio.13: Adding to iommu group 11
    ISV = 0, ISS = 0x00000004
    Unable to handle kernel NULL pointer dereference at virtual address 0040
    Mem abort info:
      ESR = 0x96000004
      EC = 0x25: DABT (current EL), IL = 32 bits
      SET = 0, FnV = 0
      EA = 0, S1PTW = 0
    Data abort info:
      ISV = 0, ISS = 0x00000004
      CM = 0, WnR = 0
    [0000000000000040] user address but active_mm is swapper
    Internal error: Oops: 96000004 [#1] PREEMPT SMP
    Modules linked in:
    CPU: 2 PID: 151 Comm: kworker/2:1 Not tainted 5.6.0-rc4-next-20200304 #1
    Hardware name: NXP Layerscape LX2160ARDB (DT)
    Workqueue: events deferred_probe_work_func
    pstate: 00000085 (nzcv daIf -PAN -UAO)
    pc : dpaa2_io_irq+0x18/0xe0
    lr : dpio_irq_handler+0x1c/0x28
    sp : ffff800010013e20
    x29: ffff800010013e20 x28: ffff0026d9b4c140
    x27: ffffa1d38a142018 x26: ffff0026d2953400
    x25: ffffa1d38a142018 x24: ffffa1d38a7ba1d8
    x23: ffff800010013f24 x22: 0000000000000000
    x21: 0000000000000072 x20: ffff0026d2953400
    x19: ffff0026d2a68b80 x18: 0000000000000001
    x17: 000000002fb37f3d x16: 0000000035eafadd
    x15: ffff0026d9b4c5b8 x14: ffffffffffffffff
    x13: ff00000000000000 x12: 0000000000000038
    x11: 0101010101010101 x10: 0000000000000040
    x9 : ffffa1d388db11e4 x8 : ffffa1d38a7e40f0
    x7 : ffff0026da414f38 x6 : 0000000000000000
    x5 : ffff0026da414d80 x4 : ffff5e5353d0c000
    x3 : ffff800010013f60 x2 : ffffa1d388db11c8
    x1 : ffff0026d2a67c00 x0 : 0000000000000000
    Call trace:
     dpaa2_io_irq+0x18/0xe0
     dpio_irq_handler+0x1c/0x28
     __handle_irq_event_percpu+0x78/0x2c0
     handle_irq_event_percpu+0x38/0x90
     handle_irq_event+0x4c/0xd0
     handle_fasteoi_irq+0xbc/0x168
     generic_handle_irq+0x2c/0x40
     __handle_domain_irq+0x68/0xc0
     gic_handle_irq+0x64/0x150
     el1_irq+0xb8/0x180
     _raw_spin_unlock_irqrestore+0x14/0x48
     irq_set_affinity_hint+0x6c/0xa0
     dpaa2_dpio_probe+0x2a4/0x518
     fsl_mc_driver_probe+0x28/0x70
     really_probe+0xdc/0x320
     driver_probe_device+0x5c/0xf0
     __device_attach_driver+0x88/0xc0
     bus_for_each_drv+0x7c/0xc8
     __device_attach+0xe4/0x140
     device_initial_probe+0x18/0x20
     bus_probe_device+0x98/0xa0
     device_add+0x41c/0x758
     fsl_mc_device_add+0x184/0x530
     dprc_scan_objects+0x280/0x370
     dprc_probe+0x124/0x3b0
     fsl_mc_driver_probe+0x28/0x70
     really_probe+0xdc/0x320
     driver_probe_device+0x5c/0xf0
     __device_attach_driver+0x88/0xc0
     bus_for_each_drv+0x7c/0xc8
     __device_attach+0xe4/0x140
     device_initial_probe+0x18/0x20
     bus_probe_device+0x98/0xa0
     deferred_probe_work_func+0x74/0xa8
     process_one_work+0x1c8/0x470
     worker_thread+0x1f8/0x428
     kthread+0x124/0x128
     ret_from_fork+0x10/0x18
    Code: a9bc7bfd 910003fd a9025bf5 a90363f7 (f9402015)
    ---[ end trace 38298e1a29e7a570 ]---
    Kernel panic - not syncing: Fatal exception in interrupt
    SMP: stopping secondary CPUs
    Mem abort info:
      ESR = 0x96000004
      CM = 0, WnR = 0
      EC = 0x25: DABT (current EL), IL = 32 bits
    [0000000000000040] user address but active_mm is swapper
      SET = 0, FnV = 0
      EA = 0, S1PTW = 0
    Data abort info:
      ISV = 0, ISS = 0x00000004
      CM = 0, WnR = 0
    [0000000000000040] user address but active_mm is swapper
    SMP: failed to stop secondary CPUs 0-2
    Kernel Offset: 0x21d378600000 from 0xffff800010000000
    PHYS_OFFSET: 0xffffe92180000000
    CPU features: 0x10002,21806008
    Memory Limit: none
    ---[ end Kernel panic - not syncing: Fatal exception in interrupt ]---
Signed-off-by: Laurentiu Tudor laurentiu.tudor@nxp.com
Signed-off-by: Grigore Popescu grigore.popescu@nxp.com
Signed-off-by: Li Yang leoyang.li@nxp.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/soc/fsl/dpio/dpio-driver.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/soc/fsl/dpio/dpio-driver.c b/drivers/soc/fsl/dpio/dpio-driver.c
index 70014ecce2a7e..7b642c330977f 100644
--- a/drivers/soc/fsl/dpio/dpio-driver.c
+++ b/drivers/soc/fsl/dpio/dpio-driver.c
@@ -233,10 +233,6 @@ static int dpaa2_dpio_probe(struct fsl_mc_device *dpio_dev)
 		goto err_allocate_irqs;
 	}
 
-	err = register_dpio_irq_handlers(dpio_dev, desc.cpu);
-	if (err)
-		goto err_register_dpio_irq;
-
 	priv->io = dpaa2_io_create(&desc, dev);
 	if (!priv->io) {
 		dev_err(dev, "dpaa2_io_create failed\n");
@@ -244,6 +240,10 @@ static int dpaa2_dpio_probe(struct fsl_mc_device *dpio_dev)
 		goto err_dpaa2_io_create;
 	}
 
+	err = register_dpio_irq_handlers(dpio_dev, desc.cpu);
+	if (err)
+		goto err_register_dpio_irq;
+
 	dev_info(dev, "probed\n");
 	dev_dbg(dev, " receives_notifications = %d\n",
 		desc.receives_notifications);
From: David Howells dhowells@redhat.com
[ Upstream commit 158fe6665389964a1de212818b4a5c52b7f7aff4 ]
Abstract out the calculation of there being sufficient Tx buffer space. This is reproduced several times in the rxrpc sendmsg code.
Signed-off-by: David Howells dhowells@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/rxrpc/sendmsg.c | 27 ++++++++++++++++++--------- 1 file changed, 18 insertions(+), 9 deletions(-)
diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index 136eb465bfcb2..1cbd43eeda937 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -17,6 +17,21 @@ #include <net/af_rxrpc.h> #include "ar-internal.h"
+/* + * Return true if there's sufficient Tx queue space. + */ +static bool rxrpc_check_tx_space(struct rxrpc_call *call, rxrpc_seq_t *_tx_win) +{ + unsigned int win_size = + min_t(unsigned int, call->tx_winsize, + call->cong_cwnd + call->cong_extra); + rxrpc_seq_t tx_win = READ_ONCE(call->tx_hard_ack); + + if (_tx_win) + *_tx_win = tx_win; + return call->tx_top - tx_win < win_size; +} + /* * Wait for space to appear in the Tx queue or a signal to occur. */ @@ -26,9 +41,7 @@ static int rxrpc_wait_for_tx_window_intr(struct rxrpc_sock *rx, { for (;;) { set_current_state(TASK_INTERRUPTIBLE); - if (call->tx_top - call->tx_hard_ack < - min_t(unsigned int, call->tx_winsize, - call->cong_cwnd + call->cong_extra)) + if (rxrpc_check_tx_space(call, NULL)) return 0;
if (call->state >= RXRPC_CALL_COMPLETE) @@ -68,9 +81,7 @@ static int rxrpc_wait_for_tx_window_nonintr(struct rxrpc_sock *rx, set_current_state(TASK_UNINTERRUPTIBLE);
tx_win = READ_ONCE(call->tx_hard_ack); - if (call->tx_top - tx_win < - min_t(unsigned int, call->tx_winsize, - call->cong_cwnd + call->cong_extra)) + if (rxrpc_check_tx_space(call, &tx_win)) return 0;
if (call->state >= RXRPC_CALL_COMPLETE) @@ -302,9 +313,7 @@ static int rxrpc_send_data(struct rxrpc_sock *rx,
_debug("alloc");
- if (call->tx_top - call->tx_hard_ack >= - min_t(unsigned int, call->tx_winsize, - call->cong_cwnd + call->cong_extra)) { + if (!rxrpc_check_tx_space(call, NULL)) { ret = -EAGAIN; if (msg->msg_flags & MSG_DONTWAIT) goto maybe_error;
From: David Howells dhowells@redhat.com
[ Upstream commit e138aa7d3271ac1b0690ae2c9b04d51468dce1d6 ]
Fix the interruptibility of kernel-initiated client calls so that they're either only interruptible when they're waiting for a call slot to become available or they're not interruptible at all. Either way, they're not interruptible during transmission.
This should help prevent StoreData calls from being interrupted when writeback is in progress. It doesn't, however, handle interruption during the receive phase.
Userspace-initiated calls are still interruptible. After the signal has been handled, sendmsg() will return the amount of data copied out of the buffer and userspace can perform another sendmsg() call to continue transmission.
Fixes: bc5e3a546d55 ("rxrpc: Use MSG_WAITALL to tell sendmsg() to temporarily ignore signals") Signed-off-by: David Howells dhowells@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/rxrpc.c | 3 ++- include/net/af_rxrpc.h | 8 +++++++- net/rxrpc/af_rxrpc.c | 4 ++-- net/rxrpc/ar-internal.h | 4 ++-- net/rxrpc/call_object.c | 3 +-- net/rxrpc/conn_client.c | 13 +++++++++--- net/rxrpc/sendmsg.c | 44 +++++++++++++++++++++++++++++++++-------- 7 files changed, 60 insertions(+), 19 deletions(-)
diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c index 27a879eaa5a4e..1ecc67da6c1a4 100644 --- a/fs/afs/rxrpc.c +++ b/fs/afs/rxrpc.c @@ -414,7 +414,8 @@ void afs_make_call(struct afs_addr_cursor *ac, struct afs_call *call, gfp_t gfp) afs_wake_up_async_call : afs_wake_up_call_waiter), call->upgrade, - call->intr, + (call->intr ? RXRPC_PREINTERRUPTIBLE : + RXRPC_UNINTERRUPTIBLE), call->debug_id); if (IS_ERR(rxcall)) { ret = PTR_ERR(rxcall); diff --git a/include/net/af_rxrpc.h b/include/net/af_rxrpc.h index 299240df79e4a..04e97bab6f28b 100644 --- a/include/net/af_rxrpc.h +++ b/include/net/af_rxrpc.h @@ -16,6 +16,12 @@ struct sock; struct socket; struct rxrpc_call;
+enum rxrpc_interruptibility { + RXRPC_INTERRUPTIBLE, /* Call is interruptible */ + RXRPC_PREINTERRUPTIBLE, /* Call can be cancelled whilst waiting for a slot */ + RXRPC_UNINTERRUPTIBLE, /* Call should not be interruptible at all */ +}; + /* * Debug ID counter for tracing. */ @@ -41,7 +47,7 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *, gfp_t, rxrpc_notify_rx_t, bool, - bool, + enum rxrpc_interruptibility, unsigned int); int rxrpc_kernel_send_data(struct socket *, struct rxrpc_call *, struct msghdr *, size_t, diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c index bad0d6adcc49f..15ee92d795815 100644 --- a/net/rxrpc/af_rxrpc.c +++ b/net/rxrpc/af_rxrpc.c @@ -285,7 +285,7 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock, gfp_t gfp, rxrpc_notify_rx_t notify_rx, bool upgrade, - bool intr, + enum rxrpc_interruptibility interruptibility, unsigned int debug_id) { struct rxrpc_conn_parameters cp; @@ -310,7 +310,7 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock, memset(&p, 0, sizeof(p)); p.user_call_ID = user_call_ID; p.tx_total_len = tx_total_len; - p.intr = intr; + p.interruptibility = interruptibility;
memset(&cp, 0, sizeof(cp)); cp.local = rx->local; diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 394d18857979a..3eb1ab40ca5cb 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -489,7 +489,6 @@ enum rxrpc_call_flag { RXRPC_CALL_BEGAN_RX_TIMER, /* We began the expect_rx_by timer */ RXRPC_CALL_RX_HEARD, /* The peer responded at least once to this call */ RXRPC_CALL_RX_UNDERRUN, /* Got data underrun */ - RXRPC_CALL_IS_INTR, /* The call is interruptible */ RXRPC_CALL_DISCONNECTED, /* The call has been disconnected */ };
@@ -598,6 +597,7 @@ struct rxrpc_call { atomic_t usage; u16 service_id; /* service ID */ u8 security_ix; /* Security type */ + enum rxrpc_interruptibility interruptibility; /* At what point call may be interrupted */ u32 call_id; /* call ID on connection */ u32 cid; /* connection ID plus channel index */ int debug_id; /* debug ID for printks */ @@ -720,7 +720,7 @@ struct rxrpc_call_params { u32 normal; /* Max time since last call packet (msec) */ } timeouts; u8 nr_timeouts; /* Number of timeouts specified */ - bool intr; /* The call is interruptible */ + enum rxrpc_interruptibility interruptibility; /* How is interruptible is the call? */ };
struct rxrpc_send_params { diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index c9f34b0a11df4..f07970207b544 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -237,8 +237,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx, return call; }
- if (p->intr) - __set_bit(RXRPC_CALL_IS_INTR, &call->flags); + call->interruptibility = p->interruptibility; call->tx_total_len = p->tx_total_len; trace_rxrpc_call(call->debug_id, rxrpc_call_new_client, atomic_read(&call->usage), diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index ea7d4c21f8893..f2a1a5dbb5a7b 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -655,13 +655,20 @@ static int rxrpc_wait_for_channel(struct rxrpc_call *call, gfp_t gfp)
add_wait_queue_exclusive(&call->waitq, &myself); for (;;) { - if (test_bit(RXRPC_CALL_IS_INTR, &call->flags)) + switch (call->interruptibility) { + case RXRPC_INTERRUPTIBLE: + case RXRPC_PREINTERRUPTIBLE: set_current_state(TASK_INTERRUPTIBLE); - else + break; + case RXRPC_UNINTERRUPTIBLE: + default: set_current_state(TASK_UNINTERRUPTIBLE); + break; + } if (call->call_id) break; - if (test_bit(RXRPC_CALL_IS_INTR, &call->flags) && + if ((call->interruptibility == RXRPC_INTERRUPTIBLE || + call->interruptibility == RXRPC_PREINTERRUPTIBLE) && signal_pending(current)) { ret = -ERESTARTSYS; break; diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index 1cbd43eeda937..0fcf157aa09f8 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -62,7 +62,7 @@ static int rxrpc_wait_for_tx_window_intr(struct rxrpc_sock *rx, * Wait for space to appear in the Tx queue uninterruptibly, but with * a timeout of 2*RTT if no progress was made and a signal occurred. */ -static int rxrpc_wait_for_tx_window_nonintr(struct rxrpc_sock *rx, +static int rxrpc_wait_for_tx_window_waitall(struct rxrpc_sock *rx, struct rxrpc_call *call) { rxrpc_seq_t tx_start, tx_win; @@ -87,8 +87,7 @@ static int rxrpc_wait_for_tx_window_nonintr(struct rxrpc_sock *rx, if (call->state >= RXRPC_CALL_COMPLETE) return call->error;
- if (test_bit(RXRPC_CALL_IS_INTR, &call->flags) && - timeout == 0 && + if (timeout == 0 && tx_win == tx_start && signal_pending(current)) return -EINTR;
@@ -102,6 +101,26 @@ static int rxrpc_wait_for_tx_window_nonintr(struct rxrpc_sock *rx, } }
+/* + * Wait for space to appear in the Tx queue uninterruptibly. + */ +static int rxrpc_wait_for_tx_window_nonintr(struct rxrpc_sock *rx, + struct rxrpc_call *call, + long *timeo) +{ + for (;;) { + set_current_state(TASK_UNINTERRUPTIBLE); + if (rxrpc_check_tx_space(call, NULL)) + return 0; + + if (call->state >= RXRPC_CALL_COMPLETE) + return call->error; + + trace_rxrpc_transmit(call, rxrpc_transmit_wait); + *timeo = schedule_timeout(*timeo); + } +} + /* * wait for space to appear in the transmit/ACK window * - caller holds the socket locked @@ -119,10 +138,19 @@ static int rxrpc_wait_for_tx_window(struct rxrpc_sock *rx,
add_wait_queue(&call->waitq, &myself);
- if (waitall) - ret = rxrpc_wait_for_tx_window_nonintr(rx, call); - else - ret = rxrpc_wait_for_tx_window_intr(rx, call, timeo); + switch (call->interruptibility) { + case RXRPC_INTERRUPTIBLE: + if (waitall) + ret = rxrpc_wait_for_tx_window_waitall(rx, call); + else + ret = rxrpc_wait_for_tx_window_intr(rx, call, timeo); + break; + case RXRPC_PREINTERRUPTIBLE: + case RXRPC_UNINTERRUPTIBLE: + default: + ret = rxrpc_wait_for_tx_window_nonintr(rx, call, timeo); + break; + }
remove_wait_queue(&call->waitq, &myself); set_current_state(TASK_RUNNING); @@ -628,7 +656,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len) .call.tx_total_len = -1, .call.user_call_ID = 0, .call.nr_timeouts = 0, - .call.intr = true, + .call.interruptibility = RXRPC_INTERRUPTIBLE, .abort_code = 0, .command = RXRPC_CMD_SEND_DATA, .exclusive = false,
From: Markus Fuchs mklntf@gmail.com
[ Upstream commit fc191af1bb0d069dc7e981076e8b80af21f1e61d ]
Not every stmmac-based platform makes use of the eth_wake_irq or eth_lpi interrupts. Use the platform_get_irq_byname_optional() variant for these interrupts so that no error message is displayed if they can't be found. Instead, print an informational message to hint that something might be wrong, to assist debugging on platforms which do use these interrupts.
Signed-off-by: Markus Fuchs mklntf@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/stmicro/stmmac/stmmac_platform.c | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c index d10ac54bf385a..13fafd905db87 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c @@ -663,16 +663,22 @@ int stmmac_get_platform_resources(struct platform_device *pdev, * In case the wake up interrupt is not passed from the platform * so the driver will continue to use the mac irq (ndev->irq) */ - stmmac_res->wol_irq = platform_get_irq_byname(pdev, "eth_wake_irq"); + stmmac_res->wol_irq = + platform_get_irq_byname_optional(pdev, "eth_wake_irq"); if (stmmac_res->wol_irq < 0) { if (stmmac_res->wol_irq == -EPROBE_DEFER) return -EPROBE_DEFER; + dev_info(&pdev->dev, "IRQ eth_wake_irq not found\n"); stmmac_res->wol_irq = stmmac_res->irq; }
- stmmac_res->lpi_irq = platform_get_irq_byname(pdev, "eth_lpi"); - if (stmmac_res->lpi_irq == -EPROBE_DEFER) - return -EPROBE_DEFER; + stmmac_res->lpi_irq = + platform_get_irq_byname_optional(pdev, "eth_lpi"); + if (stmmac_res->lpi_irq < 0) { + if (stmmac_res->lpi_irq == -EPROBE_DEFER) + return -EPROBE_DEFER; + dev_info(&pdev->dev, "IRQ eth_lpi not found\n"); + }
res = platform_get_resource(pdev, IORESOURCE_MEM, 0); stmmac_res->addr = devm_ioremap_resource(&pdev->dev, res);
From: Zheng Wei wei.zheng@vivo.com
[ Upstream commit b317538c47943f9903860d83cc0060409e12d2ff ]
The printk in the vxge_debug_ll macro uses __VA_ARGS__ without the "##" prefix, which causes a build error when no variable arguments are passed (e.g. when only fmt is specified).
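As an editorial illustration (not part of the patch; the macro names below are hypothetical), a minimal userspace sketch of why the "##" paste is needed:

#include <stdio.h>

/*
 * Without "##", an empty __VA_ARGS__ leaves a trailing comma after the
 * format string, which does not compile.  The GNU "##" extension (used
 * throughout the kernel) swallows that comma when no arguments are given.
 */
#define dbg_plain(fmt, ...)     printf(fmt "\n", __VA_ARGS__)
#define dbg_paste(fmt, ...)     printf(fmt "\n", ##__VA_ARGS__)

int main(void)
{
        /* dbg_plain("hello"); expands to printf("hello" "\n", ); -> build error */
        dbg_paste("hello");             /* expands to printf("hello" "\n"); */
        dbg_paste("value = %d", 42);    /* still works with arguments */
        return 0;
}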
Signed-off-by: Zheng Wei wei.zheng@vivo.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/neterion/vxge/vxge-config.h | 2 +- drivers/net/ethernet/neterion/vxge/vxge-main.h | 14 +++++++------- 2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/neterion/vxge/vxge-config.h b/drivers/net/ethernet/neterion/vxge/vxge-config.h index e678ba379598e..628fa9b2f7416 100644 --- a/drivers/net/ethernet/neterion/vxge/vxge-config.h +++ b/drivers/net/ethernet/neterion/vxge/vxge-config.h @@ -2045,7 +2045,7 @@ vxge_hw_vpath_strip_fcs_check(struct __vxge_hw_device *hldev, u64 vpath_mask); if ((level >= VXGE_ERR && VXGE_COMPONENT_LL & VXGE_DEBUG_ERR_MASK) || \ (level >= VXGE_TRACE && VXGE_COMPONENT_LL & VXGE_DEBUG_TRACE_MASK))\ if ((mask & VXGE_DEBUG_MASK) == mask) \ - printk(fmt "\n", __VA_ARGS__); \ + printk(fmt "\n", ##__VA_ARGS__); \ } while (0) #else #define vxge_debug_ll(level, mask, fmt, ...) diff --git a/drivers/net/ethernet/neterion/vxge/vxge-main.h b/drivers/net/ethernet/neterion/vxge/vxge-main.h index 59a57ff5e96af..9c86f4f9cd424 100644 --- a/drivers/net/ethernet/neterion/vxge/vxge-main.h +++ b/drivers/net/ethernet/neterion/vxge/vxge-main.h @@ -452,49 +452,49 @@ int vxge_fw_upgrade(struct vxgedev *vdev, char *fw_name, int override);
#if (VXGE_DEBUG_LL_CONFIG & VXGE_DEBUG_MASK) #define vxge_debug_ll_config(level, fmt, ...) \ - vxge_debug_ll(level, VXGE_DEBUG_LL_CONFIG, fmt, __VA_ARGS__) + vxge_debug_ll(level, VXGE_DEBUG_LL_CONFIG, fmt, ##__VA_ARGS__) #else #define vxge_debug_ll_config(level, fmt, ...) #endif
#if (VXGE_DEBUG_INIT & VXGE_DEBUG_MASK) #define vxge_debug_init(level, fmt, ...) \ - vxge_debug_ll(level, VXGE_DEBUG_INIT, fmt, __VA_ARGS__) + vxge_debug_ll(level, VXGE_DEBUG_INIT, fmt, ##__VA_ARGS__) #else #define vxge_debug_init(level, fmt, ...) #endif
#if (VXGE_DEBUG_TX & VXGE_DEBUG_MASK) #define vxge_debug_tx(level, fmt, ...) \ - vxge_debug_ll(level, VXGE_DEBUG_TX, fmt, __VA_ARGS__) + vxge_debug_ll(level, VXGE_DEBUG_TX, fmt, ##__VA_ARGS__) #else #define vxge_debug_tx(level, fmt, ...) #endif
#if (VXGE_DEBUG_RX & VXGE_DEBUG_MASK) #define vxge_debug_rx(level, fmt, ...) \ - vxge_debug_ll(level, VXGE_DEBUG_RX, fmt, __VA_ARGS__) + vxge_debug_ll(level, VXGE_DEBUG_RX, fmt, ##__VA_ARGS__) #else #define vxge_debug_rx(level, fmt, ...) #endif
#if (VXGE_DEBUG_MEM & VXGE_DEBUG_MASK) #define vxge_debug_mem(level, fmt, ...) \ - vxge_debug_ll(level, VXGE_DEBUG_MEM, fmt, __VA_ARGS__) + vxge_debug_ll(level, VXGE_DEBUG_MEM, fmt, ##__VA_ARGS__) #else #define vxge_debug_mem(level, fmt, ...) #endif
#if (VXGE_DEBUG_ENTRYEXIT & VXGE_DEBUG_MASK) #define vxge_debug_entryexit(level, fmt, ...) \ - vxge_debug_ll(level, VXGE_DEBUG_ENTRYEXIT, fmt, __VA_ARGS__) + vxge_debug_ll(level, VXGE_DEBUG_ENTRYEXIT, fmt, ##__VA_ARGS__) #else #define vxge_debug_entryexit(level, fmt, ...) #endif
#if (VXGE_DEBUG_INTR & VXGE_DEBUG_MASK) #define vxge_debug_intr(level, fmt, ...) \ - vxge_debug_ll(level, VXGE_DEBUG_INTR, fmt, __VA_ARGS__) + vxge_debug_ll(level, VXGE_DEBUG_INTR, fmt, ##__VA_ARGS__) #else #define vxge_debug_intr(level, fmt, ...) #endif
From: Tony Lindgren tony@atomide.com
[ Upstream commit 4abd9930d189dedaa59097144c6d8f623342fa72 ]
Looks like we can have the maxtouch touchscreen stop producing interrupts if an edge interrupt is lost. This can happen easily when the SoC idles as the gpio controller may not see any state for an edge interrupt if it is briefly triggered when the system is idle.
Also it looks like maxtouch stops sending any further interrupts if the interrupt is not handled. And we do have several cases of maxtouch already configured with a level interrupt, so let's do that.
With level interrupt the gpio controller has the interrupt state visible after idle. Note that eventually we will probably also be using the Linux generic wakeirq configured for the controller, but that cannot be done until the maxtouch driver supports runtime PM.
Cc: maemo-leste@lists.dyne.org Cc: Arthur Demchenkov spinal.by@gmail.com Cc: Ivaylo Dimitrov ivo.g.dimitrov.75@gmail.com Cc: Merlijn Wajer merlijn@wizzup.org Cc: Pavel Machek pavel@ucw.cz Cc: Sebastian Reichel sre@kernel.org Signed-off-by: Tony Lindgren tony@atomide.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/boot/dts/motorola-mapphone-common.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/boot/dts/motorola-mapphone-common.dtsi b/arch/arm/boot/dts/motorola-mapphone-common.dtsi index da6b107da84a4..aeb5a673c209e 100644 --- a/arch/arm/boot/dts/motorola-mapphone-common.dtsi +++ b/arch/arm/boot/dts/motorola-mapphone-common.dtsi @@ -413,7 +413,7 @@ reset-gpios = <&gpio6 13 GPIO_ACTIVE_HIGH>; /* gpio173 */
/* gpio_183 with sys_nirq2 pad as wakeup */ - interrupts-extended = <&gpio6 23 IRQ_TYPE_EDGE_FALLING>, + interrupts-extended = <&gpio6 23 IRQ_TYPE_LEVEL_LOW>, <&omap4_pmx_core 0x160>; interrupt-names = "irq", "wakeup"; wakeup-source;
From: Greentime Hu greentime.hu@sifive.com
[ Upstream commit adccfb1a805ea84d2db38eb53032533279bdaa97 ]
An unaligned access exception can occur when exchanging data with a user space program; in this case it failed in tty_ioctl(). Therefore we should also build uaccess.S for NOMMU mode, since the generic code doesn't handle unaligned accesses.
0x8013a212 <tty_ioctl+462>: ld a5,460(s1)
[ 0.115279] Oops - load address misaligned [#1]
[ 0.115284] CPU: 0 PID: 29 Comm: sh Not tainted 5.4.0-rc5-00020-gb4c27160d562-dirty #36
[ 0.115294] epc: 000000008013a212 ra : 000000008013a212 sp : 000000008f48dd50
[ 0.115303] gp : 00000000801cac28 tp : 000000008fb80000 t0 : 00000000000000e8
[ 0.115312] t1 : 000000008f58f108 t2 : 0000000000000009 s0 : 000000008f48ddf0
[ 0.115321] s1 : 000000008f8c6220 a0 : 0000000000000001 a1 : 000000008f48dd28
[ 0.115330] a2 : 000000008fb80000 a3 : 00000000801a7398 a4 : 0000000000000000
[ 0.115339] a5 : 0000000000000000 a6 : 000000008f58f0c6 a7 : 000000000000001d
[ 0.115348] s2 : 000000008f8c6308 s3 : 000000008f78b7c8 s4 : 000000008fb834c0
[ 0.115357] s5 : 0000000000005413 s6 : 0000000000000000 s7 : 000000008f58f2b0
[ 0.115366] s8 : 000000008f858008 s9 : 000000008f776818 s10: 000000008f776830
[ 0.115375] s11: 000000008fb840a8 t3 : 1999999999999999 t4 : 000000008f78704c
[ 0.115384] t5 : 0000000000000005 t6 : 0000000000000002
[ 0.115391] status: 0000000200001880 badaddr: 000000008f8c63ec cause: 0000000000000004
[ 0.115401] ---[ end trace 00d490c6a8b6c9ac ]---
This failure is fixed once this patch is applied.
[ 0.002282] Run /init as init process
Initializing random number generator... [ 0.005573] random: dd: uninitialized urandom read (512 bytes read)
done.
Welcome to Buildroot
buildroot login: root
Password:
Jan 1 00:00:00 login[62]: root login on 'ttySIF0'
~ #
Signed-off-by: Greentime Hu greentime.hu@sifive.com Reviewed-by: Palmer Dabbelt palmerdabbelt@google.com Signed-off-by: Palmer Dabbelt palmerdabbelt@google.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/riscv/Kconfig | 1 - arch/riscv/include/asm/uaccess.h | 36 ++++++++++++++++---------------- arch/riscv/lib/Makefile | 2 +- 3 files changed, 19 insertions(+), 20 deletions(-)
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index 1be11c23fa335..50cfa272f9e3d 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -52,7 +52,6 @@ config RISCV select PCI_DOMAINS_GENERIC if PCI select PCI_MSI if PCI select RISCV_TIMER - select UACCESS_MEMCPY if !MMU select GENERIC_IRQ_MULTI_HANDLER select GENERIC_ARCH_TOPOLOGY if SMP select ARCH_HAS_PTE_SPECIAL diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h index f462a183a9c23..8ce9d607b53dc 100644 --- a/arch/riscv/include/asm/uaccess.h +++ b/arch/riscv/include/asm/uaccess.h @@ -11,6 +11,24 @@ /* * User space memory access functions */ + +extern unsigned long __must_check __asm_copy_to_user(void __user *to, + const void *from, unsigned long n); +extern unsigned long __must_check __asm_copy_from_user(void *to, + const void __user *from, unsigned long n); + +static inline unsigned long +raw_copy_from_user(void *to, const void __user *from, unsigned long n) +{ + return __asm_copy_from_user(to, from, n); +} + +static inline unsigned long +raw_copy_to_user(void __user *to, const void *from, unsigned long n) +{ + return __asm_copy_to_user(to, from, n); +} + #ifdef CONFIG_MMU #include <linux/errno.h> #include <linux/compiler.h> @@ -367,24 +385,6 @@ do { \ -EFAULT; \ })
- -extern unsigned long __must_check __asm_copy_to_user(void __user *to, - const void *from, unsigned long n); -extern unsigned long __must_check __asm_copy_from_user(void *to, - const void __user *from, unsigned long n); - -static inline unsigned long -raw_copy_from_user(void *to, const void __user *from, unsigned long n) -{ - return __asm_copy_from_user(to, from, n); -} - -static inline unsigned long -raw_copy_to_user(void __user *to, const void *from, unsigned long n) -{ - return __asm_copy_to_user(to, from, n); -} - extern long strncpy_from_user(char *dest, const char __user *src, long count);
extern long __must_check strlen_user(const char __user *str); diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile index 47e7a82044608..0d0db80800c4e 100644 --- a/arch/riscv/lib/Makefile +++ b/arch/riscv/lib/Makefile @@ -2,5 +2,5 @@ lib-y += delay.o lib-y += memcpy.o lib-y += memset.o -lib-$(CONFIG_MMU) += uaccess.o +lib-y += uaccess.o lib-$(CONFIG_64BIT) += tishift.o
From: Luo bin luobin9@huawei.com
[ Upstream commit 96758117dc528e6d84bd23d205e8cf7f31eda029 ]
It is unreliable to have the firmware check whether I/O has stopped, so make the driver wait long enough to ensure that I/O processing is done in hardware before freeing resources.
Signed-off-by: Luo bin luobin9@huawei.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/huawei/hinic/hinic_hw_dev.c | 51 +------------------ 1 file changed, 2 insertions(+), 49 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c index 79b3d53f2fbfa..c7c75b772a866 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c @@ -360,50 +360,6 @@ static int wait_for_db_state(struct hinic_hwdev *hwdev) return -EFAULT; }
-static int wait_for_io_stopped(struct hinic_hwdev *hwdev) -{ - struct hinic_cmd_io_status cmd_io_status; - struct hinic_hwif *hwif = hwdev->hwif; - struct pci_dev *pdev = hwif->pdev; - struct hinic_pfhwdev *pfhwdev; - unsigned long end; - u16 out_size; - int err; - - if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) { - dev_err(&pdev->dev, "Unsupported PCI Function type\n"); - return -EINVAL; - } - - pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev); - - cmd_io_status.func_idx = HINIC_HWIF_FUNC_IDX(hwif); - - end = jiffies + msecs_to_jiffies(IO_STATUS_TIMEOUT); - do { - err = hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_COMM, - HINIC_COMM_CMD_IO_STATUS_GET, - &cmd_io_status, sizeof(cmd_io_status), - &cmd_io_status, &out_size, - HINIC_MGMT_MSG_SYNC); - if ((err) || (out_size != sizeof(cmd_io_status))) { - dev_err(&pdev->dev, "Failed to get IO status, ret = %d\n", - err); - return err; - } - - if (cmd_io_status.status == IO_STOPPED) { - dev_info(&pdev->dev, "IO stopped\n"); - return 0; - } - - msleep(20); - } while (time_before(jiffies, end)); - - dev_err(&pdev->dev, "Wait for IO stopped - Timeout\n"); - return -ETIMEDOUT; -} - /** * clear_io_resource - set the IO resources as not active in the NIC * @hwdev: the NIC HW device @@ -423,11 +379,8 @@ static int clear_io_resources(struct hinic_hwdev *hwdev) return -EINVAL; }
- err = wait_for_io_stopped(hwdev); - if (err) { - dev_err(&pdev->dev, "IO has not stopped yet\n"); - return err; - } + /* sleep 100ms to wait for firmware stopping I/O */ + msleep(100);
cmd_clear_io_res.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
From: Luo bin luobin9@huawei.com
[ Upstream commit 614eaa943e9fc3fcdbd4aa0692ae84973d363333 ]
The eq irq should be disabled before freeing it, the event queue depth must be cleared in hardware before freeing the relevant memory to avoid illegal memory accesses, and the consumer index must be updated to avoid an invalid interrupt.
Signed-off-by: Luo bin luobin9@huawei.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/huawei/hinic/hinic_hw_eqs.c | 24 +++++++++++++------ 1 file changed, 17 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c index 79243b626ddbe..6a723c4757bce 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c @@ -188,7 +188,7 @@ static u8 eq_cons_idx_checksum_set(u32 val) * eq_update_ci - update the HW cons idx of event queue * @eq: the event queue to update the cons idx for **/ -static void eq_update_ci(struct hinic_eq *eq) +static void eq_update_ci(struct hinic_eq *eq, u32 arm_state) { u32 val, addr = EQ_CONS_IDX_REG_ADDR(eq);
@@ -202,7 +202,7 @@ static void eq_update_ci(struct hinic_eq *eq)
val |= HINIC_EQ_CI_SET(eq->cons_idx, IDX) | HINIC_EQ_CI_SET(eq->wrapped, WRAPPED) | - HINIC_EQ_CI_SET(EQ_ARMED, INT_ARMED); + HINIC_EQ_CI_SET(arm_state, INT_ARMED);
val |= HINIC_EQ_CI_SET(eq_cons_idx_checksum_set(val), XOR_CHKSUM);
@@ -347,7 +347,7 @@ static void eq_irq_handler(void *data) else if (eq->type == HINIC_CEQ) ceq_irq_handler(eq);
- eq_update_ci(eq); + eq_update_ci(eq, EQ_ARMED); }
/** @@ -702,7 +702,7 @@ static int init_eq(struct hinic_eq *eq, struct hinic_hwif *hwif, }
set_eq_ctrls(eq); - eq_update_ci(eq); + eq_update_ci(eq, EQ_ARMED);
err = alloc_eq_pages(eq); if (err) { @@ -752,18 +752,28 @@ err_req_irq: **/ static void remove_eq(struct hinic_eq *eq) { - struct msix_entry *entry = &eq->msix_entry; - - free_irq(entry->vector, eq); + hinic_set_msix_state(eq->hwif, eq->msix_entry.entry, + HINIC_MSIX_DISABLE); + free_irq(eq->msix_entry.vector, eq);
if (eq->type == HINIC_AEQ) { struct hinic_eq_work *aeq_work = &eq->aeq_work;
cancel_work_sync(&aeq_work->work); + /* clear aeq_len to avoid hw access host memory */ + hinic_hwif_write_reg(eq->hwif, + HINIC_CSR_AEQ_CTRL_1_ADDR(eq->q_id), 0); } else if (eq->type == HINIC_CEQ) { tasklet_kill(&eq->ceq_tasklet); + /* clear ceq_len to avoid hw access host memory */ + hinic_hwif_write_reg(eq->hwif, + HINIC_CSR_CEQ_CTRL_1_ADDR(eq->q_id), 0); }
+ /* update cons_idx to avoid invalid interrupt */ + eq->cons_idx = hinic_hwif_read_reg(eq->hwif, EQ_PROD_IDX_REG_ADDR(eq)); + eq_update_ci(eq, EQ_NOT_ARMED); + free_eq_pages(eq); }
From: Luo bin luobin9@huawei.com
[ Upstream commit 33f15da216a1f4566b4ec880942556ace30615df ]
Add read barriers in the driver code to avoid reading other fields in DMA memory that is writable by hardware before we have verified that the memory is valid for the driver.
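As an editorial sketch (hypothetical descriptor layout, not the hinic structures), the check-ownership-then-barrier pattern the patch applies:

#include <linux/types.h>
#include <linux/bits.h>
#include <linux/errno.h>
#include <asm/barrier.h>

struct example_rx_desc {
        __le32 status;                  /* written by hardware; bit 0 = done */
        __le32 length;                  /* only valid once the done bit is set */
};

static int example_poll_desc(struct example_rx_desc *desc)
{
        /* First check whether hardware has handed the descriptor back. */
        if (!(le32_to_cpu(desc->status) & BIT(0)))
                return -EBUSY;

        /* Make sure the done bit is observed before any other field. */
        dma_rmb();

        return le32_to_cpu(desc->length);
}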
Signed-off-by: Luo bin luobin9@huawei.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c | 2 ++ drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c | 2 ++ drivers/net/ethernet/huawei/hinic/hinic_rx.c | 3 +++ drivers/net/ethernet/huawei/hinic/hinic_tx.c | 2 ++ 4 files changed, 9 insertions(+)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c index eb53c15b13f33..33f93cc25193a 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c @@ -623,6 +623,8 @@ static int cmdq_cmd_ceq_handler(struct hinic_cmdq *cmdq, u16 ci, if (!CMDQ_WQE_COMPLETED(be32_to_cpu(ctrl->ctrl_info))) return -EBUSY;
+ dma_rmb(); + errcode = CMDQ_WQE_ERRCODE_GET(be32_to_cpu(status->status_info), VAL);
cmdq_sync_cmd_handler(cmdq, ci, errcode); diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c index 6a723c4757bce..c0b6bcb067cd4 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c @@ -235,6 +235,8 @@ static void aeq_irq_handler(struct hinic_eq *eq) if (HINIC_EQ_ELEM_DESC_GET(aeqe_desc, WRAPPED) == eq->wrapped) break;
+ dma_rmb(); + event = HINIC_EQ_ELEM_DESC_GET(aeqe_desc, TYPE); if (event >= HINIC_MAX_AEQ_EVENTS) { dev_err(&pdev->dev, "Unknown AEQ Event %d\n", event); diff --git a/drivers/net/ethernet/huawei/hinic/hinic_rx.c b/drivers/net/ethernet/huawei/hinic/hinic_rx.c index 2695ad69fca60..815649e37cb15 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_rx.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_rx.c @@ -350,6 +350,9 @@ static int rxq_recv(struct hinic_rxq *rxq, int budget) if (!rq_wqe) break;
+ /* make sure we read rx_done before packet length */ + dma_rmb(); + cqe = rq->cqe[ci]; status = be32_to_cpu(cqe->status); hinic_rq_get_sge(rxq->rq, rq_wqe, ci, &sge); diff --git a/drivers/net/ethernet/huawei/hinic/hinic_tx.c b/drivers/net/ethernet/huawei/hinic/hinic_tx.c index 0e13d1c7e4746..375d81d03e866 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_tx.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_tx.c @@ -622,6 +622,8 @@ static int free_tx_poll(struct napi_struct *napi, int budget) do { hw_ci = HW_CONS_IDX(sq) & wq->mask;
+ dma_rmb(); + /* Reading a WQEBB to get real WQE size and consumer index. */ sq_wqe = hinic_sq_read_wqebb(sq, &skb, &wqe_size, &sw_ci); if ((!sq_wqe) ||
From: Luo bin luobin9@huawei.com
[ Upstream commit 0da7c322f116210ebfdda59c7da663a6fc5e9cc8 ]
The second input parameter of wait_for_completion_timeout() should be in jiffies instead of milliseconds.
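As an editorial sketch of the unit mismatch (the timeout constant and wrapper function below are hypothetical):

#include <linux/completion.h>
#include <linux/jiffies.h>
#include <linux/errno.h>

#define EXAMPLE_TIMEOUT_MS      5000    /* hypothetical 5 second timeout */

static int example_wait(struct completion *done)
{
        /* Wrong: 5000 would be treated as 5000 jiffies, i.e. HZ-dependent. */
        /* wait_for_completion_timeout(done, EXAMPLE_TIMEOUT_MS); */

        /* Right: convert the millisecond value to jiffies first. */
        if (!wait_for_completion_timeout(done, msecs_to_jiffies(EXAMPLE_TIMEOUT_MS)))
                return -ETIMEDOUT;

        return 0;
}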
Signed-off-by: Luo bin luobin9@huawei.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c | 3 ++- drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c | 5 +++-- 2 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c index 33f93cc25193a..5f2d57d1b2d37 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c @@ -389,7 +389,8 @@ static int cmdq_sync_cmd_direct_resp(struct hinic_cmdq *cmdq,
spin_unlock_bh(&cmdq->cmdq_lock);
- if (!wait_for_completion_timeout(&done, CMDQ_TIMEOUT)) { + if (!wait_for_completion_timeout(&done, + msecs_to_jiffies(CMDQ_TIMEOUT))) { spin_lock_bh(&cmdq->cmdq_lock);
if (cmdq->errcode[curr_prod_idx] == &errcode) diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c index c1a6be6bf6a8c..8995e32dd1c00 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c @@ -43,7 +43,7 @@
#define MSG_NOT_RESP 0xFFFF
-#define MGMT_MSG_TIMEOUT 1000 +#define MGMT_MSG_TIMEOUT 5000
#define mgmt_to_pfhwdev(pf_mgmt) \ container_of(pf_mgmt, struct hinic_pfhwdev, pf_to_mgmt) @@ -267,7 +267,8 @@ static int msg_to_mgmt_sync(struct hinic_pf_to_mgmt *pf_to_mgmt, goto unlock_sync_msg; }
- if (!wait_for_completion_timeout(recv_done, MGMT_MSG_TIMEOUT)) { + if (!wait_for_completion_timeout(recv_done, + msecs_to_jiffies(MGMT_MSG_TIMEOUT))) { dev_err(&pdev->dev, "MGMT timeout, MSG id = %d\n", msg_id); err = -ETIMEDOUT; goto unlock_sync_msg;
From: Luo bin luobin9@huawei.com
[ Upstream commit 7296695fc16dd1761dbba8b68a9181c71cef0633 ]
The minimum skb length that the hardware supports is 32 bytes rather than 17.
Signed-off-by: Luo bin luobin9@huawei.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/huawei/hinic/hinic_tx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_tx.c b/drivers/net/ethernet/huawei/hinic/hinic_tx.c index 375d81d03e866..365016450bdbe 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_tx.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_tx.c @@ -45,7 +45,7 @@
#define HW_CONS_IDX(sq) be16_to_cpu(*(u16 *)((sq)->hw_ci_addr))
-#define MIN_SKB_LEN 17 +#define MIN_SKB_LEN 32
#define MAX_PAYLOAD_OFFSET 221 #define TRANSPORT_OFFSET(l4_hdr, skb) ((u32)((l4_hdr) - (skb)->data))
From: Alan Maguire alan.maguire@oracle.com
[ Upstream commit 83a9b6f639e9f6b632337f9776de17d51d969c77 ]
Many systems build/test up-to-date kernels with older libcs, and an older glibc (2.17) lacks the definition of SOL_DCCP in /usr/include/bits/socket.h (it was added in the 4.6 timeframe).
Adding the definition to the test program avoids a compilation failure that gets in the way of building tools/testing/selftests/net. The test itself works once the definition is added, either skipping (when DCCP is not configured in the kernel under test) or passing, so beyond that missing definition there appear to be no other dependencies on a more up-to-date glibc here.
Fixes: 11fb60d1089f ("selftests: net: reuseport_addr_any: add DCCP") Signed-off-by: Alan Maguire alan.maguire@oracle.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/selftests/net/reuseport_addr_any.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/tools/testing/selftests/net/reuseport_addr_any.c b/tools/testing/selftests/net/reuseport_addr_any.c index c6233935fed14..b8475cb29be7a 100644 --- a/tools/testing/selftests/net/reuseport_addr_any.c +++ b/tools/testing/selftests/net/reuseport_addr_any.c @@ -21,6 +21,10 @@ #include <sys/socket.h> #include <unistd.h>
+#ifndef SOL_DCCP +#define SOL_DCCP 269 +#endif + static const char *IP4_ADDR = "127.0.0.1"; static const char *IP6_ADDR = "::1"; static const char *IP4_MAPPED6 = "::ffff:127.0.0.1";
From: Raju Rangoju rajur@chelsio.com
[ Upstream commit 50e0d28d3808146cc19b0d5564ef4ba9e5bf3846 ]
cxgb4_ptp_fineadjtime() doesn't pass the signedness of the offset delta in FW_PTP_CMD. Fix it by passing the correct sign.
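As an editorial sketch (hypothetical userspace helper, not the driver's API) of the general split of a signed delta into a sign flag plus magnitude before handing it to firmware:

#include <stdint.h>
#include <stdio.h>

static void encode_delta(int64_t delta, uint8_t *sign, uint64_t *magnitude)
{
        *sign = (delta < 0) ? 1 : 0;
        *magnitude = (delta < 0) ? (uint64_t)(-delta) : (uint64_t)delta;
}

int main(void)
{
        uint8_t sign;
        uint64_t mag;

        encode_delta(-1500, &sign, &mag);       /* sign=1, magnitude=1500 */
        printf("sign=%u magnitude=%llu\n", sign, (unsigned long long)mag);
        return 0;
}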
Signed-off-by: Raju Rangoju rajur@chelsio.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c index 58a039c3224ad..af1f40cbccc88 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c @@ -246,6 +246,9 @@ static int cxgb4_ptp_fineadjtime(struct adapter *adapter, s64 delta) FW_PTP_CMD_PORTID_V(0)); c.retval_len16 = cpu_to_be32(FW_CMD_LEN16_V(sizeof(c) / 16)); c.u.ts.sc = FW_PTP_SC_ADJ_FTIME; + c.u.ts.sign = (delta < 0) ? 1 : 0; + if (delta < 0) + delta = -delta; c.u.ts.tm = cpu_to_be64(delta);
err = t4_wr_mbox(adapter, adapter->mbox, &c, sizeof(c), NULL);
From: Yintian Tao yttao@amd.com
[ Upstream commit 3c0fdf3302cb4f186c871684eac5c407a107e480 ]
There is one corner case at dma_fence_signal_locked which will raise a NULL pointer problem, just like below.
->dma_fence_signal
    ->dma_fence_signal_locked
        ->test_and_set_bit
Here dma_fence_release is triggered because the fence refcount has dropped to zero.

->dma_fence_put
    ->dma_fence_release
        ->drm_sched_fence_release_scheduled
            ->call_rcu
Here the union field "cb_list" of the finished fence is set to NULL, because struct rcu_head contains two pointers, the same layout as the struct list_head cb_list.

Therefore, hold a reference to the finished fence in drm_sched_process_job to prevent the NULL pointer dereference while the finished fence is signalled.
[ 732.912867] BUG: kernel NULL pointer dereference, address: 0000000000000008
[ 732.914815] #PF: supervisor write access in kernel mode
[ 732.915731] #PF: error_code(0x0002) - not-present page
[ 732.916621] PGD 0 P4D 0
[ 732.917072] Oops: 0002 [#1] SMP PTI
[ 732.917682] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G OE 5.4.0-rc7 #1
[ 732.918980] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.2-0-g33fbe13 by qemu-project.org 04/01/2014
[ 732.920906] RIP: 0010:dma_fence_signal_locked+0x3e/0x100
[ 732.938569] Call Trace:
[ 732.939003] <IRQ>
[ 732.939364] dma_fence_signal+0x29/0x50
[ 732.940036] drm_sched_fence_finished+0x12/0x20 [gpu_sched]
[ 732.940996] drm_sched_process_job+0x34/0xa0 [gpu_sched]
[ 732.941910] dma_fence_signal_locked+0x85/0x100
[ 732.942692] dma_fence_signal+0x29/0x50
[ 732.943457] amdgpu_fence_process+0x99/0x120 [amdgpu]
[ 732.944393] sdma_v4_0_process_trap_irq+0x81/0xa0 [amdgpu]
v2: hold the finished fence at drm_sched_process_job instead of amdgpu_fence_process
v3: restore the blank line
Signed-off-by: Yintian Tao yttao@amd.com Reviewed-by: Christian König christian.koenig@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/scheduler/sched_main.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index 3c57e84222ca9..5bb9feddbfd6b 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -632,7 +632,9 @@ static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
trace_drm_sched_process_job(s_fence);
+ dma_fence_get(&s_fence->finished); drm_sched_fence_finished(s_fence); + dma_fence_put(&s_fence->finished); wake_up_interruptible(&sched->wake_up_worker); }
From: Ilan Peer ilan.peer@intel.com
[ Upstream commit 05dcb8bb258575a8dd3499d0d78bd2db633c2b23 ]
When cfg80211_update_assoc_bss_entry() is called, there is a verification that the BSS channel actually changed. As some APs use CSA also for bandwidth changes, this would result in a kernel warning.
Fix this by removing the WARN_ON().
Signed-off-by: Ilan Peer ilan.peer@intel.com Signed-off-by: Luca Coelho luciano.coelho@intel.com Link: https://lore.kernel.org/r/iwlwifi.20200326150855.96316ada0e8d.I6710376b1b425... Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/wireless/scan.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/net/wireless/scan.c b/net/wireless/scan.c index aef240fdf8df6..328402ab64a3f 100644 --- a/net/wireless/scan.c +++ b/net/wireless/scan.c @@ -2022,7 +2022,11 @@ void cfg80211_update_assoc_bss_entry(struct wireless_dev *wdev,
spin_lock_bh(&rdev->bss_lock);
- if (WARN_ON(cbss->pub.channel == chan)) + /* + * Some APs use CSA also for bandwidth changes, i.e., without actually + * changing the control channel, so no need to update in such a case. + */ + if (cbss->pub.channel == chan) goto done;
/* use transmitting bss */
From: Xu Wang vulab@iscas.ac.cn
[ Upstream commit bcaeb886ade124331a6f3a5cef34a3f1484c0a03 ]
In qlcnic_83xx_get_reset_instruction_template(), the NULL check is done on the wrong variable; correct it to test the freshly allocated ahw->reset.buff.
Signed-off-by: Xu Wang vulab@iscas.ac.cn Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c index 07f9067affc65..cda5b0a9e9489 100644 --- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c +++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c @@ -1720,7 +1720,7 @@ static int qlcnic_83xx_get_reset_instruction_template(struct qlcnic_adapter *p_d
ahw->reset.seq_error = 0; ahw->reset.buff = kzalloc(QLC_83XX_RESTART_TEMPLATE_SIZE, GFP_KERNEL); - if (p_dev->ahw->reset.buff == NULL) + if (ahw->reset.buff == NULL) return -ENOMEM;
p_buff = p_dev->ahw->reset.buff;
From: Alain Volmat avolmat@me.com
[ Upstream commit f491c6687332920e296d0209e366fe2ca7eab1c6 ]
Fix a missing struct parameter description to allow warning-free W=1 compilation.
Signed-off-by: Alain Volmat avolmat@me.com Reviewed-by: Patrice Chotard patrice.chotard@st.com Signed-off-by: Wolfram Sang wsa@the-dreams.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/i2c/busses/i2c-st.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/i2c/busses/i2c-st.c b/drivers/i2c/busses/i2c-st.c index 54e1fc8a495e7..f7f7b5b64720e 100644 --- a/drivers/i2c/busses/i2c-st.c +++ b/drivers/i2c/busses/i2c-st.c @@ -434,6 +434,7 @@ static void st_i2c_wr_fill_tx_fifo(struct st_i2c_dev *i2c_dev) /** * st_i2c_rd_fill_tx_fifo() - Fill the Tx FIFO in read mode * @i2c_dev: Controller's private data + * @max: Maximum amount of data to fill into the Tx FIFO * * This functions fills the Tx FIFO with fixed pattern when * in read mode to trigger clock.
From: Chris Packham chris.packham@alliedtelesis.co.nz
[ Upstream commit 14c1fe699cad9cb0acda4559c584f136d18fea50 ]
The interrupt is not required, so use platform_get_irq_optional() to avoid error messages like
i2c-pca-platform 22080000.i2c: IRQ index 0 not found
Signed-off-by: Chris Packham chris.packham@alliedtelesis.co.nz Signed-off-by: Wolfram Sang wsa@the-dreams.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/i2c/busses/i2c-pca-platform.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/i2c/busses/i2c-pca-platform.c b/drivers/i2c/busses/i2c-pca-platform.c index a7a81846d5b1d..635dd697ac0bb 100644 --- a/drivers/i2c/busses/i2c-pca-platform.c +++ b/drivers/i2c/busses/i2c-pca-platform.c @@ -140,7 +140,7 @@ static int i2c_pca_pf_probe(struct platform_device *pdev) int ret = 0; int irq;
- irq = platform_get_irq(pdev, 0); + irq = platform_get_irq_optional(pdev, 0); /* If irq is 0, we do polling. */ if (irq < 0) irq = 0;
From: Mohammad Rasim mohammad.rasim96@gmail.com
[ Upstream commit 30defecb98400575349a7d32f0526e1dc42ea83e ]
This is an NEC remote control device shipped with the Videostrong KII Pro TV box as well as other devices from Videostrong.
Signed-off-by: Mohammad Rasim mohammad.rasim96@gmail.com Signed-off-by: Sean Young sean@mess.org Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/rc/keymaps/Makefile | 1 + .../media/rc/keymaps/rc-videostrong-kii-pro.c | 83 +++++++++++++++++++ include/media/rc-map.h | 1 + 3 files changed, 85 insertions(+) create mode 100644 drivers/media/rc/keymaps/rc-videostrong-kii-pro.c
diff --git a/drivers/media/rc/keymaps/Makefile b/drivers/media/rc/keymaps/Makefile index 63261ef6380a9..aaa1bf81d00d4 100644 --- a/drivers/media/rc/keymaps/Makefile +++ b/drivers/media/rc/keymaps/Makefile @@ -119,6 +119,7 @@ obj-$(CONFIG_RC_MAP) += rc-adstech-dvb-t-pci.o \ rc-videomate-m1f.o \ rc-videomate-s350.o \ rc-videomate-tv-pvr.o \ + rc-videostrong-kii-pro.o \ rc-wetek-hub.o \ rc-wetek-play2.o \ rc-winfast.o \ diff --git a/drivers/media/rc/keymaps/rc-videostrong-kii-pro.c b/drivers/media/rc/keymaps/rc-videostrong-kii-pro.c new file mode 100644 index 0000000000000..414d4d231e7ed --- /dev/null +++ b/drivers/media/rc/keymaps/rc-videostrong-kii-pro.c @@ -0,0 +1,83 @@ +// SPDX-License-Identifier: GPL-2.0+ +// +// Copyright (C) 2019 Mohammad Rasim mohammad.rasim96@gmail.com + +#include <media/rc-map.h> +#include <linux/module.h> + +// +// Keytable for the Videostrong KII Pro STB remote control +// + +static struct rc_map_table kii_pro[] = { + { 0x59, KEY_POWER }, + { 0x19, KEY_MUTE }, + { 0x42, KEY_RED }, + { 0x40, KEY_GREEN }, + { 0x00, KEY_YELLOW }, + { 0x03, KEY_BLUE }, + { 0x4a, KEY_BACK }, + { 0x48, KEY_FORWARD }, + { 0x08, KEY_PREVIOUSSONG}, + { 0x0b, KEY_NEXTSONG}, + { 0x46, KEY_PLAYPAUSE }, + { 0x44, KEY_STOP }, + { 0x1f, KEY_FAVORITES}, //KEY_F5? + { 0x04, KEY_PVR }, + { 0x4d, KEY_EPG }, + { 0x02, KEY_INFO }, + { 0x09, KEY_SUBTITLE }, + { 0x01, KEY_AUDIO }, + { 0x0d, KEY_HOMEPAGE }, + { 0x11, KEY_TV }, // DTV ? + { 0x06, KEY_UP }, + { 0x5a, KEY_LEFT }, + { 0x1a, KEY_ENTER }, // KEY_OK ? + { 0x1b, KEY_RIGHT }, + { 0x16, KEY_DOWN }, + { 0x45, KEY_MENU }, + { 0x05, KEY_ESC }, + { 0x13, KEY_VOLUMEUP }, + { 0x17, KEY_VOLUMEDOWN }, + { 0x58, KEY_APPSELECT }, + { 0x12, KEY_VENDOR }, // mouse + { 0x55, KEY_PAGEUP }, // KEY_CHANNELUP ? + { 0x15, KEY_PAGEDOWN }, // KEY_CHANNELDOWN ? + { 0x52, KEY_1 }, + { 0x50, KEY_2 }, + { 0x10, KEY_3 }, + { 0x56, KEY_4 }, + { 0x54, KEY_5 }, + { 0x14, KEY_6 }, + { 0x4e, KEY_7 }, + { 0x4c, KEY_8 }, + { 0x0c, KEY_9 }, + { 0x18, KEY_WWW }, // KEY_F7 + { 0x0f, KEY_0 }, + { 0x51, KEY_BACKSPACE }, +}; + +static struct rc_map_list kii_pro_map = { + .map = { + .scan = kii_pro, + .size = ARRAY_SIZE(kii_pro), + .rc_proto = RC_PROTO_NEC, + .name = RC_MAP_KII_PRO, + } +}; + +static int __init init_rc_map_kii_pro(void) +{ + return rc_map_register(&kii_pro_map); +} + +static void __exit exit_rc_map_kii_pro(void) +{ + rc_map_unregister(&kii_pro_map); +} + +module_init(init_rc_map_kii_pro) +module_exit(exit_rc_map_kii_pro) + +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Mohammad Rasim mohammad.rasim96@gmail.com"); diff --git a/include/media/rc-map.h b/include/media/rc-map.h index f99575a0d29c8..d22810dcd85c6 100644 --- a/include/media/rc-map.h +++ b/include/media/rc-map.h @@ -274,6 +274,7 @@ struct rc_map *rc_map_get(const char *name); #define RC_MAP_VIDEOMATE_K100 "rc-videomate-k100" #define RC_MAP_VIDEOMATE_S350 "rc-videomate-s350" #define RC_MAP_VIDEOMATE_TV_PVR "rc-videomate-tv-pvr" +#define RC_MAP_KII_PRO "rc-videostrong-kii-pro" #define RC_MAP_WETEK_HUB "rc-wetek-hub" #define RC_MAP_WETEK_PLAY2 "rc-wetek-play2" #define RC_MAP_WINFAST "rc-winfast"
From: Christoph Niedermaier cniedermaier@dh-electronics.com
[ Upstream commit 36eb7dc1bd42fe5f850329c893768ff89b696fba ]
imx6ul_opp_check_speed_grading() is called for both i.MX6UL and i.MX6ULL. Since a separate ocotp compatible node was introduced for the i.MX6ULL later, i.MX6ULL dtbs may also carry "fsl,imx6ull-ocotp". On a system without an nvmem-cell speed grade, the missing check for this node makes the driver fail without considering the CPU speed grade.

This patch prevents unwanted CPU overclocking on the i.MX6ULL with the compatible node "fsl,imx6ull-ocotp" in old dtbs without an nvmem-cell speed grade.
Fixes: 2733fb0d0699 ("cpufreq: imx6q: read OCOTP through nvmem for imx6ul/imx6ull") Signed-off-by: Christoph Niedermaier cniedermaier@dh-electronics.com Signed-off-by: Viresh Kumar viresh.kumar@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/cpufreq/imx6q-cpufreq.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/cpufreq/imx6q-cpufreq.c b/drivers/cpufreq/imx6q-cpufreq.c index 648a09a1778a3..1fcbbd53a48a2 100644 --- a/drivers/cpufreq/imx6q-cpufreq.c +++ b/drivers/cpufreq/imx6q-cpufreq.c @@ -280,6 +280,9 @@ static int imx6ul_opp_check_speed_grading(struct device *dev) void __iomem *base;
np = of_find_compatible_node(NULL, NULL, "fsl,imx6ul-ocotp"); + if (!np) + np = of_find_compatible_node(NULL, NULL, + "fsl,imx6ull-ocotp"); if (!np) return -ENOENT;
From: Robert Richter rrichter@marvell.com
[ Upstream commit 65bb4d1af92cf007adc0a0c59dadcc393c5cada6 ]
There is a limitation to report only EDAC_MAX_LABELS in e->label of the error descriptor. This is to prevent a potential string overflow.
The current implementation falls back to "any memory" in this case and also stops all further processing to find a unique row and channel of the possible error location.
Reporting "any memory" is wrong as the memory controller reported an error location for one of the layers. Instead, report "unknown memory" and also do not break early in the loop to further check row and channel for uniqueness.
[ bp: Massage commit message. ]
Signed-off-by: Robert Richter rrichter@marvell.com Signed-off-by: Borislav Petkov bp@suse.de Acked-by: Aristeu Rozanski aris@redhat.com Link: https://lkml.kernel.org/r/20200123090210.26933-7-rrichter@marvell.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/edac/edac_mc.c | 21 +++++++++++---------- 1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/drivers/edac/edac_mc.c b/drivers/edac/edac_mc.c index 69e0d90460e6c..2349f2ad946bb 100644 --- a/drivers/edac/edac_mc.c +++ b/drivers/edac/edac_mc.c @@ -1180,20 +1180,21 @@ void edac_mc_handle_error(const enum hw_event_mc_err_type type, * channel/memory controller/... may be affected. * Also, don't show errors for empty DIMM slots. */ - if (!e->enable_per_layer_report || !dimm->nr_pages) + if (!dimm->nr_pages) continue;
- if (n_labels >= EDAC_MAX_LABELS) { - e->enable_per_layer_report = false; - break; - } n_labels++; - if (p != e->label) { - strcpy(p, OTHER_LABEL); - p += strlen(OTHER_LABEL); + if (n_labels > EDAC_MAX_LABELS) { + p = e->label; + *p = '\0'; + } else { + if (p != e->label) { + strcpy(p, OTHER_LABEL); + p += strlen(OTHER_LABEL); + } + strcpy(p, dimm->label); + p += strlen(p); } - strcpy(p, dimm->label); - p += strlen(p);
/* * get csrow/channel of the DIMM, in order to allow
From: Ajay Gupta ajayg@nvidia.com
[ Upstream commit 57a5e5f936be583d2c6cef3661c169e3ea4bf922 ]
The UCSI PPM is unregistered during firmware flashing, so also disable runtime PM, and re-enable it after firmware flashing is completed and the PPM is re-registered.
Signed-off-by: Ajay Gupta ajayg@nvidia.com Signed-off-by: Heikki Krogerus heikki.krogerus@linux.intel.com Link: https://lore.kernel.org/r/20200217144913.55330-3-heikki.krogerus@linux.intel... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/typec/ucsi/ucsi_ccg.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/usb/typec/ucsi/ucsi_ccg.c b/drivers/usb/typec/ucsi/ucsi_ccg.c index 3370b3fc37b10..bd374cea3ba6c 100644 --- a/drivers/usb/typec/ucsi/ucsi_ccg.c +++ b/drivers/usb/typec/ucsi/ucsi_ccg.c @@ -1032,6 +1032,7 @@ static int ccg_restart(struct ucsi_ccg *uc) return status; }
+ pm_runtime_enable(uc->dev); return 0; }
@@ -1047,6 +1048,7 @@ static void ccg_update_firmware(struct work_struct *work)
if (flash_mode != FLASH_NOT_NEEDED) { ucsi_unregister(uc->ucsi); + pm_runtime_disable(uc->dev); free_irq(uc->irq, uc);
ccg_fw_update(uc, flash_mode);
From: Ajay Singh ajay.kathat@microchip.com
[ Upstream commit 6c411581caef6e3b2c286871641018364c6db50a ]
A possible double unlock of the 'wilc->hif_cs' mutex was identified by smatch [1]. Remove the extra call to release_bus() in wilc_wlan_handle_txq(), which was missed in earlier commit fdc2ac1aafc6 ("staging: wilc1000: support suspend/resume functionality").
[1]. https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org/thread/NOEVW7C3...
Reported-by: kbuild test robot lkp@intel.com Reported-by: Dan Carpenter dan.carpenter@oracle.com Signed-off-by: Ajay Singh ajay.kathat@microchip.com Link: https://lore.kernel.org/r/20200221170120.15739-1-ajay.kathat@microchip.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/staging/wilc1000/wlan.c | 1 - 1 file changed, 1 deletion(-)
diff --git a/drivers/staging/wilc1000/wlan.c b/drivers/staging/wilc1000/wlan.c index d3de76126b787..3098399741d7b 100644 --- a/drivers/staging/wilc1000/wlan.c +++ b/drivers/staging/wilc1000/wlan.c @@ -578,7 +578,6 @@ int wilc_wlan_handle_txq(struct wilc *wilc, u32 *txq_count) entries = ((reg >> 3) & 0x3f); break; } - release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP); } while (--timeout); if (timeout <= 0) { ret = func->hif_write_reg(wilc, WILC_HOST_VMM_CTL, 0x0);
From: Dafna Hirschfeld dafna.hirschfeld@collabora.com
[ Upstream commit ceeb2e6166dddf3c9757abbbf84032027e2fa2d2 ]
In case kthread_run() fails, the vimc subdevices should be notified that streaming has stopped so they can release the memory used for streaming. Also, stream->kthread should be set to NULL.
Signed-off-by: Dafna Hirschfeld dafna.hirschfeld@collabora.com Acked-by: Helen Koike helen.koike@collabora.com Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/vimc/vimc-streamer.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/media/platform/vimc/vimc-streamer.c b/drivers/media/platform/vimc/vimc-streamer.c index cd6b55433c9ee..43e494df61d88 100644 --- a/drivers/media/platform/vimc/vimc-streamer.c +++ b/drivers/media/platform/vimc/vimc-streamer.c @@ -207,8 +207,13 @@ int vimc_streamer_s_stream(struct vimc_stream *stream, stream->kthread = kthread_run(vimc_streamer_thread, stream, "vimc-streamer thread");
- if (IS_ERR(stream->kthread)) - return PTR_ERR(stream->kthread); + if (IS_ERR(stream->kthread)) { + ret = PTR_ERR(stream->kthread); + dev_err(ved->dev, "kthread_run failed with %d\n", ret); + vimc_streamer_pipeline_terminate(stream); + stream->kthread = NULL; + return ret; + }
} else { if (!stream->kthread)
From: Stephan Gerhold stephan@gerhold.net
[ Upstream commit c50cc6dc6c48300af63a6fbc71b647053c15fc80 ]
Some older MSM8916 Venus firmware versions also seem to indicate support for encoding HEVC, even though they really can't. This will lead to errors later because hfi_session_init() fails in this case.
HEVC is already ignored for "dec_codecs", so add the same for "enc_codecs" to make these old firmware versions work correctly.
Signed-off-by: Stephan Gerhold stephan@gerhold.net Signed-off-by: Stanimir Varbanov stanimir.varbanov@linaro.org Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/qcom/venus/hfi_parser.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/media/platform/qcom/venus/hfi_parser.c b/drivers/media/platform/qcom/venus/hfi_parser.c index 2293d936e49ca..7f515a4b9bd12 100644 --- a/drivers/media/platform/qcom/venus/hfi_parser.c +++ b/drivers/media/platform/qcom/venus/hfi_parser.c @@ -181,6 +181,7 @@ static void parse_codecs(struct venus_core *core, void *data) if (IS_V1(core)) { core->dec_codecs &= ~HFI_VIDEO_CODEC_HEVC; core->dec_codecs &= ~HFI_VIDEO_CODEC_SPARK; + core->enc_codecs &= ~HFI_VIDEO_CODEC_HEVC; } }
From: James Morse james.morse@arm.com
[ Upstream commit 6ded0b61cf638bf9f8efe60ab8ba23db60ea9763 ]
SDEI has private events that must be registered on each CPU. When CPUs come and go they must re-register and re-enable their private events. Each event has flags to indicate whether this should happen to protect against an event being registered on a CPU coming online, while all the others are unregistering the event.
These flags are protected by the sdei_list_lock spinlock, because the cpuhp callbacks can't take the mutex.
Hibernate needs to unregister all events, but keep the in-memory re-register and re-enable as they are. sdei_unregister_shared() takes the spinlock to walk the list, then calls _sdei_event_unregister() on each shared event. _sdei_event_unregister() tries to take the same spinlock to update re-register and re-enable. This doesn't go so well.
Push the re-register and re-enable updates out to their callers. sdei_unregister_shared() doesn't want these values updated, so doesn't need to do anything.
This also fixes shared events getting lost over hibernate as this path made them look unregistered.
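As an editorial sketch (hypothetical names, not the SDEI code itself) of the self-deadlock pattern described above, where a helper retakes a non-reentrant spinlock its caller already holds:

#include <linux/spinlock.h>
#include <linux/types.h>

static DEFINE_SPINLOCK(example_lock);
static bool example_flag;

static void helper_updates_flag(void)
{
        spin_lock(&example_lock);       /* deadlocks if the caller holds the lock */
        example_flag = false;
        spin_unlock(&example_lock);
}

static void list_walker(void)
{
        spin_lock(&example_lock);
        /* helper_updates_flag();          calling the helper here would self-deadlock */
        spin_unlock(&example_lock);

        helper_updates_flag();          /* safe once the lock has been dropped */
}

The fix follows the same shape: move the flag updates out of the helper so that the list-walking caller, which already holds sdei_list_lock, never re-enters it.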
Fixes: da351827240e ("firmware: arm_sdei: Add support for CPU and system power states") Reported-by: Liguang Zhang zhangliguang@linux.alibaba.com Signed-off-by: James Morse james.morse@arm.com Signed-off-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/firmware/arm_sdei.c | 32 +++++++++++++++----------------- 1 file changed, 15 insertions(+), 17 deletions(-)
diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c index a479023fa036e..77eaa9a2fd156 100644 --- a/drivers/firmware/arm_sdei.c +++ b/drivers/firmware/arm_sdei.c @@ -491,11 +491,6 @@ static int _sdei_event_unregister(struct sdei_event *event) { lockdep_assert_held(&sdei_events_lock);
- spin_lock(&sdei_list_lock); - event->reregister = false; - event->reenable = false; - spin_unlock(&sdei_list_lock); - if (event->type == SDEI_EVENT_TYPE_SHARED) return sdei_api_event_unregister(event->event_num);
@@ -518,6 +513,11 @@ int sdei_event_unregister(u32 event_num) break; }
+ spin_lock(&sdei_list_lock); + event->reregister = false; + event->reenable = false; + spin_unlock(&sdei_list_lock); + err = _sdei_event_unregister(event); if (err) break; @@ -585,26 +585,15 @@ static int _sdei_event_register(struct sdei_event *event)
lockdep_assert_held(&sdei_events_lock);
- spin_lock(&sdei_list_lock); - event->reregister = true; - spin_unlock(&sdei_list_lock); - if (event->type == SDEI_EVENT_TYPE_SHARED) return sdei_api_event_register(event->event_num, sdei_entry_point, event->registered, SDEI_EVENT_REGISTER_RM_ANY, 0);
- err = sdei_do_cross_call(_local_event_register, event); - if (err) { - spin_lock(&sdei_list_lock); - event->reregister = false; - event->reenable = false; - spin_unlock(&sdei_list_lock); - + if (err) sdei_do_cross_call(_local_event_unregister, event); - }
return err; } @@ -632,8 +621,17 @@ int sdei_event_register(u32 event_num, sdei_event_callback *cb, void *arg) break; }
+ spin_lock(&sdei_list_lock); + event->reregister = true; + spin_unlock(&sdei_list_lock); + err = _sdei_event_register(event); if (err) { + spin_lock(&sdei_list_lock); + event->reregister = false; + event->reenable = false; + spin_unlock(&sdei_list_lock); + sdei_event_destroy(event); pr_warn("Failed to register event %u: %d\n", event_num, err);
From: Chris Wilson chris@chris-wilson.co.uk
[ Upstream commit f1dfdab694eb3838ac26f4b73695929c07d92a33 ]
As the vtime is sampled under loose seqcount protection by kcpustat, the vtime fields may change as the code flows. Where logic dictates a field has a static value, use a READ_ONCE.
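As a generic illustration of the pattern (sketch only, not the kcpustat code itself): inside a seqcount read section the protected fields can still change underneath the reader, so a field that drives several decisions in one pass should be snapshotted once with READ_ONCE() and the snapshot reused:

	do {
		seq = read_seqcount_begin(&vtime->seqcount);

		state = READ_ONCE(vtime->state);	/* one snapshot per iteration */
		if (state == VTIME_INACTIVE)
			return -EAGAIN;

		/* every later check uses 'state', never vtime->state again */
		if (state == VTIME_SYS)
			val += vtime->stime + vtime_delta(vtime);

	} while (read_seqcount_retry(&vtime->seqcount, seq));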
Signed-off-by: Chris Wilson chris@chris-wilson.co.uk Signed-off-by: Frederic Weisbecker frederic@kernel.org Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Ingo Molnar mingo@kernel.org Fixes: 74722bb223d0 ("sched/vtime: Bring up complete kcpustat accessor") Link: https://lkml.kernel.org/r/20200123180849.28486-1-frederic@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/sched/cputime.c | 41 ++++++++++++++++++++++------------------- 1 file changed, 22 insertions(+), 19 deletions(-)
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c index d43318a489f24..df3577149d2ed 100644 --- a/kernel/sched/cputime.c +++ b/kernel/sched/cputime.c @@ -912,8 +912,10 @@ void task_cputime(struct task_struct *t, u64 *utime, u64 *stime) } while (read_seqcount_retry(&vtime->seqcount, seq)); }
-static int vtime_state_check(struct vtime *vtime, int cpu) +static int vtime_state_fetch(struct vtime *vtime, int cpu) { + int state = READ_ONCE(vtime->state); + /* * We raced against a context switch, fetch the * kcpustat task again. @@ -930,10 +932,10 @@ static int vtime_state_check(struct vtime *vtime, int cpu) * * Case 1) is ok but 2) is not. So wait for a safe VTIME state. */ - if (vtime->state == VTIME_INACTIVE) + if (state == VTIME_INACTIVE) return -EAGAIN;
- return 0; + return state; }
static u64 kcpustat_user_vtime(struct vtime *vtime) @@ -952,14 +954,15 @@ static int kcpustat_field_vtime(u64 *cpustat, { struct vtime *vtime = &tsk->vtime; unsigned int seq; - int err;
do { + int state; + seq = read_seqcount_begin(&vtime->seqcount);
- err = vtime_state_check(vtime, cpu); - if (err < 0) - return err; + state = vtime_state_fetch(vtime, cpu); + if (state < 0) + return state;
*val = cpustat[usage];
@@ -972,7 +975,7 @@ static int kcpustat_field_vtime(u64 *cpustat, */ switch (usage) { case CPUTIME_SYSTEM: - if (vtime->state == VTIME_SYS) + if (state == VTIME_SYS) *val += vtime->stime + vtime_delta(vtime); break; case CPUTIME_USER: @@ -984,11 +987,11 @@ static int kcpustat_field_vtime(u64 *cpustat, *val += kcpustat_user_vtime(vtime); break; case CPUTIME_GUEST: - if (vtime->state == VTIME_GUEST && task_nice(tsk) <= 0) + if (state == VTIME_GUEST && task_nice(tsk) <= 0) *val += vtime->gtime + vtime_delta(vtime); break; case CPUTIME_GUEST_NICE: - if (vtime->state == VTIME_GUEST && task_nice(tsk) > 0) + if (state == VTIME_GUEST && task_nice(tsk) > 0) *val += vtime->gtime + vtime_delta(vtime); break; default: @@ -1039,23 +1042,23 @@ static int kcpustat_cpu_fetch_vtime(struct kernel_cpustat *dst, { struct vtime *vtime = &tsk->vtime; unsigned int seq; - int err;
do { u64 *cpustat; u64 delta; + int state;
seq = read_seqcount_begin(&vtime->seqcount);
- err = vtime_state_check(vtime, cpu); - if (err < 0) - return err; + state = vtime_state_fetch(vtime, cpu); + if (state < 0) + return state;
*dst = *src; cpustat = dst->cpustat;
/* Task is sleeping, dead or idle, nothing to add */ - if (vtime->state < VTIME_SYS) + if (state < VTIME_SYS) continue;
delta = vtime_delta(vtime); @@ -1064,15 +1067,15 @@ static int kcpustat_cpu_fetch_vtime(struct kernel_cpustat *dst, * Task runs either in user (including guest) or kernel space, * add pending nohz time to the right place. */ - if (vtime->state == VTIME_SYS) { + if (state == VTIME_SYS) { cpustat[CPUTIME_SYSTEM] += vtime->stime + delta; - } else if (vtime->state == VTIME_USER) { + } else if (state == VTIME_USER) { if (task_nice(tsk) > 0) cpustat[CPUTIME_NICE] += vtime->utime + delta; else cpustat[CPUTIME_USER] += vtime->utime + delta; } else { - WARN_ON_ONCE(vtime->state != VTIME_GUEST); + WARN_ON_ONCE(state != VTIME_GUEST); if (task_nice(tsk) > 0) { cpustat[CPUTIME_GUEST_NICE] += vtime->gtime + delta; cpustat[CPUTIME_NICE] += vtime->gtime + delta; @@ -1083,7 +1086,7 @@ static int kcpustat_cpu_fetch_vtime(struct kernel_cpustat *dst, } } while (read_seqcount_retry(&vtime->seqcount, seq));
- return err; + return 0; }
void kcpustat_cpu_fetch(struct kernel_cpustat *dst, int cpu)
From: Bart Van Assche bvanassche@acm.org
[ Upstream commit 2004bfdef945fe55196db6b9cdf321fbc75bb0de ]
If null_add_dev() fails, clear dev->nullb.
This patch fixes the following KASAN complaint:
BUG: KASAN: use-after-free in nullb_device_submit_queues_store+0xcf/0x160 [null_blk] Read of size 8 at addr ffff88803280fc30 by task check/8409
Call Trace: dump_stack+0xa5/0xe6 print_address_description.constprop.0+0x26/0x260 __kasan_report.cold+0x7b/0x99 kasan_report+0x16/0x20 __asan_load8+0x58/0x90 nullb_device_submit_queues_store+0xcf/0x160 [null_blk] configfs_write_file+0x1c4/0x250 [configfs] __vfs_write+0x4c/0x90 vfs_write+0x145/0x2c0 ksys_write+0xd7/0x180 __x64_sys_write+0x47/0x50 do_syscall_64+0x6f/0x2f0 entry_SYSCALL_64_after_hwframe+0x49/0xbe RIP: 0033:0x7ff370926317 Code: 64 89 02 48 c7 c0 ff ff ff ff eb bb 0f 1f 80 00 00 00 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 48 89 54 24 18 48 89 74 24 RSP: 002b:00007fff2dd2da48 EFLAGS: 00000246 ORIG_RAX: 0000000000000001 RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007ff370926317 RDX: 0000000000000002 RSI: 0000559437ef23f0 RDI: 0000000000000001 RBP: 0000559437ef23f0 R08: 000000000000000a R09: 0000000000000001 R10: 0000559436703471 R11: 0000000000000246 R12: 0000000000000002 R13: 00007ff370a006a0 R14: 00007ff370a014a0 R15: 00007ff370a008a0
Allocated by task 8409: save_stack+0x23/0x90 __kasan_kmalloc.constprop.0+0xcf/0xe0 kasan_kmalloc+0xd/0x10 kmem_cache_alloc_node_trace+0x129/0x4c0 null_add_dev+0x24a/0xe90 [null_blk] nullb_device_power_store+0x1b6/0x270 [null_blk] configfs_write_file+0x1c4/0x250 [configfs] __vfs_write+0x4c/0x90 vfs_write+0x145/0x2c0 ksys_write+0xd7/0x180 __x64_sys_write+0x47/0x50 do_syscall_64+0x6f/0x2f0 entry_SYSCALL_64_after_hwframe+0x49/0xbe
Freed by task 8409: save_stack+0x23/0x90 __kasan_slab_free+0x112/0x160 kasan_slab_free+0x12/0x20 kfree+0xdf/0x250 null_add_dev+0xaf3/0xe90 [null_blk] nullb_device_power_store+0x1b6/0x270 [null_blk] configfs_write_file+0x1c4/0x250 [configfs] __vfs_write+0x4c/0x90 vfs_write+0x145/0x2c0 ksys_write+0xd7/0x180 __x64_sys_write+0x47/0x50 do_syscall_64+0x6f/0x2f0 entry_SYSCALL_64_after_hwframe+0x49/0xbe
Fixes: 2984c8684f96 ("nullb: factor disk parameters") Signed-off-by: Bart Van Assche bvanassche@acm.org Reviewed-by: Chaitanya Kulkarni chaitanya.kulkarni@wdc.com Cc: Johannes Thumshirn jth@kernel.org Cc: Hannes Reinecke hare@suse.com Cc: Ming Lei ming.lei@redhat.com Cc: Christoph Hellwig hch@infradead.org Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/block/null_blk_main.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c index ae8d4bc532b0b..97bb53d7fa348 100644 --- a/drivers/block/null_blk_main.c +++ b/drivers/block/null_blk_main.c @@ -1790,6 +1790,7 @@ out_cleanup_queues: cleanup_queues(nullb); out_free_nullb: kfree(nullb); + dev->nullb = NULL; out: return rv; }
From: Bart Van Assche bvanassche@acm.org
[ Upstream commit d0930bb8f46b8fb4a7d429c0bf1c91b3ed00a7cf ]
q->nr_hw_queues must only be updated once it is known that blk_mq_realloc_hw_ctxs() has succeeded. Otherwise it can happen that reallocation fails and that q->nr_hw_queues is larger than the number of allocated hardware queues. This patch fixes the following crash if increasing the number of hardware queues fails:
BUG: KASAN: null-ptr-deref in blk_mq_map_swqueue+0x775/0x810 Write of size 8 at addr 0000000000000118 by task check/977
CPU: 3 PID: 977 Comm: check Not tainted 5.6.0-rc1-dbg+ #8 Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011 Call Trace: dump_stack+0xa5/0xe6 __kasan_report.cold+0x65/0x99 kasan_report+0x16/0x20 check_memory_region+0x140/0x1b0 memset+0x28/0x40 blk_mq_map_swqueue+0x775/0x810 blk_mq_update_nr_hw_queues+0x468/0x710 nullb_device_submit_queues_store+0xf7/0x1a0 [null_blk] configfs_write_file+0x1c4/0x250 [configfs] __vfs_write+0x4c/0x90 vfs_write+0x145/0x2c0 ksys_write+0xd7/0x180 __x64_sys_write+0x47/0x50 do_syscall_64+0x6f/0x2f0 entry_SYSCALL_64_after_hwframe+0x49/0xbe
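The ordering rule the patch restores can be sketched like this (illustrative pattern only, not the blk-mq code itself): the larger count must only become visible after the matching reallocation has succeeded, otherwise code that walks q->queue_hw_ctx up to q->nr_hw_queues can run off the end of the old, smaller array:

	/* illustrative only */
	new_hctxs = kcalloc(new_nr, sizeof(*new_hctxs), GFP_KERNEL);
	if (!new_hctxs)
		return;			/* keep the old array and the old count */

	memcpy(new_hctxs, hctxs, old_nr * sizeof(*new_hctxs));
	q->queue_hw_ctx = new_hctxs;
	q->nr_hw_queues = new_nr;	/* publish the count only once the array can back it */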
Fixes: ac0d6b926e74 ("block: Reduce the amount of memory required per request queue") Signed-off-by: Bart Van Assche bvanassche@acm.org Reviewed-by: Ming Lei ming.lei@redhat.com Cc: Keith Busch kbusch@kernel.org Cc: Johannes Thumshirn jth@kernel.org Cc: Hannes Reinecke hare@suse.com Cc: Christoph Hellwig hch@infradead.org Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- block/blk-mq.c | 1 - 1 file changed, 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c index 7d7800e958952..8391e8e2a5044 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -2766,7 +2766,6 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set, memcpy(new_hctxs, hctxs, q->nr_hw_queues * sizeof(*hctxs)); q->queue_hw_ctx = new_hctxs; - q->nr_hw_queues = set->nr_hw_queues; kfree(hctxs); hctxs = new_hctxs; }
From: Bart Van Assche bvanassche@acm.org
[ Upstream commit 9b03b713082a31a5b90e0a893c72aa620e255c26 ]
If null_add_dev() fails then null_del_dev() is called with a NULL argument. Make null_del_dev() handle this scenario correctly. This patch fixes the following KASAN complaint:
null-ptr-deref in null_del_dev+0x28/0x280 [null_blk] Read of size 8 at addr 0000000000000000 by task find/1062
Call Trace: dump_stack+0xa5/0xe6 __kasan_report.cold+0x65/0x99 kasan_report+0x16/0x20 __asan_load8+0x58/0x90 null_del_dev+0x28/0x280 [null_blk] nullb_group_drop_item+0x7e/0xa0 [null_blk] client_drop_item+0x53/0x80 [configfs] configfs_rmdir+0x395/0x4e0 [configfs] vfs_rmdir+0xb6/0x220 do_rmdir+0x238/0x2c0 __x64_sys_unlinkat+0x75/0x90 do_syscall_64+0x6f/0x2f0 entry_SYSCALL_64_after_hwframe+0x49/0xbe
Signed-off-by: Bart Van Assche bvanassche@acm.org Reviewed-by: Chaitanya Kulkarni chaitanya.kulkarni@wdc.com Cc: Johannes Thumshirn jth@kernel.org Cc: Hannes Reinecke hare@suse.com Cc: Ming Lei ming.lei@redhat.com Cc: Christoph Hellwig hch@infradead.org Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/block/null_blk_main.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c index 97bb53d7fa348..635904bfcf777 100644 --- a/drivers/block/null_blk_main.c +++ b/drivers/block/null_blk_main.c @@ -1432,7 +1432,12 @@ static void cleanup_queues(struct nullb *nullb)
static void null_del_dev(struct nullb *nullb) { - struct nullb_device *dev = nullb->dev; + struct nullb_device *dev; + + if (!nullb) + return; + + dev = nullb->dev;
ida_simple_remove(&nullb_indexes, nullb->index);
From: Alexey Dobriyan adobriyan@gmail.com
[ Upstream commit ff77042296d0a54535ddf74412c5ae92cb4ec76a ]
Steps to reproduce:
BLKRESETZONE zone 0
// force EIO
pwrite(fd, buf, 4096, 4096);
[issue more IO including zone ioctls]
It will start failing randomly, including IO to unrelated zones, because of ->error "reuse". The trigger can be partition detection as well, if the test is not run immediately, which is even more entertaining.
The fix is of course to clear ->error where necessary.
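In other words (generic sketch, not a quote of the driver): a command slot handed out from the tag pool still holds whatever the previous request left in it, so every field that the completion path reads has to be reinitialized on reuse:

	/* illustrative only */
	cmd = &nq->cmds[tag];		/* slot previously used by another request */
	cmd->tag = tag;
	cmd->error = BLK_STS_OK;	/* without this, a stale BLK_STS_IOERR from the
					 * earlier command is reported for the new one */
	cmd->nq = nq;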
Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Alexey Dobriyan (SK hynix) adobriyan@gmail.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/block/null_blk_main.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c index 635904bfcf777..0cafad09c9b29 100644 --- a/drivers/block/null_blk_main.c +++ b/drivers/block/null_blk_main.c @@ -605,6 +605,7 @@ static struct nullb_cmd *__alloc_cmd(struct nullb_queue *nq) if (tag != -1U) { cmd = &nq->cmds[tag]; cmd->tag = tag; + cmd->error = BLK_STS_OK; cmd->nq = nq; if (nq->dev->irqmode == NULL_IRQ_TIMER) { hrtimer_init(&cmd->timer, CLOCK_MONOTONIC, @@ -1385,6 +1386,7 @@ static blk_status_t null_queue_rq(struct blk_mq_hw_ctx *hctx, cmd->timer.function = null_cmd_timer_expired; } cmd->rq = bd->rq; + cmd->error = BLK_STS_OK; cmd->nq = nq;
blk_mq_start_request(bd->rq);
From: Laurent Pinchart laurent.pinchart@ideasonboard.com
[ Upstream commit 770cbf89f90b0663499dbb3f03aa81b3322757ec ]
The .s_stream() implementation incorrectly powers on the source when stopping the stream. Power it off instead.
Fixes: 7807063b862b ("media: staging/imx7: add MIPI CSI-2 receiver subdev for i.MX7") Signed-off-by: Laurent Pinchart laurent.pinchart@ideasonboard.com Reviewed-by: Rui Miguel Silva rmfrfs@gmail.com Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/staging/media/imx/imx7-mipi-csis.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/media/imx/imx7-mipi-csis.c b/drivers/staging/media/imx/imx7-mipi-csis.c index 99166afca071b..aa1749b1e28fc 100644 --- a/drivers/staging/media/imx/imx7-mipi-csis.c +++ b/drivers/staging/media/imx/imx7-mipi-csis.c @@ -579,7 +579,7 @@ static int mipi_csis_s_stream(struct v4l2_subdev *mipi_sd, int enable) state->flags |= ST_STREAMING; } else { v4l2_subdev_call(state->src_sd, video, s_stream, 0); - ret = v4l2_subdev_call(state->src_sd, core, s_power, 1); + ret = v4l2_subdev_call(state->src_sd, core, s_power, 0); mipi_csis_stop_stream(state); state->flags &= ~ST_STREAMING; if (state->debug)
From: Laurent Pinchart laurent.pinchart@ideasonboard.com
[ Upstream commit f7b8488bd39ae8feced4dfbb41cf1431277b893f ]
Commit 4791bd7d6adc ("media: imx: Try colorimetry at both sink and source pads") reworked the way that formats are set on the sink pad of the CSI subdevice, and accidentally removed video field handling. Restore it by defaulting to V4L2_FIELD_NONE if the field value isn't supported, with the only two supported value being V4L2_FIELD_NONE and V4L2_FIELD_INTERLACED.
Fixes: 4791bd7d6adc ("media: imx: Try colorimetry at both sink and source pads") Signed-off-by: Laurent Pinchart laurent.pinchart@ideasonboard.com Reviewed-by: Rui Miguel Silva rmfrfs@gmail.com Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/staging/media/imx/imx7-media-csi.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/drivers/staging/media/imx/imx7-media-csi.c b/drivers/staging/media/imx/imx7-media-csi.c index db30e2c70f2fe..f45920b3137e4 100644 --- a/drivers/staging/media/imx/imx7-media-csi.c +++ b/drivers/staging/media/imx/imx7-media-csi.c @@ -1009,6 +1009,7 @@ static int imx7_csi_try_fmt(struct imx7_csi *csi, sdformat->format.width = in_fmt->width; sdformat->format.height = in_fmt->height; sdformat->format.code = in_fmt->code; + sdformat->format.field = in_fmt->field; *cc = in_cc;
sdformat->format.colorspace = in_fmt->colorspace; @@ -1023,6 +1024,9 @@ static int imx7_csi_try_fmt(struct imx7_csi *csi, false); sdformat->format.code = (*cc)->codes[0]; } + + if (sdformat->format.field != V4L2_FIELD_INTERLACED) + sdformat->format.field = V4L2_FIELD_NONE; break; default: return -EINVAL;
From: Mathias Nyman mathias.nyman@linux.intel.com
[ Upstream commit 72ae194704da212e2ec312ab182a96799d070755 ]
Bail out early if the xHC host needs to be reset at resume but the driver can't access the xHC PCI registers.

If the xhci driver already fails to reset the controller, then there is no point in attempting to free, re-initialize, re-allocate and re-start the host. If the failure to access the host is detected later, failing the resume, xhci interrupts will be double-freed when remove is called.
Signed-off-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20200312144517.1593-2-mathias.nyman@linux.intel.co... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/host/xhci.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c index dbac0fa9748d5..fe38275363e0f 100644 --- a/drivers/usb/host/xhci.c +++ b/drivers/usb/host/xhci.c @@ -1157,8 +1157,10 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated) xhci_dbg(xhci, "Stop HCD\n"); xhci_halt(xhci); xhci_zero_64b_regs(xhci); - xhci_reset(xhci); + retval = xhci_reset(xhci); spin_unlock_irq(&xhci->lock); + if (retval) + return retval; xhci_cleanup_msix(xhci);
xhci_dbg(xhci, "// Disabling event ring interrupts\n");
From: Rafael J. Wysocki rafael.j.wysocki@intel.com
[ Upstream commit 65a691f5f8f0bb63d6a82eec7b0ffd193d8d8a5f ]
The reason for clearing boot_ec_is_ecdt in acpi_ec_add() (if a PNP0C09 device object matching the ECDT boot EC had been found in the namespace) was to cause acpi_ec_ecdt_start() to return early, but since the latter does not look at boot_ec_is_ecdt any more, acpi_ec_add() need not clear it.
Moreover, doing that may be confusing as it may cause "DSDT" to be printed instead of "ECDT" in the EC initialization completion message, so stop doing it.
While at it, split the EC initialization completion message into two messages, one regarding the boot EC and another one printed regardless of whether or not the EC at hand is the boot one.
Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/acpi/ec.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c index bd74c78366759..f351d0711e495 100644 --- a/drivers/acpi/ec.c +++ b/drivers/acpi/ec.c @@ -1649,7 +1649,6 @@ static int acpi_ec_add(struct acpi_device *device)
if (boot_ec && ec->command_addr == boot_ec->command_addr && ec->data_addr == boot_ec->data_addr) { - boot_ec_is_ecdt = false; /* * Trust PNP0C09 namespace location rather than * ECDT ID. But trust ECDT GPE rather than _GPE @@ -1669,9 +1668,12 @@ static int acpi_ec_add(struct acpi_device *device)
if (ec == boot_ec) acpi_handle_info(boot_ec->handle, - "Boot %s EC used to handle transactions and events\n", + "Boot %s EC initialization complete\n", boot_ec_is_ecdt ? "ECDT" : "DSDT");
+ acpi_handle_info(ec->handle, + "EC: Used to handle transactions and events\n"); + device->driver_data = ec;
ret = !!request_region(ec->data_addr, 1, "EC data");
From: Thomas Hellstrom thellstrom@vmware.com
[ Upstream commit 6db73f17c5f155dbcfd5e48e621c706270b84df0 ]
When SEV or SME is enabled and active, vm_get_page_prot() typically returns with the encryption bit set. This means that users of pgprot_modify(, vm_get_page_prot()) (mprotect_fixup(), do_mmap()) end up with a value of vma->vm_page_prot that is not consistent with the intended protection of the PTEs.
This is also important for fault handlers that rely on the VMA vm_page_prot to set the page protection. Fix this by not allowing pgprot_modify() to change the encryption bit, similar to how it's done for PAT bits.
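A small worked example of the masking (bit values illustrative; the real layout is in pgtable_types.h). With _PAGE_ENC included in _PAGE_CHG_MASK, pgprot_modify() keeps whatever the old protection said about encryption and ignores the encryption bit coming from vm_get_page_prot():

	/* illustrative only */
	oldprot      = vma->vm_page_prot;	  /* encryption bit deliberately clear */
	newprot      = vm_get_page_prot(flags);  /* has _PAGE_ENC set under SEV/SME   */

	preservebits = pgprot_val(oldprot) &  _PAGE_CHG_MASK;	/* keeps _PAGE_ENC == 0 */
	addbits      = pgprot_val(newprot) & ~_PAGE_CHG_MASK;	/* drops _PAGE_ENC      */
	result       = __pgprot(preservebits | addbits);	/* stays unencrypted    */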
Signed-off-by: Thomas Hellstrom thellstrom@vmware.com Signed-off-by: Borislav Petkov bp@suse.de Reviewed-by: Dave Hansen dave.hansen@linux.intel.com Acked-by: Tom Lendacky thomas.lendacky@amd.com Link: https://lkml.kernel.org/r/20200304114527.3636-2-thomas_os@shipmail.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/include/asm/pgtable.h | 7 +++++-- arch/x86/include/asm/pgtable_types.h | 2 +- 2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index ad97dc1551959..9de80cbdd887e 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -624,12 +624,15 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot) return __pmd(val); }
-/* mprotect needs to preserve PAT bits when updating vm_page_prot */ +/* + * mprotect needs to preserve PAT and encryption bits when updating + * vm_page_prot + */ #define pgprot_modify pgprot_modify static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot) { pgprotval_t preservebits = pgprot_val(oldprot) & _PAGE_CHG_MASK; - pgprotval_t addbits = pgprot_val(newprot); + pgprotval_t addbits = pgprot_val(newprot) & ~_PAGE_CHG_MASK; return __pgprot(preservebits | addbits); }
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h index b5e49e6bac635..8267dd426b152 100644 --- a/arch/x86/include/asm/pgtable_types.h +++ b/arch/x86/include/asm/pgtable_types.h @@ -123,7 +123,7 @@ */ #define _PAGE_CHG_MASK (PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT | \ _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY | \ - _PAGE_SOFT_DIRTY | _PAGE_DEVMAP) + _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC) #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
/*
From: Thomas Hellstrom thellstrom@vmware.com
[ Upstream commit 17c4a2ae15a7aaefe84bdb271952678c5c9cd8e1 ]
When dma_mmap_coherent() sets up a mapping to unencrypted coherent memory under SEV encryption and sometimes under SME encryption, it will actually set up an encrypted mapping rather than an unencrypted one, causing devices that DMA from that memory to read encrypted contents. Fix this.

When force_dma_unencrypted() returns true, the linear kernel map of the coherent pages has had the encryption bit explicitly cleared and the page content is unencrypted. Make sure that any additional PTEs we set up for these pages also have the encryption bit cleared by having dma_pgprot() return a protection with the encryption bit cleared in this case.
Signed-off-by: Thomas Hellstrom thellstrom@vmware.com Signed-off-by: Borislav Petkov bp@suse.de Reviewed-by: Christoph Hellwig hch@lst.de Acked-by: Tom Lendacky thomas.lendacky@amd.com Link: https://lkml.kernel.org/r/20200304114527.3636-3-thomas_os@shipmail.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/dma/mapping.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 12ff766ec1fa3..98e3d873792ea 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -154,6 +154,8 @@ EXPORT_SYMBOL(dma_get_sgtable_attrs); */ pgprot_t dma_pgprot(struct device *dev, pgprot_t prot, unsigned long attrs) { + if (force_dma_unencrypted(dev)) + prot = pgprot_decrypted(prot); if (dev_is_dma_coherent(dev) || (IS_ENABLED(CONFIG_DMA_NONCOHERENT_CACHE_SYNC) && (attrs & DMA_ATTR_NON_CONSISTENT)))
From: Konstantin Khlebnikov khlebnikov@yandex-team.ru
[ Upstream commit e74d93e96d721c4297f2a900ad0191890d2fc2b0 ]
The bdi->io_pages field, added in commit 9491ae4aade6 ("mm: don't cap request size based on read-ahead setting"), removes an unneeded split of read requests.

Stacked drivers do not call blk_queue_max_hw_sectors(). Instead they set the limits of their devices via blk_set_stacking_limits() + disk_stack_limits(). The bdi->io_pages field stays zero until the user sets max_sectors_kb via sysfs.
This patch updates io_pages after merging limits in disk_stack_limits().
Commit c6d6e9b0f6b4 ("dm: do not allow readahead to limit IO size") fixed the same problem for device-mapper devices, this one fixes MD RAIDs.
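As a quick sanity check of the conversion (example numbers, assuming 4 KiB pages): max_sectors is counted in 512-byte sectors and the shift is PAGE_SHIFT - 9 = 3, so a stacked device whose merged limit is max_sectors = 2560 (1280 KiB) ends up with io_pages = 2560 >> 3 = 320 pages of read-ahead headroom instead of staying at zero.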
Fixes: 9491ae4aade6 ("mm: don't cap request size based on read-ahead setting") Reviewed-by: Paul Menzel pmenzel@molgen.mpg.de Reviewed-by: Bob Liu bob.liu@oracle.com Signed-off-by: Konstantin Khlebnikov khlebnikov@yandex-team.ru Signed-off-by: Song Liu songliubraving@fb.com Signed-off-by: Sasha Levin sashal@kernel.org --- block/blk-settings.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/block/blk-settings.c b/block/blk-settings.c index c8eda2e7b91e4..be1dca0103a45 100644 --- a/block/blk-settings.c +++ b/block/blk-settings.c @@ -664,6 +664,9 @@ void disk_stack_limits(struct gendisk *disk, struct block_device *bdev, printk(KERN_NOTICE "%s: Warning: Device %s is misaligned\n", top, bottom); } + + t->backing_dev_info->io_pages = + t->limits.max_sectors >> (PAGE_SHIFT - 9); } EXPORT_SYMBOL(disk_stack_limits);
From: Taehee Yoo ap420073@gmail.com
[ Upstream commit 275678e7a9be6a0ea9c1bb493e48abf2f4a01be5 ]
When the module is being removed, the module state is set to MODULE_STATE_GOING and try_module_get() fails. When {full/open}_proxy_open() is called, it uses try_module_get() to try to hold a reference on the owning module. If that fails, it warns about a possible debugfs file leak.

If {full/open}_proxy_open() is called while the module is being removed, it fails to hold the module and warns about a debugfs file leak, even though no file has actually leaked. So, add a module state check to {full/open}_proxy_open() to tell the two cases apart.
Test commands:

#SHELL1
while :
do
	modprobe netdevsim
	echo 1 > /sys/bus/netdevsim/new_device
	modprobe -rv netdevsim
done

#SHELL2
while :
do
	cat /sys/kernel/debug/netdevsim/netdevsim1/ports/0/ipsec
done
Splat looks like: [ 298.766738][T14664] debugfs file owner did not clean up at exit: ipsec [ 298.766766][T14664] WARNING: CPU: 2 PID: 14664 at fs/debugfs/file.c:312 full_proxy_open+0x10f/0x650 [ 298.768595][T14664] Modules linked in: netdevsim(-) openvswitch nsh nf_conncount nf_nat nf_conntrack nf_defrag_ipv6 n][ 298.771343][T14664] CPU: 2 PID: 14664 Comm: cat Tainted: G W 5.5.0+ #1 [ 298.772373][T14664] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006 [ 298.773545][T14664] RIP: 0010:full_proxy_open+0x10f/0x650 [ 298.774247][T14664] Code: 48 c1 ea 03 80 3c 02 00 0f 85 c1 04 00 00 49 8b 3c 24 e8 e4 b5 78 ff 84 c0 75 2d 4c 89 ee 48 [ 298.776782][T14664] RSP: 0018:ffff88805b7df9b8 EFLAGS: 00010282[ 298.777583][T14664] RAX: dffffc0000000008 RBX: ffff8880511725c0 RCX: 0000000000000000 [ 298.778610][T14664] RDX: 0000000000000000 RSI: 0000000000000006 RDI: ffff8880540c5c14 [ 298.779637][T14664] RBP: 0000000000000000 R08: fffffbfff15235ad R09: 0000000000000000 [ 298.780664][T14664] R10: 0000000000000001 R11: 0000000000000000 R12: ffffffffc06b5000 [ 298.781702][T14664] R13: ffff88804c234a88 R14: ffff88804c22dd00 R15: ffffffff8a1b5660 [ 298.782722][T14664] FS: 00007fafa13a8540(0000) GS:ffff88806c800000(0000) knlGS:0000000000000000 [ 298.783845][T14664] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 298.784672][T14664] CR2: 00007fafa0e9cd10 CR3: 000000004b286005 CR4: 00000000000606e0 [ 298.785739][T14664] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 298.786769][T14664] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 298.787785][T14664] Call Trace: [ 298.788237][T14664] do_dentry_open+0x63c/0xf50 [ 298.788872][T14664] ? open_proxy_open+0x270/0x270 [ 298.789524][T14664] ? __x64_sys_fchdir+0x180/0x180 [ 298.790169][T14664] ? inode_permission+0x65/0x390 [ 298.790832][T14664] path_openat+0xc45/0x2680 [ 298.791425][T14664] ? save_stack+0x69/0x80 [ 298.791988][T14664] ? save_stack+0x19/0x80 [ 298.792544][T14664] ? path_mountpoint+0x2e0/0x2e0 [ 298.793233][T14664] ? check_chain_key+0x236/0x5d0 [ 298.793910][T14664] ? sched_clock_cpu+0x18/0x170 [ 298.794527][T14664] ? find_held_lock+0x39/0x1d0 [ 298.795153][T14664] do_filp_open+0x16a/0x260 [ ... ]
Fixes: 9fd4dcece43a ("debugfs: prevent access to possibly dead file_operations at file open") Reported-by: kbuild test robot lkp@intel.com Signed-off-by: Taehee Yoo ap420073@gmail.com Link: https://lore.kernel.org/r/20200218043150.29447-1-ap420073@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/debugfs/file.c | 18 ++++++++++++++---- 1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/fs/debugfs/file.c b/fs/debugfs/file.c index 18eeeb093a688..331d4071a8560 100644 --- a/fs/debugfs/file.c +++ b/fs/debugfs/file.c @@ -175,8 +175,13 @@ static int open_proxy_open(struct inode *inode, struct file *filp) if (r) goto out;
- real_fops = fops_get(real_fops); - if (!real_fops) { + if (!fops_get(real_fops)) { +#ifdef MODULE + if (real_fops->owner && + real_fops->owner->state == MODULE_STATE_GOING) + goto out; +#endif + /* Huh? Module did not clean up after itself at exit? */ WARN(1, "debugfs file owner did not clean up at exit: %pd", dentry); @@ -305,8 +310,13 @@ static int full_proxy_open(struct inode *inode, struct file *filp) if (r) goto out;
- real_fops = fops_get(real_fops); - if (!real_fops) { + if (!fops_get(real_fops)) { +#ifdef MODULE + if (real_fops->owner && + real_fops->owner->state == MODULE_STATE_GOING) + goto out; +#endif + /* Huh? Module did not cleanup after itself at exit? */ WARN(1, "debugfs file owner did not clean up at exit: %pd", dentry);
From: Vladimir Oltean vladimir.oltean@nxp.com
[ Upstream commit 3d6224e63be39ff26cf416492cb3923cd3d07dd0 ]
The driver does not create the dspi->dma structure unless operating in DSPI_DMA_MODE, so it makes sense to check for that.
Fixes: f4b323905d8b ("spi: Introduce dspi_slave_abort() function for NXP's dspi SPI driver") Signed-off-by: Vladimir Oltean vladimir.oltean@nxp.com Tested-by: Michael Walle michael@walle.cc Link: https://lore.kernel.org/r/20200318001603.9650-8-olteanv@gmail.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/spi/spi-fsl-dspi.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c index 8428b69c858bc..a534b8af27b8d 100644 --- a/drivers/spi/spi-fsl-dspi.c +++ b/drivers/spi/spi-fsl-dspi.c @@ -1021,8 +1021,10 @@ static int dspi_slave_abort(struct spi_master *master) * Terminate all pending DMA transactions for the SPI working * in SLAVE mode. */ - dmaengine_terminate_sync(dspi->dma->chan_rx); - dmaengine_terminate_sync(dspi->dma->chan_tx); + if (dspi->devtype_data->trans_mode == DSPI_DMA_MODE) { + dmaengine_terminate_sync(dspi->dma->chan_rx); + dmaengine_terminate_sync(dspi->dma->chan_tx); + }
/* Clear the internal DSPI RX and TX FIFO buffers */ regmap_update_bits(dspi->regmap, SPI_MCR,
From: Sungbo Eo mans0n@gorani.run
[ Upstream commit 486562da598c59e9f835b551d7cf19507de2d681 ]
Enclose the chained handler with chained_irq_{enter,exit}(), so that the muxed interrupts get properly acked.
This patch also fixes a reboot bug on the OX820 SoC, where the jiffies timer interrupt is never acked. The kernel waits forever for a clock tick in calibrate_delay_converge(), which leads to a boot hang.
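A minimal chained-handler skeleton showing where the enter/exit calls belong (generic sketch, not the FPGA driver itself):

	static void demo_mux_irq_handle(struct irq_desc *desc)
	{
		struct irq_chip *chip = irq_desc_get_chip(desc);

		chained_irq_enter(chip, desc);	/* ack/mask the parent interrupt */

		/* read the mux status register and call generic_handle_irq()
		 * for every pending child interrupt */

		chained_irq_exit(chip, desc);	/* eoi/unmask the parent interrupt */
	}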
Fixes: c41b16f8c9d9 ("ARM: integrator/versatile: consolidate FPGA IRQ handling code") Signed-off-by: Sungbo Eo mans0n@gorani.run Signed-off-by: Marc Zyngier maz@kernel.org Link: https://lore.kernel.org/r/20200319023448.1479701-1-mans0n@gorani.run Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/irqchip/irq-versatile-fpga.c | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/drivers/irqchip/irq-versatile-fpga.c b/drivers/irqchip/irq-versatile-fpga.c index 928858dada756..70e2cfff8175f 100644 --- a/drivers/irqchip/irq-versatile-fpga.c +++ b/drivers/irqchip/irq-versatile-fpga.c @@ -6,6 +6,7 @@ #include <linux/irq.h> #include <linux/io.h> #include <linux/irqchip.h> +#include <linux/irqchip/chained_irq.h> #include <linux/irqchip/versatile-fpga.h> #include <linux/irqdomain.h> #include <linux/module.h> @@ -68,12 +69,16 @@ static void fpga_irq_unmask(struct irq_data *d)
static void fpga_irq_handle(struct irq_desc *desc) { + struct irq_chip *chip = irq_desc_get_chip(desc); struct fpga_irq_data *f = irq_desc_get_handler_data(desc); - u32 status = readl(f->base + IRQ_STATUS); + u32 status; + + chained_irq_enter(chip, desc);
+ status = readl(f->base + IRQ_STATUS); if (status == 0) { do_bad_IRQ(desc); - return; + goto out; }
do { @@ -82,6 +87,9 @@ static void fpga_irq_handle(struct irq_desc *desc) status &= ~(1 << irq); generic_handle_irq(irq_find_mapping(f->domain, irq)); } while (status); + +out: + chained_irq_exit(chip, desc); }
/*
From: Ahmed S. Darwish a.darwish@linutronix.de
[ Upstream commit 2c8bd58812ee3dbf0d78b566822f7eacd34bdd7b ]
To minimize latency, PREEMPT_RT kernels expire hrtimers in preemptible softirq context by default. This can be overridden by marking the timer's expiry with HRTIMER_MODE_HARD.
sched_clock_timer is missing this annotation: if its callback is preempted and the duration of the preemption exceeds the wrap around time of the underlying clocksource, sched clock will get out of sync.
Mark the sched_clock_timer for expiry in hard interrupt context.
Signed-off-by: Ahmed S. Darwish a.darwish@linutronix.de Signed-off-by: Thomas Gleixner tglx@linutronix.de Link: https://lkml.kernel.org/r/20200309181529.26558-1-a.darwish@linutronix.de Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/time/sched_clock.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c index dbd69052eaa66..a5538dd76a819 100644 --- a/kernel/time/sched_clock.c +++ b/kernel/time/sched_clock.c @@ -207,7 +207,8 @@ sched_clock_register(u64 (*read)(void), int bits, unsigned long rate)
if (sched_clock_timer.function != NULL) { /* update timeout for clock wrap */ - hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL); + hrtimer_start(&sched_clock_timer, cd.wrap_kt, + HRTIMER_MODE_REL_HARD); }
r = rate; @@ -251,9 +252,9 @@ void __init generic_sched_clock_init(void) * Start the timer to keep sched_clock() properly updated and * sets the initial epoch. */ - hrtimer_init(&sched_clock_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); + hrtimer_init(&sched_clock_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD); sched_clock_timer.function = sched_clock_poll; - hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL); + hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL_HARD); }
/* @@ -290,7 +291,7 @@ void sched_clock_resume(void) struct clock_read_data *rd = &cd.read_data[0];
rd->epoch_cyc = cd.actual_read_sched_clock(); - hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL); + hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL_HARD); rd->read_sched_clock = cd.actual_read_sched_clock; }
From: Michael Tretter m.tretter@pengutronix.de
[ Upstream commit 8277815349327b8e65226eb58ddb680f90c2c0c0 ]
The gop_length field is actually only u16 and there are two more u8 fields in the message:
- the number of consecutive b-frames - frequency of golden frames
Fix the message and thus fix the configuration of the GOP length.
Signed-off-by: Michael Tretter m.tretter@pengutronix.de Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/staging/media/allegro-dvt/allegro-core.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/media/allegro-dvt/allegro-core.c b/drivers/staging/media/allegro-dvt/allegro-core.c index 6f0cd07847863..c5a262a12e401 100644 --- a/drivers/staging/media/allegro-dvt/allegro-core.c +++ b/drivers/staging/media/allegro-dvt/allegro-core.c @@ -393,7 +393,10 @@ struct mcu_msg_create_channel { u32 freq_ird; u32 freq_lt; u32 gdr_mode; - u32 gop_length; + u16 gop_length; + u8 num_b; + u8 freq_golden_ref; + u32 unknown39;
u32 subframe_latency;
From: Michael Wang yun.wang@linux.alibaba.com
[ Upstream commit 26cf52229efc87e2effa9d788f9b33c40fb3358a ]
During our testing, we found a case where shares no longer work correctly; the cgroup topology is like:
/sys/fs/cgroup/cpu/A		(shares=102400)
/sys/fs/cgroup/cpu/A/B		(shares=2)
/sys/fs/cgroup/cpu/A/B/C	(shares=1024)

/sys/fs/cgroup/cpu/D		(shares=1024)
/sys/fs/cgroup/cpu/D/E		(shares=1024)
/sys/fs/cgroup/cpu/D/E/F	(shares=1024)
The same benchmark is running in group C & F, no other tasks are running, the benchmark is capable to consumed all the CPUs.
We suppose the group C will win more CPU resources since it could enjoy all the shares of group A, but it's F who wins much more.
The reason is because we have group B with shares as 2, since A->cfs_rq.load.weight == B->se.load.weight == B->shares/nr_cpus, so A->cfs_rq.load.weight become very small.
And in calc_group_shares() we calculate shares as:
load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
shares = (tg_shares * load) / tg_weight;
Since 'cfs_rq->load.weight' is too small, the load becomes 0 after scaling down; although 'tg_shares' is 102400, the shares of the se which stands for group A on the root cfs_rq become 2.

The se of D on the root cfs_rq is far bigger than 2, so it wins the battle.

Thus when scale_load_down() scales the real weight down to 0, it no longer tells the real story: the caller gets the wrong information and the calculation becomes buggy.

This patch adds a check in scale_load_down() so that the real weight is >= MIN_SHARES after scaling; with it applied, group C wins as expected.
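A rough worked example with the numbers above (SCHED_FIXEDPOINT_SHIFT is 10 on 64-bit; intermediate values simplified):

	/* before the patch */
	weight = 2;				/* A->cfs_rq.load.weight, from B's tiny shares */
	load   = scale_load_down(weight);	/* 2 >> 10 == 0                                */
	shares = 102400 * load / tg_weight;	/* == 0, clamped up to MIN_SHARES (2)          */

	/* after the patch */
	load   = scale_load_down(weight);	/* max(2UL, 2 >> 10) == 2, numerator survives  */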
Suggested-by: Peter Zijlstra peterz@infradead.org Signed-off-by: Michael Wang yun.wang@linux.alibaba.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Reviewed-by: Vincent Guittot vincent.guittot@linaro.org Link: https://lkml.kernel.org/r/38e8e212-59a1-64b2-b247-b6d0b52d8dc1@linux.alibaba... Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/sched/sched.h | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 280a3c7359355..0502ea8e0e62a 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -118,7 +118,13 @@ extern long calc_load_fold_active(struct rq *this_rq, long adjust); #ifdef CONFIG_64BIT # define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT) # define scale_load(w) ((w) << SCHED_FIXEDPOINT_SHIFT) -# define scale_load_down(w) ((w) >> SCHED_FIXEDPOINT_SHIFT) +# define scale_load_down(w) \ +({ \ + unsigned long __w = (w); \ + if (__w) \ + __w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \ + __w; \ +}) #else # define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT) # define scale_load(w) (w)
From: Tao Zhou ouwen210@hotmail.com
[ Upstream commit 6c8116c914b65be5e4d6f66d69c8142eb0648c22 ]
In update_sg_wakeup_stats(), the comment says:
Computing avg_load makes sense only when group is fully busy or overloaded.
But the code below this comment does not check it like this.

From reading the code that uses avg_load in other functions, I confirm that avg_load should only be calculated in the fully busy or overloaded case. The comment is correct and the checking condition is wrong, so change that condition.
Fixes: 57abff067a08 ("sched/fair: Rework find_idlest_group()") Signed-off-by: Tao Zhou ouwen210@hotmail.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Reviewed-by: Vincent Guittot vincent.guittot@linaro.org Acked-by: Mel Gorman mgorman@suse.de Link: https://lkml.kernel.org/r/Message-ID: Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/sched/fair.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 0ff2f43ac9cd7..1f5ea23c752be 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -8323,7 +8323,8 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd, * Computing avg_load makes sense only when group is fully busy or * overloaded */ - if (sgs->group_type < group_fully_busy) + if (sgs->group_type == group_fully_busy || + sgs->group_type == group_overloaded) sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) / sgs->group_capacity; }
From: Andy Lutomirski luto@kernel.org
[ Upstream commit 630b99ab60aa972052a4202a1ff96c7e45eb0054 ]
If AT_SYSINFO is not present, don't try to call a NULL pointer.
Reported-by: kbuild test robot lkp@intel.com Signed-off-by: Andy Lutomirski luto@kernel.org Signed-off-by: Borislav Petkov bp@suse.de Link: https://lkml.kernel.org/r/faaf688265a7e1a5b944d6f8bc0f6368158306d3.158405240... Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/selftests/x86/ptrace_syscall.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/x86/ptrace_syscall.c b/tools/testing/selftests/x86/ptrace_syscall.c index 6f22238f32173..12aaa063196e7 100644 --- a/tools/testing/selftests/x86/ptrace_syscall.c +++ b/tools/testing/selftests/x86/ptrace_syscall.c @@ -414,8 +414,12 @@ int main()
#if defined(__i386__) && (!defined(__GLIBC__) || __GLIBC__ > 2 || __GLIBC_MINOR__ >= 16) vsyscall32 = (void *)getauxval(AT_SYSINFO); - printf("[RUN]\tCheck AT_SYSINFO return regs\n"); - test_sys32_regs(do_full_vsyscall32); + if (vsyscall32) { + printf("[RUN]\tCheck AT_SYSINFO return regs\n"); + test_sys32_regs(do_full_vsyscall32); + } else { + printf("[SKIP]\tAT_SYSINFO is not available\n"); + } #endif
test_ptrace_syscall_restart();
From: Logan Gunthorpe logang@deltatee.com
[ Upstream commit efbdc769601f4d50018bf7ca50fc9f7c67392ece ]
The call to init_completion() in mrpc_queue_cmd() can theoretically race with the call to poll_wait() in switchtec_dev_poll().
	poll()                              write()
	  switchtec_dev_poll()                switchtec_dev_write()
	    poll_wait(&s->comp.wait);           mrpc_queue_cmd()
	                                          init_completion(&s->comp)
	                                            init_waitqueue_head(&s->comp.wait)
To my knowledge, no one has hit this bug.
Fix this by using reinit_completion() instead of init_completion() in mrpc_queue_cmd().
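The distinction being relied on (generic sketch): init_completion() re-initializes the embedded wait queue head, so a waiter added by a concurrent poll_wait() can be wiped out, while reinit_completion() only resets the done counter and leaves the wait queue intact:

	/* illustrative only */
	init_completion(&stuser->comp);		/* done = 0 AND init the wait queue head */
	reinit_completion(&stuser->comp);	/* done = 0, wait queue left untouched   */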
Fixes: 080b47def5e5 ("MicroSemi Switchtec management interface driver")
Reported-by: Sebastian Andrzej Siewior bigeasy@linutronix.de Signed-off-by: Logan Gunthorpe logang@deltatee.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Acked-by: Bjorn Helgaas bhelgaas@google.com Link: https://lkml.kernel.org/r/20200313183608.2646-1-logang@deltatee.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pci/switch/switchtec.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c index 9c3ad09d30222..fb4602d44eb10 100644 --- a/drivers/pci/switch/switchtec.c +++ b/drivers/pci/switch/switchtec.c @@ -175,7 +175,7 @@ static int mrpc_queue_cmd(struct switchtec_user *stuser) kref_get(&stuser->kref); stuser->read_len = sizeof(stuser->data); stuser_set_state(stuser, MRPC_QUEUED); - init_completion(&stuser->comp); + reinit_completion(&stuser->comp); list_add_tail(&stuser->list, &stdev->mrpc_queue);
mrpc_cmd_submit(stdev);
From: Paolo Valente paolo.valente@linaro.org
[ Upstream commit fd1bb3ae54a9a2e0c42709de861c69aa146b8955 ]
Commit ecedd3d7e199 ("block, bfq: get extra ref to prevent a queue from being freed during a group move") gets an extra reference to a bfq_queue before possibly deactivating it (temporarily), in bfq_bfqq_move(). This prevents the bfq_queue from disappearing before being reactivated in its new group.
Yet, the bfq_queue may also be expired (i.e., its service may be stopped) before the bfq_queue is deactivated. And also an expiration may lead to a premature freeing. This commit fixes this issue by simply moving forward the getting of the extra reference already introduced by commit ecedd3d7e199 ("block, bfq: get extra ref to prevent a queue from being freed during a group move").
Reported-by: cki-project@redhat.com Tested-by: cki-project@redhat.com Signed-off-by: Paolo Valente paolo.valente@linaro.org Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- block/bfq-cgroup.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c index 5a64607ce7744..a6d42339fb342 100644 --- a/block/bfq-cgroup.c +++ b/block/bfq-cgroup.c @@ -641,6 +641,12 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq, { struct bfq_entity *entity = &bfqq->entity;
+ /* + * Get extra reference to prevent bfqq from being freed in + * next possible expire or deactivate. + */ + bfqq->ref++; + /* If bfqq is empty, then bfq_bfqq_expire also invokes * bfq_del_bfqq_busy, thereby removing bfqq and its entity * from data structures related to current group. Otherwise we @@ -651,12 +657,6 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq, bfq_bfqq_expire(bfqd, bfqd->in_service_queue, false, BFQQE_PREEMPTED);
- /* - * get extra reference to prevent bfqq from being freed in - * next possible deactivate - */ - bfqq->ref++; - if (bfq_bfqq_busy(bfqq)) bfq_deactivate_bfqq(bfqd, bfqq, false, false); else if (entity->on_st) @@ -676,7 +676,7 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
if (!bfqd->in_service_queue && !bfqd->rq_in_driver) bfq_schedule_dispatch(bfqd); - /* release extra ref taken above */ + /* release extra ref taken above, bfqq may happen to be freed now */ bfq_put_queue(bfqq); }
From: Matt Ranostay matt.ranostay@konsulko.com
[ Upstream commit 64d4fc9926f09861a35d8f0f7d81f056e6d5af7b ]
Fix a build failure when CONFIG_HWMON is a module and CONFIG_VIDEO_I2C is built-in. This is due to 'imply hwmon' in the respective Kconfig.
Issue build log:
ld: drivers/media/i2c/video-i2c.o: in function `amg88xx_hwmon_init': video-i2c.c:(.text+0x2e1): undefined reference to `devm_hwmon_device_register_with_info
Cc: rdunlap@infradead.org Fixes: acbea6798955 (media: video-i2c: add hwmon support for amg88xx) Signed-off-by: Matt Ranostay matt.ranostay@konsulko.com Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/i2c/video-i2c.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/media/i2c/video-i2c.c b/drivers/media/i2c/video-i2c.c index 078141712c887..0b977e73ceb29 100644 --- a/drivers/media/i2c/video-i2c.c +++ b/drivers/media/i2c/video-i2c.c @@ -255,7 +255,7 @@ static int amg88xx_set_power(struct video_i2c_data *data, bool on) return amg88xx_set_power_off(data); }
-#if IS_ENABLED(CONFIG_HWMON) +#if IS_REACHABLE(CONFIG_HWMON)
static const u32 amg88xx_temp_config[] = { HWMON_T_INPUT,
From: John Garry john.garry@huawei.com
[ Upstream commit 1d72f7aec3595249dbb83291ccac041a2d676c57 ]
If the call to scsi_add_host_with_dma() in ata_scsi_add_hosts() fails, then we may get use-after-free KASAN warns:
================================================================== BUG: KASAN: use-after-free in kobject_put+0x24/0x180 Read of size 1 at addr ffff0026b8c80364 by task swapper/0/1 CPU: 1 PID: 1 Comm: swapper/0 Tainted: G W 5.6.0-rc3-00004-g5a71b206ea82-dirty #1765 Hardware name: Huawei TaiShan 200 (Model 2280)/BC82AMDD, BIOS 2280-V2 CS V3.B160.01 02/24/2020 Call trace: dump_backtrace+0x0/0x298 show_stack+0x14/0x20 dump_stack+0x118/0x190 print_address_description.isra.9+0x6c/0x3b8 __kasan_report+0x134/0x23c kasan_report+0xc/0x18 __asan_load1+0x5c/0x68 kobject_put+0x24/0x180 put_device+0x10/0x20 scsi_host_put+0x10/0x18 ata_devres_release+0x74/0xb0 release_nodes+0x2d0/0x470 devres_release_all+0x50/0x78 really_probe+0x2d4/0x560 driver_probe_device+0x7c/0x148 device_driver_attach+0x94/0xa0 __driver_attach+0xa8/0x110 bus_for_each_dev+0xe8/0x158 driver_attach+0x30/0x40 bus_add_driver+0x220/0x2e0 driver_register+0xbc/0x1d0 __pci_register_driver+0xbc/0xd0 ahci_pci_driver_init+0x20/0x28 do_one_initcall+0xf0/0x608 kernel_init_freeable+0x31c/0x384 kernel_init+0x10/0x118 ret_from_fork+0x10/0x18
Allocated by task 5: save_stack+0x28/0xc8 __kasan_kmalloc.isra.8+0xbc/0xd8 kasan_kmalloc+0xc/0x18 __kmalloc+0x1a8/0x280 scsi_host_alloc+0x44/0x678 ata_scsi_add_hosts+0x74/0x268 ata_host_register+0x228/0x488 ahci_host_activate+0x1c4/0x2a8 ahci_init_one+0xd18/0x1298 local_pci_probe+0x74/0xf0 work_for_cpu_fn+0x2c/0x48 process_one_work+0x488/0xc08 worker_thread+0x330/0x5d0 kthread+0x1c8/0x1d0 ret_from_fork+0x10/0x18
Freed by task 5: save_stack+0x28/0xc8 __kasan_slab_free+0x118/0x180 kasan_slab_free+0x10/0x18 slab_free_freelist_hook+0xa4/0x1a0 kfree+0xd4/0x3a0 scsi_host_dev_release+0x100/0x148 device_release+0x7c/0xe0 kobject_put+0xb0/0x180 put_device+0x10/0x20 scsi_host_put+0x10/0x18 ata_scsi_add_hosts+0x210/0x268 ata_host_register+0x228/0x488 ahci_host_activate+0x1c4/0x2a8 ahci_init_one+0xd18/0x1298 local_pci_probe+0x74/0xf0 work_for_cpu_fn+0x2c/0x48 process_one_work+0x488/0xc08 worker_thread+0x330/0x5d0 kthread+0x1c8/0x1d0 ret_from_fork+0x10/0x18
There is also refcount issue, as well: WARNING: CPU: 1 PID: 1 at lib/refcount.c:28 refcount_warn_saturate+0xf8/0x170
The issue is that we make an erroneous extra call to scsi_host_put() for that host:
So in ahci_init_one()->ata_host_alloc_pinfo()->ata_host_alloc(), we set up a device release method - ata_devres_release() - which is intended to release the SCSI hosts:
static void ata_devres_release(struct device *gendev, void *res)
{
	...
	for (i = 0; i < host->n_ports; i++) {
		struct ata_port *ap = host->ports[i];

		if (!ap)
			continue;

		if (ap->scsi_host)
			scsi_host_put(ap->scsi_host);

	}
	...
}
However in the ata_scsi_add_hosts() error path, we also call scsi_host_put() for the SCSI hosts.
Fix by removing the scsi_host_put() calls in ata_scsi_add_hosts() and leaving this to ata_devres_release().
Fixes: f31871951b38 ("libata: separate out ata_host_alloc() and ata_host_register()") Signed-off-by: John Garry john.garry@huawei.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/ata/libata-scsi.c | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c index 58e09ffe8b9cb..5af34a3201ed2 100644 --- a/drivers/ata/libata-scsi.c +++ b/drivers/ata/libata-scsi.c @@ -4553,22 +4553,19 @@ int ata_scsi_add_hosts(struct ata_host *host, struct scsi_host_template *sht) */ shost->max_host_blocked = 1;
- rc = scsi_add_host_with_dma(ap->scsi_host, - &ap->tdev, ap->host->dev); + rc = scsi_add_host_with_dma(shost, &ap->tdev, ap->host->dev); if (rc) - goto err_add; + goto err_alloc; }
return 0;
- err_add: - scsi_host_put(host->ports[i]->scsi_host); err_alloc: while (--i >= 0) { struct Scsi_Host *shost = host->ports[i]->scsi_host;
+ /* scsi_host_put() is in ata_devres_release() */ scsi_remove_host(shost); - scsi_host_put(shost); } return rc; }
From: chenqiwu chenqiwu@xiaomi.com
[ Upstream commit 8a57d6d4ddfa41c49014e20493152c41a38fcbf8 ]
There is a potential memory leak when pstore_init_fs() fails: the buffer allocated earlier for pstore compression is never freed. Clean up the allocation when this unlikely failure happens.
Signed-off-by: chenqiwu chenqiwu@xiaomi.com Link: https://lore.kernel.org/r/1581068800-13817-1-git-send-email-qiwuchen55@gmail... Signed-off-by: Kees Cook keescook@chromium.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/pstore/platform.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/pstore/platform.c b/fs/pstore/platform.c index d896457e7c117..408277ee3cdb9 100644 --- a/fs/pstore/platform.c +++ b/fs/pstore/platform.c @@ -823,9 +823,9 @@ static int __init pstore_init(void)
ret = pstore_init_fs(); if (ret) - return ret; + free_buf_for_compression();
- return 0; + return ret; } late_initcall(pstore_init);
From: Bob Peterson rpeterso@redhat.com
[ Upstream commit 9ff78289356af640941bbb0dd3f46af2063f0046 ]
Before this patch, if gfs2_ail_empty_gl saw there was nothing on the ail list, it would return and not flush the log. The problem is that there could still be a revoke for the rgrp sitting on the sd_log_le_revoke list that's been recently taken off the ail list. But that revoke still needs to be written, and the rgrp_go_inval still needs to call log_flush_wait to ensure the revokes are all properly written to the journal before we relinquish control of the glock to another node. If we give the glock to another node before we have this knowledge, the node might crash and its journal replayed, in which case the missing revoke would allow the journal replay to replay the rgrp over top of the rgrp we already gave to another node, thus overwriting its changes and corrupting the file system.
This patch makes gfs2_ail_empty_gl still call gfs2_log_flush rather than returning.
Signed-off-by: Bob Peterson rpeterso@redhat.com Reviewed-by: Andreas Gruenbacher agruenba@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/gfs2/glops.c | 27 ++++++++++++++++++++++++++- fs/gfs2/log.c | 2 +- fs/gfs2/log.h | 1 + 3 files changed, 28 insertions(+), 2 deletions(-)
diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c index 4ede1f18de85e..41542ef428f1a 100644 --- a/fs/gfs2/glops.c +++ b/fs/gfs2/glops.c @@ -89,8 +89,32 @@ static void gfs2_ail_empty_gl(struct gfs2_glock *gl) INIT_LIST_HEAD(&tr.tr_databuf); tr.tr_revokes = atomic_read(&gl->gl_ail_count);
- if (!tr.tr_revokes) + if (!tr.tr_revokes) { + bool have_revokes; + bool log_in_flight; + + /* + * We have nothing on the ail, but there could be revokes on + * the sdp revoke queue, in which case, we still want to flush + * the log and wait for it to finish. + * + * If the sdp revoke list is empty too, we might still have an + * io outstanding for writing revokes, so we should wait for + * it before returning. + * + * If none of these conditions are true, our revokes are all + * flushed and we can return. + */ + gfs2_log_lock(sdp); + have_revokes = !list_empty(&sdp->sd_log_revokes); + log_in_flight = atomic_read(&sdp->sd_log_in_flight); + gfs2_log_unlock(sdp); + if (have_revokes) + goto flush; + if (log_in_flight) + log_flush_wait(sdp); return; + }
/* A shortened, inline version of gfs2_trans_begin() * tr->alloced is not set since the transaction structure is @@ -105,6 +129,7 @@ static void gfs2_ail_empty_gl(struct gfs2_glock *gl) __gfs2_ail_flush(gl, 0, tr.tr_revokes);
gfs2_trans_end(sdp); +flush: gfs2_log_flush(sdp, NULL, GFS2_LOG_HEAD_FLUSH_NORMAL | GFS2_LFC_AIL_EMPTY_GL); } diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c index eb3f2e7b80856..99b33c6f84404 100644 --- a/fs/gfs2/log.c +++ b/fs/gfs2/log.c @@ -516,7 +516,7 @@ static void log_pull_tail(struct gfs2_sbd *sdp, unsigned int new_tail) }
-static void log_flush_wait(struct gfs2_sbd *sdp) +void log_flush_wait(struct gfs2_sbd *sdp) { DEFINE_WAIT(wait);
diff --git a/fs/gfs2/log.h b/fs/gfs2/log.h index 2ff163a8dce1f..76cb79f225996 100644 --- a/fs/gfs2/log.h +++ b/fs/gfs2/log.h @@ -73,6 +73,7 @@ extern void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl, u32 type); extern void gfs2_log_commit(struct gfs2_sbd *sdp, struct gfs2_trans *trans); extern void gfs2_ail1_flush(struct gfs2_sbd *sdp, struct writeback_control *wbc); +extern void log_flush_wait(struct gfs2_sbd *sdp);
extern int gfs2_logd(void *data); extern void gfs2_add_revoke(struct gfs2_sbd *sdp, struct gfs2_bufdata *bd);
From: Bob Peterson rpeterso@redhat.com
[ Upstream commit df5db5f9ee112e76b5202fbc331f990a0fc316d6 ]
Before this patch, run_queue would demote glocks based on whether there are any more holders. But if the glock has pending revokes that haven't been written to the media, giving up the glock might end in file system corruption if the revokes never get written due to io errors, node crashes and fences, etc. In that case, another node will replay the metadata blocks associated with the glock, but because the revoke was never written, it could replay that block even though the glock had since been granted to another node who might have made changes.
This patch changes the logic in run_queue so that it never demotes a glock until its count of pending revokes reaches zero.
Signed-off-by: Bob Peterson rpeterso@redhat.com Reviewed-by: Andreas Gruenbacher agruenba@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/gfs2/glock.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c index b7123de7c180e..a5e145d4e9916 100644 --- a/fs/gfs2/glock.c +++ b/fs/gfs2/glock.c @@ -645,6 +645,9 @@ __acquires(&gl->gl_lockref.lock) goto out_unlock; if (nonblock) goto out_sched; + smp_mb(); + if (atomic_read(&gl->gl_revokes) != 0) + goto out_sched; set_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags); GLOCK_BUG_ON(gl, gl->gl_demote_state == LM_ST_EXCLUSIVE); gl->gl_target = gl->gl_demote_state;
From: Peng Fan peng.fan@nxp.com
[ Upstream commit 3646f50a3838c5949a89ecbdb868497cdc05b8fd ]
When speed checking fails, directly jumping to the put_node label is not correct. We need to jump to out_free_opp instead to avoid a resource leak.
Fixes: 2733fb0d0699 ("cpufreq: imx6q: read OCOTP through nvmem for imx6ul/imx6ull") Signed-off-by: Peng Fan peng.fan@nxp.com Signed-off-by: Viresh Kumar viresh.kumar@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/cpufreq/imx6q-cpufreq.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/cpufreq/imx6q-cpufreq.c b/drivers/cpufreq/imx6q-cpufreq.c index 1fcbbd53a48a2..edef3399c9794 100644 --- a/drivers/cpufreq/imx6q-cpufreq.c +++ b/drivers/cpufreq/imx6q-cpufreq.c @@ -381,23 +381,24 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev) goto put_reg; }
+ /* Because we have added the OPPs here, we must free them */ + free_opp = true; + if (of_machine_is_compatible("fsl,imx6ul") || of_machine_is_compatible("fsl,imx6ull")) { ret = imx6ul_opp_check_speed_grading(cpu_dev); if (ret) { if (ret == -EPROBE_DEFER) - goto put_node; + goto out_free_opp;
dev_err(cpu_dev, "failed to read ocotp: %d\n", ret); - goto put_node; + goto out_free_opp; } } else { imx6q_opp_check_speed_grading(cpu_dev); }
- /* Because we have added the OPPs here, we must free them */ - free_opp = true; num = dev_pm_opp_get_opp_count(cpu_dev); if (num < 0) { ret = num;
From: Arvind Sankar nivedita@alum.mit.edu
[ Upstream commit 81a34892c2c7c809f9c4e22c5ac936ae673fb9a2 ]
The load address is currently compared with LOAD_PHYSICAL_ADDR using a signed comparison (the jge instruction).
When loading a 64-bit kernel using the new efi32_pe_entry() point added by:
97aa276579b2 ("efi/x86: Add true mixed mode entry point into .compat section")
using Qemu with -m 3072, the firmware actually loads us above 2Gb, resulting in a very early crash.
Use the JAE instruction to perform an unsigned comparison instead, as physical addresses should be considered unsigned.
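For illustration, a small userspace sketch of the pitfall (the addresses below are made-up values, not the actual firmware layout): an address above 2 GiB is negative when treated as signed, so a signed "is it >= LOAD_PHYSICAL_ADDR" test wrongly fails.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint32_t load_addr = 0xc0000000u;       /* e.g. loaded above 2 GiB */
        uint32_t min_addr  = 0x01000000u;       /* LOAD_PHYSICAL_ADDR-like value */

        /* signed compare, what jge implements: 0xc0000000 is negative */
        printf("signed:   %d\n", (int32_t)load_addr >= (int32_t)min_addr);
        /* unsigned compare, what jae implements: correct for addresses */
        printf("unsigned: %d\n", load_addr >= min_addr);
        return 0;
}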
Signed-off-by: Arvind Sankar nivedita@alum.mit.edu Signed-off-by: Ard Biesheuvel ardb@kernel.org Signed-off-by: Ingo Molnar mingo@kernel.org Link: https://lore.kernel.org/r/20200301230436.2246909-6-nivedita@alum.mit.edu Link: https://lore.kernel.org/r/20200308080859.21568-14-ardb@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/boot/compressed/head_32.S | 2 +- arch/x86/boot/compressed/head_64.S | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S index f2dfd6d083ef2..777cf7d659cea 100644 --- a/arch/x86/boot/compressed/head_32.S +++ b/arch/x86/boot/compressed/head_32.S @@ -106,7 +106,7 @@ SYM_FUNC_START(startup_32) notl %eax andl %eax, %ebx cmpl $LOAD_PHYSICAL_ADDR, %ebx - jge 1f + jae 1f #endif movl $LOAD_PHYSICAL_ADDR, %ebx 1: diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S index ee60b81944a77..f5ee513f0195c 100644 --- a/arch/x86/boot/compressed/head_64.S +++ b/arch/x86/boot/compressed/head_64.S @@ -106,7 +106,7 @@ SYM_FUNC_START(startup_32) notl %eax andl %eax, %ebx cmpl $LOAD_PHYSICAL_ADDR, %ebx - jge 1f + jae 1f #endif movl $LOAD_PHYSICAL_ADDR, %ebx 1: @@ -297,7 +297,7 @@ SYM_CODE_START(startup_64) notq %rax andq %rax, %rbp cmpq $LOAD_PHYSICAL_ADDR, %rbp - jge 1f + jae 1f #endif movq $LOAD_PHYSICAL_ADDR, %rbp 1:
From: Ard Biesheuvel ardb@kernel.org
[ Upstream commit dd09fad9d2caad2325a39b766ce9e79cfc690184 ]
Commit:
3a6b6c6fb23667fa ("efi: Make EFI_MEMORY_ATTRIBUTES_TABLE initialization common across all architectures")
moved the call to efi_memattr_init() from ARM specific to the generic EFI init code, in order to be able to apply the restricted permissions described in that table on x86 as well.
We never enabled this feature fully on i386, and so mapping and reserving this table is pointless. However, due to the early call to memblock_reserve(), the memory bookkeeping gets confused to the point where it produces the splat below when we try to map the memory later on:
------------[ cut here ]------------ ioremap on RAM at 0x3f251000 - 0x3fa1afff WARNING: CPU: 0 PID: 0 at arch/x86/mm/ioremap.c:166 __ioremap_caller ... Modules linked in: CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.20.0 #48 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 EIP: __ioremap_caller.constprop.0+0x249/0x260 Code: 90 0f b7 05 4e 38 40 de 09 45 e0 e9 09 ff ff ff 90 8d 45 ec c6 05 ... EAX: 00000029 EBX: 00000000 ECX: de59c228 EDX: 00000001 ESI: 3f250fff EDI: 00000000 EBP: de3edf20 ESP: de3edee0 DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 EFLAGS: 00200296 CR0: 80050033 CR2: ffd17000 CR3: 1e58c000 CR4: 00040690 Call Trace: ioremap_cache+0xd/0x10 ? old_map_region+0x72/0x9d old_map_region+0x72/0x9d efi_map_region+0x8/0xa efi_enter_virtual_mode+0x260/0x43b start_kernel+0x329/0x3aa i386_start_kernel+0xa7/0xab startup_32_smp+0x164/0x168 ---[ end trace e15ccf6b9f356833 ]---
Let's work around this by disregarding the memory attributes table altogether on i386, which does not result in a loss of functionality or protection, given that we never consumed the contents.
Fixes: 3a6b6c6fb23667fa ("efi: Make EFI_MEMORY_ATTRIBUTES_TABLE ... ") Tested-by: Arvind Sankar nivedita@alum.mit.edu Signed-off-by: Ard Biesheuvel ardb@kernel.org Signed-off-by: Ingo Molnar mingo@kernel.org Link: https://lore.kernel.org/r/20200304165917.5893-1-ardb@kernel.org Link: https://lore.kernel.org/r/20200308080859.21568-21-ardb@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/firmware/efi/efi.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c index a9778591341bb..8c32054a266c9 100644 --- a/drivers/firmware/efi/efi.c +++ b/drivers/firmware/efi/efi.c @@ -570,7 +570,7 @@ int __init efi_config_parse_tables(void *config_tables, int count, int sz, } }
- if (efi_enabled(EFI_MEMMAP)) + if (!IS_ENABLED(CONFIG_X86_32) && efi_enabled(EFI_MEMMAP)) efi_memattr_init();
efi_tpm_eventlog_init();
From: Alexander Sverdlin alexander.sverdlin@nokia.com
[ Upstream commit 87f2d1c662fa1761359fdf558246f97e484d177a ]
irq_domain_alloc_irqs_hierarchy() has 3 call sites in the compilation unit but only one of them checks for the pointer which is being dereferenced inside the called function. Move the check into the function. This allows for catching the error instead of the following crash:
Unable to handle kernel NULL pointer dereference at virtual address 00000000 PC is at 0x0 LR is at gpiochip_hierarchy_irq_domain_alloc+0x11f/0x140 ... [<c06c23ff>] (gpiochip_hierarchy_irq_domain_alloc) [<c0462a89>] (__irq_domain_alloc_irqs) [<c0462dad>] (irq_create_fwspec_mapping) [<c06c2251>] (gpiochip_to_irq) [<c06c1c9b>] (gpiod_to_irq) [<bf973073>] (gpio_irqs_init [gpio_irqs]) [<bf974048>] (gpio_irqs_exit+0xecc/0xe84 [gpio_irqs]) Code: bad PC value
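The pattern, sketched with hypothetical names in plain userspace C (not the irqdomain code itself): validate the optional callback in the one helper that dereferences it, so every caller, present and future, gets -ENOSYS instead of a NULL dereference.

#include <errno.h>
#include <stddef.h>
#include <stdio.h>

struct demo_ops {
        int (*alloc)(int n);            /* optional hook, may be NULL */
};

/* Check the hook where it is dereferenced, covering all call sites at once. */
static int demo_alloc(const struct demo_ops *ops, int n)
{
        if (!ops->alloc)
                return -ENOSYS;
        return ops->alloc(n);
}

int main(void)
{
        struct demo_ops ops = { .alloc = NULL };

        printf("%d\n", demo_alloc(&ops, 4));    /* returns -ENOSYS, no crash */
        return 0;
}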
Signed-off-by: Alexander Sverdlin alexander.sverdlin@nokia.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Link: https://lkml.kernel.org/r/20200306174720.82604-1-alexander.sverdlin@nokia.co... Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/irq/irqdomain.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c index 480df36597206..c776b8e86fbcc 100644 --- a/kernel/irq/irqdomain.c +++ b/kernel/irq/irqdomain.c @@ -1293,6 +1293,11 @@ int irq_domain_alloc_irqs_hierarchy(struct irq_domain *domain, unsigned int irq_base, unsigned int nr_irqs, void *arg) { + if (!domain->ops->alloc) { + pr_debug("domain->ops->alloc() is NULL\n"); + return -ENOSYS; + } + return domain->ops->alloc(domain, irq_base, nr_irqs, arg); }
@@ -1330,11 +1335,6 @@ int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base, return -EINVAL; }
- if (!domain->ops->alloc) { - pr_debug("domain->ops->alloc() is NULL\n"); - return -ENOSYS; - } - if (realloc && irq_base >= 0) { virq = irq_base; } else {
From: Sahitya Tummala stummala@codeaurora.org
[ Upstream commit 30a2da7b7e225ef6c87a660419ea04d3cef3f6a7 ]
There is a potential race between ioc_release_fn() and ioc_clear_queue(), as shown below, which leads to the kernel crash reported further down. It can also result in a use-after-free issue.
context#1                                 context#2
ioc_release_fn()                          __ioc_clear_queue() gets the same icq
 ->spin_lock(&ioc->lock);                  ->spin_lock(&ioc->lock);
 ->ioc_destroy_icq(icq);
   ->list_del_init(&icq->q_node);
   ->call_rcu(&icq->__rcu_head,
              icq_free_icq_rcu);
 ->spin_unlock(&ioc->lock);
                                           ->ioc_destroy_icq(icq);
                                             ->hlist_del_init(&icq->ioc_node);
This results in the crash below, as this memory is now used by icq->__rcu_head in context#1. There is also a chance that the icq itself could already have been freed.
22150.386550: <6> Unable to handle kernel write to read-only memory at virtual address ffffffaa8d31ca50 ... Call trace: 22150.607350: <2> ioc_destroy_icq+0x44/0x110 22150.611202: <2> ioc_clear_queue+0xac/0x148 22150.615056: <2> blk_cleanup_queue+0x11c/0x1a0 22150.619174: <2> __scsi_remove_device+0xdc/0x128 22150.623465: <2> scsi_forget_host+0x2c/0x78 22150.627315: <2> scsi_remove_host+0x7c/0x2a0 22150.631257: <2> usb_stor_disconnect+0x74/0xc8 22150.635371: <2> usb_unbind_interface+0xc8/0x278 22150.639665: <2> device_release_driver_internal+0x198/0x250 22150.644897: <2> device_release_driver+0x24/0x30 22150.649176: <2> bus_remove_device+0xec/0x140 22150.653204: <2> device_del+0x270/0x460 22150.656712: <2> usb_disable_device+0x120/0x390 22150.660918: <2> usb_disconnect+0xf4/0x2e0 22150.664684: <2> hub_event+0xd70/0x17e8 22150.668197: <2> process_one_work+0x210/0x480 22150.672222: <2> worker_thread+0x32c/0x4c8
Fix this by adding a new ICQ_DESTROYED flag, set in ioc_destroy_icq(), to indicate that the icq has already been destroyed. Also, ensure __ioc_clear_queue() accesses the icq within rcu_read_lock()/rcu_read_unlock() so that the icq doesn't get freed while it is still being used.
Signed-off-by: Sahitya Tummala stummala@codeaurora.org Co-developed-by: Pradeep P V K ppvk@codeaurora.org Signed-off-by: Pradeep P V K ppvk@codeaurora.org Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- block/blk-ioc.c | 7 +++++++ include/linux/iocontext.h | 1 + 2 files changed, 8 insertions(+)
diff --git a/block/blk-ioc.c b/block/blk-ioc.c index 5ed59ac6ae58b..9df50fb507caf 100644 --- a/block/blk-ioc.c +++ b/block/blk-ioc.c @@ -84,6 +84,7 @@ static void ioc_destroy_icq(struct io_cq *icq) * making it impossible to determine icq_cache. Record it in @icq. */ icq->__rcu_icq_cache = et->icq_cache; + icq->flags |= ICQ_DESTROYED; call_rcu(&icq->__rcu_head, icq_free_icq_rcu); }
@@ -212,15 +213,21 @@ static void __ioc_clear_queue(struct list_head *icq_list) { unsigned long flags;
+ rcu_read_lock(); while (!list_empty(icq_list)) { struct io_cq *icq = list_entry(icq_list->next, struct io_cq, q_node); struct io_context *ioc = icq->ioc;
spin_lock_irqsave(&ioc->lock, flags); + if (icq->flags & ICQ_DESTROYED) { + spin_unlock_irqrestore(&ioc->lock, flags); + continue; + } ioc_destroy_icq(icq); spin_unlock_irqrestore(&ioc->lock, flags); } + rcu_read_unlock(); }
/** diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h index dba15ca8e60bc..1dcd9198beb7f 100644 --- a/include/linux/iocontext.h +++ b/include/linux/iocontext.h @@ -8,6 +8,7 @@
enum { ICQ_EXITED = 1 << 2, + ICQ_DESTROYED = 1 << 3, };
/*
From: Alexey Dobriyan adobriyan@gmail.com
[ Upstream commit 11bde986002c0af67eb92d73321d06baefae7128 ]
Check for overflow in addition before checking for end-of-block-device.
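For illustration, a userspace sketch of the wraparound using the same values as the reproducer below (the capacity value here is made up): sector + nr_sectors overflows the 64-bit type and wraps to 0, so the old "end_sector > capacity" test passes while the new "end_sector <= sector" test rejects the range.

#include <stdio.h>

int main(void)
{
        unsigned long long sector     = 4096;
        unsigned long long nr_sectors = 0xfffffffffffff000ULL;
        unsigned long long capacity   = 524288;                 /* made-up device size */
        unsigned long long end_sector = sector + nr_sectors;    /* wraps to 0 */

        printf("end_sector = %llu\n", end_sector);
        printf("old check rejects: %d\n", !nr_sectors || end_sector > capacity);
        printf("new check rejects: %d\n",
               end_sector <= sector || end_sector > capacity);
        return 0;
}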
Steps to reproduce:
#define _GNU_SOURCE 1
#include <sys/ioctl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

typedef unsigned long long __u64;

struct blk_zone_range {
        __u64 sector;
        __u64 nr_sectors;
};

#define BLKRESETZONE    _IOW(0x12, 131, struct blk_zone_range)

int main(void)
{
        int fd = open("/dev/nullb0", O_RDWR|O_DIRECT);
        struct blk_zone_range zr = {4096, 0xfffffffffffff000ULL};

        ioctl(fd, BLKRESETZONE, &zr);
        return 0;
}
BUG: KASAN: null-ptr-deref in submit_bio_wait+0x74/0xe0 Write of size 8 at addr 0000000000000040 by task a.out/1590
CPU: 8 PID: 1590 Comm: a.out Not tainted 5.6.0-rc1-00019-g359c92c02bfa #2 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20190711_202441-buildvm-armv7-10.arm.fedoraproject.org-2.fc31 04/01/2014 Call Trace: dump_stack+0x76/0xa0 __kasan_report.cold+0x5/0x3e kasan_report+0xe/0x20 submit_bio_wait+0x74/0xe0 blkdev_zone_mgmt+0x26f/0x2a0 blkdev_zone_mgmt_ioctl+0x14b/0x1b0 blkdev_ioctl+0xb28/0xe60 block_ioctl+0x69/0x80 ksys_ioctl+0x3af/0xa50
Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Alexey Dobriyan (SK hynix) adobriyan@gmail.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- block/blk-zoned.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-zoned.c b/block/blk-zoned.c index d00fcfd71dfea..eb27e80e9075c 100644 --- a/block/blk-zoned.c +++ b/block/blk-zoned.c @@ -173,7 +173,7 @@ int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op, if (!op_is_zone_mgmt(op)) return -EOPNOTSUPP;
- if (!nr_sectors || end_sector > capacity) + if (end_sector <= sector || end_sector > capacity) /* Out of range */ return -EINVAL;
From: Hsin-Yi Wang hsinyi@chromium.org
[ Upstream commit e6599adfad30c340d06574e49a86afa7015c5c60 ]
Previously, vpu->recv_buf and vpu->send_buf were force-cast from void __iomem *tcm, and vpu->recv_buf->share_buf was passed to vpu_ipi_desc.handler(). Unaligned access to this iomem area is not possible; otherwise the kernel crashes because it is unable to handle the kernel paging request.
struct vpu_run {
        u32 signaled;
        char fw_ver[VPU_FW_VER_LEN];
        unsigned int dec_capability;
        unsigned int enc_capability;
        wait_queue_head_t wq;
};
fw_ver starts at a 4-byte boundary. If the system enables CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS, strscpy() will do read_word_at_a_time(), which tries to read 8 bytes at once: *(unsigned long *)addr
vpu_init_ipi_handler() calls strscpy(), which would lead to a crash.
vpu_init_ipi_handler() and several other handlers (e.g. vpu_dec_ipi_handler()) only do read access to this data, so they can take it as const, and we can use memcpy_fromio() to copy the buffer into a normal (non-iomem) buffer and then pass that to the handler.
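A minimal sketch of the copy-before-dispatch pattern, with hypothetical names and buffer size (this is not the driver code): the iomem-backed message is bounced into ordinary kernel memory with memcpy_fromio() before any handler that may perform word-at-a-time reads sees it.

#include <linux/io.h>
#include <linux/kernel.h>

/* Hypothetical dispatch helper for illustration only. */
static void demo_dispatch(void __iomem *shared_buf, unsigned int len,
                          void (*handler)(const void *data, unsigned int len))
{
        unsigned char copy[64];         /* bounce buffer in normal RAM */

        /* Copy out of iomem first; handlers may then use strscpy() etc. */
        memcpy_fromio(copy, shared_buf, min_t(unsigned int, len, sizeof(copy)));
        handler(copy, len);
}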
Fixes: 85709cbf1524 ("media: replace strncpy() by strscpy()") Signed-off-by: Hsin-Yi Wang hsinyi@chromium.org Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/mtk-mdp/mtk_mdp_vpu.c | 9 ++-- .../media/platform/mtk-vcodec/vdec_vpu_if.c | 6 +-- .../media/platform/mtk-vcodec/venc_vpu_if.c | 12 ++--- drivers/media/platform/mtk-vpu/mtk_vpu.c | 45 ++++++++++--------- drivers/media/platform/mtk-vpu/mtk_vpu.h | 2 +- 5 files changed, 38 insertions(+), 36 deletions(-)
diff --git a/drivers/media/platform/mtk-mdp/mtk_mdp_vpu.c b/drivers/media/platform/mtk-mdp/mtk_mdp_vpu.c index 6720d11f50cf6..b065ccd069140 100644 --- a/drivers/media/platform/mtk-mdp/mtk_mdp_vpu.c +++ b/drivers/media/platform/mtk-mdp/mtk_mdp_vpu.c @@ -15,7 +15,7 @@ static inline struct mtk_mdp_ctx *vpu_to_ctx(struct mtk_mdp_vpu *vpu) return container_of(vpu, struct mtk_mdp_ctx, vpu); }
-static void mtk_mdp_vpu_handle_init_ack(struct mdp_ipi_comm_ack *msg) +static void mtk_mdp_vpu_handle_init_ack(const struct mdp_ipi_comm_ack *msg) { struct mtk_mdp_vpu *vpu = (struct mtk_mdp_vpu *) (unsigned long)msg->ap_inst; @@ -26,10 +26,11 @@ static void mtk_mdp_vpu_handle_init_ack(struct mdp_ipi_comm_ack *msg) vpu->inst_addr = msg->vpu_inst_addr; }
-static void mtk_mdp_vpu_ipi_handler(void *data, unsigned int len, void *priv) +static void mtk_mdp_vpu_ipi_handler(const void *data, unsigned int len, + void *priv) { - unsigned int msg_id = *(unsigned int *)data; - struct mdp_ipi_comm_ack *msg = (struct mdp_ipi_comm_ack *)data; + const struct mdp_ipi_comm_ack *msg = data; + unsigned int msg_id = msg->msg_id; struct mtk_mdp_vpu *vpu = (struct mtk_mdp_vpu *) (unsigned long)msg->ap_inst; struct mtk_mdp_ctx *ctx; diff --git a/drivers/media/platform/mtk-vcodec/vdec_vpu_if.c b/drivers/media/platform/mtk-vcodec/vdec_vpu_if.c index 70abfd4cd4b9f..948a12fd9d46a 100644 --- a/drivers/media/platform/mtk-vcodec/vdec_vpu_if.c +++ b/drivers/media/platform/mtk-vcodec/vdec_vpu_if.c @@ -9,7 +9,7 @@ #include "vdec_ipi_msg.h" #include "vdec_vpu_if.h"
-static void handle_init_ack_msg(struct vdec_vpu_ipi_init_ack *msg) +static void handle_init_ack_msg(const struct vdec_vpu_ipi_init_ack *msg) { struct vdec_vpu_inst *vpu = (struct vdec_vpu_inst *) (unsigned long)msg->ap_inst_addr; @@ -34,9 +34,9 @@ static void handle_init_ack_msg(struct vdec_vpu_ipi_init_ack *msg) * This function runs in interrupt context and it means there's an IPI MSG * from VPU. */ -static void vpu_dec_ipi_handler(void *data, unsigned int len, void *priv) +static void vpu_dec_ipi_handler(const void *data, unsigned int len, void *priv) { - struct vdec_vpu_ipi_ack *msg = data; + const struct vdec_vpu_ipi_ack *msg = data; struct vdec_vpu_inst *vpu = (struct vdec_vpu_inst *) (unsigned long)msg->ap_inst_addr;
diff --git a/drivers/media/platform/mtk-vcodec/venc_vpu_if.c b/drivers/media/platform/mtk-vcodec/venc_vpu_if.c index 3e931b0ed0965..9540709c19058 100644 --- a/drivers/media/platform/mtk-vcodec/venc_vpu_if.c +++ b/drivers/media/platform/mtk-vcodec/venc_vpu_if.c @@ -8,26 +8,26 @@ #include "venc_ipi_msg.h" #include "venc_vpu_if.h"
-static void handle_enc_init_msg(struct venc_vpu_inst *vpu, void *data) +static void handle_enc_init_msg(struct venc_vpu_inst *vpu, const void *data) { - struct venc_vpu_ipi_msg_init *msg = data; + const struct venc_vpu_ipi_msg_init *msg = data;
vpu->inst_addr = msg->vpu_inst_addr; vpu->vsi = vpu_mapping_dm_addr(vpu->dev, msg->vpu_inst_addr); }
-static void handle_enc_encode_msg(struct venc_vpu_inst *vpu, void *data) +static void handle_enc_encode_msg(struct venc_vpu_inst *vpu, const void *data) { - struct venc_vpu_ipi_msg_enc *msg = data; + const struct venc_vpu_ipi_msg_enc *msg = data;
vpu->state = msg->state; vpu->bs_size = msg->bs_size; vpu->is_key_frm = msg->is_key_frm; }
-static void vpu_enc_ipi_handler(void *data, unsigned int len, void *priv) +static void vpu_enc_ipi_handler(const void *data, unsigned int len, void *priv) { - struct venc_vpu_ipi_msg_common *msg = data; + const struct venc_vpu_ipi_msg_common *msg = data; struct venc_vpu_inst *vpu = (struct venc_vpu_inst *)(unsigned long)msg->venc_inst;
diff --git a/drivers/media/platform/mtk-vpu/mtk_vpu.c b/drivers/media/platform/mtk-vpu/mtk_vpu.c index a768707abb942..2fbccc9b247b0 100644 --- a/drivers/media/platform/mtk-vpu/mtk_vpu.c +++ b/drivers/media/platform/mtk-vpu/mtk_vpu.c @@ -203,8 +203,8 @@ struct mtk_vpu { struct vpu_run run; struct vpu_wdt wdt; struct vpu_ipi_desc ipi_desc[IPI_MAX]; - struct share_obj *recv_buf; - struct share_obj *send_buf; + struct share_obj __iomem *recv_buf; + struct share_obj __iomem *send_buf; struct device *dev; struct clk *clk; bool fw_loaded; @@ -292,7 +292,7 @@ int vpu_ipi_send(struct platform_device *pdev, unsigned int len) { struct mtk_vpu *vpu = platform_get_drvdata(pdev); - struct share_obj *send_obj = vpu->send_buf; + struct share_obj __iomem *send_obj = vpu->send_buf; unsigned long timeout; int ret = 0;
@@ -325,9 +325,9 @@ int vpu_ipi_send(struct platform_device *pdev, } } while (vpu_cfg_readl(vpu, HOST_TO_VPU));
- memcpy((void *)send_obj->share_buf, buf, len); - send_obj->len = len; - send_obj->id = id; + memcpy_toio(send_obj->share_buf, buf, len); + writel(len, &send_obj->len); + writel(id, &send_obj->id);
vpu->ipi_id_ack[id] = false; /* send the command to VPU */ @@ -600,10 +600,10 @@ OUT_LOAD_FW: } EXPORT_SYMBOL_GPL(vpu_load_firmware);
-static void vpu_init_ipi_handler(void *data, unsigned int len, void *priv) +static void vpu_init_ipi_handler(const void *data, unsigned int len, void *priv) { - struct mtk_vpu *vpu = (struct mtk_vpu *)priv; - struct vpu_run *run = (struct vpu_run *)data; + struct mtk_vpu *vpu = priv; + const struct vpu_run *run = data;
vpu->run.signaled = run->signaled; strscpy(vpu->run.fw_ver, run->fw_ver, sizeof(vpu->run.fw_ver)); @@ -700,19 +700,21 @@ static int vpu_alloc_ext_mem(struct mtk_vpu *vpu, u32 fw_type)
static void vpu_ipi_handler(struct mtk_vpu *vpu) { - struct share_obj *rcv_obj = vpu->recv_buf; + struct share_obj __iomem *rcv_obj = vpu->recv_buf; struct vpu_ipi_desc *ipi_desc = vpu->ipi_desc; - - if (rcv_obj->id < IPI_MAX && ipi_desc[rcv_obj->id].handler) { - ipi_desc[rcv_obj->id].handler(rcv_obj->share_buf, - rcv_obj->len, - ipi_desc[rcv_obj->id].priv); - if (rcv_obj->id > IPI_VPU_INIT) { - vpu->ipi_id_ack[rcv_obj->id] = true; + unsigned char data[SHARE_BUF_SIZE]; + s32 id = readl(&rcv_obj->id); + + memcpy_fromio(data, rcv_obj->share_buf, sizeof(data)); + if (id < IPI_MAX && ipi_desc[id].handler) { + ipi_desc[id].handler(data, readl(&rcv_obj->len), + ipi_desc[id].priv); + if (id > IPI_VPU_INIT) { + vpu->ipi_id_ack[id] = true; wake_up(&vpu->ack_wq); } } else { - dev_err(vpu->dev, "No such ipi id = %d\n", rcv_obj->id); + dev_err(vpu->dev, "No such ipi id = %d\n", id); } }
@@ -722,11 +724,10 @@ static int vpu_ipi_init(struct mtk_vpu *vpu) vpu_cfg_writel(vpu, 0x0, VPU_TO_HOST);
/* shared buffer initialization */ - vpu->recv_buf = (__force struct share_obj *)(vpu->reg.tcm + - VPU_DTCM_OFFSET); + vpu->recv_buf = vpu->reg.tcm + VPU_DTCM_OFFSET; vpu->send_buf = vpu->recv_buf + 1; - memset(vpu->recv_buf, 0, sizeof(struct share_obj)); - memset(vpu->send_buf, 0, sizeof(struct share_obj)); + memset_io(vpu->recv_buf, 0, sizeof(struct share_obj)); + memset_io(vpu->send_buf, 0, sizeof(struct share_obj));
return 0; } diff --git a/drivers/media/platform/mtk-vpu/mtk_vpu.h b/drivers/media/platform/mtk-vpu/mtk_vpu.h index d4453b4bcee92..ee7c552ce9289 100644 --- a/drivers/media/platform/mtk-vpu/mtk_vpu.h +++ b/drivers/media/platform/mtk-vpu/mtk_vpu.h @@ -15,7 +15,7 @@ * VPU interfaces with other blocks by share memory and interrupt. **/
-typedef void (*ipi_handler_t) (void *data, +typedef void (*ipi_handler_t) (const void *data, unsigned int len, void *priv);
From: Dongchun Zhu dongchun.zhu@mediatek.com
[ Upstream commit f1a64f56663e9d03e509439016dcbddd0166b2da ]
From the measured hardware signal, OV5695 reset pin goes high for a
short period of time during boot-up. From the sensor specification, the reset pin is active low and the DT binding defines the pin as active low, which means that the values set by the driver are inverted and thus the value requested in probe ends up high.
Fix it by changing probe to request the reset GPIO initialized to high, which makes the initial state of the physical signal low.
In addition, DOVDD rising must occur before DVDD rising per the spec, but the regulator_bulk_enable() API enables all the regulators asynchronously. Use an explicit loop of regulator_enable() calls instead.
For the power-off sequence, it is required that DVDD falls first. Given that the bulk API does not give any guarantee about the order of the regulators, change the driver to use regulator_disable() instead.
The sensor also requires a delay between reset high and first I2C transaction, which was assumed to be 8192 XVCLK cycles, but 1ms is recommended by the vendor. Fix this as well.
Signed-off-by: Dongchun Zhu dongchun.zhu@mediatek.com Signed-off-by: Tomasz Figa tfiga@chromium.org Signed-off-by: Sakari Ailus sakari.ailus@linux.intel.com Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/i2c/ov5695.c | 49 ++++++++++++++++++++++++-------------- 1 file changed, 31 insertions(+), 18 deletions(-)
diff --git a/drivers/media/i2c/ov5695.c b/drivers/media/i2c/ov5695.c index d6cd15bb699ac..cc678d9d2e0da 100644 --- a/drivers/media/i2c/ov5695.c +++ b/drivers/media/i2c/ov5695.c @@ -971,16 +971,9 @@ unlock_and_return: return ret; }
-/* Calculate the delay in us by clock rate and clock cycles */ -static inline u32 ov5695_cal_delay(u32 cycles) -{ - return DIV_ROUND_UP(cycles, OV5695_XVCLK_FREQ / 1000 / 1000); -} - static int __ov5695_power_on(struct ov5695 *ov5695) { - int ret; - u32 delay_us; + int i, ret; struct device *dev = &ov5695->client->dev;
ret = clk_prepare_enable(ov5695->xvclk); @@ -991,21 +984,28 @@ static int __ov5695_power_on(struct ov5695 *ov5695)
gpiod_set_value_cansleep(ov5695->reset_gpio, 1);
- ret = regulator_bulk_enable(OV5695_NUM_SUPPLIES, ov5695->supplies); - if (ret < 0) { - dev_err(dev, "Failed to enable regulators\n"); - goto disable_clk; + /* + * The hardware requires the regulators to be powered on in order, + * so enable them one by one. + */ + for (i = 0; i < OV5695_NUM_SUPPLIES; i++) { + ret = regulator_enable(ov5695->supplies[i].consumer); + if (ret) { + dev_err(dev, "Failed to enable %s: %d\n", + ov5695->supplies[i].supply, ret); + goto disable_reg_clk; + } }
gpiod_set_value_cansleep(ov5695->reset_gpio, 0);
- /* 8192 cycles prior to first SCCB transaction */ - delay_us = ov5695_cal_delay(8192); - usleep_range(delay_us, delay_us * 2); + usleep_range(1000, 1200);
return 0;
-disable_clk: +disable_reg_clk: + for (--i; i >= 0; i--) + regulator_disable(ov5695->supplies[i].consumer); clk_disable_unprepare(ov5695->xvclk);
return ret; @@ -1013,9 +1013,22 @@ disable_clk:
static void __ov5695_power_off(struct ov5695 *ov5695) { + struct device *dev = &ov5695->client->dev; + int i, ret; + clk_disable_unprepare(ov5695->xvclk); gpiod_set_value_cansleep(ov5695->reset_gpio, 1); - regulator_bulk_disable(OV5695_NUM_SUPPLIES, ov5695->supplies); + + /* + * The hardware requires the regulators to be powered off in order, + * so disable them one by one. + */ + for (i = OV5695_NUM_SUPPLIES - 1; i >= 0; i--) { + ret = regulator_disable(ov5695->supplies[i].consumer); + if (ret) + dev_err(dev, "Failed to disable %s: %d\n", + ov5695->supplies[i].supply, ret); + } }
static int __maybe_unused ov5695_runtime_resume(struct device *dev) @@ -1285,7 +1298,7 @@ static int ov5695_probe(struct i2c_client *client, if (clk_get_rate(ov5695->xvclk) != OV5695_XVCLK_FREQ) dev_warn(dev, "xvclk mismatched, modes are based on 24MHz\n");
- ov5695->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW); + ov5695->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH); if (IS_ERR(ov5695->reset_gpio)) { dev_err(dev, "Failed to get reset-gpios\n"); return -EINVAL;
From: Neil Armstrong narmstrong@baylibre.com
[ Upstream commit 7ba6b09fda5e0cb741ee56f3264665e0edc64822 ]
In certain circumstances, the XHCI SuperSpeed instance in park mode can fail to recover. On Amlogic G12A/G12B/SM1 SoCs, when there is high load on the single XHCI SuperSpeed instance, the controller can crash like:
xhci-hcd xhci-hcd.0.auto: xHCI host not responding to stop endpoint command.
xhci-hcd xhci-hcd.0.auto: Host halt failed, -110
xhci-hcd xhci-hcd.0.auto: xHCI host controller not responding, assume dead
xhci-hcd xhci-hcd.0.auto: xHCI host not responding to stop endpoint command.
hub 2-1.1:1.0: hub_ext_port_status failed (err = -22)
xhci-hcd xhci-hcd.0.auto: HC died; cleaning up
usb 2-1.1-port1: cannot reset (err = -22)
Setting the PARKMODE_DISABLE_SS bit in the DWC3_USB3_GUCTL1 register mitigates the issue. The bit is described as: "When this bit is set to '1' all SS bus instances in park mode are disabled"
Synopsys explains: The GUCTL1.PARKMODE_DISABLE_SS is only available in dwc_usb3 controller running in host mode. This should not be set for other IPs. This can be disabled by default based on IP, but I recommend to have a property to enable this feature for devices that need this.
CC: Dongjin Kim tobetter@gmail.com Cc: Jianxin Pan jianxin.pan@amlogic.com Cc: Thinh Nguyen thinhn@synopsys.com Cc: Jun Li lijun.kernel@gmail.com Reported-by: Tim elatllat@gmail.com Signed-off-by: Neil Armstrong narmstrong@baylibre.com Signed-off-by: Felipe Balbi balbi@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/dwc3/core.c | 5 +++++ drivers/usb/dwc3/core.h | 4 ++++ 2 files changed, 9 insertions(+)
diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c index 1d85c42b9c674..43bd5b1ea9e2c 100644 --- a/drivers/usb/dwc3/core.c +++ b/drivers/usb/dwc3/core.c @@ -1029,6 +1029,9 @@ static int dwc3_core_init(struct dwc3 *dwc) if (dwc->dis_tx_ipgap_linecheck_quirk) reg |= DWC3_GUCTL1_TX_IPGAP_LINECHECK_DIS;
+ if (dwc->parkmode_disable_ss_quirk) + reg |= DWC3_GUCTL1_PARKMODE_DISABLE_SS; + dwc3_writel(dwc->regs, DWC3_GUCTL1, reg); }
@@ -1342,6 +1345,8 @@ static void dwc3_get_properties(struct dwc3 *dwc) "snps,dis-del-phy-power-chg-quirk"); dwc->dis_tx_ipgap_linecheck_quirk = device_property_read_bool(dev, "snps,dis-tx-ipgap-linecheck-quirk"); + dwc->parkmode_disable_ss_quirk = device_property_read_bool(dev, + "snps,parkmode-disable-ss-quirk");
dwc->tx_de_emphasis_quirk = device_property_read_bool(dev, "snps,tx_de_emphasis_quirk"); diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h index 77c4a9abe3652..3ecc69c5b150f 100644 --- a/drivers/usb/dwc3/core.h +++ b/drivers/usb/dwc3/core.h @@ -249,6 +249,7 @@ #define DWC3_GUCTL_HSTINAUTORETRY BIT(14)
/* Global User Control 1 Register */ +#define DWC3_GUCTL1_PARKMODE_DISABLE_SS BIT(17) #define DWC3_GUCTL1_TX_IPGAP_LINECHECK_DIS BIT(28) #define DWC3_GUCTL1_DEV_L1_EXIT_BY_HW BIT(24)
@@ -1024,6 +1025,8 @@ struct dwc3_scratchpad_array { * change quirk. * @dis_tx_ipgap_linecheck_quirk: set if we disable u2mac linestate * check during HS transmit. + * @parkmode_disable_ss_quirk: set if we need to disable all SuperSpeed + * instances in park mode. * @tx_de_emphasis_quirk: set if we enable Tx de-emphasis quirk * @tx_de_emphasis: Tx de-emphasis value * 0 - -6dB de-emphasis @@ -1215,6 +1218,7 @@ struct dwc3 { unsigned dis_u2_freeclk_exists_quirk:1; unsigned dis_del_phy_power_chg_quirk:1; unsigned dis_tx_ipgap_linecheck_quirk:1; + unsigned parkmode_disable_ss_quirk:1;
unsigned tx_de_emphasis_quirk:1; unsigned tx_de_emphasis:2;
From: Marc Zyngier maz@kernel.org
[ Upstream commit 7809f7011c3bce650e502a98afeb05961470d865 ]
On a very heavily loaded D05 with GICv4, I managed to trigger the following lockdep splat:
[ 6022.598864] ====================================================== [ 6022.605031] WARNING: possible circular locking dependency detected [ 6022.611200] 5.6.0-rc4-00026-geee7c7b0f498 #680 Tainted: G E [ 6022.618061] ------------------------------------------------------ [ 6022.624227] qemu-system-aar/7569 is trying to acquire lock: [ 6022.629789] ffff042f97606808 (&p->pi_lock){-.-.}, at: try_to_wake_up+0x54/0x7a0 [ 6022.637102] [ 6022.637102] but task is already holding lock: [ 6022.642921] ffff002fae424cf0 (&irq_desc_lock_class){-.-.}, at: __irq_get_desc_lock+0x5c/0x98 [ 6022.651350] [ 6022.651350] which lock already depends on the new lock. [ 6022.651350] [ 6022.659512] [ 6022.659512] the existing dependency chain (in reverse order) is: [ 6022.666980] [ 6022.666980] -> #2 (&irq_desc_lock_class){-.-.}: [ 6022.672983] _raw_spin_lock_irqsave+0x50/0x78 [ 6022.677848] __irq_get_desc_lock+0x5c/0x98 [ 6022.682453] irq_set_vcpu_affinity+0x40/0xc0 [ 6022.687236] its_make_vpe_non_resident+0x6c/0xb8 [ 6022.692364] vgic_v4_put+0x54/0x70 [ 6022.696273] vgic_v3_put+0x20/0xd8 [ 6022.700183] kvm_vgic_put+0x30/0x48 [ 6022.704182] kvm_arch_vcpu_put+0x34/0x50 [ 6022.708614] kvm_sched_out+0x34/0x50 [ 6022.712700] __schedule+0x4bc/0x7f8 [ 6022.716697] schedule+0x50/0xd8 [ 6022.720347] kvm_arch_vcpu_ioctl_run+0x5f0/0x978 [ 6022.725473] kvm_vcpu_ioctl+0x3d4/0x8f8 [ 6022.729820] ksys_ioctl+0x90/0xd0 [ 6022.733642] __arm64_sys_ioctl+0x24/0x30 [ 6022.738074] el0_svc_common.constprop.3+0xa8/0x1e8 [ 6022.743373] do_el0_svc+0x28/0x88 [ 6022.747198] el0_svc+0x14/0x40 [ 6022.750761] el0_sync_handler+0x124/0x2b8 [ 6022.755278] el0_sync+0x140/0x180 [ 6022.759100] [ 6022.759100] -> #1 (&rq->lock){-.-.}: [ 6022.764143] _raw_spin_lock+0x38/0x50 [ 6022.768314] task_fork_fair+0x40/0x128 [ 6022.772572] sched_fork+0xe0/0x210 [ 6022.776484] copy_process+0x8c4/0x18d8 [ 6022.780742] _do_fork+0x88/0x6d8 [ 6022.784478] kernel_thread+0x64/0x88 [ 6022.788563] rest_init+0x30/0x270 [ 6022.792390] arch_call_rest_init+0x14/0x1c [ 6022.796995] start_kernel+0x498/0x4c4 [ 6022.801164] [ 6022.801164] -> #0 (&p->pi_lock){-.-.}: [ 6022.806382] __lock_acquire+0xdd8/0x15c8 [ 6022.810813] lock_acquire+0xd0/0x218 [ 6022.814896] _raw_spin_lock_irqsave+0x50/0x78 [ 6022.819761] try_to_wake_up+0x54/0x7a0 [ 6022.824018] wake_up_process+0x1c/0x28 [ 6022.828276] wakeup_softirqd+0x38/0x40 [ 6022.832533] __tasklet_schedule_common+0xc4/0xf0 [ 6022.837658] __tasklet_schedule+0x24/0x30 [ 6022.842176] check_irq_resend+0xc8/0x158 [ 6022.846609] irq_startup+0x74/0x128 [ 6022.850606] __enable_irq+0x6c/0x78 [ 6022.854602] enable_irq+0x54/0xa0 [ 6022.858431] its_make_vpe_non_resident+0xa4/0xb8 [ 6022.863557] vgic_v4_put+0x54/0x70 [ 6022.867469] kvm_arch_vcpu_blocking+0x28/0x38 [ 6022.872336] kvm_vcpu_block+0x48/0x490 [ 6022.876594] kvm_handle_wfx+0x18c/0x310 [ 6022.880938] handle_exit+0x138/0x198 [ 6022.885022] kvm_arch_vcpu_ioctl_run+0x4d4/0x978 [ 6022.890148] kvm_vcpu_ioctl+0x3d4/0x8f8 [ 6022.894494] ksys_ioctl+0x90/0xd0 [ 6022.898317] __arm64_sys_ioctl+0x24/0x30 [ 6022.902748] el0_svc_common.constprop.3+0xa8/0x1e8 [ 6022.908046] do_el0_svc+0x28/0x88 [ 6022.911871] el0_svc+0x14/0x40 [ 6022.915434] el0_sync_handler+0x124/0x2b8 [ 6022.919951] el0_sync+0x140/0x180 [ 6022.923773] [ 6022.923773] other info that might help us debug this: [ 6022.923773] [ 6022.931762] Chain exists of: [ 6022.931762] &p->pi_lock --> &rq->lock --> &irq_desc_lock_class [ 6022.931762] [ 6022.942101] Possible unsafe locking scenario: [ 6022.942101] [ 6022.948007] CPU0 CPU1 [ 6022.952523] 
---- ---- [ 6022.957039] lock(&irq_desc_lock_class); [ 6022.961036] lock(&rq->lock); [ 6022.966595] lock(&irq_desc_lock_class); [ 6022.973109] lock(&p->pi_lock); [ 6022.976324] [ 6022.976324] *** DEADLOCK ***
This is happening because we have a pending doorbell that requires retrigger. As SW retriggering is done in a tasklet, we trigger the circular dependency above.
The easy cop-out is to provide a retrigger callback that doesn't require acquiring any extra lock.
Signed-off-by: Marc Zyngier maz@kernel.org Link: https://lore.kernel.org/r/20200310184921.23552-5-maz@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/irqchip/irq-gic-v3-its.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c index 50f89056c16bb..8c757a125a55d 100644 --- a/drivers/irqchip/irq-gic-v3-its.c +++ b/drivers/irqchip/irq-gic-v3-its.c @@ -3142,12 +3142,18 @@ static int its_vpe_set_irqchip_state(struct irq_data *d, return 0; }
+static int its_vpe_retrigger(struct irq_data *d) +{ + return !its_vpe_set_irqchip_state(d, IRQCHIP_STATE_PENDING, true); +} + static struct irq_chip its_vpe_irq_chip = { .name = "GICv4-vpe", .irq_mask = its_vpe_mask_irq, .irq_unmask = its_vpe_unmask_irq, .irq_eoi = irq_chip_eoi_parent, .irq_set_affinity = its_vpe_set_affinity, + .irq_retrigger = its_vpe_retrigger, .irq_set_irqchip_state = its_vpe_set_irqchip_state, .irq_set_vcpu_affinity = its_vpe_set_vcpu_affinity, };
From: Guoqing Jiang guoqing.jiang@cloud.ionos.com
[ Upstream commit 6b40bec3b13278d21fa6c1ae7a0bdf2e550eed5f ]
Don't call quiesce(1) and quiesce(0) if the array is already suspended; otherwise, in level_store(), the array becomes writable right after mddev_detach() in the sequence below, even though the intention is to make the array writable only after resume.
mddev_suspend(mddev);
mddev_detach(mddev);
...
mddev_resume(mddev);
It also causes the call trace below, as reported in [1].
[48005.653834] WARNING: CPU: 1 PID: 45380 at kernel/kthread.c:510 kthread_park+0x77/0x90 [...] [48005.653976] CPU: 1 PID: 45380 Comm: mdadm Tainted: G OE 5.4.10-arch1-1 #1 [48005.653979] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./J4105-ITX, BIOS P1.40 08/06/2018 [48005.653984] RIP: 0010:kthread_park+0x77/0x90 [48005.654015] Call Trace: [48005.654039] r5l_quiesce+0x3c/0x70 [raid456] [48005.654052] raid5_quiesce+0x228/0x2e0 [raid456] [48005.654073] mddev_detach+0x30/0x70 [md_mod] [48005.654090] level_store+0x202/0x670 [md_mod] [48005.654099] ? security_capable+0x40/0x60 [48005.654114] md_attr_store+0x7b/0xc0 [md_mod] [48005.654123] kernfs_fop_write+0xce/0x1b0 [48005.654132] vfs_write+0xb6/0x1a0 [48005.654138] ksys_write+0x67/0xe0 [48005.654146] do_syscall_64+0x4e/0x140 [48005.654155] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [48005.654161] RIP: 0033:0x7fa0c8737497
[1]: https://bugzilla.kernel.org/show_bug.cgi?id=206161
Signed-off-by: Guoqing Jiang guoqing.jiang@cloud.ionos.com Signed-off-by: Song Liu songliubraving@fb.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/md/md.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/md/md.c b/drivers/md/md.c index 4e7c9f398bc66..6b69a12ca2d80 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -6040,7 +6040,7 @@ EXPORT_SYMBOL_GPL(md_stop_writes); static void mddev_detach(struct mddev *mddev) { md_bitmap_wait_behind_writes(mddev); - if (mddev->pers && mddev->pers->quiesce) { + if (mddev->pers && mddev->pers->quiesce && !mddev->suspended) { mddev->pers->quiesce(mddev, 1); mddev->pers->quiesce(mddev, 0); }
From: Junyong Sun sunjy516@gmail.com
[ Upstream commit bcfbd3523f3c6eea51a74d217a8ebc5463bcb7f4 ]
fw_sysfs_wait_timeout() may return -ENOENT to fw_load_sysfs_fallback() when the firmware is already in the aborted state. There is no need to abort again, so skip it.
This issue is caused by a concurrent situation like the one below: while thread 1# waits for the firmware to load, thread 2# may write -1 to abort the loading and wake up thread 1# before it times out. wait_for_completion_killable_timeout() in thread 1# then returns a remaining time != 0 while fw_st->status is FW_STATUS_ABORTED. That result is converted into the error -ENOENT in __fw_state_wait_common() and propagated to fw_load_sysfs_fallback() in thread 1#. The -ENOENT means the firmware status is already ABORTED, so fw_load_sysfs_fallback() does not need to take the mutex and abort again.
-----------------------------
thread 1#, waiting for loading:
fw_load_sysfs_fallback
 ->fw_sysfs_wait_timeout
  ->__fw_state_wait_common
   ->wait_for_completion_killable_timeout
in __fw_state_wait_common,
...
93   ret = wait_for_completion_killable_timeout(&fw_st->completion, timeout);
94   if (ret != 0 && fw_st->status == FW_STATUS_ABORTED)
95           return -ENOENT;
96   if (!ret)
97           return -ETIMEDOUT;
98
99   return ret < 0 ? ret : 0;
-----------------------------
thread 2#, write -1 to abort loading:
firmware_loading_store
 ->fw_load_abort
  ->__fw_load_abort
   ->fw_state_aborted
    ->__fw_state_set
     ->complete_all
in __fw_state_set,
...
111  if (status == FW_STATUS_DONE || status == FW_STATUS_ABORTED)
112          complete_all(&fw_st->completion);
-------------------------------------------
BTW, the double abort does not cause a kernel panic or a functional issue, but it can slow things down sometimes. The change is just a minor optimization.
Signed-off-by: Junyong Sun sunjunyong@xiaomi.com Acked-by: Luis Chamberlain mcgrof@kernel.org Link: https://lore.kernel.org/r/1583202968-28792-1-git-send-email-sunjunyong@xiaom... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/base/firmware_loader/fallback.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/base/firmware_loader/fallback.c b/drivers/base/firmware_loader/fallback.c index 62ee90b4db56e..70efbb22dfc30 100644 --- a/drivers/base/firmware_loader/fallback.c +++ b/drivers/base/firmware_loader/fallback.c @@ -525,7 +525,7 @@ static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs, }
retval = fw_sysfs_wait_timeout(fw_priv, timeout); - if (retval < 0) { + if (retval < 0 && retval != -ENOENT) { mutex_lock(&fw_lock); fw_load_abort(fw_sysfs); mutex_unlock(&fw_lock);
From: Vladimir Oltean vladimir.oltean@nxp.com
[ Upstream commit 4f5ee75ea1718a09149460b3df993f389a67b56a ]
Currently the driver puts the process in interruptible sleep waiting for the interrupt train to finish transfer to/from the tx_buf and rx_buf.
But exiting the process with ctrl-c may make the kernel panic: the wait_event_interruptible call will return -ERESTARTSYS, which a proper driver implementation is perhaps supposed to handle, but nonetheless this one doesn't, and aborts the transfer altogether.
Actually when the task is interrupted, there is still a high chance that the dspi_interrupt is still triggering. And if dspi_transfer_one_message returns execution all the way to the spi_device driver, that can free the spi_message and spi_transfer structures, leaving the interrupts to access a freed tx_buf and rx_buf.
hexdump -C /dev/mtd0 00000000 00 75 68 75 0a ff ff ff ff ff ff ff ff ff ff ff |.uhu............| 00000010 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................| * ^C[ 38.495955] fsl-dspi 2120000.spi: Waiting for transfer to complete failed! [ 38.503097] spi_master spi2: failed to transfer one message from queue [ 38.509729] Unable to handle kernel paging request at virtual address ffff800095ab3377 [ 38.517676] Mem abort info: [ 38.520474] ESR = 0x96000045 [ 38.523533] EC = 0x25: DABT (current EL), IL = 32 bits [ 38.528861] SET = 0, FnV = 0 [ 38.531921] EA = 0, S1PTW = 0 [ 38.535067] Data abort info: [ 38.537952] ISV = 0, ISS = 0x00000045 [ 38.541797] CM = 0, WnR = 1 [ 38.544771] swapper pgtable: 4k pages, 48-bit VAs, pgdp=0000000082621000 [ 38.551494] [ffff800095ab3377] pgd=00000020fffff003, p4d=00000020fffff003, pud=0000000000000000 [ 38.560229] Internal error: Oops: 96000045 [#1] PREEMPT SMP [ 38.565819] Modules linked in: [ 38.568882] CPU: 0 PID: 2729 Comm: hexdump Not tainted 5.6.0-rc4-next-20200306-00052-gd8730cdc8a0b-dirty #193 [ 38.578834] Hardware name: Kontron SMARC-sAL28 (Single PHY) on SMARC Eval 2.0 carrier (DT) [ 38.587129] pstate: 20000085 (nzCv daIf -PAN -UAO) [ 38.591941] pc : ktime_get_real_ts64+0x3c/0x110 [ 38.596487] lr : spi_take_timestamp_pre+0x40/0x90 [ 38.601203] sp : ffff800010003d90 [ 38.604525] x29: ffff800010003d90 x28: ffff80001200e000 [ 38.609854] x27: ffff800011da9000 x26: ffff002079c40400 [ 38.615184] x25: ffff8000117fe018 x24: ffff800011daa1a0 [ 38.620513] x23: ffff800015ab3860 x22: ffff800095ab3377 [ 38.625841] x21: 000000000000146e x20: ffff8000120c3000 [ 38.631170] x19: ffff0020795f6e80 x18: ffff800011da9948 [ 38.636498] x17: 0000000000000000 x16: 0000000000000000 [ 38.641826] x15: ffff800095ab3377 x14: 0720072007200720 [ 38.647155] x13: 0720072007200765 x12: 0775076507750771 [ 38.652483] x11: 0720076d076f0772 x10: 0000000000000040 [ 38.657812] x9 : ffff8000108e2100 x8 : ffff800011dcabe8 [ 38.663139] x7 : 0000000000000000 x6 : ffff800015ab3a60 [ 38.668468] x5 : 0000000007200720 x4 : ffff800095ab3377 [ 38.673796] x3 : 0000000000000000 x2 : 0000000000000ab0 [ 38.679125] x1 : ffff800011daa000 x0 : 0000000000000026 [ 38.684454] Call trace: [ 38.686905] ktime_get_real_ts64+0x3c/0x110 [ 38.691100] spi_take_timestamp_pre+0x40/0x90 [ 38.695470] dspi_fifo_write+0x58/0x2c0 [ 38.699315] dspi_interrupt+0xbc/0xd0 [ 38.702987] __handle_irq_event_percpu+0x78/0x2c0 [ 38.707706] handle_irq_event_percpu+0x3c/0x90 [ 38.712161] handle_irq_event+0x4c/0xd0 [ 38.716008] handle_fasteoi_irq+0xbc/0x170 [ 38.720115] generic_handle_irq+0x2c/0x40 [ 38.724135] __handle_domain_irq+0x68/0xc0 [ 38.728243] gic_handle_irq+0xc8/0x160 [ 38.732000] el1_irq+0xb8/0x180 [ 38.735149] spi_nor_spimem_read_data+0xe0/0x140 [ 38.739779] spi_nor_read+0xc4/0x120 [ 38.743364] mtd_read_oob+0xa8/0xc0 [ 38.746860] mtd_read+0x4c/0x80 [ 38.750007] mtdchar_read+0x108/0x2a0 [ 38.753679] __vfs_read+0x20/0x50 [ 38.757002] vfs_read+0xa4/0x190 [ 38.760237] ksys_read+0x6c/0xf0 [ 38.763471] __arm64_sys_read+0x20/0x30 [ 38.767319] el0_svc_common.constprop.3+0x90/0x160 [ 38.772125] do_el0_svc+0x28/0x90 [ 38.775449] el0_sync_handler+0x118/0x190 [ 38.779468] el0_sync+0x140/0x180 [ 38.782793] Code: 91000294 1400000f d50339bf f9405e80 (f90002c0) [ 38.788910] ---[ end trace 55da560db4d6bef7 ]--- [ 38.793540] Kernel panic - not syncing: Fatal exception in interrupt [ 38.799914] SMP: stopping secondary CPUs [ 38.803849] Kernel Offset: disabled [ 38.807344] CPU features: 0x10002,20006008 [ 38.811451] Memory Limit: 
none [ 38.814513] ---[ end Kernel panic - not syncing: Fatal exception in interrupt ]---
So it is clear that the "interruptible" part isn't handled correctly. When the process receives a signal, one could either attempt a clean abort (which appears to be difficult with this hardware) or just keep restarting the sleep until the wait queue really completes. But checking in a loop for -ERESTARTSYS is a bit too complicated for this driver, so just make the sleep uninterruptible, to avoid all that nonsense.
The wait queue was actually restructured as a completion, after polling other drivers for the most "popular" approach.
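A hedged sketch of the completion-based handshake (hypothetical names, not the driver code): the interrupt handler signals once the final word has been transferred, and the transfer path sleeps uninterruptibly, then re-arms the completion for the next transfer.

#include <linux/completion.h>
#include <linux/interrupt.h>

static DECLARE_COMPLETION(demo_xfer_done);

/* Hypothetical IRQ handler: signal when the last word has been transferred. */
static irqreturn_t demo_irq(int irq, void *dev_id)
{
        complete(&demo_xfer_done);
        return IRQ_HANDLED;
}

/* Transfer path: sleep until the IRQ side signals; signals cannot wake it early. */
static void demo_wait_for_transfer(void)
{
        wait_for_completion(&demo_xfer_done);
        reinit_completion(&demo_xfer_done);
}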
Fixes: 349ad66c0ab0 ("spi:Add Freescale DSPI driver for Vybrid VF610 platform") Reported-by: Michael Walle michael@walle.cc Signed-off-by: Vladimir Oltean vladimir.oltean@nxp.com Tested-by: Michael Walle michael@walle.cc Link: https://lore.kernel.org/r/20200318001603.9650-7-olteanv@gmail.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/spi/spi-fsl-dspi.c | 19 ++++++------------- 1 file changed, 6 insertions(+), 13 deletions(-)
diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c index a534b8af27b8d..f9d44bb1040f8 100644 --- a/drivers/spi/spi-fsl-dspi.c +++ b/drivers/spi/spi-fsl-dspi.c @@ -196,8 +196,7 @@ struct fsl_dspi { u8 bytes_per_word; const struct fsl_dspi_devtype_data *devtype_data;
- wait_queue_head_t waitq; - u32 waitflags; + struct completion xfer_done;
struct fsl_dspi_dma *dma; }; @@ -714,10 +713,8 @@ static irqreturn_t dspi_interrupt(int irq, void *dev_id) if (!(spi_sr & SPI_SR_EOQF)) return IRQ_NONE;
- if (dspi_rxtx(dspi) == 0) { - dspi->waitflags = 1; - wake_up_interruptible(&dspi->waitq); - } + if (dspi_rxtx(dspi) == 0) + complete(&dspi->xfer_done);
return IRQ_HANDLED; } @@ -815,13 +812,9 @@ static int dspi_transfer_one_message(struct spi_controller *ctlr, status = dspi_poll(dspi); } while (status == -EINPROGRESS); } else if (trans_mode != DSPI_DMA_MODE) { - status = wait_event_interruptible(dspi->waitq, - dspi->waitflags); - dspi->waitflags = 0; + wait_for_completion(&dspi->xfer_done); + reinit_completion(&dspi->xfer_done); } - if (status) - dev_err(&dspi->pdev->dev, - "Waiting for transfer to complete failed!\n");
spi_transfer_delay_exec(transfer); } @@ -1161,7 +1154,7 @@ static int dspi_probe(struct platform_device *pdev) goto out_clk_put; }
- init_waitqueue_head(&dspi->waitq); + init_completion(&dspi->xfer_done);
poll_mode:
From: Boqun Feng boqun.feng@gmail.com
[ Upstream commit 25016bd7f4caf5fc983bbab7403d08e64cba3004 ]
Qian Cai reported a bug when PROVE_RCU_LIST=y, and read on /proc/lockdep triggered a warning:
[ ] DEBUG_LOCKS_WARN_ON(current->hardirqs_enabled) ... [ ] Call Trace: [ ] lock_is_held_type+0x5d/0x150 [ ] ? rcu_lockdep_current_cpu_online+0x64/0x80 [ ] rcu_read_lock_any_held+0xac/0x100 [ ] ? rcu_read_lock_held+0xc0/0xc0 [ ] ? __slab_free+0x421/0x540 [ ] ? kasan_kmalloc+0x9/0x10 [ ] ? __kmalloc_node+0x1d7/0x320 [ ] ? kvmalloc_node+0x6f/0x80 [ ] __bfs+0x28a/0x3c0 [ ] ? class_equal+0x30/0x30 [ ] lockdep_count_forward_deps+0x11a/0x1a0
The warning got triggered because lockdep_count_forward_deps() calls __bfs() without current->lockdep_recursion being set. As a result, a lockdep internal function (__bfs()) is itself checked by lockdep, which is unexpected, and the inconsistency between the irq-off state and the state traced by lockdep caused the warning.
Apart from this warning, lockdep internal functions like __bfs() should always be protected by current->lockdep_recursion to avoid potential deadlocks and data inconsistency; therefore, add a current->lockdep_recursion on/off section to protect __bfs() in both lockdep_count_forward_deps() and lockdep_count_backward_deps().
Reported-by: Qian Cai cai@lca.pw Signed-off-by: Boqun Feng boqun.feng@gmail.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Link: https://lkml.kernel.org/r/20200312151258.128036-1-boqun.feng@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/locking/lockdep.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c index 32406ef0d6a2d..5142a6b11bf5b 100644 --- a/kernel/locking/lockdep.c +++ b/kernel/locking/lockdep.c @@ -1719,9 +1719,11 @@ unsigned long lockdep_count_forward_deps(struct lock_class *class) this.class = class;
raw_local_irq_save(flags); + current->lockdep_recursion = 1; arch_spin_lock(&lockdep_lock); ret = __lockdep_count_forward_deps(&this); arch_spin_unlock(&lockdep_lock); + current->lockdep_recursion = 0; raw_local_irq_restore(flags);
return ret; @@ -1746,9 +1748,11 @@ unsigned long lockdep_count_backward_deps(struct lock_class *class) this.class = class;
raw_local_irq_save(flags); + current->lockdep_recursion = 1; arch_spin_lock(&lockdep_lock); ret = __lockdep_count_backward_deps(&this); arch_spin_unlock(&lockdep_lock); + current->lockdep_recursion = 0; raw_local_irq_restore(flags);
return ret;
From: Zhiqiang Liu liuzhiqiang26@huawei.com
[ Upstream commit 2f95fa5c955d0a9987ffdc3a095e2f4e62c5f2a9 ]
In the bfq_idle_slice_timer() function, bfqq = bfqd->in_service_queue is read outside the bfqd->lock critical section. The bfqq, which is not NULL in bfq_idle_slice_timer(), may be freed after being passed to bfq_idle_slice_timer_body(), so we would access freed memory.
In addition, since the bfqq may be racing with expiration, we should first check whether bfqq is still in service before doing anything with it in bfq_idle_slice_timer_body(). If the racing bfqq is not in service, it has already been expired through __bfq_bfqq_expire(), and its wait_request flag has been cleared in __bfq_bfqd_reset_in_service(), so we do not need to clear the wait_request flag again for a bfqq that is not in service.
KASAN log is given as follows: [13058.354613] ================================================================== [13058.354640] BUG: KASAN: use-after-free in bfq_idle_slice_timer+0xac/0x290 [13058.354644] Read of size 8 at addr ffffa02cf3e63f78 by task fork13/19767 [13058.354646] [13058.354655] CPU: 96 PID: 19767 Comm: fork13 [13058.354661] Call trace: [13058.354667] dump_backtrace+0x0/0x310 [13058.354672] show_stack+0x28/0x38 [13058.354681] dump_stack+0xd8/0x108 [13058.354687] print_address_description+0x68/0x2d0 [13058.354690] kasan_report+0x124/0x2e0 [13058.354697] __asan_load8+0x88/0xb0 [13058.354702] bfq_idle_slice_timer+0xac/0x290 [13058.354707] __hrtimer_run_queues+0x298/0x8b8 [13058.354710] hrtimer_interrupt+0x1b8/0x678 [13058.354716] arch_timer_handler_phys+0x4c/0x78 [13058.354722] handle_percpu_devid_irq+0xf0/0x558 [13058.354731] generic_handle_irq+0x50/0x70 [13058.354735] __handle_domain_irq+0x94/0x110 [13058.354739] gic_handle_irq+0x8c/0x1b0 [13058.354742] el1_irq+0xb8/0x140 [13058.354748] do_wp_page+0x260/0xe28 [13058.354752] __handle_mm_fault+0x8ec/0x9b0 [13058.354756] handle_mm_fault+0x280/0x460 [13058.354762] do_page_fault+0x3ec/0x890 [13058.354765] do_mem_abort+0xc0/0x1b0 [13058.354768] el0_da+0x24/0x28 [13058.354770] [13058.354773] Allocated by task 19731: [13058.354780] kasan_kmalloc+0xe0/0x190 [13058.354784] kasan_slab_alloc+0x14/0x20 [13058.354788] kmem_cache_alloc_node+0x130/0x440 [13058.354793] bfq_get_queue+0x138/0x858 [13058.354797] bfq_get_bfqq_handle_split+0xd4/0x328 [13058.354801] bfq_init_rq+0x1f4/0x1180 [13058.354806] bfq_insert_requests+0x264/0x1c98 [13058.354811] blk_mq_sched_insert_requests+0x1c4/0x488 [13058.354818] blk_mq_flush_plug_list+0x2d4/0x6e0 [13058.354826] blk_flush_plug_list+0x230/0x548 [13058.354830] blk_finish_plug+0x60/0x80 [13058.354838] read_pages+0xec/0x2c0 [13058.354842] __do_page_cache_readahead+0x374/0x438 [13058.354846] ondemand_readahead+0x24c/0x6b0 [13058.354851] page_cache_sync_readahead+0x17c/0x2f8 [13058.354858] generic_file_buffered_read+0x588/0xc58 [13058.354862] generic_file_read_iter+0x1b4/0x278 [13058.354965] ext4_file_read_iter+0xa8/0x1d8 [ext4] [13058.354972] __vfs_read+0x238/0x320 [13058.354976] vfs_read+0xbc/0x1c0 [13058.354980] ksys_read+0xdc/0x1b8 [13058.354984] __arm64_sys_read+0x50/0x60 [13058.354990] el0_svc_common+0xb4/0x1d8 [13058.354994] el0_svc_handler+0x50/0xa8 [13058.354998] el0_svc+0x8/0xc [13058.354999] [13058.355001] Freed by task 19731: [13058.355007] __kasan_slab_free+0x120/0x228 [13058.355010] kasan_slab_free+0x10/0x18 [13058.355014] kmem_cache_free+0x288/0x3f0 [13058.355018] bfq_put_queue+0x134/0x208 [13058.355022] bfq_exit_icq_bfqq+0x164/0x348 [13058.355026] bfq_exit_icq+0x28/0x40 [13058.355030] ioc_exit_icq+0xa0/0x150 [13058.355035] put_io_context_active+0x250/0x438 [13058.355038] exit_io_context+0xd0/0x138 [13058.355045] do_exit+0x734/0xc58 [13058.355050] do_group_exit+0x78/0x220 [13058.355054] __wake_up_parent+0x0/0x50 [13058.355058] el0_svc_common+0xb4/0x1d8 [13058.355062] el0_svc_handler+0x50/0xa8 [13058.355066] el0_svc+0x8/0xc [13058.355067] [13058.355071] The buggy address belongs to the object at ffffa02cf3e63e70#012 which belongs to the cache bfq_queue of size 464 [13058.355075] The buggy address is located 264 bytes inside of#012 464-byte region [ffffa02cf3e63e70, ffffa02cf3e64040) [13058.355077] The buggy address belongs to the page: [13058.355083] page:ffff7e80b3cf9800 count:1 mapcount:0 mapping:ffff802db5c90780 index:0xffffa02cf3e606f0 compound_mapcount: 0 [13058.366175] flags: 
0x2ffffe0000008100(slab|head) [13058.370781] raw: 2ffffe0000008100 ffff7e80b53b1408 ffffa02d730c1c90 ffff802db5c90780 [13058.370787] raw: ffffa02cf3e606f0 0000000000370023 00000001ffffffff 0000000000000000 [13058.370789] page dumped because: kasan: bad access detected [13058.370791] [13058.370792] Memory state around the buggy address: [13058.370797] ffffa02cf3e63e00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fb fb [13058.370801] ffffa02cf3e63e80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb [13058.370805] >ffffa02cf3e63f00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb [13058.370808] ^ [13058.370811] ffffa02cf3e63f80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb [13058.370815] ffffa02cf3e64000: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc [13058.370817] ================================================================== [13058.370820] Disabling lock debugging due to kernel taint
Here, we directly pass the bfqd to bfq_idle_slice_timer_body().
--
V2->V3: rewrite the comment as suggested by Paolo Valente
V1->V2: add one comment, and add Fixes and Reported-by tag.
Fixes: aee69d78d ("block, bfq: introduce the BFQ-v0 I/O scheduler as an extra scheduler") Acked-by: Paolo Valente paolo.valente@linaro.org Reported-by: Wang Wang wangwang2@huawei.com Signed-off-by: Zhiqiang Liu liuzhiqiang26@huawei.com Signed-off-by: Feilong Lin linfeilong@huawei.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- block/bfq-iosched.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index 8fe4b69195119..43fbe5d096e3c 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -6214,20 +6214,28 @@ static struct bfq_queue *bfq_init_rq(struct request *rq) return bfqq; }
-static void bfq_idle_slice_timer_body(struct bfq_queue *bfqq) +static void +bfq_idle_slice_timer_body(struct bfq_data *bfqd, struct bfq_queue *bfqq) { - struct bfq_data *bfqd = bfqq->bfqd; enum bfqq_expiration reason; unsigned long flags;
spin_lock_irqsave(&bfqd->lock, flags); - bfq_clear_bfqq_wait_request(bfqq);
+ /* + * Considering that bfqq may be in race, we should firstly check + * whether bfqq is in service before doing something on it. If + * the bfqq in race is not in service, it has already been expired + * through __bfq_bfqq_expire func and its wait_request flags has + * been cleared in __bfq_bfqd_reset_in_service func. + */ if (bfqq != bfqd->in_service_queue) { spin_unlock_irqrestore(&bfqd->lock, flags); return; }
+ bfq_clear_bfqq_wait_request(bfqq); + if (bfq_bfqq_budget_timeout(bfqq)) /* * Also here the queue can be safely expired @@ -6272,7 +6280,7 @@ static enum hrtimer_restart bfq_idle_slice_timer(struct hrtimer *timer) * early. */ if (bfqq) - bfq_idle_slice_timer_body(bfqq); + bfq_idle_slice_timer_body(bfqd, bfqq);
return HRTIMER_NORESTART; }
From: Qu Wenruo wqu@suse.com
[ Upstream commit d61acbbf54c612ea9bf67eed609494cda0857b3a ]
[BUG] There are some reports about btrfs wait forever to unmount itself, with the following call trace:
INFO: task umount:4631 blocked for more than 491 seconds. Tainted: G X 5.3.8-2-default #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. umount D 0 4631 3337 0x00000000 Call Trace: ([<00000000174adf7a>] __schedule+0x342/0x748) [<00000000174ae3ca>] schedule+0x4a/0xd8 [<00000000174b1f08>] schedule_timeout+0x218/0x420 [<00000000174af10c>] wait_for_common+0x104/0x1d8 [<000003ff804d6994>] btrfs_qgroup_wait_for_completion+0x84/0xb0 [btrfs] [<000003ff8044a616>] close_ctree+0x4e/0x380 [btrfs] [<0000000016fa3136>] generic_shutdown_super+0x8e/0x158 [<0000000016fa34d6>] kill_anon_super+0x26/0x40 [<000003ff8041ba88>] btrfs_kill_super+0x28/0xc8 [btrfs] [<0000000016fa39f8>] deactivate_locked_super+0x68/0x98 [<0000000016fcb198>] cleanup_mnt+0xc0/0x140 [<0000000016d6a846>] task_work_run+0xc6/0x110 [<0000000016d04f76>] do_notify_resume+0xae/0xb8 [<00000000174b30ae>] system_call+0xe2/0x2c8
[CAUSE] The problem happens when we have called qgroup_rescan_init() but have not queued the worker. This is mostly caused by error handling paths.
Qgroup ioctl thread                     |        Unmount thread
----------------------------------------+-----------------------------------
                                        |
btrfs_qgroup_rescan()                   |
|- qgroup_rescan_init()                 |
|  |- qgroup_rescan_running = true;     |
|                                       |
|- trans = btrfs_join_transaction()     |
|  Some error happened                  |
|                                       |
|- btrfs_qgroup_rescan() returns error  |
   But qgroup_rescan_running == true;   |
                                        | close_ctree()
                                        | |- btrfs_qgroup_wait_for_completion()
                                        |    |- running == true;
                                        |    |- wait_for_completion();
btrfs_qgroup_rescan_worker is never queued, thus no one is going to wake up close_ctree() and we get a deadlock.
All involved qgroup_rescan_init() callers are:
- btrfs_qgroup_rescan() The example above. It's possible to trigger the deadlock when an error happens.
- btrfs_quota_enable() Not possible. Just after qgroup_rescan_init() we queue the work.
- btrfs_read_qgroup_config() It's possible to trigger the deadlock. It only initializes the work; the work queueing happens in btrfs_qgroup_rescan_resume(). Thus if an error happens in between, a deadlock is possible.
We shouldn't set fs_info->qgroup_rescan_running just in qgroup_rescan_init(), as at that stage we haven't yet queued the qgroup rescan worker to run.
[FIX] Set qgroup_rescan_running before queueing the work, so that we ensure the rescan work is queued when we wait for it.
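As an illustration of the fixed ordering (not the kernel code itself), here is a minimal userspace sketch using pthreads and made-up names. The point is that the "running" flag is set, under the lock, only at the moment the worker is actually queued, so a waiter never blocks on a completion that nobody will signal:

/* Hypothetical userspace model of the fix: the flag that the waiter
 * checks is set only when the worker is really queued/started.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  completion = PTHREAD_COND_INITIALIZER;
static bool rescan_running;             /* analogue of qgroup_rescan_running */

static void *rescan_worker(void *arg)
{
        (void)arg;
        /* ... do the rescan ... then signal completion */
        pthread_mutex_lock(&lock);
        rescan_running = false;
        pthread_cond_broadcast(&completion);
        pthread_mutex_unlock(&lock);
        return NULL;
}

static int start_rescan(bool init_fails)
{
        pthread_t t;

        /* init-only step: do NOT set rescan_running here */
        if (init_fails)
                return -1;              /* error path: nothing was queued */

        /* set the flag under the lock right before queueing the worker */
        pthread_mutex_lock(&lock);
        rescan_running = true;
        pthread_mutex_unlock(&lock);
        return pthread_create(&t, NULL, rescan_worker, NULL);
}

static void wait_for_rescan(void)       /* analogue of wait_for_completion() */
{
        pthread_mutex_lock(&lock);
        while (rescan_running)
                pthread_cond_wait(&completion, &lock);
        pthread_mutex_unlock(&lock);
}

int main(void)
{
        start_rescan(true);     /* error path: flag stays false ...           */
        wait_for_rescan();      /* ... so this returns instead of deadlocking */
        puts("no deadlock on the error path");
        return 0;
}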
Fixes: 8d9eddad1946 ("Btrfs: fix qgroup rescan worker initialization") Signed-off-by: Jeff Mahoney jeffm@suse.com [ Change subject and cause analyse, use a smaller fix ] Signed-off-by: Qu Wenruo wqu@suse.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/btrfs/qgroup.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c index 410b791f28a56..06d4c219742f2 100644 --- a/fs/btrfs/qgroup.c +++ b/fs/btrfs/qgroup.c @@ -1030,6 +1030,7 @@ out_add_root: ret = qgroup_rescan_init(fs_info, 0, 1); if (!ret) { qgroup_rescan_zero_tracking(fs_info); + fs_info->qgroup_rescan_running = true; btrfs_queue_work(fs_info->qgroup_rescan_workers, &fs_info->qgroup_rescan_work); } @@ -3276,7 +3277,6 @@ qgroup_rescan_init(struct btrfs_fs_info *fs_info, u64 progress_objectid, sizeof(fs_info->qgroup_rescan_progress)); fs_info->qgroup_rescan_progress.objectid = progress_objectid; init_completion(&fs_info->qgroup_rescan_completion); - fs_info->qgroup_rescan_running = true;
spin_unlock(&fs_info->qgroup_lock); mutex_unlock(&fs_info->qgroup_rescan_lock); @@ -3339,8 +3339,11 @@ btrfs_qgroup_rescan(struct btrfs_fs_info *fs_info)
qgroup_rescan_zero_tracking(fs_info);
+ mutex_lock(&fs_info->qgroup_rescan_lock); + fs_info->qgroup_rescan_running = true; btrfs_queue_work(fs_info->qgroup_rescan_workers, &fs_info->qgroup_rescan_work); + mutex_unlock(&fs_info->qgroup_rescan_lock);
return 0; } @@ -3376,9 +3379,13 @@ int btrfs_qgroup_wait_for_completion(struct btrfs_fs_info *fs_info, void btrfs_qgroup_rescan_resume(struct btrfs_fs_info *fs_info) { - if (fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_RESCAN) + if (fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_RESCAN) { + mutex_lock(&fs_info->qgroup_rescan_lock); + fs_info->qgroup_rescan_running = true; btrfs_queue_work(fs_info->qgroup_rescan_workers, &fs_info->qgroup_rescan_work); + mutex_unlock(&fs_info->qgroup_rescan_lock); + } }
/*
From: Josef Bacik josef@toxicpanda.com
[ Upstream commit 7b7b74315b24dc064bc1c683659061c3d48f8668 ]
This was pretty subtle: we default to reloc roots having 0 root refs, so if we crash in the middle of the relocation they can just be deleted. If we successfully complete the relocation operations we'll set our root refs to 1 in prepare_to_merge() and then go on to merge_reloc_roots().
At prepare_to_merge() time if any of the reloc roots have a 0 reference still, we will remove that reloc root from our reloc root rb tree, and then clean it up later.
However this only happens if we successfully start a transaction. If we've aborted previously we will skip this step completely and only have reloc roots with a reference count of 0 that were never properly removed from the reloc control's rb tree.
This isn't a problem per se; our references are held by the list the reloc roots are on and by the original root the reloc root belongs to. If we end up in this situation all the reloc roots will be added to the dirty_reloc_list and then properly dropped at that point. The reloc control will be freed and the rb tree is no longer used.
There were two options when fixing this, one was to remove the BUG_ON(), the other was to make prepare_to_merge() handle the case where we couldn't start a trans handle.
IMO this is the cleaner solution. I started with handling the error in prepare_to_merge(), but it turned out super ugly. And in the end this BUG_ON() simply doesn't matter: the cleanup was happening properly, we were just panicking because this BUG_ON() only matters in the success case. So I've opted to just remove it and add a comment where it was.
Reviewed-by: Qu Wenruo wqu@suse.com Signed-off-by: Josef Bacik josef@toxicpanda.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/btrfs/relocation.c | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c index da5abd62db223..f7060b2172c44 100644 --- a/fs/btrfs/relocation.c +++ b/fs/btrfs/relocation.c @@ -2561,7 +2561,21 @@ out: free_reloc_roots(&reloc_roots); }
- BUG_ON(!RB_EMPTY_ROOT(&rc->reloc_root_tree.rb_root)); + /* + * We used to have + * + * BUG_ON(!RB_EMPTY_ROOT(&rc->reloc_root_tree.rb_root)); + * + * here, but it's wrong. If we fail to start the transaction in + * prepare_to_merge() we will have only 0 ref reloc roots, none of which + * have actually been removed from the reloc_root_tree rb tree. This is + * fine because we're bailing here, and we hold a reference on the root + * for the list that holds it, so these roots will be cleaned up when we + * do the reloc_dirty_list afterwards. Meanwhile the root->reloc_root + * will be cleaned up on unmount. + * + * The remaining nodes will be cleaned up by free_reloc_control. + */ }
static void free_block_list(struct rb_root *blocks)
From: Josef Bacik josef@toxicpanda.com
[ Upstream commit 50dbbb71c79df89532ec41d118d59386e5a877e3 ]
There are two bugs here, but fixing them independently would just result in pain if you happened to bisect between the two patches.
First is how we handle the -EAGAIN from relocate_tree_block(). We don't set the error unless we happen to be the first node, which makes no sense; I have no idea what the code was trying to accomplish here.
We in fact _do_ want err set here so that we know we need to restart in relocate_block_group(). Also we need finish_pending_nodes() to not actually call link_to_upper(), because we didn't actually relocate the block.
And then if we do get -EAGAIN we do not want to set our backref cache last_trans to the one before ours. This would force us to update our backref cache if we didn't cross transaction ids, which would mean we'd have some nodes updated to their new_bytenr, but still able to find their old bytenr because we're searching the same commit root as the last time we went through relocate_tree_blocks.
Fixing these two things keeps us from panicking when we start breaking out of relocate_tree_blocks() either for delayed ref flushing or enospc.
Signed-off-by: Josef Bacik josef@toxicpanda.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/btrfs/relocation.c | 11 ++--------- 1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c index f7060b2172c44..963e70141e398 100644 --- a/fs/btrfs/relocation.c +++ b/fs/btrfs/relocation.c @@ -3175,9 +3175,8 @@ int relocate_tree_blocks(struct btrfs_trans_handle *trans, ret = relocate_tree_block(trans, rc, node, &block->key, path); if (ret < 0) { - if (ret != -EAGAIN || &block->rb_node == rb_first(blocks)) - err = ret; - goto out; + err = ret; + break; } } out: @@ -4151,12 +4150,6 @@ restart: if (!RB_EMPTY_ROOT(&blocks)) { ret = relocate_tree_blocks(trans, rc, &blocks); if (ret < 0) { - /* - * if we fail to relocate tree blocks, force to update - * backref cache when committing transaction. - */ - rc->backref_cache.last_trans = trans->transid - 1; - if (ret != -EAGAIN) { err = ret; break;
From: Josef Bacik josef@toxicpanda.com
[ Upstream commit ea287ab157c2816bf12aad4cece41372f9d146b4 ]
We always search the commit root of the extent tree for looking up back references, however we track the reloc roots based on their current bytenr.
This is wrong, if we commit the transaction between relocating tree blocks we could end up in this code in build_backref_tree
	if (key.objectid == key.offset) {
		/*
		 * Only root blocks of reloc trees use backref
		 * pointing to itself.
		 */
		root = find_reloc_root(rc, cur->bytenr);
		ASSERT(root);
		cur->root = root;
		break;
	}
find_reloc_root() is looking based on the bytenr we had in the commit root, but if we've COWed this reloc root we will not find that bytenr, and we will trip over the ASSERT(root).
Fix this by using the commit_root->start bytenr for indexing the commit root. Then we change the __update_reloc_root() caller to be used when we switch the commit root for the reloc root during commit.
This fixes the panic I was seeing when we started throttling relocation for delayed refs.
Signed-off-by: Josef Bacik josef@toxicpanda.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/btrfs/relocation.c | 17 +++++++---------- 1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c index 963e70141e398..ddaefecaa7c42 100644 --- a/fs/btrfs/relocation.c +++ b/fs/btrfs/relocation.c @@ -1298,7 +1298,7 @@ static int __must_check __add_reloc_root(struct btrfs_root *root) if (!node) return -ENOMEM;
- node->bytenr = root->node->start; + node->bytenr = root->commit_root->start; node->data = root;
spin_lock(&rc->reloc_root_tree.lock); @@ -1329,10 +1329,11 @@ static void __del_reloc_root(struct btrfs_root *root) if (rc && root->node) { spin_lock(&rc->reloc_root_tree.lock); rb_node = tree_search(&rc->reloc_root_tree.rb_root, - root->node->start); + root->commit_root->start); if (rb_node) { node = rb_entry(rb_node, struct mapping_node, rb_node); rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root); + RB_CLEAR_NODE(&node->rb_node); } spin_unlock(&rc->reloc_root_tree.lock); if (!node) @@ -1350,7 +1351,7 @@ static void __del_reloc_root(struct btrfs_root *root) * helper to update the 'address of tree root -> reloc tree' * mapping */ -static int __update_reloc_root(struct btrfs_root *root, u64 new_bytenr) +static int __update_reloc_root(struct btrfs_root *root) { struct btrfs_fs_info *fs_info = root->fs_info; struct rb_node *rb_node; @@ -1359,7 +1360,7 @@ static int __update_reloc_root(struct btrfs_root *root, u64 new_bytenr)
spin_lock(&rc->reloc_root_tree.lock); rb_node = tree_search(&rc->reloc_root_tree.rb_root, - root->node->start); + root->commit_root->start); if (rb_node) { node = rb_entry(rb_node, struct mapping_node, rb_node); rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root); @@ -1371,7 +1372,7 @@ static int __update_reloc_root(struct btrfs_root *root, u64 new_bytenr) BUG_ON((struct btrfs_root *)node->data != root);
spin_lock(&rc->reloc_root_tree.lock); - node->bytenr = new_bytenr; + node->bytenr = root->node->start; rb_node = tree_insert(&rc->reloc_root_tree.rb_root, node->bytenr, &node->rb_node); spin_unlock(&rc->reloc_root_tree.lock); @@ -1529,6 +1530,7 @@ int btrfs_update_reloc_root(struct btrfs_trans_handle *trans, }
if (reloc_root->commit_root != reloc_root->node) { + __update_reloc_root(reloc_root); btrfs_set_root_node(root_item, reloc_root->node); free_extent_buffer(reloc_root->commit_root); reloc_root->commit_root = btrfs_root_node(reloc_root); @@ -4715,11 +4717,6 @@ int btrfs_reloc_cow_block(struct btrfs_trans_handle *trans, BUG_ON(rc->stage == UPDATE_DATA_PTRS && root->root_key.objectid == BTRFS_DATA_RELOC_TREE_OBJECTID);
- if (root->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID) { - if (buf == root->node) - __update_reloc_root(root, cow->start); - } - level = btrfs_header_level(buf); if (btrfs_header_generation(buf) <= btrfs_root_last_snapshot(&root->root_item))
From: 이경택 gt82.lee@samsung.com
commit 0ab070917afdc93670c2d0ea02ab6defb6246a7c upstream.
If regwshift is 32 and the selected architecture compiles the '<<' operator on a signed int literal into a rotating shift, '1<<regwshift' becomes 1 and regwmask ends up as 0x0. The literal is made unsigned long to get the intended regwmask.
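A standalone sketch (hypothetical, not the driver code) shows why the unsuffixed literal collapses the mask; shifting a 32-bit int by its own width is undefined behaviour, and on x86 the hardware masks the shift count, so the expression typically evaluates to 1 and the mask becomes 0:

/* Hypothetical userspace demonstration of the regwmask computation.
 * 1 << 32 on a 32-bit int is undefined behaviour; on x86 the shift count
 * is masked, so the expression usually evaluates to 1 and the mask
 * becomes 0.  With an unsigned long literal (64-bit on LP64 targets) the
 * shift is well defined and the mask is 0xffffffff as intended.
 */
#include <stdio.h>

int main(void)
{
        unsigned int regwshift = 32;    /* component->val_bytes * BITS_PER_BYTE */
        unsigned int bad_mask  = (1   << regwshift) - 1;       /* usually 0    */
        unsigned int good_mask = (1UL << regwshift) - 1;        /* 0xffffffff   */

        printf("bad_mask=0x%x good_mask=0x%x\n", bad_mask, good_mask);
        return 0;
}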
Signed-off-by: Gyeongtaek Lee gt82.lee@samsung.com Link: https://lore.kernel.org/r/001001d60665%24db7af3e0%249270dba0%24@samsung.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/soc/soc-ops.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/sound/soc/soc-ops.c +++ b/sound/soc/soc-ops.c @@ -825,7 +825,7 @@ int snd_soc_get_xr_sx(struct snd_kcontro unsigned int regbase = mc->regbase; unsigned int regcount = mc->regcount; unsigned int regwshift = component->val_bytes * BITS_PER_BYTE; - unsigned int regwmask = (1<<regwshift)-1; + unsigned int regwmask = (1UL<<regwshift)-1; unsigned int invert = mc->invert; unsigned long mask = (1UL<<mc->nbits)-1; long min = mc->min; @@ -874,7 +874,7 @@ int snd_soc_put_xr_sx(struct snd_kcontro unsigned int regbase = mc->regbase; unsigned int regcount = mc->regcount; unsigned int regwshift = component->val_bytes * BITS_PER_BYTE; - unsigned int regwmask = (1<<regwshift)-1; + unsigned int regwmask = (1UL<<regwshift)-1; unsigned int invert = mc->invert; unsigned long mask = (1UL<<mc->nbits)-1; long max = mc->max;
From: 이경택 gt82.lee@samsung.com
commit 3bbbb7728fc853d71dbce4073fef9f281fbfb4dd upstream.
Since a virtual mixer has no backing registers to decide which path to connect, it will try to match with the initial state. This is to ensure that the default mixer choice will be correctly powered up during initialization. The invert flag is used to select the initial state of the virtual switch. Since the actual hardware can't be disconnected by a virtual switch, connected is the better choice as the initial state in many cases.
Signed-off-by: Gyeongtaek Lee gt82.lee@samsung.com Link: https://lore.kernel.org/r/01a301d60731%24b724ea10%24256ebe30%24@samsung.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/soc/soc-dapm.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-)
--- a/sound/soc/soc-dapm.c +++ b/sound/soc/soc-dapm.c @@ -802,7 +802,13 @@ static void dapm_set_mixer_path_status(s val = max - val; p->connect = !!val; } else { - p->connect = 0; + /* since a virtual mixer has no backing registers to + * decide which path to connect, it will try to match + * with initial state. This is to ensure + * that the default mixer choice will be + * correctly powered up during initialization. + */ + p->connect = invert; } }
From: 이경택 gt82.lee@samsung.com
commit 21fca8bdbb64df1297e8c65a746c4c9f4a689751 upstream.
soc_compr_trigger_fe() allows start or stop after pause_push. In dpcm_be_dai_trigger(), however, only pause_release is an allowed command after pause_push. So, start or stop after pause in compress offload always returns an error if compress offload is used with dpcm. To fix the problem, SND_SOC_DPCM_STATE_PAUSED should be allowed for the start and stop commands.
Signed-off-by: Gyeongtaek Lee gt82.lee@samsung.com Reviewed-by: Vinod Koul vkoul@kernel.org Link: https://lore.kernel.org/r/004d01d607c1%247a3d5250%246eb7f6f0%24@samsung.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/soc/soc-pcm.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/sound/soc/soc-pcm.c +++ b/sound/soc/soc-pcm.c @@ -2256,7 +2256,8 @@ int dpcm_be_dai_trigger(struct snd_soc_p switch (cmd) { case SNDRV_PCM_TRIGGER_START: if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_PREPARE) && - (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP)) + (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP) && + (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED)) continue;
ret = dpcm_do_trigger(dpcm, be_substream, cmd); @@ -2286,7 +2287,8 @@ int dpcm_be_dai_trigger(struct snd_soc_p be->dpcm[stream].state = SND_SOC_DPCM_STATE_START; break; case SNDRV_PCM_TRIGGER_STOP: - if (be->dpcm[stream].state != SND_SOC_DPCM_STATE_START) + if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_START) && + (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED)) continue;
if (!snd_soc_dpcm_can_be_free_stop(fe, be, stream))
From: 이경택 gt82.lee@samsung.com
commit abca9e4a04fbe9c6df4d48ca7517e1611812af25 upstream.
Currently, topology doesn't add the component's name prefix to newly created kcontrols.
Signed-off-by: Gyeongtaek Lee gt82.lee@samsung.com Link: https://lore.kernel.org/r/009b01d60804%24ae25c2d0%240a714870%24@samsung.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/soc/soc-topology.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/sound/soc/soc-topology.c +++ b/sound/soc/soc-topology.c @@ -362,7 +362,7 @@ static int soc_tplg_add_kcontrol(struct struct snd_soc_component *comp = tplg->comp;
return soc_tplg_add_dcontrol(comp->card->snd_card, - comp->dev, k, NULL, comp, kcontrol); + comp->dev, k, comp->name_prefix, comp, kcontrol); }
/* remove a mixer kcontrol */
From: Sriharsha Allenki sallenki@codeaurora.org
commit f63ec55ff904b2f2e126884fcad93175f16ab4bb upstream.
In the AIO case, the request is freed if ep_queue fails. However, io_data->req still holds a reference to this freed request. If aio_cancel is called on this io_data after such a failure, it leads to an invalid dequeue operation and a potential use-after-free. Fix this by setting io_data->req to NULL when the request is freed as part of the queue failure.
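The pattern is generic: when the submit path frees the object, it must also clear the cached pointer so a later cancel path sees NULL instead of a dangling reference. Below is a minimal userspace sketch of that pattern with made-up names; it is not the gadget code itself:

/* Hypothetical sketch: clear the stashed pointer when freeing on the
 * error path so the cancel path can safely check for NULL.
 */
#include <stdio.h>
#include <stdlib.h>

struct request { int id; };
struct io_data { struct request *req; };

static int queue_request(struct io_data *io, int fail)
{
        io->req = malloc(sizeof(*io->req));
        if (!io->req)
                return -1;

        if (fail) {                     /* analogue of usb_ep_queue() failing */
                free(io->req);
                io->req = NULL;         /* the fix: drop the stale pointer    */
                return -1;
        }
        return 0;
}

static void cancel_request(struct io_data *io)
{
        if (!io->req) {                 /* nothing in flight, nothing to dequeue */
                puts("cancel: no request pending");
                return;
        }
        /* ... dequeue io->req here ... */
}

int main(void)
{
        struct io_data io = { NULL };

        queue_request(&io, 1);          /* queueing fails, request is freed */
        cancel_request(&io);            /* safe: io.req was cleared         */
        return 0;
}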
Fixes: 2e4c7553cd6f ("usb: gadget: f_fs: add aio support") Signed-off-by: Sriharsha Allenki sallenki@codeaurora.org CC: stable stable@vger.kernel.org Reviewed-by: Peter Chen peter.chen@nxp.com Link: https://lore.kernel.org/r/20200326115620.12571-1-sallenki@codeaurora.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/usb/gadget/function/f_fs.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/usb/gadget/function/f_fs.c +++ b/drivers/usb/gadget/function/f_fs.c @@ -1120,6 +1120,7 @@ static ssize_t ffs_epfile_io(struct file
ret = usb_ep_queue(ep->ep, req, GFP_ATOMIC); if (unlikely(ret)) { + io_data->req = NULL; usb_ep_free_request(ep->ep, req); goto error_lock; }
From: Thinh Nguyen Thinh.Nguyen@synopsys.com
commit 5e5caf4fa8d3039140b4548b6ab23dd17fce9b2c upstream.
Different configurations/conditions may draw different power. Inform the controller driver of the change so it can respond properly (e.g. to a GET_STATUS request). This fixes an issue with setting MaxPower from configfs: the composite driver doesn't check this value when setting self-powered.
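The decision itself is simple; the sketch below is a hypothetical userspace illustration (SELF_POWER_VBUS_MAX_DRAW and the helper name are made up, mirroring the 100 mA limit used by the gadget headers), not the composite driver code: a configuration drawing more than the self-powered VBUS limit must report itself as bus-powered, otherwise it can stay self-powered.

/* Hypothetical illustration of the self-powered decision based on the
 * configured MaxPower draw.
 */
#include <stdbool.h>
#include <stdio.h>

#define SELF_POWER_VBUS_MAX_DRAW 100    /* mA; assumption mirroring the kernel limit */

static bool config_is_self_powered(unsigned int power_mA)
{
        return power_mA <= SELF_POWER_VBUS_MAX_DRAW;
}

int main(void)
{
        printf("MaxPower=2mA   -> self-powered=%d\n", config_is_self_powered(2));
        printf("MaxPower=500mA -> self-powered=%d\n", config_is_self_powered(500));
        return 0;
}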
Cc: stable@vger.kernel.org Fixes: 88af8bbe4ef7 ("usb: gadget: the start of the configfs interface") Signed-off-by: Thinh Nguyen thinhn@synopsys.com Signed-off-by: Felipe Balbi balbi@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/usb/gadget/composite.c | 9 +++++++++ 1 file changed, 9 insertions(+)
--- a/drivers/usb/gadget/composite.c +++ b/drivers/usb/gadget/composite.c @@ -861,6 +861,11 @@ static int set_config(struct usb_composi else power = min(power, 900U); done: + if (power <= USB_SELF_POWER_VBUS_MAX_DRAW) + usb_gadget_set_selfpowered(gadget); + else + usb_gadget_clear_selfpowered(gadget); + usb_gadget_vbus_draw(gadget, power); if (result >= 0 && cdev->delayed_status) result = USB_GADGET_DELAYED_STATUS; @@ -2279,6 +2284,7 @@ void composite_suspend(struct usb_gadget
cdev->suspended = 1;
+ usb_gadget_set_selfpowered(gadget); usb_gadget_vbus_draw(gadget, 2); }
@@ -2307,6 +2313,9 @@ void composite_resume(struct usb_gadget else maxpower = min(maxpower, 900U);
+ if (maxpower > USB_SELF_POWER_VBUS_MAX_DRAW) + usb_gadget_clear_selfpowered(gadget); + usb_gadget_vbus_draw(gadget, maxpower); }
From: Takashi Iwai tiwai@suse.de
commit 2a48218f8e23d47bd3e23cfdfb8aa9066f7dc3e6 upstream.
Some recent boards (supposedly with a new AMD platform) contain the USB audio class 2 device that is often tied with HD-audio. The device exposes an Input Gain Pad control (id=19, control=12) but this node doesn't behave correctly, returning an error for each inquiry of GET_MIN and GET_MAX that should have been mandatory.
As a workaround, simply ignore this node by adding a usbmix_name_map table entry. The currently known devices are:
* 0414:a002 - Gigabyte TRX40 Aorus Pro WiFi
* 0b05:1916 - ASUS ROG Zenith II
* 0b05:1917 - ASUS ROG Strix
* 0db0:0d64 - MSI TRX40 Creator
* 0db0:543d - MSI TRX40
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=206543 Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200408140449.22319-1-tiwai@suse.de Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/usb/mixer_maps.c | 28 ++++++++++++++++++++++++++++ 1 file changed, 28 insertions(+)
--- a/sound/usb/mixer_maps.c +++ b/sound/usb/mixer_maps.c @@ -349,6 +349,14 @@ static const struct usbmix_name_map dell { 0 } };
+/* Some mobos shipped with a dummy HD-audio show the invalid GET_MIN/GET_MAX + * response for Input Gain Pad (id=19, control=12). Skip it. + */ +static const struct usbmix_name_map asus_rog_map[] = { + { 19, NULL, 12 }, /* FU, Input Gain Pad */ + {} +}; + /* * Control map entries */ @@ -468,6 +476,26 @@ static struct usbmix_ctl_map usbmix_ctl_ .id = USB_ID(0x05a7, 0x1020), .map = bose_companion5_map, }, + { /* Gigabyte TRX40 Aorus Pro WiFi */ + .id = USB_ID(0x0414, 0xa002), + .map = asus_rog_map, + }, + { /* ASUS ROG Zenith II */ + .id = USB_ID(0x0b05, 0x1916), + .map = asus_rog_map, + }, + { /* ASUS ROG Strix */ + .id = USB_ID(0x0b05, 0x1917), + .map = asus_rog_map, + }, + { /* MSI TRX40 Creator */ + .id = USB_ID(0x0db0, 0x0d64), + .map = asus_rog_map, + }, + { /* MSI TRX40 */ + .id = USB_ID(0x0db0, 0x543d), + .map = asus_rog_map, + }, { 0 } /* terminator */ };
From: Takashi Iwai tiwai@suse.de
commit 3c6fd1f07ed03a04debbb9a9d782205f1ef5e2ab upstream.
The recent AMD platform exposes an HD-audio bus but without any actual codecs; this is internally tied to a USB-audio device, supposedly. It results in a "no codecs" error from the HD-audio bus driver, and it's nothing but a waste of resources.
This patch introduces a static blacklist table for skipping such a known bogus PCI SSID entry. As of writing this patch, the known SSIDs are:
* 1043:874f - ASUS ROG Zenith II / Strix
* 1462:cb59 - MSI TRX40 Creator
* 1462:cb60 - MSI TRX40
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=206543 Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200408140449.22319-2-tiwai@suse.de Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/pci/hda/hda_intel.c | 16 ++++++++++++++++ 1 file changed, 16 insertions(+)
--- a/sound/pci/hda/hda_intel.c +++ b/sound/pci/hda/hda_intel.c @@ -2074,6 +2074,17 @@ static void pcm_mmap_prepare(struct snd_ #endif }
+/* Blacklist for skipping the whole probe: + * some HD-audio PCI entries are exposed without any codecs, and such devices + * should be ignored from the beginning. + */ +static const struct snd_pci_quirk driver_blacklist[] = { + SND_PCI_QUIRK(0x1043, 0x874f, "ASUS ROG Zenith II / Strix", 0), + SND_PCI_QUIRK(0x1462, 0xcb59, "MSI TRX40 Creator", 0), + SND_PCI_QUIRK(0x1462, 0xcb60, "MSI TRX40", 0), + {} +}; + static const struct hda_controller_ops pci_hda_ops = { .disable_msi_reset_irq = disable_msi_reset_irq, .pcm_mmap_prepare = pcm_mmap_prepare, @@ -2090,6 +2101,11 @@ static int azx_probe(struct pci_dev *pci bool schedule_probe; int err;
+ if (snd_pci_quirk_lookup(pci, driver_blacklist)) { + dev_info(&pci->dev, "Skipping the blacklisted device\n"); + return -ENODEV; + } + if (dev >= SNDRV_CARDS) return -ENODEV; if (!enable[dev]) {
From: Takashi Iwai tiwai@suse.de
commit 0ad3f0b384d58f3bd1f4fb87d0af5b8f6866f41a upstream.
The beep control helper function blindly stores the values in two stereo channels no matter whether the actual control is mono or stereo. This is practically harmless, but it annoys the recently introduced sanity check, resulting in an error when the checker is enabled.
This patch corrects the behavior to store only on the defined array member.
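The corrected behaviour boils down to writing only the channels the control actually declares, driven by a channel bitmask (bit 0 = left/mono, bit 1 = right). The following is a minimal, hypothetical userspace sketch of that pattern, not the HDA code itself:

/* Hypothetical sketch: fill only the value slots that the control's
 * channel mask declares, leaving undefined slots untouched.
 */
#include <stdio.h>

static void fill_switch_values(long values[2], unsigned int chs, long enabled)
{
        if (chs & 1)
                values[0] = enabled;    /* mono or left channel              */
        if (chs & 2)
                values[1] = enabled;    /* right channel, stereo controls only */
}

int main(void)
{
        long mono[2]   = { -1, -1 };
        long stereo[2] = { -1, -1 };

        fill_switch_values(mono, 1, 1);         /* mono control: value[1] untouched */
        fill_switch_values(stereo, 3, 1);       /* stereo control: both written     */

        printf("mono:   %ld %ld\n", mono[0], mono[1]);
        printf("stereo: %ld %ld\n", stereo[0], stereo[1]);
        return 0;
}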
Fixes: 0401e8548eac ("ALSA: hda - Move beep helper functions to hda_beep.c") BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=207139 Reviewed-by: Jaroslav Kysela perex@perex.cz Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200407084402.25589-2-tiwai@suse.de Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/pci/hda/hda_beep.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/sound/pci/hda/hda_beep.c +++ b/sound/pci/hda/hda_beep.c @@ -290,8 +290,12 @@ int snd_hda_mixer_amp_switch_get_beep(st { struct hda_codec *codec = snd_kcontrol_chip(kcontrol); struct hda_beep *beep = codec->beep; + int chs = get_amp_channels(kcontrol); + if (beep && (!beep->enabled || !ctl_has_mute(kcontrol))) { - ucontrol->value.integer.value[0] = + if (chs & 1) + ucontrol->value.integer.value[0] = beep->enabled; + if (chs & 2) ucontrol->value.integer.value[1] = beep->enabled; return 0; }
From: Takashi Iwai tiwai@suse.de
commit c47914c00be346bc5b48c48de7b0da5c2d1a296c upstream.
The access to Analog Capture Source control value implemented in prodigy_hifi.c is wrong, as caught by the recently introduced sanity check; it should be accessing value.enumerated.item[] instead of value.integer.value[]. This patch corrects the wrong access pattern.
Fixes: 6b8d6e5518e2 ("[ALSA] ICE1724: Added support for Audiotrak Prodigy 7.1 HiFi & HD2, Hercules Fortissimo IV") BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=207139 Reviewed-by: Jaroslav Kysela perex@perex.cz Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200407084402.25589-3-tiwai@suse.de Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/pci/ice1712/prodigy_hifi.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/sound/pci/ice1712/prodigy_hifi.c +++ b/sound/pci/ice1712/prodigy_hifi.c @@ -536,7 +536,7 @@ static int wm_adc_mux_enum_get(struct sn struct snd_ice1712 *ice = snd_kcontrol_chip(kcontrol);
mutex_lock(&ice->gpio_mutex); - ucontrol->value.integer.value[0] = wm_get(ice, WM_ADC_MUX) & 0x1f; + ucontrol->value.enumerated.item[0] = wm_get(ice, WM_ADC_MUX) & 0x1f; mutex_unlock(&ice->gpio_mutex); return 0; } @@ -550,7 +550,7 @@ static int wm_adc_mux_enum_put(struct sn
mutex_lock(&ice->gpio_mutex); oval = wm_get(ice, WM_ADC_MUX); - nval = (oval & 0xe0) | ucontrol->value.integer.value[0]; + nval = (oval & 0xe0) | ucontrol->value.enumerated.item[0]; if (nval != oval) { wm_put(ice, WM_ADC_MUX, nval); change = 1;
From: Takashi Iwai tiwai@suse.de
commit ae769d3556644888c964635179ef192995f40793 upstream.
The recent fix for the OOB access in PCM OSS plugins (commit f2ecf903ef06: "ALSA: pcm: oss: Avoid plugin buffer overflow") caused a regression on OSS applications. The patch introduced the size check in client and slave size calculations to limit to each plugin's buffer size, but I overlooked that some code paths call those without allocating the buffer but just for estimation.
This patch fixes the bug by skipping the size check for those code paths while keeping checking in the actual transfer calls.
Fixes: f2ecf903ef06 ("ALSA: pcm: oss: Avoid plugin buffer overflow") Tested-and-reported-by: Jari Ruusu jari.ruusu@gmail.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200403072515.25539-1-tiwai@suse.de Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/core/oss/pcm_plugin.c | 32 ++++++++++++++++++++++++-------- 1 file changed, 24 insertions(+), 8 deletions(-)
--- a/sound/core/oss/pcm_plugin.c +++ b/sound/core/oss/pcm_plugin.c @@ -196,7 +196,9 @@ int snd_pcm_plugin_free(struct snd_pcm_p return 0; }
-snd_pcm_sframes_t snd_pcm_plug_client_size(struct snd_pcm_substream *plug, snd_pcm_uframes_t drv_frames) +static snd_pcm_sframes_t plug_client_size(struct snd_pcm_substream *plug, + snd_pcm_uframes_t drv_frames, + bool check_size) { struct snd_pcm_plugin *plugin, *plugin_prev, *plugin_next; int stream; @@ -209,7 +211,7 @@ snd_pcm_sframes_t snd_pcm_plug_client_si if (stream == SNDRV_PCM_STREAM_PLAYBACK) { plugin = snd_pcm_plug_last(plug); while (plugin && drv_frames > 0) { - if (drv_frames > plugin->buf_frames) + if (check_size && drv_frames > plugin->buf_frames) drv_frames = plugin->buf_frames; plugin_prev = plugin->prev; if (plugin->src_frames) @@ -222,7 +224,7 @@ snd_pcm_sframes_t snd_pcm_plug_client_si plugin_next = plugin->next; if (plugin->dst_frames) drv_frames = plugin->dst_frames(plugin, drv_frames); - if (drv_frames > plugin->buf_frames) + if (check_size && drv_frames > plugin->buf_frames) drv_frames = plugin->buf_frames; plugin = plugin_next; } @@ -231,7 +233,9 @@ snd_pcm_sframes_t snd_pcm_plug_client_si return drv_frames; }
-snd_pcm_sframes_t snd_pcm_plug_slave_size(struct snd_pcm_substream *plug, snd_pcm_uframes_t clt_frames) +static snd_pcm_sframes_t plug_slave_size(struct snd_pcm_substream *plug, + snd_pcm_uframes_t clt_frames, + bool check_size) { struct snd_pcm_plugin *plugin, *plugin_prev, *plugin_next; snd_pcm_sframes_t frames; @@ -252,14 +256,14 @@ snd_pcm_sframes_t snd_pcm_plug_slave_siz if (frames < 0) return frames; } - if (frames > plugin->buf_frames) + if (check_size && frames > plugin->buf_frames) frames = plugin->buf_frames; plugin = plugin_next; } } else if (stream == SNDRV_PCM_STREAM_CAPTURE) { plugin = snd_pcm_plug_last(plug); while (plugin) { - if (frames > plugin->buf_frames) + if (check_size && frames > plugin->buf_frames) frames = plugin->buf_frames; plugin_prev = plugin->prev; if (plugin->src_frames) { @@ -274,6 +278,18 @@ snd_pcm_sframes_t snd_pcm_plug_slave_siz return frames; }
+snd_pcm_sframes_t snd_pcm_plug_client_size(struct snd_pcm_substream *plug, + snd_pcm_uframes_t drv_frames) +{ + return plug_client_size(plug, drv_frames, false); +} + +snd_pcm_sframes_t snd_pcm_plug_slave_size(struct snd_pcm_substream *plug, + snd_pcm_uframes_t clt_frames) +{ + return plug_slave_size(plug, clt_frames, false); +} + static int snd_pcm_plug_formats(const struct snd_mask *mask, snd_pcm_format_t format) { @@ -630,7 +646,7 @@ snd_pcm_sframes_t snd_pcm_plug_write_tra src_channels = dst_channels; plugin = next; } - return snd_pcm_plug_client_size(plug, frames); + return plug_client_size(plug, frames, true); }
snd_pcm_sframes_t snd_pcm_plug_read_transfer(struct snd_pcm_substream *plug, struct snd_pcm_plugin_channel *dst_channels_final, snd_pcm_uframes_t size) @@ -640,7 +656,7 @@ snd_pcm_sframes_t snd_pcm_plug_read_tran snd_pcm_sframes_t frames = size; int err;
- frames = snd_pcm_plug_slave_size(plug, frames); + frames = plug_slave_size(plug, frames, true); if (frames < 0) return frames;
From: Kai-Heng Feng kai.heng.feng@canonical.com
commit f5a88b0accc24c4a9021247d7a3124f90aa4c586 upstream.
The system in question uses ALC285, and it uses GPIO 0x04 to control its mute LED.
The mic mute LED can be controlled by GPIO 0x01, however the system uses DMIC so we should use that to control mic mute LED.
Signed-off-by: Kai-Heng Feng kai.heng.feng@canonical.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200327044626.29582-1-kai.heng.feng@canonical.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/pci/hda/patch_realtek.c | 12 ++++++++++++ 1 file changed, 12 insertions(+)
--- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -4008,6 +4008,12 @@ static void alc269_fixup_hp_gpio_led(str alc_fixup_hp_gpio_led(codec, action, 0x08, 0x10); }
+static void alc285_fixup_hp_gpio_led(struct hda_codec *codec, + const struct hda_fixup *fix, int action) +{ + alc_fixup_hp_gpio_led(codec, action, 0x04, 0x00); +} + static void alc286_fixup_hp_gpio_led(struct hda_codec *codec, const struct hda_fixup *fix, int action) { @@ -5923,6 +5929,7 @@ enum { ALC294_FIXUP_ASUS_DUAL_SPK, ALC285_FIXUP_THINKPAD_HEADSET_JACK, ALC294_FIXUP_ASUS_HPE, + ALC285_FIXUP_HP_GPIO_LED, };
static const struct hda_fixup alc269_fixups[] = { @@ -7061,6 +7068,10 @@ static const struct hda_fixup alc269_fix .chained = true, .chain_id = ALC294_FIXUP_ASUS_HEADSET_MIC }, + [ALC285_FIXUP_HP_GPIO_LED] = { + .type = HDA_FIXUP_FUNC, + .v.func = alc285_fixup_hp_gpio_led, + }, };
static const struct snd_pci_quirk alc269_fixup_tbl[] = { @@ -7208,6 +7219,7 @@ static const struct snd_pci_quirk alc269 SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3), SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3), SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3), + SND_PCI_QUIRK(0x103c, 0x8736, "HP", ALC285_FIXUP_HP_GPIO_LED), SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC), SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300), SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
From: Hui Wang hui.wang@canonical.com
commit 476c02e0b4fd9071d158f6a1a1dfea1d36ee0ffd upstream.
On the Lenovo X1C7 machines, after we plug in the headset, the rt_resume() and rt_suspend() callbacks of the codec driver are called periodically; the driver can't stay in the rt_suspend state even when users don't use the sound card.
Through debugging, I found that when running rt_suspend(), it calls alc225_shutup(); in this function, it changes the 3k pull down control via alc_update_coef_idx(codec, 0x4a, 0, 3 << 10), which triggers a fake key event, and that event resumes the codec. When the codec suspends again, it triggers the fake key event one more time, and this process repeats.
If we disable the key event before changing the pull down control, no fake key event is triggered. We also need to restore the pull down control and re-enable the key event afterwards; otherwise the system can't get key events while the codec is in the rt_suspend state.
Also move some functions ahead of alc225_shutup(); this saves the forward declarations.
Fixes: 76f7dec08fd6 (ALSA: hda/realtek - Add Headset Button supported for ThinkPad X1) Cc: Kailang Yang kailang@realtek.com Cc: stable@vger.kernel.org Signed-off-by: Hui Wang hui.wang@canonical.com Link: https://lore.kernel.org/r/20200329082018.20486-1-hui.wang@canonical.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/pci/hda/patch_realtek.c | 170 ++++++++++++++++++++++++++---------------- 1 file changed, 107 insertions(+), 63 deletions(-)
--- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -107,6 +107,7 @@ struct alc_spec { unsigned int done_hp_init:1; unsigned int no_shutup_pins:1; unsigned int ultra_low_power:1; + unsigned int has_hs_key:1;
/* for PLL fix */ hda_nid_t pll_nid; @@ -2982,6 +2983,107 @@ static int alc269_parse_auto_config(stru return alc_parse_auto_config(codec, alc269_ignore, ssids); }
+static const struct hda_jack_keymap alc_headset_btn_keymap[] = { + { SND_JACK_BTN_0, KEY_PLAYPAUSE }, + { SND_JACK_BTN_1, KEY_VOICECOMMAND }, + { SND_JACK_BTN_2, KEY_VOLUMEUP }, + { SND_JACK_BTN_3, KEY_VOLUMEDOWN }, + {} +}; + +static void alc_headset_btn_callback(struct hda_codec *codec, + struct hda_jack_callback *jack) +{ + int report = 0; + + if (jack->unsol_res & (7 << 13)) + report |= SND_JACK_BTN_0; + + if (jack->unsol_res & (1 << 16 | 3 << 8)) + report |= SND_JACK_BTN_1; + + /* Volume up key */ + if (jack->unsol_res & (7 << 23)) + report |= SND_JACK_BTN_2; + + /* Volume down key */ + if (jack->unsol_res & (7 << 10)) + report |= SND_JACK_BTN_3; + + jack->jack->button_state = report; +} + +static void alc_disable_headset_jack_key(struct hda_codec *codec) +{ + struct alc_spec *spec = codec->spec; + + if (!spec->has_hs_key) + return; + + switch (codec->core.vendor_id) { + case 0x10ec0215: + case 0x10ec0225: + case 0x10ec0285: + case 0x10ec0295: + case 0x10ec0289: + case 0x10ec0299: + alc_write_coef_idx(codec, 0x48, 0x0); + alc_update_coef_idx(codec, 0x49, 0x0045, 0x0); + alc_update_coef_idx(codec, 0x44, 0x0045 << 8, 0x0); + break; + case 0x10ec0236: + case 0x10ec0256: + alc_write_coef_idx(codec, 0x48, 0x0); + alc_update_coef_idx(codec, 0x49, 0x0045, 0x0); + break; + } +} + +static void alc_enable_headset_jack_key(struct hda_codec *codec) +{ + struct alc_spec *spec = codec->spec; + + if (!spec->has_hs_key) + return; + + switch (codec->core.vendor_id) { + case 0x10ec0215: + case 0x10ec0225: + case 0x10ec0285: + case 0x10ec0295: + case 0x10ec0289: + case 0x10ec0299: + alc_write_coef_idx(codec, 0x48, 0xd011); + alc_update_coef_idx(codec, 0x49, 0x007f, 0x0045); + alc_update_coef_idx(codec, 0x44, 0x007f << 8, 0x0045 << 8); + break; + case 0x10ec0236: + case 0x10ec0256: + alc_write_coef_idx(codec, 0x48, 0xd011); + alc_update_coef_idx(codec, 0x49, 0x007f, 0x0045); + break; + } +} + +static void alc_fixup_headset_jack(struct hda_codec *codec, + const struct hda_fixup *fix, int action) +{ + struct alc_spec *spec = codec->spec; + + switch (action) { + case HDA_FIXUP_ACT_PRE_PROBE: + spec->has_hs_key = 1; + snd_hda_jack_detect_enable_callback(codec, 0x55, + alc_headset_btn_callback); + snd_hda_jack_add_kctl(codec, 0x55, "Headset Jack", false, + SND_JACK_HEADSET, alc_headset_btn_keymap); + break; + case HDA_FIXUP_ACT_INIT: + alc_enable_headset_jack_key(codec); + break; + } +} + static void alc269vb_toggle_power_output(struct hda_codec *codec, int power_up) { alc_update_coef_idx(codec, 0x04, 1 << 11, power_up ? (1 << 11) : 0); @@ -3372,6 +3474,8 @@ static void alc225_shutup(struct hda_cod
if (!hp_pin) hp_pin = 0x21; + + alc_disable_headset_jack_key(codec); /* 3k pull low control for Headset jack. */ alc_update_coef_idx(codec, 0x4a, 0, 3 << 10);
@@ -3411,6 +3515,9 @@ static void alc225_shutup(struct hda_cod alc_update_coef_idx(codec, 0x4a, 3<<4, 2<<4); msleep(30); } + + alc_update_coef_idx(codec, 0x4a, 3 << 10, 0); + alc_enable_headset_jack_key(codec); }
static void alc_default_init(struct hda_codec *codec) @@ -5668,69 +5775,6 @@ static void alc285_fixup_invalidate_dacs snd_hda_override_wcaps(codec, 0x03, 0); }
-static const struct hda_jack_keymap alc_headset_btn_keymap[] = { - { SND_JACK_BTN_0, KEY_PLAYPAUSE }, - { SND_JACK_BTN_1, KEY_VOICECOMMAND }, - { SND_JACK_BTN_2, KEY_VOLUMEUP }, - { SND_JACK_BTN_3, KEY_VOLUMEDOWN }, - {} -}; - -static void alc_headset_btn_callback(struct hda_codec *codec, - struct hda_jack_callback *jack) -{ - int report = 0; - - if (jack->unsol_res & (7 << 13)) - report |= SND_JACK_BTN_0; - - if (jack->unsol_res & (1 << 16 | 3 << 8)) - report |= SND_JACK_BTN_1; - - /* Volume up key */ - if (jack->unsol_res & (7 << 23)) - report |= SND_JACK_BTN_2; - - /* Volume down key */ - if (jack->unsol_res & (7 << 10)) - report |= SND_JACK_BTN_3; - - jack->jack->button_state = report; -} - -static void alc_fixup_headset_jack(struct hda_codec *codec, - const struct hda_fixup *fix, int action) -{ - - switch (action) { - case HDA_FIXUP_ACT_PRE_PROBE: - snd_hda_jack_detect_enable_callback(codec, 0x55, - alc_headset_btn_callback); - snd_hda_jack_add_kctl(codec, 0x55, "Headset Jack", false, - SND_JACK_HEADSET, alc_headset_btn_keymap); - break; - case HDA_FIXUP_ACT_INIT: - switch (codec->core.vendor_id) { - case 0x10ec0215: - case 0x10ec0225: - case 0x10ec0285: - case 0x10ec0295: - case 0x10ec0289: - case 0x10ec0299: - alc_write_coef_idx(codec, 0x48, 0xd011); - alc_update_coef_idx(codec, 0x49, 0x007f, 0x0045); - alc_update_coef_idx(codec, 0x44, 0x007f << 8, 0x0045 << 8); - break; - case 0x10ec0236: - case 0x10ec0256: - alc_write_coef_idx(codec, 0x48, 0xd011); - alc_update_coef_idx(codec, 0x49, 0x007f, 0x0045); - break; - } - break; - } -} - static void alc295_fixup_chromebook(struct hda_codec *codec, const struct hda_fixup *fix, int action) {
From: Thomas Hebb tommyhebb@gmail.com
commit f128090491c3f5aacef91a863f8c52abf869c436 upstream.
This codec (among others) has a hidden set of audio routes, apparently designed to allow PC Beep output without a mixer widget on the output path, which are controlled by an undocumented Realtek vendor register. The default configuration of these routes means that certain inputs aren't accessible, necessitating driver control of the register. However, Realtek has provided no documentation of the register, instead opting to fix issues by providing magic numbers, most of which have been at least somewhat erroneous. These magic numbers then get copied by others into model-specific fixups, leading to a fragmented and buggy set of configurations.
To get out of this situation, I've reverse engineered the register by flipping bits and observing how the codec's behavior changes. This commit documents my findings. It does not change any code.
Cc: stable@vger.kernel.org Signed-off-by: Thomas Hebb tommyhebb@gmail.com Link: https://lore.kernel.org/r/bd69dfdeaf40ff31c4b7b797c829bb320031739c.158558449... Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- Documentation/sound/hd-audio/index.rst | 1 Documentation/sound/hd-audio/realtek-pc-beep.rst | 129 +++++++++++++++++++++++ 2 files changed, 130 insertions(+)
--- a/Documentation/sound/hd-audio/index.rst +++ b/Documentation/sound/hd-audio/index.rst @@ -8,3 +8,4 @@ HD-Audio models controls dp-mst + realtek-pc-beep --- /dev/null +++ b/Documentation/sound/hd-audio/realtek-pc-beep.rst @@ -0,0 +1,129 @@ +=============================== +Realtek PC Beep Hidden Register +=============================== + +This file documents the "PC Beep Hidden Register", which is present in certain +Realtek HDA codecs and controls a muxer and pair of passthrough mixers that can +route audio between pins but aren't themselves exposed as HDA widgets. As far +as I can tell, these hidden routes are designed to allow flexible PC Beep output +for codecs that don't have mixer widgets in their output paths. Why it's easier +to hide a mixer behind an undocumented vendor register than to just expose it +as a widget, I have no idea. + +Register Description +==================== + +The register is accessed via processing coefficient 0x36 on NID 20h. Bits not +identified below have no discernible effect on my machine, a Dell XPS 13 9350:: + + MSB LSB + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | |h|S|L| | B |R| | Known bits + +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+ + |0|0|1|1| 0x7 |0|0x0|1| 0x7 | Reset value + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + +1Ah input select (B): 2 bits + When zero, expose the PC Beep line (from the internal beep generator, when + enabled with the Set Beep Generation verb on NID 01h, or else from the + external PCBEEP pin) on the 1Ah pin node. When nonzero, expose the headphone + jack (or possibly Line In on some machines) input instead. If PC Beep is + selected, the 1Ah boost control has no effect. + +Amplify 1Ah loopback, left (L): 1 bit + Amplify the left channel of 1Ah before mixing it into outputs as specified + by h and S bits. Does not affect the level of 1Ah exposed to other widgets. + +Amplify 1Ah loopback, right (R): 1 bit + Amplify the right channel of 1Ah before mixing it into outputs as specified + by h and S bits. Does not affect the level of 1Ah exposed to other widgets. + +Loopback 1Ah to 21h [active low] (h): 1 bit + When zero, mix 1Ah (possibly with amplification, depending on L and R bits) + into 21h (headphone jack on my machine). Mixed signal respects the mute + setting on 21h. + +Loopback 1Ah to 14h (S): 1 bit + When one, mix 1Ah (possibly with amplification, depending on L and R bits) + into 14h (internal speaker on my machine). Mixed signal **ignores** the mute + setting on 14h and is present whenever 14h is configured as an output. + +Path diagrams +============= + +1Ah input selection (DIV is the PC Beep divider set on NID 01h):: + + <Beep generator> <PCBEEP pin> <Headphone jack> + | | | + +--DIV--+--!DIV--+ {1Ah boost control} + | | + +--(b == 0)--+--(b != 0)--+ + | + >1Ah (Beep/Headphone Mic/Line In)< + +Loopback of 1Ah to 21h/14h:: + + <1Ah (Beep/Headphone Mic/Line In)> + | + {amplify if L/R} + | + +-----!h-----+-----S-----+ + | | + {21h mute control} | + | | + >21h (Headphone)< >14h (Internal Speaker)< + +Background +========== + +All Realtek HDA codecs have a vendor-defined widget with node ID 20h which +provides access to a bank of registers that control various codec functions. +Registers are read and written via the standard HDA processing coefficient +verbs (Set/Get Coefficient Index, Set/Get Processing Coefficient). The node is +named "Realtek Vendor Registers" in public datasheets' verb listings and, +apart from that, is entirely undocumented. 
+ +This particular register, exposed at coefficient 0x36 and named in commits from +Realtek, is of note: unlike most registers, which seem to control detailed +amplifier parameters not in scope of the HDA specification, it controls audio +routing which could just as easily have been defined using standard HDA mixer +and selector widgets. + +Specifically, it selects between two sources for the input pin widget with Node +ID (NID) 1Ah: the widget's signal can come either from an audio jack (on my +laptop, a Dell XPS 13 9350, it's the headphone jack, but comments in Realtek +commits indicate that it might be a Line In on some machines) or from the PC +Beep line (which is itself multiplexed between the codec's internal beep +generator and external PCBEEP pin, depending on if the beep generator is +enabled via verbs on NID 01h). Additionally, it can mix (with optional +amplification) that signal onto the 21h and/or 14h output pins. + +The register's reset value is 0x3717, corresponding to PC Beep on 1Ah that is +then amplified and mixed into both the headphones and the speakers. Not only +does this violate the HDA specification, which says that "[a vendor defined +beep input pin] connection may be maintained *only* while the Link reset +(**RST#**) is asserted", it means that we cannot ignore the register if we care +about the input that 1Ah would otherwise expose or if the PCBEEP trace is +poorly shielded and picks up chassis noise (both of which are the case on my +machine). + +Unfortunately, there are lots of ways to get this register configuration wrong. +Linux, it seems, has gone through most of them. For one, the register resets +after S3 suspend: judging by existing code, this isn't the case for all vendor +registers, and it's led to some fixes that improve behavior on cold boot but +don't last after suspend. Other fixes have successfully switched the 1Ah input +away from PC Beep but have failed to disable both loopback paths. On my +machine, this means that the headphone input is amplified and looped back to +the headphone output, which uses the exact same pins! As you might expect, this +causes terrible headphone noise, the character of which is controlled by the +1Ah boost control. (If you've seen instructions online to fix XPS 13 headphone +noise by changing "Headphone Mic Boost" in ALSA, now you know why.) + +The information here has been obtained through black-box reverse engineering of +the ALC256 codec's behavior and is not guaranteed to be correct. It likely +also applies for the ALC255, ALC257, ALC235, and ALC236, since those codecs +seem to be close relatives of the ALC256. (They all share one initialization +function.) Additionally, other codecs like the ALC225 and ALC285 also have this +register, judging by existing fixups in ``patch_realtek.c``, but specific +data (e.g. node IDs, bit positions, pin mappings) for those codecs may differ +from what I've described here.
From: Thomas Hebb tommyhebb@gmail.com
commit c44737449468a0bdc50e09ec75e530f208391561 upstream.
The Realtek PC Beep Hidden Register[1] is currently set by patch_realtek.c in two different places:
In alc_fill_eapd_coef(), it's set to the value 0x5757, corresponding to non-beep input on 1Ah and no 1Ah loopback to either headphones or speakers. (Although, curiously, the loopback amp is still enabled.) This write was added fairly recently by commit e3743f431143 ("ALSA: hda/realtek - Dell headphone has noise on unmute for ALC236") and is a safe default. However, it happens in the wrong place: alc_fill_eapd_coef() runs on module load and cold boot but not on S3 resume, meaning the register loses its value after suspend.
Conversely, in alc256_init(), the register is updated to unset bit 13 (disable speaker loopback) and set bit 5 (set non-beep input on 1Ah). Although this write does run on S3 resume, it's not quite enough to fix up the register's default value of 0x3717. What's missing is a set of bit 14 to disable headphone loopback. Without that, we end up with a feedback loop where the headphone jack is being driven by amplified samples of itself[2].
This change eliminates the update in alc256_init() and replaces it with the 0x5757 write from alc_fill_eapd_coef(). Kailang says that 0x5757 is supposed to be the codec's default value, so using it will make debugging easier for Realtek.
Affects the ALC255, ALC256, ALC257, ALC235, and ALC236 codecs.
[1] Newly documented in Documentation/sound/hd-audio/realtek-pc-beep.rst
[2] Setting the "Headphone Mic Boost" control from userspace changes this feedback loop and has been a widely-shared workaround for headphone noise on laptops like the Dell XPS 13 9350. This commit eliminates the feedback loop and makes the workaround unnecessary.
Fixes: e1e8c1fdce8b ("ALSA: hda/realtek - Dell headphone has noise on unmute for ALC236") Cc: stable@vger.kernel.org Signed-off-by: Thomas Hebb tommyhebb@gmail.com Link: https://lore.kernel.org/r/bf22b417d1f2474b12011c2a39ed6cf8b06d3bf5.158558449... Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/pci/hda/patch_realtek.c | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-)
--- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -368,7 +368,9 @@ static void alc_fill_eapd_coef(struct hd case 0x10ec0215: case 0x10ec0233: case 0x10ec0235: + case 0x10ec0236: case 0x10ec0255: + case 0x10ec0256: case 0x10ec0257: case 0x10ec0282: case 0x10ec0283: @@ -380,11 +382,6 @@ static void alc_fill_eapd_coef(struct hd case 0x10ec0300: alc_update_coef_idx(codec, 0x10, 1<<9, 0); break; - case 0x10ec0236: - case 0x10ec0256: - alc_write_coef_idx(codec, 0x36, 0x5757); - alc_update_coef_idx(codec, 0x10, 1<<9, 0); - break; case 0x10ec0275: alc_update_coef_idx(codec, 0xe, 0, 1<<0); break; @@ -3371,7 +3368,13 @@ static void alc256_init(struct hda_codec alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */ alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 1 << 15); /* Clear bit */ alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 0 << 15); - alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path*/ + /* + * Expose headphone mic (or possibly Line In on some machines) instead + * of PC Beep on 1Ah, and disable 1Ah loopback for all outputs. See + * Documentation/sound/hd-audio/realtek-pc-beep.rst for details of + * this register. + */ + alc_write_coef_idx(codec, 0x36, 0x5757); }
static void alc256_shutup(struct hda_codec *codec)
From: Thomas Hebb tommyhebb@gmail.com
commit f36938aa7440f46a0a365f1cfde5f5985af2bef3 upstream.
patch_realtek.c has historically failed to properly configure the PC Beep Hidden Register for the ALC256 codec (among others). Depending on your kernel version, symptoms of this misconfiguration can range from chassis noise, picked up by a poorly-shielded PCBEEP trace, getting amplified and played on your internal speaker and/or headphones to loud feedback, which responds to the "Headphone Mic Boost" ALSA control, getting played through your headphones. For details of the problem, see the patch in this series titled "ALSA: hda/realtek - Set principled PC Beep configuration for ALC256", which fixes the configuration.
These symptoms have been most noticed on the Dell XPS 13 9350 and 9360, popular laptops that use the ALC256. As a result, several model-specific fixups have been introduced to try and fix the problem, the most egregious of which locks the "Headphone Mic Boost" control as a hack to minimize noise from a feedback loop that shouldn't have been there in the first place.
Now that the underlying issue has been fixed, remove all these fixups. Remaining fixups needed by the XPS 13 are all picked up by existing pin quirks.
This change should, for the XPS 13 9350/9360:

- Significantly increase volume and audio quality on headphones
- Eliminate headphone popping on suspend/resume
- Allow "Headphone Mic Boost" to be set again, making the headphone jack fully usable as a microphone jack too.
Fixes: 8c69729b4439 ("ALSA: hda - Fix headphone noise after Dell XPS 13 resume back from S3") Fixes: 423cd785619a ("ALSA: hda - Fix headphone noise on Dell XPS 13 9360") Fixes: e4c9fd10eb21 ("ALSA: hda - Apply headphone noise quirk for another Dell XPS 13 variant") Fixes: 1099f48457d0 ("ALSA: hda/realtek: Reduce the Headphone static noise on XPS 9350/9360") Cc: stable@vger.kernel.org Signed-off-by: Thomas Hebb tommyhebb@gmail.com Link: https://lore.kernel.org/r/b649a00edfde150cf6eebbb4390e15e0c2deb39a.158558449... Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- Documentation/sound/hd-audio/models.rst | 2 - sound/pci/hda/patch_realtek.c | 34 -------------------------------- 2 files changed, 36 deletions(-)
--- a/Documentation/sound/hd-audio/models.rst +++ b/Documentation/sound/hd-audio/models.rst @@ -216,8 +216,6 @@ alc298-dell-aio ALC298 fixups on Dell AIO machines alc275-dell-xps ALC275 fixups on Dell XPS models -alc256-dell-xps13 - ALC256 fixups on Dell XPS13 lenovo-spk-noise Workaround for speaker noise on Lenovo machines lenovo-hotkey --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -5491,17 +5491,6 @@ static void alc271_hp_gate_mic_jack(stru } }
-static void alc256_fixup_dell_xps_13_headphone_noise2(struct hda_codec *codec, - const struct hda_fixup *fix, - int action) -{ - if (action != HDA_FIXUP_ACT_PRE_PROBE) - return; - - snd_hda_codec_amp_stereo(codec, 0x1a, HDA_INPUT, 0, HDA_AMP_VOLMASK, 1); - snd_hda_override_wcaps(codec, 0x1a, get_wcaps(codec, 0x1a) & ~AC_WCAP_IN_AMP); -} - static void alc269_fixup_limit_int_mic_boost(struct hda_codec *codec, const struct hda_fixup *fix, int action) @@ -5916,8 +5905,6 @@ enum { ALC298_FIXUP_DELL1_MIC_NO_PRESENCE, ALC298_FIXUP_DELL_AIO_MIC_NO_PRESENCE, ALC275_FIXUP_DELL_XPS, - ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE, - ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE2, ALC293_FIXUP_LENOVO_SPK_NOISE, ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY, ALC255_FIXUP_DELL_SPK_NOISE, @@ -6658,23 +6645,6 @@ static const struct hda_fixup alc269_fix {} } }, - [ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE] = { - .type = HDA_FIXUP_VERBS, - .v.verbs = (const struct hda_verb[]) { - /* Disable pass-through path for FRONT 14h */ - {0x20, AC_VERB_SET_COEF_INDEX, 0x36}, - {0x20, AC_VERB_SET_PROC_COEF, 0x1737}, - {} - }, - .chained = true, - .chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE - }, - [ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE2] = { - .type = HDA_FIXUP_FUNC, - .v.func = alc256_fixup_dell_xps_13_headphone_noise2, - .chained = true, - .chain_id = ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE - }, [ALC293_FIXUP_LENOVO_SPK_NOISE] = { .type = HDA_FIXUP_FUNC, .v.func = alc_fixup_disable_aamix, @@ -7172,17 +7142,14 @@ static const struct snd_pci_quirk alc269 SND_PCI_QUIRK(0x1028, 0x06de, "Dell", ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK), SND_PCI_QUIRK(0x1028, 0x06df, "Dell", ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK), SND_PCI_QUIRK(0x1028, 0x06e0, "Dell", ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK), - SND_PCI_QUIRK(0x1028, 0x0704, "Dell XPS 13 9350", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE2), SND_PCI_QUIRK(0x1028, 0x0706, "Dell Inspiron 7559", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER), SND_PCI_QUIRK(0x1028, 0x0725, "Dell Inspiron 3162", ALC255_FIXUP_DELL_SPK_NOISE), SND_PCI_QUIRK(0x1028, 0x0738, "Dell Precision 5820", ALC269_FIXUP_NO_SHUTUP), - SND_PCI_QUIRK(0x1028, 0x075b, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE2), SND_PCI_QUIRK(0x1028, 0x075c, "Dell XPS 27 7760", ALC298_FIXUP_SPK_VOLUME), SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME), SND_PCI_QUIRK(0x1028, 0x07b0, "Dell Precision 7520", ALC295_FIXUP_DISABLE_DAC3), SND_PCI_QUIRK(0x1028, 0x0798, "Dell Inspiron 17 7000 Gaming", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER), SND_PCI_QUIRK(0x1028, 0x080c, "Dell WYSE", ALC225_FIXUP_DELL_WYSE_MIC_NO_PRESENCE), - SND_PCI_QUIRK(0x1028, 0x082a, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE2), SND_PCI_QUIRK(0x1028, 0x084b, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB), SND_PCI_QUIRK(0x1028, 0x084e, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB), SND_PCI_QUIRK(0x1028, 0x0871, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC), @@ -7536,7 +7503,6 @@ static const struct hda_model_fixup alc2 {.id = ALC298_FIXUP_DELL1_MIC_NO_PRESENCE, .name = "alc298-dell1"}, {.id = ALC298_FIXUP_DELL_AIO_MIC_NO_PRESENCE, .name = "alc298-dell-aio"}, {.id = ALC275_FIXUP_DELL_XPS, .name = "alc275-dell-xps"}, - {.id = ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE, .name = "alc256-dell-xps13"}, {.id = ALC293_FIXUP_LENOVO_SPK_NOISE, .name = "lenovo-spk-noise"}, {.id = ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY, .name = "lenovo-hotkey"}, {.id = ALC255_FIXUP_DELL_SPK_NOISE, .name = "dell-spk-noise"},
From: Hans de Goede hdegoede@redhat.com
commit ca707b3f00b4f31a6e1eb37e8ae99f15f2bb1fe5 upstream.
The audio setup on the Lenovo Carbon X1 8th gen is the same as that on the Lenovo Carbon X1 7th gen, so it needs the same ALC285_FIXUP_THINKPAD_HEADSET_JACK quirk.
Among other things, this fixes the speaker volume control not working.
BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=1820196 Cc: stable@vger.kernel.org Suggested-by: Jaroslav Kysela perex@perex.cz Signed-off-by: Hans de Goede hdegoede@redhat.com Reviewed-by: Jaroslav Kysela perex@perex.cz Link: https://lore.kernel.org/r/20200402174311.238614-1-hdegoede@redhat.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+)
--- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -7325,6 +7325,7 @@ static const struct snd_pci_quirk alc269 SND_PCI_QUIRK(0x17aa, 0x225d, "Thinkpad T480", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), SND_PCI_QUIRK(0x17aa, 0x2292, "Thinkpad X1 Yoga 7th", ALC285_FIXUP_THINKPAD_HEADSET_JACK), SND_PCI_QUIRK(0x17aa, 0x2293, "Thinkpad X1 Carbon 7th", ALC285_FIXUP_THINKPAD_HEADSET_JACK), + SND_PCI_QUIRK(0x17aa, 0x22be, "Thinkpad X1 Carbon 8th", ALC285_FIXUP_THINKPAD_HEADSET_JACK), SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
From: Takashi Iwai tiwai@suse.de
commit 1d3aa4a5516d2e4933fe3cca11d3349ef63bc547 upstream.
The MSI GL63 laptop requires a quirk similar to the one used for other MSI models, ALC1220_FIXUP_CLEVO_P950. The board BIOS doesn't provide a PCI SSID for the device, hence we need to use the codec SSID (1462:1275) instead.
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=207157 Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200408135645.21896-1-tiwai@suse.de Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+)
--- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -2447,6 +2447,7 @@ static const struct snd_pci_quirk alc882 SND_PCI_QUIRK(0x1458, 0xa0b8, "Gigabyte AZ370-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS), SND_PCI_QUIRK(0x1458, 0xa0cd, "Gigabyte X570 Aorus Master", ALC1220_FIXUP_CLEVO_P950), SND_PCI_QUIRK(0x1462, 0x1228, "MSI-GP63", ALC1220_FIXUP_CLEVO_P950), + SND_PCI_QUIRK(0x1462, 0x1275, "MSI-GL63", ALC1220_FIXUP_CLEVO_P950), SND_PCI_QUIRK(0x1462, 0x1276, "MSI-GL73", ALC1220_FIXUP_CLEVO_P950), SND_PCI_QUIRK(0x1462, 0x1293, "MSI-GP65", ALC1220_FIXUP_CLEVO_P950), SND_PCI_QUIRK(0x1462, 0x7350, "MSI-7350", ALC889_FIXUP_CD),
From: Stanimir Varbanov stanimir.varbanov@linaro.org
commit fd1ee315dcd4a0f913a74939eb88f6d9b0bd9250 upstream.
Instead of iterating over previously queued buffers in the clock scaling code, cache the payload in the instance context structure for later use when calculating the new clock rate.
This avoids taking spin locks during buffer list iteration in the clock scaling path.
This fixes the following kernel Oops:
Unable to handle kernel paging request at virtual address deacfffffffffd6c Mem abort info: ESR = 0x96000004 EC = 0x25: DABT (current EL), IL = 32 bits SET = 0, FnV = 0 EA = 0, S1PTW = 0 Data abort info: ISV = 0, ISS = 0x00000004 CM = 0, WnR = 0 [deacfffffffffd6c] address between user and kernel address ranges Internal error: Oops: 96000004 [#1] PREEMPT SMP CPU: 7 PID: 5763 Comm: V4L2DecoderThre Tainted: G S W 5.4.11 #8 pstate: 20400009 (nzCv daif +PAN -UAO) pc : load_scale_v4+0x4c/0x2bc [venus_core] lr : session_process_buf+0x18c/0x1c0 [venus_core] sp : ffffffc01376b8d0 x29: ffffffc01376b8d0 x28: ffffff80cf1b0220 x27: ffffffc01376bba0 x26: ffffffd8f562b2d8 x25: ffffff80cf1b0220 x24: 0000000000000005 x23: ffffffd8f5620d98 x22: ffffff80ca01c800 x21: ffffff80cf1b0000 x20: ffffff8149490080 x19: ffffff8174b2c010 x18: 0000000000000000 x17: 0000000000000000 x16: ffffffd96ee3a0dc x15: 0000000000000026 x14: 0000000000000026 x13: 00000000000055ac x12: 0000000000000001 x11: deacfffffffffd6c x10: dead000000000100 x9 : ffffff80ca01cf28 x8 : 0000000000000026 x7 : 0000000000000000 x6 : ffffff80cdd899c0 x5 : ffffff80cdd899c0 x4 : 0000000000000008 x3 : ffffff80ca01cf28 x2 : ffffff80ca01cf28 x1 : ffffff80d47ffc00 x0 : ffffff80cf1b0000 Call trace: load_scale_v4+0x4c/0x2bc [venus_core] session_process_buf+0x18c/0x1c0 [venus_core] venus_helper_vb2_buf_queue+0x7c/0xf0 [venus_core] __enqueue_in_driver+0xe4/0xfc [videobuf2_common] vb2_core_qbuf+0x15c/0x338 [videobuf2_common] vb2_qbuf+0x78/0xb8 [videobuf2_v4l2] v4l2_m2m_qbuf+0x80/0xf8 [v4l2_mem2mem] v4l2_m2m_ioctl_qbuf+0x2c/0x38 [v4l2_mem2mem] v4l_qbuf+0x48/0x58 __video_do_ioctl+0x2b0/0x39c video_usercopy+0x394/0x710 video_ioctl2+0x38/0x48 v4l2_ioctl+0x6c/0x80 do_video_ioctl+0xb00/0x2874 v4l2_compat_ioctl32+0x5c/0xcc __se_compat_sys_ioctl+0x100/0x2074 __arm64_compat_sys_ioctl+0x20/0x2c el0_svc_common+0xa4/0x154 el0_svc_compat_handler+0x2c/0x38 el0_svc_compat+0x8/0x10 Code: eb0a013f 54000200 aa1f03e8 d10e514b (b940016c) ---[ end trace e11304b46552e0b9 ]---
Fixes: c0e284ccfeda ("media: venus: Update clock scaling")
Cc: stable@vger.kernel.org # v5.5+ Signed-off-by: Stanimir Varbanov stanimir.varbanov@linaro.org Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/media/platform/qcom/venus/core.h | 1 + drivers/media/platform/qcom/venus/helpers.c | 20 +++++++++++++------- 2 files changed, 14 insertions(+), 7 deletions(-)
--- a/drivers/media/platform/qcom/venus/core.h +++ b/drivers/media/platform/qcom/venus/core.h @@ -344,6 +344,7 @@ struct venus_inst { unsigned int subscriptions; int buf_count; struct venus_ts_metadata tss[VIDEO_MAX_FRAME]; + unsigned long payloads[VIDEO_MAX_FRAME]; u64 fps; struct v4l2_fract timeperframe; const struct venus_format *fmt_out; --- a/drivers/media/platform/qcom/venus/helpers.c +++ b/drivers/media/platform/qcom/venus/helpers.c @@ -544,18 +544,13 @@ static int scale_clocks_v4(struct venus_ struct venus_core *core = inst->core; const struct freq_tbl *table = core->res->freq_tbl; unsigned int num_rows = core->res->freq_tbl_size; - struct v4l2_m2m_ctx *m2m_ctx = inst->m2m_ctx; struct device *dev = core->dev; unsigned long freq = 0, freq_core1 = 0, freq_core2 = 0; unsigned long filled_len = 0; - struct venus_buffer *buf, *n; - struct vb2_buffer *vb; int i, ret;
- v4l2_m2m_for_each_src_buf_safe(m2m_ctx, buf, n) { - vb = &buf->vb.vb2_buf; - filled_len = max(filled_len, vb2_get_plane_payload(vb, 0)); - } + for (i = 0; i < inst->num_input_bufs; i++) + filled_len = max(filled_len, inst->payloads[i]);
if (inst->session_type == VIDC_SESSION_TYPE_DEC && !filled_len) return 0; @@ -1289,6 +1284,15 @@ int venus_helper_vb2_buf_prepare(struct } EXPORT_SYMBOL_GPL(venus_helper_vb2_buf_prepare);
+static void cache_payload(struct venus_inst *inst, struct vb2_buffer *vb) +{ + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); + unsigned int idx = vbuf->vb2_buf.index; + + if (vbuf->vb2_buf.type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) + inst->payloads[idx] = vb2_get_plane_payload(vb, 0); +} + void venus_helper_vb2_buf_queue(struct vb2_buffer *vb) { struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); @@ -1300,6 +1304,8 @@ void venus_helper_vb2_buf_queue(struct v
v4l2_m2m_buf_queue(m2m_ctx, vbuf);
+ cache_payload(inst, vb); + if (inst->session_type == VIDC_SESSION_TYPE_ENC && !(inst->streamon_out && inst->streamon_cap)) goto unlock;
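For illustration, the approach above amounts to recording each queued buffer's payload by index and taking the maximum later without walking the buffer list. A rough hedged sketch with a simplified, hypothetical context structure (not the venus driver code):

/* Hedged sketch of the payload-caching idea: record bytes-used per
 * buffer index at queue time, then take the maximum at clock-scaling
 * time without iterating the m2m buffer list under a spinlock. */
#define MAX_FRAMES 32			/* stand-in for VIDEO_MAX_FRAME */

struct inst_ctx {
	unsigned long payloads[MAX_FRAMES];
	unsigned int num_input_bufs;
};

static void cache_payload(struct inst_ctx *inst, unsigned int idx,
			  unsigned long bytesused)
{
	if (idx < MAX_FRAMES)
		inst->payloads[idx] = bytesused;	/* from the queue path */
}

static unsigned long max_filled_len(const struct inst_ctx *inst)
{
	unsigned long filled = 0;
	unsigned int i;

	for (i = 0; i < inst->num_input_bufs; i++)
		if (inst->payloads[i] > filled)
			filled = inst->payloads[i];	/* used for clock scaling */

	return filled;
}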
From: Stanimir Varbanov stanimir.varbanov@linaro.org
commit 2632e7b618a7730969f9782593c29ca53553aa22 upstream.
With the latest cleanup in the qcom scm driver, the secure monitor call for setting the remote processor state returns EINVAL when it is called for the first time and after another scm call, auth_and_reset. The error returned from the scm call can be ignored because the state transition has already been done by auth_and_reset.
Acked-by: Bjorn Andersson bjorn.andersson@linaro.org Signed-off-by: Stanimir Varbanov stanimir.varbanov@linaro.org Cc: stable@vger.kernel.org Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/media/platform/qcom/venus/firmware.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-)
--- a/drivers/media/platform/qcom/venus/firmware.c +++ b/drivers/media/platform/qcom/venus/firmware.c @@ -44,8 +44,14 @@ static void venus_reset_cpu(struct venus
int venus_set_hw_state(struct venus_core *core, bool resume) { - if (core->use_tz) - return qcom_scm_set_remote_state(resume, 0); + int ret; + + if (core->use_tz) { + ret = qcom_scm_set_remote_state(resume, 0); + if (resume && ret == -EINVAL) + ret = 0; + return ret; + }
if (resume) venus_reset_cpu(core);
From: Andrzej Pietrasiewicz andrzej.p@collabora.com
commit e34bca49e4953e5c2afc0425303199a5fd515f82 upstream.
Since (luma/chroma)_qtable is an array of unsigned char, indexing it returns consecutive byte locations, but we are supposed to read the arrays in four-byte words. Consequently, we should be pointing get_unaligned_be32() at consecutive word locations instead.
Signed-off-by: Andrzej Pietrasiewicz andrzej.p@collabora.com Reviewed-by: Ezequiel Garcia ezequiel@collabora.com Tested-by: Ezequiel Garcia ezequiel@collabora.com Cc: stable@vger.kernel.org Fixes: 00c30f42c7595f "media: rockchip vpu: remove some unused vars" Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/staging/media/hantro/hantro_h1_jpeg_enc.c | 9 +++++++-- drivers/staging/media/hantro/rk3399_vpu_hw_jpeg_enc.c | 9 +++++++-- 2 files changed, 14 insertions(+), 4 deletions(-)
--- a/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c +++ b/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c @@ -67,12 +67,17 @@ hantro_h1_jpeg_enc_set_qtable(struct han unsigned char *chroma_qtable) { u32 reg, i; + __be32 *luma_qtable_p; + __be32 *chroma_qtable_p; + + luma_qtable_p = (__be32 *)luma_qtable; + chroma_qtable_p = (__be32 *)chroma_qtable;
for (i = 0; i < H1_JPEG_QUANT_TABLE_COUNT; i++) { - reg = get_unaligned_be32(&luma_qtable[i]); + reg = get_unaligned_be32(&luma_qtable_p[i]); vepu_write_relaxed(vpu, reg, H1_REG_JPEG_LUMA_QUAT(i));
- reg = get_unaligned_be32(&chroma_qtable[i]); + reg = get_unaligned_be32(&chroma_qtable_p[i]); vepu_write_relaxed(vpu, reg, H1_REG_JPEG_CHROMA_QUAT(i)); } } --- a/drivers/staging/media/hantro/rk3399_vpu_hw_jpeg_enc.c +++ b/drivers/staging/media/hantro/rk3399_vpu_hw_jpeg_enc.c @@ -98,12 +98,17 @@ rk3399_vpu_jpeg_enc_set_qtable(struct ha unsigned char *chroma_qtable) { u32 reg, i; + __be32 *luma_qtable_p; + __be32 *chroma_qtable_p; + + luma_qtable_p = (__be32 *)luma_qtable; + chroma_qtable_p = (__be32 *)chroma_qtable;
for (i = 0; i < VEPU_JPEG_QUANT_TABLE_COUNT; i++) { - reg = get_unaligned_be32(&luma_qtable[i]); + reg = get_unaligned_be32(&luma_qtable_p[i]); vepu_write_relaxed(vpu, reg, VEPU_REG_JPEG_LUMA_QUAT(i));
- reg = get_unaligned_be32(&chroma_qtable[i]); + reg = get_unaligned_be32(&chroma_qtable_p[i]); vepu_write_relaxed(vpu, reg, VEPU_REG_JPEG_CHROMA_QUAT(i)); } }
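To make the byte-versus-word indexing point concrete, here is a small stand-alone illustration in plain C, with uint32_t standing in for __be32 and made-up table contents:

#include <stdint.h>
#include <stdio.h>

/* Read a big-endian 32-bit word from a possibly unaligned address,
 * mimicking what get_unaligned_be32() does. */
static uint32_t be32_read(const void *p)
{
	const uint8_t *b = p;

	return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
	       ((uint32_t)b[2] << 8) | (uint32_t)b[3];
}

int main(void)
{
	unsigned char qtable[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
	const uint32_t *words = (const uint32_t *)qtable;

	/* &qtable[1] advances by one BYTE: reads 02 03 04 05 (wrong stride). */
	printf("byte-indexed: 0x%08x\n", be32_read(&qtable[1]));
	/* &words[1] advances by one 4-byte WORD: reads 05 06 07 08, as intended. */
	printf("word-indexed: 0x%08x\n", be32_read(&words[1]));
	return 0;
}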
From: Benoit Parrot bparrot@ti.com
commit 1db56284b9da9056093681f28db48a09a243274b upstream.
disable_irqs() was mistakenly disabling all interrupts when called. This caused all port streams to stop even when only one of them was being stopped.
Cc: stable stable@vger.kernel.org Signed-off-by: Benoit Parrot bparrot@ti.com Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/media/platform/ti-vpe/cal.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-)
--- a/drivers/media/platform/ti-vpe/cal.c +++ b/drivers/media/platform/ti-vpe/cal.c @@ -537,16 +537,16 @@ static void enable_irqs(struct cal_ctx *
static void disable_irqs(struct cal_ctx *ctx) { + u32 val; + /* Disable IRQ_WDMA_END 0/1 */ - reg_write_field(ctx->dev, - CAL_HL_IRQENABLE_CLR(2), - CAL_HL_IRQ_CLEAR, - CAL_HL_IRQ_MASK(ctx->csi2_port)); + val = 0; + set_field(&val, CAL_HL_IRQ_CLEAR, CAL_HL_IRQ_MASK(ctx->csi2_port)); + reg_write(ctx->dev, CAL_HL_IRQENABLE_CLR(2), val); /* Disable IRQ_WDMA_START 0/1 */ - reg_write_field(ctx->dev, - CAL_HL_IRQENABLE_CLR(3), - CAL_HL_IRQ_CLEAR, - CAL_HL_IRQ_MASK(ctx->csi2_port)); + val = 0; + set_field(&val, CAL_HL_IRQ_CLEAR, CAL_HL_IRQ_MASK(ctx->csi2_port)); + reg_write(ctx->dev, CAL_HL_IRQENABLE_CLR(3), val); /* Todo: Add VC_IRQ and CSI2_COMPLEXIO_IRQ handling */ reg_write(ctx->dev, CAL_CSI2_VC_IRQENABLE(1), 0); }
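As an aside, the likely pitfall here is the classic one with "write 1 to disable" (ENABLE_CLR-style) registers: a read-modify-write helper reads back every currently enabled source and then writes them all as ones, disabling everything. The sketch below models that behaviour with a plain variable; it is an assumption-laden illustration, not the CAL register code:

/* Hypothetical ENABLE_CLR register: reading returns the enabled sources,
 * writing a 1 to a bit disables that source. */
static unsigned int irq_enabled;	/* e.g. bits for both ports set */

static unsigned int reg_read(void)		{ return irq_enabled; }
static void reg_write_clr(unsigned int v)	{ irq_enabled &= ~v; }

static void disable_port_buggy(unsigned int port_mask)
{
	/* Read-modify-write: the read picks up *all* enabled bits, so the
	 * write-back disables every port, not just the requested one. */
	unsigned int val = reg_read() | port_mask;

	reg_write_clr(val);
}

static void disable_port_fixed(unsigned int port_mask)
{
	/* Build the value from zero: only the requested port is disabled. */
	unsigned int val = port_mask;

	reg_write_clr(val);
}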
From: Benoit Parrot bparrot@ti.com
commit 80264809ea0a3fd2ee8251f31a9eb85d2c3fc77e upstream.
After the switch to use v4l2_async_notifier_add_subdev() and v4l2_async_notifier_cleanup(), unloading the ti_cal module would cause a kernel oops.
This was root-caused to the fact that v4l2_async_notifier_cleanup() tries to kfree() the asd pointer passed into v4l2_async_notifier_add_subdev().
In our case the asd reference came from a statically allocated struct, so in effect v4l2_async_notifier_cleanup() was trying to free a pointer that had not been allocated with kmalloc().
So here we switch to a kzalloc'ed struct instead of a static one. To achieve this we re-order some of the calls to prevent the asd allocation from leaking.
Fixes: d079f94c9046 ("media: platform: Switch to v4l2_async_notifier_add_subdev") Cc: stable@vger.kernel.org Signed-off-by: Benoit Parrot bparrot@ti.com Reviewed-by: Tomi Valkeinen tomi.valkeinen@ti.com Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/media/platform/ti-vpe/cal.c | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-)
--- a/drivers/media/platform/ti-vpe/cal.c +++ b/drivers/media/platform/ti-vpe/cal.c @@ -266,8 +266,6 @@ struct cal_ctx { struct v4l2_subdev *sensor; struct v4l2_fwnode_endpoint endpoint;
- struct v4l2_async_subdev asd; - struct v4l2_fh fh; struct cal_dev *dev; struct cc_data *cc; @@ -1648,7 +1646,6 @@ static int of_cal_create_instance(struct
parent = pdev->dev.of_node;
- asd = &ctx->asd; endpoint = &ctx->endpoint;
ep_node = NULL; @@ -1695,8 +1692,6 @@ static int of_cal_create_instance(struct ctx_dbg(3, ctx, "can't get remote parent\n"); goto cleanup_exit; } - asd->match_type = V4L2_ASYNC_MATCH_FWNODE; - asd->match.fwnode = of_fwnode_handle(sensor_node);
v4l2_fwnode_endpoint_parse(of_fwnode_handle(ep_node), endpoint);
@@ -1726,9 +1721,17 @@ static int of_cal_create_instance(struct
v4l2_async_notifier_init(&ctx->notifier);
+ asd = kzalloc(sizeof(*asd), GFP_KERNEL); + if (!asd) + goto cleanup_exit; + + asd->match_type = V4L2_ASYNC_MATCH_FWNODE; + asd->match.fwnode = of_fwnode_handle(sensor_node); + ret = v4l2_async_notifier_add_subdev(&ctx->notifier, asd); if (ret) { ctx_err(ctx, "Error adding asd\n"); + kfree(asd); goto cleanup_exit; }
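The ownership rule behind this change, namely that once v4l2_async_notifier_add_subdev() succeeds the notifier owns the asd and v4l2_async_notifier_cleanup() will kfree() it, can be summarised in a short hedged sketch (names taken from the diff above, surrounding context and most error handling trimmed):

struct v4l2_async_subdev *asd;
int ret;

/* Must come from the slab allocator: cleanup() will kfree() it.
 * Passing a pointer to a static or embedded struct oopses on unload. */
asd = kzalloc(sizeof(*asd), GFP_KERNEL);
if (!asd)
	return -ENOMEM;

asd->match_type = V4L2_ASYNC_MATCH_FWNODE;
asd->match.fwnode = of_fwnode_handle(sensor_node);

ret = v4l2_async_notifier_add_subdev(&notifier, asd);
if (ret) {
	kfree(asd);	/* ownership is not transferred on failure */
	return ret;
}
/* From here on, v4l2_async_notifier_cleanup() frees asd; do not kfree() it. */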
From: Sven Schnelle svens@linux.ibm.com
commit 3db81afd99494a33f1c3839103f0429c8f30cb9d upstream.
Executing the seccomp_bpf testsuite under a 64-bit kernel with 32-bit userland (both s390 and x86) doesn't work because there's no compat_ioctl handler defined. Add the handler.
Signed-off-by: Sven Schnelle svens@linux.ibm.com Fixes: 6a21cc50f0c7 ("seccomp: add a return code to trap to userspace") Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200310123332.42255-1-svens@linux.ibm.com Signed-off-by: Kees Cook keescook@chromium.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/seccomp.c | 1 + 1 file changed, 1 insertion(+)
--- a/kernel/seccomp.c +++ b/kernel/seccomp.c @@ -1221,6 +1221,7 @@ static const struct file_operations secc .poll = seccomp_notify_poll, .release = seccomp_notify_release, .unlocked_ioctl = seccomp_notify_ioctl, + .compat_ioctl = seccomp_notify_ioctl, };
static struct file *init_listener(struct seccomp_filter *filter)
From: Rafael J. Wysocki rafael.j.wysocki@intel.com
commit c823c17a8ea4db4943152388a0beb9a556715716 upstream.
It doesn't really make sense to pass ec->handle of the ECDT EC to acpi_handle_info(), because it is set to ACPI_ROOT_OBJECT in acpi_ec_ecdt_probe(), so rework acpi_ec_setup() to avoid using acpi_handle_info() for printing messages.
First, notice that the "Used as first EC" message is not really useful, because it is immediately followed by a more meaningful one from either acpi_ec_ecdt_probe() or acpi_ec_dsdt_probe() (the latter also includes the EC object path), so drop it altogether.
Second, use pr_info() for printing the EC configuration information.
While at it, make the code in question avoid printing invalid GPE or IRQ numbers and make it print the GPE/IRQ information only when the driver is ready to handle events.
Fixes: 72c77b7ea9ce ("ACPI / EC: Cleanup first_ec/boot_ec code") Fixes: 406857f773b0 ("ACPI: EC: add support for hardware-reduced systems") Cc: 5.5+ stable@vger.kernel.org # 5.5+ Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/acpi/ec.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-)
--- a/drivers/acpi/ec.c +++ b/drivers/acpi/ec.c @@ -1592,14 +1592,19 @@ static int acpi_ec_setup(struct acpi_ec return ret;
/* First EC capable of handling transactions */ - if (!first_ec) { + if (!first_ec) first_ec = ec; - acpi_handle_info(first_ec->handle, "Used as first EC\n"); + + pr_info("EC_CMD/EC_SC=0x%lx, EC_DATA=0x%lx\n", ec->command_addr, + ec->data_addr); + + if (test_bit(EC_FLAGS_EVENT_HANDLER_INSTALLED, &ec->flags)) { + if (ec->gpe >= 0) + pr_info("GPE=0x%x\n", ec->gpe); + else + pr_info("IRQ=%d\n", ec->irq); }
- acpi_handle_info(ec->handle, - "GPE=0x%x, IRQ=%d, EC_CMD/EC_SC=0x%lx, EC_DATA=0x%lx\n", - ec->gpe, ec->irq, ec->command_addr, ec->data_addr); return ret; }
From: Jan Engelhardt jengelh@inai.de
commit ecb9c790999fd6c5af0f44783bd0217f0b89ec2b upstream.
The value in "new" is constructed from "old" such that all bits defined as reserved by the ACPI spec[1] are left untouched. But if those bits do not happen to be all zero, "new < 3" will not evaluate to true.
The firmware of the laptop(s) Medion MD63490 / Akoya P15648 comes with garbage inside the "FACS" ACPI table. The starting value is old=0x4944454d, therefore new=0x4944454e, which is >= 3. Mask off the reserved bits.
[1] https://uefi.org/sites/default/files/resources/ACPI_6_2.pdf
Link: https://bugzilla.kernel.org/show_bug.cgi?id=206553 Cc: All applicable stable@vger.kernel.org Signed-off-by: Jan Engelhardt jengelh@inai.de Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/kernel/acpi/boot.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/kernel/acpi/boot.c +++ b/arch/x86/kernel/acpi/boot.c @@ -1740,7 +1740,7 @@ int __acpi_acquire_global_lock(unsigned new = (((old & ~0x3) + 2) + ((old >> 1) & 0x1)); val = cmpxchg(lock, old, new); } while (unlikely (val != old)); - return (new < 3) ? -1 : 0; + return ((new & 0x3) < 3) ? -1 : 0; }
int __acpi_release_global_lock(unsigned int *lock)
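A quick worked check of the numbers quoted above, as a stand-alone sketch that reproduces only the arithmetic of the acquisition step, not its locking semantics:

#include <stdio.h>

int main(void)
{
	/* Garbage FACS value reported for the Medion/Akoya machines. */
	unsigned int old = 0x4944454d;
	unsigned int new = ((old & ~0x3) + 2) + ((old >> 1) & 0x1);

	printf("new         = 0x%08x\n", new);         /* 0x4944454e */
	printf("new < 3     = %d\n", new < 3);          /* 0: never true */
	printf("(new&3) < 3 = %d\n", (new & 0x3) < 3);  /* 1: low bits are 2 */
	return 0;
}

With the reserved high bits present, "new" can never be below 3, so the unmasked comparison gives the wrong answer; masking with 0x3 restores the comparison to the two bits the algorithm actually defines.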
From: Rafael J. Wysocki rafael.j.wysocki@intel.com
commit 0ce792d660bda990c675eaf14ce09594a9b85cbf upstream.
The check carried out by acpi_any_gpe_status_set() is not precise enough for the suspend-to-idle implementation in Linux, and in some cases it is necessary to make it skip one GPE (specifically, the EC GPE) in the check to prevent a race condition from leading to a premature system resume.
For this reason, redefine acpi_any_gpe_status_set() to take the number of a GPE to skip as an argument.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=206629 Tested-by: Ondřej Caletka ondrej@caletka.cz Cc: 5.4+ stable@vger.kernel.org # 5.4+ Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/acpi/acpica/achware.h | 2 - drivers/acpi/acpica/evxfgpe.c | 17 ++++++++++----- drivers/acpi/acpica/hwgpe.c | 47 +++++++++++++++++++++++++++++++++--------- drivers/acpi/sleep.c | 2 - include/acpi/acpixf.h | 2 - 5 files changed, 53 insertions(+), 17 deletions(-)
--- a/drivers/acpi/acpica/achware.h +++ b/drivers/acpi/acpica/achware.h @@ -101,7 +101,7 @@ acpi_status acpi_hw_enable_all_runtime_g
acpi_status acpi_hw_enable_all_wakeup_gpes(void);
-u8 acpi_hw_check_all_gpes(void); +u8 acpi_hw_check_all_gpes(acpi_handle gpe_skip_device, u32 gpe_skip_number);
acpi_status acpi_hw_enable_runtime_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info, --- a/drivers/acpi/acpica/evxfgpe.c +++ b/drivers/acpi/acpica/evxfgpe.c @@ -799,17 +799,19 @@ ACPI_EXPORT_SYMBOL(acpi_enable_all_wakeu * * FUNCTION: acpi_any_gpe_status_set * - * PARAMETERS: None + * PARAMETERS: gpe_skip_number - Number of the GPE to skip * * RETURN: Whether or not the status bit is set for any GPE * - * DESCRIPTION: Check the status bits of all enabled GPEs and return TRUE if any - * of them is set or FALSE otherwise. + * DESCRIPTION: Check the status bits of all enabled GPEs, except for the one + * represented by the "skip" argument, and return TRUE if any of + * them is set or FALSE otherwise. * ******************************************************************************/ -u32 acpi_any_gpe_status_set(void) +u32 acpi_any_gpe_status_set(u32 gpe_skip_number) { acpi_status status; + acpi_handle gpe_device; u8 ret;
ACPI_FUNCTION_TRACE(acpi_any_gpe_status_set); @@ -819,7 +821,12 @@ u32 acpi_any_gpe_status_set(void) return (FALSE); }
- ret = acpi_hw_check_all_gpes(); + status = acpi_get_gpe_device(gpe_skip_number, &gpe_device); + if (ACPI_FAILURE(status)) { + gpe_device = NULL; + } + + ret = acpi_hw_check_all_gpes(gpe_device, gpe_skip_number); (void)acpi_ut_release_mutex(ACPI_MTX_EVENTS);
return (ret); --- a/drivers/acpi/acpica/hwgpe.c +++ b/drivers/acpi/acpica/hwgpe.c @@ -444,12 +444,19 @@ acpi_hw_enable_wakeup_gpe_block(struct a return (AE_OK); }
+struct acpi_gpe_block_status_context { + struct acpi_gpe_register_info *gpe_skip_register_info; + u8 gpe_skip_mask; + u8 retval; +}; + /****************************************************************************** * * FUNCTION: acpi_hw_get_gpe_block_status * * PARAMETERS: gpe_xrupt_info - GPE Interrupt info * gpe_block - Gpe Block info + * context - GPE list walk context data * * RETURN: Success * @@ -460,12 +467,13 @@ acpi_hw_enable_wakeup_gpe_block(struct a static acpi_status acpi_hw_get_gpe_block_status(struct acpi_gpe_xrupt_info *gpe_xrupt_info, struct acpi_gpe_block_info *gpe_block, - void *ret_ptr) + void *context) { + struct acpi_gpe_block_status_context *c = context; struct acpi_gpe_register_info *gpe_register_info; u64 in_enable, in_status; acpi_status status; - u8 *ret = ret_ptr; + u8 ret_mask; u32 i;
/* Examine each GPE Register within the block */ @@ -485,7 +493,11 @@ acpi_hw_get_gpe_block_status(struct acpi continue; }
- *ret |= in_enable & in_status; + ret_mask = in_enable & in_status; + if (ret_mask && c->gpe_skip_register_info == gpe_register_info) { + ret_mask &= ~c->gpe_skip_mask; + } + c->retval |= ret_mask; }
return (AE_OK); @@ -561,24 +573,41 @@ acpi_status acpi_hw_enable_all_wakeup_gp * * FUNCTION: acpi_hw_check_all_gpes * - * PARAMETERS: None + * PARAMETERS: gpe_skip_device - GPE devoce of the GPE to skip + * gpe_skip_number - Number of the GPE to skip * * RETURN: Combined status of all GPEs * - * DESCRIPTION: Check all enabled GPEs in all GPE blocks and return TRUE if the + * DESCRIPTION: Check all enabled GPEs in all GPE blocks, except for the one + * represented by the "skip" arguments, and return TRUE if the * status bit is set for at least one of them of FALSE otherwise. * ******************************************************************************/
-u8 acpi_hw_check_all_gpes(void) +u8 acpi_hw_check_all_gpes(acpi_handle gpe_skip_device, u32 gpe_skip_number) { - u8 ret = 0; + struct acpi_gpe_block_status_context context = { + .gpe_skip_register_info = NULL, + .retval = 0, + }; + struct acpi_gpe_event_info *gpe_event_info; + acpi_cpu_flags flags;
ACPI_FUNCTION_TRACE(acpi_hw_check_all_gpes);
- (void)acpi_ev_walk_gpe_list(acpi_hw_get_gpe_block_status, &ret); + flags = acpi_os_acquire_lock(acpi_gbl_gpe_lock); + + gpe_event_info = acpi_ev_get_gpe_event_info(gpe_skip_device, + gpe_skip_number); + if (gpe_event_info) { + context.gpe_skip_register_info = gpe_event_info->register_info; + context.gpe_skip_mask = acpi_hw_get_gpe_register_bit(gpe_event_info); + } + + acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
- return (ret != 0); + (void)acpi_ev_walk_gpe_list(acpi_hw_get_gpe_block_status, &context); + return (context.retval != 0); }
#endif /* !ACPI_REDUCED_HARDWARE */ --- a/drivers/acpi/sleep.c +++ b/drivers/acpi/sleep.c @@ -1023,7 +1023,7 @@ static bool acpi_s2idle_wake(void) * status bit from unset to set between the checks with the * status bits of all the other GPEs unset. */ - if (acpi_any_gpe_status_set() && !acpi_ec_dispatch_gpe()) + if (acpi_any_gpe_status_set(U32_MAX) && !acpi_ec_dispatch_gpe()) return true;
/* --- a/include/acpi/acpixf.h +++ b/include/acpi/acpixf.h @@ -752,7 +752,7 @@ ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_disable_all_gpes(void)) ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_runtime_gpes(void)) ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_wakeup_gpes(void)) -ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_any_gpe_status_set(void)) +ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_any_gpe_status_set(u32 gpe_skip_number)) ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_any_fixed_event_status_set(void))
ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
From: Rafael J. Wysocki rafael.j.wysocki@intel.com
commit d5406284ff803a578ca503373624312770319054 upstream.
The check for any active GPEs added by commit fdde0ff8590b ("ACPI: PM: s2idle: Prevent spurious SCIs from waking up the system") turns out to be insufficiently precise to prevent some systems from resuming prematurely due to a spurious EC wakeup, so refine it by first checking if any GPEs other than the EC GPE are active and skipping all of the SCIs coming from the EC that do not produce any genuine wakeup events after processing.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=206629 Fixes: fdde0ff8590b ("ACPI: PM: s2idle: Prevent spurious SCIs from waking up the system") Reported-by: Ondřej Caletka ondrej@caletka.cz Tested-by: Ondřej Caletka ondrej@caletka.cz Cc: 5.4+ stable@vger.kernel.org # 5.4+ Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/acpi/ec.c | 5 +++++ drivers/acpi/internal.h | 1 + drivers/acpi/sleep.c | 19 ++++++++++--------- 3 files changed, 16 insertions(+), 9 deletions(-)
--- a/drivers/acpi/ec.c +++ b/drivers/acpi/ec.c @@ -2052,6 +2052,11 @@ void acpi_ec_set_gpe_wake_mask(u8 action acpi_set_gpe_wake_mask(NULL, first_ec->gpe, action); }
+bool acpi_ec_other_gpes_active(void) +{ + return acpi_any_gpe_status_set(first_ec ? first_ec->gpe : U32_MAX); +} + bool acpi_ec_dispatch_gpe(void) { u32 ret; --- a/drivers/acpi/internal.h +++ b/drivers/acpi/internal.h @@ -202,6 +202,7 @@ void acpi_ec_remove_query_handler(struct
#ifdef CONFIG_PM_SLEEP void acpi_ec_flush_work(void); +bool acpi_ec_other_gpes_active(void); bool acpi_ec_dispatch_gpe(void); #endif
--- a/drivers/acpi/sleep.c +++ b/drivers/acpi/sleep.c @@ -1014,19 +1014,20 @@ static bool acpi_s2idle_wake(void) return true;
/* - * If there are no EC events to process and at least one of the - * other enabled GPEs is active, the wakeup is regarded as a - * genuine one. - * - * Note that the checks below must be carried out in this order - * to avoid returning prematurely due to a change of the EC GPE - * status bit from unset to set between the checks with the - * status bits of all the other GPEs unset. + * If the status bit is set for any enabled GPE other than the + * EC one, the wakeup is regarded as a genuine one. */ - if (acpi_any_gpe_status_set(U32_MAX) && !acpi_ec_dispatch_gpe()) + if (acpi_ec_other_gpes_active()) return true;
/* + * If the EC GPE status bit has not been set, the wakeup is + * regarded as a spurious one. + */ + if (!acpi_ec_dispatch_gpe()) + return false; + + /* * Cancel the wakeup and process all pending events in case * there are any wakeup ones in there. *
From: Martin Blumenstingl martin.blumenstingl@googlemail.com
commit 3f5b9959041e0db6dacbea80bb833bff5900999f upstream.
When CONFIG_DEVFREQ_THERMAL is disabled, all functions except of_devfreq_cooling_register_power() were already inlined. Also inline the last function to avoid link errors when multiple drivers call of_devfreq_cooling_register_power() and CONFIG_DEVFREQ_THERMAL is not set. The build failed with the following linker message: multiple definition of `of_devfreq_cooling_register_power' (which then lists all users of of_devfreq_cooling_register_power())
Thomas Zimmermann reported this problem [0] on a kernel config with CONFIG_DRM_LIMA={m,y}, CONFIG_DRM_PANFROST={m,y} and CONFIG_DEVFREQ_THERMAL=n after both, the lima and panfrost drivers gained devfreq cooling support.
[0] https://www.spinics.net/lists/dri-devel/msg252825.html
Fixes: a76caf55e5b356 ("thermal: Add devfreq cooling") Cc: stable@vger.kernel.org Reported-by: Thomas Zimmermann tzimmermann@suse.de Signed-off-by: Martin Blumenstingl martin.blumenstingl@googlemail.com Tested-by: Thomas Zimmermann tzimmermann@suse.de Signed-off-by: Daniel Lezcano daniel.lezcano@linaro.org Link: https://lore.kernel.org/r/20200403205133.1101808-1-martin.blumenstingl@googl... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- include/linux/devfreq_cooling.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/include/linux/devfreq_cooling.h +++ b/include/linux/devfreq_cooling.h @@ -75,7 +75,7 @@ void devfreq_cooling_unregister(struct t
#else /* !CONFIG_DEVFREQ_THERMAL */
-struct thermal_cooling_device * +static inline struct thermal_cooling_device * of_devfreq_cooling_register_power(struct device_node *np, struct devfreq *df, struct devfreq_cooling_power *dfc_power) {
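The failure mode is the generic one for headers: a non-static, non-inline function body in a header is emitted as an external definition by every translation unit that includes it, and the link then fails exactly as quoted above. A minimal hedged reproduction with made-up names, not the thermal code itself:

/* stub.h -- hypothetical header included by several .c files when the
 * feature is compiled out. */
#ifndef STUB_H
#define STUB_H

struct cooling_dev;	/* opaque */

/* BROKEN: a plain definition here is emitted by every includer, so the
 * link fails with "multiple definition of `feature_register'":
 *
 *	struct cooling_dev *feature_register(void) { return 0; }
 */

/* FIXED: 'static inline' gives each includer its own private copy,
 * which is what the one-line change above achieves. */
static inline struct cooling_dev *feature_register(void)
{
	return 0;	/* stub when the feature is compiled out */
}

#endif /* STUB_H */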
From: Sagi Grimberg sagi@grimberg.me
commit 9cda34e37489244a8c8628617e24b2dbc8a8edad upstream.
MAXH2CDATA is not zero based. Also, there is no reason to limit ourselves to 1M transfers, since we can easily do more. Make this an arbitrary limit of 16M.
Reported-by: Wenhua Liu liuw@vmware.com Cc: stable@vger.kernel.org # v5.0+ Signed-off-by: Sagi Grimberg sagi@grimberg.me Signed-off-by: Keith Busch kbusch@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/nvme/target/tcp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/nvme/target/tcp.c +++ b/drivers/nvme/target/tcp.c @@ -794,7 +794,7 @@ static int nvmet_tcp_handle_icreq(struct icresp->hdr.pdo = 0; icresp->hdr.plen = cpu_to_le32(icresp->hdr.hlen); icresp->pfv = cpu_to_le16(NVME_TCP_PFV_1_0); - icresp->maxdata = cpu_to_le32(0xffff); /* FIXME: support r2t */ + icresp->maxdata = cpu_to_le32(0x400000); /* 16M arbitrary limit */ icresp->cpda = 0; if (queue->hdr_digest) icresp->digest |= NVME_TCP_HDR_DIGEST_ENABLE;
From: James Smart jsmart2021@gmail.com
commit 8c5c660529209a0e324c1c1a35ce3f83d67a2aa5 upstream.
The original patch was meant to prevent the lldd from being unloaded while it was being used to talk to the boot device of the system. However, the end result of the original patch is that any driver unload while an nvme controller is live via the lldd is now prohibited. Given the module reference, the module teardown routine can't be called, thus there's no way, other than manual action, to terminate the controllers.
Fixes: 863fbae929c7 ("nvme_fc: add module to ops template to allow module references") Cc: stable@vger.kernel.org # v5.4+ Signed-off-by: James Smart jsmart2021@gmail.com Reviewed-by: Himanshu Madhani himanshu.madhani@oracle.com Signed-off-by: Christoph Hellwig hch@lst.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/nvme/host/fc.c | 14 ++------------ drivers/nvme/target/fcloop.c | 1 - drivers/scsi/lpfc/lpfc_nvme.c | 2 -- drivers/scsi/qla2xxx/qla_nvme.c | 1 - include/linux/nvme-fc-driver.h | 4 ---- 5 files changed, 2 insertions(+), 20 deletions(-)
--- a/drivers/nvme/host/fc.c +++ b/drivers/nvme/host/fc.c @@ -342,8 +342,7 @@ nvme_fc_register_localport(struct nvme_f !template->ls_req || !template->fcp_io || !template->ls_abort || !template->fcp_abort || !template->max_hw_queues || !template->max_sgl_segments || - !template->max_dif_sgl_segments || !template->dma_boundary || - !template->module) { + !template->max_dif_sgl_segments || !template->dma_boundary) { ret = -EINVAL; goto out_reghost_failed; } @@ -2016,7 +2015,6 @@ nvme_fc_ctrl_free(struct kref *ref) { struct nvme_fc_ctrl *ctrl = container_of(ref, struct nvme_fc_ctrl, ref); - struct nvme_fc_lport *lport = ctrl->lport; unsigned long flags;
if (ctrl->ctrl.tagset) { @@ -2043,7 +2041,6 @@ nvme_fc_ctrl_free(struct kref *ref) if (ctrl->ctrl.opts) nvmf_free_options(ctrl->ctrl.opts); kfree(ctrl); - module_put(lport->ops->module); }
static void @@ -3074,15 +3071,10 @@ nvme_fc_init_ctrl(struct device *dev, st goto out_fail; }
- if (!try_module_get(lport->ops->module)) { - ret = -EUNATCH; - goto out_free_ctrl; - } - idx = ida_simple_get(&nvme_fc_ctrl_cnt, 0, 0, GFP_KERNEL); if (idx < 0) { ret = -ENOSPC; - goto out_mod_put; + goto out_free_ctrl; }
ctrl->ctrl.opts = opts; @@ -3235,8 +3227,6 @@ out_free_queues: out_free_ida: put_device(ctrl->dev); ida_simple_remove(&nvme_fc_ctrl_cnt, ctrl->cnum); -out_mod_put: - module_put(lport->ops->module); out_free_ctrl: kfree(ctrl); out_fail: --- a/drivers/nvme/target/fcloop.c +++ b/drivers/nvme/target/fcloop.c @@ -850,7 +850,6 @@ fcloop_targetport_delete(struct nvmet_fc #define FCLOOP_DMABOUND_4G 0xFFFFFFFF
static struct nvme_fc_port_template fctemplate = { - .module = THIS_MODULE, .localport_delete = fcloop_localport_delete, .remoteport_delete = fcloop_remoteport_delete, .create_queue = fcloop_create_queue, --- a/drivers/scsi/lpfc/lpfc_nvme.c +++ b/drivers/scsi/lpfc/lpfc_nvme.c @@ -1985,8 +1985,6 @@ out_unlock:
/* Declare and initialization an instance of the FC NVME template. */ static struct nvme_fc_port_template lpfc_nvme_template = { - .module = THIS_MODULE, - /* initiator-based functions */ .localport_delete = lpfc_nvme_localport_delete, .remoteport_delete = lpfc_nvme_remoteport_delete, --- a/drivers/scsi/qla2xxx/qla_nvme.c +++ b/drivers/scsi/qla2xxx/qla_nvme.c @@ -610,7 +610,6 @@ static void qla_nvme_remoteport_delete(s }
static struct nvme_fc_port_template qla_nvme_fc_transport = { - .module = THIS_MODULE, .localport_delete = qla_nvme_localport_delete, .remoteport_delete = qla_nvme_remoteport_delete, .create_queue = qla_nvme_alloc_queue, --- a/include/linux/nvme-fc-driver.h +++ b/include/linux/nvme-fc-driver.h @@ -270,8 +270,6 @@ struct nvme_fc_remote_port { * * Host/Initiator Transport Entrypoints/Parameters: * - * @module: The LLDD module using the interface - * * @localport_delete: The LLDD initiates deletion of a localport via * nvme_fc_deregister_localport(). However, the teardown is * asynchronous. This routine is called upon the completion of the @@ -385,8 +383,6 @@ struct nvme_fc_remote_port { * Value is Mandatory. Allowed to be zero. */ struct nvme_fc_port_template { - struct module *module; - /* initiator-based functions */ void (*localport_delete)(struct nvme_fc_local_port *); void (*remoteport_delete)(struct nvme_fc_remote_port *);
From: Tom Lendacky thomas.lendacky@amd.com
commit f10e80a19b07b58fc2adad7945f8313b01503bae upstream.
When booting with SME active, EFI tables must be mapped unencrypted since they were built by UEFI in unencrypted memory. Update the list of tables to be checked during early_memremap() processing to account for the EFI TPM tables.
This fixes a bug where an EFI TPM log table has been created by UEFI, but it lives in memory that has been marked as usable rather than reserved.
Signed-off-by: Tom Lendacky thomas.lendacky@amd.com Signed-off-by: Ard Biesheuvel ardb@kernel.org Signed-off-by: Ingo Molnar mingo@kernel.org Cc: linux-efi@vger.kernel.org Cc: Ingo Molnar mingo@kernel.org Cc: Thomas Gleixner tglx@linutronix.de Cc: David Hildenbrand david@redhat.com Cc: Heinrich Schuchardt xypron.glpk@gmx.de Cc: stable@vger.kernel.org # v5.4+ Link: https://lore.kernel.org/r/4144cd813f113c20cdfa511cf59500a64e6015be.158266284... Link: https://lore.kernel.org/r/20200228121408.9075-2-ardb@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/platform/efi/efi.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/arch/x86/platform/efi/efi.c +++ b/arch/x86/platform/efi/efi.c @@ -85,6 +85,8 @@ static const unsigned long * const efi_t #ifdef CONFIG_EFI_RCI2_TABLE &rci2_table_phys, #endif + &efi.tpm_log, + &efi.tpm_final_log, };
u64 efi_setup; /* efi setup_data physical address */
From: Lukas Wunner lukas@wunner.de
commit 3e487d2e4aa466decd226353755c9d423e8fbacc upstream.
David Hoyer reports that powering pciehp slots up or down via sysfs may hang: The call to wait_event() in pciehp_sysfs_enable_slot() and _disable_slot() does not return because ctrl->ist_running remains true.
This flag, which was introduced by commit 157c1062fcd8 ("PCI: pciehp: Avoid returning prematurely from sysfs requests"), signifies that the IRQ thread pciehp_ist() is running. It is set to true at the top of pciehp_ist() and reset to false at the end. However there are two additional return statements in pciehp_ist() before which the commit neglected to reset the flag to false and wake up waiters for the flag.
That omission opens up the following race when powering up the slot:
* pciehp_ist() runs because a PCI_EXP_SLTSTA_PDC event was requested by pciehp_sysfs_enable_slot()
* pciehp_ist() turns on slot power via the following call stack: pciehp_handle_presence_or_link_change() -> pciehp_enable_slot() -> __pciehp_enable_slot() -> board_added() -> pciehp_power_on_slot()
* after slot power is turned on, the link comes up, resulting in a PCI_EXP_SLTSTA_DLLSC event
* the IRQ handler pciehp_isr() stores the event in ctrl->pending_events and returns IRQ_WAKE_THREAD
* the IRQ thread is already woken (it's bringing up the slot), but the genirq code remembers to re-run the IRQ thread after it has finished (such that it can deal with the new event) by setting IRQTF_RUNTHREAD via __handle_irq_event_percpu() -> __irq_wake_thread()
* the IRQ thread removes PCI_EXP_SLTSTA_DLLSC from ctrl->pending_events via board_added() -> pciehp_check_link_status() in order to deal with presence and link flaps per commit 6c35a1ac3da6 ("PCI: pciehp: Tolerate initially unstable link")
* after pciehp_ist() has successfully brought up the slot, it resets ctrl->ist_running to false and wakes up the sysfs requester
* the genirq code re-runs pciehp_ist(), which sets ctrl->ist_running to true but then returns with IRQ_NONE because ctrl->pending_events is empty
* pciehp_sysfs_enable_slot() is finally woken but notices that ctrl->ist_running is true, hence continues waiting
The only way to get the hung task going again is to trigger a hotplug event which brings down the slot, e.g. by yanking out the card.
The same race exists when powering down the slot because remove_board() likewise clears link or presence changes in ctrl->pending_events per commit 3943af9d01e9 ("PCI: pciehp: Ignore Link State Changes after powering off a slot") and thereby may cause a re-run of pciehp_ist() which returns with IRQ_NONE without resetting ctrl->ist_running to false.
Fix by adding a goto label before the teardown steps at the end of pciehp_ist() and jumping to that label from the two return statements which currently neglect to reset the ctrl->ist_running flag.
Fixes: 157c1062fcd8 ("PCI: pciehp: Avoid returning prematurely from sysfs requests") Link: https://lore.kernel.org/r/cca1effa488065cb055120aa01b65719094bdcb5.158453032... Reported-by: David Hoyer David.Hoyer@netapp.com Signed-off-by: Lukas Wunner lukas@wunner.de Signed-off-by: Bjorn Helgaas bhelgaas@google.com Reviewed-by: Keith Busch kbusch@kernel.org Cc: stable@vger.kernel.org # v4.19+ Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/pci/hotplug/pciehp_hpc.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-)
--- a/drivers/pci/hotplug/pciehp_hpc.c +++ b/drivers/pci/hotplug/pciehp_hpc.c @@ -625,17 +625,15 @@ static irqreturn_t pciehp_ist(int irq, v if (atomic_fetch_and(~RERUN_ISR, &ctrl->pending_events) & RERUN_ISR) { ret = pciehp_isr(irq, dev_id); enable_irq(irq); - if (ret != IRQ_WAKE_THREAD) { - pci_config_pm_runtime_put(pdev); - return ret; - } + if (ret != IRQ_WAKE_THREAD) + goto out; }
synchronize_hardirq(irq); events = atomic_xchg(&ctrl->pending_events, 0); if (!events) { - pci_config_pm_runtime_put(pdev); - return IRQ_NONE; + ret = IRQ_NONE; + goto out; }
/* Check Attention Button Pressed */ @@ -664,10 +662,12 @@ static irqreturn_t pciehp_ist(int irq, v pciehp_handle_presence_or_link_change(ctrl, events); up_read(&ctrl->reset_lock);
+ ret = IRQ_HANDLED; +out: pci_config_pm_runtime_put(pdev); ctrl->ist_running = false; wake_up(&ctrl->requester); - return IRQ_HANDLED; + return ret; }
static int pciehp_poll(void *data)
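The structural point of the fix is that every exit path of the IRQ thread, including the "nothing to do" ones, must run the same teardown: the runtime-PM put, clearing ist_running and waking the sysfs waiter. A hedged sketch of that single-exit shape, with placeholder context and helpers rather than the real pciehp code:

#include <linux/interrupt.h>
#include <linux/wait.h>

/* Illustrative stand-ins, not the pciehp structures or functions. */
struct ctrl {
	bool ist_running;
	wait_queue_head_t requester;
};

bool pending_events(struct ctrl *ctrl);
void handle_events(struct ctrl *ctrl);
void runtime_pm_put(struct ctrl *ctrl);

static irqreturn_t example_ist(int irq, void *dev_id)
{
	struct ctrl *ctrl = dev_id;
	irqreturn_t ret;

	ctrl->ist_running = true;

	if (!pending_events(ctrl)) {
		ret = IRQ_NONE;		/* early exit still reaches 'out' */
		goto out;
	}

	handle_events(ctrl);
	ret = IRQ_HANDLED;
out:
	runtime_pm_put(ctrl);
	ctrl->ist_running = false;
	wake_up(&ctrl->requester);	/* sysfs waiters re-check ist_running */
	return ret;
}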
From: Yicong Yang yangyicong@hisilicon.com
commit 58a3862a10a317a81097ab0c78aecebabb1704f5 upstream.
In pcie_config_aspm_l1ss(), we cleared the wrong bits when enabling ASPM L1 Substates. Instead of the L1.x enable bits (PCI_L1SS_CTL1_L1SS_MASK, 0xf), we cleared the Link Activation Interrupt Enable bit (PCI_L1SS_CAP_L1_PM_SS, 0x10).
Clear the L1.x enable bits before writing the new L1.x configuration.
[bhelgaas: changelog] Fixes: aeda9adebab8 ("PCI/ASPM: Configure L1 substate settings") Link: https://lore.kernel.org/r/1584093227-1292-1-git-send-email-yangyicong@hisili... Signed-off-by: Yicong Yang yangyicong@hisilicon.com Signed-off-by: Bjorn Helgaas bhelgaas@google.com CC: stable@vger.kernel.org # v4.11+ Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/pci/pcie/aspm.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/pci/pcie/aspm.c +++ b/drivers/pci/pcie/aspm.c @@ -747,9 +747,9 @@ static void pcie_config_aspm_l1ss(struct
/* Enable what we need to enable */ pci_clear_and_set_dword(parent, up_cap_ptr + PCI_L1SS_CTL1, - PCI_L1SS_CAP_L1_PM_SS, val); + PCI_L1SS_CTL1_L1SS_MASK, val); pci_clear_and_set_dword(child, dw_cap_ptr + PCI_L1SS_CTL1, - PCI_L1SS_CAP_L1_PM_SS, val); + PCI_L1SS_CTL1_L1SS_MASK, val); }
static void pcie_config_aspm_dev(struct pci_dev *pdev, u32 val)
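With the constants quoted in the changelog (0xf for the four L1.x enable bits, 0x10 for the capability bit), the effect of using the wrong clear mask in a generic clear-then-set update can be illustrated as follows. This is a hedged sketch paraphrasing what a clear-and-set register update does, not the aspm.c helper itself:

/* Generic clear-then-set read-modify-write on a register value. */
static unsigned int clear_and_set(unsigned int reg, unsigned int clear,
				  unsigned int set)
{
	reg &= ~clear;		/* wipe the bits about to be reprogrammed */
	reg |= set;		/* install the new configuration */
	return reg;
}

/* clear_and_set(reg, 0x10, val): bits 0-3 (the L1.x enables) survive the
 * clear step, so stale enables can linger alongside the new ones (bug).
 * clear_and_set(reg, 0x0f, val): bits 0-3 are fully replaced (fix).      */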
From: Sean V Kelley sean.v.kelley@linux.intel.com
commit b88bf6c3b6ff77948c153cac4e564642b0b90632 upstream.
The following was observed by Kar Hin Ong with RT patchset:
Backtrace: irq 19: nobody cared (try booting with the "irqpoll" option) CPU: 0 PID: 3329 Comm: irq/34-nipalk Tainted:4.14.87-rt49 #1 Hardware name: National Instruments NI PXIe-8880/NI PXIe-8880, BIOS 2.1.5f1 01/09/2020 Call Trace: <IRQ> ? dump_stack+0x46/0x5e ? __report_bad_irq+0x2e/0xb0 ? note_interrupt+0x242/0x290 ? nNIKAL100_memoryRead16+0x8/0x10 [nikal] ? handle_irq_event_percpu+0x55/0x70 ? handle_irq_event+0x4f/0x80 ? handle_fasteoi_irq+0x81/0x180 ? handle_irq+0x1c/0x30 ? do_IRQ+0x41/0xd0 ? common_interrupt+0x84/0x84 </IRQ> ... handlers: [<ffffffffb3297200>] irq_default_primary_handler threaded [<ffffffffb3669180>] usb_hcd_irq Disabling IRQ #19
The problem is that this device triggers boot interrupts due to threaded interrupt handling and masking of the IO-APIC. These boot interrupts are then forwarded to the legacy PCH's PIRQ lines, where there is no handler present for the device.
Whenever a PCI device fires interrupt (INTx) to Pin 20 of IOAPIC 2 (GSI 44), the kernel receives two interrupts:
1. Interrupt from Pin 20 of IOAPIC 2 -> Expected 2. Interrupt from Pin 19 of IOAPIC 1 -> UNEXPECTED
Quirks for disabling boot interrupts (preferred) or rerouting the handler exist but do not address these Xeon chipsets' mechanism: https://lore.kernel.org/lkml/12131949181903-git-send-email-sassmann@suse.de/
Add a new mechanism via PCI config space (CFG) for those chipsets that support the CIPINTRC register's dis_intx_rout2ich bit.
Link: https://lore.kernel.org/r/20200220192930.64820-2-sean.v.kelley@linux.intel.c... Reported-by: Kar Hin Ong kar.hin.ong@ni.com Tested-by: Kar Hin Ong kar.hin.ong@ni.com Signed-off-by: Sean V Kelley sean.v.kelley@linux.intel.com Signed-off-by: Bjorn Helgaas bhelgaas@google.com Reviewed-by: Thomas Gleixner tglx@linutronix.de Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/pci/quirks.c | 80 ++++++++++++++++++++++++++++++++++++++++++++++----- 1 file changed, 73 insertions(+), 7 deletions(-)
--- a/drivers/pci/quirks.c +++ b/drivers/pci/quirks.c @@ -1970,26 +1970,92 @@ DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_I /* * IO-APIC1 on 6300ESB generates boot interrupts, see Intel order no * 300641-004US, section 5.7.3. + * + * Core IO on Xeon E5 1600/2600/4600, see Intel order no 326509-003. + * Core IO on Xeon E5 v2, see Intel order no 329188-003. + * Core IO on Xeon E7 v2, see Intel order no 329595-002. + * Core IO on Xeon E5 v3, see Intel order no 330784-003. + * Core IO on Xeon E7 v3, see Intel order no 332315-001US. + * Core IO on Xeon E5 v4, see Intel order no 333810-002US. + * Core IO on Xeon E7 v4, see Intel order no 332315-001US. + * Core IO on Xeon D-1500, see Intel order no 332051-001. + * Core IO on Xeon Scalable, see Intel order no 610950. */ -#define INTEL_6300_IOAPIC_ABAR 0x40 +#define INTEL_6300_IOAPIC_ABAR 0x40 /* Bus 0, Dev 29, Func 5 */ #define INTEL_6300_DISABLE_BOOT_IRQ (1<<14)
+#define INTEL_CIPINTRC_CFG_OFFSET 0x14C /* Bus 0, Dev 5, Func 0 */ +#define INTEL_CIPINTRC_DIS_INTX_ICH (1<<25) + static void quirk_disable_intel_boot_interrupt(struct pci_dev *dev) { u16 pci_config_word; + u32 pci_config_dword;
if (noioapicquirk) return;
- pci_read_config_word(dev, INTEL_6300_IOAPIC_ABAR, &pci_config_word); - pci_config_word |= INTEL_6300_DISABLE_BOOT_IRQ; - pci_write_config_word(dev, INTEL_6300_IOAPIC_ABAR, pci_config_word); - + switch (dev->device) { + case PCI_DEVICE_ID_INTEL_ESB_10: + pci_read_config_word(dev, INTEL_6300_IOAPIC_ABAR, + &pci_config_word); + pci_config_word |= INTEL_6300_DISABLE_BOOT_IRQ; + pci_write_config_word(dev, INTEL_6300_IOAPIC_ABAR, + pci_config_word); + break; + case 0x3c28: /* Xeon E5 1600/2600/4600 */ + case 0x0e28: /* Xeon E5/E7 V2 */ + case 0x2f28: /* Xeon E5/E7 V3,V4 */ + case 0x6f28: /* Xeon D-1500 */ + case 0x2034: /* Xeon Scalable Family */ + pci_read_config_dword(dev, INTEL_CIPINTRC_CFG_OFFSET, + &pci_config_dword); + pci_config_dword |= INTEL_CIPINTRC_DIS_INTX_ICH; + pci_write_config_dword(dev, INTEL_CIPINTRC_CFG_OFFSET, + pci_config_dword); + break; + default: + return; + } pci_info(dev, "disabled boot interrupts on device [%04x:%04x]\n", dev->vendor, dev->device); } -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, quirk_disable_intel_boot_interrupt); -DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, quirk_disable_intel_boot_interrupt); +/* + * Device 29 Func 5 Device IDs of IO-APIC + * containing ABAR—APIC1 Alternate Base Address Register + */ +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, + quirk_disable_intel_boot_interrupt); +DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, + quirk_disable_intel_boot_interrupt); + +/* + * Device 5 Func 0 Device IDs of Core IO modules/hubs + * containing Coherent Interface Protocol Interrupt Control + * + * Device IDs obtained from volume 2 datasheets of commented + * families above. + */ +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x3c28, + quirk_disable_intel_boot_interrupt); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0e28, + quirk_disable_intel_boot_interrupt); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2f28, + quirk_disable_intel_boot_interrupt); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x6f28, + quirk_disable_intel_boot_interrupt); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2034, + quirk_disable_intel_boot_interrupt); +DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x3c28, + quirk_disable_intel_boot_interrupt); +DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x0e28, + quirk_disable_intel_boot_interrupt); +DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x2f28, + quirk_disable_intel_boot_interrupt); +DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x6f28, + quirk_disable_intel_boot_interrupt); +DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x2034, + quirk_disable_intel_boot_interrupt);
/* Disable boot interrupts on HT-1000 */ #define BC_HT1000_FEATURE_REG 0x64
From: Bjorn Andersson bjorn.andersson@linaro.org
commit 604f3956524a6a53c1e3dd27b4b685b664d181ec upstream.
There exist non-bridge PCIe devices with PCI_VENDOR_ID_QCOM, so limit the fixup to only the relevant PCIe bridges.
Fixes: 322f03436692 ("PCI: qcom: Use default config space read function") Signed-off-by: Bjorn Andersson bjorn.andersson@linaro.org Signed-off-by: Lorenzo Pieralisi lorenzo.pieralisi@arm.com Acked-by: Stanimir Varbanov svarbanov@mm-sol.com Cc: stable@vger.kernel.org # v5.2+ Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/pci/controller/dwc/pcie-qcom.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-)
--- a/drivers/pci/controller/dwc/pcie-qcom.c +++ b/drivers/pci/controller/dwc/pcie-qcom.c @@ -1289,7 +1289,13 @@ static void qcom_fixup_class(struct pci_ { dev->class = PCI_CLASS_BRIDGE_PCI << 8; } -DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, PCI_ANY_ID, qcom_fixup_class); +DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0101, qcom_fixup_class); +DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0104, qcom_fixup_class); +DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0106, qcom_fixup_class); +DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0107, qcom_fixup_class); +DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0302, qcom_fixup_class); +DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1000, qcom_fixup_class); +DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1001, qcom_fixup_class);
static struct platform_driver qcom_pcie_driver = { .probe = qcom_pcie_probe,
From: Kishon Vijay Abraham I kishon@ti.com
commit 04e046ca57ebed3943422dee10eec9e73aec081e upstream.
pci-epc-mem uses a bitmap to manage the Endpoint outbound (OB) address region. This address region will be shared by multiple endpoint functions (in the case of a multi-function endpoint) and it has to be protected from concurrent access to avoid ending up with an inconsistent allocation state.
Use a mutex to protect bitmap updates to prevent the memory allocation API from returning incorrect addresses.
Signed-off-by: Kishon Vijay Abraham I kishon@ti.com Signed-off-by: Lorenzo Pieralisi lorenzo.pieralisi@arm.com Cc: stable@vger.kernel.org # v4.14+ Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/pci/endpoint/pci-epc-mem.c | 10 ++++++++-- include/linux/pci-epc.h | 3 +++ 2 files changed, 11 insertions(+), 2 deletions(-)
--- a/drivers/pci/endpoint/pci-epc-mem.c +++ b/drivers/pci/endpoint/pci-epc-mem.c @@ -79,6 +79,7 @@ int __pci_epc_mem_init(struct pci_epc *e mem->page_size = page_size; mem->pages = pages; mem->size = size; + mutex_init(&mem->lock);
epc->mem = mem;
@@ -122,7 +123,7 @@ void __iomem *pci_epc_mem_alloc_addr(str phys_addr_t *phys_addr, size_t size) { int pageno; - void __iomem *virt_addr; + void __iomem *virt_addr = NULL; struct pci_epc_mem *mem = epc->mem; unsigned int page_shift = ilog2(mem->page_size); int order; @@ -130,15 +131,18 @@ void __iomem *pci_epc_mem_alloc_addr(str size = ALIGN(size, mem->page_size); order = pci_epc_mem_get_order(mem, size);
+ mutex_lock(&mem->lock); pageno = bitmap_find_free_region(mem->bitmap, mem->pages, order); if (pageno < 0) - return NULL; + goto ret;
*phys_addr = mem->phys_base + ((phys_addr_t)pageno << page_shift); virt_addr = ioremap(*phys_addr, size); if (!virt_addr) bitmap_release_region(mem->bitmap, pageno, order);
+ret: + mutex_unlock(&mem->lock); return virt_addr; } EXPORT_SYMBOL_GPL(pci_epc_mem_alloc_addr); @@ -164,7 +168,9 @@ void pci_epc_mem_free_addr(struct pci_ep pageno = (phys_addr - mem->phys_base) >> page_shift; size = ALIGN(size, mem->page_size); order = pci_epc_mem_get_order(mem, size); + mutex_lock(&mem->lock); bitmap_release_region(mem->bitmap, pageno, order); + mutex_unlock(&mem->lock); } EXPORT_SYMBOL_GPL(pci_epc_mem_free_addr);
--- a/include/linux/pci-epc.h +++ b/include/linux/pci-epc.h @@ -71,6 +71,7 @@ struct pci_epc_ops { * @bitmap: bitmap to manage the PCI address space * @pages: number of bits representing the address region * @page_size: size of each page + * @lock: mutex to protect bitmap */ struct pci_epc_mem { phys_addr_t phys_base; @@ -78,6 +79,8 @@ struct pci_epc_mem { unsigned long *bitmap; size_t page_size; int pages; + /* mutex to protect against concurrent access for memory allocation*/ + struct mutex lock; };
/**
From: Gao Xiang gaoxiang25@huawei.com
commit 9d5a09c6f3b5fb85af20e3a34827b5d27d152b34 upstream.
The remaining count should not include successful shrink attempts.
Fixes: e7e9a307be9d ("staging: erofs: introduce workstation for decompression") Cc: stable@vger.kernel.org # 4.19+ Link: https://lore.kernel.org/r/20200226081008.86348-1-gaoxiang25@huawei.com Reviewed-by: Chao Yu yuchao0@huawei.com Signed-off-by: Gao Xiang gaoxiang25@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/erofs/utils.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/erofs/utils.c +++ b/fs/erofs/utils.c @@ -293,7 +293,7 @@ static unsigned long erofs_shrink_scan(s spin_unlock(&erofs_sb_list_lock); sbi->shrinker_run_no = run_no;
- freed += erofs_shrink_workstation(sbi, nr); + freed += erofs_shrink_workstation(sbi, nr - freed);
spin_lock(&erofs_sb_list_lock); /* Get the next list element before we move this one */
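In other words, each pass of the shrink loop must be handed the remaining budget, not the original one. A hedged sketch of the pattern with hypothetical names (shrink_one() and struct sb_info stand in for the erofs internals):

struct sb_info;
unsigned long shrink_one(struct sb_info *sbi, unsigned long nr_to_scan);

static unsigned long shrink_all(struct sb_info **sbs, int count,
				unsigned long nr)
{
	unsigned long freed = 0;
	int i;

	for (i = 0; i < count && freed < nr; i++) {
		/* remaining budget is nr - freed, not the original nr */
		freed += shrink_one(sbs[i], nr - freed);
	}

	return freed;
}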
From: Vincent Guittot vincent.guittot@linaro.org
commit fe61468b2cbc2b7ce5f8d3bf32ae5001d4c434e9 upstream.
When a cfs_rq is throttled, it and its children are removed from the leaf list, but their nr_running is not changed and can stay higher than 1. When a task is enqueued in this throttled branch, the cfs_rqs must be added back in order to ensure correct ordering in the list, but this can only happen if nr_running == 1. When cfs bandwidth is used, call list_add_leaf_cfs_rq() unconditionally when enqueuing an entity to make sure that the complete branch will be added.
Similarly, unthrottle_cfs_rq() can stop adding cfs_rqs to the list when a parent is throttled. Iterate over the remaining entities to ensure that the complete branch will be added to the list.
Reported-by: Christian Borntraeger borntraeger@de.ibm.com Signed-off-by: Vincent Guittot vincent.guittot@linaro.org Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Reviewed-by: Dietmar Eggemann dietmar.eggemann@arm.com Tested-by: Christian Borntraeger borntraeger@de.ibm.com Tested-by: Dietmar Eggemann dietmar.eggemann@arm.com Cc: stable@vger.kernel.org Cc: stable@vger.kernel.org #v5.1+ Link: https://lkml.kernel.org/r/20200306135257.25044-1-vincent.guittot@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/sched/fair.c | 26 ++++++++++++++++++++++---- 1 file changed, 22 insertions(+), 4 deletions(-)
--- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -3943,6 +3943,7 @@ static inline void check_schedstat_requi #endif }
+static inline bool cfs_bandwidth_used(void);
/* * MIGRATION @@ -4021,10 +4022,16 @@ enqueue_entity(struct cfs_rq *cfs_rq, st __enqueue_entity(cfs_rq, se); se->on_rq = 1;
- if (cfs_rq->nr_running == 1) { + /* + * When bandwidth control is enabled, cfs might have been removed + * because of a parent been throttled but cfs->nr_running > 1. Try to + * add it unconditionnally. + */ + if (cfs_rq->nr_running == 1 || cfs_bandwidth_used()) list_add_leaf_cfs_rq(cfs_rq); + + if (cfs_rq->nr_running == 1) check_enqueue_throttle(cfs_rq); - } }
static void __clear_buddies_last(struct sched_entity *se) @@ -4605,11 +4612,22 @@ void unthrottle_cfs_rq(struct cfs_rq *cf break; }
- assert_list_leaf_cfs_rq(rq); - if (!se) add_nr_running(rq, task_delta);
+ /* + * The cfs_rq_throttled() breaks in the above iteration can result in + * incomplete leaf list maintenance, resulting in triggering the + * assertion below. + */ + for_each_sched_entity(se) { + cfs_rq = cfs_rq_of(se); + + list_add_leaf_cfs_rq(cfs_rq); + } + + assert_list_leaf_cfs_rq(rq); + /* Determine whether we need to wake up potentially idle CPU: */ if (rq->curr == rq->idle && rq->cfs.nr_running) resched_curr(rq);
From: Matthew Garrett matthewgarrett@google.com
commit 805fa88e0780b7ce1cc9b649dd91a0a7164c6eb4 upstream.
If a TPM is in disabled state, it's reasonable for it to have an empty log. Bailing out of probe in this case means that the PPI interface isn't available, so there's no way to then enable the TPM from the OS. In general it seems reasonable to ignore log errors - they shouldn't interfere with any other TPM functionality.
Signed-off-by: Matthew Garrett matthewgarrett@google.com Cc: stable@vger.kernel.org # 4.19.x Reviewed-by: Jerry Snitselaar jsnitsel@redhat.com Reviewed-by: Jarkko Sakkinen jarkko.sakkinen@linux.intel.com Signed-off-by: Jarkko Sakkinen jarkko.sakkinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/char/tpm/eventlog/common.c | 12 ++++-------- drivers/char/tpm/tpm-chip.c | 4 +--- drivers/char/tpm/tpm.h | 2 +- 3 files changed, 6 insertions(+), 12 deletions(-)
--- a/drivers/char/tpm/eventlog/common.c +++ b/drivers/char/tpm/eventlog/common.c @@ -99,11 +99,8 @@ static int tpm_read_log(struct tpm_chip * * If an event log is found then the securityfs files are setup to * export it to userspace, otherwise nothing is done. - * - * Returns -ENODEV if the firmware has no event log or securityfs is not - * supported. */ -int tpm_bios_log_setup(struct tpm_chip *chip) +void tpm_bios_log_setup(struct tpm_chip *chip) { const char *name = dev_name(&chip->dev); unsigned int cnt; @@ -112,7 +109,7 @@ int tpm_bios_log_setup(struct tpm_chip *
rc = tpm_read_log(chip); if (rc < 0) - return rc; + return; log_version = rc;
cnt = 0; @@ -158,13 +155,12 @@ int tpm_bios_log_setup(struct tpm_chip * cnt++; }
- return 0; + return;
err: - rc = PTR_ERR(chip->bios_dir[cnt]); chip->bios_dir[cnt] = NULL; tpm_bios_log_teardown(chip); - return rc; + return; }
void tpm_bios_log_teardown(struct tpm_chip *chip) --- a/drivers/char/tpm/tpm-chip.c +++ b/drivers/char/tpm/tpm-chip.c @@ -596,9 +596,7 @@ int tpm_chip_register(struct tpm_chip *c
tpm_sysfs_add_device(chip);
- rc = tpm_bios_log_setup(chip); - if (rc != 0 && rc != -ENODEV) - return rc; + tpm_bios_log_setup(chip);
tpm_add_ppi(chip);
--- a/drivers/char/tpm/tpm.h +++ b/drivers/char/tpm/tpm.h @@ -235,7 +235,7 @@ int tpm2_prepare_space(struct tpm_chip * int tpm2_commit_space(struct tpm_chip *chip, struct tpm_space *space, void *buf, size_t *bufsiz);
-int tpm_bios_log_setup(struct tpm_chip *chip); +void tpm_bios_log_setup(struct tpm_chip *chip); void tpm_bios_log_teardown(struct tpm_chip *chip); int tpm_dev_common_init(void); void tpm_dev_common_exit(void);
From: Vasily Averin vvs@virtuozzo.com
commit d7a47b96ed1102551eb7325f97937e276fb91045 upstream.
If the .next function does not change the position index, the following .show call will repeat the output for the current position index.
In the case of /sys/kernel/security/tpm0/ascii_bios_measurements and binary_bios_measurements: 1) a read after an lseek beyond the end of the file generates the whole last line. 2) a read after an lseek into the middle of the last line generates the expected end of the last line and then, unexpectedly, the whole last line once again.
Cc: stable@vger.kernel.org # 4.19.x Fixes: 1f4aace60b0e ("fs/seq_file.c: simplify seq_file iteration code ...") Link: https://bugzilla.kernel.org/show_bug.cgi?id=206283 Signed-off-by: Vasily Averin vvs@virtuozzo.com Reviewed-by: Jarkko Sakkinen jarkko.sakkinen@linux.intel.com Signed-off-by: Jarkko Sakkinen jarkko.sakkinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/char/tpm/eventlog/tpm1.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/char/tpm/eventlog/tpm1.c +++ b/drivers/char/tpm/eventlog/tpm1.c @@ -115,6 +115,7 @@ static void *tpm1_bios_measurements_next u32 converted_event_size; u32 converted_event_type;
+ (*pos)++; converted_event_size = do_endian_conversion(event->event_size);
v += sizeof(struct tcpa_event) + converted_event_size; @@ -132,7 +133,6 @@ static void *tpm1_bios_measurements_next ((v + sizeof(struct tcpa_event) + converted_event_size) > limit)) return NULL;
- (*pos)++; return v; }
From: Vasily Averin vvs@virtuozzo.com
commit f9bf8adb55cd5a357b247a16aafddf8c97b276e0 upstream.
If the .next function does not change the position index, the following .show call will repeat the output for the current position index.
For /sys/kernel/security/tpm0/binary_bios_measurements: 1) a read after an lseek beyond the end of the file generates the whole last line. 2) a read after an lseek into the middle of the last line generates the expected end of the last line and then, unexpectedly, the whole last line once again.
Cc: stable@vger.kernel.org # 4.19.x Fixes: 1f4aace60b0e ("fs/seq_file.c: simplify seq_file iteration code ...") Link: https://bugzilla.kernel.org/show_bug.cgi?id=206283 Signed-off-by: Vasily Averin vvs@virtuozzo.com Reviewed-by: Jarkko Sakkinen jarkko.sakkinen@linux.intel.com Signed-off-by: Jarkko Sakkinen jarkko.sakkinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/char/tpm/eventlog/tpm2.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/char/tpm/eventlog/tpm2.c +++ b/drivers/char/tpm/eventlog/tpm2.c @@ -94,6 +94,7 @@ static void *tpm2_bios_measurements_next size_t event_size; void *marker;
+ (*pos)++; event_header = log->bios_event_log;
if (v == SEQ_START_TOKEN) { @@ -118,7 +119,6 @@ static void *tpm2_bios_measurements_next if (((v + event_size) >= limit) || (event_size == 0)) return NULL;
- (*pos)++; return v; }
From: Yang Xu xuyang2018.jy@cn.fujitsu.com
commit 2e356101e72ab1361821b3af024d64877d9a798d upstream.
Currently, when we add a new user key, the call chain is as below:
add_key()
  key_create_or_update()
    key_alloc()
    __key_instantiate_and_link
      generic_key_instantiate
        key_payload_reserve
        ......
Since commit a08bf91ce28e ("KEYS: allow reaching the keys quotas exactly"), we can reach max bytes/keys in key_alloc, but we forget to remove this limit when we reserve space for the payload in key_payload_reserve. So we can only reach max keys but not max bytes when there is a delta between plen and type->def_datalen. Remove this limit when instantiating the key, so we stay consistent with key_alloc.
Also, fix the similar problem in keyctl_chown_key().
Fixes: 0b77f5bfb45c ("keys: make the keyring quotas controllable through /proc/sys") Fixes: a08bf91ce28e ("KEYS: allow reaching the keys quotas exactly") Cc: stable@vger.kernel.org # 5.0.x Cc: Eric Biggers ebiggers@google.com Signed-off-by: Yang Xu xuyang2018.jy@cn.fujitsu.com Reviewed-by: Jarkko Sakkinen jarkko.sakkinen@linux.intel.com Reviewed-by: Eric Biggers ebiggers@google.com Signed-off-by: Jarkko Sakkinen jarkko.sakkinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- security/keys/key.c | 2 +- security/keys/keyctl.c | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-)
--- a/security/keys/key.c +++ b/security/keys/key.c @@ -381,7 +381,7 @@ int key_payload_reserve(struct key *key, spin_lock(&key->user->lock);
if (delta > 0 && - (key->user->qnbytes + delta >= maxbytes || + (key->user->qnbytes + delta > maxbytes || key->user->qnbytes + delta < key->user->qnbytes)) { ret = -EDQUOT; } --- a/security/keys/keyctl.c +++ b/security/keys/keyctl.c @@ -937,8 +937,8 @@ long keyctl_chown_key(key_serial_t id, u key_quota_root_maxbytes : key_quota_maxbytes;
spin_lock(&newowner->lock); - if (newowner->qnkeys + 1 >= maxkeys || - newowner->qnbytes + key->quotalen >= maxbytes || + if (newowner->qnkeys + 1 > maxkeys || + newowner->qnbytes + key->quotalen > maxbytes || newowner->qnbytes + key->quotalen < newowner->qnbytes) goto quota_overrun;
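For illustration, a minimal user-space sketch with made-up numbers (not part of the patch) showing why the old ">=" comparison rejects a key whose payload lands exactly on the quota, while the fixed ">" comparison accepts it:

#include <stdio.h>

int main(void)
{
	/* Illustrative values only: 100 new bytes land exactly on a
	 * 20000-byte quota. */
	unsigned int maxbytes = 20000, qnbytes = 19900, delta = 100;

	printf("old check (>=): %s\n",
	       qnbytes + delta >= maxbytes ? "EDQUOT" : "ok");
	printf("new check (>):  %s\n",
	       qnbytes + delta > maxbytes ? "EDQUOT" : "ok");
	return 0;
}

The overflow check (qnbytes + delta < qnbytes) in the patch is unchanged and still guards against wraparound.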
From: Ludovic Barre ludovic.barre@st.com
commit d4a384cb563e555ce00255f5f496b503e6cc6358 upstream.
The busy_d0 line transition can be very fast. The busy request may be completed by busy_d0end without waiting for the busy_d0 steps. Therefore, clear the busy_d0end irq flag even if there is no busy_status.
Fixes: 0e68de6aa7b1 ("mmc: mmci: sdmmc: add busy_complete callback") Cc: stable@vger.kernel.org Signed-off-by: Ludovic Barre ludovic.barre@st.com Link: https://lore.kernel.org/r/20200325143409.13005-2-ludovic.barre@st.com Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mmc/host/mmci_stm32_sdmmc.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/mmc/host/mmci_stm32_sdmmc.c +++ b/drivers/mmc/host/mmci_stm32_sdmmc.c @@ -315,11 +315,11 @@ complete: if (host->busy_status) { writel_relaxed(mask & ~host->variant->busy_detect_mask, base + MMCIMASK0); - writel_relaxed(host->variant->busy_detect_mask, - base + MMCICLEAR); host->busy_status = 0; }
+ writel_relaxed(host->variant->busy_detect_mask, base + MMCICLEAR); + return true; }
From: Paul E. McKenney paulmck@kernel.org
commit 127e29815b4b2206c0a97ac1d83f92ffc0e25c34 upstream.
Currently, rcu_barrier() ignores offline CPUs. However, it is possible for an offline no-CBs CPU to have callbacks queued, and rcu_barrier() must wait for those callbacks. This commit therefore makes rcu_barrier() directly invoke rcu_barrier_func() with interrupts disabled for such CPUs. This requires passing the CPU number into that function so that it can entrain the rcu_barrier() callback onto the correct CPU's callback list, given that the code must instead execute on the current CPU.
While in the area, this commit fixes a bug where the first CPU's callback might have been invoked before rcu_segcblist_entrain() returned, which would also result in an early wakeup.
Fixes: 5d6742b37727 ("rcu/nocb: Use rcu_segcblist for no-CBs CPUs") Signed-off-by: Paul E. McKenney paulmck@kernel.org [ paulmck: Apply optimization feedback from Boqun Feng. ] Cc: stable@vger.kernel.org # 5.5.x Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- include/trace/events/rcu.h | 1 + kernel/rcu/tree.c | 36 ++++++++++++++++++++++++------------ 2 files changed, 25 insertions(+), 12 deletions(-)
--- a/include/trace/events/rcu.h +++ b/include/trace/events/rcu.h @@ -720,6 +720,7 @@ TRACE_EVENT_RCU(rcu_torture_read, * "Begin": rcu_barrier() started. * "EarlyExit": rcu_barrier() piggybacked, thus early exit. * "Inc1": rcu_barrier() piggyback check counter incremented. + * "OfflineNoCBQ": rcu_barrier() found offline no-CBs CPU with callbacks. * "OnlineQ": rcu_barrier() found online CPU with callbacks. * "OnlineNQ": rcu_barrier() found online CPU, no callbacks. * "IRQ": An rcu_barrier_callback() callback posted on remote CPU. --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -2888,9 +2888,10 @@ static void rcu_barrier_callback(struct /* * Called with preemption disabled, and from cross-cpu IRQ context. */ -static void rcu_barrier_func(void *unused) +static void rcu_barrier_func(void *cpu_in) { - struct rcu_data *rdp = raw_cpu_ptr(&rcu_data); + uintptr_t cpu = (uintptr_t)cpu_in; + struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
rcu_barrier_trace(TPS("IRQ"), -1, rcu_state.barrier_sequence); rdp->barrier_head.func = rcu_barrier_callback; @@ -2917,7 +2918,7 @@ static void rcu_barrier_func(void *unuse */ void rcu_barrier(void) { - int cpu; + uintptr_t cpu; struct rcu_data *rdp; unsigned long s = rcu_seq_snap(&rcu_state.barrier_sequence);
@@ -2940,13 +2941,14 @@ void rcu_barrier(void) rcu_barrier_trace(TPS("Inc1"), -1, rcu_state.barrier_sequence);
/* - * Initialize the count to one rather than to zero in order to - * avoid a too-soon return to zero in case of a short grace period - * (or preemption of this task). Exclude CPU-hotplug operations - * to ensure that no offline CPU has callbacks queued. + * Initialize the count to two rather than to zero in order + * to avoid a too-soon return to zero in case of an immediate + * invocation of the just-enqueued callback (or preemption of + * this task). Exclude CPU-hotplug operations to ensure that no + * offline non-offloaded CPU has callbacks queued. */ init_completion(&rcu_state.barrier_completion); - atomic_set(&rcu_state.barrier_cpu_count, 1); + atomic_set(&rcu_state.barrier_cpu_count, 2); get_online_cpus();
/* @@ -2956,13 +2958,23 @@ void rcu_barrier(void) */ for_each_possible_cpu(cpu) { rdp = per_cpu_ptr(&rcu_data, cpu); - if (!cpu_online(cpu) && + if (cpu_is_offline(cpu) && !rcu_segcblist_is_offloaded(&rdp->cblist)) continue; - if (rcu_segcblist_n_cbs(&rdp->cblist)) { + if (rcu_segcblist_n_cbs(&rdp->cblist) && cpu_online(cpu)) { rcu_barrier_trace(TPS("OnlineQ"), cpu, rcu_state.barrier_sequence); - smp_call_function_single(cpu, rcu_barrier_func, NULL, 1); + smp_call_function_single(cpu, rcu_barrier_func, (void *)cpu, 1); + } else if (rcu_segcblist_n_cbs(&rdp->cblist) && + cpu_is_offline(cpu)) { + rcu_barrier_trace(TPS("OfflineNoCBQ"), cpu, + rcu_state.barrier_sequence); + local_irq_disable(); + rcu_barrier_func((void *)cpu); + local_irq_enable(); + } else if (cpu_is_offline(cpu)) { + rcu_barrier_trace(TPS("OfflineNoCBNoQ"), cpu, + rcu_state.barrier_sequence); } else { rcu_barrier_trace(TPS("OnlineNQ"), cpu, rcu_state.barrier_sequence); @@ -2974,7 +2986,7 @@ void rcu_barrier(void) * Now that we have an rcu_barrier_callback() callback on each * CPU, and thus each counted, remove the initial count. */ - if (atomic_dec_and_test(&rcu_state.barrier_cpu_count)) + if (atomic_sub_and_test(2, &rcu_state.barrier_cpu_count)) complete(&rcu_state.barrier_completion);
/* Wait for all rcu_barrier_callback() callbacks to be invoked. */
From: Thomas Gleixner tglx@linutronix.de
commit e98eac6ff1b45e4e73f2e6031b37c256ccb5d36b upstream.
A recent change to freeze_secondary_cpus() which added an early abort if a wakeup is pending missed the fact that the function is also invoked for shutdown, reboot and kexec via disable_nonboot_cpus().
In case of disable_nonboot_cpus() the wakeup event needs to be ignored as the purpose is to terminate the currently running kernel.
Add a 'suspend' argument which is only set when the freeze is in the context of a suspend operation. If it is not set, then a possibly pending wakeup event is ignored.
Fixes: a66d955e910a ("cpu/hotplug: Abort disabling secondary CPUs if wakeup is pending") Reported-by: Boqun Feng boqun.feng@gmail.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Cc: Pavankumar Kondeti pkondeti@codeaurora.org Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/874kuaxdiz.fsf@nanos.tec.linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- include/linux/cpu.h | 12 +++++++++--- kernel/cpu.c | 4 ++-- 2 files changed, 11 insertions(+), 5 deletions(-)
--- a/include/linux/cpu.h +++ b/include/linux/cpu.h @@ -138,12 +138,18 @@ static inline void get_online_cpus(void) static inline void put_online_cpus(void) { cpus_read_unlock(); }
#ifdef CONFIG_PM_SLEEP_SMP -extern int freeze_secondary_cpus(int primary); +int __freeze_secondary_cpus(int primary, bool suspend); +static inline int freeze_secondary_cpus(int primary) +{ + return __freeze_secondary_cpus(primary, true); +} + static inline int disable_nonboot_cpus(void) { - return freeze_secondary_cpus(0); + return __freeze_secondary_cpus(0, false); } -extern void enable_nonboot_cpus(void); + +void enable_nonboot_cpus(void);
static inline int suspend_disable_secondary_cpus(void) { --- a/kernel/cpu.c +++ b/kernel/cpu.c @@ -1212,7 +1212,7 @@ EXPORT_SYMBOL_GPL(cpu_up); #ifdef CONFIG_PM_SLEEP_SMP static cpumask_var_t frozen_cpus;
-int freeze_secondary_cpus(int primary) +int __freeze_secondary_cpus(int primary, bool suspend) { int cpu, error = 0;
@@ -1237,7 +1237,7 @@ int freeze_secondary_cpus(int primary) if (cpu == primary) continue;
- if (pm_wakeup_pending()) { + if (suspend && pm_wakeup_pending()) { pr_info("Wakeup pending. Abort CPU freeze\n"); error = -EBUSY; break;
From: Thomas Gleixner tglx@linutronix.de
commit a740a423c36932695b01a3e920f697bc55b05fec upstream.
Interrupts cannot be injected when the interrupt is not activated or when a replay is already in progress.
Fixes: 536e2e34bd00 ("genirq/debugfs: Triggering of interrupts from userspace") Signed-off-by: Thomas Gleixner tglx@linutronix.de Acked-by: Marc Zyngier maz@kernel.org Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20200306130623.500019114@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/irq/debugfs.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-)
--- a/kernel/irq/debugfs.c +++ b/kernel/irq/debugfs.c @@ -206,8 +206,15 @@ static ssize_t irq_debug_write(struct fi chip_bus_lock(desc); raw_spin_lock_irqsave(&desc->lock, flags);
- if (irq_settings_is_level(desc) || desc->istate & IRQS_NMI) { - /* Can't do level nor NMIs, sorry */ + /* + * Don't allow injection when the interrupt is: + * - Level or NMI type + * - not activated + * - replaying already + */ + if (irq_settings_is_level(desc) || + !irqd_is_activated(&desc->irq_data) || + (desc->istate & (IRQS_NMI | IRQS_REPLAY))) { err = -EINVAL; } else { desc->istate |= IRQS_PENDING;
From: Sungbo Eo mans0n@gorani.run
commit 6a214a28132f19ace3d835a6d8f6422ec80ad200 upstream.
Clear its own IRQs before the parent IRQ gets enabled, so that the remaining IRQs do not accidentally interrupt the parent IRQ controller.
This patch also fixes a reboot bug on OX820 SoC, where the remaining rps-timer IRQ raises a GIC interrupt that is left pending. After that, the rps-timer IRQ is cleared during driver initialization, and there's no IRQ left in rps-irq when local_irq_enable() is called, which evokes an error message "unexpected IRQ trap".
Fixes: bdd272cbb97a ("irqchip: versatile FPGA: support cascaded interrupts from DT") Signed-off-by: Sungbo Eo mans0n@gorani.run Signed-off-by: Marc Zyngier maz@kernel.org Reviewed-by: Linus Walleij linus.walleij@linaro.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200321133842.2408823-1-mans0n@gorani.run Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/irqchip/irq-versatile-fpga.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/drivers/irqchip/irq-versatile-fpga.c +++ b/drivers/irqchip/irq-versatile-fpga.c @@ -212,6 +212,9 @@ int __init fpga_irq_of_init(struct devic if (of_property_read_u32(node, "valid-mask", &valid_mask)) valid_mask = 0;
+ writel(clear_mask, base + IRQ_ENABLE_CLEAR); + writel(clear_mask, base + FIQ_ENABLE_CLEAR); + /* Some chips are cascaded from a parent IRQ */ parent_irq = irq_of_parse_and_map(node, 0); if (!parent_irq) { @@ -221,9 +224,6 @@ int __init fpga_irq_of_init(struct devic
fpga_irq_init(base, node->name, 0, parent_irq, valid_mask, node);
- writel(clear_mask, base + IRQ_ENABLE_CLEAR); - writel(clear_mask, base + FIQ_ENABLE_CLEAR); - /* * On Versatile AB/PB, some secondary interrupts have a direct * pass-thru to the primary controller for IRQs 20 and 22-31 which need
From: Jens Axboe axboe@kernel.dk
commit c336e992cb1cb1db9ee608dfb30342ae781057ab upstream.
We already checked this limit when the file was opened, and we keep it open in the file table. Hence when we add unix_inflight to the count we want to register, we're doubly accounting these files. This results in -EMFILE for file registration if we're at half the limit.
Cc: stable@vger.kernel.org # v5.1+ Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/io_uring.c | 7 ------- 1 file changed, 7 deletions(-)
--- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -4162,13 +4162,6 @@ static int __io_sqe_files_scm(struct io_ struct sk_buff *skb; int i, nr_files;
- if (!capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN)) { - unsigned long inflight = ctx->user->unix_inflight + nr; - - if (inflight > task_rlimit(current, RLIMIT_NOFILE)) - return -EMFILE; - } - fpl = kzalloc(sizeof(*fpl), GFP_KERNEL); if (!fpl) return -ENOMEM;
From: Vasily Averin vvs@virtuozzo.com
commit 6c871b7314dde9ab64f20de8f5aa3d01be4518e8 upstream.
In Aug 2018 NeilBrown noticed commit 1f4aace60b0e ("fs/seq_file.c: simplify seq_file iteration code and interface") "Some ->next functions do not increment *pos when they return NULL... Note that such ->next functions are buggy and should be fixed. A simple demonstration is
dd if=/proc/swaps bs=1000 skip=1
Choose any block size larger than the size of /proc/swaps. This will always show the whole last line of /proc/swaps"
The /proc/swaps output was fixed recently; however, there are a lot of other affected files, and one of them is related to the pstore subsystem.
If .next function does not change position index, following .show function will repeat output related to current position index.
There are at least 2 related problems:
- a read after an lseek beyond the end of the file, described above by NeilBrown ("dd if=<AFFECTED_FILE> bs=1000 skip=1"), will generate the whole last line
- a read after an lseek into the middle of the last line will output the expected rest of the last line but then repeat the whole last line once again
If the .show() function generates multi-line output (as pstore_ftrace_seq_show() appears to do), the following bash script cycles endlessly:
$ q=;while read -r r;do echo "$((++q)) $r";done < AFFECTED_FILE
Unfortunately I'm not familiar enough with the pstore subsystem and was unable to find an affected pstore-related file on my test node.
Cc: stable@vger.kernel.org Fixes: 1f4aace60b0e ("fs/seq_file.c: simplify seq_file iteration code ...") Link: https://bugzilla.kernel.org/show_bug.cgi?id=206283 Signed-off-by: Vasily Averin vvs@virtuozzo.com Link: https://lore.kernel.org/r/4e49830d-4c88-0171-ee24-1ee540028dad@virtuozzo.com [kees: with robustness tweak from Joel Fernandes joelaf@google.com] Signed-off-by: Kees Cook keescook@chromium.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/pstore/inode.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
--- a/fs/pstore/inode.c +++ b/fs/pstore/inode.c @@ -87,11 +87,11 @@ static void *pstore_ftrace_seq_next(stru struct pstore_private *ps = s->private; struct pstore_ftrace_seq_data *data = v;
+ (*pos)++; data->off += REC_SIZE; if (data->off + REC_SIZE > ps->total_size) return NULL;
- (*pos)++; return data; }
@@ -101,6 +101,9 @@ static int pstore_ftrace_seq_show(struct struct pstore_ftrace_seq_data *data = v; struct pstore_ftrace_record *rec;
+ if (!data) + return 0; + rec = (struct pstore_ftrace_record *)(ps->record->buf + data->off);
seq_printf(s, "CPU:%d ts:%llu %08lx %08lx %ps <- %pS\n",
From: Huacai Chen chenhc@lemote.com
commit d191aaffe3687d1e73e644c185f5f0550ec242b5 upstream.
LDDIR/LDPTE is Loongson-3's acceleration for page table walking. If BD (Base Directory, the 4th page directory) is not enabled, then GDOffset is biased by BadVAddr[63:62]. So, if GDOffset (aka. BadVAddr[47:36] for Loongson-3) is big enough, "0b11(BadVAddr[63:62])|BadVAddr[47:36]|...." can point far beyond swapper_pg_dir. This means swapper_pg_dir may NOT be accessed by LDDIR correctly, so fix it by setting PWDirExt in CP0_PWCtl.
Cc: stable@vger.kernel.org Signed-off-by: Pei Huang huangpei@loongson.cn Signed-off-by: Huacai Chen chenhc@lemote.com Signed-off-by: Thomas Bogendoerfer tsbogend@alpha.franken.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/mips/mm/tlbex.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
--- a/arch/mips/mm/tlbex.c +++ b/arch/mips/mm/tlbex.c @@ -1480,6 +1480,7 @@ static void build_r4000_tlb_refill_handl
static void setup_pw(void) { + unsigned int pwctl; unsigned long pgd_i, pgd_w; #ifndef __PAGETABLE_PMD_FOLDED unsigned long pmd_i, pmd_w; @@ -1506,6 +1507,7 @@ static void setup_pw(void)
pte_i = ilog2(_PAGE_GLOBAL); pte_w = 0; + pwctl = 1 << 30; /* Set PWDirExt */
#ifndef __PAGETABLE_PMD_FOLDED write_c0_pwfield(pgd_i << 24 | pmd_i << 12 | pt_i << 6 | pte_i); @@ -1516,8 +1518,9 @@ static void setup_pw(void) #endif
#ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT - write_c0_pwctl(1 << 6 | psn); + pwctl |= (1 << 6 | psn); #endif + write_c0_pwctl(pwctl); write_c0_kpgd((long)swapper_pg_dir); kscratch_used_mask |= (1 << 7); /* KScratch6 is used for KPGD */ }
From: Gustavo A. R. Silva gustavo@embeddedor.com
commit 792a402c2840054533ef56279c212ef6da87d811 upstream.
There is a potential NULL pointer dereference in case kzalloc() fails and returns NULL.
Fix this by adding a NULL check on *cd*
This bug was detected with the help of Coccinelle.
Fixes: 64b139f97c01 ("MIPS: OCTEON: irq: add CIB and other fixes") Cc: stable@vger.kernel.org Signed-off-by: Gustavo A. R. Silva gustavo@embeddedor.com Signed-off-by: Thomas Bogendoerfer tsbogend@alpha.franken.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/mips/cavium-octeon/octeon-irq.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/arch/mips/cavium-octeon/octeon-irq.c +++ b/arch/mips/cavium-octeon/octeon-irq.c @@ -2199,6 +2199,9 @@ static int octeon_irq_cib_map(struct irq }
cd = kzalloc(sizeof(*cd), GFP_KERNEL); + if (!cd) + return -ENOMEM; + cd->host_data = host_data; cd->bit = hw;
From: Ulf Hansson ulf.hansson@linaro.org
commit 56cb26891ea4180121265dc6b596015772c4a4b8 upstream.
Commit 2c361684803e ("PM / Domains: Don't treat zero found compatible idle states as an error") moved of_genpd_parse_idle_states() towards allowing no compatible idle states to be found for the device node, rather than returning an error code.
However, it didn't consider that the "domain-idle-states" DT property may be missing, as it's optional, which makes of_count_phandle_with_args() return -ENOENT. Let's fix this to make the behaviour consistent.
Fixes: 2c361684803e ("PM / Domains: Don't treat zero found compatible idle states as an error") Reported-by: Benjamin Gaignard benjamin.gaignard@st.com Cc: 4.20+ stable@vger.kernel.org # 4.20+ Reviewed-by: Sudeep Holla sudeep.holla@arm.com Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/base/power/domain.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/base/power/domain.c +++ b/drivers/base/power/domain.c @@ -2615,7 +2615,7 @@ static int genpd_iterate_idle_states(str
ret = of_count_phandle_with_args(dn, "domain-idle-states", NULL); if (ret <= 0) - return ret; + return ret == -ENOENT ? 0 : ret;
/* Loop over the phandles until all the requested entry is found */ of_for_each_phandle(&it, ret, dn, "domain-idle-states", NULL, 0) {
From: Neeraj Upadhyay neeraju@codeaurora.org
commit 87de6594dc45dbf6819f3e0ef92f9331c5a9444c upstream.
Skip wakeup_source_sysfs_remove() to fix a NULL pointer dereference via ws->dev if the wakeup source is unregistered before registering the wakeup class from device_add().
Fixes: 2ca3d1ecb8c4 ("PM / wakeup: Register wakeup class kobj after device is added") Signed-off-by: Neeraj Upadhyay neeraju@codeaurora.org Cc: 5.4+ stable@vger.kernel.org # 5.4+ [ rjw: Subject & changelog, white space ] Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/base/power/wakeup.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/base/power/wakeup.c +++ b/drivers/base/power/wakeup.c @@ -241,7 +241,9 @@ void wakeup_source_unregister(struct wak { if (ws) { wakeup_source_remove(ws); - wakeup_source_sysfs_remove(ws); + if (ws->dev) + wakeup_source_sysfs_remove(ws); + wakeup_source_destroy(ws); } }
From: Remi Pommarel repk@triplefau.lt
commit 968ae2caad0782db5dbbabb560d3cdefd2945d38 upstream.
When TPC is disabled, the IEEE80211_CONF_CHANGE_POWER event can be handled to reconfigure the HW's maximum txpower.
This fixes the 0 dBm txpower setting when a user attaches to an interface for the first time, in the following scenario:
ieee80211_do_open()
  ath9k_add_interface()
    ath9k_set_txpower() /* Set TX power with a not yet initialized sc->hw->conf.power_level */
  ieee80211_hw_config() /* Initialize sc->hw->conf.power_level and raise IEEE80211_CONF_CHANGE_POWER */
    ath9k_config() /* IEEE80211_CONF_CHANGE_POWER is ignored */
This issue can be reproduced with the following:
$ modprobe -r ath9k
$ modprobe ath9k
$ wpa_supplicant -i wlan0 -c /tmp/wpa.conf &
$ iw dev /* Here TX power is either 0 or 3 depending on RF chain */
$ killall wpa_supplicant
$ iw dev /* TX power goes back to calibrated value and subsequent calls will be fine */
Fixes: 283dd11994cde ("ath9k: add per-vif TX power capability") Cc: stable@vger.kernel.org Signed-off-by: Remi Pommarel repk@triplefau.lt Signed-off-by: Kalle Valo kvalo@codeaurora.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/wireless/ath/ath9k/main.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/drivers/net/wireless/ath/ath9k/main.c +++ b/drivers/net/wireless/ath/ath9k/main.c @@ -1457,6 +1457,9 @@ static int ath9k_config(struct ieee80211 ath_chanctx_set_channel(sc, ctx, &hw->conf.chandef); }
+ if (changed & IEEE80211_CONF_CHANGE_POWER) + ath9k_set_txpower(sc, NULL); + mutex_unlock(&sc->mutex); ath9k_ps_restore(sc);
From: Eric W. Biederman ebiederm@xmission.com
commit d1e7fd6462ca9fc76650fbe6ca800e35b24267da upstream.
Replace the 32bit exec_id with a 64bit exec_id to make it impossible to wrap the exec_id counter. With care an attacker can cause exec_id wrap and send arbitrary signals to a newly exec'd parent. This bypasses the signal sending checks if the parent changes their credentials during exec.
The severity of this problem can be seen in that, in my limited testing with a 32-bit exec_id, it can take as little as 19s to exec 65536 times, which means it can take as little as 14 days to wrap a 32-bit exec_id. Adam Zabrocki has succeeded in wrapping the self_exec_id in 7 days. Even my slower timing is within the uptime of a typical server, which means self_exec_id is simply a speed bump today, and if exec gets noticeably faster, self_exec_id won't even be a speed bump.
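A back-of-the-envelope check of the 14-day figure, purely illustrative, extrapolating the 19 s / 65536 execs timing above:

#include <stdio.h>

int main(void)
{
	/* 65536 execs took ~19 s in the timing quoted above; extrapolate
	 * to a full 32-bit wrap.  Rough estimate only. */
	double execs_per_sec = 65536.0 / 19.0;
	double seconds = 4294967296.0 / execs_per_sec;

	printf("~%.1f days to wrap a 32-bit exec_id\n", seconds / 86400.0);
	return 0;
}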
Extending self_exec_id to 64 bits introduces a problem on 32-bit architectures where reading self_exec_id is no longer atomic and can take two read instructions. This means that it is possible to hit a window where the read value of exec_id does not match the written value, so with very lucky timing this still remains exploitable after this change.
I have updated the update of exec_id on exec to use WRITE_ONCE and the read of exec_id in do_notify_parent to use READ_ONCE to make it clear that there is no locking between these two locations.
Link: https://lore.kernel.org/kernel-hardening/20200324215049.GA3710@pi3.com.pl Fixes: 2.3.23pre2 Cc: stable@vger.kernel.org Signed-off-by: "Eric W. Biederman" ebiederm@xmission.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/exec.c | 2 +- include/linux/sched.h | 4 ++-- kernel/signal.c | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-)
--- a/fs/exec.c +++ b/fs/exec.c @@ -1382,7 +1382,7 @@ void setup_new_exec(struct linux_binprm
/* An exec changes our domain. We are no longer part of the thread group */ - current->self_exec_id++; + WRITE_ONCE(current->self_exec_id, current->self_exec_id + 1); flush_signal_handlers(current, 0); } EXPORT_SYMBOL(setup_new_exec); --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -939,8 +939,8 @@ struct task_struct { struct seccomp seccomp;
/* Thread group tracking: */ - u32 parent_exec_id; - u32 self_exec_id; + u64 parent_exec_id; + u64 self_exec_id;
/* Protection against (de-)allocation: mm, files, fs, tty, keyrings, mems_allowed, mempolicy: */ spinlock_t alloc_lock; --- a/kernel/signal.c +++ b/kernel/signal.c @@ -1931,7 +1931,7 @@ bool do_notify_parent(struct task_struct * This is only possible if parent == real_parent. * Check if it has changed security domain. */ - if (tsk->parent_exec_id != tsk->parent->self_exec_id) + if (tsk->parent_exec_id != READ_ONCE(tsk->parent->self_exec_id)) sig = SIGCHLD; }
From: Hans de Goede hdegoede@redhat.com
commit 812c2d7506fde7cdf83cb2532810a65782b51741 upstream.
Use named struct initializers for the freq_desc structs' initialization and change the "u8 msr_plat" to a "bool use_msr_plat" to make its meaning clearer, instead of relying on a comment to explain it.
Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20200223140610.59612-1-hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/kernel/tsc_msr.c | 28 ++++++++++++++++++---------- 1 file changed, 18 insertions(+), 10 deletions(-)
--- a/arch/x86/kernel/tsc_msr.c +++ b/arch/x86/kernel/tsc_msr.c @@ -22,10 +22,10 @@ * read in MSR_PLATFORM_ID[12:8], otherwise in MSR_PERF_STAT[44:40]. * Unfortunately some Intel Atom SoCs aren't quite compliant to this, * so we need manually differentiate SoC families. This is what the - * field msr_plat does. + * field use_msr_plat does. */ struct freq_desc { - u8 msr_plat; /* 1: use MSR_PLATFORM_INFO, 0: MSR_IA32_PERF_STATUS */ + bool use_msr_plat; u32 freqs[MAX_NUM_FREQS]; };
@@ -35,31 +35,39 @@ struct freq_desc { * by MSR based on SDM. */ static const struct freq_desc freq_desc_pnw = { - 0, { 0, 0, 0, 0, 0, 99840, 0, 83200 } + .use_msr_plat = false, + .freqs = { 0, 0, 0, 0, 0, 99840, 0, 83200 }, };
static const struct freq_desc freq_desc_clv = { - 0, { 0, 133200, 0, 0, 0, 99840, 0, 83200 } + .use_msr_plat = false, + .freqs = { 0, 133200, 0, 0, 0, 99840, 0, 83200 }, };
static const struct freq_desc freq_desc_byt = { - 1, { 83300, 100000, 133300, 116700, 80000, 0, 0, 0 } + .use_msr_plat = true, + .freqs = { 83300, 100000, 133300, 116700, 80000, 0, 0, 0 }, };
static const struct freq_desc freq_desc_cht = { - 1, { 83300, 100000, 133300, 116700, 80000, 93300, 90000, 88900, 87500 } + .use_msr_plat = true, + .freqs = { 83300, 100000, 133300, 116700, 80000, 93300, 90000, + 88900, 87500 }, };
static const struct freq_desc freq_desc_tng = { - 1, { 0, 100000, 133300, 0, 0, 0, 0, 0 } + .use_msr_plat = true, + .freqs = { 0, 100000, 133300, 0, 0, 0, 0, 0 }, };
static const struct freq_desc freq_desc_ann = { - 1, { 83300, 100000, 133300, 100000, 0, 0, 0, 0 } + .use_msr_plat = true, + .freqs = { 83300, 100000, 133300, 100000, 0, 0, 0, 0 }, };
static const struct freq_desc freq_desc_lgm = { - 1, { 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000 } + .use_msr_plat = true, + .freqs = { 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000 }, };
static const struct x86_cpu_id tsc_msr_cpu_ids[] = { @@ -91,7 +99,7 @@ unsigned long cpu_khz_from_msr(void) return 0;
freq_desc = (struct freq_desc *)id->driver_data; - if (freq_desc->msr_plat) { + if (freq_desc->use_msr_plat) { rdmsr(MSR_PLATFORM_INFO, lo, hi); ratio = (lo >> 8) & 0xff; } else {
From: Hans de Goede hdegoede@redhat.com
commit c8810e2ffc30c7e1577f9c057c4b85d984bbc35a upstream.
According to the "Intel 64 and IA-32 Architectures Software Developer's Manual Volume 4: Model-Specific Registers" on Cherry Trail (Airmont) devices the 4 lowest bits of the MSR_FSB_FREQ mask indicate the bus freq unlike on e.g. Bay Trail where only the lowest 3 bits are used.
This is also the reason why MAX_NUM_FREQS is defined as 9: Cherry Trail SoCs have 9 possible frequencies, so the lo value from the MSR needs to be masked with 0x0f, not with 0x07; otherwise the 9th frequency will get interpreted as the 1st.
Bump MAX_NUM_FREQS to 16 to avoid any possibility of addressing the array out of bounds, and make the mask part of the freq_desc struct so it can be set per model.
While at it also log an error when the index points to an uninitialized part of the freqs lookup-table.
Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20200223140610.59612-2-hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/kernel/tsc_msr.c | 17 +++++++++++++++-- 1 file changed, 15 insertions(+), 2 deletions(-)
--- a/arch/x86/kernel/tsc_msr.c +++ b/arch/x86/kernel/tsc_msr.c @@ -15,7 +15,7 @@ #include <asm/param.h> #include <asm/tsc.h>
-#define MAX_NUM_FREQS 9 +#define MAX_NUM_FREQS 16 /* 4 bits to select the frequency */
/* * If MSR_PERF_STAT[31] is set, the maximum resolved bus ratio can be @@ -27,6 +27,7 @@ struct freq_desc { bool use_msr_plat; u32 freqs[MAX_NUM_FREQS]; + u32 mask; };
/* @@ -37,37 +38,44 @@ struct freq_desc { static const struct freq_desc freq_desc_pnw = { .use_msr_plat = false, .freqs = { 0, 0, 0, 0, 0, 99840, 0, 83200 }, + .mask = 0x07, };
static const struct freq_desc freq_desc_clv = { .use_msr_plat = false, .freqs = { 0, 133200, 0, 0, 0, 99840, 0, 83200 }, + .mask = 0x07, };
static const struct freq_desc freq_desc_byt = { .use_msr_plat = true, .freqs = { 83300, 100000, 133300, 116700, 80000, 0, 0, 0 }, + .mask = 0x07, };
static const struct freq_desc freq_desc_cht = { .use_msr_plat = true, .freqs = { 83300, 100000, 133300, 116700, 80000, 93300, 90000, 88900, 87500 }, + .mask = 0x0f, };
static const struct freq_desc freq_desc_tng = { .use_msr_plat = true, .freqs = { 0, 100000, 133300, 0, 0, 0, 0, 0 }, + .mask = 0x07, };
static const struct freq_desc freq_desc_ann = { .use_msr_plat = true, .freqs = { 83300, 100000, 133300, 100000, 0, 0, 0, 0 }, + .mask = 0x0f, };
static const struct freq_desc freq_desc_lgm = { .use_msr_plat = true, .freqs = { 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000 }, + .mask = 0x0f, };
static const struct x86_cpu_id tsc_msr_cpu_ids[] = { @@ -93,6 +101,7 @@ unsigned long cpu_khz_from_msr(void) const struct freq_desc *freq_desc; const struct x86_cpu_id *id; unsigned long res; + int index;
id = x86_match_cpu(tsc_msr_cpu_ids); if (!id) @@ -109,13 +118,17 @@ unsigned long cpu_khz_from_msr(void)
/* Get FSB FREQ ID */ rdmsr(MSR_FSB_FREQ, lo, hi); + index = lo & freq_desc->mask;
/* Map CPU reference clock freq ID(0-7) to CPU reference clock freq(KHz) */ - freq = freq_desc->freqs[lo & 0x7]; + freq = freq_desc->freqs[index];
/* TSC frequency = maximum resolved freq * maximum resolved bus ratio */ res = freq * ratio;
+ if (freq == 0) + pr_err("Error MSR_FSB_FREQ index %d is unknown\n", index); + #ifdef CONFIG_X86_LOCAL_APIC lapic_timer_period = (freq * 1000) / HZ; #endif
From: Hans de Goede hdegoede@redhat.com
commit fac01d11722c92a186b27ee26cd429a8066adfb5 upstream.
The "Intel 64 and IA-32 Architectures Software Developer’s Manual Volume 4: Model-Specific Registers" has the following table for the values from freq_desc_byt:
000B: 083.3 MHz
001B: 100.0 MHz
010B: 133.3 MHz
011B: 116.7 MHz
100B: 080.0 MHz
Notice how for e.g. the 83.3 MHz value there are 3 significant digits, which translates to an accuracy of 1000 ppm, whereas a typical crystal oscillator is 20 - 100 ppm, so the accuracy of the frequency format used in the Software Developer's Manual is not really helpful.
As far as we know, Bay Trail SoCs use a 25 MHz crystal and Cherry Trail uses a 19.2 MHz crystal; the crystal is the source clock for a root PLL which outputs 1600 and 100 MHz. It is unclear whether the root PLL outputs are used directly by the CPU clock PLL or whether there is another PLL in between.
This does not matter though, we can model the chain of PLLs as a single PLL with a quotient equal to the quotients of all PLLs in the chain multiplied.
So we can create a simplified model of the CPU clock setup using a reference clock of 100 MHz plus a quotient which gets us as close to the frequency from the SDM as possible.
For the 83.3 MHz example from above this would give 100 MHz * 5 / 6 = 83 and 1/3 MHz, which matches exactly what has been measured on actual hardware.
Use a simplified PLL model with a reference clock of 100 MHz for all Bay and Cherry Trail models.
This has been tested on the following models:
              CPU freq before:   CPU freq after:
Intel N2840     2165.800 MHz       2166.667 MHz
Intel Z3736     1332.800 MHz       1333.333 MHz
Intel Z3775     1466.300 MHz       1466.667 MHz
Intel Z8350     1440.000 MHz       1440.000 MHz
Intel Z8750     1600.000 MHz       1600.000 MHz
This fixes the time drifting by about 1 second per hour (20 - 30 seconds per day) on (some) devices which rely on the tsc_msr.c code to determine the TSC frequency.
Reported-by: Vipul Kumar vipulk0511@gmail.com Suggested-by: Thomas Gleixner tglx@linutronix.de Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20200223140610.59612-3-hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/kernel/tsc_msr.c | 97 ++++++++++++++++++++++++++++++++++++++++------ 1 file changed, 86 insertions(+), 11 deletions(-)
--- a/arch/x86/kernel/tsc_msr.c +++ b/arch/x86/kernel/tsc_msr.c @@ -18,6 +18,28 @@ #define MAX_NUM_FREQS 16 /* 4 bits to select the frequency */
/* + * The frequency numbers in the SDM are e.g. 83.3 MHz, which does not contain a + * lot of accuracy which leads to clock drift. As far as we know Bay Trail SoCs + * use a 25 MHz crystal and Cherry Trail uses a 19.2 MHz crystal, the crystal + * is the source clk for a root PLL which outputs 1600 and 100 MHz. It is + * unclear if the root PLL outputs are used directly by the CPU clock PLL or + * if there is another PLL in between. + * This does not matter though, we can model the chain of PLLs as a single PLL + * with a quotient equal to the quotients of all PLLs in the chain multiplied. + * So we can create a simplified model of the CPU clock setup using a reference + * clock of 100 MHz plus a quotient which gets us as close to the frequency + * from the SDM as possible. + * For the 83.3 MHz example from above this would give us 100 MHz * 5 / 6 = + * 83 and 1/3 MHz, which matches exactly what has been measured on actual hw. + */ +#define TSC_REFERENCE_KHZ 100000 + +struct muldiv { + u32 multiplier; + u32 divider; +}; + +/* * If MSR_PERF_STAT[31] is set, the maximum resolved bus ratio can be * read in MSR_PLATFORM_ID[12:8], otherwise in MSR_PERF_STAT[44:40]. * Unfortunately some Intel Atom SoCs aren't quite compliant to this, @@ -26,6 +48,11 @@ */ struct freq_desc { bool use_msr_plat; + struct muldiv muldiv[MAX_NUM_FREQS]; + /* + * Some CPU frequencies in the SDM do not map to known PLL freqs, in + * that case the muldiv array is empty and the freqs array is used. + */ u32 freqs[MAX_NUM_FREQS]; u32 mask; }; @@ -47,31 +74,66 @@ static const struct freq_desc freq_desc_ .mask = 0x07, };
+/* + * Bay Trail SDM MSR_FSB_FREQ frequencies simplified PLL model: + * 000: 100 * 5 / 6 = 83.3333 MHz + * 001: 100 * 1 / 1 = 100.0000 MHz + * 010: 100 * 4 / 3 = 133.3333 MHz + * 011: 100 * 7 / 6 = 116.6667 MHz + * 100: 100 * 4 / 5 = 80.0000 MHz + */ static const struct freq_desc freq_desc_byt = { .use_msr_plat = true, - .freqs = { 83300, 100000, 133300, 116700, 80000, 0, 0, 0 }, + .muldiv = { { 5, 6 }, { 1, 1 }, { 4, 3 }, { 7, 6 }, + { 4, 5 } }, .mask = 0x07, };
+/* + * Cherry Trail SDM MSR_FSB_FREQ frequencies simplified PLL model: + * 0000: 100 * 5 / 6 = 83.3333 MHz + * 0001: 100 * 1 / 1 = 100.0000 MHz + * 0010: 100 * 4 / 3 = 133.3333 MHz + * 0011: 100 * 7 / 6 = 116.6667 MHz + * 0100: 100 * 4 / 5 = 80.0000 MHz + * 0101: 100 * 14 / 15 = 93.3333 MHz + * 0110: 100 * 9 / 10 = 90.0000 MHz + * 0111: 100 * 8 / 9 = 88.8889 MHz + * 1000: 100 * 7 / 8 = 87.5000 MHz + */ static const struct freq_desc freq_desc_cht = { .use_msr_plat = true, - .freqs = { 83300, 100000, 133300, 116700, 80000, 93300, 90000, - 88900, 87500 }, + .muldiv = { { 5, 6 }, { 1, 1 }, { 4, 3 }, { 7, 6 }, + { 4, 5 }, { 14, 15 }, { 9, 10 }, { 8, 9 }, + { 7, 8 } }, .mask = 0x0f, };
+/* + * Merriefield SDM MSR_FSB_FREQ frequencies simplified PLL model: + * 0001: 100 * 1 / 1 = 100.0000 MHz + * 0010: 100 * 4 / 3 = 133.3333 MHz + */ static const struct freq_desc freq_desc_tng = { .use_msr_plat = true, - .freqs = { 0, 100000, 133300, 0, 0, 0, 0, 0 }, + .muldiv = { { 0, 0 }, { 1, 1 }, { 4, 3 } }, .mask = 0x07, };
+/* + * Moorefield SDM MSR_FSB_FREQ frequencies simplified PLL model: + * 0000: 100 * 5 / 6 = 83.3333 MHz + * 0001: 100 * 1 / 1 = 100.0000 MHz + * 0010: 100 * 4 / 3 = 133.3333 MHz + * 0011: 100 * 1 / 1 = 100.0000 MHz + */ static const struct freq_desc freq_desc_ann = { .use_msr_plat = true, - .freqs = { 83300, 100000, 133300, 100000, 0, 0, 0, 0 }, + .muldiv = { { 5, 6 }, { 1, 1 }, { 4, 3 }, { 1, 1 } }, .mask = 0x0f, };
+/* 24 MHz crystal? : 24 * 13 / 4 = 78 MHz */ static const struct freq_desc freq_desc_lgm = { .use_msr_plat = true, .freqs = { 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000 }, @@ -97,9 +159,10 @@ static const struct x86_cpu_id tsc_msr_c */ unsigned long cpu_khz_from_msr(void) { - u32 lo, hi, ratio, freq; + u32 lo, hi, ratio, freq, tscref; const struct freq_desc *freq_desc; const struct x86_cpu_id *id; + const struct muldiv *md; unsigned long res; int index;
@@ -119,12 +182,24 @@ unsigned long cpu_khz_from_msr(void) /* Get FSB FREQ ID */ rdmsr(MSR_FSB_FREQ, lo, hi); index = lo & freq_desc->mask; + md = &freq_desc->muldiv[index];
- /* Map CPU reference clock freq ID(0-7) to CPU reference clock freq(KHz) */ - freq = freq_desc->freqs[index]; - - /* TSC frequency = maximum resolved freq * maximum resolved bus ratio */ - res = freq * ratio; + /* + * Note this also catches cases where the index points to an unpopulated + * part of muldiv, in that case the else will set freq and res to 0. + */ + if (md->divider) { + tscref = TSC_REFERENCE_KHZ * md->multiplier; + freq = DIV_ROUND_CLOSEST(tscref, md->divider); + /* + * Multiplying by ratio before the division has better + * accuracy than just calculating freq * ratio. + */ + res = DIV_ROUND_CLOSEST(tscref * ratio, md->divider); + } else { + freq = freq_desc->freqs[index]; + res = freq * ratio; + }
if (freq == 0) pr_err("Error MSR_FSB_FREQ index %d is unknown\n", index);
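As a hedged user-space illustration of the simplified PLL model above, the numbers below are assumed to match the Intel N2840 row from the table earlier in this patch (MSR_FSB_FREQ index 0 on Bay Trail, i.e. 100 MHz * 5 / 6, with an assumed bus ratio of 26):

#include <stdio.h>
#include <stdint.h>

#define TSC_REFERENCE_KHZ 100000u
#define DIV_ROUND_CLOSEST(x, d) (((x) + (d) / 2) / (d))

int main(void)
{
	/* Bay Trail MSR_FSB_FREQ index 0 -> 100 MHz * 5 / 6; ratio 26 is an
	 * assumption picked to match the N2840 row, illustrative only. */
	uint32_t mul = 5, div = 6, ratio = 26;
	uint64_t tscref = (uint64_t)TSC_REFERENCE_KHZ * mul;
	uint32_t freq = DIV_ROUND_CLOSEST(tscref, div);
	uint64_t res = DIV_ROUND_CLOSEST(tscref * ratio, div);

	printf("bus clock %u kHz, TSC %llu kHz\n",
	       freq, (unsigned long long)res);
	return 0;
}

Running this prints a bus clock of 83333 kHz and a TSC of 2166667 kHz, i.e. the 2166.667 MHz value reported for the N2840 above; multiplying by the ratio before dividing is what preserves the extra accuracy.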
From: Thomas Gleixner tglx@linutronix.de
commit 3d51507f29f2153a658df4a0674ec5b592b62085 upstream.
All exception entry points must have ASM_CLAC right at the beginning. The general_protection entry is missing one.
Fixes: e59d1b0a2419 ("x86-32, smap: Add STAC/CLAC instructions to 32-bit kernel entry") Signed-off-by: Thomas Gleixner tglx@linutronix.de Reviewed-by: Frederic Weisbecker frederic@kernel.org Reviewed-by: Alexandre Chartre alexandre.chartre@oracle.com Reviewed-by: Andy Lutomirski luto@kernel.org Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20200225220216.219537887@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/entry/entry_32.S | 1 + 1 file changed, 1 insertion(+)
--- a/arch/x86/entry/entry_32.S +++ b/arch/x86/entry/entry_32.S @@ -1694,6 +1694,7 @@ SYM_CODE_START(int3) SYM_CODE_END(int3)
SYM_CODE_START(general_protection) + ASM_CLAC pushl $do_general_protection jmp common_exception SYM_CODE_END(general_protection)
From: Kristian Klausen kristian@klausen.dk
commit 6b3586d45bba14f6912f37488090c37a3710e7b4 upstream.
The WMI method to set the charge threshold does not provide a way to specify a battery, so we assume it is the first/primary battery (by checking if the name is BAT0). On some newer ASUS laptops (Zenbook UM431DA) though, the primary/first battery isn't named BAT0 but BATT, so we need to support that case.
Fixes: 7973353e92ee ("platform/x86: asus-wmi: Refactor charge threshold to use the battery hooking API") Cc: stable@vger.kernel.org Signed-off-by: Kristian Klausen kristian@klausen.dk Signed-off-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/platform/x86/asus-wmi.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
--- a/drivers/platform/x86/asus-wmi.c +++ b/drivers/platform/x86/asus-wmi.c @@ -418,8 +418,11 @@ static int asus_wmi_battery_add(struct p { /* The WMI method does not provide a way to specific a battery, so we * just assume it is the first battery. + * Note: On some newer ASUS laptops (Zenbook UM431DA), the primary/first + * battery is named BATT. */ - if (strcmp(battery->desc->name, "BAT0") != 0) + if (strcmp(battery->desc->name, "BAT0") != 0 && + strcmp(battery->desc->name, "BATT") != 0) return -ENODEV;
if (device_create_file(&battery->dev,
From: Fabiano Rosas farosas@linux.ibm.com
commit 9bee484b280a059c1faa10ae174af4f4af02c805 upstream.
kvmppc_uvmem_init checks for Ultravisor support and returns early if it is not present. Calling kvmppc_uvmem_free at module exit will cause an Oops:
$ modprobe -r kvm-hv
Oops: Kernel access of bad area, sig: 11 [#1] <snip> NIP: c000000000789e90 LR: c000000000789e8c CTR: c000000000401030 REGS: c000003fa7bab9a0 TRAP: 0300 Not tainted (5.6.0-rc6-00033-g6c90b86a745a-dirty) MSR: 9000000000009033 <SF,HV,EE,ME,IR,DR,RI,LE> CR: 24002282 XER: 00000000 CFAR: c000000000dae880 DAR: 0000000000000008 DSISR: 40000000 IRQMASK: 1 GPR00: c000000000789e8c c000003fa7babc30 c0000000016fe500 0000000000000000 GPR04: 0000000000000000 0000000000000006 0000000000000000 c000003faf205c00 GPR08: 0000000000000000 0000000000000001 000000008000002d c00800000ddde140 GPR12: c000000000401030 c000003ffffd9080 0000000000000001 0000000000000000 GPR16: 0000000000000000 0000000000000000 000000013aad0074 000000013aaac978 GPR20: 000000013aad0070 0000000000000000 00007fffd1b37158 0000000000000000 GPR24: 000000014fef0d58 0000000000000000 000000014fef0cf0 0000000000000001 GPR28: 0000000000000000 0000000000000000 c0000000018b2a60 0000000000000000 NIP [c000000000789e90] percpu_ref_kill_and_confirm+0x40/0x170 LR [c000000000789e8c] percpu_ref_kill_and_confirm+0x3c/0x170 Call Trace: [c000003fa7babc30] [c000003faf2064d4] 0xc000003faf2064d4 (unreliable) [c000003fa7babcb0] [c000000000400e8c] dev_pagemap_kill+0x6c/0x80 [c000003fa7babcd0] [c000000000401064] memunmap_pages+0x34/0x2f0 [c000003fa7babd50] [c00800000dddd548] kvmppc_uvmem_free+0x30/0x80 [kvm_hv] [c000003fa7babd80] [c00800000ddcef18] kvmppc_book3s_exit_hv+0x20/0x78 [kvm_hv] [c000003fa7babda0] [c0000000002084d0] sys_delete_module+0x1d0/0x2c0 [c000003fa7babe20] [c00000000000b9d0] system_call+0x5c/0x68 Instruction dump: 3fc2001b fb81ffe0 fba1ffe8 fbe1fff8 7c7f1b78 7c9c2378 3bde4560 7fc3f378 f8010010 f821ff81 486249a1 60000000 <e93f0008> 7c7d1b78 712a0002 40820084 ---[ end trace 5774ef4dc2c98279 ]---
So this patch checks if kvmppc_uvmem_init actually allocated anything before running kvmppc_uvmem_free.
Fixes: ca9f4942670c ("KVM: PPC: Book3S HV: Support for running secure guests") Cc: stable@vger.kernel.org # v5.5+ Reported-by: Greg Kurz groug@kaod.org Signed-off-by: Fabiano Rosas farosas@linux.ibm.com Tested-by: Greg Kurz groug@kaod.org Signed-off-by: Paul Mackerras paulus@ozlabs.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/kvm/book3s_hv_uvmem.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c @@ -778,6 +778,9 @@ out:
void kvmppc_uvmem_free(void) { + if (!kvmppc_uvmem_bitmap) + return; + memunmap_pages(&kvmppc_uvmem_pgmap); release_mem_region(kvmppc_uvmem_pgmap.res.start, resource_size(&kvmppc_uvmem_pgmap.res));
From: Sean Christopherson sean.j.christopherson@intel.com
commit a1c77abb8d93381e25a8d2df3a917388244ba776 upstream.
Return true for vmx_interrupt_allowed() if the vCPU is in L2 and L1 has external interrupt exiting enabled. IRQs are never blocked in hardware if the CPU is in the guest (L2 from L1's perspective) when IRQs trigger VM-Exit.
The new check percolates up to kvm_vcpu_ready_for_interrupt_injection() and thus vcpu_run(), and so KVM will exit to userspace if userspace has requested an interrupt window (to inject an IRQ into L1).
Remove the @external_intr param from vmx_check_nested_events(), which is actually an indicator that userspace wants an interrupt window, e.g. it's named @req_int_win further up the stack. Injecting a VM-Exit into L1 to try and bounce out to L0 userspace is all kinds of broken and is no longer necessary.
Remove the hack in nested_vmx_vmexit() that attempted to workaround the breakage in vmx_check_nested_events() by only filling interrupt info if there's an actual interrupt pending. The hack actually made things worse because it caused KVM to _never_ fill interrupt info when the LAPIC resides in userspace (kvm_cpu_has_interrupt() queries interrupt.injected, which is always cleared by prepare_vmcs12() before reaching the hack in nested_vmx_vmexit()).
Fixes: 6550c4df7e50 ("KVM: nVMX: Fix interrupt window request with "Acknowledge interrupt on exit"") Cc: stable@vger.kernel.org Cc: Liran Alon liran.alon@oracle.com Signed-off-by: Sean Christopherson sean.j.christopherson@intel.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/include/asm/kvm_host.h | 2 +- arch/x86/kvm/vmx/nested.c | 18 ++++-------------- arch/x86/kvm/vmx/vmx.c | 9 +++++++-- arch/x86/kvm/x86.c | 10 +++++----- 4 files changed, 17 insertions(+), 22 deletions(-)
--- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1147,7 +1147,7 @@ struct kvm_x86_ops { bool (*pt_supported)(void); bool (*pku_supported)(void);
- int (*check_nested_events)(struct kvm_vcpu *vcpu, bool external_intr); + int (*check_nested_events)(struct kvm_vcpu *vcpu); void (*request_immediate_exit)(struct kvm_vcpu *vcpu);
void (*sched_in)(struct kvm_vcpu *kvm, int cpu); --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -3611,7 +3611,7 @@ static void nested_vmx_update_pending_db vcpu->arch.exception.payload); }
-static int vmx_check_nested_events(struct kvm_vcpu *vcpu, bool external_intr) +static int vmx_check_nested_events(struct kvm_vcpu *vcpu) { struct vcpu_vmx *vmx = to_vmx(vcpu); unsigned long exit_qual; @@ -3660,8 +3660,7 @@ static int vmx_check_nested_events(struc return 0; }
- if ((kvm_cpu_has_interrupt(vcpu) || external_intr) && - nested_exit_on_intr(vcpu)) { + if (kvm_cpu_has_interrupt(vcpu) && nested_exit_on_intr(vcpu)) { if (block_nested_events) return -EBUSY; nested_vmx_vmexit(vcpu, EXIT_REASON_EXTERNAL_INTERRUPT, 0, 0); @@ -4309,17 +4308,8 @@ void nested_vmx_vmexit(struct kvm_vcpu * vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
if (likely(!vmx->fail)) { - /* - * TODO: SDM says that with acknowledge interrupt on - * exit, bit 31 of the VM-exit interrupt information - * (valid interrupt) is always set to 1 on - * EXIT_REASON_EXTERNAL_INTERRUPT, so we shouldn't - * need kvm_cpu_has_interrupt(). See the commit - * message for details. - */ - if (nested_exit_intr_ack_set(vcpu) && - exit_reason == EXIT_REASON_EXTERNAL_INTERRUPT && - kvm_cpu_has_interrupt(vcpu)) { + if (exit_reason == EXIT_REASON_EXTERNAL_INTERRUPT && + nested_exit_intr_ack_set(vcpu)) { int irq = kvm_cpu_get_interrupt(vcpu); WARN_ON(irq < 0); vmcs12->vm_exit_intr_info = irq | --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -4485,8 +4485,13 @@ static int vmx_nmi_allowed(struct kvm_vc
static int vmx_interrupt_allowed(struct kvm_vcpu *vcpu) { - return (!to_vmx(vcpu)->nested.nested_run_pending && - vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) && + if (to_vmx(vcpu)->nested.nested_run_pending) + return false; + + if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu)) + return true; + + return (vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) && !(vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS)); } --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -7572,7 +7572,7 @@ static void update_cr8_intercept(struct kvm_x86_ops->update_cr8_intercept(vcpu, tpr, max_irr); }
-static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win) +static int inject_pending_event(struct kvm_vcpu *vcpu) { int r;
@@ -7608,7 +7608,7 @@ static int inject_pending_event(struct k * from L2 to L1. */ if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) { - r = kvm_x86_ops->check_nested_events(vcpu, req_int_win); + r = kvm_x86_ops->check_nested_events(vcpu); if (r != 0) return r; } @@ -7670,7 +7670,7 @@ static int inject_pending_event(struct k * KVM_REQ_EVENT only on certain events and not unconditionally? */ if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) { - r = kvm_x86_ops->check_nested_events(vcpu, req_int_win); + r = kvm_x86_ops->check_nested_events(vcpu); if (r != 0) return r; } @@ -8159,7 +8159,7 @@ static int vcpu_enter_guest(struct kvm_v goto out; }
- if (inject_pending_event(vcpu, req_int_win) != 0) + if (inject_pending_event(vcpu) != 0) req_immediate_exit = true; else { /* Enable SMI/NMI/IRQ window open exits if needed. @@ -8389,7 +8389,7 @@ static inline int vcpu_block(struct kvm static inline bool kvm_vcpu_running(struct kvm_vcpu *vcpu) { if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) - kvm_x86_ops->check_nested_events(vcpu, false); + kvm_x86_ops->check_nested_events(vcpu);
return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE && !vcpu->arch.apf.halted);
From: David Hildenbrand david@redhat.com
commit a1d032a49522cb5368e5dfb945a85899b4c74f65 upstream.
In case we have a region-1 table, the following calculation (31 + ((gmap->asce & _ASCE_TYPE_MASK) >> 2) * 11) results in 64. As shifts beyond the width of the type are undefined, the compiler is free to use instructions like sllg. sllg will only use 6 bits of the shift value (here 64), resulting in no shift at all. That means that ALL addresses will be rejected.
This can result in endless loops, e.g. when the prefix cannot get mapped.
Fixes: 4be130a08420 ("s390/mm: add shadow gmap support") Tested-by: Janosch Frank frankja@linux.ibm.com Reported-by: Janosch Frank frankja@linux.ibm.com Cc: stable@vger.kernel.org # v4.8+ Signed-off-by: David Hildenbrand david@redhat.com Link: https://lore.kernel.org/r/20200403153050.20569-2-david@redhat.com Reviewed-by: Claudio Imbrenda imbrenda@linux.ibm.com Reviewed-by: Christian Borntraeger borntraeger@de.ibm.com [borntraeger@de.ibm.com: fix patch description, remove WARN_ON_ONCE] Signed-off-by: Christian Borntraeger borntraeger@de.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/s390/mm/gmap.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/arch/s390/mm/gmap.c +++ b/arch/s390/mm/gmap.c @@ -787,14 +787,18 @@ static void gmap_call_notifier(struct gm static inline unsigned long *gmap_table_walk(struct gmap *gmap, unsigned long gaddr, int level) { + const int asce_type = gmap->asce & _ASCE_TYPE_MASK; unsigned long *table;
if ((gmap->asce & _ASCE_TYPE_MASK) + 4 < (level * 4)) return NULL; if (gmap_is_shadow(gmap) && gmap->removed) return NULL; - if (gaddr & (-1UL << (31 + ((gmap->asce & _ASCE_TYPE_MASK) >> 2)*11))) + + if (asce_type != _ASCE_TYPE_REGION1 && + gaddr & (-1UL << (31 + (asce_type >> 2) * 11))) return NULL; + table = gmap->table; switch (gmap->asce & _ASCE_TYPE_MASK) { case _ASCE_TYPE_REGION1:
From: David Hildenbrand david@redhat.com
commit 4d4cee96fb7a3cc53702a9be8299bf525be4ee98 upstream.
Whenever we get an -EFAULT, we failed to read in guest 2 physical address space. Such addressing exceptions are reported via a program intercept to the nested hypervisor.
As we faked the intercept, we have to return to guest 2. Instead, right now we would be returning -EFAULT from the intercept handler, eventually crashing the VM. The correct thing to do is to return 1, as rc == 1 is the internal representation of "we have to go back into g2".
Addressing exceptions can only happen if the g2->g3 page tables reference invalid g2 addresses (say, either a table or the final page is not accessible) - so something that basically never happens in sane environments.
Identified by manual code inspection.
Fixes: a3508fbe9dc6 ("KVM: s390: vsie: initial support for nested virtualization") Cc: stable@vger.kernel.org # v4.8+ Signed-off-by: David Hildenbrand david@redhat.com Link: https://lore.kernel.org/r/20200403153050.20569-3-david@redhat.com Reviewed-by: Claudio Imbrenda imbrenda@linux.ibm.com Reviewed-by: Christian Borntraeger borntraeger@de.ibm.com [borntraeger@de.ibm.com: fix patch description] Signed-off-by: Christian Borntraeger borntraeger@de.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/s390/kvm/vsie.c | 1 + 1 file changed, 1 insertion(+)
--- a/arch/s390/kvm/vsie.c +++ b/arch/s390/kvm/vsie.c @@ -1202,6 +1202,7 @@ static int vsie_run(struct kvm_vcpu *vcp scb_s->iprcc = PGM_ADDRESSING; scb_s->pgmilc = 4; scb_s->gpsw.addr = __rewind_psw(scb_s->gpsw, 4); + rc = 1; } return rc; }
From: Sean Christopherson sean.j.christopherson@intel.com
commit edd4fa37baa6ee8e44dc65523b27bd6fe44c94de upstream.
Reallocate the rmap array and recalculate large page compatibility when moving an existing memslot to correctly handle the alignment properties of the new memslot. The number of rmap entries required at each level is dependent on the alignment of the memslot's base gfn with respect to that level, e.g. moving a large-page aligned memslot so that it becomes unaligned will increase the number of rmap entries needed at the now unaligned level.
Not updating the rmap array is the most obvious bug, as KVM accesses garbage data beyond the end of the rmap. KVM interprets the bad data as pointers, leading to non-canonical #GPs, unexpected #PFs, etc...
general protection fault: 0000 [#1] SMP CPU: 0 PID: 1909 Comm: move_memory_reg Not tainted 5.4.0-rc7+ #139 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 RIP: 0010:rmap_get_first+0x37/0x50 [kvm] Code: <48> 8b 3b 48 85 ff 74 ec e8 6c f4 ff ff 85 c0 74 e3 48 89 d8 5b c3 RSP: 0018:ffffc9000021bbc8 EFLAGS: 00010246 RAX: ffff00617461642e RBX: ffff00617461642e RCX: 0000000000000012 RDX: ffff88827400f568 RSI: ffffc9000021bbe0 RDI: ffff88827400f570 RBP: 0010000000000000 R08: ffffc9000021bd00 R09: ffffc9000021bda8 R10: ffffc9000021bc48 R11: 0000000000000000 R12: 0030000000000000 R13: 0000000000000000 R14: ffff88827427d700 R15: ffffc9000021bce8 FS: 00007f7eda014700(0000) GS:ffff888277a00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007f7ed9216ff8 CR3: 0000000274391003 CR4: 0000000000162eb0 Call Trace: kvm_mmu_slot_set_dirty+0xa1/0x150 [kvm] __kvm_set_memory_region.part.64+0x559/0x960 [kvm] kvm_set_memory_region+0x45/0x60 [kvm] kvm_vm_ioctl+0x30f/0x920 [kvm] do_vfs_ioctl+0xa1/0x620 ksys_ioctl+0x66/0x70 __x64_sys_ioctl+0x16/0x20 do_syscall_64+0x4c/0x170 entry_SYSCALL_64_after_hwframe+0x44/0xa9 RIP: 0033:0x7f7ed9911f47 Code: <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 21 6f 2c 00 f7 d8 64 89 01 48 RSP: 002b:00007ffc00937498 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: 0000000001ab0010 RCX: 00007f7ed9911f47 RDX: 0000000001ab1350 RSI: 000000004020ae46 RDI: 0000000000000004 RBP: 000000000000000a R08: 0000000000000000 R09: 00007f7ed9214700 R10: 00007f7ed92149d0 R11: 0000000000000246 R12: 00000000bffff000 R13: 0000000000000003 R14: 00007f7ed9215000 R15: 0000000000000000 Modules linked in: kvm_intel kvm irqbypass ---[ end trace 0c5f570b3358ca89 ]---
The disallow_lpage tracking is more subtle. Failure to update results in KVM creating large pages when it shouldn't, either due to stale data or again due to indexing beyond the end of the metadata arrays, which can lead to memory corruption and/or leaking data to guest/userspace.
Note, the arrays for the old memslot are freed by the unconditional call to kvm_free_memslot() in __kvm_set_memory_region().
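As a rough illustration of why the per-level metadata must be resized on a move, here is a hedged userspace sketch (a simplified model with illustrative names, not KVM's actual lpage_info/rmap allocation code) showing how the number of entries at a large-page level depends on the base gfn's alignment:

#include <stdio.h>

/*
 * Simplified model (illustrative only): the number of entries a slot needs
 * at a large-page level is the number of aligned blocks its gfn range
 * touches, which changes with the alignment of the base gfn.
 */
static unsigned long entries_needed(unsigned long base_gfn,
                                    unsigned long npages,
                                    unsigned int level_shift)
{
        unsigned long first = base_gfn >> level_shift;
        unsigned long last = (base_gfn + npages - 1) >> level_shift;

        return last - first + 1;
}

int main(void)
{
        /* 2M level on x86: 512 4K pages per large page, i.e. a shift of 9. */
        printf("aligned:   %lu\n", entries_needed(0x200, 0x200, 9)); /* 1 */
        printf("unaligned: %lu\n", entries_needed(0x2ff, 0x200, 9)); /* 2 */
        return 0;
}

Moving the slot from an aligned to an unaligned base gfn increases the count, which is why reusing the old arrays ends up indexing past their end.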
Fixes: 05da45583de9b ("KVM: MMU: large page support") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson sean.j.christopherson@intel.com Reviewed-by: Peter Xu peterx@redhat.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/kvm/x86.c | 11 +++++++++++ 1 file changed, 11 insertions(+)
--- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -9768,6 +9768,13 @@ int kvm_arch_create_memslot(struct kvm * { int i;
+ /* + * Clear out the previous array pointers for the KVM_MR_MOVE case. The + * old arrays will be freed by __kvm_set_memory_region() if installing + * the new memslot is successful. + */ + memset(&slot->arch, 0, sizeof(slot->arch)); + for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) { struct kvm_lpage_info *linfo; unsigned long ugfn; @@ -9849,6 +9856,10 @@ int kvm_arch_prepare_memory_region(struc const struct kvm_userspace_memory_region *mem, enum kvm_mr_change change) { + if (change == KVM_MR_MOVE) + return kvm_arch_create_memslot(kvm, memslot, + mem->memory_size >> PAGE_SHIFT); + return 0; }
From: Sean Christopherson sean.j.christopherson@intel.com
commit 31603d4fc2bb4f0815245d496cb970b27b4f636a upstream.
VMCLEAR all in-use VMCSes during a crash, even if kdump's NMI shootdown interrupted a KVM update of the percpu in-use VMCS list.
Because NMIs are not blocked by disabling IRQs, it's possible that crash_vmclear_local_loaded_vmcss() could be called while the percpu list of VMCSes is being modified, e.g. in the middle of list_add() in vmx_vcpu_load_vmcs(). This potential corner case was called out in the original commit[*], but the analysis of its impact was wrong.
Skipping the VMCLEARs is wrong because it all but guarantees that a loaded, and therefore cached, VMCS will live across kexec and corrupt memory in the new kernel. Corruption will occur because the CPU's VMCS cache is non-coherent, i.e. not snooped, and so the writeback of VMCS memory on its eviction will overwrite random memory in the new kernel. The VMCS will live because the NMI shootdown also disables VMX, i.e. the in-progress VMCLEAR will #UD, and existing Intel CPUs do not flush the VMCS cache on VMXOFF.
Furthermore, interrupting list_add() and list_del() is safe due to crash_vmclear_local_loaded_vmcss() using forward iteration. list_add() ensures the new entry is not visible to forward iteration unless the entire add completes, via WRITE_ONCE(prev->next, new). A bad "prev" pointer could be observed if the NMI shootdown interrupted list_del() or list_add(), but list_for_each_entry() does not consume ->prev.
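To make the forward-iteration argument concrete, here is a hedged userspace sketch of the publication order that list_add() relies on (a simplified model of the kernel's list helpers, not the actual include/linux/list.h code):

#include <stdio.h>

struct list_node {
        struct list_node *next;
        struct list_node *prev;
};

/*
 * Simplified list_add(): the new entry only becomes reachable to a forward
 * iterator through the final store to head->next, after all of its own
 * fields are initialised.  An NMI interrupting this sequence and walking
 * the list forward therefore sees either the old list or the fully linked
 * entry; forward iteration never dereferences ->prev, so the transiently
 * updated next->prev is harmless.
 */
static void list_add_publish(struct list_node *new, struct list_node *head)
{
        struct list_node *next = head->next;

        next->prev = new;
        new->next = next;
        new->prev = head;
        __atomic_store_n(&head->next, new, __ATOMIC_RELAXED); /* like WRITE_ONCE() */
}

int main(void)
{
        struct list_node head = { &head, &head };
        struct list_node a, b;

        list_add_publish(&a, &head);
        list_add_publish(&b, &head);

        /* Forward walk: head -> b -> a -> back to head. */
        for (struct list_node *p = head.next; p != &head; p = p->next)
                printf("node %p\n", (void *)p);
        return 0;
}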
In addition to removing the temporary disabling of VMCLEAR, open code loaded_vmcs_init() in __loaded_vmcs_clear() and reorder VMCLEAR so that the VMCS is deleted from the list only after it's been VMCLEAR'd. Deleting the VMCS before VMCLEAR would allow a race where the NMI shootdown could arrive between list_del() and vmcs_clear() and thus neither flow would execute a successful VMCLEAR. Alternatively, more code could be moved into loaded_vmcs_init(), but that gets rather silly as the only other user, alloc_loaded_vmcs(), doesn't need the smp_wmb() and would need to work around the list_del().
Update the smp_*() comments related to the list manipulation, and opportunistically reword them to improve clarity.
[*] https://patchwork.kernel.org/patch/1675731/#3720461
Fixes: 8f536b7697a0 ("KVM: VMX: provide the vmclear function and a bitmap to support VMCLEAR in kdump") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson sean.j.christopherson@intel.com Message-Id: 20200321193751.24985-2-sean.j.christopherson@intel.com Reviewed-by: Vitaly Kuznetsov vkuznets@redhat.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/kvm/vmx/vmx.c | 67 +++++++++++-------------------------------------- 1 file changed, 16 insertions(+), 51 deletions(-)
--- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -664,43 +664,15 @@ void loaded_vmcs_init(struct loaded_vmcs }
#ifdef CONFIG_KEXEC_CORE -/* - * This bitmap is used to indicate whether the vmclear - * operation is enabled on all cpus. All disabled by - * default. - */ -static cpumask_t crash_vmclear_enabled_bitmap = CPU_MASK_NONE; - -static inline void crash_enable_local_vmclear(int cpu) -{ - cpumask_set_cpu(cpu, &crash_vmclear_enabled_bitmap); -} - -static inline void crash_disable_local_vmclear(int cpu) -{ - cpumask_clear_cpu(cpu, &crash_vmclear_enabled_bitmap); -} - -static inline int crash_local_vmclear_enabled(int cpu) -{ - return cpumask_test_cpu(cpu, &crash_vmclear_enabled_bitmap); -} - static void crash_vmclear_local_loaded_vmcss(void) { int cpu = raw_smp_processor_id(); struct loaded_vmcs *v;
- if (!crash_local_vmclear_enabled(cpu)) - return; - list_for_each_entry(v, &per_cpu(loaded_vmcss_on_cpu, cpu), loaded_vmcss_on_cpu_link) vmcs_clear(v->vmcs); } -#else -static inline void crash_enable_local_vmclear(int cpu) { } -static inline void crash_disable_local_vmclear(int cpu) { } #endif /* CONFIG_KEXEC_CORE */
static void __loaded_vmcs_clear(void *arg) @@ -712,19 +684,24 @@ static void __loaded_vmcs_clear(void *ar return; /* vcpu migration can race with cpu offline */ if (per_cpu(current_vmcs, cpu) == loaded_vmcs->vmcs) per_cpu(current_vmcs, cpu) = NULL; - crash_disable_local_vmclear(cpu); + + vmcs_clear(loaded_vmcs->vmcs); + if (loaded_vmcs->shadow_vmcs && loaded_vmcs->launched) + vmcs_clear(loaded_vmcs->shadow_vmcs); + list_del(&loaded_vmcs->loaded_vmcss_on_cpu_link);
/* - * we should ensure updating loaded_vmcs->loaded_vmcss_on_cpu_link - * is before setting loaded_vmcs->vcpu to -1 which is done in - * loaded_vmcs_init. Otherwise, other cpu can see vcpu = -1 fist - * then adds the vmcs into percpu list before it is deleted. + * Ensure all writes to loaded_vmcs, including deleting it from its + * current percpu list, complete before setting loaded_vmcs->vcpu to + * -1, otherwise a different cpu can see vcpu == -1 first and add + * loaded_vmcs to its percpu list before it's deleted from this cpu's + * list. Pairs with the smp_rmb() in vmx_vcpu_load_vmcs(). */ smp_wmb();
- loaded_vmcs_init(loaded_vmcs); - crash_enable_local_vmclear(cpu); + loaded_vmcs->cpu = -1; + loaded_vmcs->launched = 0; }
void loaded_vmcs_clear(struct loaded_vmcs *loaded_vmcs) @@ -1333,18 +1310,17 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu if (!already_loaded) { loaded_vmcs_clear(vmx->loaded_vmcs); local_irq_disable(); - crash_disable_local_vmclear(cpu);
/* - * Read loaded_vmcs->cpu should be before fetching - * loaded_vmcs->loaded_vmcss_on_cpu_link. - * See the comments in __loaded_vmcs_clear(). + * Ensure loaded_vmcs->cpu is read before adding loaded_vmcs to + * this cpu's percpu list, otherwise it may not yet be deleted + * from its previous cpu's percpu list. Pairs with the + * smb_wmb() in __loaded_vmcs_clear(). */ smp_rmb();
list_add(&vmx->loaded_vmcs->loaded_vmcss_on_cpu_link, &per_cpu(loaded_vmcss_on_cpu, cpu)); - crash_enable_local_vmclear(cpu); local_irq_enable(); }
@@ -2260,17 +2236,6 @@ static int hardware_enable(void) INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu)); spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
- /* - * Now we can enable the vmclear operation in kdump - * since the loaded_vmcss_on_cpu list on this cpu - * has been initialized. - * - * Though the cpu is not in VMX operation now, there - * is no problem to enable the vmclear operation - * for the loaded_vmcss_on_cpu list is empty! - */ - crash_enable_local_vmclear(cpu); - rdmsrl(MSR_IA32_FEATURE_CONTROL, old);
test_bits = FEATURE_CONTROL_LOCKED;
From: Sean Christopherson sean.j.christopherson@intel.com
commit d18b2f43b9147c8005ae0844fb445d8cc6a87e31 upstream.
Check the result of __vmalloc() to avoid dereferencing a NULL pointer in the event that the allocation fails.
Fixes: d1e5b0e98ea27 ("kvm: Make VM ioctl do valloc for some archs") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson sean.j.christopherson@intel.com Reviewed-by: Vitaly Kuznetsov vkuznets@redhat.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/kvm/svm.c | 4 ++++ arch/x86/kvm/vmx/vmx.c | 4 ++++ 2 files changed, 8 insertions(+)
--- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -1930,6 +1930,10 @@ static struct kvm *svm_vm_alloc(void) struct kvm_svm *kvm_svm = __vmalloc(sizeof(struct kvm_svm), GFP_KERNEL_ACCOUNT | __GFP_ZERO, PAGE_KERNEL); + + if (!kvm_svm) + return NULL; + return &kvm_svm->kvm; }
--- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6633,6 +6633,10 @@ static struct kvm *vmx_vm_alloc(void) struct kvm_vmx *kvm_vmx = __vmalloc(sizeof(struct kvm_vmx), GFP_KERNEL_ACCOUNT | __GFP_ZERO, PAGE_KERNEL); + + if (!kvm_vmx) + return NULL; + return &kvm_vmx->kvm; }
From: Sean Christopherson sean.j.christopherson@intel.com
commit 842f4be95899df22b5843ba1a7c8cf37e831a6e8 upstream.
Add a hand coded assembly trampoline to preserve volatile registers across vmread_error(), and to handle the calling convention differences between 64-bit and 32-bit due to asmlinkage on vmread_error(). Pass @field and @fault on the stack when invoking the trampoline to avoid clobbering volatile registers in the context of the inline assembly.
Calling vmread_error() directly from inline assembly is partially broken on 64-bit, and completely broken on 32-bit. On 64-bit, it will clobber %rdi and %rsi (used to pass @field and @fault) and any volatile regs written by vmread_error(). On 32-bit, asmlinkage means vmread_error() expects the parameters to be passed on the stack, not via regs.
Opportunistically zero out the result in the trampoline to save a few bytes of code for every VMREAD. A happy side effect of the trampoline is that the inline code footprint is reduced by three bytes on 64-bit due to PUSH/POP being more efficient (in terms of opcode bytes) than MOV.
Fixes: 6e2020977e3e6 ("KVM: VMX: Add error handling to VMREAD helper") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson sean.j.christopherson@intel.com Message-Id: 20200326160712.28803-1-sean.j.christopherson@intel.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/kvm/vmx/ops.h | 28 ++++++++++++++++----- arch/x86/kvm/vmx/vmenter.S | 58 +++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 79 insertions(+), 7 deletions(-)
--- a/arch/x86/kvm/vmx/ops.h +++ b/arch/x86/kvm/vmx/ops.h @@ -12,7 +12,8 @@
#define __ex(x) __kvm_handle_fault_on_reboot(x)
-asmlinkage void vmread_error(unsigned long field, bool fault); +__attribute__((regparm(0))) void vmread_error_trampoline(unsigned long field, + bool fault); void vmwrite_error(unsigned long field, unsigned long value); void vmclear_error(struct vmcs *vmcs, u64 phys_addr); void vmptrld_error(struct vmcs *vmcs, u64 phys_addr); @@ -70,15 +71,28 @@ static __always_inline unsigned long __v asm volatile("1: vmread %2, %1\n\t" ".byte 0x3e\n\t" /* branch taken hint */ "ja 3f\n\t" - "mov %2, %%" _ASM_ARG1 "\n\t" - "xor %%" _ASM_ARG2 ", %%" _ASM_ARG2 "\n\t" - "2: call vmread_error\n\t" - "xor %k1, %k1\n\t" + + /* + * VMREAD failed. Push '0' for @fault, push the failing + * @field, and bounce through the trampoline to preserve + * volatile registers. + */ + "push $0\n\t" + "push %2\n\t" + "2:call vmread_error_trampoline\n\t" + + /* + * Unwind the stack. Note, the trampoline zeros out the + * memory for @fault so that the result is '0' on error. + */ + "pop %2\n\t" + "pop %1\n\t" "3:\n\t"
+ /* VMREAD faulted. As above, except push '1' for @fault. */ ".pushsection .fixup, "ax"\n\t" - "4: mov %2, %%" _ASM_ARG1 "\n\t" - "mov $1, %%" _ASM_ARG2 "\n\t" + "4: push $1\n\t" + "push %2\n\t" "jmp 2b\n\t" ".popsection\n\t" _ASM_EXTABLE(1b, 4b) --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -234,3 +234,61 @@ SYM_FUNC_START(__vmx_vcpu_run) 2: mov $1, %eax jmp 1b SYM_FUNC_END(__vmx_vcpu_run) + +/** + * vmread_error_trampoline - Trampoline from inline asm to vmread_error() + * @field: VMCS field encoding that failed + * @fault: %true if the VMREAD faulted, %false if it failed + + * Save and restore volatile registers across a call to vmread_error(). Note, + * all parameters are passed on the stack. + */ +SYM_FUNC_START(vmread_error_trampoline) + push %_ASM_BP + mov %_ASM_SP, %_ASM_BP + + push %_ASM_AX + push %_ASM_CX + push %_ASM_DX +#ifdef CONFIG_X86_64 + push %rdi + push %rsi + push %r8 + push %r9 + push %r10 + push %r11 +#endif +#ifdef CONFIG_X86_64 + /* Load @field and @fault to arg1 and arg2 respectively. */ + mov 3*WORD_SIZE(%rbp), %_ASM_ARG2 + mov 2*WORD_SIZE(%rbp), %_ASM_ARG1 +#else + /* Parameters are passed on the stack for 32-bit (see asmlinkage). */ + push 3*WORD_SIZE(%ebp) + push 2*WORD_SIZE(%ebp) +#endif + + call vmread_error + +#ifndef CONFIG_X86_64 + add $8, %esp +#endif + + /* Zero out @fault, which will be popped into the result register. */ + _ASM_MOV $0, 3*WORD_SIZE(%_ASM_BP) + +#ifdef CONFIG_X86_64 + pop %r11 + pop %r10 + pop %r9 + pop %r8 + pop %rsi + pop %rdi +#endif + pop %_ASM_DX + pop %_ASM_CX + pop %_ASM_AX + pop %_ASM_BP + + ret +SYM_FUNC_END(vmread_error_trampoline)
From: Vitaly Kuznetsov vkuznets@redhat.com
commit dbef2808af6c594922fe32833b30f55f35e9da6d upstream.
If KVM wasn't used at all before we crash the cleanup procedure fails with BUG: unable to handle page fault for address: ffffffffffffffc8 #PF: supervisor read access in kernel mode #PF: error_code(0x0000) - not-present page PGD 23215067 P4D 23215067 PUD 23217067 PMD 0 Oops: 0000 [#8] SMP PTI CPU: 0 PID: 3542 Comm: bash Kdump: loaded Tainted: G D 5.6.0-rc2+ #823 RIP: 0010:crash_vmclear_local_loaded_vmcss.cold+0x19/0x51 [kvm_intel]
The root cause is that the loaded_vmcss_on_cpu list is not yet initialized; we initialize it in hardware_enable(), but this only happens when we start a VM.
Previously, we used to have a bitmap with enabled CPUs and that was preventing [masking] the issue.
Initialize the loaded_vmcss_on_cpu list earlier, right before we assign the crash_vmclear_loaded_vmcss pointer. The blocked_vcpu_on_cpu list and blocked_vcpu_on_cpu_lock are moved along with it for consistency.
Fixes: 31603d4fc2bb ("KVM: VMX: Always VMCLEAR in-use VMCSes during crash with kexec support") Signed-off-by: Vitaly Kuznetsov vkuznets@redhat.com Message-Id: 20200401081348.1345307-1-vkuznets@redhat.com Reviewed-by: Sean Christopherson sean.j.christopherson@intel.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/kvm/vmx/vmx.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-)
--- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -2232,10 +2232,6 @@ static int hardware_enable(void) !hv_get_vp_assist_page(cpu)) return -EFAULT;
- INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu)); - INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu)); - spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu)); - rdmsrl(MSR_IA32_FEATURE_CONTROL, old);
test_bits = FEATURE_CONTROL_LOCKED; @@ -8006,7 +8002,7 @@ module_exit(vmx_exit);
static int __init vmx_init(void) { - int r; + int r, cpu;
#if IS_ENABLED(CONFIG_HYPERV) /* @@ -8060,6 +8056,12 @@ static int __init vmx_init(void) return r; }
+ for_each_possible_cpu(cpu) { + INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu)); + INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu)); + spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu)); + } + #ifdef CONFIG_KEXEC_CORE rcu_assign_pointer(crash_vmclear_loaded_vmcss, crash_vmclear_local_loaded_vmcss);
From: Steve French stfrench@microsoft.com
commit cf5371ae460eb8e484e4884747af270c86c3c469 upstream.
There are cases when we don't want to send the SMB2 flush operation (e.g. when the user specifies the mount parameter "nostrictsync"), and it can be a very expensive operation on the server. In most cases, in order to set mtime we simply need to flush (write) the dirty pages from the client and send the writes to the server, not also send a flush protocol operation to the server.
Fixes: aa081859b10c ("cifs: flush before set-info if we have writeable handles") CC: Stable stable@vger.kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/cifs/inode.c | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-)
--- a/fs/cifs/inode.c +++ b/fs/cifs/inode.c @@ -2517,25 +2517,26 @@ cifs_setattr_nounix(struct dentry *diren
/* * Attempt to flush data before changing attributes. We need to do - * this for ATTR_SIZE and ATTR_MTIME for sure, and if we change the - * ownership or mode then we may also need to do this. Here, we take - * the safe way out and just do the flush on all setattr requests. If - * the flush returns error, store it to report later and continue. + * this for ATTR_SIZE and ATTR_MTIME. If the flush of the data + * returns error, store it to report later and continue. * * BB: This should be smarter. Why bother flushing pages that * will be truncated anyway? Also, should we error out here if - * the flush returns error? + * the flush returns error? Do we need to check for ATTR_MTIME_SET flag? */ - rc = filemap_write_and_wait(inode->i_mapping); - if (is_interrupt_error(rc)) { - rc = -ERESTARTSYS; - goto cifs_setattr_exit; + if (attrs->ia_valid & (ATTR_MTIME | ATTR_SIZE | ATTR_CTIME)) { + rc = filemap_write_and_wait(inode->i_mapping); + if (is_interrupt_error(rc)) { + rc = -ERESTARTSYS; + goto cifs_setattr_exit; + } + mapping_set_error(inode->i_mapping, rc); }
- mapping_set_error(inode->i_mapping, rc); rc = 0;
- if (attrs->ia_valid & ATTR_MTIME) { + if ((attrs->ia_valid & ATTR_MTIME) && + !(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOSSYNC)) { rc = cifs_get_writable_file(cifsInode, FIND_WR_ANY, &wfile); if (!rc) { tcon = tlink_tcon(wfile->tlink);
From: Yilu Lin linyilu@huawei.com
commit 97adda8b3ab703de8e4c8d27646ddd54fe22879c upstream.
This patch fixes a bug in collect_uncached_read_data() where rc is automatically converted from a signed number to an unsigned number when the CIFS asynchronous read fails, causing ctx->rc to hold a wrong (positive) value.
Example: Share a directory and create a file on the Windows OS. Mount the directory to the Linux OS using CIFS. On the CIFS client of the Linux OS, invoke the pread interface to deliver the read request.
The read length plus the offset of the read request is greater than the maximum file size.
In this case, the CIFS server on the Windows OS returns a failure message (for example, the return value of smb2.nt_status is STATUS_INVALID_PARAMETER).
After receiving the response message, the CIFS client parses smb2.nt_status to STATUS_INVALID_PARAMETER and converts it to the Linux error code (rdata->result=-22).
Then the CIFS client invokes the collect_uncached_read_data function to assign the value of rdata->result to rc, that is, rc=rdata->result=-22.
The type of the ctx->total_len variable is unsigned integer, the type of the rc variable is integer, and the type of the ctx->rc variable is ssize_t.
Therefore, during the ternary operation, the value of rc is automatically converted to an unsigned number. The final result is ctx->rc=4294967274. However, the expected result is ctx->rc=-22.
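A minimal standalone C sketch of the conversion described above (the variables are local stand-ins for the CIFS fields, not the real structures):

#include <stdio.h>
#include <sys/types.h>

int main(void)
{
        unsigned int total_len = 100;   /* stands in for ctx->total_len */
        int rc = -22;                   /* stands in for rdata->result */
        ssize_t ctx_rc;

        /*
         * Without a cast, the usual arithmetic conversions make the result
         * of the ?: operator unsigned, so -22 becomes 4294967274 before it
         * is ever assigned to the ssize_t.
         */
        ctx_rc = (rc == 0) ? total_len : rc;
        printf("buggy: %zd\n", ctx_rc);

        /* Casting the unsigned operand keeps the error value signed. */
        ctx_rc = (rc == 0) ? (ssize_t)total_len : rc;
        printf("fixed: %zd\n", ctx_rc);

        return 0;
}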
Signed-off-by: Yilu Lin linyilu@huawei.com Signed-off-by: Steve French stfrench@microsoft.com CC: Stable stable@vger.kernel.org Acked-by: Ronnie Sahlberg lsahlber@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/cifs/file.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/cifs/file.c +++ b/fs/cifs/file.c @@ -3842,7 +3842,7 @@ again: if (rc == -ENODATA) rc = 0;
- ctx->rc = (rc == 0) ? ctx->total_len : rc; + ctx->rc = (rc == 0) ? (ssize_t)ctx->total_len : rc;
mutex_unlock(&ctx->aio_mutex);
From: Frieder Schrempf frieder.schrempf@kontron.de
commit 2148937501ee3d663e0010e519a553fea67ad103 upstream.
For reading and writing the bad block markers, spinand->oobbuf is currently used as a buffer for the marker bytes. During the underlying read and write operations to actually get/set the content of the OOB area, the content of spinand->oobbuf is reused and changed by accessing it through spinand->oobbuf and/or spinand->databuf.
This is a flaw in the original design of the SPI NAND core and at the latest from 13c15e07eedf ("mtd: spinand: Handle the case where PROGRAM LOAD does not reset the cache") on, it results in not having the bad block marker written at all, as the spinand->oobbuf is cleared to 0xff after setting the marker bytes to zero.
To fix it, we now just store the two bytes for the marker on the stack and let the read/write operations copy it from/to the page buffer later.
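A toy userspace sketch of the underlying aliasing problem (buffer names are illustrative, not the spinand structures): when the marker is staged in the same shared scratch buffer that the low-level operation resets to 0xff, the marker is lost; a separate local buffer survives.

#include <stdio.h>
#include <string.h>

static unsigned char scratch[64];       /* stands in for spinand->oobbuf */

/* The low-level OOB write resets the whole scratch area first, like the
 * PROGRAM LOAD path that clears the cache to 0xff, then copies the OOB data. */
static void raw_oob_write(const unsigned char *oob, size_t len)
{
        size_t i;

        memset(scratch, 0xff, sizeof(scratch));
        for (i = 0; i < len; i++)
                scratch[i] = oob[i];
        printf("marker written as: %02x %02x\n", scratch[0], scratch[1]);
}

int main(void)
{
        unsigned char marker[2] = { 0x00, 0x00 };

        /* Buggy: the marker staged in the shared buffer is wiped before use. */
        memset(scratch, 0x00, 2);
        raw_oob_write(scratch, 2);      /* prints "ff ff" - marker lost */

        /* Fixed: keep the marker in a separate local buffer. */
        raw_oob_write(marker, 2);       /* prints "00 00" */
        return 0;
}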
Fixes: 7529df465248 ("mtd: nand: Add core infrastructure to support SPI NANDs") Cc: stable@vger.kernel.org Signed-off-by: Frieder Schrempf frieder.schrempf@kontron.de Reviewed-by: Boris Brezillon boris.brezillon@collabora.com Signed-off-by: Miquel Raynal miquel.raynal@bootlin.com Link: https://lore.kernel.org/linux-mtd/20200218100432.32433-2-frieder.schrempf@ko... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mtd/nand/spi/core.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-)
--- a/drivers/mtd/nand/spi/core.c +++ b/drivers/mtd/nand/spi/core.c @@ -568,18 +568,18 @@ static int spinand_mtd_write(struct mtd_ static bool spinand_isbad(struct nand_device *nand, const struct nand_pos *pos) { struct spinand_device *spinand = nand_to_spinand(nand); + u8 marker[2] = { }; struct nand_page_io_req req = { .pos = *pos, - .ooblen = 2, + .ooblen = sizeof(marker), .ooboffs = 0, - .oobbuf.in = spinand->oobbuf, + .oobbuf.in = marker, .mode = MTD_OPS_RAW, };
- memset(spinand->oobbuf, 0, 2); spinand_select_target(spinand, pos->target); spinand_read_page(spinand, &req, false); - if (spinand->oobbuf[0] != 0xff || spinand->oobbuf[1] != 0xff) + if (marker[0] != 0xff || marker[1] != 0xff) return true;
return false; @@ -603,11 +603,12 @@ static int spinand_mtd_block_isbad(struc static int spinand_markbad(struct nand_device *nand, const struct nand_pos *pos) { struct spinand_device *spinand = nand_to_spinand(nand); + u8 marker[2] = { }; struct nand_page_io_req req = { .pos = *pos, .ooboffs = 0, - .ooblen = 2, - .oobbuf.out = spinand->oobbuf, + .ooblen = sizeof(marker), + .oobbuf.out = marker, }; int ret;
@@ -622,7 +623,6 @@ static int spinand_markbad(struct nand_d
spinand_erase_op(spinand, pos);
- memset(spinand->oobbuf, 0, 2); return spinand_write_page(spinand, &req); }
From: Frieder Schrempf frieder.schrempf@kontron.de
commit b645ad39d56846618704e463b24bb994c9585c7f upstream.
Currently when marking a block, we use spinand_erase_op() to erase the block before writing the marker to the OOB area. Doing so without waiting for the operation to finish can lead to the marking failing silently and no bad block marker being written to the flash.
In fact we don't need to do an erase at all before writing the BBM. The ECC is disabled for raw accesses to the OOB data and we don't need to work around any issues with chips reporting ECC errors as it is known to be the case for raw NAND.
Fixes: 7529df465248 ("mtd: nand: Add core infrastructure to support SPI NANDs") Cc: stable@vger.kernel.org Signed-off-by: Frieder Schrempf frieder.schrempf@kontron.de Reviewed-by: Boris Brezillon boris.brezillon@collabora.com Signed-off-by: Miquel Raynal miquel.raynal@bootlin.com Link: https://lore.kernel.org/linux-mtd/20200218100432.32433-4-frieder.schrempf@ko... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mtd/nand/spi/core.c | 3 --- 1 file changed, 3 deletions(-)
--- a/drivers/mtd/nand/spi/core.c +++ b/drivers/mtd/nand/spi/core.c @@ -612,7 +612,6 @@ static int spinand_markbad(struct nand_d }; int ret;
- /* Erase block before marking it bad. */ ret = spinand_select_target(spinand, pos->target); if (ret) return ret; @@ -621,8 +620,6 @@ static int spinand_markbad(struct nand_d if (ret) return ret;
- spinand_erase_op(spinand, pos); - return spinand_write_page(spinand, &req); }
From: Piotr Sroka piotrs@cadence.com
commit e4578af0354176ff6b4ae78b9998b4f479f7c31c upstream.
The value of cdns_chip->sector_count is not yet known when ecc_size is derived, so ecc_size ends up being zero. Fix this by computing ecc_size later in the code, after sector_count has been set.
Fixes: ec4ba01e894d ("mtd: rawnand: Add new Cadence NAND driver to MTD subsystem") Cc: stable@vger.kernel.org Signed-off-by: Piotr Sroka piotrs@cadence.com Signed-off-by: Miquel Raynal miquel.raynal@bootlin.com Link: https://lore.kernel.org/linux-mtd/1581328530-29966-2-git-send-email-piotrs@c... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mtd/nand/raw/cadence-nand-controller.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/mtd/nand/raw/cadence-nand-controller.c +++ b/drivers/mtd/nand/raw/cadence-nand-controller.c @@ -2585,7 +2585,7 @@ int cadence_nand_attach_chip(struct nand { struct cdns_nand_ctrl *cdns_ctrl = to_cdns_nand_ctrl(chip->controller); struct cdns_nand_chip *cdns_chip = to_cdns_nand_chip(chip); - u32 ecc_size = cdns_chip->sector_count * chip->ecc.bytes; + u32 ecc_size; struct mtd_info *mtd = nand_to_mtd(chip); u32 max_oob_data_size; int ret; @@ -2625,6 +2625,7 @@ int cadence_nand_attach_chip(struct nand /* Error correction configuration. */ cdns_chip->sector_size = chip->ecc.size; cdns_chip->sector_count = mtd->writesize / cdns_chip->sector_size; + ecc_size = cdns_chip->sector_count * chip->ecc.bytes;
cdns_chip->avail_oob_size = mtd->oobsize - ecc_size;
From: Piotr Sroka piotrs@cadence.com
commit 9bf1903bed7a2e84f5a8deedb38f7e0ac5e8bfc6 upstream.
Increase the bad block marker size from one byte to two bytes. The bad block marker is handled by the skip bytes feature of HPNFC, and the controller expects this value to be an even number.
Fixes: ec4ba01e894d ("mtd: rawnand: Add new Cadence NAND driver to MTD subsystem") Cc: stable@vger.kernel.org Signed-off-by: Piotr Sroka piotrs@cadence.com Signed-off-by: Miquel Raynal miquel.raynal@bootlin.com Link: https://lore.kernel.org/linux-mtd/1581328530-29966-3-git-send-email-piotrs@c... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mtd/nand/raw/cadence-nand-controller.c | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-)
--- a/drivers/mtd/nand/raw/cadence-nand-controller.c +++ b/drivers/mtd/nand/raw/cadence-nand-controller.c @@ -2603,12 +2603,9 @@ int cadence_nand_attach_chip(struct nand chip->options |= NAND_NO_SUBPAGE_WRITE;
cdns_chip->bbm_offs = chip->badblockpos; - if (chip->options & NAND_BUSWIDTH_16) { - cdns_chip->bbm_offs &= ~0x01; - cdns_chip->bbm_len = 2; - } else { - cdns_chip->bbm_len = 1; - } + cdns_chip->bbm_offs &= ~0x01; + /* this value should be even number */ + cdns_chip->bbm_len = 2;
ret = nand_ecc_choose_conf(chip, &cdns_ctrl->ecc_caps,
From: Piotr Sroka piotrs@cadence.com
commit 0d7d6c8183aadb1dcc13f415941404a7913b46b3 upstream.
Reinit the completion object before executing a CDMA command to make sure a stale 'done' flag from a previous command has been cleared.
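A hedged userspace analogue (pthread-based, not the kernel's completion API) of why the flag must be reset: a completion left signalled by an earlier command, e.g. one that completed after its timeout, would otherwise make the next wait return immediately.

#include <pthread.h>
#include <stdbool.h>

/* Minimal userspace analogue of a kernel completion (illustrative only). */
struct completion {
        pthread_mutex_t lock;
        pthread_cond_t cond;
        bool done;
};

void complete(struct completion *c)
{
        pthread_mutex_lock(&c->lock);
        c->done = true;
        pthread_cond_signal(&c->cond);
        pthread_mutex_unlock(&c->lock);
}

void wait_for_completion(struct completion *c)
{
        pthread_mutex_lock(&c->lock);
        while (!c->done)
                pthread_cond_wait(&c->cond, &c->lock);
        pthread_mutex_unlock(&c->lock);
}

/*
 * Forget any stale completion, e.g. one signalled by a command that had
 * already timed out.  Without this, the wait for the next command returns
 * immediately even though its interrupt has not fired yet.
 */
void reinit_completion(struct completion *c)
{
        pthread_mutex_lock(&c->lock);
        c->done = false;
        pthread_mutex_unlock(&c->lock);
}

The fix follows this pattern in cadence_nand_cdma_send(): reset the IRQ state, reinit the completion, and only then kick off the CDMA descriptor.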
Fixes: ec4ba01e894d ("mtd: rawnand: Add new Cadence NAND driver to MTD subsystem") Cc: stable@vger.kernel.org Signed-off-by: Piotr Sroka piotrs@cadence.com Signed-off-by: Miquel Raynal miquel.raynal@bootlin.com Link: https://lore.kernel.org/linux-mtd/1581328530-29966-4-git-send-email-piotrs@c... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mtd/nand/raw/cadence-nand-controller.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/mtd/nand/raw/cadence-nand-controller.c +++ b/drivers/mtd/nand/raw/cadence-nand-controller.c @@ -997,6 +997,7 @@ static int cadence_nand_cdma_send(struct return status;
cadence_nand_reset_irq(cdns_ctrl); + reinit_completion(&cdns_ctrl->complete);
writel_relaxed((u32)cdns_ctrl->dma_cdma_desc, cdns_ctrl->reg + CMD_REG2);
From: Qu Wenruo wqu@suse.com
commit b3ff8f1d380e65dddd772542aa9bff6c86bf715a upstream.
[BUG] There is a fuzzed image which could cause a KASAN report at unmount time.
BUG: KASAN: use-after-free in btrfs_queue_work+0x2c1/0x390 Read of size 8 at addr ffff888067cf6848 by task umount/1922
CPU: 0 PID: 1922 Comm: umount Tainted: G W 5.0.21 #1 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014 Call Trace: dump_stack+0x5b/0x8b print_address_description+0x70/0x280 kasan_report+0x13a/0x19b btrfs_queue_work+0x2c1/0x390 btrfs_wq_submit_bio+0x1cd/0x240 btree_submit_bio_hook+0x18c/0x2a0 submit_one_bio+0x1be/0x320 flush_write_bio.isra.41+0x2c/0x70 btree_write_cache_pages+0x3bb/0x7f0 do_writepages+0x5c/0x130 __writeback_single_inode+0xa3/0x9a0 writeback_single_inode+0x23d/0x390 write_inode_now+0x1b5/0x280 iput+0x2ef/0x600 close_ctree+0x341/0x750 generic_shutdown_super+0x126/0x370 kill_anon_super+0x31/0x50 btrfs_kill_super+0x36/0x2b0 deactivate_locked_super+0x80/0xc0 deactivate_super+0x13c/0x150 cleanup_mnt+0x9a/0x130 task_work_run+0x11a/0x1b0 exit_to_usermode_loop+0x107/0x130 do_syscall_64+0x1e5/0x280 entry_SYSCALL_64_after_hwframe+0x44/0xa9
[CAUSE] The fuzzed image has a completely screwed up extent tree:
leaf 29421568 gen 8 total ptrs 6 free space 3587 owner EXTENT_TREE refs 2 lock (w:0 r:0 bw:0 br:0 sw:0 sr:0) lock_owner 0 current 5938 item 0 key (12587008 168 4096) itemoff 3942 itemsize 53 extent refs 1 gen 9 flags 1 ref#0: extent data backref root 5 objectid 259 offset 0 count 1 item 1 key (12591104 168 8192) itemoff 3889 itemsize 53 extent refs 1 gen 9 flags 1 ref#0: extent data backref root 5 objectid 271 offset 0 count 1 item 2 key (12599296 168 4096) itemoff 3836 itemsize 53 extent refs 1 gen 9 flags 1 ref#0: extent data backref root 5 objectid 259 offset 4096 count 1 item 3 key (29360128 169 0) itemoff 3803 itemsize 33 extent refs 1 gen 9 flags 2 ref#0: tree block backref root 5 item 4 key (29368320 169 1) itemoff 3770 itemsize 33 extent refs 1 gen 9 flags 2 ref#0: tree block backref root 5 item 5 key (29372416 169 0) itemoff 3737 itemsize 33 extent refs 1 gen 9 flags 2 ref#0: tree block backref root 5
Note that leaf 29421568 doesn't have its backref in the extent tree. Thus extent allocator can re-allocate leaf 29421568 for other trees.
In short, the bug is caused by:
- Existing tree block gets allocated to log tree This got its generation bumped.
- Log tree balance cleaned dirty bit of offending tree block It will not be written back to disk, thus no WRITTEN flag.
- Original owner of the tree block gets COWed Since the tree block has higher transid, no WRITTEN flag, it's reused, and not traced by transaction::dirty_pages.
- Transaction aborted Tree blocks get cleaned according to transaction::dirty_pages. But the offending tree block is not recorded at all.
- Filesystem unmount All pages are assumed to be clean, so all workqueues are destroyed, then iput(btree_inode) is called. But the offending tree block is still dirty, which triggers writeback and causes a use-after-free bug.
The detailed sequence looks like this:
- Initial status eb: 29421568, header=WRITTEN bflags_dirty=0, page_dirty=0, gen=8, not traced by any dirty extent_io_tree.
- New tree block is allocated Since there is no backref for 29421568, it's re-allocated as new tree block. Keep in mind that tree block 29421568 is still referred by extent tree.
- Tree block 29421568 is filled for log tree eb: 29421568, header=0 bflags_dirty=1, page_dirty=1, gen=9 << (gen bumped) traced by btrfs_root::dirty_log_pages
- Some log tree operations Since the fs is using node size 4096, the log tree can easily go a level higher.
- Log tree needs balance Tree block 29421568 gets all its content pushed to right, thus now it is empty, and we don't need it. btrfs_clean_tree_block() from __push_leaf_right() get called.
eb: 29421568, header=0 bflags_dirty=0, page_dirty=0, gen=9 traced by btrfs_root::dirty_log_pages
- Log tree write back btree_write_cache_pages() goes through dirty pages ranges, but since page of tree block 29421568 gets cleaned already, it's not written back to disk. Thus it doesn't have WRITTEN bit set. But ranges in dirty_log_pages are cleared.
eb: 29421568, header=0 bflags_dirty=0, page_dirty=0, gen=9 not traced by any dirty extent_io_tree.
- Extent tree update when committing transaction Since tree block 29421568 has transid equal to running trans, and has no WRITTEN bit, should_cow_block() will use it directly without adding it to btrfs_transaction::dirty_pages.
eb: 29421568, header=0 bflags_dirty=1, page_dirty=1, gen=9 not traced by any dirty extent_io_tree.
At this stage, we're doomed. We have a dirty eb not tracked by any extent io tree.
- Transaction gets aborted due to corrupted extent tree Btrfs cleans up dirty pages according to transaction::dirty_pages and btrfs_root::dirty_log_pages. But since tree block 29421568 is not tracked by either of them, it's still dirty.
eb: 29421568, header=0 bflags_dirty=1, page_dirty=1, gen=9 not traced by any dirty extent_io_tree.
- Filesystem unmount Since all cleanup is assumed to be done, all workqueues are destroyed. Then iput(btree_inode) is called, expecting no dirty pages. But tree block 29421568 is still dirty, thus triggering writeback. Since all workqueues are already freed, we cause use-after-free.
This shows us that log tree blocks plus a bad extent tree can cause wild dirty pages.
[FIX] To fix the problem, don't submit any btree write bio if the filesystem has any error. This is the last safety net, just in case other cleanups haven't caught it.
Link: https://github.com/bobfuzzer/CVE/tree/master/CVE-2019-19377 CC: stable@vger.kernel.org # 5.4+ Reviewed-by: Josef Bacik josef@toxicpanda.com Signed-off-by: Qu Wenruo wqu@suse.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/extent_io.c | 35 ++++++++++++++++++++++++++++++++++- 1 file changed, 34 insertions(+), 1 deletion(-)
--- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -3931,6 +3931,7 @@ int btree_write_cache_pages(struct addre .extent_locked = 0, .sync_io = wbc->sync_mode == WB_SYNC_ALL, }; + struct btrfs_fs_info *fs_info = BTRFS_I(mapping->host)->root->fs_info; int ret = 0; int done = 0; int nr_to_write_done = 0; @@ -4044,7 +4045,39 @@ retry: end_write_bio(&epd, ret); return ret; } - ret = flush_write_bio(&epd); + /* + * If something went wrong, don't allow any metadata write bio to be + * submitted. + * + * This would prevent use-after-free if we had dirty pages not + * cleaned up, which can still happen by fuzzed images. + * + * - Bad extent tree + * Allowing existing tree block to be allocated for other trees. + * + * - Log tree operations + * Exiting tree blocks get allocated to log tree, bumps its + * generation, then get cleaned in tree re-balance. + * Such tree block will not be written back, since it's clean, + * thus no WRITTEN flag set. + * And after log writes back, this tree block is not traced by + * any dirty extent_io_tree. + * + * - Offending tree block gets re-dirtied from its original owner + * Since it has bumped generation, no WRITTEN flag, it can be + * reused without COWing. This tree block will not be traced + * by btrfs_transaction::dirty_pages. + * + * Now such dirty tree block will not be cleaned by any dirty + * extent io tree. Thus we don't want to submit such wild eb + * if the fs already has error. + */ + if (!test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) { + ret = flush_write_bio(&epd); + } else { + ret = -EUCLEAN; + end_write_bio(&epd, ret); + } return ret; }
From: Filipe Manana fdmanana@suse.com
commit f0cc2cd70164efe8f75c5d99560f0f69969c72e4 upstream.
During unmount we can have a job from the delayed inode items work queue still running, which can lead to at least two bad things:
1) A crash, because the worker can try to create a transaction just after the fs roots were freed;
2) A transaction leak, because the worker can create a transaction before the fs roots are freed and just after we committed the last transaction and after we stopped the transaction kthread.
A stack trace example of the crash:
[79011.691214] kernel BUG at lib/radix-tree.c:982! [79011.692056] invalid opcode: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC PTI [79011.693180] CPU: 3 PID: 1394 Comm: kworker/u8:2 Tainted: G W 5.6.0-rc2-btrfs-next-54 #2 (...) [79011.696789] Workqueue: btrfs-delayed-meta btrfs_work_helper [btrfs] [79011.697904] RIP: 0010:radix_tree_tag_set+0xe7/0x170 (...) [79011.702014] RSP: 0018:ffffb3c84a317ca0 EFLAGS: 00010293 [79011.702949] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 [79011.704202] RDX: ffffb3c84a317cb0 RSI: ffffb3c84a317ca8 RDI: ffff8db3931340a0 [79011.705463] RBP: 0000000000000005 R08: 0000000000000005 R09: ffffffff974629d0 [79011.706756] R10: ffffb3c84a317bc0 R11: 0000000000000001 R12: ffff8db393134000 [79011.708010] R13: ffff8db3931340a0 R14: ffff8db393134068 R15: 0000000000000001 [79011.709270] FS: 0000000000000000(0000) GS:ffff8db3b6a00000(0000) knlGS:0000000000000000 [79011.710699] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [79011.711710] CR2: 00007f22c2a0a000 CR3: 0000000232ad4005 CR4: 00000000003606e0 [79011.712958] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [79011.714205] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [79011.715448] Call Trace: [79011.715925] record_root_in_trans+0x72/0xf0 [btrfs] [79011.716819] btrfs_record_root_in_trans+0x4b/0x70 [btrfs] [79011.717925] start_transaction+0xdd/0x5c0 [btrfs] [79011.718829] btrfs_async_run_delayed_root+0x17e/0x2b0 [btrfs] [79011.719915] btrfs_work_helper+0xaa/0x720 [btrfs] [79011.720773] process_one_work+0x26d/0x6a0 [79011.721497] worker_thread+0x4f/0x3e0 [79011.722153] ? process_one_work+0x6a0/0x6a0 [79011.722901] kthread+0x103/0x140 [79011.723481] ? kthread_create_worker_on_cpu+0x70/0x70 [79011.724379] ret_from_fork+0x3a/0x50 (...)
The following diagram shows a sequence of steps that lead to the crash during ummount of the filesystem:
CPU 1 CPU 2 CPU 3
btrfs_punch_hole() btrfs_btree_balance_dirty() btrfs_balance_delayed_items() --> sees fs_info->delayed_root->items with value 200, which is greater than BTRFS_DELAYED_BACKGROUND (128) and smaller than BTRFS_DELAYED_WRITEBACK (512) btrfs_wq_run_delayed_node() --> queues a job for fs_info->delayed_workers to run btrfs_async_run_delayed_root()
btrfs_async_run_delayed_root() --> job queued by CPU 1
--> starts picking and running delayed nodes from the prepare_list list
close_ctree()
btrfs_delete_unused_bgs()
btrfs_commit_super()
btrfs_join_transaction() --> gets transaction N
btrfs_commit_transaction(N) --> set transaction state to TRANTS_STATE_COMMIT_START
btrfs_first_prepared_delayed_node() --> picks delayed node X through the prepared_list list
btrfs_run_delayed_items()
btrfs_first_delayed_node() --> also picks delayed node X but through the node_list list
__btrfs_commit_inode_delayed_items() --> runs all delayed items from this node and drops the node's item count to 0 through call to btrfs_release_delayed_inode()
--> finishes running any remaining delayed nodes
--> finishes transaction commit
--> stops cleaner and transaction threads
btrfs_free_fs_roots() --> frees all roots and removes them from the radix tree fs_info->fs_roots_radix
btrfs_join_transaction() start_transaction() btrfs_record_root_in_trans() record_root_in_trans() radix_tree_tag_set() --> crashes because the root is not in the radix tree anymore
If the worker is able to call btrfs_join_transaction() before the unmount task frees the fs roots, we end up leaking a transaction and all its resources, since after the call to btrfs_commit_super() and stopping the transaction kthread, we don't expect to have any transaction open anymore.
When this situation happens the worker has a delayed node that has no more items to run, since the task calling btrfs_run_delayed_items(), which is doing a transaction commit, picks the same node and runs all its items first.
We can not wait for the worker to complete when running delayed items through btrfs_run_delayed_items(), because we call that function in several phases of a transaction commit, and that could cause a deadlock because the worker calls btrfs_join_transaction() and the task doing the transaction commit may have already set the transaction state to TRANS_STATE_COMMIT_DOING.
Also it's not possible to get into a situation where only some of the items of a delayed node are added to the fs/subvolume tree in the current transaction and the remaining ones in the next transaction, because when running the items of a delayed inode we lock its mutex, effectively waiting for the worker if the worker is running the items of the delayed node already.
Since this can only cause issues when unmounting a filesystem, fix it in a simple way by waiting for any jobs on the delayed workers queue before calling btrfs_commit_super() at close_ctree(). This works because at this point no one can call btrfs_btree_balance_dirty() or btrfs_balance_delayed_items(), and if we end up waiting for any worker to complete, btrfs_commit_super() will commit the transaction created by the worker.
CC: stable@vger.kernel.org # 4.4+ Signed-off-by: Filipe Manana fdmanana@suse.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/async-thread.c | 8 ++++++++ fs/btrfs/async-thread.h | 1 + fs/btrfs/disk-io.c | 13 +++++++++++++ 3 files changed, 22 insertions(+)
--- a/fs/btrfs/async-thread.c +++ b/fs/btrfs/async-thread.c @@ -395,3 +395,11 @@ void btrfs_set_work_high_priority(struct { set_bit(WORK_HIGH_PRIO_BIT, &work->flags); } + +void btrfs_flush_workqueue(struct btrfs_workqueue *wq) +{ + if (wq->high) + flush_workqueue(wq->high->normal_wq); + + flush_workqueue(wq->normal->normal_wq); +} --- a/fs/btrfs/async-thread.h +++ b/fs/btrfs/async-thread.h @@ -44,5 +44,6 @@ void btrfs_set_work_high_priority(struct struct btrfs_fs_info * __pure btrfs_work_owner(const struct btrfs_work *work); struct btrfs_fs_info * __pure btrfs_workqueue_owner(const struct __btrfs_workqueue *wq); bool btrfs_workqueue_normal_congested(const struct btrfs_workqueue *wq); +void btrfs_flush_workqueue(struct btrfs_workqueue *wq);
#endif --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -3986,6 +3986,19 @@ void __cold close_ctree(struct btrfs_fs_ */ btrfs_delete_unused_bgs(fs_info);
+ /* + * There might be existing delayed inode workers still running + * and holding an empty delayed inode item. We must wait for + * them to complete first because they can create a transaction. + * This happens when someone calls btrfs_balance_delayed_items() + * and then a transaction commit runs the same delayed nodes + * before any delayed worker has done something with the nodes. + * We must wait for any worker here and not at transaction + * commit time since that could cause a deadlock. + * This is a very rare case. + */ + btrfs_flush_workqueue(fs_info->delayed_workers); + ret = btrfs_commit_super(fs_info); if (ret) btrfs_err(fs_info, "commit super ret %d", ret);
From: Josef Bacik josef@toxicpanda.com
commit 6217b0fadd4473a16fabc6aecd7527a9f71af534 upstream.
If we do merge_reloc_roots() we could insert a few roots onto the dirty subvol roots list, where we hold a ref on them. If we fail to start the transaction we need to run clean_dirty_subvols() in order to cleanup the refs.
CC: stable@vger.kernel.org # 5.4+ Signed-off-by: Josef Bacik josef@toxicpanda.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/relocation.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/fs/btrfs/relocation.c +++ b/fs/btrfs/relocation.c @@ -4221,10 +4221,10 @@ restart: goto out_free; } btrfs_commit_transaction(trans); +out_free: ret = clean_dirty_subvols(rc); if (ret < 0 && !err) err = ret; -out_free: btrfs_free_block_rsv(fs_info, rc->block_rsv); btrfs_free_path(path); return err; @@ -4622,10 +4622,10 @@ int btrfs_recover_relocation(struct btrf trans = btrfs_join_transaction(rc->extent_root); if (IS_ERR(trans)) { err = PTR_ERR(trans); - goto out_free; + goto out_clean; } err = btrfs_commit_transaction(trans); - +out_clean: ret = clean_dirty_subvols(rc); if (ret < 0 && !err) err = ret;
From: Josef Bacik josef@toxicpanda.com
commit 75ec1db8717a8f0a9d9c8d033e542fdaa7b73898 upstream.
In my EIO stress testing I noticed I was getting forced to rescan the uuid tree pretty often, which was weird. This is because my error injection stuff would sometimes inject an error after log replay but before we loaded the UUID tree. If log replay committed the transaction it wouldn't have updated the uuid tree generation, but the tree was valid and didn't change, so there's no reason to not update the generation here.
Fix this by setting the BTRFS_FS_UPDATE_UUID_TREE_GEN bit immediately after reading all the fs roots if the uuid tree generation matches the fs generation. Then any transaction commits that happen during mount won't screw up our uuid tree state, forcing us to do needless uuid rescans.
Fixes: 70f801754728 ("Btrfs: check UUID tree during mount if required") CC: stable@vger.kernel.org # 4.19+ Signed-off-by: Josef Bacik josef@toxicpanda.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/disk-io.c | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-)
--- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -3054,6 +3054,18 @@ int __cold open_ctree(struct super_block if (ret) goto fail_tree_roots;
+ /* + * If we have a uuid root and we're not being told to rescan we need to + * check the generation here so we can set the + * BTRFS_FS_UPDATE_UUID_TREE_GEN bit. Otherwise we could commit the + * transaction during a balance or the log replay without updating the + * uuid generation, and then if we crash we would rescan the uuid tree, + * even though it was perfectly fine. + */ + if (fs_info->uuid_root && !btrfs_test_opt(fs_info, RESCAN_UUID_TREE) && + fs_info->generation == btrfs_super_uuid_tree_generation(disk_super)) + set_bit(BTRFS_FS_UPDATE_UUID_TREE_GEN, &fs_info->flags); + ret = btrfs_verify_dev_extents(fs_info); if (ret) { btrfs_err(fs_info, @@ -3284,8 +3296,6 @@ int __cold open_ctree(struct super_block close_ctree(fs_info); return ret; } - } else { - set_bit(BTRFS_FS_UPDATE_UUID_TREE_GEN, &fs_info->flags); } set_bit(BTRFS_FS_OPEN, &fs_info->flags);
From: Josef Bacik josef@toxicpanda.com
commit 8e19c9732ad1d127b5575a10f4fbcacf740500ff upstream.
If we have an error while building the backref tree in relocation we'll process all the pending edges and then free the node. However if we integrated some edges into the cache we'll lose our link to those edges by simply freeing this node, which means we'll leak memory and references to any roots that we've found.
Instead we need to use remove_backref_node(), which walks through all of the edges that are still linked to this node, frees them up and drops any root references we may be holding.
CC: stable@vger.kernel.org # 4.9+ Reviewed-by: Qu Wenruo wqu@suse.com Signed-off-by: Josef Bacik josef@toxicpanda.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/relocation.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/btrfs/relocation.c +++ b/fs/btrfs/relocation.c @@ -1186,7 +1186,7 @@ out: free_backref_node(cache, lower); }
- free_backref_node(cache, node); + remove_backref_node(cache, node); return ERR_PTR(err); } ASSERT(!node || !node->detached);
From: Filipe Manana fdmanana@suse.com
commit 95418ed1d10774cd9a49af6f39e216c1256f1eeb upstream.
When doing a fast fsync for a range that starts at an offset greater than zero, we can end up with a log that when replayed causes the respective inode to miss a file extent item representing a hole if we are not using the NO_HOLES feature. This is because for fast fsyncs we don't log any extents that cover a range different from the one requested in the fsync.
Example scenario to trigger it:
$ mkfs.btrfs -O ^no-holes -f /dev/sdd $ mount /dev/sdd /mnt
# Create a file with a single 256K extent and fsync it to clear the full sync # bit in the inode - we want the msync below to trigger a fast fsync. $ xfs_io -f -c "pwrite -S 0xab 0 256K" -c "fsync" /mnt/foo
# Force a transaction commit and wipe out the log tree. $ sync
# Dirty 768K of data, increasing the file size to 1Mb, and flush only # the range from 256K to 512K without updating the log tree # (sync_file_range() does not trigger fsync, it only starts writeback # and waits for it to finish).
$ xfs_io -c "pwrite -S 0xcd 256K 768K" /mnt/foo $ xfs_io -c "sync_range -abw 256K 256K" /mnt/foo
# Now dirty the range from 768K to 1M again and sync that range. $ xfs_io -c "mmap -w 768K 256K" \ -c "mwrite -S 0xef 768K 256K" \ -c "msync -s 768K 256K" \ -c "munmap" \ /mnt/foo
<power fail>
# Mount to replay the log. $ mount /dev/sdd /mnt $ umount /mnt
$ btrfs check /dev/sdd Opening filesystem to check... Checking filesystem on /dev/sdd UUID: 482fb574-b288-478e-a190-a9c44a78fca6 [1/7] checking root items [2/7] checking extents [3/7] checking free space cache [4/7] checking fs roots root 5 inode 257 errors 100, file extent discount Found file extent holes: start: 262144, len: 524288 ERROR: errors found in fs roots found 720896 bytes used, error(s) found total csum bytes: 512 total tree bytes: 131072 total fs tree bytes: 32768 total extent tree bytes: 16384 btree space waste bytes: 123514 file data blocks allocated: 589824 referenced 589824
Fix this issue by setting the range to full (0 to LLONG_MAX) when the NO_HOLES feature is not enabled. This results in extra work being done but it gives the guarantee we don't end up with missing holes after replaying the log.
CC: stable@vger.kernel.org # 4.19+ Reviewed-by: Josef Bacik josef@toxicpanda.com Signed-off-by: Filipe Manana fdmanana@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/file.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
--- a/fs/btrfs/file.c +++ b/fs/btrfs/file.c @@ -2072,6 +2072,16 @@ int btrfs_sync_file(struct file *file, l btrfs_init_log_ctx(&ctx, inode);
/* + * Set the range to full if the NO_HOLES feature is not enabled. + * This is to avoid missing file extent items representing holes after + * replaying the log. + */ + if (!btrfs_fs_incompat(fs_info, NO_HOLES)) { + start = 0; + end = LLONG_MAX; + } + + /* * We write the dirty pages in the range and wait until they complete * out of the ->i_mutex. If so, we can flush the dirty pages by * multi-task, and make the performance up. See
From: Josef Bacik josef@toxicpanda.com
commit fb2d83eefef4e1c717205bac71cb1941edf8ae11 upstream.
If we fail to load an fs root, or fail to start a transaction we can bail without unsetting the reloc control, which leads to problems later when we free the reloc control but still have it attached to the file system.
In the normal path we'll end up calling unset_reloc_control() twice, but all it does is set fs_info->reloc_control = NULL, and we can only have one balance at a time so it's not racey.
CC: stable@vger.kernel.org # 5.4+ Reviewed-by: Qu Wenruo wqu@suse.com Signed-off-by: Josef Bacik josef@toxicpanda.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/relocation.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-)
--- a/fs/btrfs/relocation.c +++ b/fs/btrfs/relocation.c @@ -4581,9 +4581,8 @@ int btrfs_recover_relocation(struct btrf
trans = btrfs_join_transaction(rc->extent_root); if (IS_ERR(trans)) { - unset_reloc_control(rc); err = PTR_ERR(trans); - goto out_free; + goto out_unset; }
rc->merge_reloc_tree = 1; @@ -4603,7 +4602,7 @@ int btrfs_recover_relocation(struct btrf if (IS_ERR(fs_root)) { err = PTR_ERR(fs_root); list_add_tail(&reloc_root->root_list, &reloc_roots); - goto out_free; + goto out_unset; }
err = __add_reloc_root(reloc_root); @@ -4613,7 +4612,7 @@ int btrfs_recover_relocation(struct btrf
err = btrfs_commit_transaction(trans); if (err) - goto out_free; + goto out_unset;
merge_reloc_roots(rc);
@@ -4629,7 +4628,8 @@ out_clean: ret = clean_dirty_subvols(rc); if (ret < 0 && !err) err = ret; -out_free: +out_unset: + unset_reloc_control(rc); kfree(rc); out: if (!list_empty(&reloc_roots))
From: Robbie Ko robbieko@synology.com
commit 6ff06729c22ec0b7498d900d79cc88cfb8aceaeb upstream.
Ordered ops are started twice in sync file, once outside of inode mutex and once inside, taking the dio semaphore. There was one error path missing the semaphore unlock.
Fixes: aab15e8ec2576 ("Btrfs: fix rare chances for data loss when doing a fast fsync") CC: stable@vger.kernel.org # 4.19+ Signed-off-by: Robbie Ko robbieko@synology.com Reviewed-by: Filipe Manana fdmanana@suse.com [ add changelog ] Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/file.c | 1 + 1 file changed, 1 insertion(+)
--- a/fs/btrfs/file.c +++ b/fs/btrfs/file.c @@ -2135,6 +2135,7 @@ int btrfs_sync_file(struct file *file, l */ ret = start_ordered_ops(inode, start, end); if (ret) { + up_write(&BTRFS_I(inode)->dio_sem); inode_unlock(inode); goto out; }
From: Josef Bacik josef@toxicpanda.com
commit 351cbf6e4410e7ece05e35d0a07320538f2418b4 upstream.
Zygo reported the following lockdep splat while testing the balance patches
====================================================== WARNING: possible circular locking dependency detected 5.6.0-c6f0579d496a+ #53 Not tainted ------------------------------------------------------ kswapd0/1133 is trying to acquire lock: ffff888092f622c0 (&delayed_node->mutex){+.+.}, at: __btrfs_release_delayed_node+0x7c/0x5b0
but task is already holding lock: ffffffff8fc5f860 (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x5/0x30
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (fs_reclaim){+.+.}: fs_reclaim_acquire.part.91+0x29/0x30 fs_reclaim_acquire+0x19/0x20 kmem_cache_alloc_trace+0x32/0x740 add_block_entry+0x45/0x260 btrfs_ref_tree_mod+0x6e2/0x8b0 btrfs_alloc_tree_block+0x789/0x880 alloc_tree_block_no_bg_flush+0xc6/0xf0 __btrfs_cow_block+0x270/0x940 btrfs_cow_block+0x1ba/0x3a0 btrfs_search_slot+0x999/0x1030 btrfs_insert_empty_items+0x81/0xe0 btrfs_insert_delayed_items+0x128/0x7d0 __btrfs_run_delayed_items+0xf4/0x2a0 btrfs_run_delayed_items+0x13/0x20 btrfs_commit_transaction+0x5cc/0x1390 insert_balance_item.isra.39+0x6b2/0x6e0 btrfs_balance+0x72d/0x18d0 btrfs_ioctl_balance+0x3de/0x4c0 btrfs_ioctl+0x30ab/0x44a0 ksys_ioctl+0xa1/0xe0 __x64_sys_ioctl+0x43/0x50 do_syscall_64+0x77/0x2c0 entry_SYSCALL_64_after_hwframe+0x49/0xbe
-> #0 (&delayed_node->mutex){+.+.}: __lock_acquire+0x197e/0x2550 lock_acquire+0x103/0x220 __mutex_lock+0x13d/0xce0 mutex_lock_nested+0x1b/0x20 __btrfs_release_delayed_node+0x7c/0x5b0 btrfs_remove_delayed_node+0x49/0x50 btrfs_evict_inode+0x6fc/0x900 evict+0x19a/0x2c0 dispose_list+0xa0/0xe0 prune_icache_sb+0xbd/0xf0 super_cache_scan+0x1b5/0x250 do_shrink_slab+0x1f6/0x530 shrink_slab+0x32e/0x410 shrink_node+0x2a5/0xba0 balance_pgdat+0x4bd/0x8a0 kswapd+0x35a/0x800 kthread+0x1e9/0x210 ret_from_fork+0x3a/0x50
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&delayed_node->mutex);
                               lock(fs_reclaim);
  lock(&delayed_node->mutex);
*** DEADLOCK ***
3 locks held by kswapd0/1133:
 #0: ffffffff8fc5f860 (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x5/0x30
 #1: ffffffff8fc380d8 (shrinker_rwsem){++++}, at: shrink_slab+0x1e8/0x410
 #2: ffff8881e0e6c0e8 (&type->s_umount_key#42){++++}, at: trylock_super+0x1b/0x70
stack backtrace: CPU: 2 PID: 1133 Comm: kswapd0 Not tainted 5.6.0-c6f0579d496a+ #53 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014 Call Trace: dump_stack+0xc1/0x11a print_circular_bug.isra.38.cold.57+0x145/0x14a check_noncircular+0x2a9/0x2f0 ? print_circular_bug.isra.38+0x130/0x130 ? stack_trace_consume_entry+0x90/0x90 ? save_trace+0x3cc/0x420 __lock_acquire+0x197e/0x2550 ? btrfs_inode_clear_file_extent_range+0x9b/0xb0 ? register_lock_class+0x960/0x960 lock_acquire+0x103/0x220 ? __btrfs_release_delayed_node+0x7c/0x5b0 __mutex_lock+0x13d/0xce0 ? __btrfs_release_delayed_node+0x7c/0x5b0 ? __asan_loadN+0xf/0x20 ? pvclock_clocksource_read+0xeb/0x190 ? __btrfs_release_delayed_node+0x7c/0x5b0 ? mutex_lock_io_nested+0xc20/0xc20 ? __kasan_check_read+0x11/0x20 ? check_chain_key+0x1e6/0x2e0 mutex_lock_nested+0x1b/0x20 ? mutex_lock_nested+0x1b/0x20 __btrfs_release_delayed_node+0x7c/0x5b0 btrfs_remove_delayed_node+0x49/0x50 btrfs_evict_inode+0x6fc/0x900 ? btrfs_setattr+0x840/0x840 ? do_raw_spin_unlock+0xa8/0x140 evict+0x19a/0x2c0 dispose_list+0xa0/0xe0 prune_icache_sb+0xbd/0xf0 ? invalidate_inodes+0x310/0x310 super_cache_scan+0x1b5/0x250 do_shrink_slab+0x1f6/0x530 shrink_slab+0x32e/0x410 ? do_shrink_slab+0x530/0x530 ? do_shrink_slab+0x530/0x530 ? __kasan_check_read+0x11/0x20 ? mem_cgroup_protected+0x13d/0x260 shrink_node+0x2a5/0xba0 balance_pgdat+0x4bd/0x8a0 ? mem_cgroup_shrink_node+0x490/0x490 ? _raw_spin_unlock_irq+0x27/0x40 ? finish_task_switch+0xce/0x390 ? rcu_read_lock_bh_held+0xb0/0xb0 kswapd+0x35a/0x800 ? _raw_spin_unlock_irqrestore+0x4c/0x60 ? balance_pgdat+0x8a0/0x8a0 ? finish_wait+0x110/0x110 ? __kasan_check_read+0x11/0x20 ? __kthread_parkme+0xc6/0xe0 ? balance_pgdat+0x8a0/0x8a0 kthread+0x1e9/0x210 ? kthread_create_worker_on_cpu+0xc0/0xc0 ret_from_fork+0x3a/0x50
This is because we hold that delayed node's mutex while doing tree operations. Fix this by just wrapping the searches in nofs.
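For reference, memalloc_nofs_save()/memalloc_nofs_restore() from <linux/sched/mm.h> scope the GFP_NOFS behaviour to the calling task, so any allocation done inside the section (including allocations deep inside btrfs_search_slot()) will not recurse into filesystem reclaim. A minimal sketch of the pattern the patch below applies around the delayed-item tree operations (in-kernel context assumed):

	unsigned int nofs_flag;

	/* Memory reclaim triggered by allocations in this section cannot
	 * re-enter the filesystem while the delayed node's mutex is held. */
	nofs_flag = memalloc_nofs_save();
	ret = btrfs_search_slot(trans, root, &key, path, -1, 1);
	memalloc_nofs_restore(nofs_flag);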
CC: stable@vger.kernel.org # 4.4+ Signed-off-by: Josef Bacik josef@toxicpanda.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/delayed-inode.c | 13 +++++++++++++ 1 file changed, 13 insertions(+)
--- a/fs/btrfs/delayed-inode.c +++ b/fs/btrfs/delayed-inode.c @@ -6,6 +6,7 @@
#include <linux/slab.h> #include <linux/iversion.h> +#include <linux/sched/mm.h> #include "misc.h" #include "delayed-inode.h" #include "disk-io.h" @@ -805,11 +806,14 @@ static int btrfs_insert_delayed_item(str struct btrfs_delayed_item *delayed_item) { struct extent_buffer *leaf; + unsigned int nofs_flag; char *ptr; int ret;
+ nofs_flag = memalloc_nofs_save(); ret = btrfs_insert_empty_item(trans, root, path, &delayed_item->key, delayed_item->data_len); + memalloc_nofs_restore(nofs_flag); if (ret < 0 && ret != -EEXIST) return ret;
@@ -937,6 +941,7 @@ static int btrfs_delete_delayed_items(st struct btrfs_delayed_node *node) { struct btrfs_delayed_item *curr, *prev; + unsigned int nofs_flag; int ret = 0;
do_again: @@ -945,7 +950,9 @@ do_again: if (!curr) goto delete_fail;
+ nofs_flag = memalloc_nofs_save(); ret = btrfs_search_slot(trans, root, &curr->key, path, -1, 1); + memalloc_nofs_restore(nofs_flag); if (ret < 0) goto delete_fail; else if (ret > 0) { @@ -1012,6 +1019,7 @@ static int __btrfs_update_delayed_inode( struct btrfs_key key; struct btrfs_inode_item *inode_item; struct extent_buffer *leaf; + unsigned int nofs_flag; int mod; int ret;
@@ -1024,7 +1032,9 @@ static int __btrfs_update_delayed_inode( else mod = 1;
+ nofs_flag = memalloc_nofs_save(); ret = btrfs_lookup_inode(trans, root, path, &key, mod); + memalloc_nofs_restore(nofs_flag); if (ret > 0) { btrfs_release_path(path); return -ENOENT; @@ -1075,7 +1085,10 @@ search:
key.type = BTRFS_INODE_EXTREF_KEY; key.offset = -1; + + nofs_flag = memalloc_nofs_save(); ret = btrfs_search_slot(trans, root, &key, path, -1, 1); + memalloc_nofs_restore(nofs_flag); if (ret < 0) goto err_out; ASSERT(ret);
From: Bjorn Andersson bjorn.andersson@linaro.org
commit 900fc60df22748dbc28e4970838e8f7b8f1013ce upstream.
Trying to reclaim mpss memory while the mba is not running causes the system to crash on devices with security fuses blown, so leave it assigned to the remote on shutdown and recover it on a subsequent boot.
Fixes: 6c5a9dc2481b ("remoteproc: qcom: Make secure world call for mem ownership switch") Cc: stable@vger.kernel.org Signed-off-by: Bjorn Andersson bjorn.andersson@linaro.org Signed-off-by: Sibi Sankar sibis@codeaurora.org Tested-by: Bjorn Andersson bjorn.andersson@linaro.org Link: https://lore.kernel.org/r/20200304194729.27979-2-sibis@codeaurora.org Signed-off-by: Bjorn Andersson bjorn.andersson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/remoteproc/qcom_q6v5_mss.c | 35 ++++++++++++++++++++++++----------- 1 file changed, 24 insertions(+), 11 deletions(-)
--- a/drivers/remoteproc/qcom_q6v5_mss.c +++ b/drivers/remoteproc/qcom_q6v5_mss.c @@ -887,11 +887,6 @@ static void q6v5_mba_reclaim(struct q6v5 writel(val, qproc->reg_base + QDSP6SS_PWR_CTL_REG); }
- ret = q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm, - false, qproc->mpss_phys, - qproc->mpss_size); - WARN_ON(ret); - q6v5_reset_assert(qproc);
q6v5_clk_disable(qproc->dev, qproc->reset_clks, @@ -981,6 +976,14 @@ static int q6v5_mpss_load(struct q6v5 *q max_addr = ALIGN(phdr->p_paddr + phdr->p_memsz, SZ_4K); }
+ /** + * In case of a modem subsystem restart on secure devices, the modem + * memory can be reclaimed only after MBA is loaded. For modem cold + * boot this will be a nop + */ + q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm, false, + qproc->mpss_phys, qproc->mpss_size); + mpss_reloc = relocate ? min_addr : qproc->mpss_phys; qproc->mpss_reloc = mpss_reloc; /* Load firmware segments */ @@ -1070,8 +1073,16 @@ static void qcom_q6v5_dump_segment(struc void *ptr = rproc_da_to_va(rproc, segment->da, segment->size);
/* Unlock mba before copying segments */ - if (!qproc->dump_mba_loaded) + if (!qproc->dump_mba_loaded) { ret = q6v5_mba_load(qproc); + if (!ret) { + /* Reset ownership back to Linux to copy segments */ + ret = q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm, + false, + qproc->mpss_phys, + qproc->mpss_size); + } + }
if (!ptr || ret) memset(dest, 0xff, segment->size); @@ -1082,8 +1093,14 @@ static void qcom_q6v5_dump_segment(struc
/* Reclaim mba after copying segments */ if (qproc->dump_segment_mask == qproc->dump_complete_mask) { - if (qproc->dump_mba_loaded) + if (qproc->dump_mba_loaded) { + /* Try to reset ownership back to Q6 */ + q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm, + true, + qproc->mpss_phys, + qproc->mpss_size); q6v5_mba_reclaim(qproc); + } } }
@@ -1123,10 +1140,6 @@ static int q6v5_start(struct rproc *rpro return 0;
reclaim_mpss: - xfermemop_ret = q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm, - false, qproc->mpss_phys, - qproc->mpss_size); - WARN_ON(xfermemop_ret); q6v5_mba_reclaim(qproc);
return ret;
From: Sibi Sankar sibis@codeaurora.org
commit d96f2571dc84d128cacf1944f4ecc87834c779a6 upstream.
On secure devices after a wdog/fatal interrupt, the mba region has to be refreshed in order to prevent the following errors during mba load.
Err Logs:
remoteproc remoteproc2: stopped remote processor 4080000.remoteproc
qcom-q6v5-mss 4080000.remoteproc: PBL returned unexpected status -284031232
qcom-q6v5-mss 4080000.remoteproc: PBL returned unexpected status -284031232
....
qcom-q6v5-mss 4080000.remoteproc: PBL returned unexpected status -284031232
qcom-q6v5-mss 4080000.remoteproc: MBA booted, loading mpss
Fixes: 7dd8ade24dc2a ("remoteproc: qcom: q6v5-mss: Add custom dump function for modem") Cc: stable@vger.kernel.org Signed-off-by: Sibi Sankar sibis@codeaurora.org Tested-by: Bjorn Andersson bjorn.andersson@linaro.org Link: https://lore.kernel.org/r/20200304194729.27979-4-sibis@codeaurora.org Signed-off-by: Bjorn Andersson bjorn.andersson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/remoteproc/qcom_q6v5_mss.c | 19 ++++++++++++++++++- 1 file changed, 18 insertions(+), 1 deletion(-)
--- a/drivers/remoteproc/qcom_q6v5_mss.c +++ b/drivers/remoteproc/qcom_q6v5_mss.c @@ -916,6 +916,23 @@ static void q6v5_mba_reclaim(struct q6v5 } }
+static int q6v5_reload_mba(struct rproc *rproc) +{ + struct q6v5 *qproc = rproc->priv; + const struct firmware *fw; + int ret; + + ret = request_firmware(&fw, rproc->firmware, qproc->dev); + if (ret < 0) + return ret; + + q6v5_load(rproc, fw); + ret = q6v5_mba_load(qproc); + release_firmware(fw); + + return ret; +} + static int q6v5_mpss_load(struct q6v5 *qproc) { const struct elf32_phdr *phdrs; @@ -1074,7 +1091,7 @@ static void qcom_q6v5_dump_segment(struc
/* Unlock mba before copying segments */ if (!qproc->dump_mba_loaded) { - ret = q6v5_mba_load(qproc); + ret = q6v5_reload_mba(rproc); if (!ret) { /* Reset ownership back to Linux to copy segments */ ret = q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm,
From: Nikita Shubin NShubin@topcon.com
commit 791c13b709dd51eb37330f2a5837434e90c87c27 upstream.
An undefined rproc_ops .kick method in a remoteproc driver will result in an "Unable to handle kernel NULL pointer dereference" in rproc_virtio_notify after firmware loading, if:
1) .kick method wasn't defined in driver
2) resource_table exists in firmware and has "Virtio device entry" defined
Let's refuse to register an rproc-induced virtio device if no kick method was defined for rproc.
[ 13.180049][ T415] 8<--- cut here --- [ 13.190558][ T415] Unable to handle kernel NULL pointer dereference at virtual address 00000000 [ 13.212544][ T415] pgd = (ptrval) [ 13.217052][ T415] [00000000] *pgd=00000000 [ 13.224692][ T415] Internal error: Oops: 80000005 [#1] PREEMPT SMP ARM [ 13.231318][ T415] Modules linked in: rpmsg_char imx_rproc virtio_rpmsg_bus rpmsg_core [last unloaded: imx_rproc] [ 13.241687][ T415] CPU: 0 PID: 415 Comm: unload-load.sh Not tainted 5.5.2-00002-g707df13bbbdd #6 [ 13.250561][ T415] Hardware name: Freescale i.MX7 Dual (Device Tree) [ 13.257009][ T415] PC is at 0x0 [ 13.260249][ T415] LR is at rproc_virtio_notify+0x2c/0x54 [ 13.265738][ T415] pc : [<00000000>] lr : [<8050f6b0>] psr: 60010113 [ 13.272702][ T415] sp : b8d47c48 ip : 00000001 fp : bc04de00 [ 13.278625][ T415] r10: bc04c000 r9 : 00000cc0 r8 : b8d46000 [ 13.284548][ T415] r7 : 00000000 r6 : b898f200 r5 : 00000000 r4 : b8a29800 [ 13.291773][ T415] r3 : 00000000 r2 : 990a3ad4 r1 : 00000000 r0 : b8a29800 [ 13.299000][ T415] Flags: nZCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment none [ 13.306833][ T415] Control: 10c5387d Table: b8b4806a DAC: 00000051 [ 13.313278][ T415] Process unload-load.sh (pid: 415, stack limit = 0x(ptrval)) [ 13.320591][ T415] Stack: (0xb8d47c48 to 0xb8d48000) [ 13.325651][ T415] 7c40: b895b680 00000001 b898f200 803c6430 b895bc80 7f00ae18 [ 13.334531][ T415] 7c60: 00000035 00000000 00000000 b9393200 80b3ed80 00004000 b9393268 bbf5a9a2 [ 13.343410][ T415] 7c80: 00000e00 00000200 00000000 7f00aff0 7f00a014 b895b680 b895b800 990a3ad4 [ 13.352290][ T415] 7ca0: 00000001 b898f210 b898f200 00000000 00000000 7f00e000 00000001 00000000 [ 13.361170][ T415] 7cc0: 00000000 803c62e0 80b2169c 802a0924 b898f210 00000000 00000000 b898f210 [ 13.370049][ T415] 7ce0: 80b9ba44 00000000 80b9ba48 00000000 7f00e000 00000008 80b2169c 80400114 [ 13.378929][ T415] 7d00: 80b2169c 8061fd64 b898f210 7f00e000 80400744 b8d46000 80b21634 80b21634 [ 13.387809][ T415] 7d20: 80b2169c 80400614 80b21634 80400718 7f00e000 00000000 b8d47d7c 80400744 [ 13.396689][ T415] 7d40: b8d46000 80b21634 80b21634 803fe338 b898f254 b80fe76c b8d32e38 990a3ad4 [ 13.405569][ T415] 7d60: fffffff3 b898f210 b8d46000 00000001 b898f254 803ffe7c 80857a90 b898f210 [ 13.414449][ T415] 7d80: 00000001 990a3ad4 b8d46000 b898f210 b898f210 80b17aec b8a29c20 803ff0a4 [ 13.423328][ T415] 7da0: b898f210 00000000 b8d46000 803fb8e0 b898f200 00000000 80b17aec b898f210 [ 13.432209][ T415] 7dc0: b8a29c20 990a3ad4 b895b900 b898f200 8050fb7c 80b17aec b898f210 b8a29c20 [ 13.441088][ T415] 7de0: b8a29800 b895b900 b8a29a04 803c5ec0 b8a29c00 b898f200 b8a29a20 00000007 [ 13.449968][ T415] 7e00: b8a29c20 8050fd78 b8a29800 00000000 b8a29a20 b8a29c04 b8a29820 b8a299d0 [ 13.458848][ T415] 7e20: b895b900 8050e5a4 b8a29800 b8a299d8 b8d46000 b8a299e0 b8a29820 b8a299d0 [ 13.467728][ T415] 7e40: b895b900 8050e008 000041ed 00000000 b8b8c440 b8a299d8 b8a299e0 b8a299d8 [ 13.476608][ T415] 7e60: b8b8c440 990a3ad4 00000000 b8a29820 b8b8c400 00000006 b8a29800 b895b880 [ 13.485487][ T415] 7e80: b8d47f78 00000000 00000000 8050f4b4 00000006 b895b890 b8b8c400 008fbea0 [ 13.494367][ T415] 7ea0: b895b880 8029f530 00000000 00000000 b8d46000 00000006 b8d46000 008fbea0 [ 13.503246][ T415] 7ec0: 8029f434 00000000 b8d46000 00000000 00000000 8021e2e4 0000000a 8061fd0c [ 13.512125][ T415] 7ee0: 0000000a b8af0c00 0000000a b8af0c40 00000001 b8af0c40 00000000 8061f910 [ 13.521005][ T415] 7f00: 0000000a 80240af4 00000002 b8d46000 00000000 8061fd0c 00000002 80232d7c [ 13.529884][ T415] 7f20: 
00000000 b8d46000 00000000 990a3ad4 00000000 00000006 b8a62d80 008fbea0 [ 13.538764][ T415] 7f40: b8d47f78 00000000 b8d46000 00000000 00000000 802210c0 b88f2900 00000000 [ 13.547644][ T415] 7f60: b8a62d80 b8a62d80 b8d46000 00000006 008fbea0 80221320 00000000 00000000 [ 13.556524][ T415] 7f80: b8af0c00 990a3ad4 0000006c 008fbea0 76f1cda0 00000004 80101204 00000004 [ 13.565403][ T415] 7fa0: 00000000 80101000 0000006c 008fbea0 00000001 008fbea0 00000006 00000000 [ 13.574283][ T415] 7fc0: 0000006c 008fbea0 76f1cda0 00000004 00000006 00000006 00000000 00000000 [ 13.583162][ T415] 7fe0: 00000004 7ebaf7d0 76eb4c0b 76e3f206 600d0030 00000001 00000000 00000000 [ 13.592056][ T415] [<8050f6b0>] (rproc_virtio_notify) from [<803c6430>] (virtqueue_notify+0x1c/0x34) [ 13.601298][ T415] [<803c6430>] (virtqueue_notify) from [<7f00ae18>] (rpmsg_probe+0x280/0x380 [virtio_rpmsg_bus]) [ 13.611663][ T415] [<7f00ae18>] (rpmsg_probe [virtio_rpmsg_bus]) from [<803c62e0>] (virtio_dev_probe+0x1f8/0x2c4) [ 13.622022][ T415] [<803c62e0>] (virtio_dev_probe) from [<80400114>] (really_probe+0x200/0x450) [ 13.630817][ T415] [<80400114>] (really_probe) from [<80400614>] (driver_probe_device+0x16c/0x1ac) [ 13.639873][ T415] [<80400614>] (driver_probe_device) from [<803fe338>] (bus_for_each_drv+0x84/0xc8) [ 13.649102][ T415] [<803fe338>] (bus_for_each_drv) from [<803ffe7c>] (__device_attach+0xd4/0x164) [ 13.658069][ T415] [<803ffe7c>] (__device_attach) from [<803ff0a4>] (bus_probe_device+0x84/0x8c) [ 13.666950][ T415] [<803ff0a4>] (bus_probe_device) from [<803fb8e0>] (device_add+0x444/0x768) [ 13.675572][ T415] [<803fb8e0>] (device_add) from [<803c5ec0>] (register_virtio_device+0xa4/0xfc) [ 13.684541][ T415] [<803c5ec0>] (register_virtio_device) from [<8050fd78>] (rproc_add_virtio_dev+0xcc/0x1b8) [ 13.694466][ T415] [<8050fd78>] (rproc_add_virtio_dev) from [<8050e5a4>] (rproc_start+0x148/0x200) [ 13.703521][ T415] [<8050e5a4>] (rproc_start) from [<8050e008>] (rproc_boot+0x384/0x5c0) [ 13.711708][ T415] [<8050e008>] (rproc_boot) from [<8050f4b4>] (state_store+0x3c/0xc8) [ 13.719723][ T415] [<8050f4b4>] (state_store) from [<8029f530>] (kernfs_fop_write+0xfc/0x214) [ 13.728348][ T415] [<8029f530>] (kernfs_fop_write) from [<8021e2e4>] (__vfs_write+0x30/0x1cc) [ 13.736971][ T415] [<8021e2e4>] (__vfs_write) from [<802210c0>] (vfs_write+0xac/0x17c) [ 13.744985][ T415] [<802210c0>] (vfs_write) from [<80221320>] (ksys_write+0x64/0xe4) [ 13.752825][ T415] [<80221320>] (ksys_write) from [<80101000>] (ret_fast_syscall+0x0/0x54) [ 13.761178][ T415] Exception stack(0xb8d47fa8 to 0xb8d47ff0) [ 13.766932][ T415] 7fa0: 0000006c 008fbea0 00000001 008fbea0 00000006 00000000 [ 13.775811][ T415] 7fc0: 0000006c 008fbea0 76f1cda0 00000004 00000006 00000006 00000000 00000000 [ 13.784687][ T415] 7fe0: 00000004 7ebaf7d0 76eb4c0b 76e3f206 [ 13.790442][ T415] Code: bad PC value [ 13.839214][ T415] ---[ end trace 1fe21ecfc9f28852 ]---
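For driver authors, the new requirement boils down to providing a .kick callback in the rproc_ops whenever the firmware's resource table may announce a vdev. A hedged sketch of what that looks like (the my_rproc_* names are hypothetical, and my_rproc_start/my_rproc_stop are assumed to be defined elsewhere in such a driver):

	static void my_rproc_kick(struct rproc *rproc, int vqid)
	{
		/* Notify the remote processor that virtqueue 'vqid' has new
		 * buffers, e.g. by ringing a mailbox or doorbell. */
	}

	static const struct rproc_ops my_rproc_ops = {
		.start = my_rproc_start,
		.stop  = my_rproc_stop,
		.kick  = my_rproc_kick,		/* required once a vdev is registered */
		.load  = rproc_elf_load_segments,
	};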
Reviewed-by: Mathieu Poirier mathieu.poirier@linaro.org Signed-off-by: Nikita Shubin NShubin@topcon.com Fixes: 7a186941626d ("remoteproc: remove the single rpmsg vdev limitation") Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200306072452.24743-1-NShubin@topcon.com Signed-off-by: Bjorn Andersson bjorn.andersson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/remoteproc/remoteproc_virtio.c | 7 +++++++ 1 file changed, 7 insertions(+)
--- a/drivers/remoteproc/remoteproc_virtio.c +++ b/drivers/remoteproc/remoteproc_virtio.c @@ -334,6 +334,13 @@ int rproc_add_virtio_dev(struct rproc_vd struct rproc_mem_entry *mem; int ret;
+ if (rproc->ops->kick == NULL) { + ret = -EINVAL; + dev_err(dev, ".kick method not defined for %s", + rproc->name); + goto out; + } + /* Try to find dedicated vdev buffer carveout */ mem = rproc_find_carveout_by_name(rproc, "vdev%dbuffer", rvdev->index); if (mem) {
From: Dan Carpenter dan.carpenter@oracle.com
commit eed74b3eba9eda36d155c11a12b2b4b50c67c1d8 upstream.
We need to decrement this refcounter on these error paths.
Fixes: f7d76e05d058 ("crypto: user - fix use_after_free of struct xxx_request") Cc: stable@vger.kernel.org Signed-off-by: Dan Carpenter dan.carpenter@oracle.com Acked-by: Neil Horman nhorman@tuxdriver.com Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- crypto/rng.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-)
--- a/crypto/rng.c +++ b/crypto/rng.c @@ -37,12 +37,16 @@ int crypto_rng_reset(struct crypto_rng * crypto_stats_get(alg); if (!seed && slen) { buf = kmalloc(slen, GFP_KERNEL); - if (!buf) + if (!buf) { + crypto_alg_put(alg); return -ENOMEM; + }
err = get_random_bytes_wait(buf, slen); - if (err) + if (err) { + crypto_alg_put(alg); goto out; + } seed = buf; }
From: Rosioru Dragos dragos.rosioru@nxp.com
commit fa03481b6e2e82355c46644147b614f18c7a8161 upstream.
Incorrect traversal of the scatterlist during the linearization phase led to computing the hash value of the wrong input buffer. The new implementation uses scatterwalk_map_and_copy() to address this issue.
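scatterwalk_map_and_copy(buf, sg, start, nbytes, out) copies nbytes between a scatterlist (starting at byte offset start) and a linear buffer, with out selecting the direction (0 means sg to buffer). A rough sketch of the linearization loop the patch below switches to, with names loosely borrowed from the driver (in-kernel context assumed, details trimmed):

	unsigned int oft = 0, len = req->nbytes, clen;

	while (len) {
		if (actx->fill + len > DCP_BUF_SZ)
			clen = DCP_BUF_SZ - actx->fill;
		else
			clen = len;

		/* Walks req->src internally, so chained and highmem
		 * scatterlists are handled correctly. */
		scatterwalk_map_and_copy(in_buf + actx->fill, req->src,
					 oft, clen, 0);
		oft += clen;
		len -= clen;
		actx->fill += clen;
		/* ... submit the buffer to the engine when it is full ... */
	}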
Cc: stable@vger.kernel.org Fixes: 15b59e7c3733 ("crypto: mxs - Add Freescale MXS DCP driver") Signed-off-by: Rosioru Dragos dragos.rosioru@nxp.com Reviewed-by: Horia Geantă horia.geanta@nxp.com Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/crypto/mxs-dcp.c | 54 ++++++++++++++++++++++------------------------- 1 file changed, 26 insertions(+), 28 deletions(-)
--- a/drivers/crypto/mxs-dcp.c +++ b/drivers/crypto/mxs-dcp.c @@ -20,6 +20,7 @@ #include <crypto/sha.h> #include <crypto/internal/hash.h> #include <crypto/internal/skcipher.h> +#include <crypto/scatterwalk.h>
#define DCP_MAX_CHANS 4 #define DCP_BUF_SZ PAGE_SIZE @@ -621,49 +622,46 @@ static int dcp_sha_req_to_buf(struct cry struct dcp_async_ctx *actx = crypto_ahash_ctx(tfm); struct dcp_sha_req_ctx *rctx = ahash_request_ctx(req); struct hash_alg_common *halg = crypto_hash_alg_common(tfm); - const int nents = sg_nents(req->src);
uint8_t *in_buf = sdcp->coh->sha_in_buf; uint8_t *out_buf = sdcp->coh->sha_out_buf;
- uint8_t *src_buf; - struct scatterlist *src;
- unsigned int i, len, clen; + unsigned int i, len, clen, oft = 0; int ret;
int fin = rctx->fini; if (fin) rctx->fini = 0;
- for_each_sg(req->src, src, nents, i) { - src_buf = sg_virt(src); - len = sg_dma_len(src); + src = req->src; + len = req->nbytes;
- do { - if (actx->fill + len > DCP_BUF_SZ) - clen = DCP_BUF_SZ - actx->fill; - else - clen = len; + while (len) { + if (actx->fill + len > DCP_BUF_SZ) + clen = DCP_BUF_SZ - actx->fill; + else + clen = len;
- memcpy(in_buf + actx->fill, src_buf, clen); - len -= clen; - src_buf += clen; - actx->fill += clen; + scatterwalk_map_and_copy(in_buf + actx->fill, src, oft, clen, + 0);
- /* - * If we filled the buffer and still have some - * more data, submit the buffer. - */ - if (len && actx->fill == DCP_BUF_SZ) { - ret = mxs_dcp_run_sha(req); - if (ret) - return ret; - actx->fill = 0; - rctx->init = 0; - } - } while (len); + len -= clen; + oft += clen; + actx->fill += clen; + + /* + * If we filled the buffer and still have some + * more data, submit the buffer. + */ + if (len && actx->fill == DCP_BUF_SZ) { + ret = mxs_dcp_run_sha(req); + if (ret) + return ret; + actx->fill = 0; + rctx->init = 0; + } }
if (fin) {
From: Jens Axboe axboe@kernel.dk
commit 4ed734b0d0913e566a9d871e15d24eb240f269f7 upstream.
With the previous fixes for number of files open checking, I added some debug code to see if we had other spots where we're checking rlimit() against the async io-wq workers. The only one I found was file size checking, which we should also honor.
During write and fallocate prep, store the max file size and override that for the current ask if we're in io-wq worker context.
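Concretely, the prep path samples the submitting task's limit with rlimit(RLIMIT_FSIZE), and the issue path temporarily installs it on the worker, since an io-wq worker thread otherwise runs with an unlimited file size. Roughly (this is the shape of the hunk below, not new behaviour):

	/* at prep time, in the submitting task's context */
	req->fsize = rlimit(RLIMIT_FSIZE);

	/* at issue time, possibly from an io-wq worker */
	if (!force_nonblock)
		current->signal->rlim[RLIMIT_FSIZE].rlim_cur = req->fsize;

	ret2 = call_write_iter(req->file, kiocb, &iter);

	if (!force_nonblock)
		current->signal->rlim[RLIMIT_FSIZE].rlim_cur = RLIM_INFINITY;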
Cc: stable@vger.kernel.org # 5.1+ Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/io_uring.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
--- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -432,6 +432,7 @@ struct io_kiocb { #define REQ_F_INFLIGHT 16384 /* on inflight list */ #define REQ_F_COMP_LOCKED 32768 /* completion under lock */ #define REQ_F_HARDLINK 65536 /* doesn't sever on completion < 0 */ + unsigned long fsize; u64 user_data; u32 result; u32 sequence; @@ -1899,6 +1900,8 @@ static int io_write_prep(struct io_kiocb if (unlikely(!(req->file->f_mode & FMODE_WRITE))) return -EBADF;
+ req->fsize = rlimit(RLIMIT_FSIZE); + if (!req->io) return 0;
@@ -1970,10 +1973,17 @@ static int io_write(struct io_kiocb *req } kiocb->ki_flags |= IOCB_WRITE;
+ if (!force_nonblock) + current->signal->rlim[RLIMIT_FSIZE].rlim_cur = req->fsize; + if (req->file->f_op->write_iter) ret2 = call_write_iter(req->file, kiocb, &iter); else ret2 = loop_rw_iter(WRITE, req->file, kiocb, &iter); + + if (!force_nonblock) + current->signal->rlim[RLIMIT_FSIZE].rlim_cur = RLIM_INFINITY; + /* * Raw bdev writes will -EOPNOTSUPP for IOCB_NOWAIT. Just * retry them without IOCB_NOWAIT.
From: Yangbo Lu yangbo.lu@nxp.com
commit 2aa3d826adb578b26629a79b775a552cfe3fedf7 upstream.
This patch fixes the operations done in esdhc_reset() for different controller versions, and adds bus-width restoring after data reset for eSDHC (vendor version <= 2.2).
Also add comments to aid understanding.
Signed-off-by: Yangbo Lu yangbo.lu@nxp.com Acked-by: Adrian Hunter adrian.hunter@intel.com Link: https://lore.kernel.org/r/20200108040713.38888-1-yangbo.lu@nxp.com Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mmc/host/sdhci-of-esdhc.c | 43 ++++++++++++++++++++++++++++++++++---- 1 file changed, 39 insertions(+), 4 deletions(-)
--- a/drivers/mmc/host/sdhci-of-esdhc.c +++ b/drivers/mmc/host/sdhci-of-esdhc.c @@ -758,23 +758,58 @@ static void esdhc_reset(struct sdhci_hos { struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); struct sdhci_esdhc *esdhc = sdhci_pltfm_priv(pltfm_host); - u32 val; + u32 val, bus_width = 0;
+ /* + * Add delay to make sure all the DMA transfers are finished + * for quirk. + */ if (esdhc->quirk_delay_before_data_reset && (mask & SDHCI_RESET_DATA) && (host->flags & SDHCI_REQ_USE_DMA)) mdelay(5);
+ /* + * Save bus-width for eSDHC whose vendor version is 2.2 + * or lower for data reset. + */ + if ((mask & SDHCI_RESET_DATA) && + (esdhc->vendor_ver <= VENDOR_V_22)) { + val = sdhci_readl(host, ESDHC_PROCTL); + bus_width = val & ESDHC_CTRL_BUSWIDTH_MASK; + } + sdhci_reset(host, mask);
- sdhci_writel(host, host->ier, SDHCI_INT_ENABLE); - sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE); + /* + * Restore bus-width setting and interrupt registers for eSDHC + * whose vendor version is 2.2 or lower for data reset. + */ + if ((mask & SDHCI_RESET_DATA) && + (esdhc->vendor_ver <= VENDOR_V_22)) { + val = sdhci_readl(host, ESDHC_PROCTL); + val &= ~ESDHC_CTRL_BUSWIDTH_MASK; + val |= bus_width; + sdhci_writel(host, val, ESDHC_PROCTL); + + sdhci_writel(host, host->ier, SDHCI_INT_ENABLE); + sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE); + }
- if (mask & SDHCI_RESET_ALL) { + /* + * Some bits have to be cleaned manually for eSDHC whose spec + * version is higher than 3.0 for all reset. + */ + if ((mask & SDHCI_RESET_ALL) && + (esdhc->spec_ver >= SDHCI_SPEC_300)) { val = sdhci_readl(host, ESDHC_TBCTL); val &= ~ESDHC_TB_EN; sdhci_writel(host, val, ESDHC_TBCTL);
+ /* + * Initialize eSDHC_DLLCFG1[DLL_PD_PULSE_STRETCH_SEL] to + * 0 for quirk. + */ if (esdhc->quirk_unreliable_pulse_detection) { val = sdhci_readl(host, ESDHC_DLLCFG1); val &= ~ESDHC_DLL_PD_PULSE_STRETCH_SEL;
From: Anssi Hannula anssi.hannula@bitwise.fi
commit 82f04bfe2aff428b063eefd234679b2d693228ed upstream.
Commit 0161a94e2d1c7 ("tools: gpio: Correctly add make dependencies for gpio_utils") added a make rule for gpio-utils-in.o but used $(output) instead of the correct $(OUTPUT) for the output directory, breaking out-of-tree build (O=xx) with the following error:
No rule to make target 'out/tools/gpio/gpio-utils-in.o', needed by 'out/tools/gpio/lsgpio-in.o'. Stop.
Fix that.
Fixes: 0161a94e2d1c ("tools: gpio: Correctly add make dependencies for gpio_utils") Cc: stable@vger.kernel.org Cc: Laura Abbott labbott@redhat.com Signed-off-by: Anssi Hannula anssi.hannula@bitwise.fi Link: https://lore.kernel.org/r/20200325103154.32235-1-anssi.hannula@bitwise.fi Reviewed-by: Bartosz Golaszewski bgolaszewski@baylibre.com Signed-off-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- tools/gpio/Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/tools/gpio/Makefile +++ b/tools/gpio/Makefile @@ -35,7 +35,7 @@ $(OUTPUT)include/linux/gpio.h: ../../inc
prepare: $(OUTPUT)include/linux/gpio.h
-GPIO_UTILS_IN := $(output)gpio-utils-in.o +GPIO_UTILS_IN := $(OUTPUT)gpio-utils-in.o $(GPIO_UTILS_IN): prepare FORCE $(Q)$(MAKE) $(build)=gpio-utils
From: Subash Abhinov Kasiviswanathan subashab@codeaurora.org
commit 2abb5792387eb188b12051337d5dcd2cba615cb0 upstream.
This allows the changelink operation to succeed if the mux_id was specified as an argument. Note that the mux_id must either match the existing mux_id of the rmnet device or be an unused mux_id.
Fixes: 1dc49e9d164c ("net: rmnet: do not allow to change mux id if mux id is duplicated") Reported-and-tested-by: Alex Elder elder@linaro.org Signed-off-by: Sean Tranchetti stranche@codeaurora.org Signed-off-by: Subash Abhinov Kasiviswanathan subashab@codeaurora.org Signed-off-by: David S. Miller davem@davemloft.net Cc: Guenter Roeck linux@roeck-us.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c | 31 ++++++++++++--------- 1 file changed, 19 insertions(+), 12 deletions(-)
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c +++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c @@ -279,7 +279,6 @@ static int rmnet_changelink(struct net_d { struct rmnet_priv *priv = netdev_priv(dev); struct net_device *real_dev; - struct rmnet_endpoint *ep; struct rmnet_port *port; u16 mux_id;
@@ -294,19 +293,27 @@ static int rmnet_changelink(struct net_d
if (data[IFLA_RMNET_MUX_ID]) { mux_id = nla_get_u16(data[IFLA_RMNET_MUX_ID]); - if (rmnet_get_endpoint(port, mux_id)) { - NL_SET_ERR_MSG_MOD(extack, "MUX ID already exists"); - return -EINVAL; - } - ep = rmnet_get_endpoint(port, priv->mux_id); - if (!ep) - return -ENODEV;
- hlist_del_init_rcu(&ep->hlnode); - hlist_add_head_rcu(&ep->hlnode, &port->muxed_ep[mux_id]); + if (mux_id != priv->mux_id) { + struct rmnet_endpoint *ep; + + ep = rmnet_get_endpoint(port, priv->mux_id); + if (!ep) + return -ENODEV; + + if (rmnet_get_endpoint(port, mux_id)) { + NL_SET_ERR_MSG_MOD(extack, + "MUX ID already exists"); + return -EINVAL; + } + + hlist_del_init_rcu(&ep->hlnode); + hlist_add_head_rcu(&ep->hlnode, + &port->muxed_ep[mux_id]);
- ep->mux_id = mux_id; - priv->mux_id = mux_id; + ep->mux_id = mux_id; + priv->mux_id = mux_id; + } }
if (data[IFLA_RMNET_FLAGS]) {
From: Maxime Ripard maxime@cerno.tech
commit 4c7eeb9af3e41ae7d840977119c58f3bbb3f4f59 upstream.
The commit 7aa9b9eb7d6a ("arm64: dts: allwinner: H6: Add PMU mode") introduced support for the PMU found on the Allwinner H6. However, the binding only allows for a single compatible, while the patch was adding two.
Make sure we follow the binding.
Fixes: 7aa9b9eb7d6a ("arm64: dts: allwinner: H6: Add PMU mode") Signed-off-by: Maxime Ripard maxime@cerno.tech Cc: Guenter Roeck linux@roeck-us.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
--- a/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi +++ b/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi @@ -71,8 +71,7 @@ };
pmu { - compatible = "arm,cortex-a53-pmu", - "arm,armv8-pmuv3"; + compatible = "arm,cortex-a53-pmu"; interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>, <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>, <GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>,
From: Scott Wood swood@redhat.com
commit 82e0516ce3a147365a5dd2a9bedd5ba43a18663d upstream.
A redundant "curr = rq->curr" was added; remove it.
Fixes: ebc0f83c78a2 ("timers/nohz: Update NOHZ load in remote tick") Signed-off-by: Scott Wood swood@redhat.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Ingo Molnar mingo@kernel.org Signed-off-by: Thomas Gleixner tglx@linutronix.de Link: https://lkml.kernel.org/r/1580776558-12882-1-git-send-email-swood@redhat.com Cc: Guenter Roeck linux@roeck-us.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/sched/core.c | 1 - 1 file changed, 1 deletion(-)
--- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -3677,7 +3677,6 @@ static void sched_tick_remote(struct wor if (cpu_is_offline(cpu)) goto out_unlock;
- curr = rq->curr; update_rq_clock(rq);
if (!is_idle_task(curr)) {
From: Maxime Ripard maxime@cerno.tech
commit 4ae7a3c3d7d31260f690d8d658f0365f3eca67a2 upstream.
The commit c35a516a4618 ("arm64: dts: allwinner: H5: Add PMU node") introduced support for the PMU found on the Allwinner H5. However, the binding only allows for a single compatible, while the patch was adding two.
Make sure we follow the binding.
Fixes: c35a516a4618 ("arm64: dts: allwinner: H5: Add PMU node") Signed-off-by: Maxime Ripard maxime@cerno.tech Cc: Guenter Roeck linux@roeck-us.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
--- a/arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi +++ b/arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi @@ -77,8 +77,7 @@ };
pmu { - compatible = "arm,cortex-a53-pmu", - "arm,armv8-pmuv3"; + compatible = "arm,cortex-a53-pmu"; interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>, <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>, <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>,
From: Jakub Kicinski kuba@kernel.org
commit 9b8b17541f13809d06f6f873325305ddbb760e3e upstream.
If a cgroup violates its memory.high constraints, we may end up unduly penalising it. For example, for the following hierarchy:
A:   max high, 20 usage
A/B: 9 high, 10 usage
A/C: max high, 10 usage
We would end up doing the following calculation below when calculating high delay for A/B:
A/B: 10 - 9 = 1...
A:   20 - PAGE_COUNTER_MAX = 21, so set max_overage to 21.
This gets worse with higher disparities in usage in the parent.
I have no idea how this disappeared from the final version of the patch, but it is certainly Not Good(tm). This wasn't obvious in testing because, for a simple cgroup hierarchy with only one child, the result is usually roughly the same. It's only in more complex hierarchies that things go really awry (although even then, the effects are limited to a maximum of 2 seconds in schedule_timeout_killable).
[chris@chrisdown.name: changelog] Fixes: e26733e0d0ec ("mm, memcg: throttle allocators based on ancestral memory.high") Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Chris Down chris@chrisdown.name Signed-off-by: Andrew Morton akpm@linux-foundation.org Acked-by: Michal Hocko mhocko@suse.com Cc: Johannes Weiner hannes@cmpxchg.org Cc: stable@vger.kernel.org [5.4.x] Link: http://lkml.kernel.org/r/20200331152424.GA1019937@chrisdown.name Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Cc: Guenter Roeck linux@roeck-us.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- mm/memcontrol.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -2324,6 +2324,9 @@ static unsigned long calculate_high_dela usage = page_counter_read(&memcg->memory); high = READ_ONCE(memcg->high);
+ if (usage <= high) + continue; + /* * Prevent division by 0 in overage calculation by acting as if * it was a threshold of 1 page
From: Mikulas Patocka mpatocka@redhat.com
commit 1edaa447d958bec24c6a79685a5790d98976fd16 upstream.
Initializing a dm-writecache device can take a long time when the persistent memory device is large. Add cond_resched() to a few loops to avoid warnings that the CPU is stuck.
Cc: stable@vger.kernel.org # v4.18+ Signed-off-by: Mikulas Patocka mpatocka@redhat.com Signed-off-by: Mike Snitzer snitzer@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/dm-writecache.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/drivers/md/dm-writecache.c +++ b/drivers/md/dm-writecache.c @@ -872,6 +872,7 @@ static int writecache_alloc_entries(stru struct wc_entry *e = &wc->entries[b]; e->index = b; e->write_in_progress = false; + cond_resched(); }
return 0; @@ -926,6 +927,7 @@ static void writecache_resume(struct dm_ e->original_sector = le64_to_cpu(wme.original_sector); e->seq_count = le64_to_cpu(wme.seq_count); } + cond_resched(); } #endif for (b = 0; b < wc->n_blocks; b++) { @@ -1770,8 +1772,10 @@ static int init_memory(struct dm_writeca pmem_assign(sb(wc)->n_blocks, cpu_to_le64(wc->n_blocks)); pmem_assign(sb(wc)->seq_count, cpu_to_le64(0));
- for (b = 0; b < wc->n_blocks; b++) + for (b = 0; b < wc->n_blocks; b++) { write_original_sector_seq_count(wc, &wc->entries[b], -1, -1); + cond_resched(); + }
writecache_flush_all_metadata(wc); writecache_commit_flushed(wc, false);
From: Mikulas Patocka mpatocka@redhat.com
commit b93b6643e9b5a7f260b931e97f56ffa3fa65e26d upstream.
If the user specifies a tag size larger than HASH_MAX_DIGESTSIZE, there's a crash in integrity_metadata().
Cc: stable@vger.kernel.org Signed-off-by: Mikulas Patocka mpatocka@redhat.com Signed-off-by: Mike Snitzer snitzer@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/dm-integrity.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/md/dm-integrity.c +++ b/drivers/md/dm-integrity.c @@ -1519,7 +1519,7 @@ static void integrity_metadata(struct wo struct bio *bio = dm_bio_from_per_bio_data(dio, sizeof(struct dm_integrity_io)); char *checksums; unsigned extra_space = unlikely(digest_size > ic->tag_size) ? digest_size - ic->tag_size : 0; - char checksums_onstack[HASH_MAX_DIGESTSIZE]; + char checksums_onstack[max((size_t)HASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)]; unsigned sectors_to_process = dio->range.n_sectors; sector_t sector = dio->range.logical_sector;
@@ -1748,7 +1748,7 @@ retry_kmap: } while (++s < ic->sectors_per_block); #ifdef INTERNAL_VERIFY if (ic->internal_hash) { - char checksums_onstack[max(HASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)]; + char checksums_onstack[max((size_t)HASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)];
integrity_sector_checksum(ic, logical_sector, mem + bv.bv_offset, checksums_onstack); if (unlikely(memcmp(checksums_onstack, journal_entry_tag(ic, je), ic->tag_size))) {
From: Shetty, Harshini X (EXT-Sony Mobile) Harshini.X.Shetty@sony.com
commit 75fa601934fda23d2f15bf44b09c2401942d8e15 upstream.
Fix the kmemleak below, detected in verity_fec_ctr: output_pool is allocated for each dm-verity-fec device, but it is not freed when the dm table for the verity target is removed. Hence free the output mempool in the destructor function verity_fec_dtr.
unreferenced object 0xffffffffa574d000 (size 4096): comm "init", pid 1667, jiffies 4294894890 (age 307.168s) hex dump (first 32 bytes): 8e 36 00 98 66 a8 0b 9b 00 00 00 00 00 00 00 00 .6..f........... 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ backtrace: [<0000000060e82407>] __kmalloc+0x2b4/0x340 [<00000000dd99488f>] mempool_kmalloc+0x18/0x20 [<000000002560172b>] mempool_init_node+0x98/0x118 [<000000006c3574d2>] mempool_init+0x14/0x20 [<0000000008cb266e>] verity_fec_ctr+0x388/0x3b0 [<000000000887261b>] verity_ctr+0x87c/0x8d0 [<000000002b1e1c62>] dm_table_add_target+0x174/0x348 [<000000002ad89eda>] table_load+0xe4/0x328 [<000000001f06f5e9>] dm_ctl_ioctl+0x3b4/0x5a0 [<00000000bee5fbb7>] do_vfs_ioctl+0x5dc/0x928 [<00000000b475b8f5>] __arm64_sys_ioctl+0x70/0x98 [<000000005361e2e8>] el0_svc_common+0xa0/0x158 [<000000001374818f>] el0_svc_handler+0x6c/0x88 [<000000003364e9f4>] el0_svc+0x8/0xc [<000000009d84cec9>] 0xffffffffffffffff
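The general rule the fix restores: every mempool_init*() done in the target's constructor needs a matching mempool_exit() in the destructor, otherwise the pool's pre-allocated elements leak when the table is removed. A hedged sketch of the pairing (the field and size names are illustrative, not the driver's exact ones):

	/* constructor */
	ret = mempool_init_kmalloc_pool(&f->output_pool, 1, elem_size);
	if (ret)
		return ret;

	/* destructor */
	mempool_exit(&f->output_pool);	/* also safe on a zeroed, never-initialized pool */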
Fixes: a739ff3f543af ("dm verity: add support for forward error correction") Depends-on: 6f1c819c219f7 ("dm: convert to bioset_init()/mempool_init()") Cc: stable@vger.kernel.org Signed-off-by: Harshini Shetty harshini.x.shetty@sony.com Signed-off-by: Mike Snitzer snitzer@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/dm-verity-fec.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/md/dm-verity-fec.c +++ b/drivers/md/dm-verity-fec.c @@ -551,6 +551,7 @@ void verity_fec_dtr(struct dm_verity *v) mempool_exit(&f->rs_pool); mempool_exit(&f->prealloc_pool); mempool_exit(&f->extra_pool); + mempool_exit(&f->output_pool); kmem_cache_destroy(f->cache);
if (f->data_bufio)
From: Bob Liu bob.liu@oracle.com
commit b8fdd090376a7a46d17db316638fe54b965c2fb0 upstream.
zmd->nr_rnd_zones was increased twice by mistake. The other place it is increased, in dmz_init_zone(), is the only one needed:
1131         zmd->nr_useable_zones++;
1132         if (dmz_is_rnd(zone)) {
1133                 zmd->nr_rnd_zones++;
                     ^^^

Fixes: 3b1a94c88b79 ("dm zoned: drive-managed zoned block device target") Cc: stable@vger.kernel.org Signed-off-by: Bob Liu bob.liu@oracle.com Reviewed-by: Damien Le Moal damien.lemoal@wdc.com Signed-off-by: Mike Snitzer snitzer@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/dm-zoned-metadata.c | 1 - 1 file changed, 1 deletion(-)
--- a/drivers/md/dm-zoned-metadata.c +++ b/drivers/md/dm-zoned-metadata.c @@ -1109,7 +1109,6 @@ static int dmz_init_zone(struct blk_zone switch (blkz->type) { case BLK_ZONE_TYPE_CONVENTIONAL: set_bit(DMZ_RND, &zone->flags); - zmd->nr_rnd_zones++; break; case BLK_ZONE_TYPE_SEQWRITE_REQ: case BLK_ZONE_TYPE_SEQWRITE_PREF:
From: Nikos Tsironis ntsironis@arrikto.com
commit 4b5142905d4ff58a4b93f7c8eaa7ba829c0a53c9 upstream.
There is a bug in the way dm-clone handles discards, which can lead to discarding the wrong blocks or trying to discard blocks beyond the end of the device.
This could lead to data corruption, if the destination device indeed discards the underlying blocks, i.e., if the discard operation results in the original contents of a block to be lost.
The root of the problem is the code that calculates the range of regions covered by a discard request and decides which regions to discard.
Since dm-clone handles the device in units of regions, we don't discard parts of a region, only whole regions.
The range is calculated as:
rs = dm_sector_div_up(bio->bi_iter.bi_sector, clone->region_size);
re = bio_end_sector(bio) >> clone->region_shift;
, where 'rs' is the first region to discard and (re - rs) is the number of regions to discard.
The bug manifests when we try to discard part of a single region, i.e., when we try to discard a block with size < region_size, and the discard request both starts at an offset with respect to the beginning of that region and ends before the end of the region.
The root cause is the following comparison:
if (rs == re) // skip discard and complete original bio immediately
, which doesn't take into account that 'rs' might be greater than 're'.
Thus, we then issue a discard request for the wrong blocks, instead of skipping the discard all together.
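As a concrete illustration, assuming an 8-sector (4K) region size: a discard of 4 sectors starting at sector 10 gives rs = dm_sector_div_up(10, 8) = 2 and re = 14 >> 3 = 1. Here 'rs == re' is false even though no whole region is covered, and 're - rs' underflows to a huge unsigned count.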
Fix the check to also take into account the above case, so we don't end up discarding the wrong blocks.
Also, add some range checks to dm_clone_set_region_hydrated() and dm_clone_cond_set_range(), which update dm-clone's region bitmap.
Note that the aforementioned bug doesn't cause invalid memory accesses, because dm_clone_is_range_hydrated() returns True for this case, so the checks are just precautionary.
Fixes: 7431b7835f55 ("dm: add clone target") Cc: stable@vger.kernel.org # v5.4+ Signed-off-by: Nikos Tsironis ntsironis@arrikto.com Signed-off-by: Mike Snitzer snitzer@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/dm-clone-metadata.c | 13 ++++++++++++ drivers/md/dm-clone-target.c | 43 +++++++++++++++++++++++++++-------------- 2 files changed, 42 insertions(+), 14 deletions(-)
--- a/drivers/md/dm-clone-metadata.c +++ b/drivers/md/dm-clone-metadata.c @@ -850,6 +850,12 @@ int dm_clone_set_region_hydrated(struct struct dirty_map *dmap; unsigned long word, flags;
+ if (unlikely(region_nr >= cmd->nr_regions)) { + DMERR("Region %lu out of range (total number of regions %lu)", + region_nr, cmd->nr_regions); + return -ERANGE; + } + word = region_nr / BITS_PER_LONG;
spin_lock_irqsave(&cmd->bitmap_lock, flags); @@ -879,6 +885,13 @@ int dm_clone_cond_set_range(struct dm_cl struct dirty_map *dmap; unsigned long word, region_nr;
+ if (unlikely(start >= cmd->nr_regions || (start + nr_regions) < start || + (start + nr_regions) > cmd->nr_regions)) { + DMERR("Invalid region range: start %lu, nr_regions %lu (total number of regions %lu)", + start, nr_regions, cmd->nr_regions); + return -ERANGE; + } + spin_lock_irq(&cmd->bitmap_lock);
if (cmd->read_only) { --- a/drivers/md/dm-clone-target.c +++ b/drivers/md/dm-clone-target.c @@ -293,10 +293,17 @@ static inline unsigned long bio_to_regio
/* Get the region range covered by the bio */ static void bio_region_range(struct clone *clone, struct bio *bio, - unsigned long *rs, unsigned long *re) + unsigned long *rs, unsigned long *nr_regions) { + unsigned long end; + *rs = dm_sector_div_up(bio->bi_iter.bi_sector, clone->region_size); - *re = bio_end_sector(bio) >> clone->region_shift; + end = bio_end_sector(bio) >> clone->region_shift; + + if (*rs >= end) + *nr_regions = 0; + else + *nr_regions = end - *rs; }
/* Check whether a bio overwrites a region */ @@ -454,7 +461,7 @@ static void trim_bio(struct bio *bio, se
static void complete_discard_bio(struct clone *clone, struct bio *bio, bool success) { - unsigned long rs, re; + unsigned long rs, nr_regions;
/* * If the destination device supports discards, remap and trim the @@ -463,9 +470,9 @@ static void complete_discard_bio(struct */ if (test_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags) && success) { remap_to_dest(clone, bio); - bio_region_range(clone, bio, &rs, &re); + bio_region_range(clone, bio, &rs, &nr_regions); trim_bio(bio, rs << clone->region_shift, - (re - rs) << clone->region_shift); + nr_regions << clone->region_shift); generic_make_request(bio); } else bio_endio(bio); @@ -473,12 +480,21 @@ static void complete_discard_bio(struct
static void process_discard_bio(struct clone *clone, struct bio *bio) { - unsigned long rs, re; + unsigned long rs, nr_regions;
- bio_region_range(clone, bio, &rs, &re); - BUG_ON(re > clone->nr_regions); + bio_region_range(clone, bio, &rs, &nr_regions); + if (!nr_regions) { + bio_endio(bio); + return; + }
- if (unlikely(rs == re)) { + if (WARN_ON(rs >= clone->nr_regions || (rs + nr_regions) < rs || + (rs + nr_regions) > clone->nr_regions)) { + DMERR("%s: Invalid range (%lu + %lu, total regions %lu) for discard (%llu + %u)", + clone_device_name(clone), rs, nr_regions, + clone->nr_regions, + (unsigned long long)bio->bi_iter.bi_sector, + bio_sectors(bio)); bio_endio(bio); return; } @@ -487,7 +503,7 @@ static void process_discard_bio(struct c * The covered regions are already hydrated so we just need to pass * down the discard. */ - if (dm_clone_is_range_hydrated(clone->cmd, rs, re - rs)) { + if (dm_clone_is_range_hydrated(clone->cmd, rs, nr_regions)) { complete_discard_bio(clone, bio, true); return; } @@ -1169,7 +1185,7 @@ static void process_deferred_discards(st int r = -EPERM; struct bio *bio; struct blk_plug plug; - unsigned long rs, re; + unsigned long rs, nr_regions; struct bio_list discards = BIO_EMPTY_LIST;
spin_lock_irq(&clone->lock); @@ -1185,14 +1201,13 @@ static void process_deferred_discards(st
/* Update the metadata */ bio_list_for_each(bio, &discards) { - bio_region_range(clone, bio, &rs, &re); + bio_region_range(clone, bio, &rs, &nr_regions); /* * A discard request might cover regions that have been already * hydrated. There is no need to update the metadata for these * regions. */ - r = dm_clone_cond_set_range(clone->cmd, rs, re - rs); - + r = dm_clone_cond_set_range(clone->cmd, rs, nr_regions); if (unlikely(r)) break; }
From: Nikos Tsironis ntsironis@arrikto.com
commit cd481c12269b4d276f1a52eda0ebd419079bfe3a upstream.
Add overflow check for clone->nr_regions variable, which holds the number of regions of the target.
The overflow can occur with sufficiently large devices, if BITS_PER_LONG == 32. E.g., if the region size is 8 sectors (4K), the overflow would occur for device sizes > 34359738360 sectors (~16TB).
This could result in multiple device sectors wrongly mapping to the same region number, due to the truncation from 64 bits to 32 bits, which would lead to data corruption.
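A small userspace model of the truncation (uint32_t stands in for a 32-bit unsigned long and the device size matches the ~16TB example above; this only illustrates the arithmetic, it is not kernel code):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t dev_sectors = 34359738368ULL;		/* just past the limit above */
		uint64_t nr_regions  = dev_sectors / 8;		/* 4294967296 regions */
		uint32_t truncated   = (uint32_t)nr_regions;	/* what a 32-bit field keeps */

		/* prints: nr_regions=4294967296 truncated=0 */
		printf("nr_regions=%llu truncated=%u\n",
		       (unsigned long long)nr_regions, truncated);
		return 0;
	}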
Fixes: 7431b7835f55 ("dm: add clone target") Cc: stable@vger.kernel.org # v5.4+ Signed-off-by: Nikos Tsironis ntsironis@arrikto.com Signed-off-by: Mike Snitzer snitzer@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/dm-clone-target.c | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-)
--- a/drivers/md/dm-clone-target.c +++ b/drivers/md/dm-clone-target.c @@ -1790,6 +1790,7 @@ error: static int clone_ctr(struct dm_target *ti, unsigned int argc, char **argv) { int r; + sector_t nr_regions; struct clone *clone; struct dm_arg_set as;
@@ -1831,7 +1832,16 @@ static int clone_ctr(struct dm_target *t goto out_with_source_dev;
clone->region_shift = __ffs(clone->region_size); - clone->nr_regions = dm_sector_div_up(ti->len, clone->region_size); + nr_regions = dm_sector_div_up(ti->len, clone->region_size); + + /* Check for overflow */ + if (nr_regions != (unsigned long)nr_regions) { + ti->error = "Too many regions. Consider increasing the region size"; + r = -EOVERFLOW; + goto out_with_source_dev; + } + + clone->nr_regions = nr_regions;
r = validate_nr_regions(clone->nr_regions, &ti->error); if (r)
From: Nikos Tsironis ntsironis@arrikto.com
commit 9fc06ff56845cc5ccafec52f545fc2e08d22f849 upstream.
Add missing casts when converting from regions to sectors.
In case BITS_PER_LONG == 32, the lack of the appropriate casts can lead to overflows and miscalculation of the device sector.
As a result, we could end up discarding and/or copying the wrong parts of the device, thus corrupting the device's data.
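The effect of the missing cast can likewise be modelled in userspace (uint32_t again stands in for a 32-bit unsigned long; a region_shift of 3 corresponds to 8-sector regions; illustrative only):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t region_nr = 1u << 29;	/* a region number on a large device */
		unsigned int shift = 3;

		uint32_t wrong = region_nr << shift;		/* 32-bit shift wraps to 0 */
		uint64_t right = (uint64_t)region_nr << shift;	/* 4294967296 sectors */

		/* prints: wrong=0 right=4294967296 */
		printf("wrong=%u right=%llu\n", wrong, (unsigned long long)right);
		return 0;
	}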
Fixes: 7431b7835f55 ("dm: add clone target") Cc: stable@vger.kernel.org # v5.4+ Signed-off-by: Nikos Tsironis ntsironis@arrikto.com Signed-off-by: Mike Snitzer snitzer@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/dm-clone-target.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-)
--- a/drivers/md/dm-clone-target.c +++ b/drivers/md/dm-clone-target.c @@ -282,7 +282,7 @@ static bool bio_triggers_commit(struct c /* Get the address of the region in sectors */ static inline sector_t region_to_sector(struct clone *clone, unsigned long region_nr) { - return (region_nr << clone->region_shift); + return ((sector_t)region_nr << clone->region_shift); }
/* Get the region number of the bio */ @@ -471,7 +471,7 @@ static void complete_discard_bio(struct if (test_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags) && success) { remap_to_dest(clone, bio); bio_region_range(clone, bio, &rs, &nr_regions); - trim_bio(bio, rs << clone->region_shift, + trim_bio(bio, region_to_sector(clone, rs), nr_regions << clone->region_shift); generic_make_request(bio); } else @@ -804,11 +804,14 @@ static void hydration_copy(struct dm_clo struct dm_io_region from, to; struct clone *clone = hd->clone;
+ if (WARN_ON(!nr_regions)) + return; + region_size = clone->region_size; region_start = hd->region_nr; region_end = region_start + nr_regions - 1;
- total_size = (nr_regions - 1) << clone->region_shift; + total_size = region_to_sector(clone, nr_regions - 1);
if (region_end == clone->nr_regions - 1) { /*
From: Nikos Tsironis ntsironis@arrikto.com
commit 81d5553d1288c2ec0390f02f84d71ca0f0f9f137 upstream.
dm_clone_nr_of_hydrated_regions() returns the number of regions that have been hydrated so far. In order to do so it employs bitmap_weight().
Until now, the return type of dm_clone_nr_of_hydrated_regions() was unsigned long.
Because bitmap_weight() returns an int, in case BITS_PER_LONG == 64 and the return value of bitmap_weight() is 2^31 (the maximum allowed number of regions for a device), the result is sign extended from 32 bits to 64 bits and an incorrect value is displayed, in the status output of dm-clone, as the number of hydrated regions.
Fix this by having dm_clone_nr_of_hydrated_regions() return an unsigned int.
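The sign extension itself can be seen with a few lines of userspace C (assuming a 64-bit unsigned long, matching the BITS_PER_LONG == 64 case above; the int value models what bitmap_weight() would hand back for a count of 2^31):

	#include <stdio.h>

	int main(void)
	{
		int weight = (int)0x80000000u;		/* bit pattern of 2^31 in an int */
		unsigned long as_ulong = weight;	/* sign-extended: 0xffffffff80000000 */
		unsigned int  as_uint  = weight;	/* 0x80000000, i.e. 2147483648 */

		printf("as unsigned long: %#lx\nas unsigned int:  %u\n", as_ulong, as_uint);
		return 0;
	}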
Fixes: 7431b7835f55 ("dm: add clone target") Cc: stable@vger.kernel.org # v5.4+ Signed-off-by: Nikos Tsironis ntsironis@arrikto.com Signed-off-by: Mike Snitzer snitzer@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/dm-clone-metadata.c | 2 +- drivers/md/dm-clone-metadata.h | 2 +- drivers/md/dm-clone-target.c | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-)
--- a/drivers/md/dm-clone-metadata.c +++ b/drivers/md/dm-clone-metadata.c @@ -656,7 +656,7 @@ bool dm_clone_is_range_hydrated(struct d return (bit >= (start + nr_regions)); }
-unsigned long dm_clone_nr_of_hydrated_regions(struct dm_clone_metadata *cmd) +unsigned int dm_clone_nr_of_hydrated_regions(struct dm_clone_metadata *cmd) { return bitmap_weight(cmd->region_map, cmd->nr_regions); } --- a/drivers/md/dm-clone-metadata.h +++ b/drivers/md/dm-clone-metadata.h @@ -156,7 +156,7 @@ bool dm_clone_is_range_hydrated(struct d /* * Returns the number of hydrated regions. */ -unsigned long dm_clone_nr_of_hydrated_regions(struct dm_clone_metadata *cmd); +unsigned int dm_clone_nr_of_hydrated_regions(struct dm_clone_metadata *cmd);
/* * Returns the first unhydrated region with region_nr >= @start --- a/drivers/md/dm-clone-target.c +++ b/drivers/md/dm-clone-target.c @@ -1473,7 +1473,7 @@ static void clone_status(struct dm_targe goto error; }
- DMEMIT("%u %llu/%llu %llu %lu/%lu %u ", + DMEMIT("%u %llu/%llu %llu %u/%lu %u ", DM_CLONE_METADATA_BLOCK_SIZE, (unsigned long long)(nr_metadata_blocks - nr_free_metadata_blocks), (unsigned long long)nr_metadata_blocks,
From: Matthew Wilcox (Oracle) willy@infradead.org
commit c36d451ad386b34f452fc3c8621ff14b9eaa31a6 upstream.
Inspired by the recent Coverity report, I looked for other places where the offset wasn't being converted to an unsigned long before being shifted, and I found one in xas_pause() when the entry being paused is of order >32.
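The underlying mistake is the usual one of a shift evaluated in 32-bit arithmetic before being stored in a wider variable; a tiny userspace illustration on an LP64 system (the values are made up and do not correspond to the exact xarray expression):

	#include <stdio.h>

	int main(void)
	{
		unsigned int offset = 1u << 20;

		unsigned long wrong = offset << 16;			/* computed in 32 bits: 0 */
		unsigned long right = (unsigned long)offset << 16;	/* 68719476736 */

		printf("wrong=%lu right=%lu\n", wrong, right);
		return 0;
	}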
Fixes: b803b42823d0 ("xarray: Add XArray iterators") Signed-off-by: Matthew Wilcox (Oracle) willy@infradead.org Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- lib/test_xarray.c | 37 +++++++++++++++++++++++++++++++++++++ lib/xarray.c | 2 +- 2 files changed, 38 insertions(+), 1 deletion(-)
--- a/lib/test_xarray.c +++ b/lib/test_xarray.c @@ -1156,6 +1156,42 @@ static noinline void check_find_entry(st XA_BUG_ON(xa, !xa_empty(xa)); }
+static noinline void check_pause(struct xarray *xa) +{ + XA_STATE(xas, xa, 0); + void *entry; + unsigned int order; + unsigned long index = 1; + unsigned int count = 0; + + for (order = 0; order < order_limit; order++) { + XA_BUG_ON(xa, xa_store_order(xa, index, order, + xa_mk_index(index), GFP_KERNEL)); + index += 1UL << order; + } + + rcu_read_lock(); + xas_for_each(&xas, entry, ULONG_MAX) { + XA_BUG_ON(xa, entry != xa_mk_index(1UL << count)); + count++; + } + rcu_read_unlock(); + XA_BUG_ON(xa, count != order_limit); + + count = 0; + xas_set(&xas, 0); + rcu_read_lock(); + xas_for_each(&xas, entry, ULONG_MAX) { + XA_BUG_ON(xa, entry != xa_mk_index(1UL << count)); + count++; + xas_pause(&xas); + } + rcu_read_unlock(); + XA_BUG_ON(xa, count != order_limit); + + xa_destroy(xa); +} + static noinline void check_move_tiny(struct xarray *xa) { XA_STATE(xas, xa, 0); @@ -1664,6 +1700,7 @@ static int xarray_checks(void) check_xa_alloc(); check_find(&array); check_find_entry(&array); + check_pause(&array); check_account(&array); check_destroy(&array); check_move(&array); --- a/lib/xarray.c +++ b/lib/xarray.c @@ -970,7 +970,7 @@ void xas_pause(struct xa_state *xas)
xas->xa_node = XAS_RESTART; if (node) { - unsigned int offset = xas->xa_offset; + unsigned long offset = xas->xa_offset; while (++offset < XA_CHUNK_SIZE) { if (!xa_is_sibling(xa_entry(xas->xa, node, offset))) break;
From: Matthew Wilcox (Oracle) willy@infradead.org
commit 7e934cf5ace1dceeb804f7493fa28bb697ed3c52 upstream.
xas_for_each_marked() is using entry == NULL as a termination condition of the iteration. When xas_for_each_marked() is used protected only by RCU, this can however race with xas_store(xas, NULL) in the following way:
TASK1                                   TASK2
page_cache_delete()                     find_get_pages_range_tag()
                                          xas_for_each_marked()
                                            xas_find_marked()
                                              off = xas_find_chunk()
  xas_store(&xas, NULL)
    xas_init_marks(&xas);
    ...
    rcu_assign_pointer(*slot, NULL);
                                              entry = xa_entry(off);
And thus xas_for_each_marked() terminates prematurely possibly leading to missed entries in the iteration (translating to missing writeback of some pages or a similar problem).
If we find a NULL entry that has been marked, skip it (unless we're trying to allocate an entry).
Reported-by: Jan Kara jack@suse.cz CC: stable@vger.kernel.org Fixes: ef8e5717db01 ("page cache: Convert delete_batch to XArray") Signed-off-by: Matthew Wilcox (Oracle) willy@infradead.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- include/linux/xarray.h | 6 + lib/xarray.c | 2 tools/testing/radix-tree/Makefile | 4 - tools/testing/radix-tree/iteration_check_2.c | 87 +++++++++++++++++++++++++++ tools/testing/radix-tree/main.c | 1 tools/testing/radix-tree/test.h | 1 6 files changed, 98 insertions(+), 3 deletions(-)
--- a/include/linux/xarray.h +++ b/include/linux/xarray.h @@ -1648,6 +1648,7 @@ static inline void *xas_next_marked(stru xa_mark_t mark) { struct xa_node *node = xas->xa_node; + void *entry; unsigned int offset;
if (unlikely(xas_not_node(node) || node->shift)) @@ -1659,7 +1660,10 @@ static inline void *xas_next_marked(stru return NULL; if (offset == XA_CHUNK_SIZE) return xas_find_marked(xas, max, mark); - return xa_entry(xas->xa, node, offset); + entry = xa_entry(xas->xa, node, offset); + if (!entry) + return xas_find_marked(xas, max, mark); + return entry; }
/* --- a/lib/xarray.c +++ b/lib/xarray.c @@ -1208,6 +1208,8 @@ void *xas_find_marked(struct xa_state *x }
entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset); + if (!entry && !(xa_track_free(xas->xa) && mark == XA_FREE_MARK)) + continue; if (!xa_is_node(entry)) return entry; xas->xa_node = xa_to_node(entry); --- a/tools/testing/radix-tree/Makefile +++ b/tools/testing/radix-tree/Makefile @@ -7,8 +7,8 @@ LDLIBS+= -lpthread -lurcu TARGETS = main idr-test multiorder xarray CORE_OFILES := xarray.o radix-tree.o idr.o linux.o test.o find_bit.o bitmap.o OFILES = main.o $(CORE_OFILES) regression1.o regression2.o regression3.o \ - regression4.o \ - tag_check.o multiorder.o idr-test.o iteration_check.o benchmark.o + regression4.o tag_check.o multiorder.o idr-test.o iteration_check.o \ + iteration_check_2.o benchmark.o
ifndef SHIFT SHIFT=3 --- /dev/null +++ b/tools/testing/radix-tree/iteration_check_2.c @@ -0,0 +1,87 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * iteration_check_2.c: Check that deleting a tagged entry doesn't cause + * an RCU walker to finish early. + * Copyright (c) 2020 Oracle + * Author: Matthew Wilcox willy@infradead.org + */ +#include <pthread.h> +#include "test.h" + +static volatile bool test_complete; + +static void *iterator(void *arg) +{ + XA_STATE(xas, arg, 0); + void *entry; + + rcu_register_thread(); + + while (!test_complete) { + xas_set(&xas, 0); + rcu_read_lock(); + xas_for_each_marked(&xas, entry, ULONG_MAX, XA_MARK_0) + ; + rcu_read_unlock(); + assert(xas.xa_index >= 100); + } + + rcu_unregister_thread(); + return NULL; +} + +static void *throbber(void *arg) +{ + struct xarray *xa = arg; + + rcu_register_thread(); + + while (!test_complete) { + int i; + + for (i = 0; i < 100; i++) { + xa_store(xa, i, xa_mk_value(i), GFP_KERNEL); + xa_set_mark(xa, i, XA_MARK_0); + } + for (i = 0; i < 100; i++) + xa_erase(xa, i); + } + + rcu_unregister_thread(); + return NULL; +} + +void iteration_test2(unsigned test_duration) +{ + pthread_t threads[2]; + DEFINE_XARRAY(array); + int i; + + printv(1, "Running iteration test 2 for %d seconds\n", test_duration); + + test_complete = false; + + xa_store(&array, 100, xa_mk_value(100), GFP_KERNEL); + xa_set_mark(&array, 100, XA_MARK_0); + + if (pthread_create(&threads[0], NULL, iterator, &array)) { + perror("create iterator thread"); + exit(1); + } + if (pthread_create(&threads[1], NULL, throbber, &array)) { + perror("create throbber thread"); + exit(1); + } + + sleep(test_duration); + test_complete = true; + + for (i = 0; i < 2; i++) { + if (pthread_join(threads[i], NULL)) { + perror("pthread_join"); + exit(1); + } + } + + xa_destroy(&array); +} --- a/tools/testing/radix-tree/main.c +++ b/tools/testing/radix-tree/main.c @@ -311,6 +311,7 @@ int main(int argc, char **argv) regression4_test(); iteration_test(0, 10 + 90 * long_run); iteration_test(7, 10 + 90 * long_run); + iteration_test2(10 + 90 * long_run); single_thread_tests(long_run);
/* Free any remaining preallocated nodes */ --- a/tools/testing/radix-tree/test.h +++ b/tools/testing/radix-tree/test.h @@ -34,6 +34,7 @@ void xarray_tests(void); void tag_check(void); void multiorder_checks(void); void iteration_test(unsigned order, unsigned duration); +void iteration_test2(unsigned duration); void benchmark(void); void idr_checks(void); void ida_tests(void);
From: Horia Geantă horia.geanta@nxp.com
commit 3a5a9e1ef37b030b836d92df8264f840988f4a38 upstream.
HW generates a Data Size error for chacha20 requests that are not a multiple of 64B, since algorithm state (AS) does not have the FINAL bit set.
Since updating req->iv (for chaining) is not required, modify skcipher descriptors to set the FINAL bit for chacha20.
[Note that for skcipher decryption we know that ctx1_iv_off is 0, which allows for an optimization by not checking algorithm type, since append_dec_op1() sets FINAL bit for all algorithms except AES.]
Also drop the descriptor operations that save the IV. However, in order to keep code logic simple, things like S/G tables generation etc. are not touched.
Cc: stable@vger.kernel.org # v5.3+ Fixes: 334d37c9e263 ("crypto: caam - update IV using HW support") Signed-off-by: Horia Geantă horia.geanta@nxp.com Tested-by: Valentin Ciocoi Radulescu valentin.ciocoi@nxp.com Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/crypto/caam/caamalg_desc.c | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-)
--- a/drivers/crypto/caam/caamalg_desc.c +++ b/drivers/crypto/caam/caamalg_desc.c @@ -1379,6 +1379,9 @@ void cnstr_shdsc_skcipher_encap(u32 * co const u32 ctx1_iv_off) { u32 *key_jump_cmd; + u32 options = cdata->algtype | OP_ALG_AS_INIT | OP_ALG_ENCRYPT; + bool is_chacha20 = ((cdata->algtype & OP_ALG_ALGSEL_MASK) == + OP_ALG_ALGSEL_CHACHA20);
init_sh_desc(desc, HDR_SHARE_SERIAL | HDR_SAVECTX); /* Skip if already shared */ @@ -1417,14 +1420,15 @@ void cnstr_shdsc_skcipher_encap(u32 * co LDST_OFFSET_SHIFT));
/* Load operation */ - append_operation(desc, cdata->algtype | OP_ALG_AS_INIT | - OP_ALG_ENCRYPT); + if (is_chacha20) + options |= OP_ALG_AS_FINALIZE; + append_operation(desc, options);
/* Perform operation */ skcipher_append_src_dst(desc);
/* Store IV */ - if (ivsize) + if (!is_chacha20 && ivsize) append_seq_store(desc, ivsize, LDST_SRCDST_BYTE_CONTEXT | LDST_CLASS_1_CCB | (ctx1_iv_off << LDST_OFFSET_SHIFT)); @@ -1451,6 +1455,8 @@ void cnstr_shdsc_skcipher_decap(u32 * co const u32 ctx1_iv_off) { u32 *key_jump_cmd; + bool is_chacha20 = ((cdata->algtype & OP_ALG_ALGSEL_MASK) == + OP_ALG_ALGSEL_CHACHA20);
init_sh_desc(desc, HDR_SHARE_SERIAL | HDR_SAVECTX); /* Skip if already shared */ @@ -1499,7 +1505,7 @@ void cnstr_shdsc_skcipher_decap(u32 * co skcipher_append_src_dst(desc);
/* Store IV */ - if (ivsize) + if (!is_chacha20 && ivsize) append_seq_store(desc, ivsize, LDST_SRCDST_BYTE_CONTEXT | LDST_CLASS_1_CCB | (ctx1_iv_off << LDST_OFFSET_SHIFT));
From: Andrei Botila andrei.botila@nxp.com
commit 3f142b6a7b573bde6cff926f246da05652c61eb4 upstream.
Since in the software implementation of XTS-AES there is no notion of a sector, every input length is processed the same way. The CAAM implementation has the notion of a sector, which causes different results between the software implementation and the one in CAAM for input lengths bigger than 512 bytes. Increase the sector size to the maximum value on 16 bits.
Fixes: c6415a6016bf ("crypto: caam - add support for acipher xts(aes)") Cc: stable@vger.kernel.org # v4.12+ Signed-off-by: Andrei Botila andrei.botila@nxp.com Reviewed-by: Horia Geantă horia.geanta@nxp.com Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/crypto/caam/caamalg_desc.c | 16 ++++++++++++++-- 1 file changed, 14 insertions(+), 2 deletions(-)
--- a/drivers/crypto/caam/caamalg_desc.c +++ b/drivers/crypto/caam/caamalg_desc.c @@ -1524,7 +1524,13 @@ EXPORT_SYMBOL(cnstr_shdsc_skcipher_decap */ void cnstr_shdsc_xts_skcipher_encap(u32 * const desc, struct alginfo *cdata) { - __be64 sector_size = cpu_to_be64(512); + /* + * Set sector size to a big value, practically disabling + * sector size segmentation in xts implementation. We cannot + * take full advantage of this HW feature with existing + * crypto API / dm-crypt SW architecture. + */ + __be64 sector_size = cpu_to_be64(BIT(15)); u32 *key_jump_cmd;
init_sh_desc(desc, HDR_SHARE_SERIAL | HDR_SAVECTX); @@ -1577,7 +1583,13 @@ EXPORT_SYMBOL(cnstr_shdsc_xts_skcipher_e */ void cnstr_shdsc_xts_skcipher_decap(u32 * const desc, struct alginfo *cdata) { - __be64 sector_size = cpu_to_be64(512); + /* + * Set sector size to a big value, practically disabling + * sector size segmentation in xts implementation. We cannot + * take full advantage of this HW feature with existing + * crypto API / dm-crypt SW architecture. + */ + __be64 sector_size = cpu_to_be64(BIT(15)); u32 *key_jump_cmd;
init_sh_desc(desc, HDR_SHARE_SERIAL | HDR_SAVECTX);
From: Gilad Ben-Yossef gilad@benyossef.com
commit ce0fc6db38decf0d2919bfe783de6d6b76e421a9 upstream.
Deal gracefully with a NULL or empty scatterlist which can happen if both cryptlen and assoclen are zero and we're doing in-place AEAD encryption.
This fixes a crash when this causes us to try and map a NULL page, at least with some platforms / DMA mapping configs.
Cc: stable@vger.kernel.org # v4.19+ Reported-by: Geert Uytterhoeven geert+renesas@glider.be Tested-by: Geert Uytterhoeven geert+renesas@glider.be Signed-off-by: Gilad Ben-Yossef gilad@benyossef.com Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/crypto/ccree/cc_buffer_mgr.c | 62 +++++++++++++++-------------------- drivers/crypto/ccree/cc_buffer_mgr.h | 1 2 files changed, 28 insertions(+), 35 deletions(-)
--- a/drivers/crypto/ccree/cc_buffer_mgr.c +++ b/drivers/crypto/ccree/cc_buffer_mgr.c @@ -87,6 +87,8 @@ static unsigned int cc_get_sgl_nents(str { unsigned int nents = 0;
+ *lbytes = 0; + while (nbytes && sg_list) { nents++; /* get the number of bytes in the last entry */ @@ -95,6 +97,7 @@ static unsigned int cc_get_sgl_nents(str nbytes : sg_list->length; sg_list = sg_next(sg_list); } + dev_dbg(dev, "nents %d last bytes %d\n", nents, *lbytes); return nents; } @@ -290,37 +293,25 @@ static int cc_map_sg(struct device *dev, unsigned int nbytes, int direction, u32 *nents, u32 max_sg_nents, u32 *lbytes, u32 *mapped_nents) { - if (sg_is_last(sg)) { - /* One entry only case -set to DLLI */ - if (dma_map_sg(dev, sg, 1, direction) != 1) { - dev_err(dev, "dma_map_sg() single buffer failed\n"); - return -ENOMEM; - } - dev_dbg(dev, "Mapped sg: dma_address=%pad page=%p addr=%pK offset=%u length=%u\n", - &sg_dma_address(sg), sg_page(sg), sg_virt(sg), - sg->offset, sg->length); - *lbytes = nbytes; - *nents = 1; - *mapped_nents = 1; - } else { /*sg_is_last*/ - *nents = cc_get_sgl_nents(dev, sg, nbytes, lbytes); - if (*nents > max_sg_nents) { - *nents = 0; - dev_err(dev, "Too many fragments. current %d max %d\n", - *nents, max_sg_nents); - return -ENOMEM; - } - /* In case of mmu the number of mapped nents might - * be changed from the original sgl nents - */ - *mapped_nents = dma_map_sg(dev, sg, *nents, direction); - if (*mapped_nents == 0) { - *nents = 0; - dev_err(dev, "dma_map_sg() sg buffer failed\n"); - return -ENOMEM; - } + int ret = 0; + + *nents = cc_get_sgl_nents(dev, sg, nbytes, lbytes); + if (*nents > max_sg_nents) { + *nents = 0; + dev_err(dev, "Too many fragments. current %d max %d\n", + *nents, max_sg_nents); + return -ENOMEM; }
+ ret = dma_map_sg(dev, sg, *nents, direction); + if (dma_mapping_error(dev, ret)) { + *nents = 0; + dev_err(dev, "dma_map_sg() sg buffer failed %d\n", ret); + return -ENOMEM; + } + + *mapped_nents = ret; + return 0; }
@@ -555,11 +546,12 @@ void cc_unmap_aead_request(struct device sg_virt(req->src), areq_ctx->src.nents, areq_ctx->assoc.nents, areq_ctx->assoclen, req->cryptlen);
- dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_BIDIRECTIONAL); + dma_unmap_sg(dev, req->src, areq_ctx->src.mapped_nents, + DMA_BIDIRECTIONAL); if (req->src != req->dst) { dev_dbg(dev, "Unmapping dst sgl: req->dst=%pK\n", sg_virt(req->dst)); - dma_unmap_sg(dev, req->dst, sg_nents(req->dst), + dma_unmap_sg(dev, req->dst, areq_ctx->dst.mapped_nents, DMA_BIDIRECTIONAL); } if (drvdata->coherent && @@ -881,7 +873,7 @@ static int cc_aead_chain_data(struct cc_ &src_last_bytes); sg_index = areq_ctx->src_sgl->length; //check where the data starts - while (sg_index <= size_to_skip) { + while (src_mapped_nents && (sg_index <= size_to_skip)) { src_mapped_nents--; offset -= areq_ctx->src_sgl->length; sgl = sg_next(areq_ctx->src_sgl); @@ -908,7 +900,7 @@ static int cc_aead_chain_data(struct cc_ size_for_map += crypto_aead_ivsize(tfm);
rc = cc_map_sg(dev, req->dst, size_for_map, DMA_BIDIRECTIONAL, - &areq_ctx->dst.nents, + &areq_ctx->dst.mapped_nents, LLI_MAX_NUM_OF_DATA_ENTRIES, &dst_last_bytes, &dst_mapped_nents); if (rc) @@ -921,7 +913,7 @@ static int cc_aead_chain_data(struct cc_ offset = size_to_skip;
//check where the data starts - while (sg_index <= size_to_skip) { + while (dst_mapped_nents && sg_index <= size_to_skip) { dst_mapped_nents--; offset -= areq_ctx->dst_sgl->length; sgl = sg_next(areq_ctx->dst_sgl); @@ -1123,7 +1115,7 @@ int cc_map_aead_request(struct cc_drvdat if (is_gcm4543) size_to_map += crypto_aead_ivsize(tfm); rc = cc_map_sg(dev, req->src, size_to_map, DMA_BIDIRECTIONAL, - &areq_ctx->src.nents, + &areq_ctx->src.mapped_nents, (LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES + LLI_MAX_NUM_OF_DATA_ENTRIES), &dummy, &mapped_nents); --- a/drivers/crypto/ccree/cc_buffer_mgr.h +++ b/drivers/crypto/ccree/cc_buffer_mgr.h @@ -25,6 +25,7 @@ enum cc_sg_cpy_direct {
struct cc_mlli { cc_sram_addr_t sram_addr; + unsigned int mapped_nents; unsigned int nents; //sg nents unsigned int mlli_nents; //mlli nents might be different than the above };
From: Gilad Ben-Yossef gilad@benyossef.com
commit 504e84abec7a635b861afd8d7f92ecd13eaa2b09 upstream.
Make sure to only add the size of the auth tag to the source mapping for encryption if it is an in-place operation. Failing to do this previously caused us to try and map auth size len bytes from a NULL mapping and crashing if both the cryptlen and assoclen are zero.
Reported-by: Geert Uytterhoeven geert+renesas@glider.be Tested-by: Geert Uytterhoeven geert+renesas@glider.be Signed-off-by: Gilad Ben-Yossef gilad@benyossef.com Cc: stable@vger.kernel.org # v4.19+ Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/crypto/ccree/cc_buffer_mgr.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/drivers/crypto/ccree/cc_buffer_mgr.c +++ b/drivers/crypto/ccree/cc_buffer_mgr.c @@ -1109,9 +1109,11 @@ int cc_map_aead_request(struct cc_drvdat }
size_to_map = req->cryptlen + areq_ctx->assoclen; - if (areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_ENCRYPT) + /* If we do in-place encryption, we also need the auth tag */ + if ((areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_ENCRYPT) && + (req->src == req->dst)) { size_to_map += authsize; - + } if (is_gcm4543) size_to_map += crypto_aead_ivsize(tfm); rc = cc_map_sg(dev, req->src, size_to_map, DMA_BIDIRECTIONAL,
From: Gilad Ben-Yossef gilad@benyossef.com
commit 8962c6d2c2b8ca51b0f188109015b15fc5f4da44 upstream.
Remove the auth tag size from cryptlen before mapping the destination in out-of-place AEAD decryption thus resolving a crash with extended testmgr tests.
Signed-off-by: Gilad Ben-Yossef gilad@benyossef.com Reported-by: Geert Uytterhoeven geert+renesas@glider.be Cc: stable@vger.kernel.org # v4.19+ Tested-by: Geert Uytterhoeven geert+renesas@glider.be Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/crypto/ccree/cc_buffer_mgr.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-)
--- a/drivers/crypto/ccree/cc_buffer_mgr.c +++ b/drivers/crypto/ccree/cc_buffer_mgr.c @@ -894,8 +894,12 @@ static int cc_aead_chain_data(struct cc_
if (req->src != req->dst) { size_for_map = areq_ctx->assoclen + req->cryptlen; - size_for_map += (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ? - authsize : 0; + + if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) + size_for_map += authsize; + else + size_for_map -= authsize; + if (is_gcm4543) size_for_map += crypto_aead_ivsize(tfm);
From: Steffen Maier maier@linux.ibm.com
commit 819732be9fea728623e1ed84eba28def7384ad1f upstream.
v2.6.27 commit cc8c282963bd ("[SCSI] zfcp: Automatically attach remote ports") introduced zfcp automatic port scan.
Before that, the user had to use the sysfs attribute "port_add" of an FCP device (adapter) to add and open remote (target) ports, even for the remote peer port in point-to-point topology. That code path did a proper port open recovery trigger taking the erp_lock.
Since the above commit, a new helper function zfcp_erp_open_ptp_port() performed an UNlocked port open recovery trigger. This can race with other parallel recovery triggers. In zfcp_erp_action_enqueue() this could corrupt e.g. adapter->erp_total_count or adapter->erp_ready_head.
As already found for fabric topology in v4.17 commit fa89adba1941 ("scsi: zfcp: fix infinite iteration on ERP ready list"), there was an endless loop during tracing of rport (un)block. A subsequent v4.18 commit 9e156c54ace3 ("scsi: zfcp: assert that the ERP lock is held when tracing a recovery trigger") introduced a lockdep assertion for that case.
As a side effect, that lockdep assertion now uncovered the unlocked code path for PtP. It is from within an adapter ERP action:
zfcp_erp_strategy[1479]                  intentionally DROPs erp lock around
                                         zfcp_erp_strategy_do_action()
zfcp_erp_strategy_do_action[1441]        NO erp lock
zfcp_erp_adapter_strategy[876]           NO erp lock
zfcp_erp_adapter_strategy_open[855]      NO erp lock
zfcp_erp_adapter_strategy_open_fsf[806]  NO erp lock
zfcp_erp_adapter_strat_fsf_xconf[772]    erp lock only around
                                         zfcp_erp_action_to_running(),
                                         BUT *_not_* around
                                         zfcp_erp_enqueue_ptp_port()
zfcp_erp_enqueue_ptp_port[728]           BUG: *_not_* taking erp lock
  _zfcp_erp_port_reopen[432]             assumes to be called with erp lock
    zfcp_erp_action_enqueue[314]         assumes to be called with erp lock
      zfcp_dbf_rec_trig[288]             _checks_ to be called with erp lock:
                                         lockdep_assert_held(&adapter->erp_lock);
It causes the following lockdep warning:
WARNING: CPU: 2 PID: 775 at drivers/s390/scsi/zfcp_dbf.c:288 zfcp_dbf_rec_trig+0x16a/0x188
no locks held by zfcperp0.0.17c0/775.
Fix this by using the proper locked recovery trigger helper function.
Link: https://lore.kernel.org/r/20200312174505.51294-2-maier@linux.ibm.com Fixes: cc8c282963bd ("[SCSI] zfcp: Automatically attach remote ports") Cc: stable@vger.kernel.org #v2.6.27+ Reviewed-by: Jens Remus jremus@linux.ibm.com Reviewed-by: Benjamin Block bblock@linux.ibm.com Signed-off-by: Steffen Maier maier@linux.ibm.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/s390/scsi/zfcp_erp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/s390/scsi/zfcp_erp.c +++ b/drivers/s390/scsi/zfcp_erp.c @@ -725,7 +725,7 @@ static void zfcp_erp_enqueue_ptp_port(st adapter->peer_d_id); if (IS_ERR(port)) /* error or port already attached */ return; - _zfcp_erp_port_reopen(port, 0, "ereptp1"); + zfcp_erp_port_reopen(port, 0, "ereptp1"); }
static enum zfcp_erp_act_result zfcp_erp_adapter_strat_fsf_xconf(
From: Stanley Chu stanley.chu@mediatek.com
commit 5a244e0ea67b293abb1d26c825db2ddde5f2862f upstream.
Auto-Hibern8 may be disabled by some vendors, or at runtime via sysfs, even if the Auto-Hibern8 capability is supported by the host. If the capability is supported by the host but Auto-Hibern8 is not actually enabled, an Auto-Hibern8 error shall not happen.
To fix this, provide a way to detect whether Auto-Hibern8 is actually enabled first, and bypass the Auto-Hibern8-disabled case in ufshcd_is_auto_hibern8_error().
Fixes: 821744403913 ("scsi: ufs: Add error-handling of Auto-Hibernate") Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200129105251.12466-4-stanley.chu@mediatek.com Reviewed-by: Bean Huo beanhuo@micron.com Reviewed-by: Alim Akhtar alim.akhtar@samsung.com Reviewed-by: Asutosh Das asutoshd@codeaurora.org Reviewed-by: Can Guo cang@codeaurora.org Signed-off-by: Stanley Chu stanley.chu@mediatek.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/scsi/ufs/ufshcd.c | 3 ++- drivers/scsi/ufs/ufshcd.h | 6 ++++++ 2 files changed, 8 insertions(+), 1 deletion(-)
--- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -5513,7 +5513,8 @@ static irqreturn_t ufshcd_update_uic_err static bool ufshcd_is_auto_hibern8_error(struct ufs_hba *hba, u32 intr_mask) { - if (!ufshcd_is_auto_hibern8_supported(hba)) + if (!ufshcd_is_auto_hibern8_supported(hba) || + !ufshcd_is_auto_hibern8_enabled(hba)) return false;
if (!(intr_mask & UFSHCD_UIC_HIBERN8_MASK)) --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -55,6 +55,7 @@ #include <linux/clk.h> #include <linux/completion.h> #include <linux/regulator/consumer.h> +#include <linux/bitfield.h> #include "unipro.h"
#include <asm/irq.h> @@ -781,6 +782,11 @@ static inline bool ufshcd_is_auto_hibern return (hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT); }
+static inline bool ufshcd_is_auto_hibern8_enabled(struct ufs_hba *hba) +{ + return FIELD_GET(UFSHCI_AHIBERN8_TIMER_MASK, hba->ahit) ? true : false; +} + #define ufshcd_writel(hba, val, reg) \ writel((val), (hba)->mmio_base + (reg)) #define ufshcd_readl(hba, reg) \
From: James Smart jsmart2021@gmail.com
commit 0ab384a49c548baf132ccef249f78d9c6c506380 upstream.
If a call to lpfc_get_cmd_rsp_buf_per_hdwq returns NULL (memory allocation failure), a previously allocated lpfc_io_buf resource is leaked.
Fix by releasing the lpfc_io_buf resource in the failure path.
Fixes: d79c9e9d4b3d ("scsi: lpfc: Support dynamic unbounded SGL lists on G7 hardware.") Cc: stable@vger.kernel.org # v5.4+ Link: https://lore.kernel.org/r/20200128002312.16346-3-jsmart2021@gmail.com Signed-off-by: Dick Kennedy dick.kennedy@broadcom.com Signed-off-by: James Smart jsmart2021@gmail.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/scsi/lpfc/lpfc_scsi.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/scsi/lpfc/lpfc_scsi.c +++ b/drivers/scsi/lpfc/lpfc_scsi.c @@ -671,8 +671,10 @@ lpfc_get_scsi_buf_s4(struct lpfc_hba *ph lpfc_cmd->prot_data_type = 0; #endif tmp = lpfc_get_cmd_rsp_buf_per_hdwq(phba, lpfc_cmd); - if (!tmp) + if (!tmp) { + lpfc_release_io_buf(phba, lpfc_cmd, lpfc_cmd->hdwq); return NULL; + }
lpfc_cmd->fcp_cmnd = tmp->fcp_cmnd; lpfc_cmd->fcp_rsp = tmp->fcp_rsp;
From: James Smart jsmart2021@gmail.com
commit 835214f5d5f516a38069bc077c879c7da00d6108 upstream.
When the driver is set to enable BB credit recovery, the switch displayed the setting as Inactive. If the link bounces, it switches to Active.
During link-up processing, the driver currently does an MBX_READ_SPARAM followed by an MBX_CONFIG_LINK. These mbox commands are queued and executed one at a time, and the completion is processed by the worker thread. Since the MBX_READ_SPARAM is done BEFORE the MBX_CONFIG_LINK, the BB_SC_N bit is never set in the returned values. BB credit recovery status only gets set after the driver requests the feature in CONFIG_LINK, which is done after the link comes up. Thus READ_SPARAM needs to be ordered after CONFIG_LINK.
Fix by reordering so that READ_SPARAM is done after CONFIG_LINK. Added a HBA_DEFER_FLOGI flag so that any FLOGI handling waits until after the READ_SPARAM is done so that the proper BB credit value is set in the FLOGI payload.
Fixes: 6bfb16208298 ("scsi: lpfc: Fix configuration of BB credit recovery in service parameters") Cc: stable@vger.kernel.org # v5.4+ Link: https://lore.kernel.org/r/20200128002312.16346-4-jsmart2021@gmail.com Signed-off-by: Dick Kennedy dick.kennedy@broadcom.com Signed-off-by: James Smart jsmart2021@gmail.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/scsi/lpfc/lpfc.h | 1 drivers/scsi/lpfc/lpfc_hbadisc.c | 59 +++++++++++++++++++++++++-------------- 2 files changed, 40 insertions(+), 20 deletions(-)
--- a/drivers/scsi/lpfc/lpfc.h +++ b/drivers/scsi/lpfc/lpfc.h @@ -749,6 +749,7 @@ struct lpfc_hba { * capability */ #define HBA_FLOGI_ISSUED 0x100000 /* FLOGI was issued */ +#define HBA_DEFER_FLOGI 0x800000 /* Defer FLOGI till read_sparm cmpl */
uint32_t fcp_ring_in_use; /* When polling test if intr-hndlr active*/ struct lpfc_dmabuf slim2p; --- a/drivers/scsi/lpfc/lpfc_hbadisc.c +++ b/drivers/scsi/lpfc/lpfc_hbadisc.c @@ -1162,13 +1162,16 @@ lpfc_mbx_cmpl_local_config_link(struct l }
/* Start discovery by sending a FLOGI. port_state is identically - * LPFC_FLOGI while waiting for FLOGI cmpl + * LPFC_FLOGI while waiting for FLOGI cmpl. Check if sending + * the FLOGI is being deferred till after MBX_READ_SPARAM completes. */ - if (vport->port_state != LPFC_FLOGI) - lpfc_initial_flogi(vport); - else if (vport->fc_flag & FC_PT2PT) - lpfc_disc_start(vport); - + if (vport->port_state != LPFC_FLOGI) { + if (!(phba->hba_flag & HBA_DEFER_FLOGI)) + lpfc_initial_flogi(vport); + } else { + if (vport->fc_flag & FC_PT2PT) + lpfc_disc_start(vport); + } return;
out: @@ -3093,6 +3096,14 @@ lpfc_mbx_cmpl_read_sparam(struct lpfc_hb lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); mempool_free(pmb, phba->mbox_mem_pool); + + /* Check if sending the FLOGI is being deferred to after we get + * up to date CSPs from MBX_READ_SPARAM. + */ + if (phba->hba_flag & HBA_DEFER_FLOGI) { + lpfc_initial_flogi(vport); + phba->hba_flag &= ~HBA_DEFER_FLOGI; + } return;
out: @@ -3223,6 +3234,23 @@ lpfc_mbx_process_link_up(struct lpfc_hba }
lpfc_linkup(phba); + sparam_mbox = NULL; + + if (!(phba->hba_flag & HBA_FCOE_MODE)) { + cfglink_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); + if (!cfglink_mbox) + goto out; + vport->port_state = LPFC_LOCAL_CFG_LINK; + lpfc_config_link(phba, cfglink_mbox); + cfglink_mbox->vport = vport; + cfglink_mbox->mbox_cmpl = lpfc_mbx_cmpl_local_config_link; + rc = lpfc_sli_issue_mbox(phba, cfglink_mbox, MBX_NOWAIT); + if (rc == MBX_NOT_FINISHED) { + mempool_free(cfglink_mbox, phba->mbox_mem_pool); + goto out; + } + } + sparam_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); if (!sparam_mbox) goto out; @@ -3243,20 +3271,7 @@ lpfc_mbx_process_link_up(struct lpfc_hba goto out; }
- if (!(phba->hba_flag & HBA_FCOE_MODE)) { - cfglink_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); - if (!cfglink_mbox) - goto out; - vport->port_state = LPFC_LOCAL_CFG_LINK; - lpfc_config_link(phba, cfglink_mbox); - cfglink_mbox->vport = vport; - cfglink_mbox->mbox_cmpl = lpfc_mbx_cmpl_local_config_link; - rc = lpfc_sli_issue_mbox(phba, cfglink_mbox, MBX_NOWAIT); - if (rc == MBX_NOT_FINISHED) { - mempool_free(cfglink_mbox, phba->mbox_mem_pool); - goto out; - } - } else { + if (phba->hba_flag & HBA_FCOE_MODE) { vport->port_state = LPFC_VPORT_UNKNOWN; /* * Add the driver's default FCF record at FCF index 0 now. This @@ -3313,6 +3328,10 @@ lpfc_mbx_process_link_up(struct lpfc_hba } /* Reset FCF roundrobin bmask for new discovery */ lpfc_sli4_clear_fcf_rr_bmask(phba); + } else { + if (phba->bbcredit_support && phba->cfg_enable_bbcr && + !(phba->link_flag & LS_LOOPBACK_MODE)) + phba->hba_flag |= HBA_DEFER_FLOGI; }
return;
From: Marek Szyprowski m.szyprowski@samsung.com
commit 32a1671ff8e84f0dfff3a50d4b2091d25e91f5e2 upstream.
Recent changes in the SPI core and the SPI-GPIO driver revealed that the GPIO lines for the LD9040 LCD controller on the UniversalC210 board are defined incorrectly. Fix the polarity for those lines to match the old behavior and hardware requirements to fix LCD panel operation with recent kernels.
Cc: stable@vger.kernel.org # 5.0.x Signed-off-by: Marek Szyprowski m.szyprowski@samsung.com Reviewed-by: Andrzej Hajda a.hajda@samsung.com Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm/boot/dts/exynos4210-universal_c210.dts | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-)
--- a/arch/arm/boot/dts/exynos4210-universal_c210.dts +++ b/arch/arm/boot/dts/exynos4210-universal_c210.dts @@ -115,7 +115,7 @@ gpio-sck = <&gpy3 1 GPIO_ACTIVE_HIGH>; gpio-mosi = <&gpy3 3 GPIO_ACTIVE_HIGH>; num-chipselects = <1>; - cs-gpios = <&gpy4 3 GPIO_ACTIVE_HIGH>; + cs-gpios = <&gpy4 3 GPIO_ACTIVE_LOW>;
lcd@0 { compatible = "samsung,ld9040"; @@ -124,8 +124,6 @@ vci-supply = <&ldo17_reg>; reset-gpios = <&gpy4 5 GPIO_ACTIVE_HIGH>; spi-max-frequency = <1200000>; - spi-cpol; - spi-cpha; power-on-delay = <10>; reset-delay = <10>; panel-width-mm = <90>;
From: Dave Gerlach d-gerlach@ti.com
commit a81e5442d796ccfa2cc97d205a5477053264d978 upstream.
The TI sci-clk driver can scan the DT for all clocks provided by system firmware, and does this by checking the clocks property of all nodes, so we must add this property to the dwc3 nodes so that the USB clocks are available.
Without this, USB does not work with the latest system firmware, i.e. [ 1.714662] clk: couldn't get parent clock 0 for /interconnect@100000/dwc3@4020000
Fixes: cc54a99464ccd ("arm64: dts: ti: k3-am6: add USB suppor") Signed-off-by: Dave Gerlach d-gerlach@ti.com Signed-off-by: Roger Quadros rogerq@ti.com Cc: stable@kernel.org Signed-off-by: Tero Kristo t-kristo@ti.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm64/boot/dts/ti/k3-am65-main.dtsi | 2 ++ 1 file changed, 2 insertions(+)
--- a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi +++ b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi @@ -307,6 +307,7 @@ interrupts = <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>; dma-coherent; power-domains = <&k3_pds 151 TI_SCI_PD_EXCLUSIVE>; + clocks = <&k3_clks 151 2>, <&k3_clks 151 7>; assigned-clocks = <&k3_clks 151 2>, <&k3_clks 151 7>; assigned-clock-parents = <&k3_clks 151 4>, /* set REF_CLK to 20MHz i.e. PER0_PLL/48 */ <&k3_clks 151 9>; /* set PIPE3_TXB_CLK to CLK_12M_RC/256 (for HS only) */ @@ -346,6 +347,7 @@ interrupts = <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>; dma-coherent; power-domains = <&k3_pds 152 TI_SCI_PD_EXCLUSIVE>; + clocks = <&k3_clks 152 2>; assigned-clocks = <&k3_clks 152 2>; assigned-clock-parents = <&k3_clks 152 4>; /* set REF_CLK to 20MHz i.e. PER0_PLL/48 */
From: Fredrik Strupe fredrik@strupe.net
commit fc2266011accd5aeb8ebc335c381991f20e26e33 upstream.
For thumb instructions, call_undef_hook() in traps.c first reads a u16, and if the u16 indicates a T32 instruction (u16 >= 0xe800), a second u16 is read, which then makes up the lower half-word of a T32 instruction. For T16 instructions, the second u16 is not read, which makes the resulting u32 opcode always have the upper half set to 0.
However, having the upper half of instr_mask in the undef_hook set to 0 masks out the upper half of all thumb instructions - both T16 and T32. This results in trapped T32 instructions with the lower half-word equal to the T16 encoding of setend (b650) being matched, even though the upper half-word is not 0000 and thus indicates a T32 opcode.
An example of such a T32 instruction is eaa0b650, which should raise a SIGILL since T32 instructions with an eaa prefix are unallocated as per Arm ARM, but instead works as a SETEND because the second half-word is set to b650.
This patch fixes the issue by extending instr_mask to include the upper u32 half, which will still match T16 instructions where the upper half is 0, but not T32 instructions.
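As a rough, user-space illustration (not taken from the patch), the sketch below applies both masks to the opcodes quoted above: with the old 16-bit-wide mask the unallocated T32 encoding eaa0b650 matches the T16 SETEND pattern, while the widened mask only matches the genuine T16 opcode.

#include <stdio.h>
#include <stdbool.h>

static bool hook_matches(unsigned int instr, unsigned int mask, unsigned int val)
{
	return (instr & mask) == val;
}

int main(void)
{
	unsigned int t32 = 0xeaa0b650;	/* unallocated T32 encoding from the log */
	unsigned int t16 = 0x0000b650;	/* genuine T16 SETEND LE */

	printf("old mask 0x0000fff7: T32 %d, T16 %d\n",
	       hook_matches(t32, 0x0000fff7, 0x0000b650),
	       hook_matches(t16, 0x0000fff7, 0x0000b650));
	printf("new mask 0xfffffff7: T32 %d, T16 %d\n",
	       hook_matches(t32, 0xfffffff7, 0x0000b650),
	       hook_matches(t16, 0xfffffff7, 0x0000b650));
	return 0;
}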
Fixes: 2d888f48e056 ("arm64: Emulate SETEND for AArch32 tasks") Cc: stable@vger.kernel.org # 4.0.x- Reviewed-by: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Fredrik Strupe fredrik@strupe.net Signed-off-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm64/kernel/armv8_deprecated.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/arm64/kernel/armv8_deprecated.c +++ b/arch/arm64/kernel/armv8_deprecated.c @@ -601,7 +601,7 @@ static struct undef_hook setend_hooks[] }, { /* Thumb mode */ - .instr_mask = 0x0000fff7, + .instr_mask = 0xfffffff7, .instr_val = 0x0000b650, .pstate_mask = (PSR_AA32_T_BIT | PSR_AA32_MODE_MASK), .pstate_val = (PSR_AA32_T_BIT | PSR_AA32_MODE_USR),
From: Michal Hocko mhocko@suse.com
commit eea274d64e6ea8aff2224d33d0851133a84cc7b5 upstream.
It was noticed that mlock2 tests are failing after 9c4e6b1a7027f ("mm, mlock, vmscan: no more skipping pagevecs") because the patch has changed the timing on when the page is added to the unevictable LRU list and thus gains the unevictable page flag.
The test was just too dependent on the implementation details which were true at the time when it was introduced. Page flags and the timing when they are set is something no userspace should ever depend on. The test should be testing only for the user observable contract of the tested syscalls. Those are defined pretty well for the mlock and there are other means for testing them. In fact this is already done and testing for page flags can be safely dropped to achieve the aimed purpose. Present bits can be checked by /proc/<pid>/smaps RSS field and the locking state by VmFlags although I would argue that Locked: field would be more appropriate.
Drop all the page flag machinery and considerably simplify the test. This should be more robust for future kernel changes while checking the promised contract is still valid.
Fixes: 9c4e6b1a7027f ("mm, mlock, vmscan: no more skipping pagevecs") Reported-by: Rafael Aquini aquini@redhat.com Signed-off-by: Michal Hocko mhocko@suse.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Acked-by: Rafael Aquini aquini@redhat.com Cc: Shakeel Butt shakeelb@google.com Cc: Eric B Munson emunson@akamai.com Cc: Shuah Khan shuah@kernel.org Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/20200324154218.GS19542@dhcp22.suse.cz Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- tools/testing/selftests/vm/mlock2-tests.c | 233 ++++-------------------------- 1 file changed, 37 insertions(+), 196 deletions(-)
--- a/tools/testing/selftests/vm/mlock2-tests.c +++ b/tools/testing/selftests/vm/mlock2-tests.c @@ -67,59 +67,6 @@ out: return ret; }
-static uint64_t get_pageflags(unsigned long addr) -{ - FILE *file; - uint64_t pfn; - unsigned long offset; - - file = fopen("/proc/self/pagemap", "r"); - if (!file) { - perror("fopen pagemap"); - _exit(1); - } - - offset = addr / getpagesize() * sizeof(pfn); - - if (fseek(file, offset, SEEK_SET)) { - perror("fseek pagemap"); - _exit(1); - } - - if (fread(&pfn, sizeof(pfn), 1, file) != 1) { - perror("fread pagemap"); - _exit(1); - } - - fclose(file); - return pfn; -} - -static uint64_t get_kpageflags(unsigned long pfn) -{ - uint64_t flags; - FILE *file; - - file = fopen("/proc/kpageflags", "r"); - if (!file) { - perror("fopen kpageflags"); - _exit(1); - } - - if (fseek(file, pfn * sizeof(flags), SEEK_SET)) { - perror("fseek kpageflags"); - _exit(1); - } - - if (fread(&flags, sizeof(flags), 1, file) != 1) { - perror("fread kpageflags"); - _exit(1); - } - - fclose(file); - return flags; -} - #define VMFLAGS "VmFlags:"
static bool is_vmflag_set(unsigned long addr, const char *vmflag) @@ -159,19 +106,13 @@ out: #define RSS "Rss:" #define LOCKED "lo"
-static bool is_vma_lock_on_fault(unsigned long addr) +static unsigned long get_value_for_name(unsigned long addr, const char *name) { - bool ret = false; - bool locked; - FILE *smaps = NULL; - unsigned long vma_size, vma_rss; char *line = NULL; - char *value; size_t size = 0; - - locked = is_vmflag_set(addr, LOCKED); - if (!locked) - goto out; + char *value_ptr; + FILE *smaps = NULL; + unsigned long value = -1UL;
smaps = seek_to_smaps_entry(addr); if (!smaps) { @@ -180,112 +121,70 @@ static bool is_vma_lock_on_fault(unsigne }
while (getline(&line, &size, smaps) > 0) { - if (!strstr(line, SIZE)) { + if (!strstr(line, name)) { free(line); line = NULL; size = 0; continue; }
- value = line + strlen(SIZE); - if (sscanf(value, "%lu kB", &vma_size) < 1) { + value_ptr = line + strlen(name); + if (sscanf(value_ptr, "%lu kB", &value) < 1) { printf("Unable to parse smaps entry for Size\n"); goto out; } break; }
- while (getline(&line, &size, smaps) > 0) { - if (!strstr(line, RSS)) { - free(line); - line = NULL; - size = 0; - continue; - } - - value = line + strlen(RSS); - if (sscanf(value, "%lu kB", &vma_rss) < 1) { - printf("Unable to parse smaps entry for Rss\n"); - goto out; - } - break; - } - - ret = locked && (vma_rss < vma_size); out: - free(line); if (smaps) fclose(smaps); - return ret; + free(line); + return value; }
-#define PRESENT_BIT 0x8000000000000000ULL -#define PFN_MASK 0x007FFFFFFFFFFFFFULL -#define UNEVICTABLE_BIT (1UL << 18) - -static int lock_check(char *map) +static bool is_vma_lock_on_fault(unsigned long addr) { - unsigned long page_size = getpagesize(); - uint64_t page1_flags, page2_flags; + bool locked; + unsigned long vma_size, vma_rss; + + locked = is_vmflag_set(addr, LOCKED); + if (!locked) + return false;
- page1_flags = get_pageflags((unsigned long)map); - page2_flags = get_pageflags((unsigned long)map + page_size); + vma_size = get_value_for_name(addr, SIZE); + vma_rss = get_value_for_name(addr, RSS);
- /* Both pages should be present */ - if (((page1_flags & PRESENT_BIT) == 0) || - ((page2_flags & PRESENT_BIT) == 0)) { - printf("Failed to make both pages present\n"); - return 1; - } + /* only one page is faulted in */ + return (vma_rss < vma_size); +}
- page1_flags = get_kpageflags(page1_flags & PFN_MASK); - page2_flags = get_kpageflags(page2_flags & PFN_MASK); +#define PRESENT_BIT 0x8000000000000000ULL +#define PFN_MASK 0x007FFFFFFFFFFFFFULL +#define UNEVICTABLE_BIT (1UL << 18)
- /* Both pages should be unevictable */ - if (((page1_flags & UNEVICTABLE_BIT) == 0) || - ((page2_flags & UNEVICTABLE_BIT) == 0)) { - printf("Failed to make both pages unevictable\n"); - return 1; - } +static int lock_check(unsigned long addr) +{ + bool locked; + unsigned long vma_size, vma_rss;
- if (!is_vmflag_set((unsigned long)map, LOCKED)) { - printf("VMA flag %s is missing on page 1\n", LOCKED); - return 1; - } + locked = is_vmflag_set(addr, LOCKED); + if (!locked) + return false;
- if (!is_vmflag_set((unsigned long)map + page_size, LOCKED)) { - printf("VMA flag %s is missing on page 2\n", LOCKED); - return 1; - } + vma_size = get_value_for_name(addr, SIZE); + vma_rss = get_value_for_name(addr, RSS);
- return 0; + return (vma_rss == vma_size); }
static int unlock_lock_check(char *map) { - unsigned long page_size = getpagesize(); - uint64_t page1_flags, page2_flags; - - page1_flags = get_pageflags((unsigned long)map); - page2_flags = get_pageflags((unsigned long)map + page_size); - page1_flags = get_kpageflags(page1_flags & PFN_MASK); - page2_flags = get_kpageflags(page2_flags & PFN_MASK); - - if ((page1_flags & UNEVICTABLE_BIT) || (page2_flags & UNEVICTABLE_BIT)) { - printf("A page is still marked unevictable after unlock\n"); - return 1; - } - if (is_vmflag_set((unsigned long)map, LOCKED)) { printf("VMA flag %s is present on page 1 after unlock\n", LOCKED); return 1; }
- if (is_vmflag_set((unsigned long)map + page_size, LOCKED)) { - printf("VMA flag %s is present on page 2 after unlock\n", LOCKED); - return 1; - } - return 0; }
@@ -311,7 +210,7 @@ static int test_mlock_lock() goto unmap; }
- if (lock_check(map)) + if (!lock_check((unsigned long)map)) goto unmap;
/* Now unlock and recheck attributes */ @@ -330,64 +229,18 @@ out:
static int onfault_check(char *map) { - unsigned long page_size = getpagesize(); - uint64_t page1_flags, page2_flags; - - page1_flags = get_pageflags((unsigned long)map); - page2_flags = get_pageflags((unsigned long)map + page_size); - - /* Neither page should be present */ - if ((page1_flags & PRESENT_BIT) || (page2_flags & PRESENT_BIT)) { - printf("Pages were made present by MLOCK_ONFAULT\n"); - return 1; - } - *map = 'a'; - page1_flags = get_pageflags((unsigned long)map); - page2_flags = get_pageflags((unsigned long)map + page_size); - - /* Only page 1 should be present */ - if ((page1_flags & PRESENT_BIT) == 0) { - printf("Page 1 is not present after fault\n"); - return 1; - } else if (page2_flags & PRESENT_BIT) { - printf("Page 2 was made present\n"); - return 1; - } - - page1_flags = get_kpageflags(page1_flags & PFN_MASK); - - /* Page 1 should be unevictable */ - if ((page1_flags & UNEVICTABLE_BIT) == 0) { - printf("Failed to make faulted page unevictable\n"); - return 1; - } - if (!is_vma_lock_on_fault((unsigned long)map)) { printf("VMA is not marked for lock on fault\n"); return 1; }
- if (!is_vma_lock_on_fault((unsigned long)map + page_size)) { - printf("VMA is not marked for lock on fault\n"); - return 1; - } - return 0; }
static int unlock_onfault_check(char *map) { unsigned long page_size = getpagesize(); - uint64_t page1_flags; - - page1_flags = get_pageflags((unsigned long)map); - page1_flags = get_kpageflags(page1_flags & PFN_MASK); - - if (page1_flags & UNEVICTABLE_BIT) { - printf("Page 1 is still marked unevictable after unlock\n"); - return 1; - }
if (is_vma_lock_on_fault((unsigned long)map) || is_vma_lock_on_fault((unsigned long)map + page_size)) { @@ -445,7 +298,6 @@ static int test_lock_onfault_of_present( char *map; int ret = 1; unsigned long page_size = getpagesize(); - uint64_t page1_flags, page2_flags;
map = mmap(NULL, 2 * page_size, PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); @@ -465,17 +317,6 @@ static int test_lock_onfault_of_present( goto unmap; }
- page1_flags = get_pageflags((unsigned long)map); - page2_flags = get_pageflags((unsigned long)map + page_size); - page1_flags = get_kpageflags(page1_flags & PFN_MASK); - page2_flags = get_kpageflags(page2_flags & PFN_MASK); - - /* Page 1 should be unevictable */ - if ((page1_flags & UNEVICTABLE_BIT) == 0) { - printf("Failed to make present page unevictable\n"); - goto unmap; - } - if (!is_vma_lock_on_fault((unsigned long)map) || !is_vma_lock_on_fault((unsigned long)map + page_size)) { printf("VMA with present pages is not marked lock on fault\n"); @@ -507,7 +348,7 @@ static int test_munlockall() goto out; }
- if (lock_check(map)) + if (!lock_check((unsigned long)map)) goto unmap;
if (munlockall()) { @@ -549,7 +390,7 @@ static int test_munlockall() goto out; }
- if (lock_check(map)) + if (!lock_check((unsigned long)map)) goto unmap;
if (munlockall()) {
From: Christophe Leroy christophe.leroy@c-s.fr
commit cabc30da10e677c67ab9a136b1478175734715c5 upstream.
Commit fa7b9a805c79 ("tools/selftest/vm: allow choosing mem size and page size in map_hugetlb") added the possibility to change the size of memory mapped for the test, but left the read and write test using the default value. This is unnoticed when mapping a length greater than the default one, but segfaults otherwise.
Fix read_bytes() and write_bytes() by giving them the real length.
Also fix the call to munmap().
Fixes: fa7b9a805c79 ("tools/selftest/vm: allow choosing mem size and page size in map_hugetlb") Signed-off-by: Christophe Leroy christophe.leroy@c-s.fr Signed-off-by: Andrew Morton akpm@linux-foundation.org Reviewed-by: Leonardo Bras leonardo@linux.ibm.com Cc: Michael Ellerman mpe@ellerman.id.au Cc: Shuah Khan shuah@kernel.org Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/9a404a13c871c4bd0ba9ede68f69a1225180dd7e.1580978385... Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- tools/testing/selftests/vm/map_hugetlb.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-)
--- a/tools/testing/selftests/vm/map_hugetlb.c +++ b/tools/testing/selftests/vm/map_hugetlb.c @@ -45,20 +45,20 @@ static void check_bytes(char *addr) printf("First hex is %x\n", *((unsigned int *)addr)); }
-static void write_bytes(char *addr) +static void write_bytes(char *addr, size_t length) { unsigned long i;
- for (i = 0; i < LENGTH; i++) + for (i = 0; i < length; i++) *(addr + i) = (char)i; }
-static int read_bytes(char *addr) +static int read_bytes(char *addr, size_t length) { unsigned long i;
check_bytes(addr); - for (i = 0; i < LENGTH; i++) + for (i = 0; i < length; i++) if (*(addr + i) != (char)i) { printf("Mismatch at %lu\n", i); return 1; @@ -96,11 +96,11 @@ int main(int argc, char **argv)
printf("Returned address is %p\n", addr); check_bytes(addr); - write_bytes(addr); - ret = read_bytes(addr); + write_bytes(addr, length); + ret = read_bytes(addr, length);
/* munmap() length of MAP_HUGETLB memory must be hugepage aligned */ - if (munmap(addr, LENGTH)) { + if (munmap(addr, length)) { perror("munmap"); exit(1); }
From: Christophe Leroy christophe.leroy@c-s.fr
commit 47bf235f324c696395c30541fe4fcf99fcd24188 upstream.
The commit identified below added tlbie_test but forgot to add it in .gitignore.
Fixes: 93cad5f78995 ("selftests/powerpc: Add test case for tlbie vs mtpidr ordering issue") Cc: stable@vger.kernel.org # v5.4+ Signed-off-by: Christophe Leroy christophe.leroy@c-s.fr Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/259f9c06ed4563c4fa4fa8ffa652347278d769e7.158284778... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- tools/testing/selftests/powerpc/mm/.gitignore | 1 + 1 file changed, 1 insertion(+)
--- a/tools/testing/selftests/powerpc/mm/.gitignore +++ b/tools/testing/selftests/powerpc/mm/.gitignore @@ -5,3 +5,4 @@ prot_sao segv_errors wild_bctr large_vm_fork_separation +tlbie_test
From: Michael Ellerman mpe@ellerman.id.au
commit 9686813f6e9d5568bc045de0be853411e44958c8 upstream.
We added a usage of try-run to pmu/ebb/Makefile to detect if the toolchain supported the -no-pie option.
This fails if we build out-of-tree and the source tree is not writable, as try-run tries to write its temporary files to the current directory. That leads to the -no-pie option being silently dropped, which leads to broken executables with some toolchains.
If we remove the redirect to /dev/null in try-run, we see the error:
make[3]: Entering directory '/linux/tools/testing/selftests/powerpc/pmu/ebb'
/usr/bin/ld: cannot open output file .54.tmp: Read-only file system
collect2: error: ld returned 1 exit status
make[3]: Nothing to be done for 'all'.
And looking with strace we see it's trying to use a file that's in the source tree:
lstat("/linux/tools/testing/selftests/powerpc/pmu/ebb/.54.tmp", 0x7ffffc0f83c8)
We can fix it by setting TMPOUT to point to the $(OUTPUT) directory, and we can verify with strace it's now trying to write to the output directory:
lstat("/output/kselftest/powerpc/pmu/ebb/.54.tmp", 0x7fffd1bf6bf8)
And also see that the -no-pie option is now correctly detected.
Fixes: 0695f8bca93e ("selftests/powerpc: Handle Makefile for unrecognized option") Cc: stable@vger.kernel.org # v5.5+ Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200327095319.2347641-1-mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- tools/testing/selftests/powerpc/pmu/ebb/Makefile | 1 + 1 file changed, 1 insertion(+)
--- a/tools/testing/selftests/powerpc/pmu/ebb/Makefile +++ b/tools/testing/selftests/powerpc/pmu/ebb/Makefile @@ -7,6 +7,7 @@ noarg: # The EBB handler is 64-bit code and everything links against it CFLAGS += -m64
+TMPOUT = $(OUTPUT)/ # Toolchains may build PIE by default which breaks the assembly no-pie-option := $(call try-run, echo 'int main() { return 0; }' | \ $(CC) -Werror $(KBUILD_CPPFLAGS) $(CC_OPTION_CFLAGS) -no-pie -x c - -o "$$TMP", -no-pie)
From: Eric Auger eric.auger@redhat.com
commit 723fe298ad85ad1278bd2312469ad14738953cc6 upstream.
Since commit 7723f4c5ecdb ("driver core: platform: Add an error message to platform_get_irq*()"), platform_get_irq() calls dev_err() on an error. As we enumerate all interrupts until platform_get_irq() fails, we now systematically get a message such as: "vfio-platform fff51000.ethernet: IRQ index 3 not found" which is a false positive.
Let's use platform_get_irq_optional() instead.
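For context only, and not part of the patch: a caller that enumerates a device's interrupts can use the _optional variant so that walking past the last index does not emit dev_err() noise. A minimal kernel-style sketch follows (the helper name is made up; it assumes platform_get_irq_optional() returns a negative errno once the index does not exist).

#include <linux/platform_device.h>

/* Hypothetical helper: count the IRQs of a platform device quietly. */
static unsigned int count_platform_irqs(struct platform_device *pdev)
{
	unsigned int cnt = 0;

	while (platform_get_irq_optional(pdev, cnt) > 0)
		cnt++;

	return cnt;
}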
Signed-off-by: Eric Auger eric.auger@redhat.com Cc: stable@vger.kernel.org # v5.3+ Reviewed-by: Andre Przywara andre.przywara@arm.com Tested-by: Andre Przywara andre.przywara@arm.com Signed-off-by: Alex Williamson alex.williamson@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/vfio/platform/vfio_platform.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/vfio/platform/vfio_platform.c +++ b/drivers/vfio/platform/vfio_platform.c @@ -44,7 +44,7 @@ static int get_platform_irq(struct vfio_ { struct platform_device *pdev = (struct platform_device *) vdev->opaque;
- return platform_get_irq(pdev, i); + return platform_get_irq_optional(pdev, i); }
static int vfio_platform_probe(struct platform_device *pdev)
From: Chris Wilson chris@chris-wilson.co.uk
commit 1aaea8476d9f014667d2cb24819f9bcaf3ebb7a4 upstream.
__i915_gem_object_flush_map() takes a byte range, so feed it the written bytes and do not mistake the u32 index as bytes!
Fixes: a679f58d0510 ("drm/i915: Flush pages on acquisition") Signed-off-by: Chris Wilson chris@chris-wilson.co.uk Cc: Matthew Auld matthew.william.auld@gmail.com Cc: stable@vger.kernel.org # v5.2+ Reviewed-by: Matthew Auld matthew.william.auld@gmail.com Link: https://patchwork.freedesktop.org/patch/msgid/20200406114821.10949-1-chris@c... (cherry picked from commit 30c88a47f1abd5744908d3681f54dcf823fe2a12) Signed-off-by: Rodrigo Vivi rodrigo.vivi@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -939,11 +939,13 @@ static inline struct i915_ggtt *cache_to
static void reloc_gpu_flush(struct reloc_cache *cache) { - GEM_BUG_ON(cache->rq_size >= cache->rq->batch->obj->base.size / sizeof(u32)); + struct drm_i915_gem_object *obj = cache->rq->batch->obj; + + GEM_BUG_ON(cache->rq_size >= obj->base.size / sizeof(u32)); cache->rq_cmd[cache->rq_size] = MI_BATCH_BUFFER_END;
- __i915_gem_object_flush_map(cache->rq->batch->obj, 0, cache->rq_size); - i915_gem_object_unpin_map(cache->rq->batch->obj); + __i915_gem_object_flush_map(obj, 0, sizeof(u32) * (cache->rq_size + 1)); + i915_gem_object_unpin_map(obj);
intel_gt_chipset_flush(cache->rq->engine->gt);
From: Christian Gmeiner christian.gmeiner@gmail.com
commit ed1dd899baa32d47d9a93d98336472da50564346 upstream.
Report the correct perfmon domains and signals depending on the supported feature flags.
Reported-by: Dan Carpenter dan.carpenter@oracle.com Fixes: 9e2c2e273012 ("drm/etnaviv: add infrastructure to query perf counter") Cc: stable@vger.kernel.org Signed-off-by: Christian Gmeiner christian.gmeiner@gmail.com Signed-off-by: Lucas Stach l.stach@pengutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/etnaviv/etnaviv_perfmon.c | 59 ++++++++++++++++++++++++++---- 1 file changed, 52 insertions(+), 7 deletions(-)
--- a/drivers/gpu/drm/etnaviv/etnaviv_perfmon.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_perfmon.c @@ -32,6 +32,7 @@ struct etnaviv_pm_domain { };
struct etnaviv_pm_domain_meta { + unsigned int feature; const struct etnaviv_pm_domain *domains; u32 nr_domains; }; @@ -410,36 +411,78 @@ static const struct etnaviv_pm_domain do
static const struct etnaviv_pm_domain_meta doms_meta[] = { { + .feature = chipFeatures_PIPE_3D, .nr_domains = ARRAY_SIZE(doms_3d), .domains = &doms_3d[0] }, { + .feature = chipFeatures_PIPE_2D, .nr_domains = ARRAY_SIZE(doms_2d), .domains = &doms_2d[0] }, { + .feature = chipFeatures_PIPE_VG, .nr_domains = ARRAY_SIZE(doms_vg), .domains = &doms_vg[0] } };
+static unsigned int num_pm_domains(const struct etnaviv_gpu *gpu) +{ + unsigned int num = 0, i; + + for (i = 0; i < ARRAY_SIZE(doms_meta); i++) { + const struct etnaviv_pm_domain_meta *meta = &doms_meta[i]; + + if (gpu->identity.features & meta->feature) + num += meta->nr_domains; + } + + return num; +} + +static const struct etnaviv_pm_domain *pm_domain(const struct etnaviv_gpu *gpu, + unsigned int index) +{ + const struct etnaviv_pm_domain *domain = NULL; + unsigned int offset = 0, i; + + for (i = 0; i < ARRAY_SIZE(doms_meta); i++) { + const struct etnaviv_pm_domain_meta *meta = &doms_meta[i]; + + if (!(gpu->identity.features & meta->feature)) + continue; + + if (meta->nr_domains < (index - offset)) { + offset += meta->nr_domains; + continue; + } + + domain = meta->domains + (index - offset); + } + + return domain; +} + int etnaviv_pm_query_dom(struct etnaviv_gpu *gpu, struct drm_etnaviv_pm_domain *domain) { - const struct etnaviv_pm_domain_meta *meta = &doms_meta[domain->pipe]; + const unsigned int nr_domains = num_pm_domains(gpu); const struct etnaviv_pm_domain *dom;
- if (domain->iter >= meta->nr_domains) + if (domain->iter >= nr_domains) return -EINVAL;
- dom = meta->domains + domain->iter; + dom = pm_domain(gpu, domain->iter); + if (!dom) + return -EINVAL;
domain->id = domain->iter; domain->nr_signals = dom->nr_signals; strncpy(domain->name, dom->name, sizeof(domain->name));
domain->iter++; - if (domain->iter == meta->nr_domains) + if (domain->iter == nr_domains) domain->iter = 0xff;
return 0; @@ -448,14 +491,16 @@ int etnaviv_pm_query_dom(struct etnaviv_ int etnaviv_pm_query_sig(struct etnaviv_gpu *gpu, struct drm_etnaviv_pm_signal *signal) { - const struct etnaviv_pm_domain_meta *meta = &doms_meta[signal->pipe]; + const unsigned int nr_domains = num_pm_domains(gpu); const struct etnaviv_pm_domain *dom; const struct etnaviv_pm_signal *sig;
- if (signal->domain >= meta->nr_domains) + if (signal->domain >= nr_domains) return -EINVAL;
- dom = meta->domains + signal->domain; + dom = pm_domain(gpu, signal->domain); + if (!dom) + return -EINVAL;
if (signal->iter >= dom->nr_signals) return -EINVAL;
From: Chris Wilson chris@chris-wilson.co.uk
commit ea36ec8623f56791c6ff6738d0509b7920f85220 upstream.
drm_pci_alloc/drm_pci_free are very thin wrappers around the core dma facilities, and we have no special reason within the drm layer to behave differently. In particular, since
commit de09d31dd38a50fdce106c15abd68432eebbd014 Author: Kirill A. Shutemov kirill.shutemov@linux.intel.com Date: Fri Jan 15 16:51:42 2016 -0800
page-flags: define PG_reserved behavior on compound pages
As far as I can see there's no users of PG_reserved on compound pages. Let's use PF_NO_COMPOUND here.
it has been illegal to combine GFP_COMP with SetPageReserved, so let's stop doing both and leave the dma layer to its own devices.
Reported-by: Taketo Kabe Bug: https://gitlab.freedesktop.org/drm/intel/issues/1027 Fixes: de09d31dd38a ("page-flags: define PG_reserved behavior on compound pages") Signed-off-by: Chris Wilson chris@chris-wilson.co.uk Cc: stable@vger.kernel.org # v4.5+ Reviewed-by: Alex Deucher alexander.deucher@amd.com Link: https://patchwork.freedesktop.org/patch/msgid/20200202171635.4039044-1-chris... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/drm_pci.c | 23 ++--------------------- 1 file changed, 2 insertions(+), 21 deletions(-)
--- a/drivers/gpu/drm/drm_pci.c +++ b/drivers/gpu/drm/drm_pci.c @@ -51,8 +51,6 @@ drm_dma_handle_t *drm_pci_alloc(struct drm_device * dev, size_t size, size_t align) { drm_dma_handle_t *dmah; - unsigned long addr; - size_t sz;
/* pci_alloc_consistent only guarantees alignment to the smallest * PAGE_SIZE order which is greater than or equal to the requested size. @@ -68,20 +66,13 @@ drm_dma_handle_t *drm_pci_alloc(struct d dmah->size = size; dmah->vaddr = dma_alloc_coherent(&dev->pdev->dev, size, &dmah->busaddr, - GFP_KERNEL | __GFP_COMP); + GFP_KERNEL);
if (dmah->vaddr == NULL) { kfree(dmah); return NULL; }
- /* XXX - Is virt_to_page() legal for consistent mem? */ - /* Reserve */ - for (addr = (unsigned long)dmah->vaddr, sz = size; - sz > 0; addr += PAGE_SIZE, sz -= PAGE_SIZE) { - SetPageReserved(virt_to_page((void *)addr)); - } - return dmah; }
@@ -94,19 +85,9 @@ EXPORT_SYMBOL(drm_pci_alloc); */ void __drm_legacy_pci_free(struct drm_device * dev, drm_dma_handle_t * dmah) { - unsigned long addr; - size_t sz; - - if (dmah->vaddr) { - /* XXX - Is virt_to_page() legal for consistent mem? */ - /* Unreserve */ - for (addr = (unsigned long)dmah->vaddr, sz = dmah->size; - sz > 0; addr += PAGE_SIZE, sz -= PAGE_SIZE) { - ClearPageReserved(virt_to_page((void *)addr)); - } + if (dmah->vaddr) dma_free_coherent(&dev->pdev->dev, dmah->size, dmah->vaddr, dmah->busaddr); - } }
/**
From: Yuxian Dai Yuxian.Dai@amd.com
commit 022ac4c9c55be35a2d1f71019a931324c51b0dab upstream.
1. Use the FCLK DPM table to set the MCLK. The DPM states consist of three entities: FCLK, UCLK and MEMCLK. All three clocks change together and MEMCLK follows FCLK, so use the FCLK frequency. 2. We should show the current working clock frequency from the clock table metric.
Signed-off-by: Yuxian Dai Yuxian.Dai@amd.com Reviewed-by: Alex Deucher alexander.deucher@amd.com Reviewed-by: Huang Rui ray.huang@amd.com Reviewed-by: Kevin Wang Kevin1.Wang@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/powerplay/renoir_ppt.c | 6 ++++++ drivers/gpu/drm/amd/powerplay/renoir_ppt.h | 2 +- 2 files changed, 7 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c +++ b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c @@ -181,6 +181,7 @@ static int renoir_print_clk_levels(struc uint32_t cur_value = 0, value = 0, count = 0, min = 0, max = 0; DpmClocks_t *clk_table = smu->smu_table.clocks_table; SmuMetrics_t metrics; + bool cur_value_match_level = false;
if (!clk_table || clk_type >= SMU_CLK_COUNT) return -EINVAL; @@ -240,8 +241,13 @@ static int renoir_print_clk_levels(struc GET_DPM_CUR_FREQ(clk_table, clk_type, i, value); size += sprintf(buf + size, "%d: %uMhz %s\n", i, value, cur_value == value ? "*" : ""); + if (cur_value == value) + cur_value_match_level = true; }
+ if (!cur_value_match_level) + size += sprintf(buf + size, " %uMhz *\n", cur_value); + return size; }
--- a/drivers/gpu/drm/amd/powerplay/renoir_ppt.h +++ b/drivers/gpu/drm/amd/powerplay/renoir_ppt.h @@ -37,7 +37,7 @@ extern void renoir_set_ppt_funcs(struct freq = table->SocClocks[dpm_level].Freq; \ break; \ case SMU_MCLK: \ - freq = table->MemClocks[dpm_level].Freq; \ + freq = table->FClocks[dpm_level].Freq; \ break; \ case SMU_DCEFCLK: \ freq = table->DcfClocks[dpm_level].Freq; \
From: Aaron Liu aaron.liu@amd.com
commit 2960758cce2310774de60bbbd8d6841d436c54d9 upstream.
Make the fw_write_wait default case true since presumably all new gfx9 asics will have updated firmware, that is, firmware using the unique WAIT_REG_MEM packet with operation=1.
Signed-off-by: Aaron Liu aaron.liu@amd.com Tested-by: Aaron Liu aaron.liu@amd.com Tested-by: Yuxian Dai Yuxian.Dai@amd.com Acked-by: Alex Deucher alexander.deucher@amd.com Acked-by: Huang Rui ray.huang@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c @@ -1040,6 +1040,8 @@ static void gfx_v9_0_check_fw_write_wait adev->gfx.mec_fw_write_wait = true; break; default: + adev->gfx.me_fw_write_wait = true; + adev->gfx.mec_fw_write_wait = true; break; } }
From: Michael Strauss michael.strauss@amd.com
commit 72f5b5a308c744573fdbc6c78202c52196d2c162 upstream.
[WHY] In cases where a clock table is malformed such that fclk entries have frequencies but not voltages listed, we don't catch the error and set clocks to 0 instead of using hardcoded values as we should.
[HOW] Also check the clock table's fclk entry's voltage
Signed-off-by: Michael Strauss michael.strauss@amd.com Reviewed-by: Eric Yang eric.yang2@amd.com Acked-by: Rodrigo Siqueira Rodrigo.Siqueira@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c @@ -641,7 +641,7 @@ static void rn_clk_mgr_helper_populate_b /* Find lowest DPM, FCLK is filled in reverse order*/
for (i = PP_SMU_NUM_FCLK_DPM_LEVELS - 1; i >= 0; i--) { - if (clock_table->FClocks[i].Freq != 0) { + if (clock_table->FClocks[i].Freq != 0 && clock_table->FClocks[i].Vol != 0) { j = i; break; }
From: Marek Szyprowski m.szyprowski@samsung.com
commit c0f83d164fb8f3a2b7bc379a6c1e27d1123a9eab upstream.
Scatterlist elements contain both pages and DMA addresses, but one should not assume a 1:1 relation between them. The sg->length is the size of the physical memory chunk described by sg->page, while sg_dma_len(sg) is the size of the DMA (IO virtual) chunk described by sg_dma_address(sg).
The proper way of extracting both the pages and the DMA addresses of the whole buffer described by a scatterlist is to iterate independently over the sg->page/sg->length and sg_dma_address(sg)/sg_dma_len(sg) entries.
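As an illustration only (not part of the patch, and assuming hypothetical pages[]/addrs[] output arrays), such an independent walk looks roughly like this:

	struct scatterlist *sg;
	unsigned int count, page_index = 0, dma_index = 0;

	for_each_sg(sgt->sgl, sg, sgt->nents, count) {
		struct page *page = sg_page(sg);
		unsigned int page_len = sg->length;      /* CPU-side chunk size */
		dma_addr_t addr = sg_dma_address(sg);
		unsigned int dma_len = sg_dma_len(sg);   /* IOVA chunk size, may differ */

		while (page_len > 0) {                   /* walk the physical pages */
			pages[page_index++] = page++;
			page_len -= PAGE_SIZE;
		}
		while (dma_len > 0) {                    /* walk the DMA range separately */
			addrs[dma_index++] = addr;
			addr += PAGE_SIZE;
			dma_len -= PAGE_SIZE;
		}
	}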
Fixes: 42e67b479eab ("drm/prime: use dma length macro when mapping sg") Signed-off-by: Marek Szyprowski m.szyprowski@samsung.com Reviewed-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Link: https://patchwork.freedesktop.org/patch/msgid/20200327162126.29705-1-m.szypr... Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/drm_prime.c | 37 +++++++++++++++++++++++++------------ 1 file changed, 25 insertions(+), 12 deletions(-)
--- a/drivers/gpu/drm/drm_prime.c +++ b/drivers/gpu/drm/drm_prime.c @@ -959,27 +959,40 @@ int drm_prime_sg_to_page_addr_arrays(str unsigned count; struct scatterlist *sg; struct page *page; - u32 len, index; + u32 page_len, page_index; dma_addr_t addr; + u32 dma_len, dma_index;
- index = 0; + /* + * Scatterlist elements contains both pages and DMA addresses, but + * one shoud not assume 1:1 relation between them. The sg->length is + * the size of the physical memory chunk described by the sg->page, + * while sg_dma_len(sg) is the size of the DMA (IO virtual) chunk + * described by the sg_dma_address(sg). + */ + page_index = 0; + dma_index = 0; for_each_sg(sgt->sgl, sg, sgt->nents, count) { - len = sg_dma_len(sg); + page_len = sg->length; page = sg_page(sg); + dma_len = sg_dma_len(sg); addr = sg_dma_address(sg);
- while (len > 0) { - if (WARN_ON(index >= max_entries)) + while (pages && page_len > 0) { + if (WARN_ON(page_index >= max_entries)) return -1; - if (pages) - pages[index] = page; - if (addrs) - addrs[index] = addr; - + pages[page_index] = page; page++; + page_len -= PAGE_SIZE; + page_index++; + } + while (addrs && dma_len > 0) { + if (WARN_ON(dma_index >= max_entries)) + return -1; + addrs[dma_index] = addr; addr += PAGE_SIZE; - len -= PAGE_SIZE; - index++; + dma_len -= PAGE_SIZE; + dma_index++; } } return 0;
From: Libor Pechacek lpechacek@suse.cz
commit a83836dbc53e96f13fec248ecc201d18e1e3111d upstream.
In guests without hotpluggable memory, the drmem structure is only zero-initialized. Trying to manipulate DLPAR parameters results in a crash.
$ echo "memory add count 1" > /sys/kernel/dlpar Oops: Kernel access of bad area, sig: 11 [#1] LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries ... NIP: c0000000000ff294 LR: c0000000000ff248 CTR: 0000000000000000 REGS: c0000000fb9d3880 TRAP: 0300 Tainted: G E (5.5.0-rc6-2-default) MSR: 8000000000009033 <SF,EE,ME,IR,DR,RI,LE> CR: 28242428 XER: 20000000 CFAR: c0000000009a6c10 DAR: 0000000000000010 DSISR: 40000000 IRQMASK: 0 ... NIP dlpar_memory+0x6e4/0xd00 LR dlpar_memory+0x698/0xd00 Call Trace: dlpar_memory+0x698/0xd00 (unreliable) handle_dlpar_errorlog+0xc0/0x190 dlpar_store+0x198/0x4a0 kobj_attr_store+0x30/0x50 sysfs_kf_write+0x64/0x90 kernfs_fop_write+0x1b0/0x290 __vfs_write+0x3c/0x70 vfs_write+0xd0/0x260 ksys_write+0xdc/0x130 system_call+0x5c/0x68
Taking closer look at the code, I can see that for_each_drmem_lmb is a macro expanding into `for (lmb = &drmem_info->lmbs[0]; lmb <= &drmem_info->lmbs[drmem_info->n_lmbs - 1]; lmb++)`. When drmem_info->lmbs is NULL, the loop would iterate through the whole address range if it weren't stopped by the NULL pointer dereference on the next line.
This patch aligns for_each_drmem_lmb and for_each_drmem_lmb_in_range macro behavior with the common C semantics, where the end marker does not belong to the scanned range, and alters get_lmb_range() semantics. As a side effect, the wraparound observed in the crash is prevented.
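In loop form (sketch only, simplified from the hunks below), the change is the usual shift from an inclusive end marker to a half-open range, which also behaves sanely when n_lmbs is 0:

	/* before: inclusive end marker, still entered when lmbs is NULL/empty */
	for (lmb = &drmem_info->lmbs[0];
	     lmb <= &drmem_info->lmbs[drmem_info->n_lmbs - 1]; lmb++)
		...

	/* after: half-open range, body skipped when n_lmbs == 0 */
	for (lmb = &drmem_info->lmbs[0];
	     lmb < &drmem_info->lmbs[drmem_info->n_lmbs]; lmb++)
		...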
Fixes: 6c6ea53725b3 ("powerpc/mm: Separate ibm, dynamic-memory data from DT format") Cc: stable@vger.kernel.org # v4.16+ Signed-off-by: Libor Pechacek lpechacek@suse.cz Signed-off-by: Michal Suchanek msuchanek@suse.de Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200131132829.10281-1-msuchanek@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/include/asm/drmem.h | 4 ++-- arch/powerpc/platforms/pseries/hotplug-memory.c | 8 ++++---- 2 files changed, 6 insertions(+), 6 deletions(-)
--- a/arch/powerpc/include/asm/drmem.h +++ b/arch/powerpc/include/asm/drmem.h @@ -27,12 +27,12 @@ struct drmem_lmb_info { extern struct drmem_lmb_info *drmem_info;
#define for_each_drmem_lmb_in_range(lmb, start, end) \ - for ((lmb) = (start); (lmb) <= (end); (lmb)++) + for ((lmb) = (start); (lmb) < (end); (lmb)++)
#define for_each_drmem_lmb(lmb) \ for_each_drmem_lmb_in_range((lmb), \ &drmem_info->lmbs[0], \ - &drmem_info->lmbs[drmem_info->n_lmbs - 1]) + &drmem_info->lmbs[drmem_info->n_lmbs])
/* * The of_drconf_cell_v1 struct defines the layout of the LMB data --- a/arch/powerpc/platforms/pseries/hotplug-memory.c +++ b/arch/powerpc/platforms/pseries/hotplug-memory.c @@ -223,7 +223,7 @@ static int get_lmb_range(u32 drc_index, struct drmem_lmb **end_lmb) { struct drmem_lmb *lmb, *start, *end; - struct drmem_lmb *last_lmb; + struct drmem_lmb *limit;
start = NULL; for_each_drmem_lmb(lmb) { @@ -236,10 +236,10 @@ static int get_lmb_range(u32 drc_index, if (!start) return -EINVAL;
- end = &start[n_lmbs - 1]; + end = &start[n_lmbs];
- last_lmb = &drmem_info->lmbs[drmem_info->n_lmbs - 1]; - if (end > last_lmb) + limit = &drmem_info->lmbs[drmem_info->n_lmbs]; + if (end > limit) return -EINVAL;
*start_lmb = start;
From: Hans de Goede hdegoede@redhat.com
commit a65a97b48694d34248195eb89bf3687403261056 upstream.
The vboxvideo driver is missing a call to remove conflicting framebuffers.
Surprisingly, when using legacy BIOS booting this does not really cause any issues. But when using UEFI to boot the VM, plymouth will draw on both the efifb /dev/fb0 and /dev/drm/card0 (which has registered /dev/fb1 as fbdev emulation).
VirtualBox will actually display the output of both devices (I guess it is showing whatever was drawn last); this causes weird artifacts because of pitch issues in the efifb when the VM window is not sized at 1024x768 (the window will resize to its last size once the vboxvideo driver loads, changing the pitch).
Adding the missing drm_fb_helper_remove_conflicting_pci_framebuffers() call fixes this.
Changes in v2: -Make the drm_fb_helper_remove_conflicting_pci_framebuffers() call one of the first things we do in our probe() method
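A minimal sketch of the resulting probe flow (simplified; the actual hunk is below):

	static int vbox_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
	{
		int ret;

		/* kick out the firmware framebuffer (e.g. efifb) before doing anything else */
		ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "vboxvideodrmfb");
		if (ret)
			return ret;

		/* ... normal device setup continues here ... */
		return 0;
	}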
Cc: stable@vger.kernel.org Fixes: 2695eae1f6d3 ("drm/vboxvideo: Switch to generic fbdev emulation") Signed-off-by: Hans de Goede hdegoede@redhat.com Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch Link: https://patchwork.freedesktop.org/patch/msgid/20200325144310.36779-1-hdegoed... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/vboxvideo/vbox_drv.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/drivers/gpu/drm/vboxvideo/vbox_drv.c +++ b/drivers/gpu/drm/vboxvideo/vbox_drv.c @@ -41,6 +41,10 @@ static int vbox_pci_probe(struct pci_dev if (!vbox_check_supported(VBE_DISPI_ID_HGSMI)) return -ENODEV;
+ ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "vboxvideodrmfb"); + if (ret) + return ret; + vbox = kzalloc(sizeof(*vbox), GFP_KERNEL); if (!vbox) return -ENOMEM;
From: J. Bruce Fields bfields@redhat.com
commit 69afd267982e733a555fede4e85fe30329ed0588 upstream.
Userspace should be able to monitor nfsd/clients/ to see when clients come and go, but we're failing to send fsnotify events.
Cc: stable@kernel.org Signed-off-by: J. Bruce Fields bfields@redhat.com Signed-off-by: Chuck Lever chuck.lever@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/nfsd/nfsctl.c | 1 + 1 file changed, 1 insertion(+)
--- a/fs/nfsd/nfsctl.c +++ b/fs/nfsd/nfsctl.c @@ -1333,6 +1333,7 @@ void nfsd_client_rmdir(struct dentry *de dget(dentry); ret = simple_rmdir(dir, dentry); WARN_ON_ONCE(ret); + fsnotify_rmdir(dir, dentry); d_delete(dentry); inode_unlock(dir); }
From: Trond Myklebust trond.myklebust@hammerspace.com
commit dc9dc2febb17f72e9878eb540ad3996f7984239a upstream.
We need to ensure that we create the mirror requests before calling nfs_pageio_add_request_mirror() on the request we are adding. Otherwise, we can end up with a use-after-free if the call to nfs_pageio_add_request_mirror() triggers I/O.
Fixes: c917cfaf9bbe ("NFS: Fix up NFS I/O subrequest creation") Cc: stable@vger.kernel.org Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/nfs/pagelist.c | 48 ++++++++++++++++++++++++------------------------ 1 file changed, 24 insertions(+), 24 deletions(-)
--- a/fs/nfs/pagelist.c +++ b/fs/nfs/pagelist.c @@ -1177,38 +1177,38 @@ int nfs_pageio_add_request(struct nfs_pa if (desc->pg_error < 0) goto out_failed;
- for (midx = 0; midx < desc->pg_mirror_count; midx++) { - if (midx) { - nfs_page_group_lock(req); - - /* find the last request */ - for (lastreq = req->wb_head; - lastreq->wb_this_page != req->wb_head; - lastreq = lastreq->wb_this_page) - ; - - dupreq = nfs_create_subreq(req, lastreq, - pgbase, offset, bytes); - - nfs_page_group_unlock(req); - if (IS_ERR(dupreq)) { - desc->pg_error = PTR_ERR(dupreq); - goto out_failed; - } - } else - dupreq = req; + /* Create the mirror instances first, and fire them off */ + for (midx = 1; midx < desc->pg_mirror_count; midx++) { + nfs_page_group_lock(req); + + /* find the last request */ + for (lastreq = req->wb_head; + lastreq->wb_this_page != req->wb_head; + lastreq = lastreq->wb_this_page) + ; + + dupreq = nfs_create_subreq(req, lastreq, + pgbase, offset, bytes); + + nfs_page_group_unlock(req); + if (IS_ERR(dupreq)) { + desc->pg_error = PTR_ERR(dupreq); + goto out_failed; + }
- if (nfs_pgio_has_mirroring(desc)) - desc->pg_mirror_idx = midx; + desc->pg_mirror_idx = midx; if (!nfs_pageio_add_request_mirror(desc, dupreq)) goto out_cleanup_subreq; }
+ desc->pg_mirror_idx = 0; + if (!nfs_pageio_add_request_mirror(desc, req)) + goto out_failed; + return 1;
out_cleanup_subreq: - if (req != dupreq) - nfs_pageio_cleanup_request(desc, dupreq); + nfs_pageio_cleanup_request(desc, dupreq); out_failed: nfs_pageio_error_cleanup(desc); return 0;
From: Trond Myklebust trond.myklebust@hammerspace.com
commit add42de31721fa29ed77a7ce388674d69f9d31a4 upstream.
When we detach a subrequest from the list, we must also release the reference it holds to the parent.
Fixes: 5b2b5187fa85 ("NFS: Fix nfs_page_group_destroy() and nfs_lock_and_join_requests() race cases") Cc: stable@vger.kernel.org # v4.14+ Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/nfs/write.c | 1 + 1 file changed, 1 insertion(+)
--- a/fs/nfs/write.c +++ b/fs/nfs/write.c @@ -441,6 +441,7 @@ nfs_destroy_unlinked_subrequests(struct }
subreq->wb_head = subreq; + nfs_release_request(old_head);
if (test_and_clear_bit(PG_INODE_REF, &subreq->wb_flags)) { nfs_release_request(subreq);
From: Qian Cai cai@lca.pw
commit 28936b62e71e41600bab319f262ea9f9b1027629 upstream.
inode->i_blocks could be accessed concurrently as noticed by KCSAN,
BUG: KCSAN: data-race in ext4_do_update_inode [ext4] / inode_add_bytes
write to 0xffff9a00d4b982d0 of 8 bytes by task 22100 on cpu 118: inode_add_bytes+0x65/0xf0 __inode_add_bytes at fs/stat.c:689 (inlined by) inode_add_bytes at fs/stat.c:702 ext4_mb_new_blocks+0x418/0xca0 [ext4] ext4_ext_map_blocks+0x1a6b/0x27b0 [ext4] ext4_map_blocks+0x1a9/0x950 [ext4] _ext4_get_block+0xfc/0x270 [ext4] ext4_get_block_unwritten+0x33/0x50 [ext4] __block_write_begin_int+0x22e/0xae0 __block_write_begin+0x39/0x50 ext4_write_begin+0x388/0xb50 [ext4] ext4_da_write_begin+0x35f/0x8f0 [ext4] generic_perform_write+0x15d/0x290 ext4_buffered_write_iter+0x11f/0x210 [ext4] ext4_file_write_iter+0xce/0x9e0 [ext4] new_sync_write+0x29c/0x3b0 __vfs_write+0x92/0xa0 vfs_write+0x103/0x260 ksys_write+0x9d/0x130 __x64_sys_write+0x4c/0x60 do_syscall_64+0x91/0xb05 entry_SYSCALL_64_after_hwframe+0x49/0xbe
read to 0xffff9a00d4b982d0 of 8 bytes by task 8 on cpu 65: ext4_do_update_inode+0x4a0/0xf60 [ext4] ext4_inode_blocks_set at fs/ext4/inode.c:4815 ext4_mark_iloc_dirty+0xaf/0x160 [ext4] ext4_mark_inode_dirty+0x129/0x3e0 [ext4] ext4_convert_unwritten_extents+0x253/0x2d0 [ext4] ext4_convert_unwritten_io_end_vec+0xc5/0x150 [ext4] ext4_end_io_rsv_work+0x22c/0x350 [ext4] process_one_work+0x54f/0xb90 worker_thread+0x80/0x5f0 kthread+0x1cd/0x1f0 ret_from_fork+0x27/0x50
4 locks held by kworker/u256:0/8: #0: ffff9a025abc4328 ((wq_completion)ext4-rsv-conversion){+.+.}, at: process_one_work+0x443/0xb90 #1: ffffab5a862dbe20 ((work_completion)(&ei->i_rsv_conversion_work)){+.+.}, at: process_one_work+0x443/0xb90 #2: ffff9a025a9d0f58 (jbd2_handle){++++}, at: start_this_handle+0x1c1/0x9d0 [jbd2] #3: ffff9a00d4b985d8 (&(&ei->i_raw_lock)->rlock){+.+.}, at: ext4_do_update_inode+0xaa/0xf60 [ext4] irq event stamp: 3009267 hardirqs last enabled at (3009267): [<ffffffff980da9b7>] __find_get_block+0x107/0x790 hardirqs last disabled at (3009266): [<ffffffff980da8f9>] __find_get_block+0x49/0x790 softirqs last enabled at (3009230): [<ffffffff98a0034c>] __do_softirq+0x34c/0x57c softirqs last disabled at (3009223): [<ffffffff97cc67a2>] irq_exit+0xa2/0xc0
Reported by Kernel Concurrency Sanitizer on: CPU: 65 PID: 8 Comm: kworker/u256:0 Tainted: G L 5.6.0-rc2-next-20200221+ #7 Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 07/10/2019 Workqueue: ext4-rsv-conversion ext4_end_io_rsv_work [ext4]
The plain read is outside of inode->i_lock critical section which results in a data race. Fix it by adding READ_ONCE() there.
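As a generic illustration of the pattern (not the exact ext4 call chain), the writer keeps updating the field under inode->i_lock while the lockless reader annotates its access:

	/* writer side, e.g. inode_add_bytes(), runs under the spinlock */
	spin_lock(&inode->i_lock);
	inode->i_blocks += blocks;
	spin_unlock(&inode->i_lock);

	/* reader side outside the lock: mark the intentionally racy read */
	u64 i_blocks = READ_ONCE(inode->i_blocks);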
Link: https://lore.kernel.org/r/20200222043258.2279-1-cai@lca.pw Signed-off-by: Qian Cai cai@lca.pw Signed-off-by: Theodore Ts'o tytso@mit.edu Cc: stable@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/inode.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -4783,7 +4783,7 @@ static int ext4_inode_blocks_set(handle_ struct ext4_inode_info *ei) { struct inode *inode = &(ei->vfs_inode); - u64 i_blocks = inode->i_blocks; + u64 i_blocks = READ_ONCE(inode->i_blocks); struct super_block *sb = inode->i_sb;
if (i_blocks <= ~0U) {
From: Chris Wilson chris@chris-wilson.co.uk
commit 98479ada421a8fd2123b98efd398a6f1379307ab upstream.
If we park/unpark faster than we can respond to RPS events, we will never process a downclock event after expiring a waitboost, and thus we will forever restart the GPU at max clocks even if the workload switches and doesn't justify full power.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/1500 Fixes: 3e7abf814193 ("drm/i915: Extract GT render power state management") Signed-off-by: Chris Wilson chris@chris-wilson.co.uk Cc: Andi Shyti andi.shyti@intel.com Cc: Lyude Paul lyude@redhat.com Reviewed-by: Andi Shyti andi.shyti@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20200322163225.28791-1-chris@c... Cc: stable@vger.kernel.org # v5.5+ (cherry picked from commit 21abf0bf168dffff1192e0f072af1dc74ae1ff0e) Signed-off-by: Rodrigo Vivi rodrigo.vivi@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/i915/gt/intel_rps.c | 13 +++++++++++++ 1 file changed, 13 insertions(+)
--- a/drivers/gpu/drm/i915/gt/intel_rps.c +++ b/drivers/gpu/drm/i915/gt/intel_rps.c @@ -763,6 +763,19 @@ void intel_rps_park(struct intel_rps *rp intel_uncore_forcewake_get(rps_to_uncore(rps), FORCEWAKE_MEDIA); rps_set(rps, rps->idle_freq); intel_uncore_forcewake_put(rps_to_uncore(rps), FORCEWAKE_MEDIA); + + /* + * Since we will try and restart from the previously requested + * frequency on unparking, treat this idle point as a downclock + * interrupt and reduce the frequency for resume. If we park/unpark + * more frequently than the rps worker can run, we will not respond + * to any EI and never see a change in frequency. + * + * (Note we accommodate Cherryview's limitation of only using an + * even bin by applying it to all.) + */ + rps->cur_freq = + max_t(int, round_down(rps->cur_freq - 1, 2), rps->min_freq); }
void intel_rps_boost(struct i915_request *rq)
From: Eric Biggers ebiggers@google.com
commit 26c5d78c976ca298e59a56f6101a97b618ba3539 upstream.
After request_module(), nothing is stopping the module from being unloaded until someone takes a reference to it via try_get_module().
The WARN_ONCE() in get_fs_type() is thus user-reachable, via userspace running 'rmmod' concurrently.
Since WARN_ONCE() is for kernel bugs only, not for user-reachable situations, downgrade this warning to pr_warn_once().
Keep it printed once only, since the intent of this warning is to detect a bug in modprobe at boot time. Printing the warning more than once wouldn't really provide any useful extra information.
Fixes: 41124db869b7 ("fs: warn in case userspace lied about modprobe return") Signed-off-by: Eric Biggers ebiggers@google.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Reviewed-by: Jessica Yu jeyu@kernel.org Cc: Alexei Starovoitov ast@kernel.org Cc: Greg Kroah-Hartman gregkh@linuxfoundation.org Cc: Jeff Vander Stoep jeffv@google.com Cc: Jessica Yu jeyu@kernel.org Cc: Kees Cook keescook@chromium.org Cc: Luis Chamberlain mcgrof@kernel.org Cc: NeilBrown neilb@suse.com Cc: stable@vger.kernel.org [4.13+] Link: http://lkml.kernel.org/r/20200312202552.241885-3-ebiggers@kernel.org Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/filesystems.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/fs/filesystems.c +++ b/fs/filesystems.c @@ -271,7 +271,9 @@ struct file_system_type *get_fs_type(con fs = __get_fs_type(name, len); if (!fs && (request_module("fs-%.*s", len, name) == 0)) { fs = __get_fs_type(name, len); - WARN_ONCE(!fs, "request_module fs-%.*s succeeded, but still no fs?\n", len, name); + if (!fs) + pr_warn_once("request_module fs-%.*s succeeded, but still no fs?\n", + len, name); }
if (dot && fs && !(fs->fs_flags & FS_HAS_SUBTYPE)) {
From: Changwei Ge chge@linux.alibaba.com
commit 783fda856e1034dee90a873f7654c418212d12d7 upstream.
With the FALLOC_FL_PUNCH_HOLE mode set, the Linux fallocate(2) offset can exceed the inode size. Ocfs2 currently doesn't allow the offset to be beyond the inode size. This restriction is not necessary and violates fallocate(2) semantics.
If fallocate(2) offset is beyond inode size, just return success and do nothing further.
Otherwise, ocfs2 will crash the kernel.
kernel BUG at fs/ocfs2//alloc.c:7264! ocfs2_truncate_inline+0x20f/0x360 [ocfs2] ocfs2_remove_inode_range+0x23c/0xcb0 [ocfs2] __ocfs2_change_file_space+0x4a5/0x650 [ocfs2] ocfs2_fallocate+0x83/0xa0 [ocfs2] vfs_fallocate+0x148/0x230 SyS_fallocate+0x48/0x80 do_syscall_64+0x79/0x170
Signed-off-by: Changwei Ge chge@linux.alibaba.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Reviewed-by: Joseph Qi joseph.qi@linux.alibaba.com Cc: Mark Fasheh mark@fasheh.com Cc: Joel Becker jlbec@evilplan.org Cc: Junxiao Bi junxiao.bi@oracle.com Cc: Changwei Ge gechangwei@live.cn Cc: Gang He ghe@suse.com Cc: Jun Piao piaojun@huawei.com Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/20200407082754.17565-1-chge@linux.alibaba.com Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ocfs2/alloc.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/fs/ocfs2/alloc.c +++ b/fs/ocfs2/alloc.c @@ -7403,6 +7403,10 @@ int ocfs2_truncate_inline(struct inode * struct ocfs2_dinode *di = (struct ocfs2_dinode *)di_bh->b_data; struct ocfs2_inline_data *idata = &di->id2.i_data;
+ /* No need to punch hole beyond i_size. */ + if (start >= i_size_read(inode)) + return 0; + if (end > i_size_read(inode)) end = i_size_read(inode);
From: Sam Lunt samueljlunt@gmail.com
commit b9c9ce4e598e012ca7c1813fae2f4d02395807de upstream.
Python 3.8 changed the output of 'python-config --ldflags' to no longer include the '-lpythonX.Y' flag (this apparently fixed an issue loading modules with a statically linked Python executable). The libpython feature check in linux/build/feature fails if the Python library is not included in the FEATURE_CHECK_LDFLAGS-libpython variable.
This adds a check in the Makefile to determine if PYTHON_CONFIG accepts the '--embed' flag and passes that flag alongside '--ldflags' if so.
tools/perf is the only place the libpython feature check is used.
Signed-off-by: Sam Lunt samuel.j.lunt@gmail.com Tested-by: He Zhe zhe.he@windriver.com Link: http://lore.kernel.org/lkml/c56be2e1-8111-9dfe-8298-f7d0f9ab7431@windriver.c... Acked-by: Jiri Olsa jolsa@redhat.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Mark Rutland mark.rutland@arm.com Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Cc: trivial@kernel.org Cc: stable@kernel.org Link: http://lore.kernel.org/lkml/20200131181123.tmamivhq4b7uqasr@gmail.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- tools/perf/Makefile.config | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-)
--- a/tools/perf/Makefile.config +++ b/tools/perf/Makefile.config @@ -228,8 +228,17 @@ strip-libs = $(filter-out -l%,$(1))
PYTHON_CONFIG_SQ := $(call shell-sq,$(PYTHON_CONFIG))
+# Python 3.8 changed the output of `python-config --ldflags` to not include the +# '-lpythonX.Y' flag unless '--embed' is also passed. The feature check for +# libpython fails if that flag is not included in LDFLAGS +ifeq ($(shell $(PYTHON_CONFIG_SQ) --ldflags --embed 2>&1 1>/dev/null; echo $$?), 0) + PYTHON_CONFIG_LDFLAGS := --ldflags --embed +else + PYTHON_CONFIG_LDFLAGS := --ldflags +endif + ifdef PYTHON_CONFIG - PYTHON_EMBED_LDOPTS := $(shell $(PYTHON_CONFIG_SQ) --ldflags 2>/dev/null) + PYTHON_EMBED_LDOPTS := $(shell $(PYTHON_CONFIG_SQ) $(PYTHON_CONFIG_LDFLAGS) 2>/dev/null) PYTHON_EMBED_LDFLAGS := $(call strip-libs,$(PYTHON_EMBED_LDOPTS)) PYTHON_EMBED_LIBADD := $(call grep-libs,$(PYTHON_EMBED_LDOPTS)) -lutil PYTHON_EMBED_CCOPTS := $(shell $(PYTHON_CONFIG_SQ) --includes 2>/dev/null)
From: Michael Mueller mimu@linux.ibm.com
commit 6c7c851f1b666a8a455678a0b480b9162de86052 upstream.
Show the full diag statistic table and not just parts of it.
The issue surfaced in a KVM guest with a number of vcpus defined smaller than NR_DIAG_STAT.
Fixes: 1ec2772e0c3c ("s390/diag: add a statistic for diagnose calls") Cc: stable@vger.kernel.org Signed-off-by: Michael Mueller mimu@linux.ibm.com Reviewed-by: Heiko Carstens heiko.carstens@de.ibm.com Signed-off-by: Vasily Gorbik gor@linux.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/s390/kernel/diag.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/s390/kernel/diag.c +++ b/arch/s390/kernel/diag.c @@ -84,7 +84,7 @@ static int show_diag_stat(struct seq_fil
static void *show_diag_stat_start(struct seq_file *m, loff_t *pos) { - return *pos <= nr_cpu_ids ? (void *)((unsigned long) *pos + 1) : NULL; + return *pos <= NR_DIAG_STAT ? (void *)((unsigned long) *pos + 1) : NULL; }
static void *show_diag_stat_next(struct seq_file *m, void *v, loff_t *pos)
From: Hans de Goede hdegoede@redhat.com
commit ebc68cedec4aead47d8d11623d013cca9bf8e825 upstream.
The Acer Aspire 5738z has a button to disable (and re-enable) the touchpad next to the touchpad.
When this button is pressed a LED underneath indicates that the touchpad is disabled (and an event is sent to userspace and GNOME shows its touchpad enable/disable OSD thingie).
So far so good, but after re-enabling the touchpad it no longer works.
The laptop does not have an external ps2 port, so mux mode is not needed. Disabling mux mode fixes the touchpad no longer working after toggling it off and back on again, so let's add this laptop model to the nomux list.
Signed-off-by: Hans de Goede hdegoede@redhat.com Link: https://lore.kernel.org/r/20200331123947.318908-1-hdegoede@redhat.com Cc: stable@vger.kernel.org Signed-off-by: Dmitry Torokhov dmitry.torokhov@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/input/serio/i8042-x86ia64io.h | 11 +++++++++++ 1 file changed, 11 insertions(+)
--- a/drivers/input/serio/i8042-x86ia64io.h +++ b/drivers/input/serio/i8042-x86ia64io.h @@ -530,6 +530,17 @@ static const struct dmi_system_id __init DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo LaVie Z"), }, }, + { + /* + * Acer Aspire 5738z + * Touchpad stops working in mux mode when dis- + re-enabled + * with the touchpad enable/disable toggle hotkey + */ + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Acer"), + DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5738"), + }, + }, { } };
From: Masami Hiramatsu mhiramat@kernel.org
commit 6a13a0d7b4d1171ef9b80ad69abc37e1daa941b3 upstream.
Show the maxactive parameter on kprobe_events. This allows the user to save the current configuration and restore it without losing the maxactive parameter.
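For example (hypothetical group, event and symbol names), a kretprobe created as 'r16:mygroup/myretprobe vfs_read' used to read back from kprobe_events as 'r:mygroup/myretprobe vfs_read', silently dropping the maxactive value; with this change it reads back as 'r16:mygroup/myretprobe vfs_read' and can be re-created verbatim.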
Link: http://lkml.kernel.org/r/4762764a-6df7-bc93-ed60-e336146dce1f@gmail.com Link: http://lkml.kernel.org/r/158503528846.22706.5549974121212526020.stgit@devnot...
Cc: stable@vger.kernel.org Fixes: 696ced4fb1d76 ("tracing/kprobes: expose maxactive for kretprobe in kprobe_events") Reported-by: Taeung Song treeze.taeung@gmail.com Signed-off-by: Masami Hiramatsu mhiramat@kernel.org Signed-off-by: Steven Rostedt (VMware) rostedt@goodmis.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/trace/trace_kprobe.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/kernel/trace/trace_kprobe.c +++ b/kernel/trace/trace_kprobe.c @@ -918,6 +918,8 @@ static int trace_kprobe_show(struct seq_ int i;
seq_putc(m, trace_kprobe_is_return(tk) ? 'r' : 'p'); + if (trace_kprobe_is_return(tk) && tk->rp.maxactive) + seq_printf(m, "%d", tk->rp.maxactive); seq_printf(m, ":%s/%s", trace_probe_group_name(&tk->tp), trace_probe_name(&tk->tp));
From: Paul Cercueil paul@crapouillou.net
commit c067b46d731a764fc46ecc466c2967088c97089e upstream.
Exit jz4770_cgu_init() if the 'cgu' pointer we get is NULL, since the pointer is passed as argument to functions later on.
Fixes: 7a01c19007ad ("clk: Add Ingenic jz4770 CGU driver") Cc: stable@vger.kernel.org Signed-off-by: Paul Cercueil paul@crapouillou.net Reported-by: kbuild test robot lkp@intel.com Reported-by: Dan Carpenter dan.carpenter@oracle.com Link: https://lkml.kernel.org/r/20200213161952.37460-1-paul@crapouillou.net Signed-off-by: Stephen Boyd sboyd@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/clk/ingenic/jz4770-cgu.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/clk/ingenic/jz4770-cgu.c +++ b/drivers/clk/ingenic/jz4770-cgu.c @@ -432,8 +432,10 @@ static void __init jz4770_cgu_init(struc
cgu = ingenic_cgu_new(jz4770_cgu_clocks, ARRAY_SIZE(jz4770_cgu_clocks), np); - if (!cgu) + if (!cgu) { pr_err("%s: failed to initialise CGU\n", __func__); + return; + }
retval = ingenic_cgu_register_clocks(cgu); if (retval)
From: Paul Cercueil paul@crapouillou.net
commit edcc42945dee85e9dec3737f3dbf59d917ae5418 upstream.
When requesting a rate higher than the parent's rate, ingenic_tcu_round_rate() would return -EINVAL instead of simply returning the parent's rate like it should.
Fixes: 4f89e4b8f121 ("clk: ingenic: Add driver for the TCU clocks") Cc: stable@vger.kernel.org Signed-off-by: Paul Cercueil paul@crapouillou.net Link: https://lkml.kernel.org/r/20200213161952.37460-2-paul@crapouillou.net Signed-off-by: Stephen Boyd sboyd@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/clk/ingenic/tcu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/clk/ingenic/tcu.c +++ b/drivers/clk/ingenic/tcu.c @@ -189,7 +189,7 @@ static long ingenic_tcu_round_rate(struc u8 prescale;
if (req_rate > rate) - return -EINVAL; + return rate;
prescale = ingenic_tcu_get_prescale(rate, req_rate);
From: Eric Biggers ebiggers@google.com
commit d7d27cfc5cf0766a26a8f56868c5ad5434735126 upstream.
Patch series "module autoloading fixes and cleanups", v5.
This series fixes a bug where request_module() was reporting success to kernel code when module autoloading had been completely disabled via 'echo > /proc/sys/kernel/modprobe'.
It also addresses the issues raised on the original thread (https://lkml.kernel.org/lkml/20200310223731.126894-1-ebiggers@kernel.org/T/#...) by documenting the modprobe sysctl, adding a self-test for the empty path case, and downgrading a user-reachable WARN_ONCE().
This patch (of 4):
It's long been possible to disable kernel module autoloading completely (while still allowing manual module insertion) by setting /proc/sys/kernel/modprobe to the empty string.
This can be preferable to setting it to a nonexistent file since it avoids the overhead of an attempted execve(), avoids potential deadlocks, and avoids the call to security_kernel_module_request() and thus on SELinux-based systems eliminates the need to write SELinux rules to dontaudit module_request.
However, when module autoloading is disabled in this way, request_module() returns 0. This is broken because callers expect 0 to mean that the module was successfully loaded.
Apparently this was never noticed because this method of disabling module autoloading isn't used much, and also most callers don't use the return value of request_module() since it's always necessary to check whether the module registered its functionality or not anyway.
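A rough sketch of that common calling pattern (hypothetical helper and type names, not code from this series):

	static struct my_ops *find_ops(const char *name)
	{
		struct my_ops *ops = lookup_registered_ops(name);

		if (!ops) {
			/* ask modprobe to load the provider module */
			if (request_module("my-proto-%s", name) == 0)
				ops = lookup_registered_ops(name);  /* re-check registration */
		}
		return ops;  /* caller must handle NULL either way */
	}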
But improperly returning 0 can indeed confuse a few callers, for example get_fs_type() in fs/filesystems.c where it causes a WARNING to be hit:
if (!fs && (request_module("fs-%.*s", len, name) == 0)) { fs = __get_fs_type(name, len); WARN_ONCE(!fs, "request_module fs-%.*s succeeded, but still no fs?\n", len, name); }
This is easily reproduced with:
echo > /proc/sys/kernel/modprobe mount -t NONEXISTENT none /
It causes:
request_module fs-NONEXISTENT succeeded, but still no fs? WARNING: CPU: 1 PID: 1106 at fs/filesystems.c:275 get_fs_type+0xd6/0xf0 [...]
This should actually use pr_warn_once() rather than WARN_ONCE(), since it's also user-reachable if userspace immediately unloads the module. Regardless, request_module() should correctly return an error when it fails. So let's make it return -ENOENT, which matches the error when the modprobe binary doesn't exist.
I've also sent patches to document and test this case.
Signed-off-by: Eric Biggers ebiggers@google.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Reviewed-by: Kees Cook keescook@chromium.org Reviewed-by: Jessica Yu jeyu@kernel.org Acked-by: Luis Chamberlain mcgrof@kernel.org Cc: Alexei Starovoitov ast@kernel.org Cc: Greg Kroah-Hartman gregkh@linuxfoundation.org Cc: Jeff Vander Stoep jeffv@google.com Cc: Ben Hutchings benh@debian.org Cc: Josh Triplett josh@joshtriplett.org Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/20200310223731.126894-1-ebiggers@kernel.org Link: http://lkml.kernel.org/r/20200312202552.241885-1-ebiggers@kernel.org Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/kmod.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/kernel/kmod.c +++ b/kernel/kmod.c @@ -120,7 +120,7 @@ out: * invoke it. * * If module auto-loading support is disabled then this function - * becomes a no-operation. + * simply returns -ENOENT. */ int __request_module(bool wait, const char *fmt, ...) { @@ -137,7 +137,7 @@ int __request_module(bool wait, const ch WARN_ON_ONCE(wait && current_is_async());
if (!modprobe_path[0]) - return 0; + return -ENOENT;
va_start(args, fmt); ret = vsnprintf(module_name, MODULE_NAME_LEN, fmt, args);
From: Oliver O'Halloran oohall@gmail.com
commit d0a72efac89d1c35ac55197895201b7b94c5e6ef upstream.
The cpufreq driver has a use-after-free that we can hit if:
a) There's an OCC message pending when the notifier is registered, and b) The cpufreq driver fails to register with the core.
When a) occurs the notifier schedules a workqueue item to handle the message. The backing work_struct is located on chips[].throttle, and when b) happens we clean up by freeing the array. Once the workqueue gets to the (now freed) queued item, the kernel crashes.
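The general shape of the fix (sketch with simplified, hypothetical names; the real hunk is below) is to cancel any queued work items before freeing the structure that embeds them:

	struct chip_info {
		struct work_struct throttle;
		/* ... */
	};

	static void free_chips(struct chip_info *chips, int nr)
	{
		int i;

		/* wait for any queued/running work before the memory goes away */
		for (i = 0; i < nr; i++)
			cancel_work_sync(&chips[i].throttle);
		kfree(chips);
	}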
Fixes: c5e29ea7ac14 ("cpufreq: powernv: Fix bugs in powernv_cpufreq_{init/exit}") Cc: stable@vger.kernel.org # v4.6+ Signed-off-by: Oliver O'Halloran oohall@gmail.com Reviewed-by: Gautham R. Shenoy ego@linux.vnet.ibm.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200206062622.28235-1-oohall@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/cpufreq/powernv-cpufreq.c | 6 ++++++ 1 file changed, 6 insertions(+)
--- a/drivers/cpufreq/powernv-cpufreq.c +++ b/drivers/cpufreq/powernv-cpufreq.c @@ -1080,6 +1080,12 @@ free_and_return:
static inline void clean_chip_info(void) { + int i; + + /* flush any pending work items */ + if (chips) + for (i = 0; i < nr_chips; i++) + cancel_work_sync(&chips[i].throttle); kfree(chips); }
From: Simon Gander simon@tuxera.com
commit 25efb2ffdf991177e740b2f63e92b4ec7d310a92 upstream.
When removing files containing extended attributes, the hfsplus driver may remove the wrong entries from the attributes b-tree, causing major filesystem damage and in some cases even kernel crashes.
To remove a file, all its extended attributes have to be removed as well. The driver does this by looking up all keys in the attributes b-tree with the cnid of the file. Each of these entries then gets deleted using the key used for searching, which doesn't contain the attribute's name when it should. Since the key doesn't contain the name, the deletion routine will not find the correct entry and instead remove the one in front of it. If parent nodes have to be modified, these become corrupt as well. This causes invalid links and unsorted entries that not even macOS's fsck_hfs is able to fix.
To fix this, modify the search key before an entry is deleted from the attributes b-tree by copying the found entry's key into the search key, therefore ensuring that the correct entry gets removed from the tree.
Signed-off-by: Simon Gander simon@tuxera.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Reviewed-by: Anton Altaparmakov anton@tuxera.com Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/20200327155541.1521-1-simon@tuxera.com Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/hfsplus/attributes.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/fs/hfsplus/attributes.c +++ b/fs/hfsplus/attributes.c @@ -292,6 +292,10 @@ static int __hfsplus_delete_attr(struct return -ENOENT; }
+ /* Avoid btree corruption */ + hfs_bnode_read(fd->bnode, fd->search_key, + fd->keyoffset, fd->keylength); + err = hfs_brec_remove(fd); if (err) return err;
From: Kai-Heng Feng kai.heng.feng@canonical.com
commit 8305f72f952cff21ce8109dc1ea4b321c8efc5af upstream.
During system resume from suspend, this can be observed on ASM1062 PMP controller:
ata10.01: SATA link down (SStatus 0 SControl 330) ata10.02: hard resetting link ata10.02: SATA link down (SStatus 0 SControl 330) ata10.00: configured for UDMA/133 Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: sata_pmp_eh_recover+0xa2b/0xa40
CPU: 2 PID: 230 Comm: scsi_eh_9 Tainted: P OE #49-Ubuntu Hardware name: System manufacturer System Product 1001 12/10/2017 Call Trace: dump_stack+0x63/0x8b panic+0xe4/0x244 ? sata_pmp_eh_recover+0xa2b/0xa40 __stack_chk_fail+0x19/0x20 sata_pmp_eh_recover+0xa2b/0xa40 ? ahci_do_softreset+0x260/0x260 [libahci] ? ahci_do_hardreset+0x140/0x140 [libahci] ? ata_phys_link_offline+0x60/0x60 ? ahci_stop_engine+0xc0/0xc0 [libahci] sata_pmp_error_handler+0x22/0x30 ahci_error_handler+0x45/0x80 [libahci] ata_scsi_port_error_handler+0x29b/0x770 ? ata_scsi_cmd_error_handler+0x101/0x140 ata_scsi_error+0x95/0xd0 ? scsi_try_target_reset+0x90/0x90 scsi_error_handler+0xd0/0x5b0 kthread+0x121/0x140 ? scsi_eh_get_sense+0x200/0x200 ? kthread_create_worker_on_cpu+0x70/0x70 ret_from_fork+0x22/0x40 Kernel Offset: 0xcc00000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
Since sata_pmp_eh_recover_pmp() doesn't set rc when ATA_DFLAG_DETACH is set, sata_pmp_eh_recover() continues to run. During retry it triggers the stack protector.
Set correct rc in sata_pmp_eh_recover_pmp() to let sata_pmp_eh_recover() jump to pmp_fail directly.
BugLink: https://bugs.launchpad.net/bugs/1821434 Cc: stable@vger.kernel.org Signed-off-by: Kai-Heng Feng kai.heng.feng@canonical.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/ata/libata-pmp.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/ata/libata-pmp.c +++ b/drivers/ata/libata-pmp.c @@ -763,6 +763,7 @@ static int sata_pmp_eh_recover_pmp(struc
if (dev->flags & ATA_DFLAG_DETACH) { detach = 1; + rc = -ENODEV; goto fail; }
From: Wen Yang wenyang@linux.alibaba.com
commit 32830a0534700f86366f371b150b17f0f0d140d7 upstream.
The wait_event() function is used to detect command completion. When send_guid_cmd() returns an error, smi_send() has not been called to send data. Therefore, wait_event() should not be used on the error path, otherwise it will cause the following warning:
[ 1361.588808] systemd-udevd D 0 1501 1436 0x00000004 [ 1361.588813] ffff883f4b1298c0 0000000000000000 ffff883f4b188000 ffff887f7e3d9f40 [ 1361.677952] ffff887f64bd4280 ffffc90037297a68 ffffffff8173ca3b ffffc90000000010 [ 1361.767077] 00ffc90037297ad0 ffff887f7e3d9f40 0000000000000286 ffff883f4b188000 [ 1361.856199] Call Trace: [ 1361.885578] [<ffffffff8173ca3b>] ? __schedule+0x23b/0x780 [ 1361.951406] [<ffffffff8173cfb6>] schedule+0x36/0x80 [ 1362.010979] [<ffffffffa071f178>] get_guid+0x118/0x150 [ipmi_msghandler] [ 1362.091281] [<ffffffff810d5350>] ? prepare_to_wait_event+0x100/0x100 [ 1362.168533] [<ffffffffa071f755>] ipmi_register_smi+0x405/0x940 [ipmi_msghandler] [ 1362.258337] [<ffffffffa0230ae9>] try_smi_init+0x529/0x950 [ipmi_si] [ 1362.334521] [<ffffffffa022f350>] ? std_irq_setup+0xd0/0xd0 [ipmi_si] [ 1362.411701] [<ffffffffa0232bd2>] init_ipmi_si+0x492/0x9e0 [ipmi_si] [ 1362.487917] [<ffffffffa0232740>] ? ipmi_pci_probe+0x280/0x280 [ipmi_si] [ 1362.568219] [<ffffffff810021a0>] do_one_initcall+0x50/0x180 [ 1362.636109] [<ffffffff812231b2>] ? kmem_cache_alloc_trace+0x142/0x190 [ 1362.714330] [<ffffffff811b2ae1>] do_init_module+0x5f/0x200 [ 1362.781208] [<ffffffff81123ca8>] load_module+0x1898/0x1de0 [ 1362.848069] [<ffffffff811202e0>] ? __symbol_put+0x60/0x60 [ 1362.913886] [<ffffffff8130696b>] ? security_kernel_post_read_file+0x6b/0x80 [ 1362.998514] [<ffffffff81124465>] SYSC_finit_module+0xe5/0x120 [ 1363.068463] [<ffffffff81124465>] ? SYSC_finit_module+0xe5/0x120 [ 1363.140513] [<ffffffff811244be>] SyS_finit_module+0xe/0x10 [ 1363.207364] [<ffffffff81003c04>] do_syscall_64+0x74/0x180
Fixes: 50c812b2b951 ("[PATCH] ipmi: add full sysfs support") Signed-off-by: Wen Yang wenyang@linux.alibaba.com Cc: Corey Minyard minyard@acm.org Cc: Arnd Bergmann arnd@arndb.de Cc: Greg Kroah-Hartman gregkh@linuxfoundation.org Cc: openipmi-developer@lists.sourceforge.net Cc: linux-kernel@vger.kernel.org Cc: stable@vger.kernel.org # 2.6.17- Message-Id: 20200403090408.58745-1-wenyang@linux.alibaba.com Signed-off-by: Corey Minyard cminyard@mvista.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/char/ipmi/ipmi_msghandler.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/char/ipmi/ipmi_msghandler.c +++ b/drivers/char/ipmi/ipmi_msghandler.c @@ -3188,8 +3188,8 @@ static void __get_guid(struct ipmi_smi * if (rv) /* Send failed, no GUID available. */ bmc->dyn_guid_set = 0; - - wait_event(intf->waitq, bmc->dyn_guid_set != 2); + else + wait_event(intf->waitq, bmc->dyn_guid_set != 2);
/* dyn_guid_set makes the guid data available. */ smp_rmb();
From: Juergen Gross jgross@suse.com
commit 3a169c0be75b59dd85d159493634870cdec6d3c4 upstream.
Commit 1d5c76e664333 ("xen-blkfront: switch kcalloc to kvcalloc for large array allocation") didn't fix the issue it was meant to, as the flags for allocating the memory are GFP_NOIO, which will lead to the memory allocation falling back to kmalloc().
So instead of GFP_NOIO use GFP_KERNEL and do all the memory allocation in blkfront_setup_indirect() in a memalloc_noio_{save,restore} section.
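Condensed sketch of that approach (not the full patch):

	static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo)
	{
		unsigned int memflags;

		memflags = memalloc_noio_save();  /* allocations below behave as if GFP_NOIO */

		/* ... kvcalloc(..., GFP_KERNEL) and alloc_page(GFP_KERNEL) calls ... */

		memalloc_noio_restore(memflags);  /* also done on the error path */
		return 0;
	}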
Fixes: 1d5c76e664333 ("xen-blkfront: switch kcalloc to kvcalloc for large array allocation") Cc: stable@vger.kernel.org Signed-off-by: Juergen Gross jgross@suse.com Reviewed-by: Boris Ostrovsky boris.ostrovsky@oracle.com Acked-by: Roger Pau Monné roger.pau@citrix.com Link: https://lore.kernel.org/r/20200403090034.8753-1-jgross@suse.com Signed-off-by: Juergen Gross jgross@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/block/xen-blkfront.c | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-)
--- a/drivers/block/xen-blkfront.c +++ b/drivers/block/xen-blkfront.c @@ -47,6 +47,7 @@ #include <linux/bitmap.h> #include <linux/list.h> #include <linux/workqueue.h> +#include <linux/sched/mm.h>
#include <xen/xen.h> #include <xen/xenbus.h> @@ -2188,10 +2189,12 @@ static void blkfront_setup_discard(struc
static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo) { - unsigned int psegs, grants; + unsigned int psegs, grants, memflags; int err, i; struct blkfront_info *info = rinfo->dev_info;
+ memflags = memalloc_noio_save(); + if (info->max_indirect_segments == 0) { if (!HAS_EXTRA_REQ) grants = BLKIF_MAX_SEGMENTS_PER_REQUEST; @@ -2223,7 +2226,7 @@ static int blkfront_setup_indirect(struc
BUG_ON(!list_empty(&rinfo->indirect_pages)); for (i = 0; i < num; i++) { - struct page *indirect_page = alloc_page(GFP_NOIO); + struct page *indirect_page = alloc_page(GFP_KERNEL); if (!indirect_page) goto out_of_memory; list_add(&indirect_page->lru, &rinfo->indirect_pages); @@ -2234,15 +2237,15 @@ static int blkfront_setup_indirect(struc rinfo->shadow[i].grants_used = kvcalloc(grants, sizeof(rinfo->shadow[i].grants_used[0]), - GFP_NOIO); + GFP_KERNEL); rinfo->shadow[i].sg = kvcalloc(psegs, sizeof(rinfo->shadow[i].sg[0]), - GFP_NOIO); + GFP_KERNEL); if (info->max_indirect_segments) rinfo->shadow[i].indirect_grants = kvcalloc(INDIRECT_GREFS(grants), sizeof(rinfo->shadow[i].indirect_grants[0]), - GFP_NOIO); + GFP_KERNEL); if ((rinfo->shadow[i].grants_used == NULL) || (rinfo->shadow[i].sg == NULL) || (info->max_indirect_segments && @@ -2251,6 +2254,7 @@ static int blkfront_setup_indirect(struc sg_init_table(rinfo->shadow[i].sg, psegs); }
+ memalloc_noio_restore(memflags);
return 0;
@@ -2270,6 +2274,9 @@ out_of_memory: __free_page(indirect_page); } } + + memalloc_noio_restore(memflags); + return -ENOMEM; }
From: Clement Courbet courbet@google.com
commit c17eb4dca5a353a9dbbb8ad6934fe57af7165e91 upstream.
Declaring setjmp()/longjmp() as taking longs makes the signature non-standard, and makes clang complain. In the past, this has been worked around by adding -ffreestanding to the compile flags.
The implementation looks like it only ever propagates the value (in longjmp) or sets it to 1 (in setjmp), and we only call longjmp with integer parameters.
This allows removing -ffreestanding from the compilation flags.
Fixes: c9029ef9c957 ("powerpc: Avoid clang warnings around setjmp and longjmp") Cc: stable@vger.kernel.org # v4.14+ Signed-off-by: Clement Courbet courbet@google.com Reviewed-by: Nathan Chancellor natechancellor@gmail.com Tested-by: Nathan Chancellor natechancellor@gmail.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200330080400.124803-1-courbet@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/include/asm/setjmp.h | 6 ++++-- arch/powerpc/kexec/Makefile | 3 --- arch/powerpc/xmon/Makefile | 3 --- 3 files changed, 4 insertions(+), 8 deletions(-)
--- a/arch/powerpc/include/asm/setjmp.h +++ b/arch/powerpc/include/asm/setjmp.h @@ -7,7 +7,9 @@
#define JMP_BUF_LEN 23
-extern long setjmp(long *) __attribute__((returns_twice)); -extern void longjmp(long *, long) __attribute__((noreturn)); +typedef long jmp_buf[JMP_BUF_LEN]; + +extern int setjmp(jmp_buf env) __attribute__((returns_twice)); +extern void longjmp(jmp_buf env, int val) __attribute__((noreturn));
#endif /* _ASM_POWERPC_SETJMP_H */ --- a/arch/powerpc/kexec/Makefile +++ b/arch/powerpc/kexec/Makefile @@ -3,9 +3,6 @@ # Makefile for the linux kernel. #
-# Avoid clang warnings around longjmp/setjmp declarations -CFLAGS_crash.o += -ffreestanding - obj-y += core.o crash.o core_$(BITS).o
obj-$(CONFIG_PPC32) += relocate_32.o --- a/arch/powerpc/xmon/Makefile +++ b/arch/powerpc/xmon/Makefile @@ -1,9 +1,6 @@ # SPDX-License-Identifier: GPL-2.0 # Makefile for xmon
-# Avoid clang warnings around longjmp/setjmp declarations -subdir-ccflags-y := -ffreestanding - GCOV_PROFILE := n KCOV_INSTRUMENT := n UBSAN_SANITIZE := n
From: Michael Ellerman mpe@ellerman.id.au
commit c7def7fbdeaa25feaa19caf4a27c5d10bd8789e4 upstream.
In restore_tm_sigcontexts() we take the trap value directly from the user sigcontext with no checking:
err |= __get_user(regs->trap, &sc->gp_regs[PT_TRAP]);
This means we can be in the kernel with an arbitrary regs->trap value.
Although that's not immediately problematic, there is a risk we could trigger one of the uses of CHECK_FULL_REGS():
#define CHECK_FULL_REGS(regs) BUG_ON(regs->trap & 1)
It can also cause us to unnecessarily save non-volatile GPRs again in save_nvgprs(), which shouldn't be problematic but is still wrong.
It's also possible it could trick the syscall restart machinery, which relies on regs->trap not being == 0xc00 (see 9a81c16b5275 ("powerpc: fix double syscall restarts")), though I haven't been able to make that happen.
Finally, it doesn't match the behaviour of the non-TM case in restore_sigcontext(), which zeroes regs->trap.
So change restore_tm_sigcontexts() to zero regs->trap.
This was discovered while testing Nick's upcoming rewrite of the syscall entry path. In that series the call to save_nvgprs() prior to signal handling (do_notify_resume()) is removed, which leaves the low-bit of regs->trap uncleared which can then trigger the FULL_REGS() WARNs in setup_tm_sigcontexts().
Fixes: 2b0a576d15e0 ("powerpc: Add new transactional memory state to the signal context") Cc: stable@vger.kernel.org # v3.9+ Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200401023836.3286664-1-mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/kernel/signal_64.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/arch/powerpc/kernel/signal_64.c +++ b/arch/powerpc/kernel/signal_64.c @@ -473,8 +473,10 @@ static long restore_tm_sigcontexts(struc err |= __get_user(tsk->thread.ckpt_regs.ccr, &sc->gp_regs[PT_CCR]);
+ /* Don't allow userspace to set the trap value */ + regs->trap = 0; + /* These regs are not checkpointed; they can go in 'regs'. */ - err |= __get_user(regs->trap, &sc->gp_regs[PT_TRAP]); err |= __get_user(regs->dar, &sc->gp_regs[PT_DAR]); err |= __get_user(regs->dsisr, &sc->gp_regs[PT_DSISR]); err |= __get_user(regs->result, &sc->gp_regs[PT_RESULT]);
From: Laurentiu Tudor laurentiu.tudor@nxp.com
commit aa4113340ae6c2811e046f08c2bc21011d20a072 upstream.
In the current implementation, the call to loadcam_multi() is wrapped between switch_to_as1() and restore_to_as0() calls so, when it tries to create its own temporary AS=1 TLB1 entry, it ends up duplicating the existing one created by switch_to_as1(). Add a check to skip creating the temporary entry if already running in AS=1.
Fixes: d9e1831a4202 ("powerpc/85xx: Load all early TLB entries at once") Cc: stable@vger.kernel.org # v4.4+ Signed-off-by: Laurentiu Tudor laurentiu.tudor@nxp.com Acked-by: Scott Wood oss@buserror.net Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200123111914.2565-1-laurentiu.tudor@nxp.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/mm/nohash/tlb_low.S | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-)
--- a/arch/powerpc/mm/nohash/tlb_low.S +++ b/arch/powerpc/mm/nohash/tlb_low.S @@ -397,7 +397,7 @@ _GLOBAL(set_context) * extern void loadcam_entry(unsigned int index) * * Load TLBCAM[index] entry in to the L2 CAM MMU - * Must preserve r7, r8, r9, and r10 + * Must preserve r7, r8, r9, r10 and r11 */ _GLOBAL(loadcam_entry) mflr r5 @@ -433,6 +433,10 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_BIG_PH */ _GLOBAL(loadcam_multi) mflr r8 + /* Don't switch to AS=1 if already there */ + mfmsr r11 + andi. r11,r11,MSR_IS + bne 10f
/* * Set up temporary TLB entry that is the same as what we're @@ -458,6 +462,7 @@ _GLOBAL(loadcam_multi) mtmsr r6 isync
+10: mr r9,r3 add r10,r3,r4 2: bl loadcam_entry @@ -466,6 +471,10 @@ _GLOBAL(loadcam_multi) mr r3,r9 blt 2b
+ /* Don't return to AS=0 if we were in AS=1 at function start */ + andi. r11,r11,MSR_IS + bne 3f + /* Return to AS=0 and clear the temporary entry */ mfmsr r6 rlwinm. r6,r6,0,~(MSR_IS|MSR_DS) @@ -481,6 +490,7 @@ _GLOBAL(loadcam_multi) tlbwe isync
+3: mtlr r8 blr #endif
From: Aneesh Kumar K.V aneesh.kumar@linux.ibm.com
commit 36b78402d97a3b9aeab136feb9b00d8647ec2c20 upstream.
H_PAGE_THP_HUGE is used to differentiate between THP hugepage and hugetlb hugepage entries. The difference is in how we handle hash faults on these addresses. THP addresses enable MPSS in segments. We want to manage devmap hugepage entries similarly to THP pte entries. Hence use H_PAGE_THP_HUGE for devmap huge PTE entries.
With the current code, while handling a hash PTE fault, we set is_thp = true when finding devmap huge PTE entries.
The current code also does the below sequence when setting up huge devmap entries.
entry = pmd_mkhuge(pfn_t_pmd(pfn, prot)); if (pfn_t_devmap(pfn)) entry = pmd_mkdevmap(entry);
In that case we would find both H_PAGE_THP_HUGE and _PAGE_DEVMAP set for huge devmap PTE entries. This results in a false positive error like the one below.
kernel BUG at /home/kvaneesh/src/linux/mm/memory.c:4321! Oops: Exception in kernel mode, sig: 5 [#1] LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries Modules linked in: CPU: 56 PID: 67996 Comm: t_mmap_dio Not tainted 5.6.0-rc4-59640-g371c804dedbc #128 .... NIP [c00000000044c9e4] __follow_pte_pmd+0x264/0x900 LR [c0000000005d45f8] dax_writeback_one+0x1a8/0x740 Call Trace: str_spec.74809+0x22ffb4/0x2d116c (unreliable) dax_writeback_one+0x1a8/0x740 dax_writeback_mapping_range+0x26c/0x700 ext4_dax_writepages+0x150/0x5a0 do_writepages+0x68/0x180 __filemap_fdatawrite_range+0x138/0x180 file_write_and_wait_range+0xa4/0x110 ext4_sync_file+0x370/0x6e0 vfs_fsync_range+0x70/0xf0 sys_msync+0x220/0x2e0 system_call+0x5c/0x68
This is because our pmd_trans_huge check doesn't exclude _PAGE_DEVMAP.
To make this all consistent, update pmd_mkdevmap() to set H_PAGE_THP_HUGE, so that the pmd_trans_huge() check now correctly excludes _PAGE_DEVMAP.
Fixes: ebd31197931d ("powerpc/mm: Add devmap support for ppc64") Cc: stable@vger.kernel.org # v4.13+ Signed-off-by: Aneesh Kumar K.V aneesh.kumar@linux.ibm.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200313094842.351830-1-aneesh.kumar@linux.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/include/asm/book3s/64/hash-4k.h | 6 ++++++ arch/powerpc/include/asm/book3s/64/hash-64k.h | 8 +++++++- arch/powerpc/include/asm/book3s/64/pgtable.h | 4 +++- arch/powerpc/include/asm/book3s/64/radix.h | 5 +++++ 4 files changed, 21 insertions(+), 2 deletions(-)
--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h +++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h @@ -156,6 +156,12 @@ extern pmd_t hash__pmdp_huge_get_and_cle extern int hash__has_transparent_hugepage(void); #endif
+static inline pmd_t hash__pmd_mkdevmap(pmd_t pmd) +{ + BUG(); + return pmd; +} + #endif /* !__ASSEMBLY__ */
#endif /* _ASM_POWERPC_BOOK3S_64_HASH_4K_H */ --- a/arch/powerpc/include/asm/book3s/64/hash-64k.h +++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h @@ -246,7 +246,7 @@ static inline void mark_hpte_slot_valid( */ static inline int hash__pmd_trans_huge(pmd_t pmd) { - return !!((pmd_val(pmd) & (_PAGE_PTE | H_PAGE_THP_HUGE)) == + return !!((pmd_val(pmd) & (_PAGE_PTE | H_PAGE_THP_HUGE | _PAGE_DEVMAP)) == (_PAGE_PTE | H_PAGE_THP_HUGE)); }
@@ -272,6 +272,12 @@ extern pmd_t hash__pmdp_huge_get_and_cle unsigned long addr, pmd_t *pmdp); extern int hash__has_transparent_hugepage(void); #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ + +static inline pmd_t hash__pmd_mkdevmap(pmd_t pmd) +{ + return __pmd(pmd_val(pmd) | (_PAGE_PTE | H_PAGE_THP_HUGE | _PAGE_DEVMAP)); +} + #endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_BOOK3S_64_HASH_64K_H */ --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -1303,7 +1303,9 @@ extern void serialize_against_pte_lookup
static inline pmd_t pmd_mkdevmap(pmd_t pmd) { - return __pmd(pmd_val(pmd) | (_PAGE_PTE | _PAGE_DEVMAP)); + if (radix_enabled()) + return radix__pmd_mkdevmap(pmd); + return hash__pmd_mkdevmap(pmd); }
static inline int pmd_devmap(pmd_t pmd) --- a/arch/powerpc/include/asm/book3s/64/radix.h +++ b/arch/powerpc/include/asm/book3s/64/radix.h @@ -263,6 +263,11 @@ static inline int radix__has_transparent } #endif
+static inline pmd_t radix__pmd_mkdevmap(pmd_t pmd) +{ + return __pmd(pmd_val(pmd) | (_PAGE_PTE | _PAGE_DEVMAP)); +} + extern int __meminit radix__vmemmap_create_mapping(unsigned long start, unsigned long page_size, unsigned long phys);
From: Cédric Le Goater clg@kaod.org
commit b1a504a6500df50e83b701b7946b34fce27ad8a3 upstream.
When a CPU is brought up, an IPI number is allocated and recorded under the XIVE CPU structure. Invalid IPI numbers are tracked with interrupt number 0x0.
On the PowerNV platform, the interrupt number space starts at 0x10 and this works fine. However, on the sPAPR platform, it is possible to allocate the interrupt number 0x0, and this raises an issue when CPU 0 is unplugged. The XIVE spapr driver tracks allocated interrupt numbers in a bitmask, and that bitmask is not correctly updated when interrupt number 0x0 is freed. It stays allocated and can then never be reallocated.
Fix by using the XIVE_BAD_IRQ value instead of zero on both platforms.
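As a rough illustration of the pattern this change adopts (sketch only, with a made-up structure name; XIVE_BAD_IRQ itself is the real sentinel, moved into xive-internal.h below), the per-CPU IPI field is initialised to the sentinel and every "is it allocated?" test compares against it instead of against zero, since 0x0 is a perfectly valid interrupt number on sPAPR:

	#define XIVE_BAD_IRQ	0x7fffffff	/* never a valid HW interrupt number */

	struct xive_cpu_sketch {		/* illustrative, not the real struct xive_cpu */
		unsigned int hw_ipi;
	};

	static void cpu_prepare(struct xive_cpu_sketch *xc)
	{
		xc->hw_ipi = XIVE_BAD_IRQ;	/* 0 can no longer mean "unallocated" */
	}

	static void cpu_free_ipi(struct xive_cpu_sketch *xc)
	{
		if (xc->hw_ipi == XIVE_BAD_IRQ)	/* was: if (!xc->hw_ipi) */
			return;
		/* ... release xc->hw_ipi, which may legitimately be 0 ... */
		xc->hw_ipi = XIVE_BAD_IRQ;
	}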
Reported-by: David Gibson david@gibson.dropbear.id.au Fixes: eac1e731b59e ("powerpc/xive: guest exploitation of the XIVE interrupt controller") Cc: stable@vger.kernel.org # v4.14+ Signed-off-by: Cédric Le Goater clg@kaod.org Reviewed-by: David Gibson david@gibson.dropbear.id.au Tested-by: David Gibson david@gibson.dropbear.id.au Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200306150143.5551-2-clg@kaod.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/sysdev/xive/common.c | 12 +++--------- arch/powerpc/sysdev/xive/native.c | 4 ++-- arch/powerpc/sysdev/xive/spapr.c | 4 ++-- arch/powerpc/sysdev/xive/xive-internal.h | 7 +++++++ 4 files changed, 14 insertions(+), 13 deletions(-)
--- a/arch/powerpc/sysdev/xive/common.c +++ b/arch/powerpc/sysdev/xive/common.c @@ -68,13 +68,6 @@ static u32 xive_ipi_irq; /* Xive state for each CPU */ static DEFINE_PER_CPU(struct xive_cpu *, xive_cpu);
-/* - * A "disabled" interrupt should never fire, to catch problems - * we set its logical number to this - */ -#define XIVE_BAD_IRQ 0x7fffffff -#define XIVE_MAX_IRQ (XIVE_BAD_IRQ - 1) - /* An invalid CPU target */ #define XIVE_INVALID_TARGET (-1)
@@ -1150,7 +1143,7 @@ static int xive_setup_cpu_ipi(unsigned i xc = per_cpu(xive_cpu, cpu);
/* Check if we are already setup */ - if (xc->hw_ipi != 0) + if (xc->hw_ipi != XIVE_BAD_IRQ) return 0;
/* Grab an IPI from the backend, this will populate xc->hw_ipi */ @@ -1187,7 +1180,7 @@ static void xive_cleanup_cpu_ipi(unsigne /* Disable the IPI and free the IRQ data */
/* Already cleaned up ? */ - if (xc->hw_ipi == 0) + if (xc->hw_ipi == XIVE_BAD_IRQ) return;
/* Mask the IPI */ @@ -1343,6 +1336,7 @@ static int xive_prepare_cpu(unsigned int if (np) xc->chip_id = of_get_ibm_chip_id(np); of_node_put(np); + xc->hw_ipi = XIVE_BAD_IRQ;
per_cpu(xive_cpu, cpu) = xc; } --- a/arch/powerpc/sysdev/xive/native.c +++ b/arch/powerpc/sysdev/xive/native.c @@ -312,7 +312,7 @@ static void xive_native_put_ipi(unsigned s64 rc;
/* Free the IPI */ - if (!xc->hw_ipi) + if (xc->hw_ipi == XIVE_BAD_IRQ) return; for (;;) { rc = opal_xive_free_irq(xc->hw_ipi); @@ -320,7 +320,7 @@ static void xive_native_put_ipi(unsigned msleep(OPAL_BUSY_DELAY_MS); continue; } - xc->hw_ipi = 0; + xc->hw_ipi = XIVE_BAD_IRQ; break; } } --- a/arch/powerpc/sysdev/xive/spapr.c +++ b/arch/powerpc/sysdev/xive/spapr.c @@ -560,11 +560,11 @@ static int xive_spapr_get_ipi(unsigned i
static void xive_spapr_put_ipi(unsigned int cpu, struct xive_cpu *xc) { - if (!xc->hw_ipi) + if (xc->hw_ipi == XIVE_BAD_IRQ) return;
xive_irq_bitmap_free(xc->hw_ipi); - xc->hw_ipi = 0; + xc->hw_ipi = XIVE_BAD_IRQ; } #endif /* CONFIG_SMP */
--- a/arch/powerpc/sysdev/xive/xive-internal.h +++ b/arch/powerpc/sysdev/xive/xive-internal.h @@ -5,6 +5,13 @@ #ifndef __XIVE_INTERNAL_H #define __XIVE_INTERNAL_H
+/* + * A "disabled" interrupt should never fire, to catch problems + * we set its logical number to this + */ +#define XIVE_BAD_IRQ 0x7fffffff +#define XIVE_MAX_IRQ (XIVE_BAD_IRQ - 1) + /* Each CPU carry one of these with various per-CPU state */ struct xive_cpu { #ifdef CONFIG_SMP
From: Daniel Axtens dja@axtens.net
commit d4a8e98621543d5798421eed177978bf2b3cdd11 upstream.
Currently we set up the paca after parsing the device tree for CPU features. Prior to that, r13 contains random data, which means there is random data in r13 while we're running the generic dt parsing code.
This random data varies depending on whether we boot through a vmlinux or a zImage: for the vmlinux case it's usually around zero, but for zImages we see random values like 912a72603d420015.
This is poor practice, and can also lead to difficult-to-debug crashes. For example, when kcov is enabled, the kcov instrumentation attempts to read preempt_count out of the current task, which goes via the paca. This then crashes in the zImage case.
Similarly stack protector can cause crashes if r13 is bogus, by reading from the stack canary in the paca.
To resolve this:
- move the paca setup to before the CPU feature parsing.
- because we no longer have access to CPU feature flags in paca setup, change the HV feature test in the paca setup path to consider the actual value of the MSR rather than the CPU feature.
Translations get switched on once we leave early_setup, so I think we'd already catch any other cases where the paca or task aren't set up.
Boot tested on a P9 guest and host.
Fixes: fb0b0a73b223 ("powerpc: Enable kcov") Fixes: 06ec27aea9fc ("powerpc/64: add stack protector support") Cc: stable@vger.kernel.org # v4.20+ Reviewed-by: Andrew Donnellan ajd@linux.ibm.com Suggested-by: Michael Ellerman mpe@ellerman.id.au Signed-off-by: Daniel Axtens dja@axtens.net [mpe: Reword comments & change log a bit to mention stack protector] Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200320032116.1024773-1-mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/kernel/dt_cpu_ftrs.c | 1 - arch/powerpc/kernel/paca.c | 10 +++++++--- arch/powerpc/kernel/setup_64.c | 30 ++++++++++++++++++++++++------ 3 files changed, 31 insertions(+), 10 deletions(-)
--- a/arch/powerpc/kernel/dt_cpu_ftrs.c +++ b/arch/powerpc/kernel/dt_cpu_ftrs.c @@ -139,7 +139,6 @@ static void __init cpufeatures_setup_cpu /* Initialize the base environment -- clear FSCR/HFSCR. */ hv_mode = !!(mfmsr() & MSR_HV); if (hv_mode) { - /* CPU_FTR_HVMODE is used early in PACA setup */ cur_cpu_spec->cpu_features |= CPU_FTR_HVMODE; mtspr(SPRN_HFSCR, 0); } --- a/arch/powerpc/kernel/paca.c +++ b/arch/powerpc/kernel/paca.c @@ -214,11 +214,15 @@ void setup_paca(struct paca_struct *new_ /* On Book3E, initialize the TLB miss exception frames */ mtspr(SPRN_SPRG_TLB_EXFRAME, local_paca->extlb); #else - /* In HV mode, we setup both HPACA and PACA to avoid problems + /* + * In HV mode, we setup both HPACA and PACA to avoid problems * if we do a GET_PACA() before the feature fixups have been - * applied + * applied. + * + * Normally you should test against CPU_FTR_HVMODE, but CPU features + * are not yet set up when we first reach here. */ - if (early_cpu_has_feature(CPU_FTR_HVMODE)) + if (mfmsr() & MSR_HV) mtspr(SPRN_SPRG_HPACA, local_paca); #endif mtspr(SPRN_SPRG_PACA, local_paca); --- a/arch/powerpc/kernel/setup_64.c +++ b/arch/powerpc/kernel/setup_64.c @@ -285,18 +285,36 @@ void __init early_setup(unsigned long dt
/* -------- printk is _NOT_ safe to use here ! ------- */
- /* Try new device tree based feature discovery ... */ - if (!dt_cpu_ftrs_init(__va(dt_ptr))) - /* Otherwise use the old style CPU table */ - identify_cpu(0, mfspr(SPRN_PVR)); - - /* Assume we're on cpu 0 for now. Don't write to the paca yet! */ + /* + * Assume we're on cpu 0 for now. + * + * We need to load a PACA very early for a few reasons. + * + * The stack protector canary is stored in the paca, so as soon as we + * call any stack protected code we need r13 pointing somewhere valid. + * + * If we are using kcov it will call in_task() in its instrumentation, + * which relies on the current task from the PACA. + * + * dt_cpu_ftrs_init() calls into generic OF/fdt code, as well as + * printk(), which can trigger both stack protector and kcov. + * + * percpu variables and spin locks also use the paca. + * + * So set up a temporary paca. It will be replaced below once we know + * what CPU we are on. + */ initialise_paca(&boot_paca, 0); setup_paca(&boot_paca); fixup_boot_paca();
/* -------- printk is now safe to use ------- */
+ /* Try new device tree based feature discovery ... */ + if (!dt_cpu_ftrs_init(__va(dt_ptr))) + /* Otherwise use the old style CPU table */ + identify_cpu(0, mfspr(SPRN_PVR)); + /* Enable early debugging if any specified (see udbg.h) */ udbg_early_init();
From: Cédric Le Goater clg@kaod.org
commit 97ef275077932c65b1b8ec5022abd737a9fbf3e0 upstream.
The PowerNV platform has multiple IRQ chips and the xmon command dumping the state of the XIVE interrupt should only operate on the XIVE IRQ chip.
Fixes: 5896163f7f91 ("powerpc/xmon: Improve output of XIVE interrupts") Cc: stable@vger.kernel.org # v5.4+ Signed-off-by: Cédric Le Goater clg@kaod.org Reviewed-by: Greg Kurz groug@kaod.org Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200306150143.5551-3-clg@kaod.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/sysdev/xive/common.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/arch/powerpc/sysdev/xive/common.c +++ b/arch/powerpc/sysdev/xive/common.c @@ -258,11 +258,15 @@ notrace void xmon_xive_do_dump(int cpu)
int xmon_xive_get_irq_config(u32 hw_irq, struct irq_data *d) { + struct irq_chip *chip = irq_data_get_irq_chip(d); int rc; u32 target; u8 prio; u32 lirq;
+ if (!is_xive_irq(chip)) + return -EINVAL; + rc = xive_ops->get_irq_config(hw_irq, &target, &prio, &lirq); if (rc) { xmon_printf("IRQ 0x%08x : no config rc=%d\n", hw_irq, rc);
From: Christophe Leroy christophe.leroy@c-s.fr
commit 21f8b2fa3ca5b01f7a2b51b89ce97a3705a15aa0 upstream.
When a program check exception happens while MMU translation is disabled, the Oops below happens in kprobe_handler(), in the following code:
} else if (*addr != BREAKPOINT_INSTRUCTION) {
BUG: Unable to handle kernel data access on read at 0x0000e268 Faulting instruction address: 0xc000ec34 Oops: Kernel access of bad area, sig: 11 [#1] BE PAGE_SIZE=16K PREEMPT CMPC885 Modules linked in: CPU: 0 PID: 429 Comm: cat Not tainted 5.6.0-rc1-s3k-dev-00824-g84195dc6c58a #3267 NIP: c000ec34 LR: c000ecd8 CTR: c019cab8 REGS: ca4d3b58 TRAP: 0300 Not tainted (5.6.0-rc1-s3k-dev-00824-g84195dc6c58a) MSR: 00001032 <ME,IR,DR,RI> CR: 2a4d3c52 XER: 00000000 DAR: 0000e268 DSISR: c0000000 GPR00: c000b09c ca4d3c10 c66d0620 00000000 ca4d3c60 00000000 00009032 00000000 GPR08: 00020000 00000000 c087de44 c000afe0 c66d0ad0 100d3dd6 fffffff3 00000000 GPR16: 00000000 00000041 00000000 ca4d3d70 00000000 00000000 0000416d 00000000 GPR24: 00000004 c53b6128 00000000 0000e268 00000000 c07c0000 c07bb6fc ca4d3c60 NIP [c000ec34] kprobe_handler+0x128/0x290 LR [c000ecd8] kprobe_handler+0x1cc/0x290 Call Trace: [ca4d3c30] [c000b09c] program_check_exception+0xbc/0x6fc [ca4d3c50] [c000e43c] ret_from_except_full+0x0/0x4 --- interrupt: 700 at 0xe268 Instruction dump: 913e0008 81220000 38600001 3929ffff 91220000 80010024 bb410008 7c0803a6 38210020 4e800020 38600000 4e800020 <813b0000> 6d2a7fe0 2f8a0008 419e0154 ---[ end trace 5b9152d4cdadd06d ]---
kprobes is not prepared to handle events in real mode, and functions running in real mode should have been blacklisted, so kprobe_handler() can safely bail out, telling 'this trap is not mine', for any trap that happened while in real mode.
If the trap happened with MSR_IR or MSR_DR cleared, return 0 immediately.
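In other words, the handler checks the MSR translation bits first and declines the trap when either is clear. A simplified sketch with illustrative types and bit values (not the kernel's struct pt_regs or asm/reg.h definitions); the actual two-line change is in the hunk below:

	#define MSR_IR_BIT	0x20	/* instruction relocate: MMU on for fetches */
	#define MSR_DR_BIT	0x10	/* data relocate: MMU on for loads/stores */

	struct regs_sketch {
		unsigned long msr;
	};

	static int kprobe_handler_sketch(struct regs_sketch *regs)
	{
		/* real mode: not a kprobe of ours, let the normal handler run */
		if (!(regs->msr & MSR_IR_BIT) || !(regs->msr & MSR_DR_BIT))
			return 0;

		/* ... normal kprobe processing would continue here ... */
		return 1;
	}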
Reported-by: Larry Finger Larry.Finger@lwfinger.net Fixes: 6cc89bad60a6 ("powerpc/kprobes: Invoke handlers directly") Cc: stable@vger.kernel.org # v4.10+ Signed-off-by: Christophe Leroy christophe.leroy@c-s.fr Reviewed-by: Masami Hiramatsu mhiramat@kernel.org Reviewed-by: Naveen N. Rao naveen.n.rao@linux.vnet.ibm.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/424331e2006e7291a1bfe40e7f3fa58825f565e1.158205457... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/kernel/kprobes.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/arch/powerpc/kernel/kprobes.c +++ b/arch/powerpc/kernel/kprobes.c @@ -264,6 +264,9 @@ int kprobe_handler(struct pt_regs *regs) if (user_mode(regs)) return 0;
+ if (!(regs->msr & MSR_IR) || !(regs->msr & MSR_DR)) + return 0; + /* * We don't want to be preempted for the entire * duration of kprobe processing
From: Michael Ellerman mpe@ellerman.id.au
commit 7053f80d96967d8e72e9f2a724bbfc3906ce2b07 upstream.
The previous commit reduced the amount of code that is run before we set up a paca. However there are still a few remaining functions that run with no paca, or worse, with an arbitrary value in r13 that will be used as a paca pointer.
In particular the stack protector canary is stored in the paca, so if stack protector is activated for any of these functions we will read the stack canary from wherever r13 points. If r13 happens to point outside of memory we will get a machine check / checkstop.
For example if we modify initialise_paca() to trigger stack protection, and then boot in the mambo simulator with r13 poisoned in skiboot before calling the kernel:
DEBUG: 19952232: (19952232): INSTRUCTION: PC=0xC0000000191FC1E8: [0x3C4C006D]: addis r2,r12,0x6D [fetch] DEBUG: 19952236: (19952236): INSTRUCTION: PC=0xC00000001807EAD8: [0x7D8802A6]: mflr r12 [fetch] FATAL ERROR: 19952276: (19952276): Check Stop for 0:0: Machine Check with ME bit of MSR off DEBUG: 19952276: (19952276): INSTRUCTION: PC=0xC0000000191FCA7C: [0xE90D0CF8]: ld r8,0xCF8(r13) [Instruction Failed] INFO: 19952276: (19952277): ** Execution stopped: Mambo Error, Machine Check Stop, ** systemsim % bt pc: 0xC0000000191FCA7C initialise_paca+0x54 lr: 0xC0000000191FC22C early_setup+0x44 stack:0x00000000198CBED0 0x0 +0x0 stack:0x00000000198CBF00 0xC0000000191FC22C early_setup+0x44 stack:0x00000000198CBF90 0x1801C968 +0x1801C968
So annotate the relevant functions to ensure stack protection is never enabled for them.
Fixes: 06ec27aea9fc ("powerpc/64: add stack protector support") Cc: stable@vger.kernel.org # v4.20+ Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200320032116.1024773-2-mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/kernel/paca.c | 4 ++-- arch/powerpc/kernel/setup.h | 6 ++++++ arch/powerpc/kernel/setup_64.c | 2 +- 3 files changed, 9 insertions(+), 3 deletions(-)
--- a/arch/powerpc/kernel/paca.c +++ b/arch/powerpc/kernel/paca.c @@ -176,7 +176,7 @@ static struct slb_shadow * __init new_sl struct paca_struct **paca_ptrs __read_mostly; EXPORT_SYMBOL(paca_ptrs);
-void __init initialise_paca(struct paca_struct *new_paca, int cpu) +void __init __nostackprotector initialise_paca(struct paca_struct *new_paca, int cpu) { #ifdef CONFIG_PPC_PSERIES new_paca->lppaca_ptr = NULL; @@ -205,7 +205,7 @@ void __init initialise_paca(struct paca_ }
/* Put the paca pointer into r13 and SPRG_PACA */ -void setup_paca(struct paca_struct *new_paca) +void __nostackprotector setup_paca(struct paca_struct *new_paca) { /* Setup r13 */ local_paca = new_paca; --- a/arch/powerpc/kernel/setup.h +++ b/arch/powerpc/kernel/setup.h @@ -8,6 +8,12 @@ #ifndef __ARCH_POWERPC_KERNEL_SETUP_H #define __ARCH_POWERPC_KERNEL_SETUP_H
+#ifdef CONFIG_CC_IS_CLANG +#define __nostackprotector +#else +#define __nostackprotector __attribute__((__optimize__("no-stack-protector"))) +#endif + void initialize_cache_info(void); void irqstack_early_init(void);
--- a/arch/powerpc/kernel/setup_64.c +++ b/arch/powerpc/kernel/setup_64.c @@ -279,7 +279,7 @@ void __init record_spr_defaults(void) * device-tree is not accessible via normal means at this point. */
-void __init early_setup(unsigned long dt_ptr) +void __init __nostackprotector early_setup(unsigned long dt_ptr) { static __initdata struct paca_struct boot_paca;
From: Sreekanth Reddy sreekanth.reddy@broadcom.com
commit cc41f11a21a51d6869d71e525a7264c748d7c0d7 upstream.
A general protection fault type kernel panic is observed when the user performs a soft (ordered) HBA unplug operation while IOs are running on drives connected to the HBA.
When the user performs an ordered HBA removal operation, the kernel calls the PCI device's .remove() callback, where the driver flushes out all the outstanding SCSI IO commands with the DID_NO_CONNECT host byte and also unmaps the sg buffers allocated for these IO commands.
However, in the ordered HBA removal case (unlike a real HBA hot removal), the HBA device is still alive, so the HBA hardware may still perform DMA operations to those system memory buffers, which have already been unmapped while flushing out the outstanding SCSI IO commands, and this leads to the kernel panic.
Don't flush out the outstanding IOs from the .remove() path in the ordered removal case, since the HBA is still alive and can complete the outstanding IOs itself. Flush out the outstanding IOs only in the 'physical HBA hot unplug' case, where there won't be any further communication with the HBA.
During shutdown it is also possible for the HBA hardware to perform DMA operations on outstanding IO buffers that the driver has already completed with DID_NO_CONNECT from .shutdown(), so the same fix is applied to the shutdown path as well.
It is safe to drop the outstanding commands when the HBA is inaccessible, such as when a permanent PCI failure happens, when the HBA is in a non-operational state, or when someone does a real HBA hot unplug. Since the driver knows that the HBA is inaccessible in these cases, it is safe to drop the outstanding commands instead of waiting for SCSI error recovery to kick in and clear them.
Link: https://lore.kernel.org/r/1585302763-23007-1-git-send-email-sreekanth.reddy@... Fixes: c666d3be99c0 ("scsi: mpt3sas: wait for and flush running commands on shutdown/unload") Cc: stable@vger.kernel.org #v4.14.174+ Signed-off-by: Sreekanth Reddy sreekanth.reddy@broadcom.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/scsi/mpt3sas/mpt3sas_scsih.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c +++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c @@ -9747,8 +9747,8 @@ static void scsih_remove(struct pci_dev
ioc->remove_host = 1;
- mpt3sas_wait_for_commands_to_complete(ioc); - _scsih_flush_running_cmds(ioc); + if (!pci_device_is_present(pdev)) + _scsih_flush_running_cmds(ioc);
_scsih_fw_event_cleanup_queue(ioc);
@@ -9831,8 +9831,8 @@ scsih_shutdown(struct pci_dev *pdev)
ioc->remove_host = 1;
- mpt3sas_wait_for_commands_to_complete(ioc); - _scsih_flush_running_cmds(ioc); + if (!pci_device_is_present(pdev)) + _scsih_flush_running_cmds(ioc);
_scsih_fw_event_cleanup_queue(ioc);
From: Mark Brown broonie@kernel.org
commit b8fdef311a0bd9223f10754f94fdcf1a594a3457 upstream.
Compilers with branch protection support can be configured to enable it by default, and it is likely that distributions will do this as part of deploying branch protection system wide. As well as the slight overhead from having some extra NOPs for unused branch protection features, this can cause more serious problems when the kernel is providing pointer authentication to userspace but is not built for pointer authentication itself. In that case our switching of keys for userspace can affect the kernel unexpectedly, causing pointer authentication instructions in the kernel to corrupt addresses.
To ensure that we get consistent and reliable behaviour always explicitly initialise the branch protection mode, ensuring that the kernel is built the same way regardless of the compiler defaults.
[This is a reworked version of b8fdef311a0bd9223f1075 ("arm64: Always force a branch protection mode when the compiler has one") for backport. Kernels prior to 74afda4016a7 ("arm64: compile the kernel with ptrauth return address signing") don't have any Makefile machinery for forcing on pointer auth but still have issues if the compiler defaults it on so need this reworked version. -- broonie]
Fixes: 7503197562567 (arm64: add basic pointer authentication support) Reported-by: Szabolcs Nagy szabolcs.nagy@arm.com Signed-off-by: Mark Brown broonie@kernel.org Cc: stable@vger.kernel.org [catalin.marinas@arm.com: remove Kconfig option in favour of Makefile check] Signed-off-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm64/Makefile | 4 ++++ 1 file changed, 4 insertions(+)
--- a/arch/arm64/Makefile +++ b/arch/arm64/Makefile @@ -72,6 +72,10 @@ stack_protector_prepare: prepare0 include/generated/asm-offsets.h)) endif
+# Ensure that if the compiler supports branch protection we default it +# off. +KBUILD_CFLAGS += $(call cc-option,-mbranch-protection=none) + ifeq ($(CONFIG_CPU_BIG_ENDIAN), y) KBUILD_CPPFLAGS += -mbig-endian CHECKFLAGS += -D__AARCH64EB__
From: Torsten Duwe duwe@lst.de
[ Upstream commit 3e138a63d6674a4567a018a31e467567c40b14d5 ]
drm_dp_link_rate_to_bw_code() and drm_dp_bw_code_to_link_rate() simply divide by and multiply with 27000, respectively. Avoid an overflow in the u8 dpcd[0] and skip the multiply+divide altogether.
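A standalone sketch of the overflow being avoided; 270000 here stands for the 2.7 Gbps link rate as drm_dp_bw_code_to_link_rate() expresses it, i.e. 27000 times bw code 0x0a:

	#include <stdio.h>

	int main(void)
	{
		unsigned int link_rate = 270000;	/* drm_dp_max_link_rate()-style value */
		unsigned char dpcd0 = link_rate;	/* u8 truncation: 270000 & 0xff */

		/* prints "stored 0xb0 instead of bw code 0x0a" */
		printf("stored 0x%02x instead of bw code 0x%02x\n",
		       dpcd0, link_rate / 27000);
		return 0;
	}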
Signed-off-by: Torsten Duwe duwe@lst.de Fixes: ff1e8fb68ea0 ("drm/bridge: analogix-anx78xx: Avoid drm_dp_link helpers") Cc: Thierry Reding treding@nvidia.com Cc: Daniel Vetter daniel.vetter@ffwll.ch Cc: Andrzej Hajda a.hajda@samsung.com Cc: Neil Armstrong narmstrong@baylibre.com Cc: Laurent Pinchart Laurent.pinchart@ideasonboard.com Cc: Jonas Karlman jonas@kwiboo.se Cc: Jernej Skrabec jernej.skrabec@siol.net Cc: stable@vger.kernel.org # v5.5+ Reviewed-by: Thierry Reding treding@nvidia.com Signed-off-by: Thomas Zimmermann tzimmermann@suse.de Link: https://patchwork.freedesktop.org/patch/msgid/20200218155744.9675368BE1@vere... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/bridge/analogix-anx78xx.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/bridge/analogix-anx78xx.c b/drivers/gpu/drm/bridge/analogix-anx78xx.c index 274989f96a916..914263a1afab4 100644 --- a/drivers/gpu/drm/bridge/analogix-anx78xx.c +++ b/drivers/gpu/drm/bridge/analogix-anx78xx.c @@ -866,10 +866,9 @@ static int anx78xx_dp_link_training(struct anx78xx *anx78xx) if (err) return err;
- dpcd[0] = drm_dp_max_link_rate(anx78xx->dpcd); - dpcd[0] = drm_dp_link_rate_to_bw_code(dpcd[0]); err = regmap_write(anx78xx->map[I2C_IDX_TX_P0], - SP_DP_MAIN_LINK_BW_SET_REG, dpcd[0]); + SP_DP_MAIN_LINK_BW_SET_REG, + anx78xx->dpcd[DP_MAX_LINK_RATE]); if (err) return err;
[ Upstream commit a86675968e2300fb567994459da3dbc4cd1b322a ]
This reverts commit 64e62bdf04ab8529f45ed0a85122c703035dec3a.
This commit ends up causing some lockdep splats due to trying to grab the payload lock while holding the mgr's lock:
[ 54.010099] [ 54.011765] ====================================================== [ 54.018670] WARNING: possible circular locking dependency detected [ 54.025577] 5.5.0-rc6-02274-g77381c23ee63 #47 Not tainted [ 54.031610] ------------------------------------------------------ [ 54.038516] kworker/1:6/1040 is trying to acquire lock: [ 54.044354] ffff888272af3228 (&mgr->payload_lock){+.+.}, at: drm_dp_mst_topology_mgr_set_mst+0x218/0x2e4 [ 54.054957] [ 54.054957] but task is already holding lock: [ 54.061473] ffff888272af3060 (&mgr->lock){+.+.}, at: drm_dp_mst_topology_mgr_set_mst+0x3c/0x2e4 [ 54.071193] [ 54.071193] which lock already depends on the new lock. [ 54.071193] [ 54.080334] [ 54.080334] the existing dependency chain (in reverse order) is: [ 54.088697] [ 54.088697] -> #1 (&mgr->lock){+.+.}: [ 54.094440] __mutex_lock+0xc3/0x498 [ 54.099015] drm_dp_mst_topology_get_port_validated+0x25/0x80 [ 54.106018] drm_dp_update_payload_part1+0xa2/0x2e2 [ 54.112051] intel_mst_pre_enable_dp+0x144/0x18f [ 54.117791] intel_encoders_pre_enable+0x63/0x70 [ 54.123532] hsw_crtc_enable+0xa1/0x722 [ 54.128396] intel_update_crtc+0x50/0x194 [ 54.133455] skl_commit_modeset_enables+0x40c/0x540 [ 54.139485] intel_atomic_commit_tail+0x5f7/0x130d [ 54.145418] intel_atomic_commit+0x2c8/0x2d8 [ 54.150770] drm_atomic_helper_set_config+0x5a/0x70 [ 54.156801] drm_mode_setcrtc+0x2ab/0x833 [ 54.161862] drm_ioctl+0x2e5/0x424 [ 54.166242] vfs_ioctl+0x21/0x2f [ 54.170426] do_vfs_ioctl+0x5fb/0x61e [ 54.175096] ksys_ioctl+0x55/0x75 [ 54.179377] __x64_sys_ioctl+0x1a/0x1e [ 54.184146] do_syscall_64+0x5c/0x6d [ 54.188721] entry_SYSCALL_64_after_hwframe+0x49/0xbe [ 54.194946] [ 54.194946] -> #0 (&mgr->payload_lock){+.+.}: [ 54.201463] [ 54.201463] other info that might help us debug this: [ 54.201463] [ 54.210410] Possible unsafe locking scenario: [ 54.210410] [ 54.217025] CPU0 CPU1 [ 54.222082] ---- ---- [ 54.227138] lock(&mgr->lock); [ 54.230643] lock(&mgr->payload_lock); [ 54.237742] lock(&mgr->lock); [ 54.244062] lock(&mgr->payload_lock); [ 54.248346] [ 54.248346] *** DEADLOCK *** [ 54.248346] [ 54.254959] 7 locks held by kworker/1:6/1040: [ 54.259822] #0: ffff888275c4f528 ((wq_completion)events){+.+.}, at: worker_thread+0x455/0x6e2 [ 54.269451] #1: ffffc9000119beb0 ((work_completion)(&(&dev_priv->hotplug.hotplug_work)->work)){+.+.}, at: worker_thread+0x455/0x6e2 [ 54.282768] #2: ffff888272a403f0 (&dev->mode_config.mutex){+.+.}, at: i915_hotplug_work_func+0x4b/0x2be [ 54.293368] #3: ffffffff824fc6c0 (drm_connector_list_iter){.+.+}, at: i915_hotplug_work_func+0x17e/0x2be [ 54.304061] #4: ffffc9000119bc58 (crtc_ww_class_acquire){+.+.}, at: drm_helper_probe_detect_ctx+0x40/0xfd [ 54.314855] #5: ffff888272a40470 (crtc_ww_class_mutex){+.+.}, at: drm_modeset_lock+0x74/0xe2 [ 54.324385] #6: ffff888272af3060 (&mgr->lock){+.+.}, at: drm_dp_mst_topology_mgr_set_mst+0x3c/0x2e4 [ 54.334597] [ 54.334597] stack backtrace: [ 54.339464] CPU: 1 PID: 1040 Comm: kworker/1:6 Not tainted 5.5.0-rc6-02274-g77381c23ee63 #47 [ 54.348893] Hardware name: Google Fizz/Fizz, BIOS Google_Fizz.10139.39.0 01/04/2018 [ 54.357451] Workqueue: events i915_hotplug_work_func [ 54.362995] Call Trace: [ 54.365724] dump_stack+0x71/0x9c [ 54.369427] check_noncircular+0x91/0xbc [ 54.373809] ? __lock_acquire+0xc9e/0xf66 [ 54.378286] ? __lock_acquire+0xc9e/0xf66 [ 54.382763] ? lock_acquire+0x175/0x1ac [ 54.387048] ? drm_dp_mst_topology_mgr_set_mst+0x218/0x2e4 [ 54.393177] ? __mutex_lock+0xc3/0x498 [ 54.397362] ? 
drm_dp_mst_topology_mgr_set_mst+0x218/0x2e4 [ 54.403492] ? drm_dp_mst_topology_mgr_set_mst+0x218/0x2e4 [ 54.409620] ? drm_dp_dpcd_access+0xd9/0x101 [ 54.414390] ? drm_dp_mst_topology_mgr_set_mst+0x218/0x2e4 [ 54.420517] ? drm_dp_mst_topology_mgr_set_mst+0x218/0x2e4 [ 54.426645] ? intel_digital_port_connected+0x34d/0x35c [ 54.432482] ? intel_dp_detect+0x227/0x44e [ 54.437056] ? ww_mutex_lock+0x49/0x9a [ 54.441242] ? drm_helper_probe_detect_ctx+0x75/0xfd [ 54.446789] ? intel_encoder_hotplug+0x4b/0x97 [ 54.451752] ? intel_ddi_hotplug+0x61/0x2e0 [ 54.456423] ? mark_held_locks+0x53/0x68 [ 54.460803] ? _raw_spin_unlock_irqrestore+0x3a/0x51 [ 54.466347] ? lockdep_hardirqs_on+0x187/0x1a4 [ 54.471310] ? drm_connector_list_iter_next+0x89/0x9a [ 54.476953] ? i915_hotplug_work_func+0x206/0x2be [ 54.482208] ? worker_thread+0x4d5/0x6e2 [ 54.486587] ? worker_thread+0x455/0x6e2 [ 54.490966] ? queue_work_on+0x64/0x64 [ 54.495151] ? kthread+0x1e9/0x1f1 [ 54.498946] ? queue_work_on+0x64/0x64 [ 54.503130] ? kthread_unpark+0x5e/0x5e [ 54.507413] ? ret_from_fork+0x3a/0x50
The proper fix for this is probably to clean up the VCPI allocations when we're enabling the topology, or on the first payload allocation. For now though, let's just revert.
Signed-off-by: Lyude Paul lyude@redhat.com Fixes: 64e62bdf04ab ("drm/dp_mst: Remove VCPI while disabling topology mgr") Cc: Sean Paul sean@poorly.run Cc: Wayne Lin Wayne.Lin@amd.com Reviewed-by: Sean Paul sean@poorly.run Link: https://patchwork.freedesktop.org/patch/msgid/20200117205149.97262-1-lyude@r... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/drm_dp_mst_topology.c | 12 ------------ 1 file changed, 12 deletions(-)
diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c index 4a65ef8d8bff3..c4b692dff5956 100644 --- a/drivers/gpu/drm/drm_dp_mst_topology.c +++ b/drivers/gpu/drm/drm_dp_mst_topology.c @@ -3437,7 +3437,6 @@ static int drm_dp_get_vc_payload_bw(u8 dp_link_bw, u8 dp_link_count) int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool mst_state) { int ret = 0; - int i = 0; struct drm_dp_mst_branch *mstb = NULL;
mutex_lock(&mgr->lock); @@ -3498,21 +3497,10 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms /* this can fail if the device is gone */ drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL, 0); ret = 0; - mutex_lock(&mgr->payload_lock); memset(mgr->payloads, 0, mgr->max_payloads * sizeof(struct drm_dp_payload)); mgr->payload_mask = 0; set_bit(0, &mgr->payload_mask); - for (i = 0; i < mgr->max_payloads; i++) { - struct drm_dp_vcpi *vcpi = mgr->proposed_vcpis[i]; - - if (vcpi) { - vcpi->vcpi = 0; - vcpi->num_slots = 0; - } - mgr->proposed_vcpis[i] = NULL; - } mgr->vcpi_mask = 0; - mutex_unlock(&mgr->payload_lock); }
out_unlock:
From: Lyude Paul lyude@redhat.com
[ Upstream commit 8732fe46b20c951493bfc4dba0ad08efdf41de81 ]
The issues caused by:
commit 64e62bdf04ab ("drm/dp_mst: Remove VCPI while disabling topology mgr")
Prompted me to take a closer look at how we clear the payload state in general when disabling the topology, and it turns out there's actually two subtle issues here.
The first is that we're not grabbing &mgr.payload_lock when clearing the payloads in drm_dp_mst_topology_mgr_set_mst(). Seeing as the canonical lock order is &mgr.payload_lock -> &mgr.lock (because we always want &mgr.lock to be the inner-most lock so topology validation always works), this makes perfect sense. It also means that -technically- there could be racing between someone calling drm_dp_mst_topology_mgr_set_mst() to disable the topology, along with a modeset occurring that's modifying the payload state at the same time.
The second is the more obvious issue that Wayne Lin discovered, that we're not clearing proposed_payloads when disabling the topology.
I actually can't see any obvious places where the racing caused by the first issue would break something, and it could be that some of our higher-level locks already prevent this by happenstance, but better safe than sorry. So, let's make it so that drm_dp_mst_topology_mgr_set_mst() first grabs &mgr.payload_lock followed by &mgr.lock, so that we never race when modifying the payload state. Then, we also clear proposed_payloads to fix the original issue of enabling a new topology with a dirty payload state. This doesn't clear any of the drm_dp_vcpi structures, but those are getting destroyed along with the ports anyway.
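A minimal sketch of the locking rule described above, with illustrative structure and function names rather than the driver's own: every path that needs both locks takes payload_lock first, so no path can ever acquire them in the opposite order.

	struct mgr_sketch {
		struct mutex payload_lock;	/* outer lock */
		struct mutex lock;		/* inner lock, protects the topology */
	};

	static void set_mst_sketch(struct mgr_sketch *mgr)
	{
		mutex_lock(&mgr->payload_lock);	/* outer first ... */
		mutex_lock(&mgr->lock);		/* ... inner second */

		/* safe to modify both the topology and the payload state here */

		mutex_unlock(&mgr->lock);
		mutex_unlock(&mgr->payload_lock);
	}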
Changes since v1: * Use sizeof(mgr->payloads[0])/sizeof(mgr->proposed_vcpis[0]) instead - vsyrjala
Cc: Sean Paul sean@poorly.run Cc: Wayne Lin Wayne.Lin@amd.com Cc: Ville Syrjälä ville.syrjala@linux.intel.com Cc: stable@vger.kernel.org # v4.4+ Signed-off-by: Lyude Paul lyude@redhat.com Reviewed-by: Ville Syrjälä ville.syrjala@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20200122194321.14953-1-lyude@r... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/drm_dp_mst_topology.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c index c4b692dff5956..c9dd41175853e 100644 --- a/drivers/gpu/drm/drm_dp_mst_topology.c +++ b/drivers/gpu/drm/drm_dp_mst_topology.c @@ -3439,6 +3439,7 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms int ret = 0; struct drm_dp_mst_branch *mstb = NULL;
+ mutex_lock(&mgr->payload_lock); mutex_lock(&mgr->lock); if (mst_state == mgr->mst_state) goto out_unlock; @@ -3497,7 +3498,10 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms /* this can fail if the device is gone */ drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL, 0); ret = 0; - memset(mgr->payloads, 0, mgr->max_payloads * sizeof(struct drm_dp_payload)); + memset(mgr->payloads, 0, + mgr->max_payloads * sizeof(mgr->payloads[0])); + memset(mgr->proposed_vcpis, 0, + mgr->max_payloads * sizeof(mgr->proposed_vcpis[0])); mgr->payload_mask = 0; set_bit(0, &mgr->payload_mask); mgr->vcpi_mask = 0; @@ -3505,6 +3509,7 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
out_unlock: mutex_unlock(&mgr->lock); + mutex_unlock(&mgr->payload_lock); if (mstb) drm_dp_mst_topology_put_mstb(mstb); return ret;
From: Prike Liang Prike.Liang@amd.com
[ Upstream commit 487eca11a321ef33bcf4ca5adb3c0c4954db1b58 ]
The system hangs during S3 suspend because the SMU is pending on GC, which does not respond to the CP_HQD_ACTIVE register access request. The root cause of this issue is accessing the GC register after entering GFX CGPG; it can be fixed by disabling GFX CGPG before performing the suspend.
v2: Disable GFX CGPG instead of using the RLC safe mode guard.
Signed-off-by: Prike Liang Prike.Liang@amd.com Tested-by: Mengbing Wang Mengbing.Wang@amd.com Reviewed-by: Huang Rui ray.huang@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c index 9a8a1c6ca3210..7d340c9ec3037 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -2259,8 +2259,6 @@ static int amdgpu_device_ip_suspend_phase1(struct amdgpu_device *adev) { int i, r;
- amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE); - amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
for (i = adev->num_ip_blocks - 1; i >= 0; i--) { if (!adev->ip_blocks[i].status.valid) @@ -3242,6 +3240,9 @@ int amdgpu_device_suspend(struct drm_device *dev, bool suspend, bool fbcon) } }
+ amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE); + amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE); + amdgpu_amdkfd_suspend(adev);
amdgpu_ras_suspend(adev);
From: Imre Deak imre.deak@intel.com
The DDI IO power well must not be enabled for a TypeC port in TBT mode; ensure this during driver loading/system resume.
This gets rid of error messages like [drm] *ERROR* power well DDI E TC2 IO state mismatch (refcount 1/enabled 0)
and avoids leaking the power ref when disabling the output.
Cc: stable@vger.kernel.org # v5.4+ Signed-off-by: Imre Deak imre.deak@intel.com Reviewed-by: José Roberto de Souza jose.souza@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20200330152244.11316-1-imre.de... (cherry picked from commit f77a2db27f26c3ccba0681f7e89fef083718f07f) Signed-off-by: Rodrigo Vivi rodrigo.vivi@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/display/intel_ddi.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c index 1488822398fed..4872c357eb6da 100644 --- a/drivers/gpu/drm/i915/display/intel_ddi.c +++ b/drivers/gpu/drm/i915/display/intel_ddi.c @@ -2235,7 +2235,11 @@ static void intel_ddi_get_power_domains(struct intel_encoder *encoder, return;
dig_port = enc_to_dig_port(&encoder->base); - intel_display_power_get(dev_priv, dig_port->ddi_io_power_domain); + + if (!intel_phy_is_tc(dev_priv, phy) || + dig_port->tc_mode != TC_PORT_TBT_ALT) + intel_display_power_get(dev_priv, + dig_port->ddi_io_power_domain);
/* * AUX power is only needed for (e)DP mode, and for HDMI mode on TC
From: Trond Myklebust trond.myklebust@hammerspace.com
[ Upstream commit 75da98586af75eb80664714a67a9895bf0a5517e ]
We must not return from nfs_d_automount() without holding 2 references to the mount record. Doing so will trigger the BUG() in finish_automount(). Also ensure that we don't try to reschedule the automount timer with a negative or zero timeout value.
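Roughly, the resulting nfs_d_automount() tail looks like the sketch below (abridged from the hunk that follows, not a complete function): the tunable is read once into a local with READ_ONCE() so the value checked against zero is the same value handed to schedule_delayed_work(), and the extra reference is taken before any early return:

	int timeout = READ_ONCE(nfs_mountpoint_expiry_timeout);

	mntget(mnt);			/* second reference, required by finish_automount() */
	if (timeout <= 0)
		goto out;		/* never arm the timer with a zero or negative delay */

	mnt_set_expiry(mnt, &nfs_automount_list);
	schedule_delayed_work(&nfs_automount_task, timeout);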
Fixes: 22a1ae9a93fb ("NFS: If nfs_mountpoint_expiry_timeout < 0, do not expire submounts") Cc: stable@vger.kernel.org # v5.5+ Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/nfs/namespace.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/fs/nfs/namespace.c b/fs/nfs/namespace.c index 5e0e9d29f5c57..0c5db17607411 100644 --- a/fs/nfs/namespace.c +++ b/fs/nfs/namespace.c @@ -143,6 +143,7 @@ struct vfsmount *nfs_d_automount(struct path *path) struct nfs_server *server = NFS_SERVER(d_inode(path->dentry)); struct nfs_fh *fh = NULL; struct nfs_fattr *fattr = NULL; + int timeout = READ_ONCE(nfs_mountpoint_expiry_timeout);
if (IS_ROOT(path->dentry)) return ERR_PTR(-ESTALE); @@ -157,12 +158,12 @@ struct vfsmount *nfs_d_automount(struct path *path) if (IS_ERR(mnt)) goto out;
- if (nfs_mountpoint_expiry_timeout < 0) + mntget(mnt); /* prevent immediate expiration */ + if (timeout <= 0) goto out;
- mntget(mnt); /* prevent immediate expiration */ mnt_set_expiry(mnt, &nfs_automount_list); - schedule_delayed_work(&nfs_automount_task, nfs_mountpoint_expiry_timeout); + schedule_delayed_work(&nfs_automount_task, timeout);
out: nfs_free_fattr(fattr); @@ -201,10 +202,11 @@ const struct inode_operations nfs_referral_inode_operations = { static void nfs_expire_automounts(struct work_struct *work) { struct list_head *list = &nfs_automount_list; + int timeout = READ_ONCE(nfs_mountpoint_expiry_timeout);
mark_mounts_for_expiry(list); - if (!list_empty(list)) - schedule_delayed_work(&nfs_automount_task, nfs_mountpoint_expiry_timeout); + if (!list_empty(list) && timeout > 0) + schedule_delayed_work(&nfs_automount_task, timeout); }
void nfs_release_automount_timer(void)
From: Peter Zijlstra peterz@infradead.org
[ Upstream commit ab6f824cfdf7363b5e529621cbc72ae6519c78d1 ]
Less is more; unify the two very nearly identical functions.
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Ingo Molnar mingo@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/events/core.c | 58 ++++++++++++++++---------------------------- 1 file changed, 21 insertions(+), 37 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c index fdb7f7ef380c4..b3d4f485bcfa6 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -1986,6 +1986,12 @@ static int perf_get_aux_event(struct perf_event *event, return 1; }
+static inline struct list_head *get_event_list(struct perf_event *event) +{ + struct perf_event_context *ctx = event->ctx; + return event->attr.pinned ? &ctx->pinned_active : &ctx->flexible_active; +} + static void perf_group_detach(struct perf_event *event) { struct perf_event *sibling, *tmp; @@ -2028,12 +2034,8 @@ static void perf_group_detach(struct perf_event *event) if (!RB_EMPTY_NODE(&event->group_node)) { add_event_to_groups(sibling, event->ctx);
- if (sibling->state == PERF_EVENT_STATE_ACTIVE) { - struct list_head *list = sibling->attr.pinned ? - &ctx->pinned_active : &ctx->flexible_active; - - list_add_tail(&sibling->active_list, list); - } + if (sibling->state == PERF_EVENT_STATE_ACTIVE) + list_add_tail(&sibling->active_list, get_event_list(sibling)); }
WARN_ON_ONCE(sibling->ctx != event->ctx); @@ -2350,6 +2352,8 @@ event_sched_in(struct perf_event *event, { int ret = 0;
+ WARN_ON_ONCE(event->ctx != ctx); + lockdep_assert_held(&ctx->lock);
if (event->state <= PERF_EVENT_STATE_OFF) @@ -3425,10 +3429,12 @@ struct sched_in_data { int can_add_hw; };
-static int pinned_sched_in(struct perf_event *event, void *data) +static int merge_sched_in(struct perf_event *event, void *data) { struct sched_in_data *sid = data;
+ WARN_ON_ONCE(event->ctx != sid->ctx); + if (event->state <= PERF_EVENT_STATE_OFF) return 0;
@@ -3437,37 +3443,15 @@ static int pinned_sched_in(struct perf_event *event, void *data)
if (group_can_go_on(event, sid->cpuctx, sid->can_add_hw)) { if (!group_sched_in(event, sid->cpuctx, sid->ctx)) - list_add_tail(&event->active_list, &sid->ctx->pinned_active); + list_add_tail(&event->active_list, get_event_list(event)); }
- /* - * If this pinned group hasn't been scheduled, - * put it in error state. - */ - if (event->state == PERF_EVENT_STATE_INACTIVE) - perf_event_set_state(event, PERF_EVENT_STATE_ERROR); - - return 0; -} - -static int flexible_sched_in(struct perf_event *event, void *data) -{ - struct sched_in_data *sid = data; - - if (event->state <= PERF_EVENT_STATE_OFF) - return 0; - - if (!event_filter_match(event)) - return 0; + if (event->state == PERF_EVENT_STATE_INACTIVE) { + if (event->attr.pinned) + perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
- if (group_can_go_on(event, sid->cpuctx, sid->can_add_hw)) { - int ret = group_sched_in(event, sid->cpuctx, sid->ctx); - if (ret) { - sid->can_add_hw = 0; - sid->ctx->rotate_necessary = 1; - return 0; - } - list_add_tail(&event->active_list, &sid->ctx->flexible_active); + sid->can_add_hw = 0; + sid->ctx->rotate_necessary = 1; }
return 0; @@ -3485,7 +3469,7 @@ ctx_pinned_sched_in(struct perf_event_context *ctx,
visit_groups_merge(&ctx->pinned_groups, smp_processor_id(), - pinned_sched_in, &sid); + merge_sched_in, &sid); }
static void @@ -3500,7 +3484,7 @@ ctx_flexible_sched_in(struct perf_event_context *ctx,
visit_groups_merge(&ctx->flexible_groups, smp_processor_id(), - flexible_sched_in, &sid); + merge_sched_in, &sid); }
static void
From: Peter Zijlstra peterz@infradead.org
[ Upstream commit 33238c50451596be86db1505ab65fee5172844d0 ]
Song reports that installing cgroup events is broken since:
db0503e4f675 ("perf/core: Optimize perf_install_in_event()")
The problem is that cgroup events try to track cpuctx->cgrp even for disabled events, which has been pointless and actively harmful since the above commit. Rework the code to have explicit enable/disable hooks for cgroup events, such that we can limit cgroup tracking to active events.
More specifically, since the above commit, disabled events are no longer added to their context from the 'right' CPU, and we can't access things like the current cgroup for a remote CPU.
Cc: stable@vger.kernel.org # v5.5+ Fixes: db0503e4f675 ("perf/core: Optimize perf_install_in_event()") Reported-by: Song Liu songliubraving@fb.com Tested-by: Song Liu songliubraving@fb.com Reviewed-by: Song Liu songliubraving@fb.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Ingo Molnar mingo@kernel.org Link: https://lkml.kernel.org/r/20200318193337.GB20760@hirez.programming.kicks-ass... Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/events/core.c | 70 +++++++++++++++++++++++++++----------------- 1 file changed, 43 insertions(+), 27 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c index b3d4f485bcfa6..8d8e52a0922f4 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -935,16 +935,10 @@ perf_cgroup_set_shadow_time(struct perf_event *event, u64 now) event->shadow_ctx_time = now - t->timestamp; }
-/* - * Update cpuctx->cgrp so that it is set when first cgroup event is added and - * cleared when last cgroup event is removed. - */ static inline void -list_update_cgroup_event(struct perf_event *event, - struct perf_event_context *ctx, bool add) +perf_cgroup_event_enable(struct perf_event *event, struct perf_event_context *ctx) { struct perf_cpu_context *cpuctx; - struct list_head *cpuctx_entry;
if (!is_cgroup_event(event)) return; @@ -961,28 +955,41 @@ list_update_cgroup_event(struct perf_event *event, * because if the first would mismatch, the second would not try again * and we would leave cpuctx->cgrp unset. */ - if (add && !cpuctx->cgrp) { + if (ctx->is_active && !cpuctx->cgrp) { struct perf_cgroup *cgrp = perf_cgroup_from_task(current, ctx);
if (cgroup_is_descendant(cgrp->css.cgroup, event->cgrp->css.cgroup)) cpuctx->cgrp = cgrp; }
- if (add && ctx->nr_cgroups++) + if (ctx->nr_cgroups++) return; - else if (!add && --ctx->nr_cgroups) + + list_add(&cpuctx->cgrp_cpuctx_entry, + per_cpu_ptr(&cgrp_cpuctx_list, event->cpu)); +} + +static inline void +perf_cgroup_event_disable(struct perf_event *event, struct perf_event_context *ctx) +{ + struct perf_cpu_context *cpuctx; + + if (!is_cgroup_event(event)) return;
- /* no cgroup running */ - if (!add) + /* + * Because cgroup events are always per-cpu events, + * @ctx == &cpuctx->ctx. + */ + cpuctx = container_of(ctx, struct perf_cpu_context, ctx); + + if (--ctx->nr_cgroups) + return; + + if (ctx->is_active && cpuctx->cgrp) cpuctx->cgrp = NULL;
- cpuctx_entry = &cpuctx->cgrp_cpuctx_entry; - if (add) - list_add(cpuctx_entry, - per_cpu_ptr(&cgrp_cpuctx_list, event->cpu)); - else - list_del(cpuctx_entry); + list_del(&cpuctx->cgrp_cpuctx_entry); }
#else /* !CONFIG_CGROUP_PERF */ @@ -1048,11 +1055,14 @@ static inline u64 perf_cgroup_event_time(struct perf_event *event) }
static inline void -list_update_cgroup_event(struct perf_event *event, - struct perf_event_context *ctx, bool add) +perf_cgroup_event_enable(struct perf_event *event, struct perf_event_context *ctx) { }
+static inline void +perf_cgroup_event_disable(struct perf_event *event, struct perf_event_context *ctx) +{ +} #endif
/* @@ -1682,13 +1692,14 @@ list_add_event(struct perf_event *event, struct perf_event_context *ctx) add_event_to_groups(event, ctx); }
- list_update_cgroup_event(event, ctx, true); - list_add_rcu(&event->event_entry, &ctx->event_list); ctx->nr_events++; if (event->attr.inherit_stat) ctx->nr_stat++;
+ if (event->state > PERF_EVENT_STATE_OFF) + perf_cgroup_event_enable(event, ctx); + ctx->generation++; }
@@ -1864,8 +1875,6 @@ list_del_event(struct perf_event *event, struct perf_event_context *ctx)
event->attach_state &= ~PERF_ATTACH_CONTEXT;
- list_update_cgroup_event(event, ctx, false); - ctx->nr_events--; if (event->attr.inherit_stat) ctx->nr_stat--; @@ -1882,8 +1891,10 @@ list_del_event(struct perf_event *event, struct perf_event_context *ctx) * of error state is by explicit re-enabling * of the event */ - if (event->state > PERF_EVENT_STATE_OFF) + if (event->state > PERF_EVENT_STATE_OFF) { + perf_cgroup_event_disable(event, ctx); perf_event_set_state(event, PERF_EVENT_STATE_OFF); + }
ctx->generation++; } @@ -2114,6 +2125,7 @@ event_sched_out(struct perf_event *event,
if (READ_ONCE(event->pending_disable) >= 0) { WRITE_ONCE(event->pending_disable, -1); + perf_cgroup_event_disable(event, ctx); state = PERF_EVENT_STATE_OFF; } perf_event_set_state(event, state); @@ -2250,6 +2262,7 @@ static void __perf_event_disable(struct perf_event *event, event_sched_out(event, cpuctx, ctx);
perf_event_set_state(event, PERF_EVENT_STATE_OFF); + perf_cgroup_event_disable(event, ctx); }
/* @@ -2633,7 +2646,7 @@ static int __perf_install_in_context(void *info) }
#ifdef CONFIG_CGROUP_PERF - if (is_cgroup_event(event)) { + if (event->state > PERF_EVENT_STATE_OFF && is_cgroup_event(event)) { /* * If the current cgroup doesn't match the event's * cgroup, we should not try to schedule it. @@ -2793,6 +2806,7 @@ static void __perf_event_enable(struct perf_event *event, ctx_sched_out(ctx, cpuctx, EVENT_TIME);
perf_event_set_state(event, PERF_EVENT_STATE_INACTIVE); + perf_cgroup_event_enable(event, ctx);
if (!ctx->is_active) return; @@ -3447,8 +3461,10 @@ static int merge_sched_in(struct perf_event *event, void *data) }
if (event->state == PERF_EVENT_STATE_INACTIVE) { - if (event->attr.pinned) + if (event->attr.pinned) { + perf_cgroup_event_disable(event, ctx); perf_event_set_state(event, PERF_EVENT_STATE_ERROR); + }
sid->can_add_hw = 0; sid->ctx->rotate_necessary = 1;
From: Peter Zijlstra peterz@infradead.org
[ Upstream commit 2c2366c7548ecee65adfd264517ddf50f9e2d029 ]
We can deduce the ctx and cpuctx from the event, no need to pass them along. Remove the structure and pass in can_add_hw directly.
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Ingo Molnar mingo@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/events/core.c | 36 +++++++++++------------------------- 1 file changed, 11 insertions(+), 25 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c index 8d8e52a0922f4..78068b57cbba2 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -3437,17 +3437,11 @@ static int visit_groups_merge(struct perf_event_groups *groups, int cpu, return 0; }
-struct sched_in_data { - struct perf_event_context *ctx; - struct perf_cpu_context *cpuctx; - int can_add_hw; -}; - static int merge_sched_in(struct perf_event *event, void *data) { - struct sched_in_data *sid = data; - - WARN_ON_ONCE(event->ctx != sid->ctx); + struct perf_event_context *ctx = event->ctx; + struct perf_cpu_context *cpuctx = __get_cpu_context(ctx); + int *can_add_hw = data;
if (event->state <= PERF_EVENT_STATE_OFF) return 0; @@ -3455,8 +3449,8 @@ static int merge_sched_in(struct perf_event *event, void *data) if (!event_filter_match(event)) return 0;
- if (group_can_go_on(event, sid->cpuctx, sid->can_add_hw)) { - if (!group_sched_in(event, sid->cpuctx, sid->ctx)) + if (group_can_go_on(event, cpuctx, *can_add_hw)) { + if (!group_sched_in(event, cpuctx, ctx)) list_add_tail(&event->active_list, get_event_list(event)); }
@@ -3466,8 +3460,8 @@ static int merge_sched_in(struct perf_event *event, void *data) perf_event_set_state(event, PERF_EVENT_STATE_ERROR); }
- sid->can_add_hw = 0; - sid->ctx->rotate_necessary = 1; + *can_add_hw = 0; + ctx->rotate_necessary = 1; }
return 0; @@ -3477,30 +3471,22 @@ static void ctx_pinned_sched_in(struct perf_event_context *ctx, struct perf_cpu_context *cpuctx) { - struct sched_in_data sid = { - .ctx = ctx, - .cpuctx = cpuctx, - .can_add_hw = 1, - }; + int can_add_hw = 1;
visit_groups_merge(&ctx->pinned_groups, smp_processor_id(), - merge_sched_in, &sid); + merge_sched_in, &can_add_hw); }
static void ctx_flexible_sched_in(struct perf_event_context *ctx, struct perf_cpu_context *cpuctx) { - struct sched_in_data sid = { - .ctx = ctx, - .cpuctx = cpuctx, - .can_add_hw = 1, - }; + int can_add_hw = 1;
visit_groups_merge(&ctx->flexible_groups, smp_processor_id(), - merge_sched_in, &sid); + merge_sched_in, &can_add_hw); }
static void
From: Christophe Leroy christophe.leroy@c-s.fr
[ Upstream commit af92bad615be75c6c0d1b1c5b48178360250a187 ]
At the moment kasan_remap_early_shadow_ro() does nothing, because k_end is 0, so the loop condition k_cur < k_end is always false and the loop body never runs.
Change the test to k_cur != k_end, as done in kasan_init_shadow_page_tables().
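A standalone sketch of the failure mode, with made-up addresses (only the comparison behaviour matters): when the exclusive end of the range wraps around to 0, a '<' test is false from the start and the loop body never runs, while '!=' still iterates and terminates once the counter wraps to the end value.

	#include <stdio.h>

	int main(void)
	{
		/* pretend the region covers the last two 4 KiB pages of the address space */
		unsigned long k_start = 0UL - 2 * 4096;
		unsigned long k_end = 0;	/* exclusive end, wrapped around */
		unsigned long k_cur;
		int lt = 0, ne = 0;

		for (k_cur = k_start; k_cur < k_end; k_cur += 4096)
			lt++;			/* never runs: k_cur < 0 is always false */

		for (k_cur = k_start; k_cur != k_end; k_cur += 4096)
			ne++;			/* runs twice, stops when k_cur wraps to 0 */

		/* prints "'<' iterations: 0, '!=' iterations: 2" */
		printf("'<' iterations: %d, '!=' iterations: %d\n", lt, ne);
		return 0;
	}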
Signed-off-by: Christophe Leroy christophe.leroy@c-s.fr Fixes: cbd18991e24f ("powerpc/mm: Fix an Oops in kasan_mmu_init()") Cc: stable@vger.kernel.org Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/4e7b56865e01569058914c991143f5961b5d4719.158350733... Signed-off-by: Sasha Levin sashal@kernel.org --- arch/powerpc/mm/kasan/kasan_init_32.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c index 0e6ed4413eeac..1cfe57b51d7e3 100644 --- a/arch/powerpc/mm/kasan/kasan_init_32.c +++ b/arch/powerpc/mm/kasan/kasan_init_32.c @@ -117,7 +117,7 @@ static void __init kasan_remap_early_shadow_ro(void)
kasan_populate_pte(kasan_early_shadow_pte, prot);
- for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) { + for (k_cur = k_start & PAGE_MASK; k_cur != k_end; k_cur += PAGE_SIZE) { pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur); pte_t *ptep = pte_offset_kernel(pmd, k_cur);
From: Faiz Abbas faiz_abbas@ti.com
[ Upstream commit 7907ebe741a7f14ed12889ebe770438a4ff47613 ]
Export sdhci_set_data_timeout_irq() so that it is accessible from platform drivers.
Signed-off-by: Faiz Abbas faiz_abbas@ti.com Acked-by: Adrian Hunter adrian.hunter@intel.com Link: https://lore.kernel.org/r/20200116105154.7685-6-faiz_abbas@ti.com Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mmc/host/sdhci.c | 3 ++- drivers/mmc/host/sdhci.h | 1 + 2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c index 659a9459ace34..29c854e48bc69 100644 --- a/drivers/mmc/host/sdhci.c +++ b/drivers/mmc/host/sdhci.c @@ -992,7 +992,7 @@ static void sdhci_set_transfer_irqs(struct sdhci_host *host) sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE); }
-static void sdhci_set_data_timeout_irq(struct sdhci_host *host, bool enable) +void sdhci_set_data_timeout_irq(struct sdhci_host *host, bool enable) { if (enable) host->ier |= SDHCI_INT_DATA_TIMEOUT; @@ -1001,6 +1001,7 @@ static void sdhci_set_data_timeout_irq(struct sdhci_host *host, bool enable) sdhci_writel(host, host->ier, SDHCI_INT_ENABLE); sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE); } +EXPORT_SYMBOL_GPL(sdhci_set_data_timeout_irq);
static void sdhci_set_timeout(struct sdhci_host *host, struct mmc_command *cmd) { diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h index fe83ece6965b1..4613d71b3cd6e 100644 --- a/drivers/mmc/host/sdhci.h +++ b/drivers/mmc/host/sdhci.h @@ -795,5 +795,6 @@ void sdhci_end_tuning(struct sdhci_host *host); void sdhci_reset_tuning(struct sdhci_host *host); void sdhci_send_tuning(struct sdhci_host *host, u32 opcode); void sdhci_abort_tuning(struct sdhci_host *host, u32 opcode); +void sdhci_set_data_timeout_irq(struct sdhci_host *host, bool enable);
#endif /* __SDHCI_HW_H */
From: Faiz Abbas faiz_abbas@ti.com
[ Upstream commit 7d76ed77cfbd39468ae58d419f537d35ca892d83 ]
Refactor sdhci_set_timeout() such that platform drivers can do some functionality in a set_timeout() callback and then call __sdhci_set_timeout() to complete the operation.
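As a usage sketch, a hypothetical platform driver's .set_timeout callback can now do its own adjustment and then hand the rest back to the core; only __sdhci_set_timeout() and sdhci_set_data_timeout_irq() come from this series, the rest of the names are made up:

	static void sdhci_myplat_set_timeout(struct sdhci_host *host,
					     struct mmc_command *cmd)
	{
		/* platform-specific handling first, e.g. force the data timeout IRQ on */
		sdhci_set_data_timeout_irq(host, true);

		/* then let the core calculate and program the timeout register */
		__sdhci_set_timeout(host, cmd);
	}

	static const struct sdhci_ops sdhci_myplat_ops = {
		.set_timeout	= sdhci_myplat_set_timeout,
		/* ... other ops ... */
	};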
Signed-off-by: Faiz Abbas faiz_abbas@ti.com Acked-by: Adrian Hunter adrian.hunter@intel.com Link: https://lore.kernel.org/r/20200116105154.7685-7-faiz_abbas@ti.com Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mmc/host/sdhci.c | 38 ++++++++++++++++++++------------------ drivers/mmc/host/sdhci.h | 1 + 2 files changed, 21 insertions(+), 18 deletions(-)
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c index 29c854e48bc69..1c9ca6864be36 100644 --- a/drivers/mmc/host/sdhci.c +++ b/drivers/mmc/host/sdhci.c @@ -1003,27 +1003,29 @@ void sdhci_set_data_timeout_irq(struct sdhci_host *host, bool enable) } EXPORT_SYMBOL_GPL(sdhci_set_data_timeout_irq);
-static void sdhci_set_timeout(struct sdhci_host *host, struct mmc_command *cmd) +void __sdhci_set_timeout(struct sdhci_host *host, struct mmc_command *cmd) { - u8 count; - - if (host->ops->set_timeout) { - host->ops->set_timeout(host, cmd); - } else { - bool too_big = false; - - count = sdhci_calc_timeout(host, cmd, &too_big); + bool too_big = false; + u8 count = sdhci_calc_timeout(host, cmd, &too_big); + + if (too_big && + host->quirks2 & SDHCI_QUIRK2_DISABLE_HW_TIMEOUT) { + sdhci_calc_sw_timeout(host, cmd); + sdhci_set_data_timeout_irq(host, false); + } else if (!(host->ier & SDHCI_INT_DATA_TIMEOUT)) { + sdhci_set_data_timeout_irq(host, true); + }
-                if (too_big &&
-                    host->quirks2 & SDHCI_QUIRK2_DISABLE_HW_TIMEOUT) {
-                        sdhci_calc_sw_timeout(host, cmd);
-                        sdhci_set_data_timeout_irq(host, false);
-                } else if (!(host->ier & SDHCI_INT_DATA_TIMEOUT)) {
-                        sdhci_set_data_timeout_irq(host, true);
-                }
+        sdhci_writeb(host, count, SDHCI_TIMEOUT_CONTROL);
+}
+EXPORT_SYMBOL_GPL(__sdhci_set_timeout);
-                sdhci_writeb(host, count, SDHCI_TIMEOUT_CONTROL);
-        }
+static void sdhci_set_timeout(struct sdhci_host *host, struct mmc_command *cmd)
+{
+        if (host->ops->set_timeout)
+                host->ops->set_timeout(host, cmd);
+        else
+                __sdhci_set_timeout(host, cmd);
 }
 static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
index 4613d71b3cd6e..76e69288632db 100644
--- a/drivers/mmc/host/sdhci.h
+++ b/drivers/mmc/host/sdhci.h
@@ -796,5 +796,6 @@ void sdhci_reset_tuning(struct sdhci_host *host);
 void sdhci_send_tuning(struct sdhci_host *host, u32 opcode);
 void sdhci_abort_tuning(struct sdhci_host *host, u32 opcode);
 void sdhci_set_data_timeout_irq(struct sdhci_host *host, bool enable);
+void __sdhci_set_timeout(struct sdhci_host *host, struct mmc_command *cmd);
#endif /* __SDHCI_HW_H */
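A hedged sketch of the call pattern the commit message describes (my reading; the driver and quirk-helper names are hypothetical, not from this series): a platform set_timeout() callback does its vendor-specific work first and then hands off to __sdhci_set_timeout() so the core still performs the standard timeout programming.

/* hypothetical platform-driver snippet; sdhci_ops and __sdhci_set_timeout() are the real core interfaces */
#include "sdhci.h"

/* placeholder for whatever vendor-specific setup the platform needs */
static void my_apply_timeout_quirk(struct sdhci_host *host)
{
        /* e.g. adjust a vendor register before SDHCI_TIMEOUT_CONTROL is written */
}

static void my_sdhci_set_timeout(struct sdhci_host *host, struct mmc_command *cmd)
{
        my_apply_timeout_quirk(host);
        __sdhci_set_timeout(host, cmd);         /* core completes the standard programming */
}

static const struct sdhci_ops my_sdhci_ops = {
        .set_timeout = my_sdhci_set_timeout,
        /* other callbacks omitted */
};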
On Thu, 16 Apr 2020 at 19:05, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
Results from Linaro’s test farm. No regressions on arm64, arm, x86_64, and i386.
Summary
------------------------------------------------------------------------
kernel: 5.5.18-rc1
git repo: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
git branch: linux-5.5.y
git commit: 23ca98930364cb8a4bdc5a27feb6d7fe29668e47
git describe: v5.5.17-258-g23ca98930364
Test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-5.5-oe/build/v5.5.17-258-...
No regressions (compared to build v5.5.16-45-g95e8add082c3)
No fixes (compared to build v5.5.16-45-g95e8add082c3)
Ran 33449 total tests in the following environments and test suites.
Environments
--------------
- dragonboard-410c
- hi6220-hikey
- i386
- juno-r2
- juno-r2-compat
- juno-r2-kasan
- nxp-ls2088
- qemu_arm
- qemu_arm64
- qemu_i386
- qemu_x86_64
- x15
- x86
- x86-kasan
Test Suites
-----------
* build
* install-android-platform-tools-r2600
* install-android-platform-tools-r2800
* libgpiod
* libhugetlbfs
* linux-log-parser
* ltp-containers-tests
* ltp-cve-tests
* ltp-dio-tests
* ltp-io-tests
* ltp-ipc-tests
* ltp-syscalls-tests
* perf
* kselftest
* kvm-unit-tests
* ltp-commands-tests
* ltp-hugetlb-tests
* ltp-math-tests
* ltp-mm-tests
* network-basic-tests
* ltp-cap_bounds-tests
* ltp-cpuhotplug-tests
* ltp-crypto-tests
* ltp-fcntl-locktests-tests
* ltp-filecaps-tests
* ltp-fs-tests
* ltp-fs_bind-tests
* ltp-fs_perms_simple-tests
* ltp-fsx-tests
* ltp-nptl-tests
* ltp-open-posix-tests
* ltp-pty-tests
* ltp-sched-tests
* ltp-securebits-tests
* v4l2-compliance
* spectre-meltdown-checker-test
* kselftest-vsyscall-mode-none
On 4/16/20 6:20 AM, Greg Kroah-Hartman wrote:
Build results:
        total: 156 pass: 156 fail: 0
Qemu test results:
        total: 428 pass: 428 fail: 0
Guenter
On 4/16/20 7:20 AM, Greg Kroah-Hartman wrote:
Compiled and booted on my test system. No dmesg regressions. However, reboot and poweroff hang forever; this is the same problem I am seeing on Linux 5.7-rc1 and Linux 5.6.5-rc1.
I am starting a bisect on 5.6.5.
thanks,
-- Shuah
On 16/04/2020 14:20, Greg Kroah-Hartman wrote:
All tests are passing for Tegra ...
Test results for stable-v5.5:
    13 builds: 13 pass, 0 fail
    24 boots:  24 pass, 0 fail
    40 tests:  40 pass, 0 fail
Linux version: 5.5.18-rc1-g23ca98930364
Boards tested: tegra124-jetson-tk1, tegra186-p2771-0000, tegra194-p2972-0000,
               tegra20-ventana, tegra210-p2371-2180, tegra210-p3450-0000,
               tegra30-cardhu-a04
Cheers Jon