This is the start of the stable review cycle for the 5.0.2 release.
There are 25 patches in this series, all of which will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.
Responses should be made by Thu Mar 14 17:03:43 UTC 2019.
Anything received after that time might be too late.
The whole patch series can be found in one patch at:
https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.0.2-rc1.…
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.0.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Linux 5.0.2-rc1
Peter Zijlstra (Intel) <peterz(a)infradead.org>
perf/x86/intel: Implement support for TSX Force Abort
Peter Zijlstra (Intel) <peterz(a)infradead.org>
x86: Add TSX Force Abort CPUID/MSR
Peter Zijlstra (Intel) <peterz(a)infradead.org>
perf/x86/intel: Generalize dynamic constraint creation
Peter Zijlstra (Intel) <peterz(a)infradead.org>
perf/x86/intel: Make cpuc allocations consistent
Daniel F. Dickinson <cshored(a)thecshore.com>
ath9k: Avoid OF no-EEPROM quirks without qca,no-eeprom
Jackie Liu <liuyun01(a)kylinos.cn>
scripts/gdb: replace flags (MS_xyz -> SB_xyz)
Gao Xiang <gaoxiang25(a)huawei.com>
staging: erofs: compressed_pages should not be accessed again after freed
Gao Xiang <gaoxiang25(a)huawei.com>
staging: erofs: keep corrupted fs from crashing kernel in erofs_namei()
Andreas Gruenbacher <agruenba(a)redhat.com>
gfs2: Fix missed wakeups in find_insert_glock
Jakub Sitnicki <jakub(a)cloudflare.com>
bpf: Stop the psock parser before canceling its work
Mika Westerberg <mika.westerberg(a)linux.intel.com>
Revert "PCI/PME: Implement runtime PM callbacks"
Sean Young <sean(a)mess.org>
media: Revert "media: rc: some events are dropped by userspace"
Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
drm: disable uncached DMA optimization for ARM and arm64
Marek Szyprowski <m.szyprowski(a)samsung.com>
ARM: dts: exynos: Fix max voltage for buck8 regulator on Odroid XU3/XU4
Marek Szyprowski <m.szyprowski(a)samsung.com>
ARM: dts: exynos: Add minimal clkout parameters to Exynos3250 PMU
Marek Szyprowski <m.szyprowski(a)samsung.com>
ARM: dts: exynos: Fix pinctrl definition for eMMC RTSN line on Odroid X2/U3
Alistair Strachan <astrachan(a)google.com>
arm64: dts: hikey: Revert "Enable HS200 mode on eMMC"
Jan Kiszka <jan.kiszka(a)siemens.com>
arm64: dts: hikey: Give wifi some time after power-on
Jan Kiszka <jan.kiszka(a)siemens.com>
arm64: dts: zcu100-revC: Give wifi some time after power-on
Alexander Shishkin <alexander.shishkin(a)linux.intel.com>
x86/PCI: Fixup RTIT_BAR of Intel Denverton Trace Hub
Gustavo A. R. Silva <gustavo(a)embeddedor.com>
scsi: aacraid: Fix missing break in switch statement
Gustavo A. R. Silva <gustavo(a)embeddedor.com>
iscsi_ibft: Fix missing break in switch statement
Vincent Batts <vbatts(a)hashbangbash.com>
Input: elan_i2c - add id for touchpad found in Lenovo s21e-20
Jason Gerecke <jason.gerecke(a)wacom.com>
Input: wacom_serial4 - add support for Wacom ArtPad II tablet
Alistair Strachan <astrachan(a)google.com>
media: uvcvideo: Fix 'type' check leading to overflow
-------------
Diffstat:
Makefile | 4 +-
arch/arm/boot/dts/exynos3250.dtsi | 3 +
arch/arm/boot/dts/exynos4412-odroid-common.dtsi | 13 +-
arch/arm/boot/dts/exynos5422-odroid-core.dtsi | 2 +-
arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts | 2 +-
arch/arm64/boot/dts/xilinx/zynqmp-zcu100-revC.dts | 1 +
arch/x86/events/core.c | 13 +-
arch/x86/events/intel/core.c | 154 +++++++++++++-----
arch/x86/events/perf_event.h | 17 +-
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/msr-index.h | 6 +
arch/x86/pci/fixup.c | 16 ++
drivers/firmware/iscsi_ibft.c | 1 +
drivers/input/mouse/elan_i2c_core.c | 1 +
drivers/input/tablet/wacom_serial4.c | 2 +
drivers/media/rc/rc-main.c | 13 +-
drivers/media/usb/uvc/uvc_driver.c | 14 +-
drivers/net/wireless/ath/ath9k/init.c | 6 +-
drivers/pci/pcie/pme.c | 27 ----
drivers/scsi/aacraid/commsup.c | 5 +-
drivers/staging/erofs/namei.c | 183 ++++++++++++----------
drivers/staging/erofs/unzip_vle.c | 38 ++---
drivers/staging/erofs/unzip_vle.h | 3 +-
drivers/staging/erofs/unzip_vle_lz4.c | 19 +--
fs/gfs2/glock.c | 2 +-
include/drm/drm_cache.h | 18 +++
net/core/skmsg.c | 1 +
scripts/gdb/linux/constants.py.in | 12 +-
scripts/gdb/linux/proc.py | 12 +-
29 files changed, 366 insertions(+), 223 deletions(-)
On Wed, Mar 13, 2019 at 09:17:15PM +0800, 陈华才 wrote:
> Hi Greg,
>
> 4.9 needs a modification to spinlock.h as well; please wait for my patch.
>
>
>
> --- Original Message ---
> From: "Greg Kroah-Hartman" <gregkh(a)linuxfoundation.org>
> Sent: Wednesday, March 13, 2019, 1:10 AM
> To: "linux-kernel" <linux-kernel(a)vger.kernel.org>;
> Subject: [PATCH 4.9 81/96] MIPS: Loongson: Introduce and use loongson_llsc_mb()
> 4.9-stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> [ Upstream commit e02e07e3127d8aec1f4bcdfb2fc52a2d99b4859e ]
>
> On the Loongson-2G/2H/3A/3B there is a hardware flaw: ll/sc and
> lld/scd have very weak ordering. We should add sync instructions
> "before each ll/lld" and "at the branch-target between ll/sc" as a
> workaround. Otherwise, this flaw will occasionally cause deadlocks
> (e.g. when running heavy load tests with LTP).
>
> Below is the explanation from the CPU designer:
>
> "For Loongson 3 family, when a memory access instruction (load, store,
> or prefetch)'executing occurs between the execution of LL and SC, the
> success or failure of SC is not predictable. Although programmer would
> not insert memory access instructions between LL and SC, the memory
> instructions before LL in program-order, may dynamically executed
> between the execution of LL/SC, so a memory fence (SYNC) is needed
> before LL/LLD to avoid this situation.
>
> Since Loongson-3A R2 (3A2000), we have improved our hardware design to
> handle this case. But we later deduced a rare circumstance in which
> memory instructions speculatively executed due to branch misprediction
> between LL/SC still fall into the above case, so a memory fence (SYNC)
> at the branch-target (if the target is not between LL/SC) is needed for
> Loongson 3A1000, 3B1500, 3A2000 and 3A3000.
>
> Our processors are continually evolving, and we aim to remove all these
> workaround SYNCs around LL/SC in upcoming processors."
>
> Here is an example:
>
> Both cpu1 and cpu2 simultaneously run atomic_add by 1 on the same atomic
> variable. This bug can cause the 'sc' issued by both cpus (in atomic_add)
> to succeed at the same time ('sc' returns 1), so the variable is sometimes
> only *incremented by 1*, which is wrong and unacceptable (it should be
> incremented by 2).
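>
> To make the shape of the workaround concrete, here is a rough sketch of
> an ll/sc loop with the extra barrier in place (illustrative only; the
> actual patch does this via the loongson_llsc_mb() macro it adds to
> asm/barrier.h and the existing ll/sc loops in asm/atomic.h and friends):
>
>     static inline void atomic_add_sketch(int i, atomic_t *v)
>     {
>             int temp;
>
>             loongson_llsc_mb();     /* SYNC before the ll, per the flaw above */
>             __asm__ __volatile__(
>             "1:     ll      %0, %1          # load-linked counter   \n"
>             "       addu    %0, %2          # add the increment     \n"
>             "       sc      %0, %1          # store-conditional     \n"
>             "       beqz    %0, 1b          # retry if sc failed    \n"
>             : "=&r" (temp), "+m" (v->counter)
>             : "Ir" (i));
>     }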
>
> Why disable fix-loongson3-llsc in the compiler?
> Because the compiler fix would cause problems in the kernel's __ex_table
> section.
>
> This patch fixes all the cases in the kernel, but note:
>
> +. the fix at the end of futex_atomic_cmpxchg_inatomic is for the
> branch-target of 'bne'; there are other cases, such as
> atomic_sub_if_positive/cmpxchg/xchg, where smp_mb__before_llsc() and
> smp_llsc_mb() happen to cover both the ll and the branch-target, just
> like this one.
>
> +. Loongson 3 does not support CONFIG_EDAC_ATOMIC_SCRUB, so there is no
> need to touch edac.h
>
> +. local_ops and cmpxchg_local should not be affected by this bug since
> only the owner can write.
>
> +. mips_atomic_set in syscall.c is deprecated and rarely used, so just
> let it go
>
> Signed-off-by: Huacai Chen <chenhc(a)lemote.com>
> Signed-off-by: Huang Pei <huangpei(a)loongson.cn>
> [paul.burton(a)mips.com:
> - Simplify the addition of -mno-fix-loongson3-llsc to cflags, and add
> a comment describing why it's there.
> - Make loongson_llsc_mb() a no-op when
> CONFIG_CPU_LOONGSON3_WORKAROUNDS=n, rather than a compiler memory
> barrier.
> - Add a comment describing the bug & how loongson_llsc_mb() helps
> in asm/barrier.h.]
> Signed-off-by: Paul Burton <paul.burton(a)mips.com>
> Cc: Ralf Baechle <ralf(a)linux-mips.org>
> Cc: ambrosehua(a)gmail.com
> Cc: Steven J . Hill <Steven.Hill(a)cavium.com>
> Cc: linux-mips(a)linux-mips.org
> Cc: Fuxin Zhang <zhangfx(a)lemote.com>
> Cc: Zhangjin Wu <wuzhangjin(a)gmail.com>
> Cc: Li Xuefeng <lixuefeng(a)loongson.cn>
> Cc: Xu Chenghua <xuchenghua(a)loongson.cn>
> Signed-off-by: Sasha Levin <sashal(a)kernel.org>
> ---
> arch/mips/Kconfig | 15 ++++++++++++++
> arch/mips/include/asm/atomic.h | 6 ++++++
> arch/mips/include/asm/barrier.h | 36 +++++++++++++++++++++++++++++++++
> arch/mips/include/asm/bitops.h | 5 +++++
> arch/mips/include/asm/futex.h | 3 +++
> arch/mips/include/asm/pgtable.h | 2 ++
> arch/mips/loongson64/Platform | 23 +++++++++++++++++++++
> arch/mips/mm/tlbex.c | 10 +++++++++
> 8 files changed, 100 insertions(+)
Ok, I will go drop this from all stable queues now, thanks!
greg k-h
From: David Howells <dhowells(a)redhat.com>
[ Upstream commit bb2ba2d75a2d673e76ddaf13a9bd30d6a8b1bb08 ]
Fix the creation of shortcuts for which the length of the index key value
is an exact multiple of the machine word size. The problem is that the
code that blanks off the unused bits of the shortcut value malfunctions if
the number of bits in the last word equals machine word size. This is due
to the "<<" operator being given a shift of zero in this case, and so the
mask that should be all zeros is all ones instead. This causes the
subsequent masking operation to clear everything rather than clearing
nothing.
Ordinarily, the presence of the hash at the beginning of the tree index key
makes the issue very hard to test for, but in this case, it was encountered
due to a development mistake that caused the hash output to be either 0
(keyring) or 1 (non-keyring) only. This made it susceptible to the
keyctl/unlink/valid test in the keyutils package.
The fix is simply to skip the blanking if the shift would be 0. For
example, an index key that is 64 bits long would produce a 0 shift and thus
a 'blank' of all 1s. This would then be inverted and AND'd onto the
index_key, incorrectly clearing the entire last word.
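To see the failure mode in isolation, here is a minimal userspace sketch
(assuming a 64-bit unsigned long; the variable names are illustrative and
not taken from the kernel sources):

	#include <stdio.h>

	int main(void)
	{
		unsigned long last_word = 0xdeadbeefcafef00dUL;
		int used_bits = 64 % 64;                 /* key fills the word exactly: 0 */
		unsigned long blank = ~0UL << used_bits; /* shift by 0: still all ones */

		printf("%#lx\n", last_word & ~blank);    /* prints 0: word wiped out */

		if (used_bits)                           /* the fix: skip the blanking */
			last_word &= ~blank;
		printf("%#lx\n", last_word);             /* value preserved */
		return 0;
	}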
Fixes: 3cb989501c26 ("Add a generic associative array implementation.")
Signed-off-by: David Howells <dhowells(a)redhat.com>
Signed-off-by: James Morris <james.morris(a)microsoft.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
lib/assoc_array.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/lib/assoc_array.c b/lib/assoc_array.c
index 0d122543bd63..1db287fffb67 100644
--- a/lib/assoc_array.c
+++ b/lib/assoc_array.c
@@ -780,9 +780,11 @@ static bool assoc_array_insert_into_terminal_node(struct assoc_array_edit *edit,
new_s0->index_key[i] =
ops->get_key_chunk(index_key, i * ASSOC_ARRAY_KEY_CHUNK_SIZE);
- blank = ULONG_MAX << (level & ASSOC_ARRAY_KEY_CHUNK_MASK);
- pr_devel("blank off [%zu] %d: %lx\n", keylen - 1, level, blank);
- new_s0->index_key[keylen - 1] &= ~blank;
+ if (level & ASSOC_ARRAY_KEY_CHUNK_MASK) {
+ blank = ULONG_MAX << (level & ASSOC_ARRAY_KEY_CHUNK_MASK);
+ pr_devel("blank off [%zu] %d: %lx\n", keylen - 1, level, blank);
+ new_s0->index_key[keylen - 1] &= ~blank;
+ }
/* This now reduces to a node splitting exercise for which we'll need
* to regenerate the disparity table.
--
2.19.1
From: Dietmar Eggemann <dietmar.eggemann(a)arm.com>
[ Upstream commit 1b5ba350784242eb1f899bcffd95d2c7cff61e84 ]
Arm TC2 fails cpu hotplug stress test.
This issue was tracked down to a missing copy of the new affinity
cpumask for the vexpress-spc interrupt into struct
irq_common_data.affinity when the interrupt is migrated in
migrate_one_irq().
Fix it by replacing the arm specific hotplug cpu migration with the
generic irq code.
This is the counterpart implementation to commit 217d453d473c ("arm64:
fix a migrating irq bug when hotplug cpu").
Tested with cpu hotplug stress test on Arm TC2 (multi_v7_defconfig plus
CONFIG_ARM_BIG_LITTLE_CPUFREQ=y and CONFIG_ARM_VEXPRESS_SPC_CPUFREQ=y).
The vexpress-spc interrupt (irq=22) on this board is affine to CPU0.
Its affinity cpumask now changes correctly e.g. from 0 to 1-4 when
CPU0 is hotplugged out.
Suggested-by: Marc Zyngier <marc.zyngier(a)arm.com>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann(a)arm.com>
Acked-by: Marc Zyngier <marc.zyngier(a)arm.com>
Reviewed-by: Linus Walleij <linus.walleij(a)linaro.org>
Signed-off-by: Russell King <rmk+kernel(a)armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
arch/arm/Kconfig | 1 +
arch/arm/include/asm/irq.h | 1 -
arch/arm/kernel/irq.c | 62 --------------------------------------
arch/arm/kernel/smp.c | 2 +-
4 files changed, 2 insertions(+), 64 deletions(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 34e1569a11ee..3a0277c6c060 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1475,6 +1475,7 @@ config NR_CPUS
config HOTPLUG_CPU
bool "Support for hot-pluggable CPUs"
depends on SMP
+ select GENERIC_IRQ_MIGRATION
help
Say Y here to experiment with turning CPUs off and on. CPUs
can be controlled through /sys/devices/system/cpu.
diff --git a/arch/arm/include/asm/irq.h b/arch/arm/include/asm/irq.h
index 1bd9510de1b9..cae4df39f02e 100644
--- a/arch/arm/include/asm/irq.h
+++ b/arch/arm/include/asm/irq.h
@@ -24,7 +24,6 @@
#ifndef __ASSEMBLY__
struct irqaction;
struct pt_regs;
-extern void migrate_irqs(void);
extern void asm_do_IRQ(unsigned int, struct pt_regs *);
void handle_IRQ(unsigned int, struct pt_regs *);
diff --git a/arch/arm/kernel/irq.c b/arch/arm/kernel/irq.c
index 1d45320ee125..900c591913d5 100644
--- a/arch/arm/kernel/irq.c
+++ b/arch/arm/kernel/irq.c
@@ -31,7 +31,6 @@
#include <linux/smp.h>
#include <linux/init.h>
#include <linux/seq_file.h>
-#include <linux/ratelimit.h>
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/kallsyms.h>
@@ -119,64 +118,3 @@ int __init arch_probe_nr_irqs(void)
return nr_irqs;
}
#endif
-
-#ifdef CONFIG_HOTPLUG_CPU
-static bool migrate_one_irq(struct irq_desc *desc)
-{
- struct irq_data *d = irq_desc_get_irq_data(desc);
- const struct cpumask *affinity = irq_data_get_affinity_mask(d);
- struct irq_chip *c;
- bool ret = false;
-
- /*
- * If this is a per-CPU interrupt, or the affinity does not
- * include this CPU, then we have nothing to do.
- */
- if (irqd_is_per_cpu(d) || !cpumask_test_cpu(smp_processor_id(), affinity))
- return false;
-
- if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
- affinity = cpu_online_mask;
- ret = true;
- }
-
- c = irq_data_get_irq_chip(d);
- if (!c->irq_set_affinity)
- pr_debug("IRQ%u: unable to set affinity\n", d->irq);
- else if (c->irq_set_affinity(d, affinity, false) == IRQ_SET_MASK_OK && ret)
- cpumask_copy(irq_data_get_affinity_mask(d), affinity);
-
- return ret;
-}
-
-/*
- * The current CPU has been marked offline. Migrate IRQs off this CPU.
- * If the affinity settings do not allow other CPUs, force them onto any
- * available CPU.
- *
- * Note: we must iterate over all IRQs, whether they have an attached
- * action structure or not, as we need to get chained interrupts too.
- */
-void migrate_irqs(void)
-{
- unsigned int i;
- struct irq_desc *desc;
- unsigned long flags;
-
- local_irq_save(flags);
-
- for_each_irq_desc(i, desc) {
- bool affinity_broken;
-
- raw_spin_lock(&desc->lock);
- affinity_broken = migrate_one_irq(desc);
- raw_spin_unlock(&desc->lock);
-
- if (affinity_broken)
- pr_warn_ratelimited("IRQ%u no longer affine to CPU%u\n",
- i, smp_processor_id());
- }
-
- local_irq_restore(flags);
-}
-#endif /* CONFIG_HOTPLUG_CPU */
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index e42be5800f37..08ce9e36dc5a 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -218,7 +218,7 @@ int __cpu_disable(void)
/*
* OK - migrate IRQs away from this CPU
*/
- migrate_irqs();
+ irq_migrate_all_off_this_cpu();
/*
* Flush user cache and TLB mappings, and then remove this CPU
--
2.19.1