From: "David A. Long" dave.long@linaro.org
V4.4 backport of spectre patches from Russell M. King's spectre branch. Most KVM patches are excluded. Patches not yet upstream are excluded.
Russell King (18):
  ARM: add more CPU part numbers for Cortex and Brahma B15 CPUs
  ARM: bugs: prepare processor bug infrastructure
  ARM: bugs: hook processor bug checking into SMP and suspend paths
  ARM: bugs: add support for per-processor bug checking
  ARM: spectre: add Kconfig symbol for CPUs vulnerable to Spectre
  ARM: spectre-v2: harden branch predictor on context switches
  ARM: spectre-v2: add Cortex A8 and A15 validation of the IBE bit
  ARM: spectre-v2: harden user aborts in kernel space
  ARM: spectre-v2: warn about incorrect context switching functions
  ARM: spectre-v1: add speculation barrier (csdb) macros
  ARM: spectre-v1: add array_index_mask_nospec() implementation
  ARM: spectre-v1: fix syscall entry
  ARM: signal: copy registers using __copy_from_user()
  ARM: vfp: use __copy_from_user() when restoring VFP state
  ARM: oabi-compat: copy semops using __copy_from_user()
  ARM: use __inttype() in get_user()
  ARM: spectre-v1: use get_user() for __get_user()
  ARM: spectre-v1: mitigate user accesses
 arch/arm/include/asm/assembler.h   |  12 +++
 arch/arm/include/asm/barrier.h     |  32 +++++++
 arch/arm/include/asm/bugs.h        |   6 +-
 arch/arm/include/asm/cp15.h        |  18 ++++
 arch/arm/include/asm/cputype.h     |   8 ++
 arch/arm/include/asm/proc-fns.h    |   4 +
 arch/arm/include/asm/system_misc.h |  15 ++++
 arch/arm/include/asm/thread_info.h |   4 +-
 arch/arm/include/asm/uaccess.h     |  25 ++++--
 arch/arm/kernel/Makefile           |   1 +
 arch/arm/kernel/bugs.c             |  18 ++++
 arch/arm/kernel/entry-common.S     |  18 ++--
 arch/arm/kernel/entry-header.S     |  25 ++++++
 arch/arm/kernel/signal.c           |  56 ++++++------
 arch/arm/kernel/smp.c              |   4 +
 arch/arm/kernel/suspend.c          |   2 +
 arch/arm/kernel/sys_oabi-compat.c  |   8 +-
 arch/arm/lib/copy_from_user.S      |   9 ++
 arch/arm/mm/Kconfig                |  23 +++++
 arch/arm/mm/Makefile               |   2 +-
 arch/arm/mm/fault.c                |   3 +
 arch/arm/mm/proc-macros.S          |   3 +-
 arch/arm/mm/proc-v7-2level.S       |   6 --
 arch/arm/mm/proc-v7-bugs.c         | 112 ++++++++++++++++++++++++
 arch/arm/mm/proc-v7.S              | 133 ++++++++++++++++++++++-------
 arch/arm/vfp/vfpmodule.c           |  17 ++--
 26 files changed, 462 insertions(+), 102 deletions(-)
 create mode 100644 arch/arm/kernel/bugs.c
 create mode 100644 arch/arm/mm/proc-v7-bugs.c
From: Russell King rmk+kernel@armlinux.org.uk
Commit f5683e76f35b4ec5891031b6a29036efe0a1ff84 upstream.
Add CPU part numbers for Cortex A53, A57, A72, A73, A75 and the Broadcom Brahma B15 CPU.
Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Acked-by: Florian Fainelli f.fainelli@gmail.com Boot-tested-by: Tony Lindgren tony@atomide.com Reviewed-by: Tony Lindgren tony@atomide.com Acked-by: Marc Zyngier marc.zyngier@arm.com Signed-off-by: David A. Long dave.long@linaro.org --- arch/arm/include/asm/cputype.h | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/arch/arm/include/asm/cputype.h b/arch/arm/include/asm/cputype.h
index e9d04f475929..76bb3bd060d1 100644
--- a/arch/arm/include/asm/cputype.h
+++ b/arch/arm/include/asm/cputype.h
@@ -74,8 +74,16 @@
 #define ARM_CPU_PART_CORTEX_A12	0x4100c0d0
 #define ARM_CPU_PART_CORTEX_A17	0x4100c0e0
 #define ARM_CPU_PART_CORTEX_A15	0x4100c0f0
+#define ARM_CPU_PART_CORTEX_A53	0x4100d030
+#define ARM_CPU_PART_CORTEX_A57	0x4100d070
+#define ARM_CPU_PART_CORTEX_A72	0x4100d080
+#define ARM_CPU_PART_CORTEX_A73	0x4100d090
+#define ARM_CPU_PART_CORTEX_A75	0x4100d0a0
 #define ARM_CPU_PART_MASK	0xff00fff0

+/* Broadcom cores */
+#define ARM_CPU_PART_BRAHMA_B15	0x420000f0
+
 #define ARM_CPU_XSCALE_ARCH_MASK	0xe000
 #define ARM_CPU_XSCALE_ARCH_V1		0x2000
 #define ARM_CPU_XSCALE_ARCH_V2		0x4000
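As an aside, these constants are meant to be compared against the value returned by the existing read_cpuid_part() helper, which masks MIDR down to the implementer and part-number fields. A minimal sketch (the helper name below is invented for illustration):

	#include <asm/cputype.h>

	/* Hypothetical example: read_cpuid_part() returns
	 * read_cpuid_id() & ARM_CPU_PART_MASK, so it can be compared
	 * directly against the ARM_CPU_PART_* values added above. */
	static bool cpu_is_brahma_b15(void)
	{
		return read_cpuid_part() == ARM_CPU_PART_BRAHMA_B15;
	}

Later patches in this series use exactly this kind of switch on read_cpuid_part() to pick per-CPU workarounds.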
From: Russell King rmk+kernel@armlinux.org.uk
Commit a5b9177f69329314721aa7022b7e69dab23fa1f0 upstream.
Prepare the processor bug infrastructure so that it can be expanded to check for per-processor bugs.
Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Reviewed-by: Florian Fainelli f.fainelli@gmail.com Boot-tested-by: Tony Lindgren tony@atomide.com Reviewed-by: Tony Lindgren tony@atomide.com Acked-by: Marc Zyngier marc.zyngier@arm.com Signed-off-by: David A. Long dave.long@linaro.org --- arch/arm/include/asm/bugs.h | 4 ++-- arch/arm/kernel/Makefile | 1 + arch/arm/kernel/bugs.c | 9 +++++++++ 3 files changed, 12 insertions(+), 2 deletions(-) create mode 100644 arch/arm/kernel/bugs.c
diff --git a/arch/arm/include/asm/bugs.h b/arch/arm/include/asm/bugs.h
index a97f1ea708d1..ed122d294f3f 100644
--- a/arch/arm/include/asm/bugs.h
+++ b/arch/arm/include/asm/bugs.h
@@ -10,10 +10,10 @@
 #ifndef __ASM_BUGS_H
 #define __ASM_BUGS_H

-#ifdef CONFIG_MMU
 extern void check_writebuffer_bugs(void);

-#define check_bugs() check_writebuffer_bugs()
+#ifdef CONFIG_MMU
+extern void check_bugs(void);
 #else
 #define check_bugs() do { } while (0)
 #endif
diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
index 3c789496297f..f936cec24f72 100644
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -30,6 +30,7 @@ else
 obj-y		+= entry-armv.o
 endif

+obj-$(CONFIG_MMU)		+= bugs.o
 obj-$(CONFIG_CPU_IDLE)		+= cpuidle.o
 obj-$(CONFIG_ISA_DMA_API)	+= dma.o
 obj-$(CONFIG_FIQ)		+= fiq.o fiqasm.o
diff --git a/arch/arm/kernel/bugs.c b/arch/arm/kernel/bugs.c
new file mode 100644
index 000000000000..88024028bb70
--- /dev/null
+++ b/arch/arm/kernel/bugs.c
@@ -0,0 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/init.h>
+#include <asm/bugs.h>
+#include <asm/proc-fns.h>
+
+void __init check_bugs(void)
+{
+	check_writebuffer_bugs();
+}
From: Russell King rmk+kernel@armlinux.org.uk
Commit 26602161b5ba795928a5a719fe1d5d9f2ab5c3ef upstream.
Check for CPU bugs when secondary processors are being brought online, and also when CPUs are resuming from a low power mode. This gives an opportunity to check that processor specific bug workarounds are correctly enabled for all paths that a CPU re-enters the kernel.
Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Reviewed-by: Florian Fainelli f.fainelli@gmail.com Boot-tested-by: Tony Lindgren tony@atomide.com Reviewed-by: Tony Lindgren tony@atomide.com Acked-by: Marc Zyngier marc.zyngier@arm.com Signed-off-by: David A. Long dave.long@linaro.org --- arch/arm/include/asm/bugs.h | 2 ++ arch/arm/kernel/bugs.c | 5 +++++ arch/arm/kernel/smp.c | 4 ++++ arch/arm/kernel/suspend.c | 2 ++ 4 files changed, 13 insertions(+)
diff --git a/arch/arm/include/asm/bugs.h b/arch/arm/include/asm/bugs.h
index ed122d294f3f..73a99c72a930 100644
--- a/arch/arm/include/asm/bugs.h
+++ b/arch/arm/include/asm/bugs.h
@@ -14,8 +14,10 @@ extern void check_writebuffer_bugs(void);

 #ifdef CONFIG_MMU
 extern void check_bugs(void);
+extern void check_other_bugs(void);
 #else
 #define check_bugs() do { } while (0)
+#define check_other_bugs() do { } while (0)
 #endif

 #endif
diff --git a/arch/arm/kernel/bugs.c b/arch/arm/kernel/bugs.c
index 88024028bb70..16e7ba2a9cc4 100644
--- a/arch/arm/kernel/bugs.c
+++ b/arch/arm/kernel/bugs.c
@@ -3,7 +3,12 @@
 #include <asm/bugs.h>
 #include <asm/proc-fns.h>

+void check_other_bugs(void)
+{
+}
+
 void __init check_bugs(void)
 {
 	check_writebuffer_bugs();
+	check_other_bugs();
 }
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index b26361355dae..334c319f5766 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -29,6 +29,7 @@
 #include <linux/irq_work.h>

 #include <linux/atomic.h>
+#include <asm/bugs.h>
 #include <asm/smp.h>
 #include <asm/cacheflush.h>
 #include <asm/cpu.h>
@@ -396,6 +397,9 @@ asmlinkage void secondary_start_kernel(void)
	 * before we continue - which happens after __cpu_up returns.
	 */
	set_cpu_online(cpu, true);
+
+	check_other_bugs();
+
	complete(&cpu_running);

	local_irq_enable();
diff --git a/arch/arm/kernel/suspend.c b/arch/arm/kernel/suspend.c
index 9a2f882a0a2d..134f0d432610 100644
--- a/arch/arm/kernel/suspend.c
+++ b/arch/arm/kernel/suspend.c
@@ -1,6 +1,7 @@
 #include <linux/init.h>
 #include <linux/slab.h>

+#include <asm/bugs.h>
 #include <asm/cacheflush.h>
 #include <asm/idmap.h>
 #include <asm/pgalloc.h>
@@ -34,6 +35,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
		cpu_switch_mm(mm->pgd, mm);
		local_flush_bp_all();
		local_flush_tlb_all();
+		check_other_bugs();
	}

	return ret;
From: Russell King rmk+kernel@armlinux.org.uk
Commit 9d3a04925deeabb97c8e26d940b501a2873e8af3 upstream.
Add support for per-processor bug checking - each processor function descriptor gains a function pointer for this check, which must not be an __init function. If non-NULL, this will be called whenever a CPU enters the kernel via whichever path (boot CPU, secondary CPU startup, CPU resuming, etc.)
This allows processor specific bug checks to validate that workaround bits are properly enabled by firmware via all entry paths to the kernel.
Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Reviewed-by: Florian Fainelli f.fainelli@gmail.com Boot-tested-by: Tony Lindgren tony@atomide.com Reviewed-by: Tony Lindgren tony@atomide.com Acked-by: Marc Zyngier marc.zyngier@arm.com Signed-off-by: David A. Long dave.long@linaro.org --- arch/arm/include/asm/proc-fns.h | 4 ++++ arch/arm/kernel/bugs.c | 4 ++++ arch/arm/mm/proc-macros.S | 3 ++- 3 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
index 8877ad5ffe10..f379f5f849a9 100644
--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -36,6 +36,10 @@ extern struct processor {
	 * Set up any processor specifics
	 */
	void (*_proc_init)(void);
+	/*
+	 * Check for processor bugs
+	 */
+	void (*check_bugs)(void);
	/*
	 * Disable any processor specifics
	 */
diff --git a/arch/arm/kernel/bugs.c b/arch/arm/kernel/bugs.c
index 16e7ba2a9cc4..7be511310191 100644
--- a/arch/arm/kernel/bugs.c
+++ b/arch/arm/kernel/bugs.c
@@ -5,6 +5,10 @@

 void check_other_bugs(void)
 {
+#ifdef MULTI_CPU
+	if (processor.check_bugs)
+		processor.check_bugs();
+#endif
 }

 void __init check_bugs(void)
diff --git a/arch/arm/mm/proc-macros.S b/arch/arm/mm/proc-macros.S
index c671f345266a..212147c78f4b 100644
--- a/arch/arm/mm/proc-macros.S
+++ b/arch/arm/mm/proc-macros.S
@@ -258,13 +258,14 @@
	mcr	p15, 0, ip, c7, c10, 4		@ data write barrier
	.endm

-.macro define_processor_functions name:req, dabort:req, pabort:req, nommu=0, suspend=0
+.macro define_processor_functions name:req, dabort:req, pabort:req, nommu=0, suspend=0, bugs=0
	.type	\name\()_processor_functions, #object
	.align 2
ENTRY(\name\()_processor_functions)
	.word	\dabort
	.word	\pabort
	.word	cpu_\name\()_proc_init
+	.word	\bugs
	.word	cpu_\name\()_proc_fin
	.word	cpu_\name\()_reset
	.word	cpu_\name\()_do_idle
From: Russell King rmk+kernel@armlinux.org.uk
Commit c58d237d0852a57fde9bc2c310972e8f4e3d155d upstream.
Add a Kconfig symbol for CPUs which are vulnerable to the Spectre attacks.
Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Reviewed-by: Florian Fainelli f.fainelli@gmail.com Boot-tested-by: Tony Lindgren tony@atomide.com Reviewed-by: Tony Lindgren tony@atomide.com Acked-by: Marc Zyngier marc.zyngier@arm.com Signed-off-by: David A. Long dave.long@linaro.org --- arch/arm/mm/Kconfig | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index 41218867a9a6..7ef92e6692ab 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -396,6 +396,7 @@ config CPU_V7
	select CPU_CP15_MPU if !MMU
	select CPU_HAS_ASID if MMU
	select CPU_PABRT_V7
+	select CPU_SPECTRE if MMU
	select CPU_TLB_V7 if MMU

 # ARMv7M
@@ -793,6 +794,9 @@ config CPU_BPREDICT_DISABLE
	help
	  Say Y here to disable branch prediction. If unsure, say N.

+config CPU_SPECTRE
+	bool
+
 config TLS_REG_EMUL
	bool
	select NEED_KUSER_HELPERS
From: Russell King rmk+kernel@armlinux.org.uk
Commit 06c23f5ffe7ad45b908d0fff604dae08a7e334b9 upstream.
Required manual merge of arch/arm/mm/proc-v7.S.
Harden the branch predictor against Spectre v2 attacks on context switches for ARMv7 and later CPUs. We do this by:
Cortex A9, A12, A17, A73, A75: invalidating the BTB.
Cortex A15, Brahma B15: invalidating the instruction cache.
Cortex A57 and Cortex A72 are not addressed in this patch.
Cortex R7 and Cortex R8 are also not addressed as we do not enforce memory protection on these cores.
Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Boot-tested-by: Tony Lindgren tony@atomide.com Reviewed-by: Tony Lindgren tony@atomide.com Acked-by: Marc Zyngier marc.zyngier@arm.com Signed-off-by: David A. Long dave.long@linaro.org --- arch/arm/mm/Kconfig | 19 ++++++ arch/arm/mm/proc-v7-2level.S | 6 -- arch/arm/mm/proc-v7.S | 125 +++++++++++++++++++++++++++-------- 3 files changed, 115 insertions(+), 35 deletions(-)
diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig index 7ef92e6692ab..71115afb71a0 100644 --- a/arch/arm/mm/Kconfig +++ b/arch/arm/mm/Kconfig @@ -797,6 +797,25 @@ config CPU_BPREDICT_DISABLE config CPU_SPECTRE bool
+config HARDEN_BRANCH_PREDICTOR + bool "Harden the branch predictor against aliasing attacks" if EXPERT + depends on CPU_SPECTRE + default y + help + Speculation attacks against some high-performance processors rely + on being able to manipulate the branch predictor for a victim + context by executing aliasing branches in the attacker context. + Such attacks can be partially mitigated against by clearing + internal branch predictor state and limiting the prediction + logic in some situations. + + This config option will take CPU-specific actions to harden + the branch predictor against aliasing attacks and may rely on + specific instruction sequences or control bits being set by + the system firmware. + + If unsure, say Y. + config TLS_REG_EMUL bool select NEED_KUSER_HELPERS diff --git a/arch/arm/mm/proc-v7-2level.S b/arch/arm/mm/proc-v7-2level.S index c6141a5435c3..f8d45ad2a515 100644 --- a/arch/arm/mm/proc-v7-2level.S +++ b/arch/arm/mm/proc-v7-2level.S @@ -41,11 +41,6 @@ * even on Cortex-A8 revisions not affected by 430973. * If IBE is not set, the flush BTAC/BTB won't do anything. */ -ENTRY(cpu_ca8_switch_mm) -#ifdef CONFIG_MMU - mov r2, #0 - mcr p15, 0, r2, c7, c5, 6 @ flush BTAC/BTB -#endif ENTRY(cpu_v7_switch_mm) #ifdef CONFIG_MMU mmid r1, r1 @ get mm->context.id @@ -66,7 +61,6 @@ ENTRY(cpu_v7_switch_mm) #endif bx lr ENDPROC(cpu_v7_switch_mm) -ENDPROC(cpu_ca8_switch_mm)
/* * cpu_v7_set_pte_ext(ptep, pte) diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S index 8e1ea433c3f1..c2950317c7c2 100644 --- a/arch/arm/mm/proc-v7.S +++ b/arch/arm/mm/proc-v7.S @@ -87,6 +87,17 @@ ENTRY(cpu_v7_dcache_clean_area) ret lr ENDPROC(cpu_v7_dcache_clean_area)
+ENTRY(cpu_v7_iciallu_switch_mm) + mov r3, #0 + mcr p15, 0, r3, c7, c5, 0 @ ICIALLU + b cpu_v7_switch_mm +ENDPROC(cpu_v7_iciallu_switch_mm) +ENTRY(cpu_v7_bpiall_switch_mm) + mov r3, #0 + mcr p15, 0, r3, c7, c5, 6 @ flush BTAC/BTB + b cpu_v7_switch_mm +ENDPROC(cpu_v7_bpiall_switch_mm) + string cpu_v7_name, "ARMv7 Processor" .align
@@ -152,31 +163,6 @@ ENTRY(cpu_v7_do_resume) ENDPROC(cpu_v7_do_resume) #endif
-/* - * Cortex-A8 - */ - globl_equ cpu_ca8_proc_init, cpu_v7_proc_init - globl_equ cpu_ca8_proc_fin, cpu_v7_proc_fin - globl_equ cpu_ca8_reset, cpu_v7_reset - globl_equ cpu_ca8_do_idle, cpu_v7_do_idle - globl_equ cpu_ca8_dcache_clean_area, cpu_v7_dcache_clean_area - globl_equ cpu_ca8_set_pte_ext, cpu_v7_set_pte_ext - globl_equ cpu_ca8_suspend_size, cpu_v7_suspend_size -#ifdef CONFIG_ARM_CPU_SUSPEND - globl_equ cpu_ca8_do_suspend, cpu_v7_do_suspend - globl_equ cpu_ca8_do_resume, cpu_v7_do_resume -#endif - -/* - * Cortex-A9 processor functions - */ - globl_equ cpu_ca9mp_proc_init, cpu_v7_proc_init - globl_equ cpu_ca9mp_proc_fin, cpu_v7_proc_fin - globl_equ cpu_ca9mp_reset, cpu_v7_reset - globl_equ cpu_ca9mp_do_idle, cpu_v7_do_idle - globl_equ cpu_ca9mp_dcache_clean_area, cpu_v7_dcache_clean_area - globl_equ cpu_ca9mp_switch_mm, cpu_v7_switch_mm - globl_equ cpu_ca9mp_set_pte_ext, cpu_v7_set_pte_ext .globl cpu_ca9mp_suspend_size .equ cpu_ca9mp_suspend_size, cpu_v7_suspend_size + 4 * 2 #ifdef CONFIG_ARM_CPU_SUSPEND @@ -490,10 +476,75 @@ __v7_setup_stack:
@ define struct processor (see <asm/proc-fns.h> and proc-macros.S) define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 + +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR + @ generic v7 bpiall on context switch + globl_equ cpu_v7_bpiall_proc_init, cpu_v7_proc_init + globl_equ cpu_v7_bpiall_proc_fin, cpu_v7_proc_fin + globl_equ cpu_v7_bpiall_reset, cpu_v7_reset + globl_equ cpu_v7_bpiall_do_idle, cpu_v7_do_idle + globl_equ cpu_v7_bpiall_dcache_clean_area, cpu_v7_dcache_clean_area + globl_equ cpu_v7_bpiall_set_pte_ext, cpu_v7_set_pte_ext + globl_equ cpu_v7_bpiall_suspend_size, cpu_v7_suspend_size +#ifdef CONFIG_ARM_CPU_SUSPEND + globl_equ cpu_v7_bpiall_do_suspend, cpu_v7_do_suspend + globl_equ cpu_v7_bpiall_do_resume, cpu_v7_do_resume +#endif + define_processor_functions v7_bpiall, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 + +#define HARDENED_BPIALL_PROCESSOR_FUNCTIONS v7_bpiall_processor_functions +#else +#define HARDENED_BPIALL_PROCESSOR_FUNCTIONS v7_processor_functions +#endif + #ifndef CONFIG_ARM_LPAE + @ Cortex-A8 - always needs bpiall switch_mm implementation + globl_equ cpu_ca8_proc_init, cpu_v7_proc_init + globl_equ cpu_ca8_proc_fin, cpu_v7_proc_fin + globl_equ cpu_ca8_reset, cpu_v7_reset + globl_equ cpu_ca8_do_idle, cpu_v7_do_idle + globl_equ cpu_ca8_dcache_clean_area, cpu_v7_dcache_clean_area + globl_equ cpu_ca8_set_pte_ext, cpu_v7_set_pte_ext + globl_equ cpu_ca8_switch_mm, cpu_v7_bpiall_switch_mm + globl_equ cpu_ca8_suspend_size, cpu_v7_suspend_size +#ifdef CONFIG_ARM_CPU_SUSPEND + globl_equ cpu_ca8_do_suspend, cpu_v7_do_suspend + globl_equ cpu_ca8_do_resume, cpu_v7_do_resume +#endif define_processor_functions ca8, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 + + @ Cortex-A9 - needs more registers preserved across suspend/resume + @ and bpiall switch_mm for hardening + globl_equ cpu_ca9mp_proc_init, cpu_v7_proc_init + globl_equ cpu_ca9mp_proc_fin, cpu_v7_proc_fin + globl_equ cpu_ca9mp_reset, cpu_v7_reset + globl_equ cpu_ca9mp_do_idle, cpu_v7_do_idle + globl_equ cpu_ca9mp_dcache_clean_area, cpu_v7_dcache_clean_area +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR + globl_equ cpu_ca9mp_switch_mm, cpu_v7_bpiall_switch_mm +#else + globl_equ cpu_ca9mp_switch_mm, cpu_v7_switch_mm +#endif + globl_equ cpu_ca9mp_set_pte_ext, cpu_v7_set_pte_ext define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 #endif + + @ Cortex-A15 - needs iciallu switch_mm for hardening + globl_equ cpu_ca15_proc_init, cpu_v7_proc_init + globl_equ cpu_ca15_proc_fin, cpu_v7_proc_fin + globl_equ cpu_ca15_reset, cpu_v7_reset + globl_equ cpu_ca15_do_idle, cpu_v7_do_idle + globl_equ cpu_ca15_dcache_clean_area, cpu_v7_dcache_clean_area +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR + globl_equ cpu_ca15_switch_mm, cpu_v7_iciallu_switch_mm +#else + globl_equ cpu_ca15_switch_mm, cpu_v7_switch_mm +#endif + globl_equ cpu_ca15_set_pte_ext, cpu_v7_set_pte_ext + globl_equ cpu_ca15_suspend_size, cpu_v7_suspend_size + globl_equ cpu_ca15_do_suspend, cpu_v7_do_suspend + globl_equ cpu_ca15_do_resume, cpu_v7_do_resume + define_processor_functions ca15, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 #ifdef CONFIG_CPU_PJ4B define_processor_functions pj4b, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 #endif @@ -600,7 +651,7 @@ __v7_ca7mp_proc_info: __v7_ca12mp_proc_info: .long 0x410fc0d0 .long 0xff0ffff0 - __v7_proc __v7_ca12mp_proc_info, __v7_ca12mp_setup + __v7_proc __v7_ca12mp_proc_info, __v7_ca12mp_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS .size __v7_ca12mp_proc_info, . 
- __v7_ca12mp_proc_info
/* @@ -610,7 +661,7 @@ __v7_ca12mp_proc_info: __v7_ca15mp_proc_info: .long 0x410fc0f0 .long 0xff0ffff0 - __v7_proc __v7_ca15mp_proc_info, __v7_ca15mp_setup + __v7_proc __v7_ca15mp_proc_info, __v7_ca15mp_setup, proc_fns = ca15_processor_functions .size __v7_ca15mp_proc_info, . - __v7_ca15mp_proc_info
/* @@ -620,7 +671,7 @@ __v7_ca15mp_proc_info: __v7_b15mp_proc_info: .long 0x420f00f0 .long 0xff0ffff0 - __v7_proc __v7_b15mp_proc_info, __v7_b15mp_setup + __v7_proc __v7_b15mp_proc_info, __v7_b15mp_setup, proc_fns = ca15_processor_functions .size __v7_b15mp_proc_info, . - __v7_b15mp_proc_info
/* @@ -630,9 +681,25 @@ __v7_b15mp_proc_info: __v7_ca17mp_proc_info: .long 0x410fc0e0 .long 0xff0ffff0 - __v7_proc __v7_ca17mp_proc_info, __v7_ca17mp_setup + __v7_proc __v7_ca17mp_proc_info, __v7_ca17mp_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS .size __v7_ca17mp_proc_info, . - __v7_ca17mp_proc_info
+ /* ARM Ltd. Cortex A73 processor */ + .type __v7_ca73_proc_info, #object +__v7_ca73_proc_info: + .long 0x410fd090 + .long 0xff0ffff0 + __v7_proc __v7_ca73_proc_info, __v7_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS + .size __v7_ca73_proc_info, . - __v7_ca73_proc_info + + /* ARM Ltd. Cortex A75 processor */ + .type __v7_ca75_proc_info, #object +__v7_ca75_proc_info: + .long 0x410fd0a0 + .long 0xff0ffff0 + __v7_proc __v7_ca75_proc_info, __v7_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS + .size __v7_ca75_proc_info, . - __v7_ca75_proc_info + /* * Qualcomm Inc. Krait processors. */
From: Russell King rmk+kernel@armlinux.org.uk
Commit e388b80288aade31135aca23d32eee93dd106795 upstream.
When the branch predictor hardening is enabled, firmware must have set the IBE bit in the auxiliary control register. If this bit has not been set, the Spectre workarounds will not be functional.
Add validation that this bit is set, and print a warning at alert level if this is not the case.
Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Reviewed-by: Florian Fainelli f.fainelli@gmail.com Boot-tested-by: Tony Lindgren tony@atomide.com Reviewed-by: Tony Lindgren tony@atomide.com Signed-off-by: David A. Long dave.long@linaro.org --- arch/arm/mm/Makefile | 2 +- arch/arm/mm/proc-v7-bugs.c | 36 ++++++++++++++++++++++++++++++++++++ arch/arm/mm/proc-v7.S | 4 ++-- 3 files changed, 39 insertions(+), 3 deletions(-) create mode 100644 arch/arm/mm/proc-v7-bugs.c
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 7f76d96ce546..35307176e46c 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -92,7 +92,7 @@ obj-$(CONFIG_CPU_MOHAWK)	+= proc-mohawk.o
 obj-$(CONFIG_CPU_FEROCEON)	+= proc-feroceon.o
 obj-$(CONFIG_CPU_V6)		+= proc-v6.o
 obj-$(CONFIG_CPU_V6K)		+= proc-v6.o
-obj-$(CONFIG_CPU_V7)		+= proc-v7.o
+obj-$(CONFIG_CPU_V7)		+= proc-v7.o proc-v7-bugs.o
 obj-$(CONFIG_CPU_V7M)		+= proc-v7m.o

 AFLAGS_proc-v6.o	:=-Wa,-march=armv6
diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
new file mode 100644
index 000000000000..e46557db6446
--- /dev/null
+++ b/arch/arm/mm/proc-v7-bugs.c
@@ -0,0 +1,36 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/kernel.h>
+#include <linux/smp.h>
+
+static __maybe_unused void cpu_v7_check_auxcr_set(bool *warned,
+						  u32 mask, const char *msg)
+{
+	u32 aux_cr;
+
+	asm("mrc p15, 0, %0, c1, c0, 1" : "=r" (aux_cr));
+
+	if ((aux_cr & mask) != mask) {
+		if (!*warned)
+			pr_err("CPU%u: %s", smp_processor_id(), msg);
+		*warned = true;
+	}
+}
+
+static DEFINE_PER_CPU(bool, spectre_warned);
+
+static void check_spectre_auxcr(bool *warned, u32 bit)
+{
+	if (IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR))
+		cpu_v7_check_auxcr_set(warned, bit,
+			"Spectre v2: firmware did not set auxiliary control register IBE bit, system vulnerable\n");
+}
+
+void cpu_v7_ca8_ibe(void)
+{
+	check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(6));
+}
+
+void cpu_v7_ca15_ibe(void)
+{
+	check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0));
+}
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index c2950317c7c2..1436ad424f2a 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -511,7 +511,7 @@ __v7_setup_stack:
	globl_equ	cpu_ca8_do_suspend,	cpu_v7_do_suspend
	globl_equ	cpu_ca8_do_resume,	cpu_v7_do_resume
 #endif
-	define_processor_functions ca8, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+	define_processor_functions ca8, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_ca8_ibe

	@ Cortex-A9 - needs more registers preserved across suspend/resume
	@ and bpiall switch_mm for hardening
@@ -544,7 +544,7 @@ __v7_setup_stack:
	globl_equ	cpu_ca15_suspend_size,	cpu_v7_suspend_size
	globl_equ	cpu_ca15_do_suspend,	cpu_v7_do_suspend
	globl_equ	cpu_ca15_do_resume,	cpu_v7_do_resume
-	define_processor_functions ca15, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+	define_processor_functions ca15, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_ca15_ibe
 #ifdef CONFIG_CPU_PJ4B
	define_processor_functions	pj4b, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
 #endif
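For readers unfamiliar with the CP15 encoding used above: "mrc p15, 0, %0, c1, c0, 1" reads the Auxiliary Control Register (ACTLR). A standalone sketch of the test being performed (bit 6 is the IBE bit on Cortex-A8, bit 0 on Cortex-A15, matching the BIT() arguments above):

	/* Sketch only: ACTLR is generally only writable from the secure
	 * side on these cores, which is why the kernel can merely warn
	 * when firmware has left IBE clear. */
	static bool actlr_bits_set(u32 mask)
	{
		u32 aux_cr;

		asm("mrc p15, 0, %0, c1, c0, 1" : "=r" (aux_cr));
		return (aux_cr & mask) == mask;
	}

	/* e.g. actlr_bits_set(BIT(6)) on a Cortex-A8 */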
From: Russell King rmk+kernel@armlinux.org.uk
Commit f5fe12b1eaee220ce62ff9afb8b90929c396595f upstream. This required some additional defines be brought back.
In order to prevent aliasing attacks on the branch predictor, invalidate the BTB or instruction cache on CPUs that are known to be affected when taking an abort on an address that is outside of a user task limit:
Cortex A8, A9, A12, A17, A73, A75: flush BTB.
Cortex A15, Brahma B15: invalidate icache.
If the IBE bit is not set, then there is little point to enabling the workaround.
Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Boot-tested-by: Tony Lindgren tony@atomide.com Reviewed-by: Tony Lindgren tony@atomide.com Signed-off-by: Florian Fainelli f.fainelli@gmail.com Signed-off-by: David A. Long dave.long@linaro.org --- arch/arm/include/asm/cp15.h | 18 ++++++++ arch/arm/include/asm/system_misc.h | 15 ++++++ arch/arm/mm/fault.c | 3 ++ arch/arm/mm/proc-v7-bugs.c | 73 ++++++++++++++++++++++++++++-- arch/arm/mm/proc-v7.S | 8 ++-- 5 files changed, 109 insertions(+), 8 deletions(-)
diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h index c3f11524f10c..b74b174ac9fc 100644 --- a/arch/arm/include/asm/cp15.h +++ b/arch/arm/include/asm/cp15.h @@ -49,6 +49,24 @@
#ifdef CONFIG_CPU_CP15
+#define __ACCESS_CP15(CRn, Op1, CRm, Op2) \ + "mrc", "mcr", __stringify(p15, Op1, %0, CRn, CRm, Op2), u32 +#define __ACCESS_CP15_64(Op1, CRm) \ + "mrrc", "mcrr", __stringify(p15, Op1, %Q0, %R0, CRm), u64 + +#define __read_sysreg(r, w, c, t) ({ \ + t __val; \ + asm volatile(r " " c : "=r" (__val)); \ + __val; \ +}) +#define read_sysreg(...) __read_sysreg(__VA_ARGS__) + +#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v))) +#define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__) + +#define BPIALL __ACCESS_CP15(c7, 0, c5, 6) +#define ICIALLU __ACCESS_CP15(c7, 0, c5, 0) + extern unsigned long cr_alignment; /* defined in entry-armv.S */
static inline unsigned long get_cr(void) diff --git a/arch/arm/include/asm/system_misc.h b/arch/arm/include/asm/system_misc.h index a3d61ad984af..1fed41440af9 100644 --- a/arch/arm/include/asm/system_misc.h +++ b/arch/arm/include/asm/system_misc.h @@ -7,6 +7,7 @@ #include <linux/linkage.h> #include <linux/irqflags.h> #include <linux/reboot.h> +#include <linux/percpu.h>
extern void cpu_init(void);
@@ -14,6 +15,20 @@ void soft_restart(unsigned long); extern void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd); extern void (*arm_pm_idle)(void);
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR +typedef void (*harden_branch_predictor_fn_t)(void); +DECLARE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn); +static inline void harden_branch_predictor(void) +{ + harden_branch_predictor_fn_t fn = per_cpu(harden_branch_predictor_fn, + smp_processor_id()); + if (fn) + fn(); +} +#else +#define harden_branch_predictor() do { } while (0) +#endif + #define UDBG_UNDEFINED (1 << 0) #define UDBG_SYSCALL (1 << 1) #define UDBG_BADABORT (1 << 2) diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c index 0d20cd594017..afc8d7cf7625 100644 --- a/arch/arm/mm/fault.c +++ b/arch/arm/mm/fault.c @@ -163,6 +163,9 @@ __do_user_fault(struct task_struct *tsk, unsigned long addr, { struct siginfo si;
+ if (addr > TASK_SIZE) + harden_branch_predictor(); + #ifdef CONFIG_DEBUG_USER if (((user_debug & UDBG_SEGV) && (sig == SIGSEGV)) || ((user_debug & UDBG_BUS) && (sig == SIGBUS))) { diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c index e46557db6446..85a2e3d6263c 100644 --- a/arch/arm/mm/proc-v7-bugs.c +++ b/arch/arm/mm/proc-v7-bugs.c @@ -2,7 +2,61 @@ #include <linux/kernel.h> #include <linux/smp.h>
-static __maybe_unused void cpu_v7_check_auxcr_set(bool *warned, +#include <asm/cp15.h> +#include <asm/cputype.h> +#include <asm/system_misc.h> + +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR +DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn); + +static void harden_branch_predictor_bpiall(void) +{ + write_sysreg(0, BPIALL); +} + +static void harden_branch_predictor_iciallu(void) +{ + write_sysreg(0, ICIALLU); +} + +static void cpu_v7_spectre_init(void) +{ + const char *spectre_v2_method = NULL; + int cpu = smp_processor_id(); + + if (per_cpu(harden_branch_predictor_fn, cpu)) + return; + + switch (read_cpuid_part()) { + case ARM_CPU_PART_CORTEX_A8: + case ARM_CPU_PART_CORTEX_A9: + case ARM_CPU_PART_CORTEX_A12: + case ARM_CPU_PART_CORTEX_A17: + case ARM_CPU_PART_CORTEX_A73: + case ARM_CPU_PART_CORTEX_A75: + per_cpu(harden_branch_predictor_fn, cpu) = + harden_branch_predictor_bpiall; + spectre_v2_method = "BPIALL"; + break; + + case ARM_CPU_PART_CORTEX_A15: + case ARM_CPU_PART_BRAHMA_B15: + per_cpu(harden_branch_predictor_fn, cpu) = + harden_branch_predictor_iciallu; + spectre_v2_method = "ICIALLU"; + break; + } + if (spectre_v2_method) + pr_info("CPU%u: Spectre v2: using %s workaround\n", + smp_processor_id(), spectre_v2_method); +} +#else +static void cpu_v7_spectre_init(void) +{ +} +#endif + +static __maybe_unused bool cpu_v7_check_auxcr_set(bool *warned, u32 mask, const char *msg) { u32 aux_cr; @@ -13,24 +67,33 @@ static __maybe_unused void cpu_v7_check_auxcr_set(bool *warned, if (!*warned) pr_err("CPU%u: %s", smp_processor_id(), msg); *warned = true; + return false; } + return true; }
static DEFINE_PER_CPU(bool, spectre_warned);
-static void check_spectre_auxcr(bool *warned, u32 bit) +static bool check_spectre_auxcr(bool *warned, u32 bit) { - if (IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR)) + return IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR) && cpu_v7_check_auxcr_set(warned, bit, "Spectre v2: firmware did not set auxiliary control register IBE bit, system vulnerable\n"); }
void cpu_v7_ca8_ibe(void) { - check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(6)); + if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(6))) + cpu_v7_spectre_init(); }
void cpu_v7_ca15_ibe(void) { - check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0)); + if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0))) + cpu_v7_spectre_init(); +} + +void cpu_v7_bugs_init(void) +{ + cpu_v7_spectre_init(); } diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S index 1436ad424f2a..f6a4589b4fd2 100644 --- a/arch/arm/mm/proc-v7.S +++ b/arch/arm/mm/proc-v7.S @@ -474,8 +474,10 @@ __v7_setup_stack:
__INITDATA
+ .weak cpu_v7_bugs_init + @ define struct processor (see <asm/proc-fns.h> and proc-macros.S) - define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 + define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init
#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR @ generic v7 bpiall on context switch @@ -490,7 +492,7 @@ __v7_setup_stack: globl_equ cpu_v7_bpiall_do_suspend, cpu_v7_do_suspend globl_equ cpu_v7_bpiall_do_resume, cpu_v7_do_resume #endif - define_processor_functions v7_bpiall, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 + define_processor_functions v7_bpiall, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init
#define HARDENED_BPIALL_PROCESSOR_FUNCTIONS v7_bpiall_processor_functions #else @@ -526,7 +528,7 @@ __v7_setup_stack: globl_equ cpu_ca9mp_switch_mm, cpu_v7_switch_mm #endif globl_equ cpu_ca9mp_set_pte_ext, cpu_v7_set_pte_ext - define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 + define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init #endif
@ Cortex-A15 - needs iciallu switch_mm for hardening
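A note on the accessor macros added to cp15.h: __ACCESS_CP15() pairs the mrc/mcr mnemonics with a stringified operand list, so the read/write helpers boil down to a single coprocessor instruction. For example, write_sysreg(0, BPIALL) expands to roughly the following (a sketch, modulo macro plumbing):

	static inline void bpiall_by_hand(void)
	{
		/* BPIALL: invalidate all branch predictor entries */
		asm volatile("mcr p15, 0, %0, c7, c5, 6" : : "r" ((u32)0));
	}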
From: Russell King rmk+kernel@armlinux.org.uk
Commit c44f366ea7c85e1be27d08f2f0880f4120698125 upstream.
Warn at error level if the context switching function is not what we are expecting. This can happen with big.LITTLE systems, which we currently do not support.
Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Boot-tested-by: Tony Lindgren tony@atomide.com Reviewed-by: Tony Lindgren tony@atomide.com Acked-by: Marc Zyngier marc.zyngier@arm.com Signed-off-by: David A. Long dave.long@linaro.org --- arch/arm/mm/proc-v7-bugs.c | 13 +++++++++++++ 1 file changed, 13 insertions(+)
diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c index 85a2e3d6263c..027b29f852f6 100644 --- a/arch/arm/mm/proc-v7-bugs.c +++ b/arch/arm/mm/proc-v7-bugs.c @@ -4,11 +4,15 @@
#include <asm/cp15.h> #include <asm/cputype.h> +#include <asm/proc-fns.h> #include <asm/system_misc.h>
#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);
+extern void cpu_v7_iciallu_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm); +extern void cpu_v7_bpiall_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm); + static void harden_branch_predictor_bpiall(void) { write_sysreg(0, BPIALL); @@ -34,6 +38,8 @@ static void cpu_v7_spectre_init(void) case ARM_CPU_PART_CORTEX_A17: case ARM_CPU_PART_CORTEX_A73: case ARM_CPU_PART_CORTEX_A75: + if (processor.switch_mm != cpu_v7_bpiall_switch_mm) + goto bl_error; per_cpu(harden_branch_predictor_fn, cpu) = harden_branch_predictor_bpiall; spectre_v2_method = "BPIALL"; @@ -41,6 +47,8 @@ static void cpu_v7_spectre_init(void)
case ARM_CPU_PART_CORTEX_A15: case ARM_CPU_PART_BRAHMA_B15: + if (processor.switch_mm != cpu_v7_iciallu_switch_mm) + goto bl_error; per_cpu(harden_branch_predictor_fn, cpu) = harden_branch_predictor_iciallu; spectre_v2_method = "ICIALLU"; @@ -49,6 +57,11 @@ static void cpu_v7_spectre_init(void) if (spectre_v2_method) pr_info("CPU%u: Spectre v2: using %s workaround\n", smp_processor_id(), spectre_v2_method); + return; + +bl_error: + pr_err("CPU%u: Spectre v2: incorrect context switching function, system vulnerable\n", + cpu); } #else static void cpu_v7_spectre_init(void)
From: Russell King rmk+kernel@armlinux.org.uk
Commit a78d156587931a2c3b354534aa772febf6c9e855 upstream.
Add assembly and C macros for the new CSDB instruction.
Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Acked-by: Mark Rutland mark.rutland@arm.com Boot-tested-by: Tony Lindgren tony@atomide.com Reviewed-by: Tony Lindgren tony@atomide.com Signed-off-by: David A. Long dave.long@linaro.org --- arch/arm/include/asm/assembler.h | 8 ++++++++ arch/arm/include/asm/barrier.h | 13 +++++++++++++ 2 files changed, 21 insertions(+)
diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h index 4a275fba6059..307901f88a1e 100644 --- a/arch/arm/include/asm/assembler.h +++ b/arch/arm/include/asm/assembler.h @@ -441,6 +441,14 @@ THUMB( orr \reg , \reg , #PSR_T_BIT ) .size \name , . - \name .endm
+ .macro csdb +#ifdef CONFIG_THUMB2_KERNEL + .inst.w 0xf3af8014 +#else + .inst 0xe320f014 +#endif + .endm + .macro check_uaccess, addr:req, size:req, limit:req, tmp:req, bad:req #ifndef CONFIG_CPU_USE_DOMAINS adds \tmp, \addr, #\size - 1 diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h index 3ff5642d9788..d705be47a1ad 100644 --- a/arch/arm/include/asm/barrier.h +++ b/arch/arm/include/asm/barrier.h @@ -16,6 +16,12 @@ #define isb(option) __asm__ __volatile__ ("isb " #option : : : "memory") #define dsb(option) __asm__ __volatile__ ("dsb " #option : : : "memory") #define dmb(option) __asm__ __volatile__ ("dmb " #option : : : "memory") +#ifdef CONFIG_THUMB2_KERNEL +#define CSDB ".inst.w 0xf3af8014" +#else +#define CSDB ".inst 0xe320f014" +#endif +#define csdb() __asm__ __volatile__(CSDB : : : "memory") #elif defined(CONFIG_CPU_XSC3) || __LINUX_ARM_ARCH__ == 6 #define isb(x) __asm__ __volatile__ ("mcr p15, 0, %0, c7, c5, 4" \ : : "r" (0) : "memory") @@ -36,6 +42,13 @@ #define dmb(x) __asm__ __volatile__ ("" : : : "memory") #endif
+#ifndef CSDB +#define CSDB +#endif +#ifndef csdb +#define csdb() +#endif + #ifdef CONFIG_ARM_HEAVY_MB extern void (*soc_mb)(void); extern void arm_heavy_mb(void);
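For illustration, the C-level csdb() is intended to follow a conditional clamp of an untrusted value, so that nothing after the barrier can execute speculatively with the unclamped value. A minimal sketch (the following patch implements a branchless variant of this idea as a proper helper):

	static unsigned long clamp_nospec(unsigned long idx, unsigned long sz)
	{
		if (idx >= sz)
			idx = 0;
		csdb();		/* speculation barrier from <asm/barrier.h> */
		return idx;
	}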
From: Russell King rmk+kernel@armlinux.org.uk
Commit 1d4238c56f9816ce0f9c8dbe42d7f2ad81cb6613 upstream.
Add an implementation of the array_index_mask_nospec() function for mitigating Spectre variant 1 throughout the kernel.
Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Acked-by: Mark Rutland mark.rutland@arm.com Boot-tested-by: Tony Lindgren tony@atomide.com Reviewed-by: Tony Lindgren tony@atomide.com Signed-off-by: David A. Long dave.long@linaro.org --- arch/arm/include/asm/barrier.h | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+)
diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h
index d705be47a1ad..85eb7ef44a18 100644
--- a/arch/arm/include/asm/barrier.h
+++ b/arch/arm/include/asm/barrier.h
@@ -106,5 +106,24 @@ do {									\
 #define smp_mb__before_atomic()	smp_mb()
 #define smp_mb__after_atomic()	smp_mb()

+#ifdef CONFIG_CPU_SPECTRE
+static inline unsigned long array_index_mask_nospec(unsigned long idx,
+						    unsigned long sz)
+{
+	unsigned long mask;
+
+	asm volatile(
+		"cmp	%1, %2\n"
+	"	sbc	%0, %1, %1\n"
+	CSDB
+	: "=r" (mask)
+	: "r" (idx), "Ir" (sz)
+	: "cc");
+
+	return mask;
+}
+#define array_index_mask_nospec array_index_mask_nospec
+#endif
+
 #endif /* !__ASSEMBLY__ */
 #endif /* __ASM_BARRIER_H */
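Callers mask an already-bounds-checked index rather than relying on the branch alone; the cmp/sbc pair yields ~0UL when idx < sz and 0 otherwise, so the AND squashes any speculatively out-of-range index to zero. A sketch of the caller-side pattern (the generic array_index_nospec() helper is built on this):

	static unsigned long read_element(const unsigned long *table,
					  unsigned long idx, unsigned long sz)
	{
		if (idx >= sz)
			return 0;
		idx &= array_index_mask_nospec(idx, sz);
		return table[idx];
	}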
From: Russell King rmk+kernel@armlinux.org.uk
Commit 10573ae547c85b2c61417ff1a106cffbfceada35 upstream.
Prevent speculation at the syscall table decoding by clamping the index used to zero on invalid system call numbers, and using the csdb speculative barrier.
Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Acked-by: Mark Rutland mark.rutland@arm.com Boot-tested-by: Tony Lindgren tony@atomide.com Reviewed-by: Tony Lindgren tony@atomide.com Signed-off-by: David A. Long dave.long@linaro.org --- arch/arm/kernel/entry-common.S | 18 +++++++----------- arch/arm/kernel/entry-header.S | 25 +++++++++++++++++++++++++ 2 files changed, 32 insertions(+), 11 deletions(-)
diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S index 30a7228eaceb..e969b18d9ff9 100644 --- a/arch/arm/kernel/entry-common.S +++ b/arch/arm/kernel/entry-common.S @@ -223,9 +223,7 @@ local_restart: tst r10, #_TIF_SYSCALL_WORK @ are we tracing syscalls? bne __sys_trace
- cmp scno, #NR_syscalls @ check upper syscall limit - badr lr, ret_fast_syscall @ return address - ldrcc pc, [tbl, scno, lsl #2] @ call sys_* routine + invoke_syscall tbl, scno, r10, ret_fast_syscall
add r1, sp, #S_OFF 2: cmp scno, #(__ARM_NR_BASE - __NR_SYSCALL_BASE) @@ -258,14 +256,8 @@ __sys_trace: mov r1, scno add r0, sp, #S_OFF bl syscall_trace_enter - - badr lr, __sys_trace_return @ return address - mov scno, r0 @ syscall number (possibly new) - add r1, sp, #S_R0 + S_OFF @ pointer to regs - cmp scno, #NR_syscalls @ check upper syscall limit - ldmccia r1, {r0 - r6} @ have to reload r0 - r6 - stmccia sp, {r4, r5} @ and update the stack args - ldrcc pc, [tbl, scno, lsl #2] @ call sys_* routine + mov scno, r0 + invoke_syscall tbl, scno, r10, __sys_trace_return, reload=1 cmp scno, #-1 @ skip the syscall? bne 2b add sp, sp, #S_OFF @ restore stack @@ -317,6 +309,10 @@ sys_syscall: bic scno, r0, #__NR_OABI_SYSCALL_BASE cmp scno, #__NR_syscall - __NR_SYSCALL_BASE cmpne scno, #NR_syscalls @ check range +#ifdef CONFIG_CPU_SPECTRE + movhs scno, #0 + csdb +#endif stmloia sp, {r5, r6} @ shuffle args movlo r0, r1 movlo r1, r2 diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S index 6d243e830516..86dfee487e24 100644 --- a/arch/arm/kernel/entry-header.S +++ b/arch/arm/kernel/entry-header.S @@ -373,6 +373,31 @@ #endif .endm
+ .macro invoke_syscall, table, nr, tmp, ret, reload=0 +#ifdef CONFIG_CPU_SPECTRE + mov \tmp, \nr + cmp \tmp, #NR_syscalls @ check upper syscall limit + movcs \tmp, #0 + csdb + badr lr, \ret @ return address + .if \reload + add r1, sp, #S_R0 + S_OFF @ pointer to regs + ldmccia r1, {r0 - r6} @ reload r0-r6 + stmccia sp, {r4, r5} @ update stack arguments + .endif + ldrcc pc, [\table, \tmp, lsl #2] @ call sys_* routine +#else + cmp \nr, #NR_syscalls @ check upper syscall limit + badr lr, \ret @ return address + .if \reload + add r1, sp, #S_R0 + S_OFF @ pointer to regs + ldmccia r1, {r0 - r6} @ reload r0-r6 + stmccia sp, {r4, r5} @ update stack arguments + .endif + ldrcc pc, [\table, \nr, lsl #2] @ call sys_* routine +#endif + .endm + /* * These are the registers used in the syscall handler, and allow us to * have in theory up to 7 arguments to a function - r0 to r6.
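In C terms, the CONFIG_CPU_SPECTRE half of invoke_syscall does roughly the following (a sketch; syscall_fn_t and the function name are invented, and in the real code the out-of-range case falls through to the existing private-syscall/ENOSYS handling):

	typedef long (*syscall_fn_t)(struct pt_regs *);

	static long dispatch_syscall(const syscall_fn_t *table,
				     unsigned int scno, struct pt_regs *regs)
	{
		unsigned int idx = scno;

		if (idx >= NR_syscalls)
			idx = 0;	/* movcs \tmp, #0: bound the speculative load */
		csdb();			/* no speculative use of the unclamped index */
		if (scno < NR_syscalls)	/* ldrcc: only dispatch in range */
			return table[idx](regs);
		return -ENOSYS;		/* slow path in the real assembly */
	}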
From: Russell King rmk+kernel@armlinux.org.uk
Commit c32cd419d6650e42b9cdebb83c672ec945e6bd7e upstream.
__get_user_error() is used as a fast accessor to make copying structure members in the signal handling path as efficient as possible. However, with software PAN and the recent Spectre variant 1, the efficiency is reduced as these are no longer fast accessors.
In the case of software PAN, it has to switch the domain register around each access, and with Spectre variant 1, it would have to repeat the access_ok() check for each access.
It becomes much more efficient to use __copy_from_user() instead, so let's use this for the ARM integer registers.
Acked-by: Mark Rutland mark.rutland@arm.com Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Signed-off-by: David A. Long dave.long@linaro.org --- arch/arm/kernel/signal.c | 38 +++++++++++++++++++++----------------- 1 file changed, 21 insertions(+), 17 deletions(-)
diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c index 7b8f2141427b..a592bc0287f8 100644 --- a/arch/arm/kernel/signal.c +++ b/arch/arm/kernel/signal.c @@ -141,6 +141,7 @@ struct rt_sigframe {
static int restore_sigframe(struct pt_regs *regs, struct sigframe __user *sf) { + struct sigcontext context; struct aux_sigframe __user *aux; sigset_t set; int err; @@ -149,23 +150,26 @@ static int restore_sigframe(struct pt_regs *regs, struct sigframe __user *sf) if (err == 0) set_current_blocked(&set);
- __get_user_error(regs->ARM_r0, &sf->uc.uc_mcontext.arm_r0, err); - __get_user_error(regs->ARM_r1, &sf->uc.uc_mcontext.arm_r1, err); - __get_user_error(regs->ARM_r2, &sf->uc.uc_mcontext.arm_r2, err); - __get_user_error(regs->ARM_r3, &sf->uc.uc_mcontext.arm_r3, err); - __get_user_error(regs->ARM_r4, &sf->uc.uc_mcontext.arm_r4, err); - __get_user_error(regs->ARM_r5, &sf->uc.uc_mcontext.arm_r5, err); - __get_user_error(regs->ARM_r6, &sf->uc.uc_mcontext.arm_r6, err); - __get_user_error(regs->ARM_r7, &sf->uc.uc_mcontext.arm_r7, err); - __get_user_error(regs->ARM_r8, &sf->uc.uc_mcontext.arm_r8, err); - __get_user_error(regs->ARM_r9, &sf->uc.uc_mcontext.arm_r9, err); - __get_user_error(regs->ARM_r10, &sf->uc.uc_mcontext.arm_r10, err); - __get_user_error(regs->ARM_fp, &sf->uc.uc_mcontext.arm_fp, err); - __get_user_error(regs->ARM_ip, &sf->uc.uc_mcontext.arm_ip, err); - __get_user_error(regs->ARM_sp, &sf->uc.uc_mcontext.arm_sp, err); - __get_user_error(regs->ARM_lr, &sf->uc.uc_mcontext.arm_lr, err); - __get_user_error(regs->ARM_pc, &sf->uc.uc_mcontext.arm_pc, err); - __get_user_error(regs->ARM_cpsr, &sf->uc.uc_mcontext.arm_cpsr, err); + err |= __copy_from_user(&context, &sf->uc.uc_mcontext, sizeof(context)); + if (err == 0) { + regs->ARM_r0 = context.arm_r0; + regs->ARM_r1 = context.arm_r1; + regs->ARM_r2 = context.arm_r2; + regs->ARM_r3 = context.arm_r3; + regs->ARM_r4 = context.arm_r4; + regs->ARM_r5 = context.arm_r5; + regs->ARM_r6 = context.arm_r6; + regs->ARM_r7 = context.arm_r7; + regs->ARM_r8 = context.arm_r8; + regs->ARM_r9 = context.arm_r9; + regs->ARM_r10 = context.arm_r10; + regs->ARM_fp = context.arm_fp; + regs->ARM_ip = context.arm_ip; + regs->ARM_sp = context.arm_sp; + regs->ARM_lr = context.arm_lr; + regs->ARM_pc = context.arm_pc; + regs->ARM_cpsr = context.arm_cpsr; + }
err |= !valid_user_regs(regs);
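The transformation is the same wherever it appears in this series: replace many per-member uaccess operations with one bounds-checked bulk copy into a kernel stack buffer, then faultless assignments. A self-contained sketch (the struct name is hypothetical):

	struct uctx { unsigned long a, b; };

	static int restore_ctx(struct uctx *dst, const struct uctx __user *src)
	{
		struct uctx tmp;

		if (__copy_from_user(&tmp, src, sizeof(tmp)))
			return -EFAULT;
		dst->a = tmp.a;	/* plain kernel-to-kernel copies, no faults */
		dst->b = tmp.b;
		return 0;
	}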
From: Russell King rmk+kernel@armlinux.org.uk
Commit 42019fc50dfadb219f9e6ddf4c354f3837057d80 upstream.
__get_user_error() is used as a fast accessor to make copying structure members in the signal handling path as efficient as possible. However, with software PAN and the recent Spectre variant 1, the efficiency is reduced as these are no longer fast accessors.
In the case of software PAN, it has to switch the domain register around each access, and with Spectre variant 1, it would have to repeat the access_ok() check for each access.
Use __copy_from_user() rather than __get_user_err() for individual members when restoring VFP state.
Acked-by: Mark Rutland mark.rutland@arm.com Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Signed-off-by: David A. Long dave.long@linaro.org --- arch/arm/include/asm/thread_info.h | 4 ++-- arch/arm/kernel/signal.c | 18 ++++++++---------- arch/arm/vfp/vfpmodule.c | 17 +++++++---------- 3 files changed, 17 insertions(+), 22 deletions(-)
diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h index 776757d1604a..57d2ad9c75ca 100644 --- a/arch/arm/include/asm/thread_info.h +++ b/arch/arm/include/asm/thread_info.h @@ -126,8 +126,8 @@ struct user_vfp_exc;
extern int vfp_preserve_user_clear_hwstate(struct user_vfp __user *, struct user_vfp_exc __user *); -extern int vfp_restore_user_hwstate(struct user_vfp __user *, - struct user_vfp_exc __user *); +extern int vfp_restore_user_hwstate(struct user_vfp *, + struct user_vfp_exc *); #endif
/* diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c index a592bc0287f8..76f85c38f2b8 100644 --- a/arch/arm/kernel/signal.c +++ b/arch/arm/kernel/signal.c @@ -107,21 +107,19 @@ static int preserve_vfp_context(struct vfp_sigframe __user *frame) return vfp_preserve_user_clear_hwstate(&frame->ufp, &frame->ufp_exc); }
-static int restore_vfp_context(struct vfp_sigframe __user *frame) +static int restore_vfp_context(struct vfp_sigframe __user *auxp) { - unsigned long magic; - unsigned long size; - int err = 0; - - __get_user_error(magic, &frame->magic, err); - __get_user_error(size, &frame->size, err); + struct vfp_sigframe frame; + int err;
+ err = __copy_from_user(&frame, (char __user *) auxp, sizeof(frame)); if (err) - return -EFAULT; - if (magic != VFP_MAGIC || size != VFP_STORAGE_SIZE) + return err; + + if (frame.magic != VFP_MAGIC || frame.size != VFP_STORAGE_SIZE) return -EINVAL;
- return vfp_restore_user_hwstate(&frame->ufp, &frame->ufp_exc); + return vfp_restore_user_hwstate(&frame.ufp, &frame.ufp_exc); }
#endif diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c index 2a61e4b04600..7aa6366b2a8d 100644 --- a/arch/arm/vfp/vfpmodule.c +++ b/arch/arm/vfp/vfpmodule.c @@ -601,13 +601,11 @@ int vfp_preserve_user_clear_hwstate(struct user_vfp __user *ufp, }
/* Sanitise and restore the current VFP state from the provided structures. */ -int vfp_restore_user_hwstate(struct user_vfp __user *ufp, - struct user_vfp_exc __user *ufp_exc) +int vfp_restore_user_hwstate(struct user_vfp *ufp, struct user_vfp_exc *ufp_exc) { struct thread_info *thread = current_thread_info(); struct vfp_hard_struct *hwstate = &thread->vfpstate.hard; unsigned long fpexc; - int err = 0;
/* Disable VFP to avoid corrupting the new thread state. */ vfp_flush_hwstate(thread); @@ -616,17 +614,16 @@ int vfp_restore_user_hwstate(struct user_vfp __user *ufp, * Copy the floating point registers. There can be unused * registers see asm/hwcap.h for details. */ - err |= __copy_from_user(&hwstate->fpregs, &ufp->fpregs, - sizeof(hwstate->fpregs)); + memcpy(&hwstate->fpregs, &ufp->fpregs, sizeof(hwstate->fpregs)); /* * Copy the status and control register. */ - __get_user_error(hwstate->fpscr, &ufp->fpscr, err); + hwstate->fpscr = ufp->fpscr;
/* * Sanitise and restore the exception registers. */ - __get_user_error(fpexc, &ufp_exc->fpexc, err); + fpexc = ufp_exc->fpexc;
/* Ensure the VFP is enabled. */ fpexc |= FPEXC_EN; @@ -635,10 +632,10 @@ int vfp_restore_user_hwstate(struct user_vfp __user *ufp, fpexc &= ~(FPEXC_EX | FPEXC_FP2V); hwstate->fpexc = fpexc;
- __get_user_error(hwstate->fpinst, &ufp_exc->fpinst, err); - __get_user_error(hwstate->fpinst2, &ufp_exc->fpinst2, err); + hwstate->fpinst = ufp_exc->fpinst; + hwstate->fpinst2 = ufp_exc->fpinst2;
- return err ? -EFAULT : 0; + return 0; }
/*
From: Russell King rmk+kernel@armlinux.org.uk
Commit 8c8484a1c18e3231648f5ba7cc5ffb7fd70b3ca4 upstream.
__get_user_error() is used as a fast accessor to make copying structure members as efficient as possible. However, with software PAN and the recent Spectre variant 1, the efficiency is reduced as these are no longer fast accessors.
In the case of software PAN, it has to switch the domain register around each access, and with Spectre variant 1, it would have to repeat the access_ok() check for each access.
Rather than using __get_user_error() to copy each semops element member, copy each semops element in full using __copy_from_user().
Acked-by: Mark Rutland mark.rutland@arm.com Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Signed-off-by: David A. Long dave.long@linaro.org --- arch/arm/kernel/sys_oabi-compat.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/arm/kernel/sys_oabi-compat.c b/arch/arm/kernel/sys_oabi-compat.c
index 5f221acd21ae..640748e27035 100644
--- a/arch/arm/kernel/sys_oabi-compat.c
+++ b/arch/arm/kernel/sys_oabi-compat.c
@@ -328,9 +328,11 @@ asmlinkage long sys_oabi_semtimedop(int semid,
		return -ENOMEM;
	err = 0;
	for (i = 0; i < nsops; i++) {
-		__get_user_error(sops[i].sem_num, &tsops->sem_num, err);
-		__get_user_error(sops[i].sem_op, &tsops->sem_op, err);
-		__get_user_error(sops[i].sem_flg, &tsops->sem_flg, err);
+		struct oabi_sembuf osb;
+		err |= __copy_from_user(&osb, tsops, sizeof(osb));
+		sops[i].sem_num = osb.sem_num;
+		sops[i].sem_op = osb.sem_op;
+		sops[i].sem_flg = osb.sem_flg;
		tsops++;
	}
	if (timeout) {
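The per-element loop is kept (rather than one bulk copy of the whole array) because the OABI layout differs from the native EABI sembuf; sys_oabi-compat.c defines the user-side layout roughly as:

	struct oabi_sembuf {
		unsigned short	sem_num;
		short		sem_op;
		short		sem_flg;
		unsigned short	__pad;
	};

so each element is now fetched with a single __copy_from_user() and then converted field by field into the kernel's struct sembuf.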
From: Russell King rmk+kernel@armlinux.org.uk
Commit d09fbb327d670737ab40fd8bbb0765ae06b8b739 upstream.
Borrow the x86 implementation of __inttype() to use in get_user() to select an integer type suitable to temporarily hold the result value. This is necessary to avoid propagating the volatile nature of the result argument, which can cause the following warning:
lib/iov_iter.c:413:5: warning: optimization may eliminate reads and/or writes to register variables [-Wvolatile-register-var]
Acked-by: Mark Rutland mark.rutland@arm.com Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Signed-off-by: David A. Long dave.long@linaro.org --- arch/arm/include/asm/uaccess.h | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index cd8b589111ba..968b50063431 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -122,6 +122,13 @@ static inline void set_fs(mm_segment_t fs)
		: "cc"); \
	flag; })

+/*
+ * This is a type: either unsigned long, if the argument fits into
+ * that type, or otherwise unsigned long long.
+ */
+#define __inttype(x) \
+	__typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
+
 /*
  * Single-value transfer routines.  They automatically use the right
  * size if we just have the right pointer type.  Note that the functions
@@ -191,7 +198,7 @@ extern int __get_user_64t_4(void *);
 ({								\
	unsigned long __limit = current_thread_info()->addr_limit - 1; \
	register const typeof(*(p)) __user *__p asm("r0") = (p);\
-	register typeof(x) __r2 asm("r2");			\
+	register __inttype(x) __r2 asm("r2");			\
	register unsigned long __l asm("r1") = __limit;		\
	register int __e asm("r0");				\
	unsigned int __ua_flags = uaccess_save_and_enable();	\
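The effect of __inttype() is easy to pin down with a few compile-time checks (illustrative only):

	#include <linux/bug.h>

	static inline void __inttype_examples(void)
	{
		char c = 0;
		volatile int v = 0;
		long long ll = 0;

		/* qualifiers (including volatile) are stripped, and the
		 * result is a plain register-sized integer type */
		BUILD_BUG_ON(sizeof(__inttype(c)) != sizeof(unsigned long));
		BUILD_BUG_ON(sizeof(__inttype(v)) != sizeof(unsigned long));
		BUILD_BUG_ON(sizeof(__inttype(ll)) != sizeof(unsigned long long));
	}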
From: Russell King rmk+kernel@armlinux.org.uk
Commit b1cd0a14806321721aae45f5446ed83a3647c914 upstream.
Fixing __get_user() for spectre variant 1 is not sane: we would have to add address space bounds checking in order to validate that the location should be accessed, and then zero the address if found to be invalid.
Since __get_user() is supposed to avoid the bounds check, and this is exactly what get_user() does, there's no point having two different implementations that are doing the same thing. So, when the Spectre workarounds are required, make __get_user() an alias of get_user().
Acked-by: Mark Rutland mark.rutland@arm.com Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Signed-off-by: David A. Long dave.long@linaro.org --- arch/arm/include/asm/uaccess.h | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h index 968b50063431..ecd159b45f12 100644 --- a/arch/arm/include/asm/uaccess.h +++ b/arch/arm/include/asm/uaccess.h @@ -314,6 +314,15 @@ static inline void set_fs(mm_segment_t fs) #define user_addr_max() \ (segment_eq(get_fs(), KERNEL_DS) ? ~0UL : get_fs())
+#ifdef CONFIG_CPU_SPECTRE +/* + * When mitigating Spectre variant 1, it is not worth fixing the non- + * verifying accessors, because we need to add verification of the + * address space there. Force these to use the standard get_user() + * version instead. + */ +#define __get_user(x, ptr) get_user(x, ptr) +#else /* * The "__xxx" versions of the user access functions do not verify the * address space - it must have been done previously with a separate @@ -330,12 +339,6 @@ static inline void set_fs(mm_segment_t fs) __gu_err; \ })
-#define __get_user_error(x, ptr, err) \ -({ \ - __get_user_err((x), (ptr), err); \ - (void) 0; \ -}) - #define __get_user_err(x, ptr, err) \ do { \ unsigned long __gu_addr = (unsigned long)(ptr); \ @@ -395,6 +398,7 @@ do { \
#define __get_user_asm_word(x, addr, err) \ __get_user_asm(x, addr, err, ldr) +#endif
#define __put_user(x, ptr) \ ({ \
From: Russell King rmk+kernel@armlinux.org.uk
Commit a3c0f84765bb429ba0fd23de1c57b5e1591c9389 upstream.
Spectre variant 1 attacks are about this sequence of pseudo-code:
index = load(user-manipulated pointer); access(base + index * stride);
In order for the cache side-channel to work, the access() must be made to memory for which userspace can detect whether cache lines have been loaded. On 32-bit ARM, this must be either user accessible memory, or a kernel mapping of that same user accessible memory.
The problem occurs when the load() speculatively loads privileged data, and the subsequent access() is made to user accessible memory.
Any load() which makes use of a user-manipulated pointer is a potential problem if the data it has loaded is used in a subsequent access. This also applies to the access() if the data loaded by that access is used by a subsequent access.
Harden the get_user() accessors against Spectre attacks by forcing out of bounds addresses to a NULL pointer. This prevents get_user() being used as the load() step above. As a side effect, put_user() will also be affected even though it isn't implicated.
Also harden copy_from_user() by redoing the bounds check within the arm_copy_from_user() code, and NULLing the pointer if out of bounds.
Acked-by: Mark Rutland mark.rutland@arm.com Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Signed-off-by: David A. Long dave.long@linaro.org --- arch/arm/include/asm/assembler.h | 4 ++++ arch/arm/lib/copy_from_user.S | 9 +++++++++ 2 files changed, 13 insertions(+)
diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 307901f88a1e..483481c6937e 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -454,6 +454,10 @@ THUMB(	orr	\reg , \reg , #PSR_T_BIT	)
	adds	\tmp, \addr, #\size - 1
	sbcccs	\tmp, \tmp, \limit
	bcs	\bad
+#ifdef CONFIG_CPU_SPECTRE
+	movcs	\addr, #0
+	csdb
+#endif
 #endif
	.endm

diff --git a/arch/arm/lib/copy_from_user.S b/arch/arm/lib/copy_from_user.S
index 1512bebfbf1b..d36329cefedc 100644
--- a/arch/arm/lib/copy_from_user.S
+++ b/arch/arm/lib/copy_from_user.S
@@ -90,6 +90,15 @@
	.text

 ENTRY(arm_copy_from_user)
+#ifdef CONFIG_CPU_SPECTRE
+	get_thread_info r3
+	ldr	r3, [r3, #TI_ADDR_LIMIT]
+	adds	ip, r1, r2	@ ip=addr+size
+	sub	r3, r3, #1	@ addr_limit - 1
+	cmpcc	ip, r3		@ if (addr+size > addr_limit - 1)
+	movcs	r1, #0		@ addr = NULL
+	csdb
+#endif

 #include "copy_template.S"
On 2018/10/31 22:04, David Long wrote:
From: "David A. Long" dave.long@linaro.org
V4.4 backport of spectre patches from Russell M. King's spectre branch. Most KVM patches are excluded. Patches not yet in upstream are excluded.
I tested this patch set on top of the stable 4.4 kernel, running on boards with A9 and A15 based Hisilicon SoCs, and didn't see boot regressions or other functional regressions in our CI system.
Tested-by: Hanjun Guo hanjun.guo@linaro.org
Since this patch set doesn't include PSCI based hardening for arm32, bugfix 6282e916f774 ("ARM: 8809/1: proc-v7: fix Thumb annotation of cpu_v7_hvc_switch_mm") is not needed here, and the series is in good shape I think. So what's the plan for this patch set?
Thanks Hanjun
On 23/11/2018 01:25, Hanjun Guo wrote:
> Since this patch set didn't include the PSCI-based hardening for arm32, bugfix 6282e916f774 ("ARM: 8809/1: proc-v7: fix Thumb annotation of cpu_v7_hvc_switch_mm") is not needed here, and I think this patch set is in good shape. So what's the plan for this patch set?
Well, not having these patches means that a 32bit kernel won't get any Spectre-v2 mitigation when run as a guest on an arm64 platform. It turns out that this is a pretty common setup among people building large pieces of SW, such as distributions.
Not having KVM host mitigation on 32bit ARM is probably OK (let's face it, I'm the only user), but not mitigating it as a guest doesn't seem completely OK to me.
Thanks,
M.
Hi Marc,
On 2018/11/23 17:10, Marc Zyngier wrote:
> Well, not having these patches means that a 32bit kernel won't get any Spectre-v2 mitigation when run as a guest on an arm64 platform. It turns out that this is a pretty common setup among people building large pieces of SW, such as distributions.
I almost missed this point; that makes sense to me :)
> Not having KVM host mitigation on 32bit ARM is probably OK (let's face it, I'm the only user), but not mitigating it as a guest doesn't seem completely OK to me.
We are working on a patch set backported from mainline to fix ARM64 spectre-v1, spectre-v2 and SSBD for the stable 4.4 kernel. That patch set (almost done) includes the PSCI patches needed by 32-bit ARM, so how about posting those ARM64 spectre fixes first, then backporting all those KVM patches for the 32-bit ARM spectre fix as well?
Thanks Hanjun
Hi Hanjun,
On 23/11/2018 09:40, Hanjun Guo wrote:
> We are working on a patch set backported from mainline to fix ARM64 spectre-v1, spectre-v2 and SSBD for the stable 4.4 kernel. That patch set (almost done) includes the PSCI patches needed by 32-bit ARM, so how about posting those ARM64 spectre fixes first, then backporting all those KVM patches for the 32-bit ARM spectre fix as well?
I'm not sure I get what you mean by PSCI. PSCI is not involved in the Spectre-v2 mitigation; we use a specially designed SMC call, relying on the SMCCC 1.1 infrastructure. Maybe that's what you're referring to here?
Again, I don't think it is worth the hassle backporting the KVM patches. What I'd like to see is the guest (and bare metal) support code that uses the ARCH_WORKAROUND_1 SMCCC 1.1 infrastructure.
I also don't think it is worth creating an artificial dependency between the two architectures. Yes, some patches are common (the SMCCC infrastructure), but that can easily be solved at merge time. My vote would be for David to carry the relevant patches in this series.
Thanks,
M.
Hi Marc, David,
On 2018/11/23 19:09, Marc Zyngier wrote:
> I'm not sure I get what you mean by PSCI. PSCI is not involved in the Spectre-v2 mitigation; we use a specially designed SMC call, relying on the SMCCC 1.1 infrastructure. Maybe that's what you're referring to here?
Sorry for being unclear; yes, it's SMCCC 1.1 I'm referring to here.
> Again, I don't think it is worth the hassle backporting the KVM patches. What I'd like to see is the guest (and bare metal) support code that uses the ARCH_WORKAROUND_1 SMCCC 1.1 infrastructure.
> I also don't think it is worth creating an artificial dependency between the two architectures. Yes, some patches are common (the SMCCC infrastructure), but that can easily be solved at merge time. My vote would be for David to carry the relevant patches in this series.
Both are OK to me. David, could you please update this patch set as Marc suggested?
Thanks Hanjun
On 11/23/18 6:09 AM, Marc Zyngier wrote:
> Well, not having these patches means that a 32bit kernel won't get any Spectre-v2 mitigation when run as a guest on an arm64 platform. It turns out that this is a pretty common setup among people building large pieces of SW, such as distributions.
I've been watching arm32 spectre patches appear since September and I have a work item to backport these too in the near future. I've been trying to focus on backporting 64-bit security patches to v4.4 in the short term though.
> Again, I don't think it is worth the hassle backporting the KVM patches. What I'd like to see is the guest (and bare metal) support code that uses the ARCH_WORKAROUND_1 SMCCC 1.1 infrastructure.
> I also don't think it is worth creating an artificial dependency between the two architectures. Yes, some patches are common (the SMCCC infrastructure), but that can easily be solved at merge time. My vote would be for David to carry the relevant patches in this series.
I will look at what this means for a new patch set. I'm concerned about how much infrastructure this might mean backporting to v4.4, but I don't have enough familiarity with the code yet to know if that's a valid concern.
-dl
On 2018/11/27 10:16, David Long wrote:
> I've been watching arm32 spectre patches appear since September and I have a work item to backport these too in the near future. I've been trying to focus on backporting 64-bit security patches to v4.4 in the short term though.
That's great; I'm happy to test your patches, please cc me on the next version.
Thanks Hanjun
On 11/23/18 6:09 AM, Marc Zyngier wrote:
> Again, I don't think it is worth the hassle backporting the KVM patches. What I'd like to see is the guest (and bare metal) support code that uses the ARCH_WORKAROUND_1 SMCCC 1.1 infrastructure.
> I also don't think it is worth creating an artificial dependency between the two architectures. Yes, some patches are common (the SMCCC infrastructure), but that can easily be solved at merge time. My vote would be for David to carry the relevant patches in this series.
Marc,
Sorry to be slow in getting back to you on this.
As I've been looking at the six or so virtualization-related patches I excluded from the backports for the less ancient release streams, I'm having a hard time believing you want the "KVM" patches left out of the v4.4 stream. Just their subject lines make them sound like they would have exactly the guest impact you are worried about. Here are the ones from the v4.9 backport that worry me:
[PATCH 4.9 11/24] ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17
[PATCH 4.9 12/24] ARM: KVM: invalidate icache on guest exit for Cortex-A15
[PATCH 4.9 13/24] ARM: spectre-v2: KVM: invalidate icache on guest exit for Brahma B15
Are these really not interesting for v4.4, or am I misunderstanding which patches you meant?
Thanks, -dl
On Fri, Dec 14, 2018 at 12:23:48AM -0500, David Long wrote:
> As I've been looking at the six or so virtualization-related patches I excluded from the backports for the less ancient release streams, I'm having a hard time believing you want the "KVM" patches left out of the v4.4 stream. Just their subject lines make them sound like they would have exactly the guest impact you are worried about. Here are the ones from the v4.9 backport that worry me:
> [PATCH 4.9 11/24] ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17
> [PATCH 4.9 12/24] ARM: KVM: invalidate icache on guest exit for Cortex-A15
> [PATCH 4.9 13/24] ARM: spectre-v2: KVM: invalidate icache on guest exit for Brahma B15
> Are these really not interesting for v4.4, or am I misunderstanding which patches you meant?
I'm getting the impression that this has completely de-railed the backporting effort.
I'm also wondering if this is actually a good idea. The 4.4 and 4.9 32-bit kernels implement a particularly simple hypervisor, where the hypervisor is nothing more than:
__hyp_stub_do_trap:
	cmp	r0, #-1
	mrceq	p15, 4, r0, c12, c0, 0	@ get HVBAR
	mcrne	p15, 4, r0, c12, c0, 0	@ set HVBAR
	__ERET
Would it not be sane to assume that a v4.4 host kernel would support a v4.4 guest kernel under virtualisation? Since this same code is in v4.9, the same is true there (it got changed in v4.12).
If we backport the SMCCC_ARCH_WORKAROUND_1 bits, we end up with an incompatibility between the hypervisor code in v4.4 and the guest kernel code: the guest kernel will attempt to make a hypervisor call with r0=SMCCC_ARCH_WORKAROUND_1, which is 0x80008000. This will end up setting HVBAR to that value, pointing the hypervisor vectors at that address, which is clearly not intended.
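For reference, 0x80008000 is exactly what SMCCC 1.1 encodes for this workaround. A sketch of the encoding, with macro names modeled on (not quoted from) include/linux/arm-smccc.h:

	/*
	 * SMCCC function ID layout: bit 31 = fast call, bit 30 = 64-bit
	 * calling convention, bits 29:24 = owning entity (0 = Arm
	 * architecture service), bits 15:0 = function number.
	 */
	#define SMCCC_CALL_VAL(fast, conv64, owner, func)		\
		(((unsigned int)(fast) << 31) |				\
		 ((unsigned int)(conv64) << 30) |			\
		 (((owner) & 0x3fU) << 24) | ((func) & 0xffffU))

	/* fast call, 32-bit convention, Arm arch owner, function 0x8000 */
	#define SMCCC_ARCH_WORKAROUND_1	SMCCC_CALL_VAL(1, 0, 0, 0x8000)
	/* == 0x80008000, the value the v4.4 hyp stub treats as a new HVBAR */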
Of course, the same will be true if we run a post-4.12 guest kernel, which already makes the SMCCC_ARCH_WORKAROUND_1 call, on a pre-4.12 host kernel.
I forget whether the stable kernels picked up on the changes for this hypervisor or not - if not, it isn't a trivial "just make the guest use the SMCCC_ARCH_WORKAROUND_1 call."
I think Marc's of the opinion that he's the only one who runs kernels under a 32-bit host kernel - how sure can we be that no one out there other than Marc does this?
Apart from that, I'm getting very concerned about the amount of time the backporting is taking - there are about 40 patches in all, and I believe only around half of those have so far been backported to any of the stable kernels. We seem to be so hung up on dealing with v4.4 that the other stable kernels aren't getting the remaining fixes backported.
We've seen the stable people attempt to pick up patches from the series that make no sense on their own, because the real Spectre fixes don't apply because of previous Spectre patches that are missing.
All the time, we have people using the stable kernels without Spectre mitigation in place - and Spectre will have been known about for a year next month.
On 14/12/2018 16:37, Russell King - ARM Linux wrote:
> I'm also wondering if this is actually a good idea. The 4.4 and 4.9 32-bit kernels implement a particularly simple hypervisor, where the hypervisor is nothing more than:
> __hyp_stub_do_trap:
> 	cmp	r0, #-1
> 	mrceq	p15, 4, r0, c12, c0, 0	@ get HVBAR
> 	mcrne	p15, 4, r0, c12, c0, 0	@ set HVBAR
> 	__ERET
Hypervisor? More like a glorified exception handler, and not something that can be reached by a guest.
> Would it not be sane to assume that a v4.4 host kernel would support a v4.4 guest kernel under virtualisation? Since this same code is in v4.9, the same is true there (it got changed in v4.12).
> If we backport the SMCCC_ARCH_WORKAROUND_1 bits, we end up with an incompatibility between the hypervisor code in v4.4 and the guest kernel code: the guest kernel will attempt to make a hypervisor call with r0=SMCCC_ARCH_WORKAROUND_1, which is 0x80008000. This will end up setting HVBAR to that value, pointing the hypervisor vectors at that address, which is clearly not intended.
I don't get it. How does the guest issue an HVC if you don't have KVM installed and running? How does it reach it when KVM is in use?
> I think Marc's of the opinion that he's the only one who runs kernels under a 32-bit host kernel - how sure can we be that no one out there other than Marc does this?
I'm quite curious to find out. During the 4.19 dev cycle, a KVM-enabled 32bit host kernel would die at boot-time. This was only caught at -rc6. At that stage, I'm wondering whether it is worth the effort.
> Apart from that, I'm getting very concerned about the amount of time the backporting is taking - there are about 40 patches in all, and I believe only around half of those have so far been backported to any of the stable kernels. We seem to be so hung up on dealing with v4.4 that the other stable kernels aren't getting the remaining fixes backported.
> We've seen the stable people attempt to pick up patches from the series that make no sense on their own, because the real Spectre fixes don't apply because of previous Spectre patches that are missing.
> All the time, we have people using the stable kernels without Spectre mitigation in place - and Spectre will have been known about for a year next month.
If you want to reduce the scope of the mitigation on older kernels, that is your call.
Thanks,
M.