The following patches bring in
commit 20af540051e1c1f7a7d9b9e4613c1ecc53ded4a6
Author: James Morse <james.morse@arm.com>
Date:   Wed Jul 22 19:05:54 2015 +0100
    arm64: kernel: Add support for Privileged Access Never

    commit 338d4f49d6f7114a017d294ccf7374df4f998edc upstream.

    'Privileged Access Never' is a new ARMv8.1 feature which prevents
    privileged code from accessing any virtual address where read or write
    access is also permitted at EL0.

    This patch enables the PAN feature on all CPUs, and modifies
    {get,put}_user helpers temporarily to permit access.

    This will catch kernel bugs where user memory is accessed directly.
    'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN.
along with an additional 11 patches it depends on.
Daniel Thompson (1):
      arm64: alternative: Provide if/else/endif assembler macros

James Morse (6):
      arm64: kernel: Move config_sctlr_el1
      arm64: kernel: Add cpuid_feature_extract_field() for 4bit sign extension
      arm64: kernel: Add cpufeature 'enable' callback
      arm64: kernel: Add min_field_value and use '>=' for feature detection
      arm64: kernel: Add optional CONFIG_ parameter to ALTERNATIVE()
      arm64: kernel: Add support for Privileged Access Never

Marc Zyngier (3):
      arm64: alternative: Introduce feature for GICv3 CPU interface
      arm64: alternative: Merge alternative-asm.h into alternative.h
      arm64: alternative: Work around .inst assembler bugs

Suzuki K. Poulose (1):
      arm64: Generalise msr_s/mrs_s operations

Will Deacon (1):
      arm64: lib: use pair accessors for copy_*_user routines
 arch/arm64/Kconfig                       |  14 ++++
 arch/arm64/include/asm/alternative-asm.h |  29 --------
 arch/arm64/include/asm/alternative.h     | 109 +++++++++++++++++++++++++++++--
 arch/arm64/include/asm/cpufeature.h      |  17 ++++-
 arch/arm64/include/asm/cputype.h         |   3 -
 arch/arm64/include/asm/futex.h           |   8 +++
 arch/arm64/include/asm/processor.h       |   2 +
 arch/arm64/include/asm/sysreg.h          |  40 ++++++++++--
 arch/arm64/include/asm/uaccess.h         |  11 ++++
 arch/arm64/include/uapi/asm/ptrace.h     |   1 +
 arch/arm64/kernel/armv8_deprecated.c     |  19 +++---
 arch/arm64/kernel/cpufeature.c           |  50 ++++++++++++++
 arch/arm64/kernel/entry.S                |   2 +-
 arch/arm64/lib/clear_user.S              |   8 +++
 arch/arm64/lib/copy_from_user.S          |  25 +++++--
 arch/arm64/lib/copy_in_user.S            |  25 +++++--
 arch/arm64/lib/copy_to_user.S            |  25 +++++--
 arch/arm64/mm/cache.S                    |   2 +-
 arch/arm64/mm/fault.c                    |  16 +++++
 19 files changed, 333 insertions(+), 73 deletions(-)
 delete mode 100644 arch/arm64/include/asm/alternative-asm.h
From: Marc Zyngier marc.zyngier@arm.com
commit 94a9e04aa16abd1194d9b4158c618ba87f5d01e6 upstream.
Add a new item to the feature set (ARM64_HAS_SYSREG_GIC_CPUIF) to indicate that we have a system register GIC CPU interface.
This will help KVM switching to alternative instruction patching.
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
---
 arch/arm64/include/asm/cpufeature.h |  8 +++++++-
 arch/arm64/kernel/cpufeature.c      | 16 ++++++++++++++++
 2 files changed, 23 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 82cb9f9..c104421 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -24,8 +24,9 @@
 #define ARM64_WORKAROUND_CLEAN_CACHE		0
 #define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE	1
 #define ARM64_WORKAROUND_845719			2
+#define ARM64_HAS_SYSREG_GIC_CPUIF		3

-#define ARM64_NCAPS				3
+#define ARM64_NCAPS				4

 #ifndef __ASSEMBLY__

@@ -38,6 +39,11 @@ struct arm64_cpu_capabilities {
 			u32 midr_model;
 			u32 midr_range_min, midr_range_max;
 		};
+
+		struct {	/* Feature register checking */
+			u64 register_mask;
+			u64 register_value;
+		};
 	};
 };

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 3d9967e..5ad86ce 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -22,7 +22,23 @@
 #include <asm/cpu.h>
 #include <asm/cpufeature.h>

+static bool
+has_id_aa64pfr0_feature(const struct arm64_cpu_capabilities *entry)
+{
+	u64 val;
+
+	val = read_cpuid(id_aa64pfr0_el1);
+	return (val & entry->register_mask) == entry->register_value;
+}
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
+	{
+		.desc = "GIC system register CPU interface",
+		.capability = ARM64_HAS_SYSREG_GIC_CPUIF,
+		.matches = has_id_aa64pfr0_feature,
+		.register_mask = (0xf << 24),
+		.register_value = (1 << 24),
+	},
 	{},
 };
From: Will Deacon will.deacon@arm.com
commit 23e94994464a7281838785675e09c8ed1055f62f upstream.
The AArch64 instruction set contains load/store pair memory accessors, so use these in our copy_*_user routines to transfer 16 bytes per iteration.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
---
 arch/arm64/lib/copy_from_user.S | 17 +++++++++++------
 arch/arm64/lib/copy_in_user.S   | 17 +++++++++++------
 arch/arm64/lib/copy_to_user.S   | 17 +++++++++++------
 3 files changed, 33 insertions(+), 18 deletions(-)
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 5e27add..47c3fa5 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -28,14 +28,19 @@
  * x0 - bytes not copied
  */
 ENTRY(__copy_from_user)
-	add	x4, x1, x2		// upper user buffer boundary
-	subs	x2, x2, #8
+	add	x5, x1, x2		// upper user buffer boundary
+	subs	x2, x2, #16
+	b.mi	1f
+0:
+USER(9f, ldp	x3, x4, [x1], #16)
+	subs	x2, x2, #16
+	stp	x3, x4, [x0], #16
+	b.pl	0b
+1:	adds	x2, x2, #8
 	b.mi	2f
-1:	USER(9f, ldr	x3, [x1], #8	)
-	subs	x2, x2, #8
+	sub	x2, x2, #8
 	str	x3, [x0], #8
-	b.pl	1b
 2:	adds	x2, x2, #4
 	b.mi	3f
 USER(9f, ldr	w3, [x1], #4	)
@@ -56,7 +61,7 @@ ENDPROC(__copy_from_user)

 	.section .fixup,"ax"
 	.align	2
-9:	sub	x2, x4, x1
+9:	sub	x2, x5, x1
 	mov	x3, x2
10:	strb	wzr, [x0], #1			// zero remaining buffer space
 	subs	x3, x3, #1
diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S
index 84b6c9b..436bcc5 100644
--- a/arch/arm64/lib/copy_in_user.S
+++ b/arch/arm64/lib/copy_in_user.S
@@ -30,14 +30,19 @@
  * x0 - bytes not copied
  */
 ENTRY(__copy_in_user)
-	add	x4, x0, x2		// upper user buffer boundary
-	subs	x2, x2, #8
+	add	x5, x0, x2		// upper user buffer boundary
+	subs	x2, x2, #16
+	b.mi	1f
+0:
+USER(9f, ldp	x3, x4, [x1], #16)
+	subs	x2, x2, #16
+USER(9f, stp	x3, x4, [x0], #16)
+	b.pl	0b
+1:	adds	x2, x2, #8
 	b.mi	2f
-1:	USER(9f, ldr	x3, [x1], #8	)
-	subs	x2, x2, #8
+	sub	x2, x2, #8
 USER(9f, str	x3, [x0], #8	)
-	b.pl	1b
 2:	adds	x2, x2, #4
 	b.mi	3f
 USER(9f, ldr	w3, [x1], #4	)
@@ -58,6 +63,6 @@ ENDPROC(__copy_in_user)

 	.section .fixup,"ax"
 	.align	2
-9:	sub	x0, x4, x0		// bytes not copied
+9:	sub	x0, x5, x0		// bytes not copied
 	ret
 	.previous
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index a0aeeb9..f5e1f52 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -28,14 +28,19 @@
  * x0 - bytes not copied
  */
 ENTRY(__copy_to_user)
-	add	x4, x0, x2		// upper user buffer boundary
-	subs	x2, x2, #8
+	add	x5, x0, x2		// upper user buffer boundary
+	subs	x2, x2, #16
+	b.mi	1f
+0:
+	ldp	x3, x4, [x1], #16
+	subs	x2, x2, #16
+USER(9f, stp	x3, x4, [x0], #16)
+	b.pl	0b
+1:	adds	x2, x2, #8
 	b.mi	2f
-1:	ldr	x3, [x1], #8
-	subs	x2, x2, #8
+	sub	x2, x2, #8
 USER(9f, str	x3, [x0], #8	)
-	b.pl	1b
 2:	adds	x2, x2, #4
 	b.mi	3f
 	ldr	w3, [x1], #4
@@ -56,6 +61,6 @@ ENDPROC(__copy_to_user)

 	.section .fixup,"ax"
 	.align	2
-9:	sub	x0, x4, x0		// bytes not copied
+9:	sub	x0, x5, x0		// bytes not copied
 	ret
 	.previous
From: James Morse james.morse@arm.com
commit 870828e57b141eff76a5325f20e4691dd2a599b1 upstream.
Later patches need config_sctlr_el1 to set/clear bits in the sctlr_el1 register.
This patch moves this function into a header file.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
---
 arch/arm64/include/asm/cputype.h     |  3 ---
 arch/arm64/include/asm/sysreg.h      | 12 ++++++++++++
 arch/arm64/kernel/armv8_deprecated.c | 11 +----------
 3 files changed, 13 insertions(+), 13 deletions(-)
diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index a84ec60..ee6403d 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -81,9 +81,6 @@
 #define ID_AA64MMFR0_BIGEND(mmfr0)	\
 	(((mmfr0) & ID_AA64MMFR0_BIGEND_MASK) >> ID_AA64MMFR0_BIGEND_SHIFT)

-#define SCTLR_EL1_CP15BEN	(0x1 << 5)
-#define SCTLR_EL1_SED		(0x1 << 8)
-
 #ifndef __ASSEMBLY__

 /*
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 5c89df0..56391fb 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -20,6 +20,9 @@
 #ifndef __ASM_SYSREG_H
 #define __ASM_SYSREG_H

+#define SCTLR_EL1_CP15BEN	(0x1 << 5)
+#define SCTLR_EL1_SED		(0x1 << 8)
+
 #define sys_reg(op0, op1, crn, crm, op2) \
 	((((op0)-2)<<19)|((op1)<<16)|((crn)<<12)|((crm)<<8)|((op2)<<5))

@@ -55,6 +58,15 @@ asm(
 "	.endm\n"
 );

+static inline void config_sctlr_el1(u32 clear, u32 set)
+{
+	u32 val;
+
+	asm volatile("mrs %0, sctlr_el1" : "=r" (val));
+	val &= ~clear;
+	val |= set;
+	asm volatile("msr sctlr_el1, %0" : : "r" (val));
+}
 #endif

 #endif	/* __ASM_SYSREG_H */
diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
index 7922c2e..78d56bf 100644
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@ -16,6 +16,7 @@

 #include <asm/insn.h>
 #include <asm/opcodes.h>
+#include <asm/sysreg.h>
 #include <asm/system_misc.h>
 #include <asm/traps.h>
 #include <asm/uaccess.h>
@@ -504,16 +505,6 @@ ret:
 	return 0;
 }

-static inline void config_sctlr_el1(u32 clear, u32 set)
-{
-	u32 val;
-
-	asm volatile("mrs %0, sctlr_el1" : "=r" (val));
-	val &= ~clear;
-	val |= set;
-	asm volatile("msr sctlr_el1, %0" : : "r" (val));
-}
-
 static int cp15_barrier_set_hw_mode(bool enable)
 {
 	if (enable)
From: James Morse james.morse@arm.com
commit 79b0e09a3c9bd74ee54582efdb351179d7c00351 upstream.
Based on arch/arm/include/asm/cputype.h, this function does the shifting and sign extension necessary when accessing cpu feature fields.
Signed-off-by: James Morse <james.morse@arm.com>
Suggested-by: Russell King <linux@arm.linux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
---
 arch/arm64/include/asm/cpufeature.h | 7 +++++++
 1 file changed, 7 insertions(+)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index c104421..9fafa75 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -70,6 +70,13 @@ static inline void cpus_set_cap(unsigned int num)
 	__set_bit(num, cpu_hwcaps);
 }

+static inline int __attribute_const__ cpuid_feature_extract_field(u64 features,
+								  int field)
+{
+	return (s64)(features << (64 - 4 - field)) >> (64 - 4);
+}
+
+
 void check_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 			    const char *info);
 void check_local_cpu_errata(void);
From: James Morse james.morse@arm.com
commit 1c0763037f1e1caef739e36e09c6d41ed7b61b2d upstream.
This patch adds an 'enable()' callback to cpu capability/feature detection, allowing features that require some setup or configuration to get this opportunity once the feature has been detected.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
---
 arch/arm64/include/asm/cpufeature.h | 1 +
 arch/arm64/kernel/cpufeature.c      | 6 ++++++
 2 files changed, 7 insertions(+)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 9fafa75..484fa94 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -34,6 +34,7 @@ struct arm64_cpu_capabilities {
 	const char *desc;
 	u16 capability;
 	bool (*matches)(const struct arm64_cpu_capabilities *);
+	void (*enable)(void);
 	union {
 		struct {	/* To be used for erratum handling only */
 			u32 midr_model;
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 5ad86ce..650ffc2 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -55,6 +55,12 @@ void check_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 		pr_info("%s %s\n", info, caps[i].desc);
 		cpus_set_cap(caps[i].capability);
 	}
+
+	/* second pass allows enable() to consider interacting capabilities */
+	for (i = 0; caps[i].desc; i++) {
+		if (cpus_have_cap(caps[i].capability) && caps[i].enable)
+			caps[i].enable();
+	}
 }
void check_local_cpu_features(void)
From: James Morse james.morse@arm.com
commit 18ffa046c509d0cd011eeea2c0418f2d014771fc upstream.
When a new cpu feature is available, the cpu feature bits will have some initial value, which is incremented when the feature is updated. This patch changes 'register_value' to be 'min_field_value', and checks the feature bits value (interpreted as a signed int) is greater than this minimum.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
---
 arch/arm64/include/asm/cpufeature.h |  4 ++--
 arch/arm64/kernel/cpufeature.c      | 14 +++++++++++---
 2 files changed, 13 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 484fa94..f595f7d 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -42,8 +42,8 @@ struct arm64_cpu_capabilities {
 		};

 		struct {	/* Feature register checking */
-			u64 register_mask;
-			u64 register_value;
+			int field_pos;
+			int min_field_value;
 		};
 	};
 };
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 650ffc2..74fd0f7 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -23,12 +23,20 @@
 #include <asm/cpufeature.h>

 static bool
+feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
+{
+	int val = cpuid_feature_extract_field(reg, entry->field_pos);
+
+	return val >= entry->min_field_value;
+}
+
+static bool
 has_id_aa64pfr0_feature(const struct arm64_cpu_capabilities *entry)
 {
 	u64 val;

 	val = read_cpuid(id_aa64pfr0_el1);
-	return (val & entry->register_mask) == entry->register_value;
+	return feature_matches(val, entry);
 }

 static const struct arm64_cpu_capabilities arm64_features[] = {
@@ -36,8 +44,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.desc = "GIC system register CPU interface",
 		.capability = ARM64_HAS_SYSREG_GIC_CPUIF,
 		.matches = has_id_aa64pfr0_feature,
-		.register_mask = (0xf << 24),
-		.register_value = (1 << 24),
+		.field_pos = 24,
+		.min_field_value = 1,
 	},
 	{},
 };
From: "Suzuki K. Poulose" suzuki.poulose@arm.com
commit 9ded63aaf83eba76e1a54ac02581c2badc497f1a upstream.
The system register encoding generated by sys_reg() works only for MRS/MSR (register) operations, as we hardcode Bit20 to 1 in the mrs_s/msr_s mask. This makes it unusable for generating instructions accessing registers with Op0 < 2 (e.g., PSTATE.x with Op0 = 0).
As per ARMv8 ARM, (Ref: ARMv8 ARM, Section: "System instruction class encoding overview", C5.2, version:ARM DDI 0487A.f), the instruction encoding reserves bits [20-19] for Op0.
This patch generalises the sys_reg, mrs_s and msr_s macros, so that we can use them to access any of the supported system registers.
Cc: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
---
 arch/arm64/include/asm/sysreg.h | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 56391fb..5295bcb 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -23,8 +23,18 @@
 #define SCTLR_EL1_CP15BEN	(0x1 << 5)
 #define SCTLR_EL1_SED		(0x1 << 8)

+/*
+ * ARMv8 ARM reserves the following encoding for system registers:
+ * (Ref: ARMv8 ARM, Section: "System instruction class encoding overview",
+ *  C5.2, version:ARM DDI 0487A.f)
+ *	[20-19] : Op0
+ *	[18-16] : Op1
+ *	[15-12] : CRn
+ *	[11-8]  : CRm
+ *	[7-5]   : Op2
+ */
 #define sys_reg(op0, op1, crn, crm, op2) \
-	((((op0)-2)<<19)|((op1)<<16)|((crn)<<12)|((crm)<<8)|((op2)<<5))
+	((((op0)&3)<<19)|((op1)<<16)|((crn)<<12)|((crm)<<8)|((op2)<<5))

 #ifdef __ASSEMBLY__

@@ -34,11 +44,11 @@
 	.equ	__reg_num_xzr, 31

 	.macro	mrs_s, rt, sreg
-	.inst	0xd5300000|(\sreg)|(__reg_num_\rt)
+	.inst	0xd5200000|(\sreg)|(__reg_num_\rt)
 	.endm

 	.macro	msr_s, sreg, rt
-	.inst	0xd5100000|(\sreg)|(__reg_num_\rt)
+	.inst	0xd5000000|(\sreg)|(__reg_num_\rt)
 	.endm

 #else

@@ -50,11 +60,11 @@ asm(
 "	.equ	__reg_num_xzr, 31\n"
 "\n"
 "	.macro	mrs_s, rt, sreg\n"
-"	.inst	0xd5300000|(\\sreg)|(__reg_num_\\rt)\n"
+"	.inst	0xd5200000|(\\sreg)|(__reg_num_\\rt)\n"
 "	.endm\n"
 "\n"
 "	.macro	msr_s, sreg, rt\n"
-"	.inst	0xd5100000|(\\sreg)|(__reg_num_\\rt)\n"
+"	.inst	0xd5000000|(\\sreg)|(__reg_num_\\rt)\n"
 "	.endm\n"
 );
From: Marc Zyngier marc.zyngier@arm.com
commit 8d883b23aed73cad844ba48051c7e96eddf0f51c upstream.
asm/alternative-asm.h and asm/alternative.h are extremely similar, and really deserve to live in the same file (as this makes further modifications a bit easier).
Fold the content of alternative-asm.h into alternative.h, and update the few users.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
---
 arch/arm64/include/asm/alternative-asm.h | 29 -----------------------------
 arch/arm64/include/asm/alternative.h     | 27 +++++++++++++++++++++++++++
 arch/arm64/kernel/entry.S                |  2 +-
 arch/arm64/mm/cache.S                    |  2 +-
 4 files changed, 29 insertions(+), 31 deletions(-)
 delete mode 100644 arch/arm64/include/asm/alternative-asm.h
diff --git a/arch/arm64/include/asm/alternative-asm.h b/arch/arm64/include/asm/alternative-asm.h
deleted file mode 100644
index 919a678..0000000
--- a/arch/arm64/include/asm/alternative-asm.h
+++ /dev/null
@@ -1,29 +0,0 @@
-#ifndef __ASM_ALTERNATIVE_ASM_H
-#define __ASM_ALTERNATIVE_ASM_H
-
-#ifdef __ASSEMBLY__
-
-.macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len
-	.word \orig_offset - .
-	.word \alt_offset - .
-	.hword \feature
-	.byte \orig_len
-	.byte \alt_len
-.endm
-
-.macro alternative_insn insn1 insn2 cap
-661:	\insn1
-662:	.pushsection .altinstructions, "a"
-	altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f
-	.popsection
-	.pushsection .altinstr_replacement, "ax"
-663:	\insn2
-664:	.popsection
-	.if ((664b-663b) != (662b-661b))
-		.error "Alternatives instruction length mismatch"
-	.endif
-.endm
-
-#endif /* __ASSEMBLY__ */
-
-#endif /* __ASM_ALTERNATIVE_ASM_H */
diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index d261f01..265b13e 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -1,6 +1,8 @@
 #ifndef __ASM_ALTERNATIVE_H
 #define __ASM_ALTERNATIVE_H

+#ifndef __ASSEMBLY__
+
 #include <linux/types.h>
 #include <linux/stddef.h>
 #include <linux/stringify.h>
@@ -41,4 +43,29 @@ void free_alternatives_memory(void);
 	"	.error \"Alternatives instruction length mismatch\"\n\t"\
 	".endif\n"

+#else
+
+.macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len
+	.word \orig_offset - .
+	.word \alt_offset - .
+	.hword \feature
+	.byte \orig_len
+	.byte \alt_len
+.endm
+
+.macro alternative_insn insn1 insn2 cap
+661:	\insn1
+662:	.pushsection .altinstructions, "a"
+	altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f
+	.popsection
+	.pushsection .altinstr_replacement, "ax"
+663:	\insn2
+664:	.popsection
+	.if ((664b-663b) != (662b-661b))
+		.error "Alternatives instruction length mismatch"
+	.endif
+.endm
+
+#endif /* __ASSEMBLY__ */
+
 #endif /* __ASM_ALTERNATIVE_H */
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index bddd04d..3661b12 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -21,7 +21,7 @@
 #include <linux/init.h>
 #include <linux/linkage.h>

-#include <asm/alternative-asm.h>
+#include <asm/alternative.h>
 #include <asm/assembler.h>
 #include <asm/asm-offsets.h>
 #include <asm/cpufeature.h>
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 2560e1e..70a79cb 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -22,7 +22,7 @@
 #include <linux/init.h>
 #include <asm/assembler.h>
 #include <asm/cpufeature.h>
-#include <asm/alternative-asm.h>
+#include <asm/alternative.h>
#include "proc-macros.S"
From: Marc Zyngier marc.zyngier@arm.com
commit eb7c11ee3c5ce6c45ac28a5015a8e60ed458b412 upstream.
AArch64 toolchains suffer from the following bug:
$ cat blah.S
1:	.inst	0x01020304
	.if ((. - 1b) != 4)
	.error "blah"
	.endif
$ aarch64-linux-gnu-gcc -c blah.S
blah.S: Assembler messages:
blah.S:3: Error: non-constant expression in ".if" statement
which precludes the use of msr_s and co as part of alternatives.
We work around this issue by not directly testing the labels themselves, but by moving the current output pointer by a value that should always be zero. If this value is not zero, then we will trigger a backward move, which is explicitly forbidden. This triggers the error we're after:
  AS      arch/arm64/kvm/hyp.o
arch/arm64/kvm/hyp.S: Assembler messages:
arch/arm64/kvm/hyp.S:1377: Error: attempt to move .org backwards
scripts/Makefile.build:294: recipe for target 'arch/arm64/kvm/hyp.o' failed
make[1]: *** [arch/arm64/kvm/hyp.o] Error 1
Makefile:946: recipe for target 'arch/arm64/kvm' failed
Not pretty, but at least works on the current toolchains.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
---
 arch/arm64/include/asm/alternative.h | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index 265b13e..c385a0c 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -26,7 +26,20 @@ void free_alternatives_memory(void);
 "	.byte 662b-661b\n"			/* source len	   */	\
 "	.byte 664f-663f\n"			/* replacement len */

-/* alternative assembly primitive: */
+/*
+ * alternative assembly primitive:
+ *
+ * If any of these .org directive fail, it means that insn1 and insn2
+ * don't have the same length. This used to be written as
+ *
+ * .if ((664b-663b) != (662b-661b))
+ * 	.error "Alternatives instruction length mismatch"
+ * .endif
+ *
+ * but most assemblers die if insn1 or insn2 have a .inst. This should
+ * be fixed in a binutils release posterior to 2.25.51.0.2 (anything
+ * containing commit 4e4d08cf7399b606 or c1baaddf8861).
+ */
 #define ALTERNATIVE(oldinstr, newinstr, feature)			\
 	"661:\n\t"							\
 	oldinstr "\n"							\
@@ -39,9 +52,8 @@ void free_alternatives_memory(void);
 	newinstr "\n"							\
 	"664:\n\t"							\
 	".popsection\n\t"						\
-	".if ((664b-663b) != (662b-661b))\n\t"				\
-	"	.error \"Alternatives instruction length mismatch\"\n\t"\
-	".endif\n"
+	".org	. - (664b-663b) + (662b-661b)\n\t"			\
+	".org	. - (662b-661b) + (664b-663b)\n"
#else
@@ -61,9 +73,8 @@ void free_alternatives_memory(void);
 	.pushsection .altinstr_replacement, "ax"
663:	\insn2
664:	.popsection
-	.if ((664b-663b) != (662b-661b))
-		.error "Alternatives instruction length mismatch"
-	.endif
+	.org	. - (664b-663b) + (662b-661b)
+	.org	. - (662b-661b) + (664b-663b)
 	.endm
#endif /* __ASSEMBLY__ */
From: Daniel Thompson daniel.thompson@linaro.org
commit 63e40815f02584ba8174e0f6af40924b2b335cae upstream.
The existing alternative_insn macro has some limitations that make it hard to work with. In particular, the fact that it takes instructions from its own macro arguments means it doesn't play very nicely with C pre-processor macros, because the macro arguments look like a string to the C pre-processor. Workarounds are (probably) possible but things start to look ugly.
Introduce an alternative set of macros that allows instructions to be presented to the assembler as normal and switch everything over to the new macros.
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
---
 arch/arm64/include/asm/alternative.h | 41 ++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)
diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index c385a0c..e86681ad 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -77,6 +77,47 @@ void free_alternatives_memory(void);
 	.org	. - (662b-661b) + (664b-663b)
 .endm

+/*
+ * Begin an alternative code sequence.
+ *
+ * The code that follows this macro will be assembled and linked as
+ * normal. There are no restrictions on this code.
+ */
+.macro alternative_if_not cap
+	.pushsection .altinstructions, "a"
+	altinstruction_entry 661f, 663f, \cap, 662f-661f, 664f-663f
+	.popsection
+661:
+.endm
+
+/*
+ * Provide the alternative code sequence.
+ *
+ * The code that follows this macro is assembled into a special
+ * section to be used for dynamic patching. Code that follows this
+ * macro must:
+ *
+ * 1. Be exactly the same length (in bytes) as the default code
+ *    sequence.
+ *
+ * 2. Not contain a branch target that is used outside of the
+ *    alternative sequence it is defined in (branches into an
+ *    alternative sequence are not fixed up).
+ */
+.macro alternative_else
+662:	.pushsection .altinstr_replacement, "ax"
+663:
+.endm
+
+/*
+ * Complete an alternative code sequence.
+ */
+.macro alternative_endif
+664:	.popsection
+	.org	. - (664b-663b) + (662b-661b)
+	.org	. - (662b-661b) + (664b-663b)
+.endm
+
 #endif /* __ASSEMBLY__ */
#endif /* __ASM_ALTERNATIVE_H */
From: James Morse james.morse@arm.com
commit 91a5cefa2f98bdd3404c2fba57048c4fa225cc37 upstream.
Some uses of ALTERNATIVE() may depend on a feature that is disabled at compile time by a Kconfig option. In this case the unused alternative instructions waste space, and if the original instruction is a nop, it wastes time and space.
This patch adds an optional 'config' option to ALTERNATIVE() and alternative_insn that allows the compiler to remove both the original and alternative instructions if the config option is not defined.
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
---
 arch/arm64/include/asm/alternative.h | 28 +++++++++++++++++++++++++---
 1 file changed, 25 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index e86681ad..2036788 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -3,6 +3,7 @@

 #ifndef __ASSEMBLY__

+#include <linux/kconfig.h>
 #include <linux/types.h>
 #include <linux/stddef.h>
 #include <linux/stringify.h>
@@ -40,7 +41,8 @@ void free_alternatives_memory(void);
  * be fixed in a binutils release posterior to 2.25.51.0.2 (anything
  * containing commit 4e4d08cf7399b606 or c1baaddf8861).
  */
-#define ALTERNATIVE(oldinstr, newinstr, feature)			\
+#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled)	\
+	".if "__stringify(cfg_enabled)" == 1\n"				\
 	"661:\n\t"							\
 	oldinstr "\n"							\
 	"662:\n"							\
@@ -53,7 +55,11 @@ void free_alternatives_memory(void);
 	"664:\n\t"							\
 	".popsection\n\t"						\
 	".org	. - (664b-663b) + (662b-661b)\n\t"			\
-	".org	. - (662b-661b) + (664b-663b)\n"
+	".org	. - (662b-661b) + (664b-663b)\n"			\
+	".endif\n"
+
+#define _ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg, ...)	\
+	__ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg))

 #else

@@ -65,7 +71,8 @@ void free_alternatives_memory(void);
 	.byte \alt_len
 .endm

-.macro alternative_insn insn1 insn2 cap
+.macro alternative_insn insn1, insn2, cap, enable = 1
+	.if \enable
 661:	\insn1
 662:	.pushsection .altinstructions, "a"
 	altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f
@@ -75,6 +82,7 @@ void free_alternatives_memory(void);
 664:	.popsection
 	.org	. - (664b-663b) + (662b-661b)
 	.org	. - (662b-661b) + (664b-663b)
+	.endif
 .endm

 /*
@@ -118,6 +126,20 @@ void free_alternatives_memory(void);
 	.org	. - (662b-661b) + (664b-663b)
 .endm

+#define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...)			\
+	alternative_insn insn1, insn2, cap, IS_ENABLED(cfg)
+
 #endif	/* __ASSEMBLY__ */

+/*
+ * Usage: asm(ALTERNATIVE(oldinstr, newinstr, feature));
+ *
+ * Usage: asm(ALTERNATIVE(oldinstr, newinstr, feature, CONFIG_FOO));
+ * N.B. If CONFIG_FOO is specified, but not selected, the whole block
+ * will be omitted, including oldinstr.
+ */
+#define ALTERNATIVE(oldinstr, newinstr, ...)	\
+	_ALTERNATIVE_CFG(oldinstr, newinstr, __VA_ARGS__, 1)
+
 #endif	/* __ASM_ALTERNATIVE_H */
From: James Morse james.morse@arm.com
commit 338d4f49d6f7114a017d294ccf7374df4f998edc upstream.

'Privileged Access Never' is a new ARMv8.1 feature which prevents privileged code from accessing any virtual address where read or write access is also permitted at EL0.
This patch enables the PAN feature on all CPUs, and modifies {get,put}_user helpers temporarily to permit access.
This will catch kernel bugs where user memory is accessed directly. 'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
[will: use ALTERNATIVE in asm and tidy up pan_enable check]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
---
 arch/arm64/Kconfig                   | 14 ++++++++++++++
 arch/arm64/include/asm/cpufeature.h  |  3 ++-
 arch/arm64/include/asm/futex.h       |  8 ++++++++
 arch/arm64/include/asm/processor.h   |  2 ++
 arch/arm64/include/asm/sysreg.h      |  8 ++++++++
 arch/arm64/include/asm/uaccess.h     | 11 +++++++++++
 arch/arm64/include/uapi/asm/ptrace.h |  1 +
 arch/arm64/kernel/armv8_deprecated.c |  8 +++++++-
 arch/arm64/kernel/cpufeature.c       | 20 ++++++++++++++++++++
 arch/arm64/lib/clear_user.S          |  8 ++++++++
 arch/arm64/lib/copy_from_user.S      |  8 ++++++++
 arch/arm64/lib/copy_in_user.S        |  8 ++++++++
 arch/arm64/lib/copy_to_user.S        |  8 ++++++++
 arch/arm64/mm/fault.c                | 16 ++++++++++++++++
 14 files changed, 121 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7796af4..ca31481 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -589,6 +589,20 @@ config FORCE_MAX_ZONEORDER
 	default "14" if (ARM64_64K_PAGES && TRANSPARENT_HUGEPAGE)
 	default "11"

+config ARM64_PAN
+	bool "Enable support for Privileged Access Never (PAN)"
+	default y
+	help
+	 Privileged Access Never (PAN; part of the ARMv8.1 Extensions)
+	 prevents the kernel or hypervisor from accessing user-space (EL0)
+	 memory directly.
+
+	 Choosing this option will cause any unprotected (not using
+	 copy_to_user et al) memory access to fail with a permission fault.
+
+	 The feature is detected at runtime, and will remain as a 'nop'
+	 instruction if the cpu does not implement the feature.
+
 menuconfig ARMV8_DEPRECATED
 	bool "Emulate deprecated/obsolete ARMv8 instructions"
 	depends on COMPAT
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index f595f7d..d71140b 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -25,8 +25,9 @@
 #define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE	1
 #define ARM64_WORKAROUND_845719			2
 #define ARM64_HAS_SYSREG_GIC_CPUIF		3
+#define ARM64_HAS_PAN				4

-#define ARM64_NCAPS				4
+#define ARM64_NCAPS				5

 #ifndef __ASSEMBLY__
diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
index 5f750dc..6673462 100644
--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -20,10 +20,16 @@

 #include <linux/futex.h>
 #include <linux/uaccess.h>
+
+#include <asm/alternative.h>
+#include <asm/cpufeature.h>
 #include <asm/errno.h>
+#include <asm/sysreg.h>

 #define __futex_atomic_op(insn, ret, oldval, uaddr, tmp, oparg)	\
 	asm volatile(							\
+	ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,		\
+		    CONFIG_ARM64_PAN)					\
 "1:	ldxr	%w1, %2\n"						\
 	insn "\n"							\
 "2:	stlxr	%w3, %w0, %2\n"						\
@@ -39,6 +45,8 @@
 "	.align	3\n"							\
 "	.quad	1b, 4b, 2b, 4b\n"					\
 "	.popsection\n"							\
+	ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,		\
+		    CONFIG_ARM64_PAN)					\
 	: "=&r" (ret), "=&r" (oldval), "+Q" (*uaddr), "=&r" (tmp)	\
 	: "r" (oparg), "Ir" (-EFAULT)					\
 	: "memory")
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index d2c37a1..6c2f572 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -169,4 +169,6 @@ static inline void spin_lock_prefetch(const void *x)
 #endif

+void cpu_enable_pan(void);
+
 #endif /* __ASM_PROCESSOR_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 5295bcb..a7f3d4b 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -20,6 +20,8 @@
 #ifndef __ASM_SYSREG_H
 #define __ASM_SYSREG_H

+#include <asm/opcodes.h>
+
 #define SCTLR_EL1_CP15BEN	(0x1 << 5)
 #define SCTLR_EL1_SED		(0x1 << 8)
@@ -36,6 +38,12 @@
 #define sys_reg(op0, op1, crn, crm, op2) \
 	((((op0)&3)<<19)|((op1)<<16)|((crn)<<12)|((crm)<<8)|((op2)<<5))

+#define REG_PSTATE_PAN_IMM		sys_reg(0, 0, 4, 0, 4)
+#define SCTLR_EL1_SPAN			(1 << 23)
+
+#define SET_PSTATE_PAN(x) __inst_arm(0xd5000000 | REG_PSTATE_PAN_IMM |\
+				     (!!x)<<8 | 0x1f)
+
 #ifdef __ASSEMBLY__
	.irp	num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 07e1ba44..b2ede967 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -24,7 +24,10 @@
 #include <linux/string.h>
 #include <linux/thread_info.h>

+#include <asm/alternative.h>
+#include <asm/cpufeature.h>
 #include <asm/ptrace.h>
+#include <asm/sysreg.h>
 #include <asm/errno.h>
 #include <asm/memory.h>
 #include <asm/compiler.h>
@@ -131,6 +134,8 @@ static inline void set_fs(mm_segment_t fs)
 do {									\
 	unsigned long __gu_val;						\
 	__chk_user_ptr(ptr);						\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,	\
+			CONFIG_ARM64_PAN));				\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
 		__get_user_asm("ldrb", "%w", __gu_val, (ptr), (err));	\
@@ -148,6 +153,8 @@ do {									\
 		BUILD_BUG();						\
 	}								\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,	\
+			CONFIG_ARM64_PAN));				\
 } while (0)

 #define __get_user(x, ptr)						\
@@ -194,6 +201,8 @@ do {									\
 do {									\
 	__typeof__(*(ptr)) __pu_val = (x);				\
 	__chk_user_ptr(ptr);						\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,	\
+			CONFIG_ARM64_PAN));				\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
 		__put_user_asm("strb", "%w", __pu_val, (ptr), (err));	\
@@ -210,6 +219,8 @@ do {									\
 	default:							\
 		BUILD_BUG();						\
 	}								\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,	\
+			CONFIG_ARM64_PAN));				\
 } while (0)
 #define __put_user(x, ptr)						\
diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index 6913643..208db3d 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -44,6 +44,7 @@
 #define PSR_I_BIT	0x00000080
 #define PSR_A_BIT	0x00000100
 #define PSR_D_BIT	0x00000200
+#define PSR_PAN_BIT	0x00400000
 #define PSR_Q_BIT	0x08000000
 #define PSR_V_BIT	0x10000000
 #define PSR_C_BIT	0x20000000
diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
index 78d56bf..bcee7ab 100644
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@ -14,6 +14,8 @@
 #include <linux/slab.h>
 #include <linux/sysctl.h>

+#include <asm/alternative.h>
+#include <asm/cpufeature.h>
 #include <asm/insn.h>
 #include <asm/opcodes.h>
 #include <asm/sysreg.h>
@@ -280,6 +282,8 @@ static void register_insn_emulation_sysctl(struct ctl_table *table)
  */
 #define __user_swpX_asm(data, addr, res, temp, B)		\
 	__asm__ __volatile__(					\
+	ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,	\
+		    CONFIG_ARM64_PAN)				\
 	"	mov		%w2, %w1\n"			\
 	"0:	ldxr"B"		%w1, [%3]\n"			\
 	"1:	stxr"B"		%w0, %w2, [%3]\n"		\
@@ -295,7 +299,9 @@ static void register_insn_emulation_sysctl(struct ctl_table *table)
 	"	.align		3\n"				\
 	"	.quad		0b, 3b\n"			\
 	"	.quad		1b, 3b\n"			\
-	"	.popsection"					\
+	"	.popsection\n"					\
+	ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,	\
+		    CONFIG_ARM64_PAN)				\
 	: "=&r" (res), "+r" (data), "=&r" (temp)		\
 	: "r" (addr), "i" (-EAGAIN), "i" (-EFAULT)		\
 	: "memory")
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 74fd0f7..978fa16 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -21,6 +21,7 @@
 #include <linux/types.h>
 #include <asm/cpu.h>
 #include <asm/cpufeature.h>
+#include <asm/processor.h>

 static bool
 feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
@@ -39,6 +40,15 @@ has_id_aa64pfr0_feature(const struct arm64_cpu_capabilities *entry)
 	return feature_matches(val, entry);
 }

+static bool __maybe_unused
+has_id_aa64mmfr1_feature(const struct arm64_cpu_capabilities *entry)
+{
+	u64 val;
+
+	val = read_cpuid(id_aa64mmfr1_el1);
+	return feature_matches(val, entry);
+}
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "GIC system register CPU interface",
@@ -47,6 +57,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = 24,
 		.min_field_value = 1,
 	},
+#ifdef CONFIG_ARM64_PAN
+	{
+		.desc = "Privileged Access Never",
+		.capability = ARM64_HAS_PAN,
+		.matches = has_id_aa64mmfr1_feature,
+		.field_pos = 20,
+		.min_field_value = 1,
+		.enable = cpu_enable_pan,
+	},
+#endif /* CONFIG_ARM64_PAN */
 	{},
 };
diff --git a/arch/arm64/lib/clear_user.S b/arch/arm64/lib/clear_user.S
index c17967f..a9723c7 100644
--- a/arch/arm64/lib/clear_user.S
+++ b/arch/arm64/lib/clear_user.S
@@ -16,7 +16,11 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 #include <linux/linkage.h>
+
+#include <asm/alternative.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>

 	.text
@@ -29,6 +33,8 @@
  * Alignment fixed up by hardware.
  */
 ENTRY(__clear_user)
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	mov	x2, x1			// save the size for fixup return
 	subs	x1, x1, #8
 	b.mi	2f
@@ -48,6 +54,8 @@ USER(9f, strh	wzr, [x0], #2	)
 	b.mi	5f
 USER(9f, strb	wzr, [x0]	)
 5:	mov	x0, #0
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	ret
 ENDPROC(__clear_user)
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 47c3fa5..1be9ef2 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -15,7 +15,11 @@
  */

 #include <linux/linkage.h>
+
+#include <asm/alternative.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>

 /*
  * Copy from user space to a kernel buffer (alignment handled by the hardware)
@@ -28,6 +32,8 @@
  *	x0 - bytes not copied
  */
 ENTRY(__copy_from_user)
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	add	x5, x1, x2			// upper user buffer boundary
 	subs	x2, x2, #16
 	b.mi	1f
@@ -56,6 +62,8 @@ USER(9f, ldrh	w3, [x1], #2	)
 USER(9f, ldrb	w3, [x1]	)
 	strb	w3, [x0]
 5:	mov	x0, #0
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	ret
 ENDPROC(__copy_from_user)
diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S
index 436bcc5..1b94661e 100644
--- a/arch/arm64/lib/copy_in_user.S
+++ b/arch/arm64/lib/copy_in_user.S
@@ -17,7 +17,11 @@
  */

 #include <linux/linkage.h>
+
+#include <asm/alternative.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>

 /*
  * Copy from user space to user space (alignment handled by the hardware)
@@ -30,6 +34,8 @@
  *	x0 - bytes not copied
  */
 ENTRY(__copy_in_user)
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	add	x5, x0, x2			// upper user buffer boundary
 	subs	x2, x2, #16
 	b.mi	1f
@@ -58,6 +64,8 @@ USER(9f, strh	w3, [x0], #2	)
 USER(9f, ldrb	w3, [x1]	)
 USER(9f, strb	w3, [x0]	)
 5:	mov	x0, #0
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	ret
 ENDPROC(__copy_in_user)
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index f5e1f52..a257b47 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -15,7 +15,11 @@
  */

 #include <linux/linkage.h>
+
+#include <asm/alternative.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>

 /*
  * Copy to user space from a kernel buffer (alignment handled by the hardware)
@@ -28,6 +32,8 @@
  *	x0 - bytes not copied
  */
 ENTRY(__copy_to_user)
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	add	x5, x0, x2			// upper user buffer boundary
 	subs	x2, x2, #16
 	b.mi	1f
@@ -56,6 +62,8 @@ USER(9f, strh	w3, [x0], #2	)
 	ldrb	w3, [x1]
 USER(9f, strb	w3, [x0]	)
 5:	mov	x0, #0
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	ret
 ENDPROC(__copy_to_user)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 96da131..2d1ceba 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -30,9 +30,11 @@
 #include <linux/highmem.h>
 #include <linux/perf_event.h>

+#include <asm/cpufeature.h>
 #include <asm/exception.h>
 #include <asm/debug-monitors.h>
 #include <asm/esr.h>
+#include <asm/sysreg.h>
 #include <asm/system_misc.h>
 #include <asm/pgtable.h>
 #include <asm/tlbflush.h>
@@ -225,6 +227,13 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 	}

 	/*
+	 * PAN bit set implies the fault happened in kernel space, but not
+	 * in the arch's user access functions.
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_PAN) && (regs->pstate & PSR_PAN_BIT))
+		goto no_context;
+
+	/*
 	 * As per x86, we may deadlock here.  However, since the kernel only
 	 * validly references user space from well defined areas of the code,
 	 * we can bug out early if this is from code which shouldn't.
@@ -530,3 +539,10 @@ asmlinkage int __exception do_debug_exception(unsigned long addr,

 	return 0;
 }
+
+#ifdef CONFIG_ARM64_PAN
+void cpu_enable_pan(void)
+{
+	config_sctlr_el1(SCTLR_EL1_SPAN, 0);
+}
+#endif /* CONFIG_ARM64_PAN */
The following changes since commit 4ff62ca06c0c0b084f585f7a2cfcf832b21d94fc:
Linux 4.1.6 (2015-08-16 20:52:51 -0700)
are available in the git repository at:
https://git.linaro.org/people/david.brown/linux-lsk pan-4.1
for you to fetch changes up to 4cf4454ff0137f000d68d5e5fcf7dd7025b51482:
arm64: kernel: Add support for Privileged Access Never (2015-09-10 15:02:08 -0700)
----------------------------------------------------------------
Daniel Thompson (1):
      arm64: alternative: Provide if/else/endif assembler macros

James Morse (6):
      arm64: kernel: Move config_sctlr_el1
      arm64: kernel: Add cpuid_feature_extract_field() for 4bit sign extension
      arm64: kernel: Add cpufeature 'enable' callback
      arm64: kernel: Add min_field_value and use '>=' for feature detection
      arm64: kernel: Add optional CONFIG_ parameter to ALTERNATIVE()
      arm64: kernel: Add support for Privileged Access Never

Marc Zyngier (3):
      arm64: alternative: Introduce feature for GICv3 CPU interface
      arm64: alternative: Merge alternative-asm.h into alternative.h
      arm64: alternative: Work around .inst assembler bugs

Suzuki K. Poulose (1):
      arm64: Generalise msr_s/mrs_s operations

Will Deacon (1):
      arm64: lib: use pair accessors for copy_*_user routines

 arch/arm64/Kconfig                       |  14 ++++
 arch/arm64/include/asm/alternative-asm.h |  29 --------
 arch/arm64/include/asm/alternative.h     | 109 +++++++++++++++++++++++++++++--
 arch/arm64/include/asm/cpufeature.h      |  17 ++++-
 arch/arm64/include/asm/cputype.h         |   3 -
 arch/arm64/include/asm/futex.h           |   8 +++
 arch/arm64/include/asm/processor.h       |   2 +
 arch/arm64/include/asm/sysreg.h          |  40 ++++++++++--
 arch/arm64/include/asm/uaccess.h         |  11 ++++
 arch/arm64/include/uapi/asm/ptrace.h     |   1 +
 arch/arm64/kernel/armv8_deprecated.c     |  19 +++---
 arch/arm64/kernel/cpufeature.c           |  50 ++++++++++++++
 arch/arm64/kernel/entry.S                |   2 +-
 arch/arm64/lib/clear_user.S              |   8 +++
 arch/arm64/lib/copy_from_user.S          |  25 +++++--
 arch/arm64/lib/copy_in_user.S            |  25 +++++--
 arch/arm64/lib/copy_to_user.S            |  25 +++++--
 arch/arm64/mm/cache.S                    |   2 +-
 arch/arm64/mm/fault.c                    |  16 +++++
 19 files changed, 333 insertions(+), 73 deletions(-)
 delete mode 100644 arch/arm64/include/asm/alternative-asm.h
Hi David,
I found your git tree, but there is no pan-4.1 branch, nor the 4cf4454ff01 commit.
The 3.18 branch has the same issue. Could you double-check the git tree?
Regards,
Alex
On 12/05/2015 02:08 AM, David Brown wrote:
https://git.linaro.org/people/david.brown/linux-lsk pan-4.1
for you to fetch changes up to 4cf4454ff0137f000d68d5e5fcf7dd7025b51482:
On Mon, Dec 07, 2015 at 11:11:19AM +0800, Alex Shi wrote:
I found your git tree, but there is no pan-4.1 branch, nor the 4cf4454ff01 commit.
The 3.18 branch has the same issue. Could you double-check the git tree?
How weird:
% git ls-remote https://git.linaro.org/people/david.brown/linux-lsk pan-4.1
4cf4454ff0137f000d68d5e5fcf7dd7025b51482	refs/remotes/davidb/pan-4.1

% git ls-remote https://git.linaro.org/people/david.brown/linux-lsk pan-3.18
20af540051e1c1f7a7d9b9e4613c1ecc53ded4a6	refs/remotes/davidb/pan-3.18
Does this work for you? If it doesn't I wonder if we have some kind of mirroring problem with the linaro git server.
I can push them somewhere else if you'd like, but it'd be nice to find any problems with the Linaro git server, since I'm sure it'll come up again.
David
On 7 December 2015 at 16:10, David Brown david.brown@linaro.org wrote:
On Mon, Dec 07, 2015 at 11:11:19AM +0800, Alex Shi wrote:
I found your git tree, but there is no pan-4.1 branch, nor the 4cf4454ff01 commit.
The 3.18 branch has the same issue. Could you double-check the git tree?
How weird:
% git ls-remote https://git.linaro.org/people/david.brown/linux-lsk pan-4.1
4cf4454ff0137f000d68d5e5fcf7dd7025b51482	refs/remotes/davidb/pan-4.1

$ git ls-remote https://git.linaro.org/people/david.brown/linux-lsk pan-4.1
4cf4454ff0137f000d68d5e5fcf7dd7025b51482	refs/remotes/davidb/pan-4.1
$ git fetch https://git.linaro.org/people/david.brown/linux-lsk pan-4.1
fatal: Couldn't find remote ref pan-4.1
fetch is what people actually need to be able to get the code. You can also see this being broken if you look in gitweb - there's no reference to PAN branches.
It looks like what you've done here is pushed to a remote object called "refs/remotes/davidb/pan-4.1" not "refs/heads/davidb/pan-4.1" which is probably what you meant.
On Mon, Dec 07, 2015 at 04:30:23PM +0000, Mark Brown wrote:
% git ls-remote https://git.linaro.org/people/david.brown/linux-lsk pan-4.1
4cf4454ff0137f000d68d5e5fcf7dd7025b51482	refs/remotes/davidb/pan-4.1

$ git ls-remote https://git.linaro.org/people/david.brown/linux-lsk pan-4.1
4cf4454ff0137f000d68d5e5fcf7dd7025b51482	refs/remotes/davidb/pan-4.1
$ git fetch https://git.linaro.org/people/david.brown/linux-lsk pan-4.1
fatal: Couldn't find remote ref pan-4.1

fetch is what people actually need to be able to get the code. You can also see this being broken if you look in gitweb - there's no reference to PAN branches.

It looks like what you've done here is pushed to a remote object called "refs/remotes/davidb/pan-4.1" not "refs/heads/davidb/pan-4.1" which is probably what you meant.
Thanks. That was an awful silly thing of me to do. I've pushed to the proper heads now.
David
Thanks David and Mark!
I merged the pan-4.1 branch into LSK; the following is the conflict resolution. Could you review it?
commit 8fe55b915599dc7ad78f63a3b93dc1753f178dca
Merge: 6844488 4cf4454
Author: Alex Shi <alex.shi@linaro.org>
Date:   Tue Dec 8 14:33:14 2015 +0800
Merge branch 'davidb/pan-4.1' of https://git.linaro.org/people/david.brown/linux-lsk into linux-linaro-lsk-v4.1
Conflicts solution:
/* compatible with 589cb22 arm64: compat: fix stxr failure case in SWP emulation */
	arch/arm64/kernel/armv8_deprecated.c
diff --cc arch/arm64/kernel/armv8_deprecated.c
index 7ac3920,bcee7ab..937f5e5
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@@ -279,24 -282,26 +282,28 @@@ static void register_insn_emulation_sys
   */
  #define __user_swpX_asm(data, addr, res, temp, B)		\
  	__asm__ __volatile__(					\
+ 	ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,	\
+ 		    CONFIG_ARM64_PAN)				\
 -	"	mov		%w2, %w1\n"			\
 -	"0:	ldxr"B"		%w1, [%3]\n"			\
 -	"1:	stxr"B"		%w0, %w2, [%3]\n"		\
 +	"0:	ldxr"B"		%w2, [%3]\n"			\
 +	"1:	stxr"B"		%w0, %w1, [%3]\n"		\
  	"	cbz		%w0, 2f\n"			\
  	"	mov		%w0, %w4\n"			\
 +	"	b		3f\n"				\
  	"2:\n"							\
 +	"	mov		%w1, %w2\n"			\
 +	"3:\n"							\
  	"	.pushsection	 .fixup,\"ax\"\n"		\
  	"	.align		2\n"				\
 -	"3:	mov		%w0, %w5\n"			\
 -	"	b		2b\n"				\
 +	"4:	mov		%w0, %w5\n"			\
 +	"	b		3b\n"				\
  	"	.popsection"					\
  	"	.pushsection	 __ex_table,\"a\"\n"		\
  	"	.align		3\n"				\
 -	"	.quad		0b, 3b\n"			\
 -	"	.quad		1b, 3b\n"			\
 +	"	.quad		0b, 4b\n"			\
 +	"	.quad		1b, 4b\n"			\
  	"	.popsection\n"					\
+ 	ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,	\
+ 		    CONFIG_ARM64_PAN)				\
  	: "=&r" (res), "+r" (data), "=&r" (temp)		\
  	: "r" (addr), "i" (-EAGAIN), "i" (-EFAULT)		\
  	: "memory")
On Tue, Dec 08, 2015 at 02:42:05PM +0800, Alex Shi wrote:
Thanks David and Mark!
I merged the pan-4.1 branch into LSK; the following is the conflict resolution. Could you review it?
I think that's right, but let me just rebase my patch on v4.1.13, and it should merge without conflict. I'll send another pull request later today, after I do some additional testing.
David
v2: Rebase onto 4.1.13 (to resolve conflicts between 4.1.6 and 4.1.13)
The following changes since commit 1f2ce4a2e7aea3a2123b17aff62a80553df31e21:
Linux 4.1.13 (2015-11-09 14:34:10 -0800)
are available in the git repository at:
https://git.linaro.org/people/david.brown/linux-lsk pan-4.1.13
for you to fetch changes up to f45972d0017cff35d576e10f1e276f9f3991bea6:
arm64: kernel: Add support for Privileged Access Never (2015-12-08 14:01:12 -0800)
----------------------------------------------------------------
Daniel Thompson (1):
      arm64: alternative: Provide if/else/endif assembler macros

James Morse (6):
      arm64: kernel: Move config_sctlr_el1
      arm64: kernel: Add cpuid_feature_extract_field() for 4bit sign extension
      arm64: kernel: Add cpufeature 'enable' callback
      arm64: kernel: Add min_field_value and use '>=' for feature detection
      arm64: kernel: Add optional CONFIG_ parameter to ALTERNATIVE()
      arm64: kernel: Add support for Privileged Access Never

Marc Zyngier (3):
      arm64: alternative: Introduce feature for GICv3 CPU interface
      arm64: alternative: Merge alternative-asm.h into alternative.h
      arm64: alternative: Work around .inst assembler bugs

Suzuki K. Poulose (1):
      arm64: Generalise msr_s/mrs_s operations

Will Deacon (1):
      arm64: lib: use pair accessors for copy_*_user routines

 arch/arm64/Kconfig                       |  14 ++++
 arch/arm64/include/asm/alternative-asm.h |  29 --------
 arch/arm64/include/asm/alternative.h     | 109 +++++++++++++++++++++++++++++--
 arch/arm64/include/asm/cpufeature.h      |  17 ++++-
 arch/arm64/include/asm/cputype.h         |   3 -
 arch/arm64/include/asm/futex.h           |   8 +++
 arch/arm64/include/asm/processor.h       |   2 +
 arch/arm64/include/asm/sysreg.h          |  40 ++++++++++--
 arch/arm64/include/asm/uaccess.h         |  11 ++++
 arch/arm64/include/uapi/asm/ptrace.h     |   1 +
 arch/arm64/kernel/armv8_deprecated.c     |  17 ++---
 arch/arm64/kernel/cpufeature.c           |  50 ++++++++++++++
 arch/arm64/kernel/entry.S                |   2 +-
 arch/arm64/lib/clear_user.S              |   8 +++
 arch/arm64/lib/copy_from_user.S          |  25 +++++--
 arch/arm64/lib/copy_in_user.S            |  25 +++++--
 arch/arm64/lib/copy_to_user.S            |  25 +++++--
 arch/arm64/mm/cache.S                    |   2 +-
 arch/arm64/mm/fault.c                    |  16 +++++
 19 files changed, 332 insertions(+), 72 deletions(-)
 delete mode 100644 arch/arm64/include/asm/alternative-asm.h
On 12/09/2015 06:26 AM, David Brown wrote:
v2: Rebase onto 4.1.13 (to resolve conflicts between 4.1.6 and 4.1.13)
The following changes since commit 1f2ce4a2e7aea3a2123b17aff62a80553df31e21:
Linux 4.1.13 (2015-11-09 14:34:10 -0800)
are available in the git repository at:
https://git.linaro.org/people/david.brown/linux-lsk pan-4.1.13
for you to fetch changes up to f45972d0017cff35d576e10f1e276f9f3991bea6:
arm64: kernel: Add support for Privileged Access Never (2015-12-08 14:01:12 -0800)
Thanks David. Your branch was pushed into LSK as v4.1/topic/PAN, and lsk-v4.1-test for testing.
In fact, your conflict resolution is the same as mine. :)
Thanks! Alex
On 12/09/2015 11:05 AM, Alex Shi wrote:
On 12/09/2015 06:26 AM, David Brown wrote:
v2: Rebase onto 4.1.13 (to resolve conflicts between 4.1.6 and 4.1.13)
The following changes since commit 1f2ce4a2e7aea3a2123b17aff62a80553df31e21:
Linux 4.1.13 (2015-11-09 14:34:10 -0800)
are available in the git repository at:
https://git.linaro.org/people/david.brown/linux-lsk pan-4.1.13
for you to fetch changes up to f45972d0017cff35d576e10f1e276f9f3991bea6:
arm64: kernel: Add support for Privileged Access Never (2015-12-08 14:01:12 -0800)
Thanks David. Your branch was pushed into LSK as v4.1/topic/PAN, and lsk-v4.1-test for testing.
the boot testing for this testing branch is at http://kernelci.org/boot/all/job/lsk/kernel/v4.1.13-288-gb1638dab1fe6/
In fact, your conflict resolution is the same as mine. :)
Thanks! Alex
cc to Chase,
Chase, could you track the following testing and give us a report if there are any extra boot failures?
thanks Alex
On 12/09/2015 03:55 PM, Alex Shi wrote:
Thanks David. Your branch was pushed into LSK as v4.1/topic/PAN, and lsk-v4.1-test for testing.
the boot testing for this testing branch is at http://kernelci.org/boot/all/job/lsk/kernel/v4.1.13-288-gb1638dab1fe6/
On Fri, Dec 04, 2015 at 10:08:27AM -0800, David Brown wrote:
arm64: kernel: Add support for Privileged Access Never (2015-09-10 15:02:08 -0700)
A quick note on how to test this. With recent kernels, the Linux Kernel Dump Test Tool Module is able to provoke the kinds of accesses that PAN detects. Enable CONFIG_LKDTM in your kernel, and
# echo ACCESS_USERSPACE > /sys/kernel/debug/provoke-crash/DIRECT
will generate an oops if the PAN support is working. Otherwise, it will just print the values it read.
David
On 12/09/2015 06:36 AM, David Brown wrote:
arm64: kernel: Add support for Privileged Access Never (2015-09-10 15:02:08 -0700)
A quick note on how to test this. With recent kernels, the Linux Kernel Dump Test Tool Module is able to provoke the kinds of accesses that PAN detects. Enable CONFIG_LKDTM in your kernel, and
# echo ACCESS_USERSPACE > /sys/kernel/debug/provoke-crash/DIRECT
will generate an oops if the PAN support is working. Otherwise, it will just print the values it read.
Thanks a lot for showing us the testing method!
linaro-kernel@lists.linaro.org