From: Pu Lehui <pulehui@huawei.com>
This backport mainly follows the merge tag [0]; the corresponding patches are:
arm64: proton-pack: Add new CPUs 'k' values for branch mitigation
arm64: bpf: Only mitigate cBPF programs loaded by unprivileged users
arm64: bpf: Add BHB mitigation to the epilogue for cBPF programs
arm64: proton-pack: Expose whether the branchy loop k value
arm64: proton-pack: Expose whether the platform is mitigated by firmware
arm64: insn: Add support for encoding DSB
Link: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?h... [0]
Hou Tao (2):
  arm64: move AARCH64_BREAK_FAULT into insn-def.h
  arm64: insn: add encoders for atomic operations
James Morse (6):
  arm64: insn: Add support for encoding DSB
  arm64: proton-pack: Expose whether the platform is mitigated by firmware
  arm64: proton-pack: Expose whether the branchy loop k value
  arm64: bpf: Add BHB mitigation to the epilogue for cBPF programs
  arm64: bpf: Only mitigate cBPF programs loaded by unprivileged users
  arm64: proton-pack: Add new CPUs 'k' values for branch mitigation
Liu Song (1):
  arm64: spectre: increase parameters that can be used to turn off bhb
    mitigation individually
 .../admin-guide/kernel-parameters.txt   |   5 +
 arch/arm64/include/asm/cputype.h        |   2 +
 arch/arm64/include/asm/debug-monitors.h |  12 --
 arch/arm64/include/asm/insn-def.h       |  14 ++
 arch/arm64/include/asm/insn.h           |  81 ++++++-
 arch/arm64/include/asm/spectre.h        |   3 +
 arch/arm64/kernel/proton-pack.c         |  21 +-
 arch/arm64/lib/insn.c                   | 199 ++++++++++++++++--
 arch/arm64/net/bpf_jit.h                |  11 +-
 arch/arm64/net/bpf_jit_comp.c           |  58 ++++-
 10 files changed, 366 insertions(+), 40 deletions(-)
From: Hou Tao <houtao1@huawei.com>
[ Upstream commit 97e58e395e9c074fd096dad13c54e9f4112cf71d ]
If CONFIG_ARM64_LSE_ATOMICS is off, encoders for LSE-related instructions can return AARCH64_BREAK_FAULT directly in insn.h. In order to access AARCH64_BREAK_FAULT in insn.h, we cannot include debug-monitors.h there, because debug-monitors.h already depends on insn.h, so just move AARCH64_BREAK_FAULT into insn-def.h.
It will be used by the following patch to eliminate unnecessary LSE-related encoders when CONFIG_ARM64_LSE_ATOMICS is off.
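For illustration, this is the pattern the move enables — a minimal sketch, not code from this patch (the real stubs arrive in the following patch):

	#include <asm/insn-def.h>

	/* hypothetical encoder stub for when LSE atomics are compiled out */
	#ifndef CONFIG_ARM64_LSE_ATOMICS
	static inline u32 gen_some_lse_insn(void)
	{
		/* executing this BRK encoding faults, catching misuse at runtime */
		return AARCH64_BREAK_FAULT;
	}
	#endif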
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20220217072232.1186625-2-houtao1@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Pu Lehui <pulehui@huawei.com>
---
 arch/arm64/include/asm/debug-monitors.h | 12 ------------
 arch/arm64/include/asm/insn-def.h       | 14 ++++++++++++++
 2 files changed, 14 insertions(+), 12 deletions(-)
diff --git a/arch/arm64/include/asm/debug-monitors.h b/arch/arm64/include/asm/debug-monitors.h
index 8de1a840ad97..13d437bcbf58 100644
--- a/arch/arm64/include/asm/debug-monitors.h
+++ b/arch/arm64/include/asm/debug-monitors.h
@@ -34,18 +34,6 @@
  */
 #define BREAK_INSTR_SIZE	AARCH64_INSN_SIZE
 
-/*
- * BRK instruction encoding
- * The #imm16 value should be placed at bits[20:5] within BRK ins
- */
-#define AARCH64_BREAK_MON	0xd4200000
-
-/*
- * BRK instruction for provoking a fault on purpose
- * Unlike kgdb, #imm16 value with unallocated handler is used for faulting.
- */
-#define AARCH64_BREAK_FAULT	(AARCH64_BREAK_MON | (FAULT_BRK_IMM << 5))
-
 #define AARCH64_BREAK_KGDB_DYN_DBG	\
 	(AARCH64_BREAK_MON | (KGDB_DYN_DBG_BRK_IMM << 5))
diff --git a/arch/arm64/include/asm/insn-def.h b/arch/arm64/include/asm/insn-def.h
index 2c075f615c6a..1a7d0d483698 100644
--- a/arch/arm64/include/asm/insn-def.h
+++ b/arch/arm64/include/asm/insn-def.h
@@ -3,7 +3,21 @@
 #ifndef __ASM_INSN_DEF_H
 #define __ASM_INSN_DEF_H
 
+#include <asm/brk-imm.h>
+
 /* A64 instructions are always 32 bits. */
 #define	AARCH64_INSN_SIZE		4
 
+/*
+ * BRK instruction encoding
+ * The #imm16 value should be placed at bits[20:5] within BRK ins
+ */
+#define AARCH64_BREAK_MON	0xd4200000
+
+/*
+ * BRK instruction for provoking a fault on purpose
+ * Unlike kgdb, #imm16 value with unallocated handler is used for faulting.
+ */
+#define AARCH64_BREAK_FAULT	(AARCH64_BREAK_MON | (FAULT_BRK_IMM << 5))
+
 #endif /* __ASM_INSN_DEF_H */
[ Sasha's backport helper bot ]
Hi,
✅ All tests passed successfully. No issues detected. No action required from the submitter.
The upstream commit SHA1 provided is correct: 97e58e395e9c074fd096dad13c54e9f4112cf71d
WARNING: Author mismatch between patch and upstream commit:
Backport author: Pu Lehui <pulehui@huaweicloud.com>
Commit author: Hou Tao <houtao1@huawei.com>
Status in newer kernel trees:
6.15.y | Present (exact SHA1)
6.14.y | Present (exact SHA1)
6.12.y | Present (exact SHA1)
6.6.y  | Present (exact SHA1)
6.1.y  | Present (exact SHA1)
Note: The patch differs from the upstream commit:
---
1:  97e58e395e9c0 ! 1:  a01fa5ebdbff5 arm64: move AARCH64_BREAK_FAULT into insn-def.h
    @@ Metadata
     ## Commit message ##
        arm64: move AARCH64_BREAK_FAULT into insn-def.h
    
    +    [ Upstream commit 97e58e395e9c074fd096dad13c54e9f4112cf71d ]
    +
        If CONFIG_ARM64_LSE_ATOMICS is off, encoders for LSE-related
        instructions can return AARCH64_BREAK_FAULT directly in insn.h. In
        order to access AARCH64_BREAK_FAULT in insn.h, we can not include
        debug-monitors.h in
    @@ Commit message
        Signed-off-by: Hou Tao <houtao1@huawei.com>
        Link: https://lore.kernel.org/r/20220217072232.1186625-2-houtao1@huawei.com
        Signed-off-by: Will Deacon <will@kernel.org>
    +    Signed-off-by: Pu Lehui <pulehui@huawei.com>
    
     ## arch/arm64/include/asm/debug-monitors.h ##
    @@
---
Results of testing on various branches:
| Branch                    | Patch Apply | Build Test |
|---------------------------|-------------|------------|
| stable/linux-5.15.y       | Success     | Success    |
From: Hou Tao <houtao1@huawei.com>
[ Upstream commit fa1114d9eba5087ba5e81aab4c56f546995e6cd3 ]
This is a preparation patch for eBPF atomic support under arm64. eBPF needs to support atomic[64]_fetch_add, atomic[64]_[fetch_]{and,or,xor} and atomic[64]_{xchg|cmpxchg}. The ordering semantics of eBPF atomics are the same as the implementations in the Linux kernel.
Add three helpers to support LDCLR/LDEOR/LDSET/SWP, CAS and DMB instructions. STADD/STCLR/STEOR/STSET are simply encoded as aliases for LDADD/LDCLR/LDEOR/LDSET with XZR as the destination register, so no extra helper is added. atomic_fetch_add() and other atomic ops need support for the STLXR instruction, so extend enum aarch64_insn_ldst_type to cover it.
The LDADD/LDCLR/LDEOR/LDSET/SWP and CAS instructions are only available when LSE atomics are enabled, so these newly-added helpers just return AARCH64_BREAK_FAULT directly if CONFIG_ARM64_LSE_ATOMICS is disabled.
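As a usage sketch (illustrative only, not part of the patch), encoding "ldaddal w1, w2, [x0]" with the new helper would look roughly like:

	/* result (Rt) = w2, address (Rn) = x0, value (Rs) = w1 */
	u32 insn = aarch64_insn_gen_atomic_ld_op(AARCH64_INSN_REG_2,
						 AARCH64_INSN_REG_0,
						 AARCH64_INSN_REG_1,
						 AARCH64_INSN_SIZE_32,
						 AARCH64_INSN_MEM_ATOMIC_ADD,
						 AARCH64_INSN_MEM_ORDER_ACQREL);

AARCH64_INSN_MEM_ORDER_ACQREL sets both the acquire and release bits, giving the "AL" variant.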
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20220217072232.1186625-3-houtao1@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Pu Lehui <pulehui@huawei.com>
---
 arch/arm64/include/asm/insn.h |  80 +++++++++++++--
 arch/arm64/lib/insn.c         | 185 +++++++++++++++++++++++++++++++---
 arch/arm64/net/bpf_jit.h      |  11 +-
 3 files changed, 253 insertions(+), 23 deletions(-)
diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index b02f0c328c8e..1e5760d567ae 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -206,7 +206,9 @@ enum aarch64_insn_ldst_type {
 	AARCH64_INSN_LDST_LOAD_PAIR_POST_INDEX,
 	AARCH64_INSN_LDST_STORE_PAIR_POST_INDEX,
 	AARCH64_INSN_LDST_LOAD_EX,
+	AARCH64_INSN_LDST_LOAD_ACQ_EX,
 	AARCH64_INSN_LDST_STORE_EX,
+	AARCH64_INSN_LDST_STORE_REL_EX,
 };
 
 enum aarch64_insn_adsb_type {
@@ -281,6 +283,36 @@ enum aarch64_insn_adr_type {
 	AARCH64_INSN_ADR_TYPE_ADR,
 };
 
+enum aarch64_insn_mem_atomic_op {
+	AARCH64_INSN_MEM_ATOMIC_ADD,
+	AARCH64_INSN_MEM_ATOMIC_CLR,
+	AARCH64_INSN_MEM_ATOMIC_EOR,
+	AARCH64_INSN_MEM_ATOMIC_SET,
+	AARCH64_INSN_MEM_ATOMIC_SWP,
+};
+
+enum aarch64_insn_mem_order_type {
+	AARCH64_INSN_MEM_ORDER_NONE,
+	AARCH64_INSN_MEM_ORDER_ACQ,
+	AARCH64_INSN_MEM_ORDER_REL,
+	AARCH64_INSN_MEM_ORDER_ACQREL,
+};
+
+enum aarch64_insn_mb_type {
+	AARCH64_INSN_MB_SY,
+	AARCH64_INSN_MB_ST,
+	AARCH64_INSN_MB_LD,
+	AARCH64_INSN_MB_ISH,
+	AARCH64_INSN_MB_ISHST,
+	AARCH64_INSN_MB_ISHLD,
+	AARCH64_INSN_MB_NSH,
+	AARCH64_INSN_MB_NSHST,
+	AARCH64_INSN_MB_NSHLD,
+	AARCH64_INSN_MB_OSH,
+	AARCH64_INSN_MB_OSHST,
+	AARCH64_INSN_MB_OSHLD,
+};
+
 #define	__AARCH64_INSN_FUNCS(abbr, mask, val)	\
 static __always_inline bool aarch64_insn_is_##abbr(u32 code) \
 { \
@@ -304,6 +336,11 @@ __AARCH64_INSN_FUNCS(store_post,	0x3FE00C00, 0x38000400)
 __AARCH64_INSN_FUNCS(load_post,	0x3FE00C00, 0x38400400)
 __AARCH64_INSN_FUNCS(str_reg,	0x3FE0EC00, 0x38206800)
 __AARCH64_INSN_FUNCS(ldadd,	0x3F20FC00, 0x38200000)
+__AARCH64_INSN_FUNCS(ldclr,	0x3F20FC00, 0x38201000)
+__AARCH64_INSN_FUNCS(ldeor,	0x3F20FC00, 0x38202000)
+__AARCH64_INSN_FUNCS(ldset,	0x3F20FC00, 0x38203000)
+__AARCH64_INSN_FUNCS(swp,	0x3F20FC00, 0x38208000)
+__AARCH64_INSN_FUNCS(cas,	0x3FA07C00, 0x08A07C00)
 __AARCH64_INSN_FUNCS(ldr_reg,	0x3FE0EC00, 0x38606800)
 __AARCH64_INSN_FUNCS(ldr_lit,	0xBF000000, 0x18000000)
 __AARCH64_INSN_FUNCS(ldrsw_lit,	0xFF000000, 0x98000000)
@@ -475,13 +512,6 @@ u32 aarch64_insn_gen_load_store_ex(enum aarch64_insn_register reg,
 				   enum aarch64_insn_register state,
 				   enum aarch64_insn_size_type size,
 				   enum aarch64_insn_ldst_type type);
-u32 aarch64_insn_gen_ldadd(enum aarch64_insn_register result,
-			   enum aarch64_insn_register address,
-			   enum aarch64_insn_register value,
-			   enum aarch64_insn_size_type size);
-u32 aarch64_insn_gen_stadd(enum aarch64_insn_register address,
-			   enum aarch64_insn_register value,
-			   enum aarch64_insn_size_type size);
 u32 aarch64_insn_gen_add_sub_imm(enum aarch64_insn_register dst,
 				 enum aarch64_insn_register src,
 				 int imm, enum aarch64_insn_variant variant,
@@ -542,6 +572,42 @@ u32 aarch64_insn_gen_prefetch(enum aarch64_insn_register base,
 			      enum aarch64_insn_prfm_type type,
 			      enum aarch64_insn_prfm_target target,
 			      enum aarch64_insn_prfm_policy policy);
+#ifdef CONFIG_ARM64_LSE_ATOMICS
+u32 aarch64_insn_gen_atomic_ld_op(enum aarch64_insn_register result,
+				  enum aarch64_insn_register address,
+				  enum aarch64_insn_register value,
+				  enum aarch64_insn_size_type size,
+				  enum aarch64_insn_mem_atomic_op op,
+				  enum aarch64_insn_mem_order_type order);
+u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
+			 enum aarch64_insn_register address,
+			 enum aarch64_insn_register value,
+			 enum aarch64_insn_size_type size,
+			 enum aarch64_insn_mem_order_type order);
+#else
+static inline
+u32 aarch64_insn_gen_atomic_ld_op(enum aarch64_insn_register result,
+				  enum aarch64_insn_register address,
+				  enum aarch64_insn_register value,
+				  enum aarch64_insn_size_type size,
+				  enum aarch64_insn_mem_atomic_op op,
+				  enum aarch64_insn_mem_order_type order)
+{
+	return AARCH64_BREAK_FAULT;
+}
+
+static inline
+u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
+			 enum aarch64_insn_register address,
+			 enum aarch64_insn_register value,
+			 enum aarch64_insn_size_type size,
+			 enum aarch64_insn_mem_order_type order)
+{
+	return AARCH64_BREAK_FAULT;
+}
+#endif
+u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type);
+
 s32 aarch64_get_branch_offset(u32 insn);
 u32 aarch64_set_branch_offset(u32 insn, s32 offset);
diff --git a/arch/arm64/lib/insn.c b/arch/arm64/lib/insn.c
index fccfe363e567..bd119fde8504 100644
--- a/arch/arm64/lib/insn.c
+++ b/arch/arm64/lib/insn.c
@@ -578,10 +578,16 @@ u32 aarch64_insn_gen_load_store_ex(enum aarch64_insn_register reg,
 
 	switch (type) {
 	case AARCH64_INSN_LDST_LOAD_EX:
+	case AARCH64_INSN_LDST_LOAD_ACQ_EX:
 		insn = aarch64_insn_get_load_ex_value();
+		if (type == AARCH64_INSN_LDST_LOAD_ACQ_EX)
+			insn |= BIT(15);
 		break;
 	case AARCH64_INSN_LDST_STORE_EX:
+	case AARCH64_INSN_LDST_STORE_REL_EX:
 		insn = aarch64_insn_get_store_ex_value();
+		if (type == AARCH64_INSN_LDST_STORE_REL_EX)
+			insn |= BIT(15);
 		break;
 	default:
 		pr_err("%s: unknown load/store exclusive encoding %d\n", __func__, type);
@@ -603,12 +609,65 @@ u32 aarch64_insn_gen_load_store_ex(enum aarch64_insn_register reg,
 					    state);
 }
 
-u32 aarch64_insn_gen_ldadd(enum aarch64_insn_register result,
-			   enum aarch64_insn_register address,
-			   enum aarch64_insn_register value,
-			   enum aarch64_insn_size_type size)
+#ifdef CONFIG_ARM64_LSE_ATOMICS
+static u32 aarch64_insn_encode_ldst_order(enum aarch64_insn_mem_order_type type,
+					  u32 insn)
 {
-	u32 insn = aarch64_insn_get_ldadd_value();
+	u32 order;
+
+	switch (type) {
+	case AARCH64_INSN_MEM_ORDER_NONE:
+		order = 0;
+		break;
+	case AARCH64_INSN_MEM_ORDER_ACQ:
+		order = 2;
+		break;
+	case AARCH64_INSN_MEM_ORDER_REL:
+		order = 1;
+		break;
+	case AARCH64_INSN_MEM_ORDER_ACQREL:
+		order = 3;
+		break;
+	default:
+		pr_err("%s: unknown mem order %d\n", __func__, type);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	insn &= ~GENMASK(23, 22);
+	insn |= order << 22;
+
+	return insn;
+}
+
+u32 aarch64_insn_gen_atomic_ld_op(enum aarch64_insn_register result,
+				  enum aarch64_insn_register address,
+				  enum aarch64_insn_register value,
+				  enum aarch64_insn_size_type size,
+				  enum aarch64_insn_mem_atomic_op op,
+				  enum aarch64_insn_mem_order_type order)
+{
+	u32 insn;
+
+	switch (op) {
+	case AARCH64_INSN_MEM_ATOMIC_ADD:
+		insn = aarch64_insn_get_ldadd_value();
+		break;
+	case AARCH64_INSN_MEM_ATOMIC_CLR:
+		insn = aarch64_insn_get_ldclr_value();
+		break;
+	case AARCH64_INSN_MEM_ATOMIC_EOR:
+		insn = aarch64_insn_get_ldeor_value();
+		break;
+	case AARCH64_INSN_MEM_ATOMIC_SET:
+		insn = aarch64_insn_get_ldset_value();
+		break;
+	case AARCH64_INSN_MEM_ATOMIC_SWP:
+		insn = aarch64_insn_get_swp_value();
+		break;
+	default:
+		pr_err("%s: unimplemented mem atomic op %d\n", __func__, op);
+		return AARCH64_BREAK_FAULT;
+	}
 
 	switch (size) {
 	case AARCH64_INSN_SIZE_32:
@@ -621,6 +680,8 @@ u32 aarch64_insn_gen_ldadd(enum aarch64_insn_register result,
 
 	insn = aarch64_insn_encode_ldst_size(size, insn);
 
+	insn = aarch64_insn_encode_ldst_order(order, insn);
+
 	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT, insn,
 					    result);
 
@@ -631,17 +692,68 @@ u32 aarch64_insn_gen_ldadd(enum aarch64_insn_register result,
 					    value);
 }
 
-u32 aarch64_insn_gen_stadd(enum aarch64_insn_register address,
-			   enum aarch64_insn_register value,
-			   enum aarch64_insn_size_type size)
+static u32 aarch64_insn_encode_cas_order(enum aarch64_insn_mem_order_type type,
+					 u32 insn)
 {
-	/*
-	 * STADD is simply encoded as an alias for LDADD with XZR as
-	 * the destination register.
-	 */
-	return aarch64_insn_gen_ldadd(AARCH64_INSN_REG_ZR, address,
-				      value, size);
+	u32 order;
+
+	switch (type) {
+	case AARCH64_INSN_MEM_ORDER_NONE:
+		order = 0;
+		break;
+	case AARCH64_INSN_MEM_ORDER_ACQ:
+		order = BIT(22);
+		break;
+	case AARCH64_INSN_MEM_ORDER_REL:
+		order = BIT(15);
+		break;
+	case AARCH64_INSN_MEM_ORDER_ACQREL:
+		order = BIT(15) | BIT(22);
+		break;
+	default:
+		pr_err("%s: unknown mem order %d\n", __func__, type);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	insn &= ~(BIT(15) | BIT(22));
+	insn |= order;
+
+	return insn;
+}
+
+u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
+			 enum aarch64_insn_register address,
+			 enum aarch64_insn_register value,
+			 enum aarch64_insn_size_type size,
+			 enum aarch64_insn_mem_order_type order)
+{
+	u32 insn;
+
+	switch (size) {
+	case AARCH64_INSN_SIZE_32:
+	case AARCH64_INSN_SIZE_64:
+		break;
+	default:
+		pr_err("%s: unimplemented size encoding %d\n", __func__, size);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	insn = aarch64_insn_get_cas_value();
+
+	insn = aarch64_insn_encode_ldst_size(size, insn);
+
+	insn = aarch64_insn_encode_cas_order(order, insn);
+
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT, insn,
+					    result);
+
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn,
+					    address);
+
+	return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RS, insn,
+					    value);
 }
+#endif
 
 static u32 aarch64_insn_encode_prfm_imm(enum aarch64_insn_prfm_type type,
 					enum aarch64_insn_prfm_target target,
@@ -1456,3 +1568,48 @@ u32 aarch64_insn_gen_extr(enum aarch64_insn_variant variant,
 	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, Rn);
 	return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RM, insn, Rm);
 }
+
+u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type)
+{
+	u32 opt;
+	u32 insn;
+
+	switch (type) {
+	case AARCH64_INSN_MB_SY:
+		opt = 0xf;
+		break;
+	case AARCH64_INSN_MB_ST:
+		opt = 0xe;
+		break;
+	case AARCH64_INSN_MB_LD:
+		opt = 0xd;
+		break;
+	case AARCH64_INSN_MB_ISH:
+		opt = 0xb;
+		break;
+	case AARCH64_INSN_MB_ISHST:
+		opt = 0xa;
+		break;
+	case AARCH64_INSN_MB_ISHLD:
+		opt = 0x9;
+		break;
+	case AARCH64_INSN_MB_NSH:
+		opt = 0x7;
+		break;
+	case AARCH64_INSN_MB_NSHST:
+		opt = 0x6;
+		break;
+	case AARCH64_INSN_MB_NSHLD:
+		opt = 0x5;
+		break;
+	default:
+		pr_err("%s: unknown dmb type %d\n", __func__, type);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	insn = aarch64_insn_get_dmb_value();
+	insn &= ~GENMASK(11, 8);
+	insn |= (opt << 8);
+
+	return insn;
+}
diff --git a/arch/arm64/net/bpf_jit.h b/arch/arm64/net/bpf_jit.h
index cc0cf0f5c7c3..9d9250c7cc72 100644
--- a/arch/arm64/net/bpf_jit.h
+++ b/arch/arm64/net/bpf_jit.h
@@ -89,9 +89,16 @@
 #define A64_STXR(sf, Rt, Rn, Rs) \
 	A64_LSX(sf, Rt, Rn, Rs, STORE_EX)
 
-/* LSE atomics */
+/*
+ * LSE atomics
+ *
+ * STADD is simply encoded as an alias for LDADD with XZR as
+ * the destination register.
+ */
 #define A64_STADD(sf, Rn, Rs) \
-	aarch64_insn_gen_stadd(Rn, Rs, A64_SIZE(sf))
+	aarch64_insn_gen_atomic_ld_op(A64_ZR, Rn, Rs, \
+				      A64_SIZE(sf), AARCH64_INSN_MEM_ATOMIC_ADD, \
+				      AARCH64_INSN_MEM_ORDER_NONE)
 
 /* Add/subtract (immediate) */
 #define A64_ADDSUB_IMM(sf, Rd, Rn, imm12, type) \
[ Sasha's backport helper bot ]
Hi,
✅ All tests passed successfully. No issues detected. No action required from the submitter.
The upstream commit SHA1 provided is correct: fa1114d9eba5087ba5e81aab4c56f546995e6cd3
WARNING: Author mismatch between patch and upstream commit:
Backport author: Pu Lehui <pulehui@huaweicloud.com>
Commit author: Hou Tao <houtao1@huawei.com>
Status in newer kernel trees:
6.15.y | Present (exact SHA1)
6.14.y | Present (exact SHA1)
6.12.y | Present (exact SHA1)
6.6.y  | Present (exact SHA1)
6.1.y  | Present (exact SHA1)
Note: The patch differs from the upstream commit:
---
1:  fa1114d9eba50 ! 1:  4087c33bc97c2 arm64: insn: add encoders for atomic operations
    @@ Metadata
     ## Commit message ##
        arm64: insn: add encoders for atomic operations
    
    +    [ Upstream commit fa1114d9eba5087ba5e81aab4c56f546995e6cd3 ]
    +
        It is a preparation patch for eBPF atomic supports under arm64. eBPF
        needs support atomic[64]_fetch_add, atomic[64]_[fetch_]{and,or,xor} and
        atomic[64]_{xchg|cmpxchg}. The ordering semantics of eBPF atomics are
    @@ Commit message
        Signed-off-by: Hou Tao <houtao1@huawei.com>
        Link: https://lore.kernel.org/r/20220217072232.1186625-3-houtao1@huawei.com
        Signed-off-by: Will Deacon <will@kernel.org>
    +    Signed-off-by: Pu Lehui <pulehui@huawei.com>
    
     ## arch/arm64/include/asm/insn.h ##
    @@ arch/arm64/include/asm/insn.h: enum aarch64_insn_ldst_type {
---
Results of testing on various branches:
| Branch                    | Patch Apply | Build Test |
|---------------------------|-------------|------------|
| stable/linux-6.1.y        | Success     | Success    |
From: James Morse <james.morse@arm.com>
[ Upstream commit 63de8abd97ddb9b758bd8f915ecbd18e1f1a87a0 ]
To generate code in the eBPF epilogue that uses the DSB instruction, insn.c needs a helper to encode the type and domain.
Re-use the crm encoding logic from the DMB instruction.
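For a quick sanity check of the result (sketch only; assumes the pre-existing dsb_base encoding 0xd503309f with CRm at bits [11:8]):

	u32 insn = aarch64_insn_gen_dsb(AARCH64_INSN_MB_ISH);

	/* 0xd503309f | (0xb << 8) == 0xd5033b9f, i.e. "dsb ish" */
	WARN_ON(insn != 0xd5033b9f);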
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Pu Lehui <pulehui@huawei.com>
---
 arch/arm64/include/asm/insn.h |  1 +
 arch/arm64/lib/insn.c         | 60 +++++++++++++++++++++--------------
 2 files changed, 38 insertions(+), 23 deletions(-)
diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 1e5760d567ae..76c8a43604f3 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -607,6 +607,7 @@ u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
 }
 #endif
 u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type);
+u32 aarch64_insn_gen_dsb(enum aarch64_insn_mb_type type);
 
 s32 aarch64_get_branch_offset(u32 insn);
 u32 aarch64_set_branch_offset(u32 insn, s32 offset);
diff --git a/arch/arm64/lib/insn.c b/arch/arm64/lib/insn.c
index bd119fde8504..edb85b33be10 100644
--- a/arch/arm64/lib/insn.c
+++ b/arch/arm64/lib/insn.c
@@ -5,6 +5,7 @@
  *
  * Copyright (C) 2014-2016 Zi Shen Lim <zlim.lnx@gmail.com>
  */
+#include <linux/bitfield.h>
 #include <linux/bitops.h>
 #include <linux/bug.h>
 #include <linux/printk.h>
@@ -1569,43 +1570,41 @@ u32 aarch64_insn_gen_extr(enum aarch64_insn_variant variant,
 	return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RM, insn, Rm);
 }
 
-u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type)
+static u32 __get_barrier_crm_val(enum aarch64_insn_mb_type type)
 {
-	u32 opt;
-	u32 insn;
-
 	switch (type) {
 	case AARCH64_INSN_MB_SY:
-		opt = 0xf;
-		break;
+		return 0xf;
 	case AARCH64_INSN_MB_ST:
-		opt = 0xe;
-		break;
+		return 0xe;
 	case AARCH64_INSN_MB_LD:
-		opt = 0xd;
-		break;
+		return 0xd;
 	case AARCH64_INSN_MB_ISH:
-		opt = 0xb;
-		break;
+		return 0xb;
 	case AARCH64_INSN_MB_ISHST:
-		opt = 0xa;
-		break;
+		return 0xa;
 	case AARCH64_INSN_MB_ISHLD:
-		opt = 0x9;
-		break;
+		return 0x9;
 	case AARCH64_INSN_MB_NSH:
-		opt = 0x7;
-		break;
+		return 0x7;
 	case AARCH64_INSN_MB_NSHST:
-		opt = 0x6;
-		break;
+		return 0x6;
 	case AARCH64_INSN_MB_NSHLD:
-		opt = 0x5;
-		break;
+		return 0x5;
 	default:
-		pr_err("%s: unknown dmb type %d\n", __func__, type);
+		pr_err("%s: unknown barrier type %d\n", __func__, type);
 		return AARCH64_BREAK_FAULT;
 	}
+}
+
+u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type)
+{
+	u32 opt;
+	u32 insn;
+
+	opt = __get_barrier_crm_val(type);
+	if (opt == AARCH64_BREAK_FAULT)
+		return AARCH64_BREAK_FAULT;
 
 	insn = aarch64_insn_get_dmb_value();
 	insn &= ~GENMASK(11, 8);
@@ -1613,3 +1612,18 @@ u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type)
 
 	return insn;
 }
+
+u32 aarch64_insn_gen_dsb(enum aarch64_insn_mb_type type)
+{
+	u32 opt, insn;
+
+	opt = __get_barrier_crm_val(type);
+	if (opt == AARCH64_BREAK_FAULT)
+		return AARCH64_BREAK_FAULT;
+
+	insn = aarch64_insn_get_dsb_base_value();
+	insn &= ~GENMASK(11, 8);
+	insn |= (opt << 8);
+
+	return insn;
+}
[ Sasha's backport helper bot ]
Hi,
✅ All tests passed successfully. No issues detected. No action required from the submitter.
The upstream commit SHA1 provided is correct: 63de8abd97ddb9b758bd8f915ecbd18e1f1a87a0
WARNING: Author mismatch between patch and upstream commit:
Backport author: Pu Lehui <pulehui@huaweicloud.com>
Commit author: James Morse <james.morse@arm.com>
Status in newer kernel trees:
6.15.y | Present (exact SHA1)
6.14.y | Present (different SHA1: 1e1963205784)
6.12.y | Present (different SHA1: 2a3915e86187)
6.6.y  | Present (different SHA1: 054fc98d691a)
6.1.y  | Present (different SHA1: cc0b8e148c33)
Note: The patch differs from the upstream commit:
---
1:  63de8abd97ddb ! 1:  97289ab86ac0c arm64: insn: Add support for encoding DSB
    @@ Metadata
     ## Commit message ##
        arm64: insn: Add support for encoding DSB
    
    +    [ Upstream commit 63de8abd97ddb9b758bd8f915ecbd18e1f1a87a0 ]
    +
        To generate code in the eBPF epilogue that uses the DSB instruction,
        insn.c needs a heler to encode the type and domain.
    
    @@ Commit message
        Signed-off-by: James Morse <james.morse@arm.com>
        Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    +    Signed-off-by: Pu Lehui <pulehui@huawei.com>
    
     ## arch/arm64/include/asm/insn.h ##
    @@ arch/arm64/include/asm/insn.h: u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
     #endif
     u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type);
    +u32 aarch64_insn_gen_dsb(enum aarch64_insn_mb_type type);
    -u32 aarch64_insn_gen_mrs(enum aarch64_insn_register result,
    -			 enum aarch64_insn_system_register sysreg);
    + s32 aarch64_get_branch_offset(u32 insn);
    + u32 aarch64_set_branch_offset(u32 insn, s32 offset);
    
     ## arch/arm64/lib/insn.c ##
    @@ arch/arm64/lib/insn.c: u32 aarch64_insn_gen_extr(enum aarch64_insn_variant varia
     	insn = aarch64_insn_get_dmb_value();
     	insn &= ~GENMASK(11, 8);
    @@ arch/arm64/lib/insn.c: u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type)
     	return insn;
     }
    ++
    +u32 aarch64_insn_gen_dsb(enum aarch64_insn_mb_type type)
    +{
    +	u32 opt, insn;
    @@ arch/arm64/lib/insn.c: u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type)
    +
    +	return insn;
    +}
    -+
    -u32 aarch64_insn_gen_mrs(enum aarch64_insn_register result,
    -		         enum aarch64_insn_system_register sysreg)
    -{
---
Results of testing on various branches:
| Branch                    | Patch Apply | Build Test |
|---------------------------|-------------|------------|
| stable/linux-6.1.y        | Success     | Success    |
From: James Morse <james.morse@arm.com>
[ Upstream commit e7956c92f396a44eeeb6eaf7a5b5e1ad24db6748 ]
is_spectre_bhb_fw_affected() allows the caller to determine if the CPU is known to need a firmware mitigation. CPUs are either on the list of CPUs we know about, or firmware has been queried and reported that the platform is affected - and mitigated by firmware.
This helper is not useful for determining whether the platform is mitigated by firmware. A CPU could be on the known list, but the firmware may not be implemented. It's affected but not mitigated.
spectre_bhb_enable_mitigation() handles this distinction by checking the firmware state before enabling the mitigation.
Add a helper to expose this state. This will be used by the BPF JIT to determine if calling firmware for a mitigation is necessary and supported.
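A sketch of the intended caller (the cBPF epilogue patch later in this series):

	if (is_spectre_bhb_fw_mitigated()) {
		/* emit an ARM_SMCCC_ARCH_WORKAROUND_3 call via HVC or SMC */
	}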
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Pu Lehui <pulehui@huawei.com>
---
 arch/arm64/include/asm/spectre.h | 1 +
 arch/arm64/kernel/proton-pack.c  | 5 +++++
 2 files changed, 6 insertions(+)
diff --git a/arch/arm64/include/asm/spectre.h b/arch/arm64/include/asm/spectre.h
index 6d7f03adece8..af06d2a4c49c 100644
--- a/arch/arm64/include/asm/spectre.h
+++ b/arch/arm64/include/asm/spectre.h
@@ -97,6 +97,7 @@ enum mitigation_state arm64_get_meltdown_state(void);
 
 enum mitigation_state arm64_get_spectre_bhb_state(void);
 bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry, int scope);
+bool is_spectre_bhb_fw_mitigated(void);
 void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
 bool try_emulate_el1_ssbs(struct pt_regs *regs, u32 instr);
 #endif	/* __ASSEMBLY__ */
diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
index df8188193c17..e79ca2c4841d 100644
--- a/arch/arm64/kernel/proton-pack.c
+++ b/arch/arm64/kernel/proton-pack.c
@@ -1088,6 +1088,11 @@ void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry)
 	update_mitigation_state(&spectre_bhb_state, state);
 }
 
+bool is_spectre_bhb_fw_mitigated(void)
+{
+	return test_bit(BHB_FW, &system_bhb_mitigations);
+}
+
 /* Patched to NOP when enabled */
 void noinstr spectre_bhb_patch_loop_mitigation_enable(struct alt_instr *alt,
 						      __le32 *origptr,
[ Sasha's backport helper bot ]
Hi,
✅ All tests passed successfully. No issues detected. No action required from the submitter.
The upstream commit SHA1 provided is correct: e7956c92f396a44eeeb6eaf7a5b5e1ad24db6748
WARNING: Author mismatch between patch and upstream commit:
Backport author: Pu Lehui <pulehui@huaweicloud.com>
Commit author: James Morse <james.morse@arm.com>
Status in newer kernel trees:
6.15.y | Present (exact SHA1)
6.14.y | Present (different SHA1: aa32707744d6)
6.12.y | Present (different SHA1: ec5bca57afc6)
6.6.y  | Present (different SHA1: 854da0ed0671)
6.1.y  | Present (different SHA1: 351a505eb478)
Note: The patch differs from the upstream commit:
---
1:  e7956c92f396a ! 1:  9cc988b44d8ee arm64: proton-pack: Expose whether the platform is mitigated by firmware
    @@ Metadata
     ## Commit message ##
        arm64: proton-pack: Expose whether the platform is mitigated by firmware
    
    +    [ Upstream commit e7956c92f396a44eeeb6eaf7a5b5e1ad24db6748 ]
    +
        is_spectre_bhb_fw_affected() allows the caller to determine if the
        CPU is known to need a firmware mitigation. CPUs are either on the
        list of CPUs we know about, or firmware has been queried and reported
        that
    @@ Commit message
        Signed-off-by: James Morse <james.morse@arm.com>
        Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    +    Signed-off-by: Pu Lehui <pulehui@huawei.com>
    
     ## arch/arm64/include/asm/spectre.h ##
    @@ arch/arm64/include/asm/spectre.h: enum mitigation_state arm64_get_meltdown_state(void);
    +bool is_spectre_bhb_fw_mitigated(void);
     void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
     bool try_emulate_el1_ssbs(struct pt_regs *regs, u32 instr);
    -
    + #endif	/* __ASSEMBLY__ */
    
     ## arch/arm64/kernel/proton-pack.c ##
    @@ arch/arm64/kernel/proton-pack.c: void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry)
---
Results of testing on various branches:
| Branch                    | Patch Apply | Build Test |
|---------------------------|-------------|------------|
| stable/linux-6.1.y        | Success     | Success    |
From: James Morse <james.morse@arm.com>
[ Upstream commit a1152be30a043d2d4dcb1683415f328bf3c51978 ]
Add a helper to expose the k value of the branchy loop. This is needed by the BPF JIT to generate the mitigation sequence in BPF programs.
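A caller would use it roughly like this (sketch; emit_mitigation_loop() is hypothetical):

	u8 k = get_spectre_bhb_loop_value();

	/* k == 0 means no loop workaround is known for this system */
	if (k)
		emit_mitigation_loop(k);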
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Pu Lehui <pulehui@huawei.com>
---
 arch/arm64/include/asm/spectre.h | 1 +
 arch/arm64/kernel/proton-pack.c  | 5 +++++
 2 files changed, 6 insertions(+)
diff --git a/arch/arm64/include/asm/spectre.h b/arch/arm64/include/asm/spectre.h
index af06d2a4c49c..56d1427b95d0 100644
--- a/arch/arm64/include/asm/spectre.h
+++ b/arch/arm64/include/asm/spectre.h
@@ -97,6 +97,7 @@ enum mitigation_state arm64_get_meltdown_state(void);
 
 enum mitigation_state arm64_get_spectre_bhb_state(void);
 bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry, int scope);
+u8 get_spectre_bhb_loop_value(void);
 bool is_spectre_bhb_fw_mitigated(void);
 void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
 bool try_emulate_el1_ssbs(struct pt_regs *regs, u32 instr);
diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
index e79ca2c4841d..71a6d3dd393a 100644
--- a/arch/arm64/kernel/proton-pack.c
+++ b/arch/arm64/kernel/proton-pack.c
@@ -998,6 +998,11 @@ bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry,
 	return true;
 }
 
+u8 get_spectre_bhb_loop_value(void)
+{
+	return max_bhb_k;
+}
+
 static void this_cpu_set_vectors(enum arm64_bp_harden_el1_vectors slot)
 {
 	const char *v = arm64_get_bp_hardening_vector(slot);
[ Sasha's backport helper bot ]
Hi,
✅ All tests passed successfully. No issues detected. No action required from the submitter.
The upstream commit SHA1 provided is correct: a1152be30a043d2d4dcb1683415f328bf3c51978
WARNING: Author mismatch between patch and upstream commit:
Backport author: Pu Lehui <pulehui@huaweicloud.com>
Commit author: James Morse <james.morse@arm.com>
Status in newer kernel trees:
6.15.y | Present (exact SHA1)
6.14.y | Present (different SHA1: 00565e8c0860)
6.12.y | Present (different SHA1: f2aebb8ec64d)
6.6.y  | Present (different SHA1: 73591041a551)
6.1.y  | Present (different SHA1: 497771234133)
Note: The patch differs from the upstream commit:
---
1:  a1152be30a043 ! 1:  20723d888fe98 arm64: proton-pack: Expose whether the branchy loop k value
    @@ Metadata
     ## Commit message ##
        arm64: proton-pack: Expose whether the branchy loop k value
    
    +    [ Upstream commit a1152be30a043d2d4dcb1683415f328bf3c51978 ]
    +
        Add a helper to expose the k value of the branchy loop. This is needed
        by the BPF JIT to generate the mitigation sequence in BPF programs.
    
    @@ Commit message
        Signed-off-by: James Morse <james.morse@arm.com>
        Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    +    Signed-off-by: Pu Lehui <pulehui@huawei.com>
    
     ## arch/arm64/include/asm/spectre.h ##
    @@ arch/arm64/include/asm/spectre.h: enum mitigation_state arm64_get_meltdown_state(void);
---
Results of testing on various branches:
| Branch                    | Patch Apply | Build Test |
|---------------------------|-------------|------------|
| stable/linux-6.1.y        | Success     | Success    |
From: Liu Song <liusong@linux.alibaba.com>
[ Upstream commit 877ace9eab7de032f954533afd5d1ecd0cf62eaf ]
In our environment, we found that the BHB mitigation has a large impact on benchmark performance. For example, in the lmbench test, the "process fork && exit" test performance drops by 20%. So it is necessary to be able to turn off this mitigation individually through the command line, avoiding having to rebuild the kernel with an adjusted config.
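For example, a boot entry that disables only the Spectre-BHB mitigation while keeping all other mitigations enabled (illustrative command line):

	linux /vmlinuz root=/dev/vda1 ro nospectre_bhb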
Signed-off-by: Liu Song <liusong@linux.alibaba.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/1661514050-22263-1-git-send-email-liusong@linux.al...
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Pu Lehui <pulehui@huawei.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  5 +++++
 arch/arm64/kernel/proton-pack.c                 | 10 +++++++++-
 2 files changed, 14 insertions(+), 1 deletion(-)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index e0670357d23f..d40b57f3d1e1 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3105,6 +3105,7 @@
 				spectre_bhi=off [X86]
 				spectre_v2_user=off [X86]
 				ssbd=force-off [ARM64]
+				nospectre_bhb [ARM64]
 				tsx_async_abort=off [X86]
 
 			Exceptions:
@@ -3526,6 +3527,10 @@
 			vulnerability. System may allow data leaks with this
 			option.
 
+	nospectre_bhb	[ARM64] Disable all mitigations for Spectre-BHB (branch
+			history injection) vulnerability. System may allow data leaks
+			with this option.
+
 	nospec_store_bypass_disable
 			[HW] Disable all mitigations for the Speculative Store
 			Bypass vulnerability
diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
index 71a6d3dd393a..9cc14f6de63e 100644
--- a/arch/arm64/kernel/proton-pack.c
+++ b/arch/arm64/kernel/proton-pack.c
@@ -1023,6 +1023,14 @@ static void this_cpu_set_vectors(enum arm64_bp_harden_el1_vectors slot)
 	isb();
 }
 
+static bool __read_mostly __nospectre_bhb;
+static int __init parse_spectre_bhb_param(char *str)
+{
+	__nospectre_bhb = true;
+	return 0;
+}
+early_param("nospectre_bhb", parse_spectre_bhb_param);
+
 void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry)
 {
 	bp_hardening_cb_t cpu_cb;
@@ -1036,7 +1044,7 @@ void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry)
 		/* No point mitigating Spectre-BHB alone. */
 	} else if (!IS_ENABLED(CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY)) {
 		pr_info_once("spectre-bhb mitigation disabled by compile time option\n");
-	} else if (cpu_mitigations_off()) {
+	} else if (cpu_mitigations_off() || __nospectre_bhb) {
 		pr_info_once("spectre-bhb mitigation disabled by command line option\n");
 	} else if (supports_ecbhb(SCOPE_LOCAL_CPU)) {
 		state = SPECTRE_MITIGATED;
[ Sasha's backport helper bot ]
Hi,
✅ All tests passed successfully. No issues detected. No action required from the submitter.
The upstream commit SHA1 provided is correct: 877ace9eab7de032f954533afd5d1ecd0cf62eaf
WARNING: Author mismatch between patch and upstream commit:
Backport author: Pu Lehui <pulehui@huaweicloud.com>
Commit author: Liu Song <liusong@linux.alibaba.com>
Status in newer kernel trees:
6.15.y | Present (exact SHA1)
6.14.y | Present (exact SHA1)
6.12.y | Present (exact SHA1)
6.6.y  | Present (exact SHA1)
6.1.y  | Present (exact SHA1)
Note: The patch differs from the upstream commit:
---
1:  877ace9eab7de ! 1:  8c7e23e93b2d7 arm64: spectre: increase parameters that can be used to turn off bhb mitigation individually
    @@ Metadata
     ## Commit message ##
        arm64: spectre: increase parameters that can be used to turn off bhb
        mitigation individually
    
    +    [ Upstream commit 877ace9eab7de032f954533afd5d1ecd0cf62eaf ]
    +
        In our environment, it was found that the mitigation BHB has a great
        impact on the benchmark performance. For example, in the lmbench test,
        the "process fork && exit" test performance drops by 20%.
    @@ Commit message
        Acked-by: Catalin Marinas <catalin.marinas@arm.com>
        Link: https://lore.kernel.org/r/1661514050-22263-1-git-send-email-liusong@linux.al...
        Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    +    Signed-off-by: Pu Lehui <pulehui@huawei.com>
    
     ## Documentation/admin-guide/kernel-parameters.txt ##
    @@
    +			spectre_bhi=off [X86]
     			spectre_v2_user=off [X86]
    -			spec_store_bypass_disable=off [X86,PPC]
     			ssbd=force-off [ARM64]
    +			nospectre_bhb [ARM64]
    -			l1tf=off [X86]
    -			mds=off [X86]
     			tsx_async_abort=off [X86]
    +
    +			Exceptions:
    @@ vulnerability. System may allow data leaks with this option.
---
Results of testing on various branches:
| Branch                    | Patch Apply | Build Test |
|---------------------------|-------------|------------|
| stable/linux-6.1.y        | Success     | Success    |
From: James Morse <james.morse@arm.com>
[ Upstream commit 0dfefc2ea2f29ced2416017d7e5b1253a54c2735 ]
A malicious BPF program may manipulate the branch history to influence what the hardware speculates will happen next.
On exit from a BPF program, emit the BHB mitigation sequence.
This is only applied for 'classic' cBPF programs that are loaded by seccomp.
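For an affected CPU without the CLRBHB instruction, the emitted epilogue sequence looks roughly like this (sketch; the exact emission is in build_bhb_mitigation() below, with k taken from get_spectre_bhb_loop_value()):

	/*
	 *	mov	x0, #k
	 * 1:	b	2f
	 * 2:	subs	x0, x0, #1
	 *	b.ne	1b
	 *	dsb	ish
	 *	isb
	 *
	 * and, when the platform is firmware-mitigated:
	 *
	 *	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_3
	 *	hvc	#0	// or smc #0, per the SMCCC conduit
	 */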
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Pu Lehui <pulehui@huawei.com>
---
 arch/arm64/include/asm/spectre.h |  1 +
 arch/arm64/kernel/proton-pack.c  |  2 +-
 arch/arm64/net/bpf_jit_comp.c    | 55 +++++++++++++++++++++++++++++---
 3 files changed, 53 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/include/asm/spectre.h b/arch/arm64/include/asm/spectre.h
index 56d1427b95d0..0c55d6ed435d 100644
--- a/arch/arm64/include/asm/spectre.h
+++ b/arch/arm64/include/asm/spectre.h
@@ -97,6 +97,7 @@ enum mitigation_state arm64_get_meltdown_state(void);
 
 enum mitigation_state arm64_get_spectre_bhb_state(void);
 bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry, int scope);
+extern bool __nospectre_bhb;
 u8 get_spectre_bhb_loop_value(void);
 bool is_spectre_bhb_fw_mitigated(void);
 void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
index 9cc14f6de63e..535fab25fde6 100644
--- a/arch/arm64/kernel/proton-pack.c
+++ b/arch/arm64/kernel/proton-pack.c
@@ -1023,7 +1023,7 @@ static void this_cpu_set_vectors(enum arm64_bp_harden_el1_vectors slot)
 	isb();
 }
 
-static bool __read_mostly __nospectre_bhb;
+bool __read_mostly __nospectre_bhb;
 static int __init parse_spectre_bhb_param(char *str)
 {
 	__nospectre_bhb = true;
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 4895b4d7e150..2691e53007eb 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -7,14 +7,17 @@
 
 #define pr_fmt(fmt) "bpf_jit: " fmt
 
+#include <linux/arm-smccc.h>
 #include <linux/bitfield.h>
 #include <linux/bpf.h>
+#include <linux/cpu.h>
 #include <linux/filter.h>
 #include <linux/printk.h>
 #include <linux/slab.h>
 
 #include <asm/byteorder.h>
 #include <asm/cacheflush.h>
+#include <asm/cpufeature.h>
 #include <asm/debug-monitors.h>
 #include <asm/insn.h>
 #include <asm/set_memory.h>
@@ -327,7 +330,48 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
 #undef jmp_offset
 }
 
-static void build_epilogue(struct jit_ctx *ctx)
+/* Clobbers BPF registers 1-4, aka x0-x3 */
+static void __maybe_unused build_bhb_mitigation(struct jit_ctx *ctx)
+{
+	const u8 r1 = bpf2a64[BPF_REG_1]; /* aka x0 */
+	u8 k = get_spectre_bhb_loop_value();
+
+	if (!IS_ENABLED(CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY) ||
+	    cpu_mitigations_off() || __nospectre_bhb ||
+	    arm64_get_spectre_v2_state() == SPECTRE_VULNERABLE)
+		return;
+
+	if (supports_clearbhb(SCOPE_SYSTEM)) {
+		emit(aarch64_insn_gen_hint(AARCH64_INSN_HINT_CLEARBHB), ctx);
+		return;
+	}
+
+	if (k) {
+		emit_a64_mov_i64(r1, k, ctx);
+		emit(A64_B(1), ctx);
+		emit(A64_SUBS_I(true, r1, r1, 1), ctx);
+		emit(A64_B_(A64_COND_NE, -2), ctx);
+		emit(aarch64_insn_gen_dsb(AARCH64_INSN_MB_ISH), ctx);
+		emit(aarch64_insn_get_isb_value(), ctx);
+	}
+
+	if (is_spectre_bhb_fw_mitigated()) {
+		emit(A64_ORR_I(false, r1, AARCH64_INSN_REG_ZR,
+			       ARM_SMCCC_ARCH_WORKAROUND_3), ctx);
+		switch (arm_smccc_1_1_get_conduit()) {
+		case SMCCC_CONDUIT_HVC:
+			emit(aarch64_insn_get_hvc_value(), ctx);
+			break;
+		case SMCCC_CONDUIT_SMC:
+			emit(aarch64_insn_get_smc_value(), ctx);
+			break;
+		default:
+			pr_err_once("Firmware mitigation enabled with unknown conduit\n");
+		}
+	}
+}
+
+static void build_epilogue(struct jit_ctx *ctx, bool was_classic)
 {
 	const u8 r0 = bpf2a64[BPF_REG_0];
 	const u8 r6 = bpf2a64[BPF_REG_6];
@@ -346,10 +390,13 @@ static void build_epilogue(struct jit_ctx *ctx)
 	emit(A64_POP(r8, r9, A64_SP), ctx);
 	emit(A64_POP(r6, r7, A64_SP), ctx);
 
+	if (was_classic)
+		build_bhb_mitigation(ctx);
+
 	/* Restore FP/LR registers */
 	emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx);
 
-	/* Set return value */
+	/* Move the return value from bpf:r0 (aka x7) to x0 */
 	emit(A64_MOV(1, A64_R(0), r0), ctx);
 
 	emit(A64_RET(A64_LR), ctx);
@@ -1062,7 +1109,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	}
 
 	ctx.epilogue_offset = ctx.idx;
-	build_epilogue(&ctx);
+	build_epilogue(&ctx, was_classic);
 
 	extable_size = prog->aux->num_exentries *
 		sizeof(struct exception_table_entry);
@@ -1094,7 +1141,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 		goto out_off;
 	}
 
-	build_epilogue(&ctx);
+	build_epilogue(&ctx, was_classic);
 
 	/* 3. Extra pass to validate JITed code. */
 	if (validate_code(&ctx)) {
[ Sasha's backport helper bot ]
Hi,
✅ All tests passed successfully. No issues detected. No action required from the submitter.
The upstream commit SHA1 provided is correct: 0dfefc2ea2f29ced2416017d7e5b1253a54c2735
WARNING: Author mismatch between patch and upstream commit:
Backport author: Pu Lehui <pulehui@huaweicloud.com>
Commit author: James Morse <james.morse@arm.com>
Status in newer kernel trees:
6.15.y | Present (exact SHA1)
6.14.y | Present (different SHA1: 852b8ae934b5)
6.12.y | Present (different SHA1: 38c345fd54af)
6.6.y  | Present (different SHA1: 42a20cf51011)
6.1.y  | Present (different SHA1: 8fe5c37b0e08)
Note: The patch differs from the upstream commit:
---
1:  0dfefc2ea2f29 ! 1:  f697b935a7719 arm64: bpf: Add BHB mitigation to the epilogue for cBPF programs
    @@ Metadata
     ## Commit message ##
        arm64: bpf: Add BHB mitigation to the epilogue for cBPF programs
    
    +    [ Upstream commit 0dfefc2ea2f29ced2416017d7e5b1253a54c2735 ]
    +
        A malicious BPF program may manipulate the branch history to influence
        what the hardware speculates will happen next.
    
    @@ Commit message
        Signed-off-by: James Morse <james.morse@arm.com>
        Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
        Acked-by: Daniel Borkmann <daniel@iogearbox.net>
    +    Signed-off-by: Pu Lehui <pulehui@huawei.com>
    
     ## arch/arm64/include/asm/spectre.h ##
    @@ arch/arm64/include/asm/spectre.h: enum mitigation_state arm64_get_meltdown_state(void);
    
     ## arch/arm64/net/bpf_jit_comp.c ##
    @@
     +#include <linux/arm-smccc.h>
      #include <linux/bitfield.h>
      #include <linux/bpf.h>
    ++#include <linux/cpu.h>
      #include <linux/filter.h>
    -@@
    - #include <asm/asm-extable.h>
    + #include <linux/printk.h>
    + #include <linux/slab.h>
    + #include <asm/byteorder.h>
      #include <asm/cacheflush.h>
     +#include <asm/cpufeature.h>
      #include <asm/debug-monitors.h>
      #include <asm/insn.h>
    - #include <asm/text-patching.h>
    -@@ arch/arm64/net/bpf_jit_comp.c: static void build_plt(struct jit_ctx *ctx)
    -	plt->target = (u64)&dummy_tramp;
    + #include <asm/set_memory.h>
    +@@ arch/arm64/net/bpf_jit_comp.c: static int emit_bpf_tail_call(struct jit_ctx *ctx)
    + #undef jmp_offset
      }
    
     -static void build_epilogue(struct jit_ctx *ctx)
    @@ arch/arm64/net/bpf_jit_comp.c: static void build_plt(struct jit_ctx *ctx)
     +static void build_epilogue(struct jit_ctx *ctx, bool was_classic)
      {
      	const u8 r0 = bpf2a64[BPF_REG_0];
    -	const u8 ptr = bpf2a64[TCCNT_PTR];
    +	const u8 r6 = bpf2a64[BPF_REG_6];
    @@ arch/arm64/net/bpf_jit_comp.c: static void build_epilogue(struct jit_ctx *ctx)
    -
    -	emit(A64_POP(A64_ZR, ptr, A64_SP), ctx);
    +	emit(A64_POP(r8, r9, A64_SP), ctx);
    +	emit(A64_POP(r6, r7, A64_SP), ctx);
    
     +	if (was_classic)
     +		build_bhb_mitigation(ctx);
    @@ arch/arm64/net/bpf_jit_comp.c: static void build_epilogue(struct jit_ctx *ctx)
     +	/* Move the return value from bpf:r0 (aka x7) to x0 */
      	emit(A64_MOV(1, A64_R(0), r0), ctx);
    
    -	/* Authenticate lr */
    +	emit(A64_RET(A64_LR), ctx);
    @@ arch/arm64/net/bpf_jit_comp.c: struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
      }
    
      	ctx.epilogue_offset = ctx.idx;
     -	build_epilogue(&ctx);
     +	build_epilogue(&ctx, was_classic);
    -	build_plt(&ctx);
    
    -	extable_align = __alignof__(struct exception_table_entry);
    +	extable_size = prog->aux->num_exentries *
    +		sizeof(struct exception_table_entry);
    @@ arch/arm64/net/bpf_jit_comp.c: struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
    -		goto out_free_hdr;
    +		goto out_off;
      }
    
     -	build_epilogue(&ctx);
     +	build_epilogue(&ctx, was_classic);
    -	build_plt(&ctx);
    
    -	/* Extra pass to validate JITed code. */
    +	/* 3. Extra pass to validate JITed code. */
    +	if (validate_code(&ctx)) {
---
Results of testing on various branches:
| Branch                    | Patch Apply | Build Test |
|---------------------------|-------------|------------|
| stable/linux-6.1.y        | Success     | Success    |
From: James Morse <james.morse@arm.com>
[ Upstream commit f300769ead032513a68e4a02e806393402e626f8 ]
Support for eBPF programs loaded by unprivileged users is typically disabled. This means only cBPF programs need to be mitigated for BHB.
In addition, only mitigate cBPF programs that were loaded by an unprivileged user. Privileged users can also load the same program via eBPF, making the mitigation pointless.
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Pu Lehui <pulehui@huawei.com>
---
 arch/arm64/net/bpf_jit_comp.c | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 2691e53007eb..654e7ed2d1a6 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -341,6 +341,9 @@ static void __maybe_unused build_bhb_mitigation(struct jit_ctx *ctx)
 	    arm64_get_spectre_v2_state() == SPECTRE_VULNERABLE)
 		return;
 
+	if (capable(CAP_SYS_ADMIN))
+		return;
+
 	if (supports_clearbhb(SCOPE_SYSTEM)) {
 		emit(aarch64_insn_gen_hint(AARCH64_INSN_HINT_CLEARBHB), ctx);
 		return;
 	}
[ Sasha's backport helper bot ]
Hi,
✅ All tests passed successfully. No issues detected. No action required from the submitter.
The upstream commit SHA1 provided is correct: f300769ead032513a68e4a02e806393402e626f8
WARNING: Author mismatch between patch and upstream commit:
Backport author: Pu Lehui <pulehui@huaweicloud.com>
Commit author: James Morse <james.morse@arm.com>
Status in newer kernel trees:
6.15.y | Present (exact SHA1)
6.14.y | Present (different SHA1: 477481c43482)
6.12.y | Present (different SHA1: e5f5100f1c64)
6.6.y  | Present (different SHA1: 80251f62028f)
6.1.y  | Present (different SHA1: 6e52d043f7db)
Note: The patch differs from the upstream commit:
---
1:  f300769ead032 ! 1:  238fedf68f1c7 arm64: bpf: Only mitigate cBPF programs loaded by unprivileged users
    @@ Metadata
     ## Commit message ##
        arm64: bpf: Only mitigate cBPF programs loaded by unprivileged users
    
    +    [ Upstream commit f300769ead032513a68e4a02e806393402e626f8 ]
    +
        Support for eBPF programs loaded by unprivileged users is typically
        disabled. This means only cBPF programs need to be mitigated for BHB.
    
    @@ Commit message
        Signed-off-by: James Morse <james.morse@arm.com>
        Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
        Acked-by: Daniel Borkmann <daniel@iogearbox.net>
    +    Signed-off-by: Pu Lehui <pulehui@huawei.com>
    
     ## arch/arm64/net/bpf_jit_comp.c ##
    @@ arch/arm64/net/bpf_jit_comp.c: static void __maybe_unused build_bhb_mitigation(struct jit_ctx *ctx)
---
Results of testing on various branches:
| Branch                    | Patch Apply | Build Test |
|---------------------------|-------------|------------|
| stable/linux-6.1.y        | Success     | Success    |
From: James Morse <james.morse@arm.com>
[ Upstream commit efe676a1a7554219eae0b0dcfe1e0cdcc9ef9aef ]
Update the list of 'k' values for the branch mitigation from Arm's website.
Add the values for Cortex-X1C. The MIDR_EL1 value can be found here: https://developer.arm.com/documentation/101968/0002/Register-descriptions/AA...
Link: https://developer.arm.com/documentation/110280/2-0/?lang=en
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Pu Lehui <pulehui@huawei.com>
---
 arch/arm64/include/asm/cputype.h | 2 ++
 arch/arm64/kernel/proton-pack.c  | 1 +
 2 files changed, 3 insertions(+)
diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index 8fe0c8d0057a..ca093982cbf7 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -81,6 +81,7 @@
 #define ARM_CPU_PART_CORTEX_A78AE	0xD42
 #define ARM_CPU_PART_CORTEX_X1		0xD44
 #define ARM_CPU_PART_CORTEX_A510	0xD46
+#define ARM_CPU_PART_CORTEX_X1C		0xD4C
 #define ARM_CPU_PART_CORTEX_A520	0xD80
 #define ARM_CPU_PART_CORTEX_A710	0xD47
 #define ARM_CPU_PART_CORTEX_A715	0xD4D
@@ -147,6 +148,7 @@
 #define MIDR_CORTEX_A78AE	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78AE)
 #define MIDR_CORTEX_X1	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1)
 #define MIDR_CORTEX_A510 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A510)
+#define MIDR_CORTEX_X1C MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1C)
 #define MIDR_CORTEX_A520 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A520)
 #define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710)
 #define MIDR_CORTEX_A715 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A715)
diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
index 535fab25fde6..42359eaba2db 100644
--- a/arch/arm64/kernel/proton-pack.c
+++ b/arch/arm64/kernel/proton-pack.c
@@ -891,6 +891,7 @@ static u8 spectre_bhb_loop_affected(void)
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A78AE),
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C),
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_X1),
+		MIDR_ALL_VERSIONS(MIDR_CORTEX_X1C),
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_X2),
 		MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
[ Sasha's backport helper bot ]
Hi,
✅ All tests passed successfully. No issues detected. No action required from the submitter.
The upstream commit SHA1 provided is correct: efe676a1a7554219eae0b0dcfe1e0cdcc9ef9aef
WARNING: Author mismatch between patch and upstream commit:
Backport author: Pu Lehui <pulehui@huaweicloud.com>
Commit author: James Morse <james.morse@arm.com>
Status in newer kernel trees:
6.15.y | Present (exact SHA1)
6.14.y | Present (different SHA1: 2dc0e36bb942)
6.12.y | Present (different SHA1: 2176530849b1)
6.6.y  | Present (different SHA1: ca8a5626ca0c)
6.1.y  | Present (different SHA1: 9fc1391552ad)
Note: The patch differs from the upstream commit:
---
1:  efe676a1a7554 ! 1:  f17311a7544fc arm64: proton-pack: Add new CPUs 'k' values for branch mitigation
    @@ Metadata
     ## Commit message ##
        arm64: proton-pack: Add new CPUs 'k' values for branch mitigation
    
    +    [ Upstream commit efe676a1a7554219eae0b0dcfe1e0cdcc9ef9aef ]
    +
        Update the list of 'k' values for the branch mitigation from arm's
        website.
    
    @@ Commit message
        Link: https://developer.arm.com/documentation/110280/2-0/?lang=en
        Signed-off-by: James Morse <james.morse@arm.com>
        Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    +    Signed-off-by: Pu Lehui <pulehui@huawei.com>
    
     ## arch/arm64/include/asm/cputype.h ##
    @@
---
Results of testing on various branches:
| Branch                    | Patch Apply | Build Test |
|---------------------------|-------------|------------|
| stable/linux-6.1.y        | Success     | Success    |