Syzbot caught an array out-of-bounds bug [1]. It turns out that when a BPF program is rewritten in do_misc_fixups(), an extra 8 bytes can be allocated on its stack, which eventually pushes stack_depth past 512.
I was able to reproduce this issue probabilistically by enabling CONFIG_UBSAN=y and disabling CONFIG_BPF_JIT_ALWAYS_ON with the selftest I provide in the second patch (although it doesn't happen every time - I didn't dig deeper into why UBSAN behaves this way).
Furthermore, if I set /proc/sys/net/core/bpf_jit_enable to 0 to disable the JIT, a panic occurs, for the same underlying reason: bpf_func is assigned an incorrect address.
[---[ end trace ]---
[Oops: general protection fault, probably for non-canonical address 0x100f0e0e0d090808: 0000 [#1] PREEMPT SMP NOPTI
[Tainted: [W]=WARN, [O]=OOT_MODULE
[RIP: 0010:bpf_test_run+0x1d2/0x360
[RSP: 0018:ffffafc7955178a0 EFLAGS: 00010246
[RAX: 100f0e0e0d090808 RBX: ffff8e9fdb2c4100 RCX: 0000000000000018
[RDX: 00000000002b5b18 RSI: ffffafc780497048 RDI: ffff8ea04d601700
[RBP: ffffafc780497000 R08: ffffafc795517a0c R09: 0000000000000000
[R10: 0000000000000000 R11: fefefefefefefeff R12: ffff8ea04d601700
[R13: ffffafc795517928 R14: ffffafc795517928 R15: 0000000000000000
[CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[CR2: 00007f181c064648 CR3: 00000001aa2be003 CR4: 0000000000770ef0
[DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[DR3: 0000000000000000 DR6: 00000000fffe07f0 DR7: 0000000000000400
[PKRU: 55555554
[Call Trace:
[ <TASK>
[ ? die_addr+0x36/0x90
[ ? exc_general_protection+0x237/0x430
[ ? asm_exc_general_protection+0x26/0x30
[ ? bpf_test_run+0x1d2/0x360
[ ? bpf_test_run+0x10d/0x360
[ ? __link_object+0x12a/0x1e0
[ ? slab_build_skb+0x23/0x130
[ ? kmem_cache_alloc_noprof+0x2ea/0x3f0
[ ? sk_prot_alloc+0xc2/0x120
[ bpf_prog_test_run_skb+0x21b/0x590
[ __sys_bpf+0x340/0xa80
[ __x64_sys_bpf+0x1e/0x30
---
v1 -> v2: Directly reject loading programs with a stack size greater than 512 when the JIT is disabled. (Suggested by Alexei Starovoitov)
https://lore.kernel.org/bpf/20250212135251.85487-1-mrpre@163.com/T/#u
---
Jiayuan Chen (3):
  bpf: Fix array bounds error with may_goto
  selftests/bpf: Allow the program to select specific modes for testing
  selftests/bpf: Add selftest for may_goto
 kernel/bpf/core.c                             | 18 +++++--
 kernel/bpf/verifier.c                         |  7 +++
 tools/testing/selftests/bpf/progs/bpf_misc.h  |  2 +
 .../selftests/bpf/progs/verifier_stack_ptr.c  | 50 +++++++++++++++++++
 tools/testing/selftests/bpf/test_loader.c     | 27 ++++++++++
 5 files changed, 100 insertions(+), 4 deletions(-)
may_goto uses an additional 8 bytes on the stack, which causes the interpreters[] array to go out of bounds when calculating index by stack_size.
1. If a BPF program is rewritten, re-evaluate the stack size. For non-JIT cases, reject loading directly.
2. For non-JIT cases, calculating interpreters[idx] may still cause out-of-bounds array access, and just warn about it.
3. For jit_requested cases, executing bpf_func also needs to trigger a warning, so move the definition of __bpf_prog_ret0_warn outside the CONFIG_BPF_JIT_ALWAYS_ON guard.
Reported-by: syzbot+d2a2c639d03ac200a4f1@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/bpf/0000000000000f823606139faa5d@google.com/
Fixes: 011832b97b311 ("bpf: Introduce may_goto instruction")
Signed-off-by: Jiayuan Chen <mrpre@163.com>
---
 kernel/bpf/core.c     | 18 ++++++++++++++----
 kernel/bpf/verifier.c |  7 +++++++
 2 files changed, 21 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index da729cbbaeb9..59291261f825 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2269,6 +2269,9 @@ EVAL6(PROG_NAME_LIST, 32, 64, 96, 128, 160, 192)
 EVAL6(PROG_NAME_LIST, 224, 256, 288, 320, 352, 384)
 EVAL4(PROG_NAME_LIST, 416, 448, 480, 512)
 };
+
+#define MAX_INTERPRETERS_CALLBACK (sizeof(interpreters) / sizeof(*interpreters))
+
 #undef PROG_NAME_LIST
 #define PROG_NAME_LIST(stack_size) PROG_NAME_ARGS(stack_size),
 static __maybe_unused
@@ -2290,17 +2293,18 @@ void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth)
 	insn->code = BPF_JMP | BPF_CALL_ARGS;
 }
 #endif
-#else
+#endif
+
 static unsigned int __bpf_prog_ret0_warn(const void *ctx,
 					 const struct bpf_insn *insn)
 {
 	/* If this handler ever gets executed, then BPF_JIT_ALWAYS_ON
-	 * is not working properly, so warn about it!
+	 * is not working properly, or interpreter is being used when
+	 * prog->jit_requested is not 0, so warn about it!
 	 */
 	WARN_ON_ONCE(1);
 	return 0;
 }
-#endif
 
 bool bpf_prog_map_compatible(struct bpf_map *map,
 			     const struct bpf_prog *fp)
@@ -2380,8 +2384,14 @@ static void bpf_prog_select_func(struct bpf_prog *fp)
 {
 #ifndef CONFIG_BPF_JIT_ALWAYS_ON
 	u32 stack_depth = max_t(u32, fp->aux->stack_depth, 1);
+	u32 idx = (round_up(stack_depth, 32) / 32) - 1;
 
-	fp->bpf_func = interpreters[(round_up(stack_depth, 32) / 32) - 1];
+	if (!fp->jit_requested) {
+		WARN_ON_ONCE(idx >= MAX_INTERPRETERS_CALLBACK);
+		fp->bpf_func = interpreters[idx];
+	} else {
+		fp->bpf_func = __bpf_prog_ret0_warn;
+	}
 #else
 	fp->bpf_func = __bpf_prog_ret0_warn;
 #endif
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9971c03adfd5..fcd302904ba0 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -21882,6 +21882,13 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 		if (subprogs[cur_subprog + 1].start == i + delta + 1) {
 			subprogs[cur_subprog].stack_depth += stack_depth_extra;
 			subprogs[cur_subprog].stack_extra = stack_depth_extra;
+
+			stack_depth = subprogs[cur_subprog].stack_depth;
+			if (stack_depth > MAX_BPF_STACK && !prog->jit_requested) {
+				verbose(env, "stack size %d(extra %d) is too large\n",
+					stack_depth, stack_depth_extra);
+				return -EINVAL;
+			}
 			cur_subprog++;
 			stack_depth = subprogs[cur_subprog].stack_depth;
 			stack_depth_extra = 0;
On Thu, Feb 13, 2025 at 5:13 AM Jiayuan Chen mrpre@163.com wrote:
may_goto uses an additional 8 bytes on the stack, which causes the interpreters[] array to go out of bounds when calculating index by stack_size.
- If a BPF program is rewritten, re-evaluate the stack size. For non-JIT
cases, reject loading directly.
- For non-JIT cases, calculating interpreters[idx] may still cause
out-of-bounds array access, and just warn about it.
- For jit_requested cases, the execution of bpf_func also needs to be
warned, so move the definition of __bpf_prog_ret0_warn outside the CONFIG_BPF_JIT_ALWAYS_ON guard.
Reported-by: syzbot+d2a2c639d03ac200a4f1@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/bpf/0000000000000f823606139faa5d@google.com/
Fixes: 011832b97b311 ("bpf: Introduce may_goto instruction")
Signed-off-by: Jiayuan Chen <mrpre@163.com>
 kernel/bpf/core.c     | 18 ++++++++++++++----
 kernel/bpf/verifier.c |  7 +++++++
 2 files changed, 21 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index da729cbbaeb9..59291261f825 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -2269,6 +2269,9 @@ EVAL6(PROG_NAME_LIST, 32, 64, 96, 128, 160, 192) EVAL6(PROG_NAME_LIST, 224, 256, 288, 320, 352, 384) EVAL4(PROG_NAME_LIST, 416, 448, 480, 512) };
+#define MAX_INTERPRETERS_CALLBACK (sizeof(interpreters) / sizeof(*interpreters))
There is ARRAY_SIZE macro.
#undef PROG_NAME_LIST #define PROG_NAME_LIST(stack_size) PROG_NAME_ARGS(stack_size), static __maybe_unused @@ -2290,17 +2293,18 @@ void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth) insn->code = BPF_JMP | BPF_CALL_ARGS; } #endif -#else +#endif
static unsigned int __bpf_prog_ret0_warn(const void *ctx, const struct bpf_insn *insn) { /* If this handler ever gets executed, then BPF_JIT_ALWAYS_ON
* is not working properly, so warn about it!
* is not working properly, or interpreter is being used when
* prog->jit_requested is not 0, so warn about it! */ WARN_ON_ONCE(1); return 0;
} -#endif
bool bpf_prog_map_compatible(struct bpf_map *map, const struct bpf_prog *fp) @@ -2380,8 +2384,14 @@ static void bpf_prog_select_func(struct bpf_prog *fp) { #ifndef CONFIG_BPF_JIT_ALWAYS_ON u32 stack_depth = max_t(u32, fp->aux->stack_depth, 1);
u32 idx = (round_up(stack_depth, 32) / 32) - 1;
fp->bpf_func = interpreters[(round_up(stack_depth, 32) / 32) - 1];
if (!fp->jit_requested) {
I don't think above check is necessary. Why not just:

if (WARN_ON_ONCE(idx >= ARRAY_SIZE(interpreters)))
	fp->bpf_func = __bpf_prog_ret0_warn;
else
	fp->bpf_func = interpreters[idx];
WARN_ON_ONCE(idx >= MAX_INTERPRETERS_CALLBACK);
fp->bpf_func = interpreters[idx];
} else {
fp->bpf_func = __bpf_prog_ret0_warn;
}
#else fp->bpf_func = __bpf_prog_ret0_warn; #endif diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 9971c03adfd5..fcd302904ba0 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -21882,6 +21882,13 @@ static int do_misc_fixups(struct bpf_verifier_env *env) if (subprogs[cur_subprog + 1].start == i + delta + 1) { subprogs[cur_subprog].stack_depth += stack_depth_extra; subprogs[cur_subprog].stack_extra = stack_depth_extra;
stack_depth = subprogs[cur_subprog].stack_depth;
if (stack_depth > MAX_BPF_STACK && !prog->jit_requested) {
verbose(env, "stack size %d(extra %d) is too large\n",
stack_depth, stack_depth_extra);
return -EINVAL;
} cur_subprog++; stack_depth = subprogs[cur_subprog].stack_depth; stack_depth_extra = 0;
-- 2.47.1
On Thu, Feb 13, 2025 at 08:02:55AM -0800, Alexei Starovoitov wrote:
On Thu, Feb 13, 2025 at 5:13 AM Jiayuan Chen mrpre@163.com wrote:
may_goto uses an additional 8 bytes on the stack, which causes the interpreters[] array to go out of bounds when calculating index by stack_size.
- If a BPF program is rewritten, re-evaluate the stack size. For non-JIT
cases, reject loading directly.
- For non-JIT cases, calculating interpreters[idx] may still cause
out-of-bounds array access, and just warn about it.
- For jit_requested cases, the execution of bpf_func also needs to be
warned, so move the definition of __bpf_prog_ret0_warn outside the CONFIG_BPF_JIT_ALWAYS_ON guard.
[...]
EVAL6(PROG_NAME_LIST, 224, 256, 288, 320, 352, 384) EVAL4(PROG_NAME_LIST, 416, 448, 480, 512) };
+#define MAX_INTERPRETERS_CALLBACK (sizeof(interpreters) / sizeof(*interpreters))
There is ARRAY_SIZE macro.
Thanks, I will use it.
#undef PROG_NAME_LIST #define PROG_NAME_LIST(stack_size) PROG_NAME_ARGS(stack_size), static __maybe_unused @@ -2290,17 +2293,18 @@ void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth) insn->code = BPF_JMP | BPF_CALL_ARGS; } #endif -#else +#endif
static unsigned int __bpf_prog_ret0_warn(const void *ctx, const struct bpf_insn *insn) { /* If this handler ever gets executed, then BPF_JIT_ALWAYS_ON
* is not working properly, so warn about it!
* is not working properly, or interpreter is being used when
* prog->jit_requested is not 0, so warn about it! */ WARN_ON_ONCE(1); return 0;
} -#endif
bool bpf_prog_map_compatible(struct bpf_map *map, const struct bpf_prog *fp) @@ -2380,8 +2384,14 @@ static void bpf_prog_select_func(struct bpf_prog *fp) { #ifndef CONFIG_BPF_JIT_ALWAYS_ON u32 stack_depth = max_t(u32, fp->aux->stack_depth, 1);
u32 idx = (round_up(stack_depth, 32) / 32) - 1;
fp->bpf_func = interpreters[(round_up(stack_depth, 32) / 32) - 1];
if (!fp->jit_requested) {
I don't think above check is necessary. Why not just:

if (WARN_ON_ONCE(idx >= ARRAY_SIZE(interpreters)))
	fp->bpf_func = __bpf_prog_ret0_warn;
else
	fp->bpf_func = interpreters[idx];
When jit_requested is set to 1, the stack_depth can still go above 512, and we'd end up executing this function, where the index calculation would overflow, triggering an array out-of-bounds warning from UBSAN or WARN().
WARN_ON_ONCE(idx >= MAX_INTERPRETERS_CALLBACK);
fp->bpf_func = interpreters[idx];
} else {
fp->bpf_func = __bpf_prog_ret0_warn; stack_depth_extra = 0;
-- 2.47.1
On Thu, Feb 13, 2025 at 9:03 AM Jiayuan Chen mrpre@163.com wrote:
On Thu, Feb 13, 2025 at 08:02:55AM -0800, Alexei Starovoitov wrote:
On Thu, Feb 13, 2025 at 5:13 AM Jiayuan Chen mrpre@163.com wrote:
may_goto uses an additional 8 bytes on the stack, which causes the interpreters[] array to go out of bounds when calculating index by stack_size.
- If a BPF program is rewritten, re-evaluate the stack size. For non-JIT
cases, reject loading directly.
- For non-JIT cases, calculating interpreters[idx] may still cause
out-of-bounds array access, and just warn about it.
- For jit_requested cases, the execution of bpf_func also needs to be
warned, so move the definition of __bpf_prog_ret0_warn outside the CONFIG_BPF_JIT_ALWAYS_ON guard.
[...]
EVAL6(PROG_NAME_LIST, 224, 256, 288, 320, 352, 384) EVAL4(PROG_NAME_LIST, 416, 448, 480, 512) };
+#define MAX_INTERPRETERS_CALLBACK (sizeof(interpreters) / sizeof(*interpreters))
There is ARRAY_SIZE macro.
Thanks, I will use it.
#undef PROG_NAME_LIST #define PROG_NAME_LIST(stack_size) PROG_NAME_ARGS(stack_size), static __maybe_unused @@ -2290,17 +2293,18 @@ void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth) insn->code = BPF_JMP | BPF_CALL_ARGS; } #endif -#else +#endif
static unsigned int __bpf_prog_ret0_warn(const void *ctx, const struct bpf_insn *insn) { /* If this handler ever gets executed, then BPF_JIT_ALWAYS_ON
* is not working properly, so warn about it!
* is not working properly, or interpreter is being used when
* prog->jit_requested is not 0, so warn about it! */ WARN_ON_ONCE(1); return 0;
} -#endif
bool bpf_prog_map_compatible(struct bpf_map *map, const struct bpf_prog *fp) @@ -2380,8 +2384,14 @@ static void bpf_prog_select_func(struct bpf_prog *fp) { #ifndef CONFIG_BPF_JIT_ALWAYS_ON u32 stack_depth = max_t(u32, fp->aux->stack_depth, 1);
u32 idx = (round_up(stack_depth, 32) / 32) - 1;
fp->bpf_func = interpreters[(round_up(stack_depth, 32) / 32) - 1];
if (!fp->jit_requested) {
I don't think above check is necessary. Why not just:

if (WARN_ON_ONCE(idx >= ARRAY_SIZE(interpreters)))
	fp->bpf_func = __bpf_prog_ret0_warn;
else
	fp->bpf_func = interpreters[idx];
When jit_requested is set to 1, the stack_depth can still go above 512, and we'd end up executing this function, where the index calculation would overflow, triggering an array out-of-bounds warning from UBSAN or WARN().
Ok, then do:

if (!fp->jit_requested && WARN_ON_ONCE(idx >= ARRAY_SIZE(interpreters)))
WARN_ON_ONCE(idx >= MAX_INTERPRETERS_CALLBACK);
fp->bpf_func = interpreters[idx];
since warning and anyway proceeding to access the array out of bounds is just wrong.
In some cases, such as may_goto, the verification logic differs between the interpreter and the JIT, so a test program behaves differently depending on the runtime mode and needs separate expectations for each.
Signed-off-by: Jiayuan Chen <mrpre@163.com>
---
 tools/testing/selftests/bpf/progs/bpf_misc.h |  2 ++
 tools/testing/selftests/bpf/test_loader.c    | 27 ++++++++++++++++++++
 2 files changed, 29 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h
index f45f4352feeb..dd23822cfb5a 100644
--- a/tools/testing/selftests/bpf/progs/bpf_misc.h
+++ b/tools/testing/selftests/bpf/progs/bpf_misc.h
@@ -135,6 +135,8 @@
 #define __arch_arm64 __arch("ARM64")
 #define __arch_riscv64 __arch("RISCV64")
 #define __caps_unpriv(caps) __attribute__((btf_decl_tag("comment:test_caps_unpriv=" EXPAND_QUOTE(caps))))
+#define __use_jit() __attribute__((btf_decl_tag("comment:run_mode=jit")))
+#define __use_interp() __attribute__((btf_decl_tag("comment:run_mode=interpreter")))
 
 /* Define common capabilities tested using __caps_unpriv */
 #define CAP_NET_ADMIN 12
diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c
index 53b06647cf57..2c23178d9a7a 100644
--- a/tools/testing/selftests/bpf/test_loader.c
+++ b/tools/testing/selftests/bpf/test_loader.c
@@ -37,6 +37,7 @@
 #define TEST_TAG_JITED_PFX "comment:test_jited="
 #define TEST_TAG_JITED_PFX_UNPRIV "comment:test_jited_unpriv="
 #define TEST_TAG_CAPS_UNPRIV "comment:test_caps_unpriv="
+#define TEST_TAG_RUN_MODE_PFX "comment:run_mode="
 
 /* Warning: duplicated in bpf_misc.h */
 #define POINTER_VALUE 0xcafe4all
@@ -55,6 +56,11 @@ enum mode {
 	UNPRIV = 2
 };
 
+enum run_mode {
+	JIT = 1 << 0,
+	INTERP = 1 << 1
+};
+
 struct expect_msg {
 	const char *substr; /* substring match */
 	regex_t regex;
@@ -87,6 +93,7 @@ struct test_spec {
 	int prog_flags;
 	int mode_mask;
 	int arch_mask;
+	int run_mode;
 	bool auxiliary;
 	bool valid;
 };
@@ -406,6 +413,7 @@ static int parse_test_spec(struct test_loader *tester,
 	bool collect_jit = false;
 	int func_id, i, err = 0;
 	u32 arch_mask = 0;
+	u32 run_mode = 0;
 	struct btf *btf;
 	enum arch arch;
 
@@ -580,10 +588,22 @@ static int parse_test_spec(struct test_loader *tester,
 			if (err)
 				goto cleanup;
 			spec->mode_mask |= UNPRIV;
+		} else if (str_has_pfx(s, TEST_TAG_RUN_MODE_PFX)) {
+			val = s + sizeof(TEST_TAG_RUN_MODE_PFX) - 1;
+			if (strcmp(val, "jit") == 0) {
+				run_mode = JIT;
+			} else if (strcmp(val, "interpreter") == 0) {
+				run_mode = INTERP;
+			} else {
+				PRINT_FAIL("bad run mode spec: '%s'", val);
+				err = -EINVAL;
+				goto cleanup;
+			}
 		}
 	}
 
 	spec->arch_mask = arch_mask ?: -1;
+	spec->run_mode = run_mode ?: (JIT | INTERP);
 
 	if (spec->mode_mask == 0)
 		spec->mode_mask = PRIV;
@@ -930,6 +950,7 @@ void run_subtest(struct test_loader *tester,
 	struct test_subspec *subspec = unpriv ? &spec->unpriv : &spec->priv;
 	struct bpf_program *tprog = NULL, *tprog_iter;
 	struct bpf_link *link, *links[32] = {};
+	bool jit_enabled = is_jit_enabled();
 	struct test_spec *spec_iter;
 	struct cap_state caps = {};
 	struct bpf_object *tobj;
@@ -946,6 +967,12 @@ void run_subtest(struct test_loader *tester,
 		return;
 	}
 
+	if ((jit_enabled && spec->run_mode & INTERP) ||
+	    (!jit_enabled && spec->run_mode & JIT)) {
+		test__skip();
+		return;
+	}
+
 	if (unpriv) {
 		if (!can_execute_unpriv(tester, spec)) {
 			test__skip();
Add test cases to ensure the maximum stack size can be properly limited to 512.
Test result:
echo "0" > /proc/sys/net/core/bpf_jit_enable
./test_progs -t verifier_stack_ptr
verifier_stack_ptr/PTR_TO_STACK stack size 512 with may_goto with jit:SKIP
verifier_stack_ptr/PTR_TO_STACK stack size 512 with may_goto without jit:OK
echo "1" > /proc/sys/net/core/bpf_jit_enable
verifier_stack_ptr/PTR_TO_STACK stack size 512 with may_goto with jit:OK
verifier_stack_ptr/PTR_TO_STACK stack size 512 with may_goto without jit:SKIP
Signed-off-by: Jiayuan Chen <mrpre@163.com>
---
 .../selftests/bpf/progs/verifier_stack_ptr.c | 50 +++++++++++++++++++
 1 file changed, 50 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c b/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c
index 417c61cd4b19..8ffe5a01d140 100644
--- a/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c
+++ b/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c
@@ -481,4 +481,54 @@ l1_%=: r0 = 42;					\
 	: __clobber_all);
 }
 
+SEC("socket")
+__description("PTR_TO_STACK stack size > 512")
+__failure __msg("invalid write to stack R1 off=-520 size=8")
+__naked void stack_check_size_gt_512(void)
+{
+	asm volatile (
+	"r1 = r10;"
+	"r1 += -520;"
+	"r0 = 42;"
+	"*(u64*)(r1 + 0) = r0;"
+	"exit;"
+	::: __clobber_all);
+}
+
+#ifdef __BPF_FEATURE_MAY_GOTO
+SEC("socket")
+__description("PTR_TO_STACK stack size 512 with may_goto with jit")
+__use_jit()
+__success __retval(42)
+__naked void stack_check_size_512_with_may_goto_jit(void)
+{
+	asm volatile (
+	"r1 = r10;"
+	"r1 += -512;"
+	"r0 = 42;"
+	"*(u32*)(r1 + 0) = r0;"
+	"may_goto l0_%=;"
+	"r2 = 100;"
+"l0_%=:	exit;"
+	::: __clobber_all);
+}
+
+SEC("socket")
+__description("PTR_TO_STACK stack size 512 with may_goto without jit")
+__use_interp()
+__failure __msg("stack size 520(extra 8) is too large")
+__naked void stack_check_size_512_with_may_goto(void)
+{
+	asm volatile (
+	"r1 = r10;"
+	"r1 += -512;"
+	"r0 = 42;"
+	"*(u32*)(r1 + 0) = r0;"
+	"may_goto l0_%=;"
+	"r2 = 100;"
+"l0_%=:	exit;"
+	::: __clobber_all);
+}
+#endif
+
 char _license[] SEC("license") = "GPL";
On Thu, Feb 13, 2025 at 5:13 AM Jiayuan Chen mrpre@163.com wrote:
Add test cases to ensure the maximum stack size can be properly limited to 512.
Test result: echo "0" > /proc/sys/net/core/bpf_jit_enable ./test_progs -t verifier_stack_ptr verifier_stack_ptr/PTR_TO_STACK stack size 512 with may_goto with jit:SKIP verifier_stack_ptr/PTR_TO_STACK stack size 512 with may_goto without jit:OK
echo "1" > /proc/sys/net/core/bpf_jit_enable verifier_stack_ptr/PTR_TO_STACK stack size 512 with may_goto with jit:OK verifier_stack_ptr/PTR_TO_STACK stack size 512 with may_goto without jit:SKIP
echo '0|1' is no longer necessary? The commit log seems obsolete?
pw-bot: cr
Signed-off-by: Jiayuan Chen mrpre@163.com
.../selftests/bpf/progs/verifier_stack_ptr.c | 50 +++++++++++++++++++ 1 file changed, 50 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c b/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c index 417c61cd4b19..8ffe5a01d140 100644 --- a/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c +++ b/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c @@ -481,4 +481,54 @@ l1_%=: r0 = 42; \ : __clobber_all); }
+SEC("socket") +__description("PTR_TO_STACK stack size > 512") +__failure __msg("invalid write to stack R1 off=-520 size=8") +__naked void stack_check_size_gt_512(void) +{
asm volatile (
"r1 = r10;"
"r1 += -520;"
"r0 = 42;"
"*(u64*)(r1 + 0) = r0;"
"exit;"
::: __clobber_all);
+}
+#ifdef __BPF_FEATURE_MAY_GOTO +SEC("socket") +__description("PTR_TO_STACK stack size 512 with may_goto with jit") +__use_jit() +__success __retval(42) +__naked void stack_check_size_512_with_may_goto_jit(void) +{
asm volatile (
"r1 = r10;"
"r1 += -512;"
"r0 = 42;"
"*(u32*)(r1 + 0) = r0;"
"may_goto l0_%=;"
"r2 = 100;"
+"l0_%=: exit;"
::: __clobber_all);
+}
+SEC("socket") +__description("PTR_TO_STACK stack size 512 with may_goto without jit") +__use_interp() +__failure __msg("stack size 520(extra 8) is too large") +__naked void stack_check_size_512_with_may_goto(void) +{
asm volatile (
"r1 = r10;"
"r1 += -512;"
"r0 = 42;"
"*(u32*)(r1 + 0) = r0;"
"may_goto l0_%=;"
"r2 = 100;"
+"l0_%=: exit;"
::: __clobber_all);
+} +#endif
char _license[] SEC("license") = "GPL";
2.47.1
On Thu, Feb 13, 2025 at 08:04:05AM -0800, Alexei Starovoitov wrote:
On Thu, Feb 13, 2025 at 5:13 AM Jiayuan Chen mrpre@163.com wrote:
Add test cases to ensure the maximum stack size can be properly limited to 512.
Test result: echo "0" > /proc/sys/net/core/bpf_jit_enable ./test_progs -t verifier_stack_ptr verifier_stack_ptr/PTR_TO_STACK stack size 512 with may_goto with jit:SKIP verifier_stack_ptr/PTR_TO_STACK stack size 512 with may_goto without jit:OK
echo "1" > /proc/sys/net/core/bpf_jit_enable verifier_stack_ptr/PTR_TO_STACK stack size 512 with may_goto with jit:OK verifier_stack_ptr/PTR_TO_STACK stack size 512 with may_goto without jit:SKIP
echo '0|1' is no longer necessary? The commit log seems obsolete?
pw-bot: cr
It looks like the problem only arises when CONFIG_BPF_JIT_ALWAYS_ON is turned off, and we're only restricting the stack size when prog->jit_requested is false. To test this, I simulated different scenarios by echoing '0' or '1' to see how the program would behave when jit_requested is enabled or disabled.
As expected, when I echoed '0', the program failed verification, and when I echoed '1', it ran smoothly.
Thanks
Signed-off-by: Jiayuan Chen mrpre@163.com
.../selftests/bpf/progs/verifier_stack_ptr.c | 50 +++++++++++++++++++ 1 file changed, 50 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c b/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c index 417c61cd4b19..8ffe5a01d140 100644 --- a/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c +++ b/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c @@ -481,4 +481,54 @@ l1_%=: r0 = 42; \ : __clobber_all); }
+SEC("socket") +__description("PTR_TO_STACK stack size > 512") +__failure __msg("invalid write to stack R1 off=-520 size=8") +__naked void stack_check_size_gt_512(void) +{
asm volatile (
"r1 = r10;"
"r1 += -520;"
"r0 = 42;"
"*(u64*)(r1 + 0) = r0;"
"exit;"
::: __clobber_all);
+}
+#ifdef __BPF_FEATURE_MAY_GOTO +SEC("socket") +__description("PTR_TO_STACK stack size 512 with may_goto with jit") +__use_jit() +__success __retval(42) +__naked void stack_check_size_512_with_may_goto_jit(void) +{
asm volatile (
"r1 = r10;"
"r1 += -512;"
"r0 = 42;"
"*(u32*)(r1 + 0) = r0;"
"may_goto l0_%=;"
"r2 = 100;"
+"l0_%=: exit;"
::: __clobber_all);
+}
+SEC("socket") +__description("PTR_TO_STACK stack size 512 with may_goto without jit") +__use_interp() +__failure __msg("stack size 520(extra 8) is too large") +__naked void stack_check_size_512_with_may_goto(void) +{
asm volatile (
"r1 = r10;"
"r1 += -512;"
"r0 = 42;"
"*(u32*)(r1 + 0) = r0;"
"may_goto l0_%=;"
"r2 = 100;"
+"l0_%=: exit;"
::: __clobber_all);
+} +#endif
char _license[] SEC("license") = "GPL";
2.47.1
On Thu, Feb 13, 2025 at 8:42 AM Jiayuan Chen mrpre@163.com wrote:
On Thu, Feb 13, 2025 at 08:04:05AM -0800, Alexei Starovoitov wrote:
On Thu, Feb 13, 2025 at 5:13 AM Jiayuan Chen mrpre@163.com wrote:
Add test cases to ensure the maximum stack size can be properly limited to 512.
Test result: echo "0" > /proc/sys/net/core/bpf_jit_enable ./test_progs -t verifier_stack_ptr verifier_stack_ptr/PTR_TO_STACK stack size 512 with may_goto with jit:SKIP verifier_stack_ptr/PTR_TO_STACK stack size 512 with may_goto without jit:OK
echo "1" > /proc/sys/net/core/bpf_jit_enable verifier_stack_ptr/PTR_TO_STACK stack size 512 with may_goto with jit:OK verifier_stack_ptr/PTR_TO_STACK stack size 512 with may_goto without jit:SKIP
echo '0|1' is no longer necessary? The commit log seems obsolete?
pw-bot: cr
It looks like the problem only arises when CONFIG_BPF_JIT_ALWAYS_ON is turned off, and we're only restricting the stack size when prog->jit_requested is false. To test this, I simulated different scenarios by echoing '0' or '1' to see how the program would behave when jit_requested is enabled or disabled.
As expected, when I echoed '0', the program failed verification, and when I echoed '1', it ran smoothly.
I misunderstood the tags in patch 2. I thought:
+#define __use_jit() __attribute__((btf_decl_tag("comment:run_mode=jit")))
+#define __use_interp() __attribute__((btf_decl_tag("comment:run_mode=interpreter")))
"use jit" actually means use jit.
while what it's doing is different:
+	if ((jit_enabled && spec->run_mode & INTERP) ||
+	    (!jit_enabled && spec->run_mode & JIT)) {
+		test__skip();
+		return;
+	}
+
The tags should probably be named __load_if_JITed and __load_if_interpreted or something like that.