1. Patch 1 is a dependency patch that fixes a zext extension error on 32-bit ARM.
2. Patches 2 and 3 solve the problem that the BPF verifier check fails because the load's mem size is modified by CO-RE, on the kernel side and the libbpf side respectively. Currently there are different opinions and a final solution still needs to be selected.
3. Patch 4 adds bpf kfunc call support for 32-bit ARM with EABI.
4. Patch 5 adds test cases to cover some of the parameter-passing scenarios stated by the AAPCS.
The following is the test_progs result in the 32-bit ARM environment:
# uname -m
armv7l
# echo 1 > /proc/sys/net/core/bpf_jit_enable
# ./test_progs -t kfunc_call
#1/1 kfunc_call/kfunc_syscall_test_fail:OK
#1/2 kfunc_call/kfunc_syscall_test_null_fail:OK
#1/3 kfunc_call/kfunc_call_test_get_mem_fail_rdonly:OK
#1/4 kfunc_call/kfunc_call_test_get_mem_fail_use_after_free:OK
#1/5 kfunc_call/kfunc_call_test_get_mem_fail_oob:OK
#1/6 kfunc_call/kfunc_call_test_get_mem_fail_not_const:OK
#1/7 kfunc_call/kfunc_call_test_mem_acquire_fail:OK
#1/8 kfunc_call/kfunc_call_test1:OK
#1/9 kfunc_call/kfunc_call_test2:OK
#1/10 kfunc_call/kfunc_call_test4:OK
#1/11 kfunc_call/kfunc_call_test_ref_btf_id:OK
#1/12 kfunc_call/kfunc_call_test_get_mem:OK
#1/13 kfunc_call/kfunc_syscall_test:OK
#1/14 kfunc_call/kfunc_syscall_test_null:OK
#1/17 kfunc_call/destructive:OK
Yang Jihong (5):
  bpf: Adapt 32-bit return value kfunc for 32-bit ARM when zext extension
  bpf: Adjust sk size check for sk in bpf_skb_is_valid_access for CO_RE in 32-bit arch
  libbpf: Skip adjust mem size for load pointer in 32-bit arch in CO_RE
  bpf: Add kernel function call support in 32-bit ARM for EABI
  bpf:selftests: Add kfunc_call test for mixing 32-bit and 64-bit parameters
 arch/arm/net/bpf_jit_32.c                     | 142 ++++++++++++++++++
 kernel/bpf/verifier.c                         |   3 +
 net/bpf/test_run.c                            |  18 +++
 net/core/filter.c                             |   8 +-
 tools/lib/bpf/libbpf.c                        |  34 ++++-
 .../selftests/bpf/prog_tests/kfunc_call.c     |   3 +
 .../selftests/bpf/progs/kfunc_call_test.c     |  52 +++++++
 7 files changed, 254 insertions(+), 6 deletions(-)
For the ARM32 architecture, if the data width of a kfunc's return value is 32 bits, an explicit zero extension of the high 32 bits is needed, so insn_def_regno() should return dst_reg for BPF_JMP instructions of type BPF_PSEUDO_KFUNC_CALL. Otherwise, opt_subreg_zext_lo32_rnd_hi32() returns -EFAULT and the BPF program fails to load.
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
---
 kernel/bpf/verifier.c | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 7f0a9f6cb889..bac37757ffca 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2404,6 +2404,9 @@ static int insn_def_regno(const struct bpf_insn *insn)
 {
	switch (BPF_CLASS(insn->code)) {
	case BPF_JMP:
+		if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL)
+			return insn->dst_reg;
+		fallthrough;
	case BPF_JMP32:
	case BPF_ST:
		return -1;
On 11/7/22 1:20 AM, Yang Jihong wrote:
For the ARM32 architecture, if the data width of a kfunc's return value is 32 bits, an explicit zero extension of the high 32 bits is needed, so insn_def_regno() should return dst_reg for BPF_JMP instructions of type BPF_PSEUDO_KFUNC_CALL. Otherwise, opt_subreg_zext_lo32_rnd_hi32() returns -EFAULT and the BPF program fails to load.
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
 kernel/bpf/verifier.c | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 7f0a9f6cb889..bac37757ffca 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2404,6 +2404,9 @@ static int insn_def_regno(const struct bpf_insn *insn)
 {
	switch (BPF_CLASS(insn->code)) {
	case BPF_JMP:
+		if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL)
+			return insn->dst_reg;
This does not look right. A kfunc can return void. The btf type of the kfunc's return value needs to be checked against "void" first? Also, this will affect insn_has_def32(), does is_reg64 (called from insn_has_def32) need to be adjusted also?
For patch 2, as replied earlier in v1, I would separate out the prog that does __sk_buff->sk and use the uapi's bpf.h instead of vmlinux.h since it does not need CO-RE.
This set should target for bpf-next instead of bpf.
+		fallthrough;
	case BPF_JMP32:
	case BPF_ST:
		return -1;
Hello,
On 2022/11/9 7:12, Martin KaFai Lau wrote:
On 11/7/22 1:20 AM, Yang Jihong wrote:
For the ARM32 architecture, if the data width of a kfunc's return value is 32 bits, an explicit zero extension of the high 32 bits is needed, so insn_def_regno() should return dst_reg for BPF_JMP instructions of type BPF_PSEUDO_KFUNC_CALL. Otherwise, opt_subreg_zext_lo32_rnd_hi32() returns -EFAULT and the BPF program fails to load.
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
 kernel/bpf/verifier.c | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 7f0a9f6cb889..bac37757ffca 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2404,6 +2404,9 @@ static int insn_def_regno(const struct bpf_insn *insn)
 {
	switch (BPF_CLASS(insn->code)) {
	case BPF_JMP:
+		if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL)
+			return insn->dst_reg;
This does not look right. A kfunc can return void. The btf type of the kfunc's return value needs to be checked against "void" first?
OK, will add the check in next version.
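Something along these lines, perhaps (sketch only; a kfunc can also live in module BTF, selected via insn->off, which this ignores):

/* A void-returning kfunc defines no register, so insn_def_regno()
 * must keep returning -1 for it. insn->imm is the BTF id of the
 * kfunc (BTF_KIND_FUNC); its type points at the FUNC_PROTO, whose
 * type is the return type, with id 0 meaning "void".
 */
static bool kfunc_ret_is_void(const struct btf *btf,
			      const struct bpf_insn *insn)
{
	const struct btf_type *func = btf_type_by_id(btf, insn->imm);
	const struct btf_type *proto = btf_type_by_id(btf, func->type);

	return proto->type == 0;
}

and in insn_def_regno():

	case BPF_JMP:
		if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL &&
		    !kfunc_ret_is_void(bpf_get_btf_vmlinux(), insn))
			return insn->dst_reg;
		fallthrough;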
Also, this will affect insn_has_def32(), does is_reg64 (called from insn_has_def32) need to be adjusted also?
Yes, is_reg64 needs to be adjusted as well, will fix in next version.
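Roughly like the following, maybe (sketch only; the condition is illustrative and kfunc_ret_size() is a hypothetical helper, not existing code):

	/* In is_reg64(): a kfunc call whose BTF return type is narrower
	 * than 8 bytes should report its destination register (r0) as a
	 * 32-bit def, so the zext patching pass inserts the explicit
	 * high-32 zeroing.
	 */
	if (class == BPF_JMP && insn->src_reg == BPF_PSEUDO_KFUNC_CALL &&
	    regno == BPF_REG_0 && t == DST_OP)
		return kfunc_ret_size(env, insn) == sizeof(u64);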
For patch 2, as replied earlier in v1, I would separate out the prog that does __sk_buff->sk and use the uapi's bpf.h instead of vmlinux.h since it does not need CO-RE.
OK, will remove the adjust-sk-check patches in the next version.
As mentioned in v1: "bpf-tc program can take 'struct sk_buff *skb' instead of 'struct __sk_buff *skb' but it will be a separate topic."
It is a separate topic; only the lskel test cases are affected. The ARM32 kfunc support is not affected.
This set should target for bpf-next instead of bpf.
OK, will send to bpf-next in next version.
Thanks, Yang
The error code -EACCES is returned when the bpf prog is tested on a 32-bit arch. This is because bpf_object__relocate() modifies the instruction to change the memory size to 4 bytes, as shown in the following messages:
libbpf: prog 'kfunc_call_test1': relo #2: matching candidate #0 <byte_off> [18342] struct __sk_buff.sk (0:30:0 @ offset 168)
libbpf: prog 'kfunc_call_test1': relo #2: patched insn #1 (LDX/ST/STX) off 168 -> 168
libbpf: prog 'kfunc_call_test1': relo #2: patched insn #1 (LDX/ST/STX) mem_sz 8 -> 4
As a result, the bpf_skb_is_valid_access() check fails; adjust the sk size check for 32-bit arches.
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
---
 net/core/filter.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/net/core/filter.c b/net/core/filter.c
index bb0136e7a8e4..47cbad2e609f 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -8269,7 +8269,13 @@ static bool bpf_skb_is_valid_access(int off, int size, enum bpf_access_type type
			return false;
		break;
	case offsetof(struct __sk_buff, sk):
-		if (type == BPF_WRITE || size != sizeof(__u64))
+		/* CO-RE adjusts pointer accesses from 8-byte reads to
+		 * 4-byte reads on a 32-bit host arch, so 32-bit can only
+		 * read the 32-bit pointer or the full 64-bit value,
+		 * and 64-bit can read the 64-bit pointer as before.
+		 */
+		if (type == BPF_WRITE ||
+		    (size != sizeof(struct bpf_sock *) && size != sizeof(__u64)))
			return false;
		info->reg_type = PTR_TO_SOCK_COMMON_OR_NULL;
		break;
bpf_core_patch_insn() modifies the load's mem size from 8 bytes to 4 bytes. As a result, the bpf verifier check fails, so we need to skip adjusting the mem size to satisfy the verifier.
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
---
 tools/lib/bpf/libbpf.c | 34 +++++++++++++++++++++++++++++-----
 1 file changed, 29 insertions(+), 5 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 184ce1684dcd..e1c21b631a0b 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -5634,6 +5634,28 @@ static int bpf_core_resolve_relo(struct bpf_program *prog,
				       targ_res);
 }

+static bool
+bpf_core_patch_insn_skip(const struct btf *local_btf, const struct bpf_insn *insn,
+			 const struct bpf_core_relo_res *res)
+{
+	__u8 class;
+	const struct btf_type *orig_t;
+
+	class = BPF_CLASS(insn->code);
+	orig_t = btf_type_by_id(local_btf, res->orig_type_id);
+
+	/*
+	 * The verifier has to see a load of a pointer as an 8-byte load;
+	 * CO-RE should not screw up the access. bpf_core_patch_insn()
+	 * modifies the load's mem size from 8 bytes to 4 bytes on 32-bit
+	 * arches, so skip adjusting the mem size here.
+	 */
+	if (class == BPF_LDX && btf_is_ptr(orig_t))
+		return true;
+
+	return false;
+}
+
 static int bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
 {
@@ -5730,11 +5752,13 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
			goto out;
		}

-		err = bpf_core_patch_insn(prog->name, insn, insn_idx, rec, i, &targ_res);
-		if (err) {
-			pr_warn("prog '%s': relo #%d: failed to patch insn #%u: %d\n",
-				prog->name, i, insn_idx, err);
-			goto out;
+		if (!bpf_core_patch_insn_skip(obj->btf, insn, &targ_res)) {
+			err = bpf_core_patch_insn(prog->name, insn, insn_idx, rec, i, &targ_res);
+			if (err) {
+				pr_warn("prog '%s': relo #%d: failed to patch insn #%u: %d\n",
+					prog->name, i, insn_idx, err);
+				goto out;
+			}
		}
	}
 }
On Mon, Nov 7, 2022 at 1:23 AM Yang Jihong <yangjihong1@huawei.com> wrote:
bpf_core_patch_insn() modifies the load's mem size from 8 bytes to 4 bytes. As a result, the bpf verifier check fails, so we need to skip adjusting the mem size to satisfy the verifier.
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
 tools/lib/bpf/libbpf.c | 34 +++++++++++++++++++++++++++++-----
 1 file changed, 29 insertions(+), 5 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 184ce1684dcd..e1c21b631a0b 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -5634,6 +5634,28 @@ static int bpf_core_resolve_relo(struct bpf_program *prog,
				       targ_res);
 }

+static bool
+bpf_core_patch_insn_skip(const struct btf *local_btf, const struct bpf_insn *insn,
+			 const struct bpf_core_relo_res *res)
+{
+	__u8 class;
+	const struct btf_type *orig_t;
+
+	class = BPF_CLASS(insn->code);
+	orig_t = btf_type_by_id(local_btf, res->orig_type_id);
+
+	/*
+	 * The verifier has to see a load of a pointer as an 8-byte load;
+	 * CO-RE should not screw up the access. bpf_core_patch_insn()
+	 * modifies the load's mem size from 8 bytes to 4 bytes on 32-bit
+	 * arches, so skip adjusting the mem size here.
+	 */
Nope, this is only for BPF UAPI context types like __sk_buff (right now). fentry/fexit/raw_tp_btf programs traversing kernel types and following pointers actually need this to work correctly. Don't do this.
+	if (class == BPF_LDX && btf_is_ptr(orig_t))
+		return true;
+
+	return false;
+}
+
 static int bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
 {
@@ -5730,11 +5752,13 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
			goto out;
		}

-		err = bpf_core_patch_insn(prog->name, insn, insn_idx, rec, i, &targ_res);
-		if (err) {
-			pr_warn("prog '%s': relo #%d: failed to patch insn #%u: %d\n",
-				prog->name, i, insn_idx, err);
-			goto out;
+		if (!bpf_core_patch_insn_skip(obj->btf, insn, &targ_res)) {
+			err = bpf_core_patch_insn(prog->name, insn, insn_idx, rec, i, &targ_res);
+			if (err) {
+				pr_warn("prog '%s': relo #%d: failed to patch insn #%u: %d\n",
+					prog->name, i, insn_idx, err);
+				goto out;
+			}
		}
	}
 }
--
2.30.GIT
Hello,
On 2022/11/8 9:22, Andrii Nakryiko wrote:
On Mon, Nov 7, 2022 at 1:23 AM Yang Jihong <yangjihong1@huawei.com> wrote:
bpf_core_patch_insn() modifies the load's mem size from 8 bytes to 4 bytes. As a result, the bpf verifier check fails, so we need to skip adjusting the mem size to satisfy the verifier.
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
 tools/lib/bpf/libbpf.c | 34 +++++++++++++++++++++++++++++-----
 1 file changed, 29 insertions(+), 5 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 184ce1684dcd..e1c21b631a0b 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -5634,6 +5634,28 @@ static int bpf_core_resolve_relo(struct bpf_program *prog,
				       targ_res);
 }

+static bool
+bpf_core_patch_insn_skip(const struct btf *local_btf, const struct bpf_insn *insn,
+			 const struct bpf_core_relo_res *res)
+{
+	__u8 class;
+	const struct btf_type *orig_t;
+
+	class = BPF_CLASS(insn->code);
+	orig_t = btf_type_by_id(local_btf, res->orig_type_id);
+
+	/*
+	 * The verifier has to see a load of a pointer as an 8-byte load;
+	 * CO-RE should not screw up the access. bpf_core_patch_insn()
+	 * modifies the load's mem size from 8 bytes to 4 bytes on 32-bit
+	 * arches, so skip adjusting the mem size here.
+	 */
Nope, this is only for BPF UAPI context types like __sk_buff (right now). fentry/fexit/raw_tp_btf programs traversing kernel types and following pointers actually need this to work correctly. Don't do this.
Distinguishing a BPF UAPI context type from a kernel type requires some work. Given the current situation, the solution in patch 2 is relatively simple.
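For example, something like the following (illustrative sketch only, not an existing libbpf API; the root type may also need typedef and modifier skipping, and the list would have to be kept in sync with the kernel's UAPI ctx types):

static bool relo_root_is_uapi_ctx(const struct btf *local_btf,
				  const struct bpf_core_relo *relo)
{
	const struct btf_type *t = btf__type_by_id(local_btf, relo->type_id);
	const char *name = btf__name_by_offset(local_btf, t->name_off);

	/* incomplete list, just to show the idea */
	return name && (strcmp(name, "__sk_buff") == 0 ||
			strcmp(name, "bpf_sock") == 0);
}

Maintaining such a list is exactly the kind of work mentioned above, which is why the patch 2 approach looks simpler for now.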
Thanks, Yang
This patch adds kernel function (kfunc) call support to the 32-bit ARM bpf jit for EABI.
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
---
 arch/arm/net/bpf_jit_32.c | 142 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 142 insertions(+)
diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 6a1c9fca5260..9c0e1c22dc37 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -1337,6 +1337,130 @@ static void build_epilogue(struct jit_ctx *ctx)
 #endif
 }

+/*
+ * Input parameters of a function in the 32-bit ARM architecture:
+ * The first four word-sized parameters passed to a function are
+ * transferred in registers R0-R3. Sub-word sized arguments, for example
+ * char, still use a whole register.
+ * Arguments larger than a word are passed in multiple registers.
+ * If more arguments are passed, the fifth and subsequent words are
+ * passed on the stack.
+ *
+ * The first four args of a function are considered for putting into
+ * the 32-bit registers R0, R1, R2 and R3.
+ *
+ * Two 32-bit registers are used to pass a 64-bit arg.
+ *
+ * For example,
+ * void foo(u32 a, u32 b, u32 c, u32 d, u32 e):
+ *	u32 a: R0
+ *	u32 b: R1
+ *	u32 c: R2
+ *	u32 d: R3
+ *	u32 e: stack
+ *
+ * void foo(u64 a, u32 b, u32 c, u32 d):
+ *	u64 a: R0 (lo32) R1 (hi32)
+ *	u32 b: R2
+ *	u32 c: R3
+ *	u32 d: stack
+ *
+ * void foo(u32 a, u64 b, u32 c, u32 d):
+ *	u32 a: R0
+ *	u64 b: R2 (lo32) R3 (hi32)
+ *	u32 c: stack
+ *	u32 d: stack
+ *
+ * void foo(u32 a, u32 b, u64 c, u32 d):
+ *	u32 a: R0
+ *	u32 b: R1
+ *	u64 c: R2 (lo32) R3 (hi32)
+ *	u32 d: stack
+ *
+ * void foo(u64 a, u64 b):
+ *	u64 a: R0 (lo32) R1 (hi32)
+ *	u64 b: R2 (lo32) R3 (hi32)
+ *
+ * The return value is stored in R0 (and R1 for a 64-bit value).
+ *
+ * For example,
+ * u32 foo(u32 a, u32 b, u32 c):
+ *	return value: R0
+ *
+ * u64 foo(u32 a, u32 b, u32 c):
+ *	return value: R0 (lo32) R1 (hi32)
+ *
+ * The above is for AEABI only; OABI does not support this function.
+ */
+static int emit_kfunc_call(const struct bpf_insn *insn, struct jit_ctx *ctx, const u32 func)
+{
+	int i;
+	const struct btf_func_model *fm;
+	const s8 *tmp = bpf2a32[TMP_REG_1];
+	const u8 arg_regs[] = { ARM_R0, ARM_R1, ARM_R2, ARM_R3 };
+	int nr_arg_regs = ARRAY_SIZE(arg_regs);
+	int arg_regs_idx = 0, stack_off = 0;
+	const s8 *rd;
+	s8 rt;
+
+	if (!IS_ENABLED(CONFIG_AEABI)) {
+		pr_info_once("kfunc call only supported for AEABI on 32-bit arm\n");
+		return -EINVAL;
+	}
+
+	fm = bpf_jit_find_kfunc_model(ctx->prog, insn);
+	if (!fm)
+		return -EINVAL;
+
+	for (i = 0; i < fm->nr_args; i++) {
+		if (fm->arg_size[i] > sizeof(u32)) {
+			rd = arm_bpf_get_reg64(bpf2a32[BPF_REG_1 + i], tmp, ctx);
+
+			if (arg_regs_idx + 1 < nr_arg_regs) {
+				/*
+				 * AAPCS states:
+				 * A double-word sized type is passed in two
+				 * consecutive registers (e.g., r0 and r1, or
+				 * r2 and r3). The content of the registers is
+				 * as if the value had been loaded from memory
+				 * representation with a single LDM instruction.
+				 */
+				if (arg_regs_idx & 1)
+					arg_regs_idx++;
+
+				emit(ARM_MOV_R(arg_regs[arg_regs_idx++], rd[1]), ctx);
+				emit(ARM_MOV_R(arg_regs[arg_regs_idx++], rd[0]), ctx);
+			} else {
+				stack_off = ALIGN(stack_off, STACK_ALIGNMENT);
+
+				if (__LINUX_ARM_ARCH__ >= 6 ||
+				    ctx->cpu_architecture >= CPU_ARCH_ARMv5TE) {
+					emit(ARM_STRD_I(rd[1], ARM_SP, stack_off), ctx);
+				} else {
+					emit(ARM_STR_I(rd[1], ARM_SP, stack_off), ctx);
+					emit(ARM_STR_I(rd[0], ARM_SP, stack_off + 4), ctx);
+				}
+
+				stack_off += 8;
+			}
+		} else {
+			rt = arm_bpf_get_reg32(bpf2a32[BPF_REG_1 + i][1], tmp[1], ctx);
+
+			if (arg_regs_idx < nr_arg_regs) {
+				emit(ARM_MOV_R(arg_regs[arg_regs_idx++], rt), ctx);
+			} else {
+				emit(ARM_STR_I(rt, ARM_SP, stack_off), ctx);
+				stack_off += 4;
+			}
+		}
+	}
+
+	emit_a32_mov_i(tmp[1], func, ctx);
+	emit_blx_r(tmp[1], ctx);
+
+	return 0;
+}
+
 /*
  * Convert an eBPF instruction to native instruction, i.e
  * JITs an eBPF instruction.
@@ -1603,6 +1727,10 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
	case BPF_LDX | BPF_MEM | BPF_W:
	case BPF_LDX | BPF_MEM | BPF_H:
	case BPF_LDX | BPF_MEM | BPF_B:
	case BPF_LDX | BPF_MEM | BPF_DW:
+	case BPF_LDX | BPF_PROBE_MEM | BPF_W:
+	case BPF_LDX | BPF_PROBE_MEM | BPF_H:
+	case BPF_LDX | BPF_PROBE_MEM | BPF_B:
+	case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
		rn = arm_bpf_get_reg32(src_lo, tmp2[1], ctx);
		emit_ldx_r(dst, rn, off, ctx, BPF_SIZE(code));
		break;
@@ -1785,6 +1913,16 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
		const s8 *r5 = bpf2a32[BPF_REG_5];
		const u32 func = (u32)__bpf_call_base + (u32)imm;

+		if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
+			int err;
+
+			err = emit_kfunc_call(insn, ctx, func);
+
+			if (err)
+				return err;
+			break;
+		}
+
		emit_a32_mov_r64(true, r0, r1, ctx);
		emit_a32_mov_r64(true, r1, r2, ctx);
		emit_push_r64(r5, ctx);
@@ -2022,3 +2160,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
	return prog;
 }
+
+bool bpf_jit_supports_kfunc_call(void)
+{
+	return true;
+}
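To make the layout rules in the comment concrete, this is how emit_kfunc_call() lays out the arguments of one of the kfuncs added later in this series (my annotation per the AAPCS rules above, not part of the patch):

u64 bpf_kfunc_call_test4(struct sock *sk, u64 a, u64 b, u32 c, u32 d)

	sk (32-bit pointer): R0
	u64 a:               R2 (lo32) R3 (hi32); R1 is skipped because
	                     a register pair must start at an even register
	u64 b:               stack, 8-byte aligned
	u32 c:               stack
	u32 d:               stack
	return value:        R0 (lo32) R1 (hi32)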
On Mon, Nov 07, 2022 at 05:20:31PM +0800, Yang Jihong wrote:
+bool bpf_jit_supports_kfunc_call(void)
+{
+	return true;
It would be far cleaner to make this:
return IS_ENABLED(CONFIG_AEABI);
So userspace knows that it isn't supported on OABI.
Hello,
On 2022/11/7 20:33, Russell King (Oracle) wrote:
On Mon, Nov 07, 2022 at 05:20:31PM +0800, Yang Jihong wrote:
+bool bpf_jit_supports_kfunc_call(void)
+{
+	return true;
It would be far cleaner to make this:
return IS_ENABLED(CONFIG_AEABI);
So userspace knows that it isn't supported on OABI.
Thanks for the suggestion, will change.
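That is, the hook becomes:

bool bpf_jit_supports_kfunc_call(void)
{
	return IS_ENABLED(CONFIG_AEABI);
}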
Thanks, Yang
32-bit ARM has only four registers for passing function parameters; add test cases to cover the additional scenarios.
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
---
 net/bpf/test_run.c                            | 18 +++++++
 .../selftests/bpf/prog_tests/kfunc_call.c     |  3 ++
 .../selftests/bpf/progs/kfunc_call_test.c     | 52 +++++++++++++++++++
 3 files changed, 73 insertions(+)
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 13d578ce2a09..e7eb5bd4cf0e 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -551,6 +551,21 @@ struct sock * noinline bpf_kfunc_call_test3(struct sock *sk)
	return sk;
 }

+u64 noinline bpf_kfunc_call_test4(struct sock *sk, u64 a, u64 b, u32 c, u32 d)
+{
+	return a + b + c + d;
+}
+
+u64 noinline bpf_kfunc_call_test5(u64 a, u64 b)
+{
+	return a + b;
+}
+
+u64 noinline bpf_kfunc_call_test6(u32 a, u32 b, u32 c, u32 d, u32 e)
+{
+	return a + b + c + d + e;
+}
+
 struct prog_test_member1 {
	int a;
 };
@@ -739,6 +754,9 @@ BTF_SET8_START(test_sk_check_kfunc_ids)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test1)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test2)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test3)
+BTF_ID_FLAGS(func, bpf_kfunc_call_test4)
+BTF_ID_FLAGS(func, bpf_kfunc_call_test5)
+BTF_ID_FLAGS(func, bpf_kfunc_call_test6)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_acquire, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_kfunc_call_memb_acquire, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_release, KF_RELEASE)
diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
index 5af1ee8f0e6e..6a6822e99071 100644
--- a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
+++ b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
@@ -72,6 +72,9 @@ static struct kfunc_test_params kfunc_tests[] = {
	/* success cases */
	TC_TEST(kfunc_call_test1, 12),
	TC_TEST(kfunc_call_test2, 3),
+	TC_TEST(kfunc_call_test4, 16),
+	TC_TEST(kfunc_call_test5, 7),
+	TC_TEST(kfunc_call_test6, 15),
	TC_TEST(kfunc_call_test_ref_btf_id, 0),
	TC_TEST(kfunc_call_test_get_mem, 42),
	SYSCALL_TEST(kfunc_syscall_test, 0),
diff --git a/tools/testing/selftests/bpf/progs/kfunc_call_test.c b/tools/testing/selftests/bpf/progs/kfunc_call_test.c
index f636e50be259..0385ce2d4c6e 100644
--- a/tools/testing/selftests/bpf/progs/kfunc_call_test.c
+++ b/tools/testing/selftests/bpf/progs/kfunc_call_test.c
@@ -6,6 +6,11 @@
 extern int bpf_kfunc_call_test2(struct sock *sk, __u32 a, __u32 b) __ksym;
 extern __u64 bpf_kfunc_call_test1(struct sock *sk, __u32 a, __u64 b,
				  __u32 c, __u64 d) __ksym;
+extern __u64 bpf_kfunc_call_test4(struct sock *sk, __u64 a, __u64 b,
+				  __u32 c, __u32 d) __ksym;
+extern __u64 bpf_kfunc_call_test5(__u64 a, __u64 b) __ksym;
+extern __u64 bpf_kfunc_call_test6(__u32 a, __u32 b, __u32 c, __u32 d,
+				  __u32 e) __ksym;

 extern struct prog_test_ref_kfunc *bpf_kfunc_call_test_acquire(unsigned long *sp) __ksym;
 extern void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) __ksym;
@@ -17,6 +22,53 @@ extern void bpf_kfunc_call_test_mem_len_fail2(__u64 *mem, int len) __ksym;
 extern int *bpf_kfunc_call_test_get_rdwr_mem(struct prog_test_ref_kfunc *p, const int rdwr_buf_size) __ksym;
 extern int *bpf_kfunc_call_test_get_rdonly_mem(struct prog_test_ref_kfunc *p, const int rdonly_buf_size) __ksym;

+SEC("tc")
+int kfunc_call_test6(struct __sk_buff *skb)
+{
+	__u64 a = 1ULL << 32;
+	__u32 ret;
+
+	a = bpf_kfunc_call_test6(1, 2, 3, 4, 5);
+	ret = a >> 32;   /* ret should be 0 */
+	ret += (__u32)a; /* ret should be 15 */
+
+	return ret;
+}
+
+SEC("tc")
+int kfunc_call_test5(struct __sk_buff *skb)
+{
+	__u64 a = 1ULL << 32;
+	__u32 ret;
+
+	a = bpf_kfunc_call_test5(a | 2, a | 3);
+	ret = a >> 32;   /* ret should be 2 */
+	ret += (__u32)a; /* ret should be 7 */
+
+	return ret;
+}
+
+SEC("tc")
+int kfunc_call_test4(struct __sk_buff *skb)
+{
+	struct bpf_sock *sk = skb->sk;
+	__u64 a = 1ULL << 32;
+	__u32 ret;
+
+	if (!sk)
+		return -1;
+
+	sk = bpf_sk_fullsock(sk);
+	if (!sk)
+		return -1;
+
+	a = bpf_kfunc_call_test4((struct sock *)sk, a | 2, a | 3, 4, 5);
+	ret = a >> 32;   /* ret should be 2 */
+	ret += (__u32)a; /* ret should be 16 */
+
+	return ret;
+}
+
 SEC("tc")
 int kfunc_call_test2(struct __sk_buff *skb)
 {