For now, a BPF program of type BPF_PROG_TYPE_TRACING is not allowed to be attached to multiple hooks, and we have to create a separate BPF program for each kernel function that we want to trace, even though all the programs have the same (or similar) logic. This can consume extra memory and make program loading slow if we have many kernel functions to trace.
In this series, we add support for attaching a tracing BPF program to multiple hooks, which is similar to BPF_TRACE_KPROBE_MULTI.
In the 1st patch, we add support to record the indexes of the accessed function args of the target for a tracing program. Meanwhile, we add the function btf_check_func_part_match() to compare the accessed function args of two function prototypes. This function will be used in the next commit.
In the 2nd patch, we refactor the struct modules_array to ptr_array, as we need a similar facility to hold the target btfs, target programs and kernel modules that we reference in the following commits.

In the 3rd patch, we introduce the struct bpf_tramp_link_conn to be the bridge between bpf_link and trampoline, as the relation between bpf_link and trampoline is not one-to-one anymore.
In the 4th patch, we add the struct bpf_tramp_multi_link and bpf_trampoline_multi_{link,unlink}_prog for the trampoline multi-link.

In the 5th patch, we add a target btf argument to bpf_check_attach_target(), so the caller can specify the btf to check against.

The 6th patch is the main part, adding multi-link support for tracing. For now, only the following attach types are supported:
BPF_TRACE_FENTRY_MULTI
BPF_TRACE_FEXIT_MULTI
BPF_MODIFY_RETURN_MULTI
The attach type BPF_TRACE_RAW_TP has a different link type, so we skip that part in this series for now.

In the 7th and 8th patches, we add multi-link tracing support to libbpf. Note that we now don't free the btfs that we loaded after the bpf programs are loaded into the kernel if any program of the tracing multi-link type exists, as we need to look up the btf types during attaching.
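For illustration, attaching from user space could then look roughly like the sketch below. The option struct and function names (bpf_trace_multi_opts, bpf_program__attach_trace_multi_opts) are assumptions about the API added by these patches, not confirmed names:

#include <bpf/libbpf.h>

/* sketch: attach one tracing-multi program to several kernel functions */
static int attach_multi(struct bpf_program *prog)
{
        const char *syms[] = { "ip_rcv", "icmp_rcv", "consume_skb" };
        /* hypothetical opts struct introduced by the libbpf patches */
        LIBBPF_OPTS(bpf_trace_multi_opts, opts,
                .syms = syms,
                .cnt = 3,
        );
        struct bpf_link *link;

        /* hypothetical attach API introduced by the libbpf patches */
        link = bpf_program__attach_trace_multi_opts(prog, &opts);
        return libbpf_get_error(link);
}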
In the 9th patch, we add the testcases for this series.
Changes since v1:
- According to the advice of Alexei, introduce multi-link for tracing instead of attaching a tracing program to multiple trampolines by creating multiple instances of bpf_link.
Menglong Dong (9):
  bpf: tracing: add support to record and check the accessed args
  bpf: refactor the modules_array to ptr_array
  bpf: trampoline: introduce struct bpf_tramp_link_conn
  bpf: trampoline: introduce bpf_tramp_multi_link
  bpf: verifier: add btf to the function args of bpf_check_attach_target
  bpf: tracing: add multi-link support
  libbpf: don't free btf if program of multi-link tracing existing
  libbpf: add support for the multi-link of tracing
  selftests/bpf: add testcases for multi-link of tracing
 arch/arm64/net/bpf_jit_comp.c                 |   4 +-
 arch/riscv/net/bpf_jit_comp64.c               |   4 +-
 arch/s390/net/bpf_jit_comp.c                  |   4 +-
 arch/x86/net/bpf_jit_comp.c                   |   4 +-
 include/linux/bpf.h                           |  51 ++-
 include/linux/bpf_verifier.h                  |   1 +
 include/uapi/linux/bpf.h                      |  10 +
 kernel/bpf/bpf_struct_ops.c                   |   2 +-
 kernel/bpf/btf.c                              | 113 ++++-
 kernel/bpf/syscall.c                          | 425 +++++++++++++++++-
 kernel/bpf/trampoline.c                       |  97 +++-
 kernel/bpf/verifier.c                         |  24 +-
 kernel/trace/bpf_trace.c                      |  48 +-
 net/bpf/test_run.c                            |   3 +
 net/core/bpf_sk_storage.c                     |   2 +
 tools/bpf/bpftool/common.c                    |   3 +
 tools/include/uapi/linux/bpf.h                |  10 +
 tools/lib/bpf/bpf.c                           |  10 +
 tools/lib/bpf/bpf.h                           |   6 +
 tools/lib/bpf/libbpf.c                        | 215 ++++++++-
 tools/lib/bpf/libbpf.h                        |  16 +
 tools/lib/bpf/libbpf.map                      |   2 +
 .../selftests/bpf/bpf_testmod/bpf_testmod.c   |  49 ++
 .../bpf/prog_tests/tracing_multi_link.c       | 153 +++++++
 .../selftests/bpf/progs/tracing_multi_test.c  | 209 +++++++++
 25 files changed, 1366 insertions(+), 99 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/tracing_multi_link.c
 create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_test.c
In this commit, we add the 'accessed_args' field to struct bpf_prog_aux, which is used to record the indexes of the function args accessed in btf_ctx_access().

Meanwhile, we add the function btf_check_func_part_match() to compare the accessed function args of two function prototypes. This function will be used in the following commit.
Signed-off-by: Menglong Dong <dongmenglong.8@bytedance.com>
---
 include/linux/bpf.h |   4 ++
 kernel/bpf/btf.c    | 108 +++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 110 insertions(+), 2 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 95e07673cdc1..0f677fdcfcc7 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1461,6 +1461,7 @@ struct bpf_prog_aux {
 	const struct btf_type *attach_func_proto;
 	/* function name for valid attach_btf_id */
 	const char *attach_func_name;
+	u64 accessed_args;
 	struct bpf_prog **func;
 	void *jit_data; /* JIT specific data. arch dependent */
 	struct bpf_jit_poke_descriptor *poke_tab;
@@ -2565,6 +2566,9 @@ struct bpf_reg_state;
 int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog);
 int btf_check_type_match(struct bpf_verifier_log *log, const struct bpf_prog *prog,
 			 struct btf *btf, const struct btf_type *t);
+int btf_check_func_part_match(struct btf *btf1, const struct btf_type *t1,
+			      struct btf *btf2, const struct btf_type *t2,
+			      u64 func_args);
 const char *btf_find_decl_tag_value(const struct btf *btf, const struct btf_type *pt,
 				    int comp_idx, const char *tag_key);
 int btf_find_next_decl_tag(const struct btf *btf, const struct btf_type *pt,
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 170d017e8e4a..c2a0299d4358 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6125,19 +6125,24 @@ static bool is_int_ptr(struct btf *btf, const struct btf_type *t)
 }
 
 static u32 get_ctx_arg_idx(struct btf *btf, const struct btf_type *func_proto,
-			   int off)
+			   int off, int *aligned_idx)
 {
 	const struct btf_param *args;
 	const struct btf_type *t;
 	u32 offset = 0, nr_args;
 	int i;
 
+	if (aligned_idx)
+		*aligned_idx = -ENOENT;
+
 	if (!func_proto)
 		return off / 8;
 
 	nr_args = btf_type_vlen(func_proto);
 	args = (const struct btf_param *)(func_proto + 1);
 	for (i = 0; i < nr_args; i++) {
+		if (aligned_idx && offset == off)
+			*aligned_idx = i;
 		t = btf_type_skip_modifiers(btf, args[i].type, NULL);
 		offset += btf_type_is_ptr(t) ? 8 : roundup(t->size, 8);
 		if (off < offset)
@@ -6207,7 +6212,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 			tname, off);
 		return false;
 	}
-	arg = get_ctx_arg_idx(btf, t, off);
+	arg = get_ctx_arg_idx(btf, t, off, NULL);
 	args = (const struct btf_param *)(t + 1);
 	/* if (t == NULL) Fall back to default BPF prog with
 	 * MAX_BPF_FUNC_REG_ARGS u64 arguments.
@@ -6217,6 +6222,9 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		/* skip first 'void *__data' argument in btf_trace_##name typedef */
 		args++;
 		nr_args--;
+		prog->aux->accessed_args |= (1 << (arg + 1));
+	} else {
+		prog->aux->accessed_args |= (1 << arg);
 	}
 
 	if (arg > nr_args) {
@@ -7024,6 +7032,102 @@ int btf_check_type_match(struct bpf_verifier_log *log, const struct bpf_prog *pr
 	return btf_check_func_type_match(log, btf1, t1, btf2, t2);
 }
 
+static u32 get_ctx_arg_total_size(struct btf *btf, const struct btf_type *t)
+{
+	const struct btf_param *args;
+	u32 size = 0, nr_args;
+	int i;
+
+	nr_args = btf_type_vlen(t);
+	args = (const struct btf_param *)(t + 1);
+	for (i = 0; i < nr_args; i++) {
+		t = btf_type_skip_modifiers(btf, args[i].type, NULL);
+		size += btf_type_is_ptr(t) ? 8 : roundup(t->size, 8);
+	}
+
+	return size;
+}
+
+/* This function is similar to btf_check_func_type_match(), except that it
+ * only compares some function args of the function prototypes t1 and t2.
+ */
+int btf_check_func_part_match(struct btf *btf1, const struct btf_type *func1,
+			      struct btf *btf2, const struct btf_type *func2,
+			      u64 func_args)
+{
+	const struct btf_param *args1, *args2;
+	u32 nargs1, i, offset = 0;
+	const char *s1, *s2;
+
+	if (!btf_type_is_func_proto(func1) || !btf_type_is_func_proto(func2))
+		return -EINVAL;
+
+	args1 = (const struct btf_param *)(func1 + 1);
+	args2 = (const struct btf_param *)(func2 + 1);
+	nargs1 = btf_type_vlen(func1);
+
+	for (i = 0; i <= nargs1; i++) {
+		const struct btf_type *t1, *t2;
+
+		if (!(func_args & (1 << i)))
+			goto next;
+
+		if (i < nargs1) {
+			int t2_index;
+
+			/* get the index of the arg corresponding to args1[i]
+			 * by the offset.
+			 */
+			get_ctx_arg_idx(btf2, func2, offset, &t2_index);
+			if (t2_index < 0)
+				return -EINVAL;
+
+			t1 = btf_type_skip_modifiers(btf1, args1[i].type, NULL);
+			t2 = btf_type_skip_modifiers(btf2, args2[t2_index].type,
+						     NULL);
+		} else {
+			/* i == nargs1, this is the index of return value of t1 */
+			if (get_ctx_arg_total_size(btf1, func1) !=
+			    get_ctx_arg_total_size(btf2, func2))
+				return -EINVAL;
+
+			/* check the return type of t1 and t2 */
+			t1 = btf_type_skip_modifiers(btf1, func1->type, NULL);
+			t2 = btf_type_skip_modifiers(btf2, func2->type, NULL);
+		}
+
+		if (t1->info != t2->info ||
+		    (btf_type_has_size(t1) && t1->size != t2->size))
+			return -EINVAL;
+		if (btf_type_is_int(t1) || btf_is_any_enum(t1))
+			goto next;
+
+		if (btf_type_is_struct(t1))
+			goto on_struct;
+
+		if (!btf_type_is_ptr(t1))
+			return -EINVAL;
+
+		t1 = btf_type_skip_modifiers(btf1, t1->type, NULL);
+		t2 = btf_type_skip_modifiers(btf2, t2->type, NULL);
+		if (!btf_type_is_struct(t1) || !btf_type_is_struct(t2))
+			return -EINVAL;
+
+on_struct:
+		s1 = btf_name_by_offset(btf1, t1->name_off);
+		s2 = btf_name_by_offset(btf2, t2->name_off);
+		if (strcmp(s1, s2))
+			return -EINVAL;
+next:
+		if (i < nargs1) {
+			t1 = btf_type_skip_modifiers(btf1, args1[i].type, NULL);
+			offset += btf_type_is_ptr(t1) ? 8 : roundup(t1->size, 8);
+		}
+	}
+
+	return 0;
+}
+
 static bool btf_is_dynptr_ptr(const struct btf *btf, const struct btf_type *t)
 {
 	const char *name;
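To make the bitmap semantics concrete, here is a small usage illustration (a sketch, not part of the patch; the target function and program are made up):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

__u64 seen;

/* The body reads ctx[0] and ctx[2] but never ctx[1], so after
 * verification btf_ctx_access() has set bits 0 and 2:
 * prog->aux->accessed_args == 0b101.
 */
SEC("fentry/some_kernel_func") /* hypothetical target */
int BPF_PROG(record_args, struct sk_buff *skb, long flags, int reason)
{
        seen = (unsigned long)skb + (unsigned int)reason;
        return 0;
}

btf_check_func_part_match() then only has to prove that every other attach target passes compatible types at the offsets of args 0 and 2.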
On Mon, Mar 11, 2024 at 2:34 AM Menglong Dong dongmenglong.8@bytedance.com wrote:
[......]

prog->aux->accessed_args |= (1 << (arg + 1));
} else {
prog->aux->accessed_args |= (1 << arg);
What do you need this aligned_idx for ? I'd expect that above "accessed_args |= (1 << arg);" is enough.
[......]

int btf_check_func_part_match(struct btf *btf1, const struct btf_type *func1,
			      struct btf *btf2, const struct btf_type *func2,
			      u64 func_args)
This is way too much copy paste. Please share the code with btf_check_func_type_match.
On Tue, Mar 12, 2024 at 9:46 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Mon, Mar 11, 2024 at 2:34 AM Menglong Dong dongmenglong.8@bytedance.com wrote:
[......]
What do you need this aligned_idx for ? I'd expect that above "accessed_args |= (1 << arg);" is enough.
Which aligned_idx? No aligned_idx in the btf_ctx_access(), and aligned_idx is only used in the btf_check_func_part_match().
In the btf_check_func_part_match(), I need to compare the t1->args[i] and t2->args[j], which have the same offset. And the aligned_idx is to find the "j" according to the offset of t1->args[i].
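To illustrate the lookup with hypothetical prototypes (the functions below are made up for the example):

/* t1: int f1(long a, long b, char c)                 -> offsets 0, 8, 16
 * t2: int f2(struct pair { long x; long y; } s, char d)
 *                                                    -> offsets 0 (size 16), 16
 *
 * When checking the accessed arg t1->args[2] (offset 16),
 * get_ctx_arg_idx(btf2, func2, 16, &j) walks f2's args, accumulating
 * 8 bytes per pointer and roundup(size, 8) otherwise; the running
 * offset equals 16 exactly at args2[1], so j = 1 and 'char c' is
 * compared against 'char d'. If no arg of t2 starts exactly at
 * offset 16, j stays -ENOENT and the match fails.
 */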
[......]
This is way too much copy paste. Please share the code with btf_check_func_type_match.
Okay!
Thanks! Menglong Dong
On Mon, Mar 11, 2024 at 7:01 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
On Tue, Mar 12, 2024 at 9:46 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Mon, Mar 11, 2024 at 2:34 AM Menglong Dong dongmenglong.8@bytedance.com wrote:
[......]
What do you need this aligned_idx for ? I'd expect that above "accessed_args |= (1 << arg);" is enough.
Which aligned_idx? No aligned_idx in the btf_ctx_access(), and aligned_idx is only used in the btf_check_func_part_match().
In the btf_check_func_part_match(), I need to compare the t1->args[i] and t2->args[j], which have the same offset. And the aligned_idx is to find the "j" according to the offset of t1->args[i].
And that's my question. Why don't you take the max of accessed_args across all attach points and do btf_check_func_type_match() up to that argno instead of nargs1? This 'offset += btf_type_is_ptr(t1) ? 8 : roundup...' is odd.
On Tue, Mar 12, 2024 at 10:09 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Mon, Mar 11, 2024 at 7:01 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
On Tue, Mar 12, 2024 at 9:46 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Mon, Mar 11, 2024 at 2:34 AM Menglong Dong dongmenglong.8@bytedance.com wrote:
[......]
What do you need this aligned_idx for ? I'd expect that above "accessed_args |= (1 << arg);" is enough.
Which aligned_idx? No aligned_idx in the btf_ctx_access(), and aligned_idx is only used in the btf_check_func_part_match().
In the btf_check_func_part_match(), I need to compare the t1->args[i] and t2->args[j], which have the same offset. And the aligned_idx is to find the "j" according to the offset of t1->args[i].
And that's my question. Why don't you take the max of accessed_args across all attach points and do btf_check_func_type_match() up to that argno instead of nargs1? This 'offset += btf_type_is_ptr(t1) ? 8 : roundup...' is odd.
Hi, I'm trying to make the bpf program flexible enough. Let's take an example: now we have this bpf program:

int test1_result = 0;
int BPF_PROG(test1, int a, long b, char c)
{
	test1_result = a + c;
	return 0;
}

In this program, only the 1st and 3rd args are accessed. So all kernel functions whose 1st arg is an int and whose 3rd arg is a char can be attached by this bpf program, even if their 2nd arg is different.

And let's take another example with a struct. This is our bpf program:

int test1_result = 0;
int BPF_PROG(test1, long a, long b, char c)
{
	test1_result = c;
	return 0;
}

Only the 3rd arg is accessed. And we have the following kernel functions:

int kernel_function1(long a, long b, char c)
{
	xxx
}

struct test1 {
	long a;
	long b;
};

int kernel_function2(struct test1 a, char b)
{
	xxx
}

kernel_function1 and kernel_function2 should be compatible, as the bpf program only accesses ctx[2], whose offset is 16. The arg at offset 16 in kernel_function1() is "char c", and the arg at offset 16 in kernel_function2() is "char b", which is compatible.
That's why we need to check the consistency of accessed args by offset instead of function arg index.
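Laying the two argument frames out by offset (derived from the example above) makes the equivalence visible:

/* kernel_function1(long a, long b, char c):
 *	a -> bytes 0..7, b -> bytes 8..15, c -> offset 16
 * kernel_function2(struct test1 a, char b), sizeof(struct test1) == 16:
 *	a -> bytes 0..15,                  b -> offset 16
 *
 * The program was verified to read only ctx[2] of its own prototype,
 * i.e. the slot starting at offset 16, and both functions pass a char
 * there, so either attachment is safe.
 */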
I'm not sure if I express my idea clearly, is this what you are asking?
Thanks! Menglong Dong
On Tue, Mar 12, 2024 at 10:42 AM 梦龙董 dongmenglong.8@bytedance.com wrote:
[......]

That's why we need to check the consistency of accessed args by offset instead of function arg index.
And that's why I didn't share the code with btf_check_func_type_match(). In btf_check_func_part_match(), I'm trying to check the "real" accessed args of t1 and t2, not going by the function arg index, which is quite different from btf_check_func_type_match().
I'm not sure if I express my idea clearly, is this what you are asking?
Thanks! Menglong Dong
On Mon, Mar 11, 2024 at 7:42 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
On Tue, Mar 12, 2024 at 10:09 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Mon, Mar 11, 2024 at 7:01 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
On Tue, Mar 12, 2024 at 9:46 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
[......]
I see. I thought you're sharing the trampoline across attachments (since the bpf prog is the same). But the above approach cannot possibly work with a shared trampoline. You need to create an individual trampoline for each attachment and point them all to the single bpf prog.
tbh I'm less excited about this feature now, since sharing the prog across different attachments is nice, but it won't scale to thousands of attachments. I assumed that there will be a single trampoline with max(argno) across attachments and attach/detach will scale to thousands.
With individual trampoline this will work for up to a hundred attachments max.
Let's step back. What is the exact use case you're trying to solve? Not an artificial one as selftest in patch 9, but the real use case?
On Wed, Mar 13, 2024 at 12:42 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Mon, Mar 11, 2024 at 7:42 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
[......]
I see. I thought you're sharing the trampoline across attachments (since the bpf prog is the same).
That seems to be a good idea, which I hadn't thought before.
But the above approach cannot possibly work with a shared trampoline. You need to create an individual trampoline for each attachment and point them all to the single bpf prog.
tbh I'm less excited about this feature now, since sharing the prog across different attachments is nice, but it won't scale to thousands of attachments. I assumed that there will be a single trampoline with max(argno) across attachments and attach/detach will scale to thousands.
With individual trampoline this will work for up to a hundred attachments max.
What does "a hundred attachments max" means? Can't I trace thousands of kernel functions with a bpf program of tracing multi-link?
Let's step back. What is the exact use case you're trying to solve? Not an artificial one as selftest in patch 9, but the real use case?
I have a tool used to diagnose network problems, and its name is "nettrace". It traces many kernel functions whose function args contain an "skb", like this:
./nettrace -p icmp
begin trace...
***************** ffff889be8fbd500,ffff889be8fbcd00 ***************
[1272349.614564] [dev_gro_receive         ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614579] [__netif_receive_skb_core] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614585] [ip_rcv                  ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614592] [ip_rcv_core             ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614599] [skb_clone               ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614616] [nf_hook_slow            ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614629] [nft_do_chain            ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614635] [ip_rcv_finish           ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614643] [ip_route_input_slow     ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614647] [fib_validate_source     ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614652] [ip_local_deliver        ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614658] [nf_hook_slow            ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614663] [ip_local_deliver_finish ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614666] [icmp_rcv                ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614671] [icmp_echo               ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614675] [icmp_reply              ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614715] [consume_skb             ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614722] [packet_rcv              ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614725] [consume_skb             ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
For now, I have to create a bpf program for every kernel function that I want to trace, which is up to 200.
With this multi-link, I only need to create 5 bpf programs, like this:
int BPF_PROG(trace_skb_1, struct sk_buff *skb);
int BPF_PROG(trace_skb_2, u64 arg0, struct sk_buff *skb);
int BPF_PROG(trace_skb_3, u64 arg0, u64 arg1, struct sk_buff *skb);
int BPF_PROG(trace_skb_4, u64 arg0, u64 arg1, u64 arg2, struct sk_buff *skb);
int BPF_PROG(trace_skb_5, u64 arg0, u64 arg1, u64 arg2, u64 arg3, struct sk_buff *skb);
Then, I can attach trace_skb_1 to all the kernel functions that I want to trace whose 1st arg is an skb; attach trace_skb_2 to kernel functions whose 2nd arg is an skb, etc.

Or, I can create only one bpf program, store the index of the skb in the attach cookie, and attach this program to all the kernel functions that I want to trace.
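A minimal sketch of that cookie variant follows; bpf_get_attach_cookie() and bpf_get_func_arg() are existing tracing helpers, while the "fentry.multi" section name and per-target cookies are assumptions about the API proposed here:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_core_read.h>

char LICENSE[] SEC("license") = "GPL";

SEC("fentry.multi") /* hypothetical section for the proposed link type */
int trace_skb(unsigned long long *ctx)
{
        /* the attach cookie is set per target function to the index of
         * its skb argument
         */
        __u32 idx = (__u32)bpf_get_attach_cookie(ctx);
        struct sk_buff *skb;
        __u64 arg = 0;
        __u32 len = 0;

        if (bpf_get_func_arg(ctx, idx, &arg))
                return 0;
        skb = (struct sk_buff *)arg;
        bpf_core_read(&len, sizeof(len), &skb->len);
        bpf_printk("skb len %u", len);
        return 0;
}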
This is my use case. With the multi-link, now I only have 1 bpf program, 1 bpf link and 200 trampolines, instead of 200 bpf programs, 200 bpf links and 200 trampolines.
The shared trampoline you mentioned seems to be a wonderful idea, which can reduce the 200 trampolines to one. Let me have a look: we create a trampoline and record the max args count of all the target functions, let's mark it as arg_count.

During generation of the trampoline, we assume that the function args count is arg_count. During attaching, we check the consistency of all the target functions, just like what we do now.
Am I right?
Thanks! Menglong Dong
On Tue, Mar 12, 2024 at 6:53 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
On Wed, Mar 13, 2024 at 12:42 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Mon, Mar 11, 2024 at 7:42 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
[......]
[......]

What does "a hundred attachments max" mean? Can't I trace thousands of kernel functions with a bpf program of tracing multi-link?
I mean what time does it take to attach one program to 100 fentry-s ? What is the time for 1k and for 10k ?
The kprobe multi test attaches to pretty much all funcs in /sys/kernel/tracing/available_filter_functions and it's fast enough to run in test_progs on every commit in bpf CI. See get_syms() in prog_tests/kprobe_multi_test.c
Can this new multi fentry do that? and at what speed? The answer will decide how applicable this api is going to be. Generating different trampolines for every attach point is an approach as well. Pls benchmark it too.
[......]
I see. The use case makes sense to me. Andrii's retsnoop was used to do a similar thing before kprobe multi was introduced.
The shared trampoline you mentioned seems to be a wonderful idea, which can reduce the 200 trampolines to one. Let me have a look: we create a trampoline and record the max args count of all the target functions, let's mark it as arg_count.

During generation of the trampoline, we assume that the function args count is arg_count. During attaching, we check the consistency of all the target functions, just like what we do now.
For one trampoline to handle all attach points we might need some arch support, but we can start simple: make a btf_func_model with MAX_BPF_FUNC_REG_ARGS by calling btf_distill_func_proto() with func==NULL, and use that to build a trampoline.
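In kernel terms that is roughly the following (a sketch of the suggestion, not code from this series; error handling elided):

/* btf_distill_func_proto() already implements this fallback: with a
 * NULL func_proto it fills the model with MAX_BPF_FUNC_REG_ARGS
 * 8-byte args, so a trampoline built from it can save and restore
 * every register argument regardless of the concrete target.
 */
struct btf_func_model m = {};
int err;

err = btf_distill_func_proto(NULL, btf, NULL, NULL, &m);
if (err)
        return err;
/* m.nr_args == MAX_BPF_FUNC_REG_ARGS and each m.arg_size[i] == 8;
 * this model can then be fed to arch_prepare_bpf_trampoline() to
 * build the single shared trampoline.
 */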
The challenge is how to use a minimal number of trampolines when bpf_progA is attached to func1, func2, func3 and bpf_progB is attached to func3, func4, func5. We'd still need 3 trampolines: one for func[12] to call bpf_progA, one for func3 to call bpf_progA and bpf_progB, and one for func[45] to call bpf_progB.
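Spelled out with the same example:

/* bpf_progA -> {func1, func2, func3}
 * bpf_progB -> {func3, func4, func5}
 *
 *	trampoline T1: func1, func2 -> progA
 *	trampoline T2: func3        -> progA, progB
 *	trampoline T3: func4, func5 -> progB
 *
 * Every attach/detach can change which functions share the same set
 * of progs, potentially forcing an existing trampoline to be split
 * or merged.
 */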
Jiri was trying to solve it in the past. His slides from LPC: https://lpc.events/event/16/contributions/1350/attachments/1033/1983/plumber...
Pls study them and his prior patchsets to avoid stepping on the same rakes.
On Wed, Mar 13, 2024 at 05:25:35PM -0700, Alexei Starovoitov wrote:
On Tue, Mar 12, 2024 at 6:53 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
On Wed, Mar 13, 2024 at 12:42 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Mon, Mar 11, 2024 at 7:42 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
[......]
The challenge is how to use a minimal number of trampolines when bpf_progA is attached to func1, func2, func3 and bpf_progB is attached to func3, func4, func5. We'd still need 3 trampolines: one for func[12] to call bpf_progA, one for func3 to call bpf_progA and bpf_progB, and one for func[45] to call bpf_progB.
Jiri was trying to solve it in the past. His slides from LPC: https://lpc.events/event/16/contributions/1350/attachments/1033/1983/plumber...
Pls study them and his prior patchsets to avoid stepping on the same rakes.
yep, I refrained from commenting not to take you down the same path I did, but if you insist.. ;-)
I managed to forget almost all of it, but IIRC the main pain point was that at some point I had to split an existing trampoline, which caused the whole trampoline management and error paths to become a mess.

I tried to explain things in the changelog of [1], and the latest patchset is in [0].

feel free to use/take anything, but I advise strongly against it ;-) please let me know if I can help
jirka
[0] https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git/log/?h=bpf/ba... [1] https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git/commit/?h=bpf...
On Thu, Mar 14, 2024 at 2:29 PM Jiri Olsa olsajiri@gmail.com wrote:
On Wed, Mar 13, 2024 at 05:25:35PM -0700, Alexei Starovoitov wrote:
On Tue, Mar 12, 2024 at 6:53 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
On Wed, Mar 13, 2024 at 12:42 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Mon, Mar 11, 2024 at 7:42 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
[......]
I see. I thought you're sharing the trampoline across attachments. (since bpf prog is the same).
That seems to be a good idea, which I hadn't thought before.
But above approach cannot possibly work with a shared trampoline. You need to create individual trampoline for all attachment and point them to single bpf prog.
tbh I'm less excited about this feature now, since sharing the prog across different attachments is nice, but it won't scale to thousands of attachments. I assumed that there would be a single trampoline with max(argno) across attachments, and that attach/detach would scale to thousands.
With individual trampolines this will work for up to a hundred attachments at most.
What does "a hundred attachments max" means? Can't I trace thousands of kernel functions with a bpf program of tracing multi-link?
I mean, what time does it take to attach one program to 100 fentry-s? What is the time for 1k and for 10k?
The kprobe multi test attaches to pretty much all funcs in /sys/kernel/tracing/available_filter_functions and it's fast enough to run in test_progs on every commit in bpf CI. See get_syms() in prog_tests/kprobe_multi_test.c
Can this new multi fentry do that? And at what speed? The answer will decide how applicable this API is going to be. Generating different trampolines for every attach point is an approach as well. Pls benchmark it too.
Let's step back. What is the exact use case you're trying to solve? Not an artificial one like the selftest in patch 9, but the real use case?
I have a tool named "nettrace", which is used to diagnose network problems. It traces many kernel functions whose function args contain an "skb", like this:
./nettrace -p icmp
begin trace...
***************** ffff889be8fbd500,ffff889be8fbcd00 ***************
[1272349.614564] [dev_gro_receive         ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614579] [__netif_receive_skb_core] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614585] [ip_rcv                  ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614592] [ip_rcv_core             ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614599] [skb_clone               ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614616] [nf_hook_slow            ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614629] [nft_do_chain            ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614635] [ip_rcv_finish           ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614643] [ip_route_input_slow     ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614647] [fib_validate_source     ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614652] [ip_local_deliver        ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614658] [nf_hook_slow            ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614663] [ip_local_deliver_finish ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614666] [icmp_rcv                ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614671] [icmp_echo               ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614675] [icmp_reply              ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614715] [consume_skb             ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614722] [packet_rcv              ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[1272349.614725] [consume_skb             ] ICMP: 169.254.128.15 -> 172.27.0.6 ping request, seq: 48220
[......]
I have to say that I have not gone far enough to encounter this problem, and I didn't dig enough to be aware of the complexity.
I suspect that I can't overcome this challenge. The only thing that came to mind when I heard about the "shared trampoline" is to fall back and not use the shared trampoline for the kernel functions that already have a trampoline.
Anyway, let me have a try at it, based on your research.
Thanks! Menglong Dong
On Thu, Mar 14, 2024 at 8:27 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Tue, Mar 12, 2024 at 6:53 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
[......]
What does "a hundred attachments max" means? Can't I trace thousands of kernel functions with a bpf program of tracing multi-link?
I mean what time does it take to attach one program to 100 fentry-s ? What is the time for 1k and for 10k ?
The kprobe multi test attaches to pretty much all funcs in /sys/kernel/tracing/available_filter_functions and it's fast enough to run in test_progs on every commit in bpf CI. See get_syms() in prog_tests/kprobe_multi_test.c
Can this new multi fentry do that? and at what speed? The answer will decide how applicable this api is going to be. Generating different trampolines for every attach point is an approach as well. Pls benchmark it too.
I see. Creating plenty of trampolines does take a lot of time, and I'll do testing on it.
On Fri, Mar 15, 2024 at 4:00 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
On Thu, Mar 14, 2024 at 8:27 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Tue, Mar 12, 2024 at 6:53 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
[......]
What does "a hundred attachments max" means? Can't I trace thousands of kernel functions with a bpf program of tracing multi-link?
I mean what time does it take to attach one program to 100 fentry-s ? What is the time for 1k and for 10k ?
The kprobe multi test attaches to pretty much all funcs in /sys/kernel/tracing/available_filter_functions and it's fast enough to run in test_progs on every commit in bpf CI. See get_syms() in prog_tests/kprobe_multi_test.c
Can this new multi fentry do that? and at what speed? The answer will decide how applicable this api is going to be. Generating different trampolines for every attach point is an approach as well. Pls benchmark it too.
I see. Creating plenty of trampolines does take a lot of time, and I'll do testing on it.
I have done a simple benchmark on creating 1000 trampolines. It is slow, quite slow, consuming up to 60s. We can't do it this way.
Now, I have a bad idea. How about we introduce a "dynamic trampoline"? The basic logic of it can be:
""" save regs bpfs = trampoline_lookup_ip(ip) fentry = bpfs->fentries while fentry: fentry(ctx) fentry = fentry->next
call origin save return value
fexit = bpfs->fexits while fexit: fexit(ctx) fexit = fexit->next
xxxxxx """
And we lookup the "bpfs" by the function ip in a hash map in trampoline_lookup_ip. The type of "bpfs" is:
struct bpf_array {
    struct bpf_prog *fentries;
    struct bpf_prog *fexits;
    struct bpf_prog *modify_returns;
};
When we need to attach the bpf progA to functions A/B/C, we only need to create bpf_arrayA, bpf_arrayB and bpf_arrayC, add progA to them, insert them into the hash map "direct_call_bpfs", and attach the "dynamic trampoline" to A/B/C. If bpf_arrayA already exists, just add progA to the tail of bpf_arrayA->fentries. When we then need to attach progB to B/C, just add progB to bpf_arrayB->fentries and bpf_arrayC->fentries.
Compared to the trampoline, extra overhead is introduced by the hash lookup.
I have not begun to code yet, and I am not sure the overhead is acceptable. Considering that we also need to do a hash lookup by function ip in kprobe_multi, maybe the overhead is acceptable?
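A rough sketch of what trampoline_lookup_ip() could look like (all names are hypothetical, and RCU/locking details are glossed over):

#include <linux/hashtable.h>

struct bpf_array_node {
	unsigned long ip;		/* traced function entry address */
	struct bpf_array *bpfs;		/* fentries/fexits/modify_returns */
	struct hlist_node node;
};

static DEFINE_HASHTABLE(direct_call_bpfs, 10);

static struct bpf_array *trampoline_lookup_ip(unsigned long ip)
{
	struct bpf_array_node *n;

	hash_for_each_possible_rcu(direct_call_bpfs, n, node, ip)
		if (n->ip == ip)
			return n->bpfs;
	return NULL;
}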
Thanks! Menglong Dong
On Thu, 28 Mar 2024 22:43:46 +0800 梦龙董 dongmenglong.8@bytedance.com wrote:
[......]
Sounds like you are just recreating the function management that ftrace already has. It can also add thousands of trampolines very quickly, because it does it in batches. Attaching to fentry takes special synchronization steps; ftrace (and I believe multi-kprobes) updates all the attachments in each step, so the needed synchronization is only done once.
If you really want to have thousands of functions, why not just register it with ftrace itself? It will give you the arguments via the ftrace_regs structure. Can't you just register a program as the callback?
It will probably make your accounting much easier, and just let ftrace handle the fentry logic. That's what it was made to do.
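For illustration, the batched registration Steve describes could look roughly like this; it assumes CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS so the entry arguments are reachable from ftrace_regs, and the BPF hand-off is elided:

#include <linux/ftrace.h>
#include <linux/printk.h>

static void skb_tracer(unsigned long ip, unsigned long parent_ip,
		       struct ftrace_ops *ops, struct ftrace_regs *fregs)
{
	/* entry arguments are read straight out of ftrace_regs */
	unsigned long arg0 = ftrace_regs_get_argument(fregs, 0);

	pr_debug("traced %ps, arg0=%lx\n", (void *)ip, arg0);
	/* ... run the BPF program with arg0 here ... */
}

static struct ftrace_ops skb_ops = {
	.func = skb_tracer,
};

static int __init skb_tracer_init(void)
{
	/* filters for many functions accumulate ... */
	ftrace_set_filter(&skb_ops, "ip_rcv", strlen("ip_rcv"), 0);
	ftrace_set_filter(&skb_ops, "icmp_rcv", strlen("icmp_rcv"), 0);
	/* ... and one registration patches all of them in a batch */
	return register_ftrace_function(&skb_ops);
}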
-- Steve
On Thu, Mar 28, 2024 at 8:10 AM Steven Rostedt rostedt@goodmis.org wrote:
On Thu, 28 Mar 2024 22:43:46 +0800 梦龙董 dongmenglong.8@bytedance.com wrote:
[......]
Absolutely agree. There is no point re-inventing this logic.
Menglong, before you hook into ftrace, check whether it's going to be any different from kprobe-multi, since it's the same ftrace underneath. I suspect it will look exactly the same. So it sounds like the multi-fentry idea will be shelved once again.
On Fri, Mar 29, 2024 at 7:17 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Thu, Mar 28, 2024 at 8:10 AM Steven Rostedt rostedt@goodmis.org wrote:
On Thu, 28 Mar 2024 22:43:46 +0800 梦龙董 dongmenglong.8@bytedance.com wrote:
[......]
Yeah, I dug into it a little, and I think it is different. For multi-kprobe, it registers a ftrace_ops into ftrace_ops_list for every bpf program, which means that we can register 2 or more multi-kprobes on the same function. The bpf prog is called along the following path:
ftrace_regs_caller
  |
__ftrace_ops_list_func -> fprobe_handler -> kprobe_multi_link_handler -> run BPF
And a trampoline needs to be called directly, so it can't be registered as a callback in ftrace_ops_list. It needs to be called along the following path:
ftrace_regs_caller
  |
__ftrace_ops_list_func -> call_direct_funcs -> save trampoline to pt_regs->origin_ax
  |
call pt_regs->origin_ax if not NULL
So it sounds like the multi-fentry idea will be shelved once again.
Enn...this is the best solution that I can think of. If it doesn't work, I suspect it will be shelved again.
Thanks! Menglong Dong
On Thu, Mar 28, 2024 at 8:10 AM Steven Rostedt rostedt@goodmis.org wrote:
On Thu, 28 Mar 2024 22:43:46 +0800 梦龙董 dongmenglong.8@bytedance.com wrote:
[......]
I thought I'd just ask instead of digging through code, sorry for being lazy :) Is there any way to pass pt_regs/ftrace_regs captured before function execution to a return probe (fexit/kretprobe)? I.e., how hard is it to pass input function arguments to a kretprobe? That's the biggest advantage of fexit over kretprobe, and if we can make these original pt_regs/ftrace_regs available to kretprobe, then multi-kretprobe will effectively be this multi-fexit.
On Sat, Mar 30, 2024 at 7:28 AM Andrii Nakryiko andrii.nakryiko@gmail.com wrote:
On Thu, Mar 28, 2024 at 8:10 AM Steven Rostedt rostedt@goodmis.org wrote:
On Thu, 28 Mar 2024 22:43:46 +0800 梦龙董 dongmenglong.8@bytedance.com wrote:
[......]
Yes, we can use multi-kretprobe instead of multi-fexit if we can obtain the function args in kretprobe.
I think it's hard. The reason we can obtain the function args is that we have a trampoline, and it calls the origin function for FEXIT. If we did the same for multi-kretprobe, we would need to modify ftrace_regs_caller to:
ftrace_regs_caller
  |
__ftrace_ops_list_func
  |
call all multi-kprobe callbacks
  |
call origin
  |
call all multi-kretprobe callbacks
  |
call bpf trampoline (for TRACING)
However, this logic conflicts with bpf trampoline, as it can also call the origin function. What's more, the FENTRY should be called before the "call origin" above.
I'm not sure if I understand correctly, as I have not figured out how multi-kretprobe works in fprobe.
Thanks! Menglong Dong
On Fri, 29 Mar 2024 16:28:33 -0700 Andrii Nakryiko andrii.nakryiko@gmail.com wrote:
[......]
This should be possible with the updates that Masami is doing with the fgraph code.
-- Steve
On Sat, Mar 30, 2024 at 08:27:55AM -0400, Steven Rostedt wrote:
On Fri, 29 Mar 2024 16:28:33 -0700 Andrii Nakryiko andrii.nakryiko@gmail.com wrote:
[......]
yes, I have bpf kprobe-multi link support for that [0] (it's on top of Masami's fprobe-over-fgraph changes); we discussed that in [1]
jirka
[0] https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git/log/?h=bpf/se... [1] https://lore.kernel.org/bpf/20240228090242.4040210-1-jolsa@kernel.org/
On Sat, Mar 30, 2024 at 10:52 AM Jiri Olsa olsajiri@gmail.com wrote:
On Sat, Mar 30, 2024 at 08:27:55AM -0400, Steven Rostedt wrote:
On Fri, 29 Mar 2024 16:28:33 -0700 Andrii Nakryiko andrii.nakryiko@gmail.com wrote:
[......]
Sorry, I forgot the regs/args part, mostly remembering we discussed the session cookie ideas. Thanks for the reminder!
On Thu, Mar 28, 2024 at 11:11 PM Steven Rostedt rostedt@goodmis.org wrote:
On Thu, 28 Mar 2024 22:43:46 +0800 梦龙董 dongmenglong.8@bytedance.com wrote:
[......]
Yes, it is fast to register a trampoline for a kernel function with the managed ftrace via register_fentry -> register_ftrace_direct -> ftrace_add_rec_direct, which will add the trampoline to the hash table "direct_functions".
And the trampoline will be called in the following step (I'm not sure if I understand it correctly):
ftrace_regs_caller
  |
__ftrace_ops_list_func -> call_direct_funcs -> save trampoline to pt_regs->origin_ax
  |
call pt_regs->origin_ax if not NULL
The logic above means that we can only call a trampoline once, and a kernel function can only have one trampoline.
My original idea was to register all the shared trampolines with the managed ftrace. For example, if we have the shared trampoline1 for functions A/B/C, and the shared trampoline2 for functions B/C/D, then I register both trampoline1 and trampoline2 for B/C. However, it can't work, as we can't call 2 trampolines for one function.
Then, I thought that we could create a "dynamic trampoline". The logic for the non-ftrace-managed case is simple: we only need to replace the "nop" of all the target functions with "call dynamic_trampoline". And for the ftrace-managed case, the logic is the same, except that the trampoline that we add to the "direct_functions" hash is the dynamic trampoline:
ftrace_regs_caller
  |
__ftrace_ops_list_func -> call_direct_funcs -> save dynamic-trampoline to pt_regs->origin_ax
  |
call pt_regs->origin_ax (dynamic-trampoline) if not NULL
And in the dynamic-trampoline, we can call prog1 for A, call prog1 and prog2 for B/C, call prog2 for D.
And the registration is fast enough.
If you really want to have thousands of functions, why not just register it with ftrace itself? It will give you the arguments via the ftrace_regs structure. Can't you just register a program as the callback?
Ennn... I don't understand. The main purposes for me to use TRACING are:
- we can directly access the memory, which is more efficient.
- we can obtain the function args in FEXIT, which kretprobe can't do, and this is the main reason.
Thanks! Menglong Dong
On Sat, 30 Mar 2024 11:18:29 +0800 梦龙董 dongmenglong.8@bytedance.com wrote:
If you really want to have thousands of functions, why not just register it with ftrace itself? It will give you the arguments via the ftrace_regs structure. Can't you just register a program as the callback?
Ennn... I don't understand. The main purposes for me to use TRACING are:
- we can directly access the memory, which is more efficient.
I'm not sure what you mean by the above. Access what memory?
- we can obtain the function args in FEXIT, which kretprobe can't do, and this is the main reason.
I didn't mention kretprobe. If you need access to the exit of the function, you can use Masami's fgraph update.
fentry -> ftrace_trampoline -> your_code
For fgraph:
fentry -> ftrace_trampoline -> fgraph [sets up return call] -> your_entry_code
function ret -> fgraph_ret_handler -> your_exit_code
And you will be able to pass data from the entry to the exit code, including parameters.
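For shape, the current fgraph registration API looks like this; the handler signatures and the entry-to-exit data-passing helpers may differ in Masami's in-progress series:

#include <linux/ftrace.h>

static int my_entry(struct ftrace_graph_ent *trace)
{
	/* runs at function entry; returning 0 skips the return hook */
	return 1;
}

static void my_return(struct ftrace_graph_ret *trace)
{
	/* runs when the traced function returns */
}

static struct fgraph_ops my_gops = {
	.entryfunc	= my_entry,
	.retfunc	= my_return,
};

/* register_ftrace_graph(&my_gops) arms both handlers */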
-- Steve
On Sun, Mar 31, 2024 at 3:34 AM Steven Rostedt rostedt@goodmis.org wrote:
On Sat, 30 Mar 2024 11:18:29 +0800 梦龙董 dongmenglong.8@bytedance.com wrote:
[......]
I'm not sure what you mean by the above. Access what memory?
We need to use the bpf_probe_read_kernel() helper when we read "skb->sk" in a kprobe, where "skb" is the 1st arg of ip_rcv(). In tracing, we can read "skb->sk" directly, which is more efficient. Isn't it?
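A side-by-side sketch of the two access styles (assuming vmlinux.h and libbpf's tracing helpers):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("kprobe/ip_rcv")
int BPF_KPROBE(kprobe_ip_rcv, struct sk_buff *skb)
{
	struct sock *sk;

	/* kprobe ctx is pt_regs, so pointer chasing needs a helper call */
	bpf_probe_read_kernel(&sk, sizeof(sk), &skb->sk);
	bpf_printk("sk=%p", sk);
	return 0;
}

SEC("fentry/ip_rcv")
int BPF_PROG(fentry_ip_rcv, struct sk_buff *skb)
{
	/* tracing progs get BTF-typed args; a direct load is verified */
	struct sock *sk = skb->sk;

	bpf_printk("sk=%p", sk);
	return 0;
}

char _license[] SEC("license") = "GPL";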
[......]
Yeah, the fgraph sounds like a nice solution to my problem. I'll give it a try.
Thanks! Menglong Dong
On Mon, 1 Apr 2024 10:28:17 +0800 梦龙董 dongmenglong.8@bytedance.com wrote:
On Sun, Mar 31, 2024 at 3:34 AM Steven Rostedt rostedt@goodmis.org wrote:
On Sat, 30 Mar 2024 11:18:29 +0800 梦龙董 dongmenglong.8@bytedance.com wrote:
[......]
If you add a ftrace_ops function handler that calls a BPF program, I don't see why you can't just give it the parameters it needs instead of using bpf helpers. It's no different than using a trampoline to do the same thing.
-- Steve
Refactor the struct modules_array into the more general struct ptr_array, which is used to store pointers.
Meanwhile, introduce bpf_try_add_ptr(), which checks for the existence of the ptr before adding it to the array.
It seems these helpers should eventually move to some file under "lib", but I'm not sure where to put them yet, so let's keep them in kernel/bpf/syscall.c for now.
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com
---
 include/linux/bpf.h      | 10 +++++++++
 kernel/bpf/syscall.c     | 37 +++++++++++++++++++++++++++++++
 kernel/trace/bpf_trace.c | 48 ++++++----------------------------------
 3 files changed, 54 insertions(+), 41 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 0f677fdcfcc7..997765cdf474 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -304,6 +304,16 @@ struct bpf_map {
 	s64 __percpu *elem_count;
 };
 
+struct ptr_array {
+	void **ptrs;
+	int cnt;
+	int cap;
+};
+
+int bpf_add_ptr(struct ptr_array *arr, void *ptr);
+bool bpf_has_ptr(struct ptr_array *arr, struct module *mod);
+int bpf_try_add_ptr(struct ptr_array *arr, void *ptr);
+
 static inline const char *btf_field_type_name(enum btf_field_type type)
 {
 	switch (type) {
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index f63f4da4db5e..4f230fd1f8e4 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -479,6 +479,43 @@ static void bpf_map_release_memcg(struct bpf_map *map)
 }
 #endif
 
+int bpf_add_ptr(struct ptr_array *arr, void *ptr)
+{
+	void **ptrs;
+
+	if (arr->cnt == arr->cap) {
+		arr->cap = max(16, arr->cap * 3 / 2);
+		ptrs = krealloc_array(arr->ptrs, arr->cap, sizeof(*ptrs), GFP_KERNEL);
+		if (!ptrs)
+			return -ENOMEM;
+		arr->ptrs = ptrs;
+	}
+
+	arr->ptrs[arr->cnt] = ptr;
+	arr->cnt++;
+	return 0;
+}
+
+bool bpf_has_ptr(struct ptr_array *arr, struct module *mod)
+{
+	int i;
+
+	for (i = arr->cnt - 1; i >= 0; i--) {
+		if (arr->ptrs[i] == mod)
+			return true;
+	}
+	return false;
+}
+
+int bpf_try_add_ptr(struct ptr_array *arr, void *ptr)
+{
+	if (bpf_has_ptr(arr, ptr))
+		return -EEXIST;
+	if (bpf_add_ptr(arr, ptr))
+		return -ENOMEM;
+	return 0;
+}
+
 static int btf_field_cmp(const void *a, const void *b)
 {
 	const struct btf_field *f1 = a, *f2 = b;
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 241ddf5e3895..791e97a3f8e3 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2873,43 +2873,9 @@ static void symbols_swap_r(void *a, void *b, int size, const void *priv)
 	}
 }
 
-struct modules_array {
-	struct module **mods;
-	int mods_cnt;
-	int mods_cap;
-};
-
-static int add_module(struct modules_array *arr, struct module *mod)
-{
-	struct module **mods;
-
-	if (arr->mods_cnt == arr->mods_cap) {
-		arr->mods_cap = max(16, arr->mods_cap * 3 / 2);
-		mods = krealloc_array(arr->mods, arr->mods_cap, sizeof(*mods), GFP_KERNEL);
-		if (!mods)
-			return -ENOMEM;
-		arr->mods = mods;
-	}
-
-	arr->mods[arr->mods_cnt] = mod;
-	arr->mods_cnt++;
-	return 0;
-}
-
-static bool has_module(struct modules_array *arr, struct module *mod)
-{
-	int i;
-
-	for (i = arr->mods_cnt - 1; i >= 0; i--) {
-		if (arr->mods[i] == mod)
-			return true;
-	}
-	return false;
-}
-
 static int get_modules_for_addrs(struct module ***mods, unsigned long *addrs, u32 addrs_cnt)
 {
-	struct modules_array arr = {};
+	struct ptr_array arr = {};
 	u32 i, err = 0;
 
 	for (i = 0; i < addrs_cnt; i++) {
@@ -2918,7 +2884,7 @@ static int get_modules_for_addrs(struct module ***mods, unsigned long *addrs, u32 addrs_cnt)
 		preempt_disable();
 		mod = __module_address(addrs[i]);
 		/* Either no module or we it's already stored  */
-		if (!mod || has_module(&arr, mod)) {
+		if (!mod || bpf_has_ptr(&arr, mod)) {
 			preempt_enable();
 			continue;
 		}
@@ -2927,7 +2893,7 @@ static int get_modules_for_addrs(struct module ***mods, unsigned long *addrs, u32 addrs_cnt)
 		preempt_enable();
 		if (err)
 			break;
-		err = add_module(&arr, mod);
+		err = bpf_add_ptr(&arr, mod);
 		if (err) {
 			module_put(mod);
 			break;
@@ -2936,14 +2902,14 @@ static int get_modules_for_addrs(struct module ***mods, unsigned long *addrs, u32 addrs_cnt)
 
 	/* We return either err < 0 in case of error, ... */
 	if (err) {
-		kprobe_multi_put_modules(arr.mods, arr.mods_cnt);
-		kfree(arr.mods);
+		kprobe_multi_put_modules((struct module **)arr.ptrs, arr.cnt);
+		kfree(arr.ptrs);
 		return err;
 	}
 
 	/* or number of modules found if everything is ok. */
-	*mods = arr.mods;
-	return arr.mods_cnt;
+	*mods = (struct module **)arr.ptrs;
+	return arr.cnt;
 }
 
 static int addrs_check_error_injection_list(unsigned long *addrs, u32 cnt)
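For context, a typical caller of the new helpers would look something like this (hypothetical snippet; "mod" stands for any pointer we might see more than once, and error handling is trimmed):

	struct ptr_array arr = {};
	int err;

	/* dedup is built in: -EEXIST means this pointer is already stored */
	err = bpf_try_add_ptr(&arr, mod);
	if (err && err != -EEXIST)
		return err;
	/* ... use arr.ptrs[0] .. arr.ptrs[arr.cnt - 1] ... */
	kfree(arr.ptrs);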
On Mon, Mar 11, 2024 at 2:34 AM Menglong Dong dongmenglong.8@bytedance.com wrote:
[......]
+bool bpf_has_ptr(struct ptr_array *arr, struct module *mod)
Don't you need 'void *mod' here?
...
kprobe_multi_put_modules(arr.mods, arr.mods_cnt);
kfree(arr.mods);
kprobe_multi_put_modules((struct module **)arr.ptrs, arr.cnt);
Do you really need to type cast? Compiler doesn't convert void** automatically?
On Tue, Mar 12, 2024 at 9:49 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Mon, Mar 11, 2024 at 2:34 AM Menglong Dong dongmenglong.8@bytedance.com wrote:
[......]
+bool bpf_has_ptr(struct ptr_array *arr, struct module *mod)
Don't you need 'void *mod' here?
Oops, it should be void *ptr here, my mistake~
...
kprobe_multi_put_modules(arr.mods, arr.mods_cnt);
kfree(arr.mods);
kprobe_multi_put_modules((struct module **)arr.ptrs, arr.cnt);
Do you really need to type cast? Compiler doesn't convert void** automatically?
Yeah, the compiler reports errors without this casting.
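(As background: C only converts to and from void * implicitly; void ** is an ordinary object pointer type, so the assignment really does need the cast:)

	void **vpp = arr.ptrs;
	struct module **bad = vpp;                   /* warning: incompatible pointer types */
	struct module **ok = (struct module **)vpp;  /* explicit cast, no warning */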
For now, bpf_tramp_link is added to the hash list tr->progs_hlist when attaching. This means that bpf_link and trampoline are one-to-one, which is not friendly to the trampoline multi-link that we introduce in the following patches.
Therefore, we now introduce the struct bpf_tramp_link_conn to be the bridge between bpf_tramp_link and the trampoline. And we also change the type of the links in struct bpf_tramp_links to struct bpf_tramp_link_conn.
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com
---
 arch/arm64/net/bpf_jit_comp.c   |  4 ++--
 arch/riscv/net/bpf_jit_comp64.c |  4 ++--
 arch/s390/net/bpf_jit_comp.c    |  4 ++--
 arch/x86/net/bpf_jit_comp.c     |  4 ++--
 include/linux/bpf.h             | 12 +++++++---
 kernel/bpf/bpf_struct_ops.c     |  3 ++-
 kernel/bpf/syscall.c            |  3 ++-
 kernel/bpf/trampoline.c         | 42 +++++++++++++++++----------------
 net/bpf/bpf_dummy_struct_ops.c  |  1 +
 9 files changed, 44 insertions(+), 33 deletions(-)
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index c5b461dda438..b6f7d8a6d372 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -1810,14 +1810,14 @@ bool bpf_jit_supports_subprog_tailcalls(void)
 	return true;
 }
 
-static void invoke_bpf_prog(struct jit_ctx *ctx, struct bpf_tramp_link *l,
+static void invoke_bpf_prog(struct jit_ctx *ctx, struct bpf_tramp_link_conn *l,
 			    int args_off, int retval_off, int run_ctx_off,
 			    bool save_ret)
 {
 	__le32 *branch;
 	u64 enter_prog;
 	u64 exit_prog;
-	struct bpf_prog *p = l->link.prog;
+	struct bpf_prog *p = l->link->prog;
 	int cookie_off = offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
 
 	enter_prog = (u64)bpf_trampoline_enter(p);
diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index aac190085472..c147053001db 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -700,11 +700,11 @@ static void restore_args(int nregs, int args_off, struct rv_jit_context *ctx)
 	}
 }
 
-static int invoke_bpf_prog(struct bpf_tramp_link *l, int args_off, int retval_off,
+static int invoke_bpf_prog(struct bpf_tramp_link_conn *l, int args_off, int retval_off,
 			   int run_ctx_off, bool save_ret, struct rv_jit_context *ctx)
 {
 	int ret, branch_off;
-	struct bpf_prog *p = l->link.prog;
+	struct bpf_prog *p = l->link->prog;
 	int cookie_off = offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
 
 	if (l->cookie) {
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index b418333bb086..177efbc1b5ec 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -2243,12 +2243,12 @@ static void load_imm64(struct bpf_jit *jit, int dst_reg, u64 val)
 
 static int invoke_bpf_prog(struct bpf_tramp_jit *tjit,
 			   const struct btf_func_model *m,
-			   struct bpf_tramp_link *tlink, bool save_ret)
+			   struct bpf_tramp_link_conn *tlink, bool save_ret)
 {
 	struct bpf_jit *jit = &tjit->common;
 	int cookie_off = tjit->run_ctx_off +
 			 offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
-	struct bpf_prog *p = tlink->link.prog;
+	struct bpf_prog *p = tlink->link->prog;
 	int patch;
 
 	/*
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index e1390d1e331b..e7f9f987770d 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -2261,14 +2261,14 @@ static void restore_regs(const struct btf_func_model *m, u8 **prog,
 }
 
 static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
-			   struct bpf_tramp_link *l, int stack_size,
+			   struct bpf_tramp_link_conn *l, int stack_size,
 			   int run_ctx_off, bool save_ret,
 			   void *image, void *rw_image)
 {
 	u8 *prog = *pprog;
 	u8 *jmp_insn;
 	int ctx_cookie_off = offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
-	struct bpf_prog *p = l->link.prog;
+	struct bpf_prog *p = l->link->prog;
 	u64 cookie = l->cookie;
 
 	/* mov rdi, cookie */
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 997765cdf474..2b5cd6100fc4 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -56,6 +56,7 @@ struct bpf_token;
 struct user_namespace;
 struct super_block;
 struct inode;
+struct bpf_tramp_link;
 
 extern struct idr btf_idr;
 extern spinlock_t btf_idr_lock;
@@ -1090,7 +1091,7 @@ enum {
 };
 
 struct bpf_tramp_links {
-	struct bpf_tramp_link *links[BPF_MAX_TRAMP_LINKS];
+	struct bpf_tramp_link_conn *links[BPF_MAX_TRAMP_LINKS];
 	int nr_links;
 };
 
@@ -1597,12 +1598,17 @@ struct bpf_link_ops {
 			     struct bpf_map *old_map);
 };
 
-struct bpf_tramp_link {
-	struct bpf_link link;
+struct bpf_tramp_link_conn {
+	struct bpf_link *link;
 	struct hlist_node tramp_hlist;
 	u64 cookie;
 };
 
+struct bpf_tramp_link {
+	struct bpf_link link;
+	struct bpf_tramp_link_conn conn;
+};
+
 struct bpf_shim_tramp_link {
 	struct bpf_tramp_link link;
 	struct bpf_trampoline *trampoline;
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index 43356faaa057..4fbe2faa80a8 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -549,7 +549,7 @@ int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
 	void *image = *_image;
 	int size;
 
-	tlinks[BPF_TRAMP_FENTRY].links[0] = link;
+	tlinks[BPF_TRAMP_FENTRY].links[0] = &link->conn;
 	tlinks[BPF_TRAMP_FENTRY].nr_links = 1;
 
 	if (model->ret_size > 0)
@@ -710,6 +710,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 			err = -ENOMEM;
 			goto reset_unlock;
 		}
+		link->conn.link = &link->link;
 		bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS,
 			      &bpf_struct_ops_link_lops, prog);
 		st_map->links[i] = &link->link;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 4f230fd1f8e4..d1cd645ef9ac 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3339,6 +3339,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
 	}
 
 	link = kzalloc(sizeof(*link), GFP_USER);
+	link->link.conn.link = &link->link.link;
 	if (!link) {
 		err = -ENOMEM;
 		goto out_put_prog;
@@ -3346,7 +3347,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
 	bpf_link_init(&link->link.link, BPF_LINK_TYPE_TRACING,
 		      &bpf_tracing_link_lops, prog);
 	link->attach_type = prog->expected_attach_type;
-	link->link.cookie = bpf_cookie;
+	link->link.conn.cookie = bpf_cookie;
 
 	mutex_lock(&prog->aux->dst_mutex);
 
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index d382f5ebe06c..cf9b84f785f3 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -228,9 +228,9 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
 static struct bpf_tramp_links *
 bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_arg)
 {
-	struct bpf_tramp_link *link;
+	struct bpf_tramp_link_conn *link_conn;
+	struct bpf_tramp_link_conn **links;
 	struct bpf_tramp_links *tlinks;
-	struct bpf_tramp_link **links;
 	int kind;
 
 	*total = 0;
@@ -243,9 +243,9 @@ bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_arg)
 		*total += tr->progs_cnt[kind];
 		links = tlinks[kind].links;
 
-		hlist_for_each_entry(link, &tr->progs_hlist[kind], tramp_hlist) {
-			*ip_arg |= link->link.prog->call_get_func_ip;
-			*links++ = link;
+		hlist_for_each_entry(link_conn, &tr->progs_hlist[kind], tramp_hlist) {
+			*ip_arg |= link_conn->link->prog->call_get_func_ip;
+			*links++ = link_conn;
 		}
 	}
 	return tlinks;
@@ -521,14 +521,14 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog)
 	}
 }
 
-static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
+static int __bpf_trampoline_link_prog(struct bpf_tramp_link_conn *link, struct bpf_trampoline *tr)
 {
 	enum bpf_tramp_prog_type kind;
-	struct bpf_tramp_link *link_exiting;
+	struct bpf_tramp_link_conn *link_exiting;
 	int err = 0;
 	int cnt = 0, i;
 
-	kind = bpf_attach_type_to_tramp(link->link.prog);
+	kind = bpf_attach_type_to_tramp(link->link->prog);
 	if (tr->extension_prog)
 		/* cannot attach fentry/fexit if extension prog is attached.
 		 * cannot overwrite extension prog either.
@@ -542,9 +542,9 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
 		/* Cannot attach extension if fentry/fexit are in use. */
 		if (cnt)
 			return -EBUSY;
-		tr->extension_prog = link->link.prog;
+		tr->extension_prog = link->link->prog;
 		return bpf_arch_text_poke(tr->func.addr, BPF_MOD_JUMP, NULL,
-					  link->link.prog->bpf_func);
+					  link->link->prog->bpf_func);
 	}
 	if (cnt >= BPF_MAX_TRAMP_LINKS)
 		return -E2BIG;
@@ -552,7 +552,7 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
 		/* prog already linked */
 		return -EBUSY;
 	hlist_for_each_entry(link_exiting, &tr->progs_hlist[kind], tramp_hlist) {
-		if (link_exiting->link.prog != link->link.prog)
+		if (link_exiting->link->prog != link->link->prog)
 			continue;
 		/* prog already linked */
 		return -EBUSY;
@@ -573,17 +573,17 @@ int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
 	int err;
 
 	mutex_lock(&tr->mutex);
-	err = __bpf_trampoline_link_prog(link, tr);
+	err = __bpf_trampoline_link_prog(&link->conn, tr);
 	mutex_unlock(&tr->mutex);
 	return err;
 }
 
-static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
+static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link_conn *link, struct bpf_trampoline *tr)
 {
 	enum bpf_tramp_prog_type kind;
 	int err;
 
-	kind = bpf_attach_type_to_tramp(link->link.prog);
+	kind = bpf_attach_type_to_tramp(link->link->prog);
 	if (kind == BPF_TRAMP_REPLACE) {
 		WARN_ON_ONCE(!tr->extension_prog);
 		err = bpf_arch_text_poke(tr->func.addr, BPF_MOD_JUMP,
@@ -602,7 +602,7 @@ int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
 	int err;
 
 	mutex_lock(&tr->mutex);
-	err = __bpf_trampoline_unlink_prog(link, tr);
+	err = __bpf_trampoline_unlink_prog(&link->conn, tr);
 	mutex_unlock(&tr->mutex);
 	return err;
 }
@@ -645,6 +645,7 @@ static struct bpf_shim_tramp_link *cgroup_shim_alloc(const struct bpf_prog *prog,
 	if (!shim_link)
 		return NULL;
 
+	shim_link->link.conn.link = &shim_link->link.link;
 	p = bpf_prog_alloc(1, 0);
 	if (!p) {
 		kfree(shim_link);
@@ -672,15 +673,16 @@ static struct bpf_shim_tramp_link *cgroup_shim_alloc(const struct bpf_prog *prog,
 static struct bpf_shim_tramp_link *cgroup_shim_find(struct bpf_trampoline *tr,
 						    bpf_func_t bpf_func)
 {
-	struct bpf_tramp_link *link;
+	struct bpf_tramp_link_conn *link_conn;
 	int kind;
 
 	for (kind = 0; kind < BPF_TRAMP_MAX; kind++) {
-		hlist_for_each_entry(link, &tr->progs_hlist[kind], tramp_hlist) {
-			struct bpf_prog *p = link->link.prog;
+		hlist_for_each_entry(link_conn, &tr->progs_hlist[kind], tramp_hlist) {
+			struct bpf_prog *p = link_conn->link->prog;
 
 			if (p->bpf_func == bpf_func)
-				return container_of(link, struct bpf_shim_tramp_link, link);
+				return container_of((struct bpf_tramp_link *)link_conn->link,
						    struct bpf_shim_tramp_link, link);
 		}
 	}
 
@@ -731,7 +733,7 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
 		goto err;
 	}
 
-	err = __bpf_trampoline_link_prog(&shim_link->link, tr);
+	err = __bpf_trampoline_link_prog(&shim_link->link.conn, tr);
 	if (err)
 		goto err;
 
diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
index 1b5f812e6972..35a2cf60eef6 100644
--- a/net/bpf/bpf_dummy_struct_ops.c
+++ b/net/bpf/bpf_dummy_struct_ops.c
@@ -120,6 +120,7 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
 		err = -ENOMEM;
 		goto out;
 	}
+	link->conn.link = &link->link;
 	/* prog doesn't take the ownership of the reference from caller */
 	bpf_prog_inc(prog);
 	bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS, &bpf_struct_ops_link_lops, prog);
Introduce the struct bpf_tramp_multi_link, which is used to attach a bpf_link to multiple trampolines. Meanwhile, introduce the corresponding functions bpf_trampoline_multi_{link,unlink}_prog.
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com
---
 include/linux/bpf.h     | 14 ++++++++++++
 kernel/bpf/trampoline.c | 47 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 61 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 2b5cd6100fc4..4e8f17d9f022 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -57,6 +57,7 @@ struct user_namespace;
 struct super_block;
 struct inode;
 struct bpf_tramp_link;
+struct bpf_tramp_multi_link;
 
 extern struct idr btf_idr;
 extern spinlock_t btf_idr_lock;
@@ -1282,6 +1283,8 @@ struct bpf_trampoline *bpf_trampoline_get(u64 key,
 					  struct bpf_attach_target_info *tgt_info);
 void bpf_trampoline_put(struct bpf_trampoline *tr);
 int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_funcs);
+int bpf_trampoline_multi_link_prog(struct bpf_tramp_multi_link *link);
+int bpf_trampoline_multi_unlink_prog(struct bpf_tramp_multi_link *link);
 
 /*
  * When the architecture supports STATIC_CALL replace the bpf_dispatcher_fn
@@ -1614,6 +1617,17 @@ struct bpf_shim_tramp_link {
 	struct bpf_trampoline *trampoline;
 };
 
+struct bpf_tramp_multi_link_entry {
+	struct bpf_trampoline *trampoline;
+	struct bpf_tramp_link_conn conn;
+};
+
+struct bpf_tramp_multi_link {
+	struct bpf_link link;
+	u32 cnt;
+	struct bpf_tramp_multi_link_entry *entries;
+};
+
 struct bpf_tracing_link {
 	struct bpf_tramp_link link;
 	enum bpf_attach_type attach_type;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index cf9b84f785f3..2167aa3fe583 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -607,6 +607,53 @@ int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
 	return err;
 }
 
+static int __bpf_trampoline_multi_unlink_prog(struct bpf_tramp_multi_link *link,
+					      u32 cnt)
+{
+	struct bpf_tramp_multi_link_entry *entry;
+	struct bpf_trampoline *tr;
+	int err = 0, i;
+
+	for (i = 0; i < cnt; i++) {
+		entry = &link->entries[i];
+		tr = entry->trampoline;
+		mutex_lock(&tr->mutex);
+		err = __bpf_trampoline_unlink_prog(&entry->conn,
+						   entry->trampoline);
+		mutex_unlock(&tr->mutex);
+		if (err)
+			break;
+	}
+	return err;
+}
+
+int bpf_trampoline_multi_unlink_prog(struct bpf_tramp_multi_link *link)
+{
+	return __bpf_trampoline_multi_unlink_prog(link, link->cnt);
+}
+
+int bpf_trampoline_multi_link_prog(struct bpf_tramp_multi_link *link)
+{
+	struct bpf_tramp_multi_link_entry *entry;
+	struct bpf_trampoline *tr;
+	int err = 0, i;
+
+	for (i = 0; i < link->cnt; i++) {
+		entry = &link->entries[i];
+		tr = entry->trampoline;
+		mutex_lock(&tr->mutex);
+		err = __bpf_trampoline_link_prog(&entry->conn, tr);
+		mutex_unlock(&tr->mutex);
+		if (err)
+			goto unlink;
+	}
+
+	return 0;
+unlink:
+	__bpf_trampoline_multi_unlink_prog(link, i);
+	return err;
+}
+
 #if defined(CONFIG_CGROUP_BPF) && defined(CONFIG_BPF_LSM)
 static void bpf_shim_tramp_link_release(struct bpf_link *link)
 {
Add a target btf argument to bpf_check_attach_target(), so that the caller can specify the btf to check against.
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com --- include/linux/bpf_verifier.h | 1 + kernel/bpf/syscall.c | 6 ++++-- kernel/bpf/trampoline.c | 1 + kernel/bpf/verifier.c | 8 +++++--- 4 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 4b0f6600e499..6cb20efcfac3 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -811,6 +811,7 @@ static inline void bpf_trampoline_unpack_key(u64 key, u32 *obj_id, u32 *btf_id) int bpf_check_attach_target(struct bpf_verifier_log *log, const struct bpf_prog *prog, const struct bpf_prog *tgt_prog, + struct btf *btf, u32 btf_id, struct bpf_attach_target_info *tgt_info); void bpf_free_kfunc_btf_tab(struct bpf_kfunc_btf_tab *tab); diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index d1cd645ef9ac..6128c3131141 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -3401,9 +3401,11 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, * need a new trampoline and a check for compatibility */ struct bpf_attach_target_info tgt_info = {}; + struct btf *btf;
- err = bpf_check_attach_target(NULL, prog, tgt_prog, btf_id, - &tgt_info); + btf = tgt_prog ? tgt_prog->aux->btf : prog->aux->attach_btf; + err = bpf_check_attach_target(NULL, prog, tgt_prog, btf, + btf_id, &tgt_info); if (err) goto out_unlock;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index 2167aa3fe583..b00d53af8fcb 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -747,6 +747,7 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog, int err;
err = bpf_check_attach_target(NULL, prog, NULL, + prog->aux->attach_btf, prog->aux->attach_btf_id, &tgt_info); if (err) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index bf084c693507..4493ecc23597 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -20613,6 +20613,7 @@ static int check_non_sleepable_error_inject(u32 btf_id) int bpf_check_attach_target(struct bpf_verifier_log *log, const struct bpf_prog *prog, const struct bpf_prog *tgt_prog, + struct btf *btf, u32 btf_id, struct bpf_attach_target_info *tgt_info) { @@ -20623,7 +20624,6 @@ int bpf_check_attach_target(struct bpf_verifier_log *log, const struct btf_type *t; bool conservative = true; const char *tname; - struct btf *btf; long addr = 0; struct module *mod = NULL;
@@ -20631,7 +20631,6 @@ int bpf_check_attach_target(struct bpf_verifier_log *log, bpf_log(log, "Tracing programs must provide btf_id\n"); return -EINVAL; } - btf = tgt_prog ? tgt_prog->aux->btf : prog->aux->attach_btf; if (!btf) { bpf_log(log, "FENTRY/FEXIT program can only be attached to another program annotated with BTF\n"); @@ -20940,6 +20939,7 @@ static int check_attach_btf_id(struct bpf_verifier_env *env) struct bpf_attach_target_info tgt_info = {}; u32 btf_id = prog->aux->attach_btf_id; struct bpf_trampoline *tr; + struct btf *btf; int ret; u64 key;
@@ -20964,7 +20964,9 @@ static int check_attach_btf_id(struct bpf_verifier_env *env) prog->type != BPF_PROG_TYPE_EXT) return 0;
- ret = bpf_check_attach_target(&env->log, prog, tgt_prog, btf_id, &tgt_info); + btf = tgt_prog ? tgt_prog->aux->btf : prog->aux->attach_btf; + ret = bpf_check_attach_target(&env->log, prog, tgt_prog, btf, + btf_id, &tgt_info); if (ret) return ret;
On Mon, Mar 11, 2024 at 2:35 AM Menglong Dong dongmenglong.8@bytedance.com wrote:
Add a target btf argument to bpf_check_attach_target(), so that the caller can specify the btf to check against.
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com
include/linux/bpf_verifier.h | 1 + kernel/bpf/syscall.c | 6 ++++-- kernel/bpf/trampoline.c | 1 + kernel/bpf/verifier.c | 8 +++++--- 4 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 4b0f6600e499..6cb20efcfac3 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -811,6 +811,7 @@ static inline void bpf_trampoline_unpack_key(u64 key, u32 *obj_id, u32 *btf_id) int bpf_check_attach_target(struct bpf_verifier_log *log, const struct bpf_prog *prog, const struct bpf_prog *tgt_prog,
struct btf *btf, u32 btf_id, struct bpf_attach_target_info *tgt_info);
void bpf_free_kfunc_btf_tab(struct bpf_kfunc_btf_tab *tab); diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index d1cd645ef9ac..6128c3131141 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -3401,9 +3401,11 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, * need a new trampoline and a check for compatibility */ struct bpf_attach_target_info tgt_info = {};
struct btf *btf;
err = bpf_check_attach_target(NULL, prog, tgt_prog, btf_id,
&tgt_info);
btf = tgt_prog ? tgt_prog->aux->btf : prog->aux->attach_btf;
I think it's better to keep this bit inside bpf_check_attach_target(), since a lot of other code in there is working with if (tgt_prog) ... so if the caller messes up passing tgt_prog->aux->btf with tgt_prog the bug will be difficult to debug.
err = bpf_check_attach_target(NULL, prog, tgt_prog, btf,
btf_id, &tgt_info);
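One way to read that suggestion, sketched here as a hedged illustration (not code from the series): keep the fallback inside bpf_check_attach_target() and let callers pass NULL when they don't want to override the BTF:

/* sketch only: NULL btf means "derive it as before" */
int bpf_check_attach_target(struct bpf_verifier_log *log,
			    const struct bpf_prog *prog,
			    const struct bpf_prog *tgt_prog,
			    struct btf *btf,
			    u32 btf_id,
			    struct bpf_attach_target_info *tgt_info)
{
	if (!btf)
		btf = tgt_prog ? tgt_prog->aux->btf : prog->aux->attach_btf;
	/* ... the rest of the checks stay unchanged ... */
}

This keeps the old, safe behavior for callers that have no reason to override the BTF.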
On Tue, Mar 12, 2024 at 9:51 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Mon, Mar 11, 2024 at 2:35 AM Menglong Dong dongmenglong.8@bytedance.com wrote:
Add a target btf argument to bpf_check_attach_target(), so that the caller can specify the btf to check against.
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com
include/linux/bpf_verifier.h | 1 + kernel/bpf/syscall.c | 6 ++++-- kernel/bpf/trampoline.c | 1 + kernel/bpf/verifier.c | 8 +++++--- 4 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 4b0f6600e499..6cb20efcfac3 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -811,6 +811,7 @@ static inline void bpf_trampoline_unpack_key(u64 key, u32 *obj_id, u32 *btf_id) int bpf_check_attach_target(struct bpf_verifier_log *log, const struct bpf_prog *prog, const struct bpf_prog *tgt_prog,
struct btf *btf, u32 btf_id, struct bpf_attach_target_info *tgt_info);
void bpf_free_kfunc_btf_tab(struct bpf_kfunc_btf_tab *tab); diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index d1cd645ef9ac..6128c3131141 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -3401,9 +3401,11 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, * need a new trampoline and a check for compatibility */ struct bpf_attach_target_info tgt_info = {};
struct btf *btf;
err = bpf_check_attach_target(NULL, prog, tgt_prog, btf_id,
&tgt_info);
btf = tgt_prog ? tgt_prog->aux->btf : prog->aux->attach_btf;
I think it's better to keep this bit inside bpf_check_attach_target(), since a lot of other code in there is working with if (tgt_prog) ... so if the caller messes up passing tgt_prog->aux->btf with tgt_prog the bug will be difficult to debug.
In the previous version, I passed the attach_btf in the following way:
+ origin_btf = prog->aux->attach_btf; + /* use the new attach_btf to check the target */ + prog->aux->attach_btf = attach_btf; err = bpf_check_attach_target(NULL, prog, tgt_prog, btf_id, &tgt_info); + prog->aux->attach_btf = origin_btf;
And Jiri suggested to add the attach_btf to the function args of bpf_check_attach_target().
Ennn.... Should I convert back to the old way?
Thanks! Menglong Dong
err = bpf_check_attach_target(NULL, prog, tgt_prog, btf,
btf_id, &tgt_info);
In this commit, we add support for attaching a tracing BPF program to multiple hooks, similar to BPF_TRACE_KPROBE_MULTI.
The use case is obvious: for now, we have to create a separate BPF program for each kernel function that we want to trace, even though all the programs have the same (or similar) logic. This consumes extra memory and makes program loading slow when there are plenty of kernel functions to trace. KPROBE_MULTI may be an alternative, but it can't do what TRACING does. For example, a kretprobe can't obtain the function args, while FEXIT can.
For now, we support creating multi-links for fentry/fexit/modify_return with the following new attach types:
BPF_TRACE_FENTRY_MULTI BPF_TRACE_FEXIT_MULTI BPF_MODIFY_RETURN_MULTI
We introduce the struct bpf_tracing_multi_link for this purpose, which holds all the kernel modules, target bpf programs (for attaching to a bpf program) and target btfs (for attaching to kernel functions) that we reference. Meanwhile, every trampoline for the functions that we attach to is stored in "struct bpf_tramp_multi_link_entry *entries" in struct bpf_tramp_multi_link.
During loading, the first target is used for verification by the verifier. During attaching, we check the consistency of all the targets against the target that was loaded, i.e. the first one.
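To make the consistency requirement concrete, here is a hypothetical program using the comma-separated section syntax that the libbpf patch later in this series introduces; bpf_fentry_test1() and bpf_fentry_test2() are existing kernel test functions whose first argument is an int, which is the only argument this program accesses:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

/* both targets only need to match in the args the program touches */
SEC("fentry.multi/bpf_fentry_test1,bpf_fentry_test2")
int BPF_PROG(fentry_multi_test, int a)
{
	bpf_printk("fentry multi: arg0=%d", a);
	return 0;
}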
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com --- include/linux/bpf.h | 11 + include/linux/bpf_types.h | 1 + include/uapi/linux/bpf.h | 10 + kernel/bpf/btf.c | 5 + kernel/bpf/syscall.c | 379 +++++++++++++++++++++++++++++++++ kernel/bpf/trampoline.c | 7 +- kernel/bpf/verifier.c | 16 +- net/core/bpf_sk_storage.c | 2 + tools/include/uapi/linux/bpf.h | 10 + 9 files changed, 438 insertions(+), 3 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 4e8f17d9f022..28fac2d0964a 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1635,6 +1635,17 @@ struct bpf_tracing_link { struct bpf_prog *tgt_prog; };
+struct bpf_tracing_multi_link { + struct bpf_tramp_multi_link link; + enum bpf_attach_type attach_type; + u32 prog_cnt; + u32 btf_cnt; + struct bpf_prog **tgt_progs; + struct btf **tgt_btfs; + u32 mods_cnt; + struct module **mods; +}; + struct bpf_link_primer { struct bpf_link *link; struct file *file; diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h index 94baced5a1ad..a93195bd825a 100644 --- a/include/linux/bpf_types.h +++ b/include/linux/bpf_types.h @@ -152,3 +152,4 @@ BPF_LINK_TYPE(BPF_LINK_TYPE_PERF_EVENT, perf) BPF_LINK_TYPE(BPF_LINK_TYPE_KPROBE_MULTI, kprobe_multi) BPF_LINK_TYPE(BPF_LINK_TYPE_STRUCT_OPS, struct_ops) BPF_LINK_TYPE(BPF_LINK_TYPE_UPROBE_MULTI, uprobe_multi) +BPF_LINK_TYPE(BPF_LINK_TYPE_TRACING_MULTI, tracing_multi) diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 85ec7fc799d7..f01c4f463c0d 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -1114,6 +1114,9 @@ enum bpf_attach_type { BPF_CGROUP_UNIX_GETSOCKNAME, BPF_NETKIT_PRIMARY, BPF_NETKIT_PEER, + BPF_TRACE_FENTRY_MULTI, + BPF_TRACE_FEXIT_MULTI, + BPF_MODIFY_RETURN_MULTI, __MAX_BPF_ATTACH_TYPE };
@@ -1134,6 +1137,7 @@ enum bpf_link_type { BPF_LINK_TYPE_TCX = 11, BPF_LINK_TYPE_UPROBE_MULTI = 12, BPF_LINK_TYPE_NETKIT = 13, + BPF_LINK_TYPE_TRACING_MULTI = 14, __MAX_BPF_LINK_TYPE, };
@@ -1726,6 +1730,12 @@ union bpf_attr { */ __u64 cookie; } tracing; + struct { + __u32 cnt; + __aligned_u64 tgt_fds; + __aligned_u64 btf_ids; + __aligned_u64 cookies; + } tracing_multi; struct { __u32 pf; __u32 hooknum; diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index c2a0299d4358..2d6e9e680091 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -5879,6 +5879,9 @@ static int btf_validate_prog_ctx_type(struct bpf_verifier_log *log, const struct case BPF_TRACE_FENTRY: case BPF_TRACE_FEXIT: case BPF_MODIFY_RETURN: + case BPF_TRACE_FENTRY_MULTI: + case BPF_TRACE_FEXIT_MULTI: + case BPF_MODIFY_RETURN_MULTI: /* allow u64* as ctx */ if (btf_is_int(t) && t->size == 8) return 0; @@ -6238,6 +6241,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type, case BPF_LSM_CGROUP: case BPF_LSM_MAC: case BPF_TRACE_FEXIT: + case BPF_TRACE_FEXIT_MULTI: /* When LSM programs are attached to void LSM hooks * they use FEXIT trampolines and when attached to * int LSM hooks, they use MODIFY_RETURN trampolines. @@ -6256,6 +6260,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type, t = btf_type_by_id(btf, t->type); break; case BPF_MODIFY_RETURN: + case BPF_MODIFY_RETURN_MULTI: /* For now the BPF_MODIFY_RETURN can only be attached to * functions that return an int. */ diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 6128c3131141..3e45584e4898 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -3273,6 +3273,34 @@ static const struct bpf_link_ops bpf_tracing_link_lops = { .fill_link_info = bpf_tracing_link_fill_link_info, };
+static int bpf_tracing_check_multi(struct bpf_prog *prog, + struct bpf_prog *tgt_prog, + struct btf *btf2, + const struct btf_type *t2) +{ + const struct btf_type *t1; + struct btf *btf1; + + /* this case is already validated in bpf_check_attach_target() */ + if (prog->type == BPF_PROG_TYPE_EXT) + return 0; + + btf1 = prog->aux->dst_prog ? prog->aux->dst_prog->aux->btf : + prog->aux->attach_btf; + if (!btf1) + return -EOPNOTSUPP; + + btf2 = btf2 ?: tgt_prog->aux->btf; + t1 = prog->aux->attach_func_proto; + + /* the target is the same as the original one, this is a re-attach */ + if (t1 == t2) + return 0; + + return btf_check_func_part_match(btf1, t1, btf2, t2, + prog->aux->accessed_args); +} + static int bpf_tracing_prog_attach(struct bpf_prog *prog, int tgt_prog_fd, u32 btf_id, @@ -3473,6 +3501,350 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, return err; }
+static void __bpf_tracing_multi_link_release(struct bpf_tracing_multi_link *link) +{ + int i; + + if (link->mods_cnt) { + for (i = 0; i < link->mods_cnt; i++) + module_put(link->mods[i]); + kfree(link->mods); + } + + if (link->prog_cnt) { + for (i = 0; i < link->prog_cnt; i++) + bpf_prog_put(link->tgt_progs[i]); + kfree(link->tgt_progs); + } + + if (link->btf_cnt) { + for (i = 0; i < link->btf_cnt; i++) + btf_put(link->tgt_btfs[i]); + kfree(link->tgt_btfs); + } + + if (link->link.cnt) { + for (i = 0; i < link->link.cnt; i++) + bpf_trampoline_put(link->link.entries[i].trampoline); + kfree(link->link.entries); + } +} + +static void bpf_tracing_multi_link_release(struct bpf_link *link) +{ + struct bpf_tracing_multi_link *multi_link = + container_of(link, struct bpf_tracing_multi_link, link.link); + + bpf_trampoline_multi_unlink_prog(&multi_link->link); + __bpf_tracing_multi_link_release(multi_link); +} + +static void bpf_tracing_multi_link_dealloc(struct bpf_link *link) +{ + struct bpf_tracing_multi_link *tr_link = + container_of(link, struct bpf_tracing_multi_link, link.link); + + kfree(tr_link); +} + +static void bpf_tracing_multi_link_show_fdinfo(const struct bpf_link *link, + struct seq_file *seq) +{ + struct bpf_tracing_multi_link *tr_link = + container_of(link, struct bpf_tracing_multi_link, link.link); + u32 target_btf_id, target_obj_id; + int i; + + for (i = 0; i < tr_link->link.cnt; i++) { + bpf_trampoline_unpack_key(tr_link->link.entries[i].trampoline->key, + &target_obj_id, &target_btf_id); + seq_printf(seq, + "attach_type:\t%d\n" + "target_obj_id:\t%u\n" + "target_btf_id:\t%u\n", + tr_link->attach_type, + target_obj_id, + target_btf_id); + } +} + +static const struct bpf_link_ops bpf_tracing_multi_link_lops = { + .release = bpf_tracing_multi_link_release, + .dealloc = bpf_tracing_multi_link_dealloc, + .show_fdinfo = bpf_tracing_multi_link_show_fdinfo, +}; + +#define MAX_TRACING_MULTI_CNT 1024 + +static int bpf_tracing_get_target(u32 fd, struct bpf_prog **tgt_prog, + struct btf **tgt_btf) +{ + struct bpf_prog *prog = NULL; + struct btf *btf = NULL; + int err = 0; + + if (fd) { + prog = bpf_prog_get(fd); + if (!IS_ERR(prog)) + goto found; + + prog = NULL; + /* "fd" is the fd of the kernel module BTF */ + btf = btf_get_by_fd(fd); + if (IS_ERR(btf)) { + err = PTR_ERR(btf); + goto err; + } + if (!btf_is_kernel(btf)) { + btf_put(btf); + err = -EOPNOTSUPP; + goto err; + } + } else { + btf = bpf_get_btf_vmlinux(); + if (IS_ERR(btf)) { + err = PTR_ERR(btf); + goto err; + } + if (!btf) { + err = -EINVAL; + goto err; + } + btf_get(btf); + } +found: + *tgt_prog = prog; + *tgt_btf = btf; + return 0; +err: + *tgt_prog = NULL; + *tgt_btf = NULL; + return err; +} + +static int bpf_tracing_multi_link_check(const union bpf_attr *attr, u32 **btf_ids, + u32 **tgt_fds, u64 **cookies, + u32 cnt) +{ + void __user *ubtf_ids; + void __user *utgt_fds; + void __user *ucookies; + void *tmp; + int i; + + if (!cnt) + return -EINVAL; + + if (cnt > MAX_TRACING_MULTI_CNT) + return -E2BIG; + + ucookies = u64_to_user_ptr(attr->link_create.tracing_multi.cookies); + if (ucookies) { + tmp = kvmalloc_array(cnt, sizeof(**cookies), GFP_KERNEL); + if (!tmp) + return -ENOMEM; + + *cookies = tmp; + if (copy_from_user(tmp, ucookies, cnt * sizeof(**cookies))) + return -EFAULT; + } + + utgt_fds = u64_to_user_ptr(attr->link_create.tracing_multi.tgt_fds); + if (utgt_fds) { + tmp = kvmalloc_array(cnt, sizeof(**tgt_fds), GFP_KERNEL); + if (!tmp) + return -ENOMEM; + + *tgt_fds = tmp; + if (copy_from_user(tmp, utgt_fds, cnt * 
sizeof(**tgt_fds))) + return -EFAULT; + } + + ubtf_ids = u64_to_user_ptr(attr->link_create.tracing_multi.btf_ids); + if (!ubtf_ids) + return -EINVAL; + + tmp = kvmalloc_array(cnt, sizeof(**btf_ids), GFP_KERNEL); + if (!tmp) + return -ENOMEM; + + *btf_ids = tmp; + if (copy_from_user(tmp, ubtf_ids, cnt * sizeof(**btf_ids))) + return -EFAULT; + + for (i = 0; i < cnt; i++) { + if (!(*btf_ids)[i]) + return -EINVAL; + } + + return 0; +} + +static void bpf_tracing_multi_link_ptr_fill(struct bpf_tracing_multi_link *link, + struct ptr_array *progs, + struct ptr_array *mods, + struct ptr_array *btfs) +{ + link->mods = (struct module **) mods->ptrs; + link->mods_cnt = mods->cnt; + link->tgt_btfs = (struct btf **) btfs->ptrs; + link->btf_cnt = btfs->cnt; + link->tgt_progs = (struct bpf_prog **) progs->ptrs; + link->prog_cnt = progs->cnt; +} + +static int bpf_tracing_prog_attach_multi(const union bpf_attr *attr, + struct bpf_prog *prog) +{ + struct bpf_tracing_multi_link *link = NULL; + u32 cnt, *btf_ids = NULL, *tgt_fds = NULL; + struct bpf_link_primer link_primer; + struct ptr_array prog_array = { }; + struct ptr_array btf_array = { }; + struct ptr_array mod_array = { }; + u64 *cookies = NULL; + int err = 0, i; + + if ((prog->expected_attach_type != BPF_TRACE_FENTRY_MULTI && + prog->expected_attach_type != BPF_TRACE_FEXIT_MULTI && + prog->expected_attach_type != BPF_MODIFY_RETURN_MULTI) || + prog->type != BPF_PROG_TYPE_TRACING) + return -EINVAL; + + cnt = attr->link_create.tracing_multi.cnt; + err = bpf_tracing_multi_link_check(attr, &btf_ids, &tgt_fds, &cookies, + cnt); + if (err) + goto err_out; + + link = kzalloc(sizeof(*link), GFP_USER); + if (!link) { + err = -ENOMEM; + goto err_out; + } + link->link.entries = kzalloc(sizeof(*link->link.entries) * cnt, + GFP_USER); + if (!link->link.entries) { + err = -ENOMEM; + goto err_out; + } + + bpf_link_init(&link->link.link, BPF_LINK_TYPE_TRACING_MULTI, + &bpf_tracing_multi_link_lops, prog); + link->attach_type = prog->expected_attach_type; + + mutex_lock(&prog->aux->dst_mutex); + + /* program is already attached, re-attach is not supported here yet */ + if (!prog->aux->dst_trampoline) { + err = -EEXIST; + goto err_out_unlock; + } + + for (i = 0; i < cnt; i++) { + struct bpf_attach_target_info tgt_info = {}; + struct bpf_tramp_multi_link_entry *entry; + struct bpf_prog *tgt_prog = NULL; + struct bpf_trampoline *tr = NULL; + u32 tgt_fd, btf_id = btf_ids[i]; + struct btf *tgt_btf = NULL; + struct module *mod = NULL; + u64 key = 0; + + entry = &link->link.entries[i]; + tgt_fd = tgt_fds ? 
tgt_fds[i] : 0; + err = bpf_tracing_get_target(tgt_fd, &tgt_prog, &tgt_btf); + if (err) + goto err_out_unlock; + + if (tgt_prog) { + err = bpf_try_add_ptr(&prog_array, tgt_prog); + if (err) { + bpf_prog_put(tgt_prog); + if (err != -EEXIST) + goto err_out_unlock; + } + } + + if (tgt_btf) { + err = bpf_try_add_ptr(&btf_array, tgt_btf); + if (err) { + btf_put(tgt_btf); + if (err != -EEXIST) + goto err_out_unlock; + } + } + + prog->aux->attach_tracing_prog = tgt_prog && + tgt_prog->type == BPF_PROG_TYPE_TRACING && + prog->type == BPF_PROG_TYPE_TRACING; + + err = bpf_check_attach_target(NULL, prog, tgt_prog, tgt_btf, + btf_id, &tgt_info); + if (err) + goto err_out_unlock; + + mod = tgt_info.tgt_mod; + if (mod) { + err = bpf_try_add_ptr(&mod_array, mod); + if (err) { + module_put(mod); + if (err != -EEXIST) + goto err_out_unlock; + } + } + + err = bpf_tracing_check_multi(prog, tgt_prog, tgt_btf, + tgt_info.tgt_type); + if (err) + goto err_out_unlock; + + key = bpf_trampoline_compute_key(tgt_prog, tgt_btf, btf_id); + tr = bpf_trampoline_get(key, &tgt_info); + if (!tr) { + err = -ENOMEM; + goto err_out_unlock; + } + + entry->conn.cookie = cookies ? cookies[i] : 0; + entry->conn.link = &link->link.link; + entry->trampoline = tr; + link->link.cnt++; + } + + err = bpf_trampoline_multi_link_prog(&link->link); + if (err) + goto err_out_unlock; + + err = bpf_link_prime(&link->link.link, &link_primer); + if (err) { + bpf_trampoline_multi_unlink_prog(&link->link); + goto err_out_unlock; + } + + bpf_tracing_multi_link_ptr_fill(link, &prog_array, &mod_array, + &btf_array); + bpf_trampoline_put(prog->aux->dst_trampoline); + prog->aux->dst_trampoline = NULL; + mutex_unlock(&prog->aux->dst_mutex); + + kfree(btf_ids); + kfree(tgt_fds); + kfree(cookies); + return bpf_link_settle(&link_primer); +err_out_unlock: + bpf_tracing_multi_link_ptr_fill(link, &prog_array, &mod_array, + &btf_array); + __bpf_tracing_multi_link_release(link); + mutex_unlock(&prog->aux->dst_mutex); +err_out: + kfree(btf_ids); + kfree(tgt_fds); + kfree(cookies); + kfree(link); + return err; +} + struct bpf_raw_tp_link { struct bpf_link link; struct bpf_raw_event_map *btp; @@ -3924,6 +4296,9 @@ attach_type_to_prog_type(enum bpf_attach_type attach_type) case BPF_TRACE_FENTRY: case BPF_TRACE_FEXIT: case BPF_MODIFY_RETURN: + case BPF_TRACE_FENTRY_MULTI: + case BPF_TRACE_FEXIT_MULTI: + case BPF_MODIFY_RETURN_MULTI: return BPF_PROG_TYPE_TRACING; case BPF_LSM_MAC: return BPF_PROG_TYPE_LSM; @@ -5201,6 +5576,10 @@ static int link_create(union bpf_attr *attr, bpfptr_t uattr) ret = bpf_iter_link_attach(attr, uattr, prog); else if (prog->expected_attach_type == BPF_LSM_CGROUP) ret = cgroup_bpf_link_attach(attr, prog); + else if (prog->expected_attach_type == BPF_TRACE_FENTRY_MULTI || + prog->expected_attach_type == BPF_TRACE_FEXIT_MULTI || + prog->expected_attach_type == BPF_MODIFY_RETURN_MULTI) + ret = bpf_tracing_prog_attach_multi(attr, prog); else ret = bpf_tracing_prog_attach(prog, attr->link_create.target_fd, diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index b00d53af8fcb..6d249303a96e 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -111,7 +111,9 @@ bool bpf_prog_has_trampoline(const struct bpf_prog *prog)
return (ptype == BPF_PROG_TYPE_TRACING && (eatype == BPF_TRACE_FENTRY || eatype == BPF_TRACE_FEXIT || - eatype == BPF_MODIFY_RETURN)) || + eatype == BPF_MODIFY_RETURN || + eatype == BPF_TRACE_FENTRY_MULTI || eatype == BPF_TRACE_FEXIT_MULTI || + eatype == BPF_MODIFY_RETURN_MULTI)) || (ptype == BPF_PROG_TYPE_LSM && eatype == BPF_LSM_MAC); }
@@ -503,10 +505,13 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog) { switch (prog->expected_attach_type) { case BPF_TRACE_FENTRY: + case BPF_TRACE_FENTRY_MULTI: return BPF_TRAMP_FENTRY; case BPF_MODIFY_RETURN: + case BPF_MODIFY_RETURN_MULTI: return BPF_TRAMP_MODIFY_RETURN; case BPF_TRACE_FEXIT: + case BPF_TRACE_FEXIT_MULTI: return BPF_TRAMP_FEXIT; case BPF_LSM_MAC: if (!prog->aux->attach_func_proto->type) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 4493ecc23597..f878edfcf987 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -15405,10 +15405,13 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char switch (env->prog->expected_attach_type) { case BPF_TRACE_FENTRY: case BPF_TRACE_FEXIT: + case BPF_TRACE_FENTRY_MULTI: + case BPF_TRACE_FEXIT_MULTI: range = retval_range(0, 0); break; case BPF_TRACE_RAW_TP: case BPF_MODIFY_RETURN: + case BPF_MODIFY_RETURN_MULTI: return 0; case BPF_TRACE_ITER: break; @@ -20709,7 +20712,9 @@ int bpf_check_attach_target(struct bpf_verifier_log *log, if (tgt_prog->type == BPF_PROG_TYPE_TRACING && prog_extension && (tgt_prog->expected_attach_type == BPF_TRACE_FENTRY || - tgt_prog->expected_attach_type == BPF_TRACE_FEXIT)) { + tgt_prog->expected_attach_type == BPF_TRACE_FEXIT || + tgt_prog->expected_attach_type == BPF_TRACE_FENTRY_MULTI || + tgt_prog->expected_attach_type == BPF_TRACE_FEXIT_MULTI)) { /* Program extensions can extend all program types * except fentry/fexit. The reason is the following. * The fentry/fexit programs are used for performance @@ -20784,6 +20789,9 @@ int bpf_check_attach_target(struct bpf_verifier_log *log, case BPF_LSM_CGROUP: case BPF_TRACE_FENTRY: case BPF_TRACE_FEXIT: + case BPF_MODIFY_RETURN_MULTI: + case BPF_TRACE_FENTRY_MULTI: + case BPF_TRACE_FEXIT_MULTI: if (!btf_type_is_func(t)) { bpf_log(log, "attach_btf_id %u is not a function\n", btf_id); @@ -20869,7 +20877,8 @@ int bpf_check_attach_target(struct bpf_verifier_log *log, bpf_log(log, "%s is not sleepable\n", tname); return ret; } - } else if (prog->expected_attach_type == BPF_MODIFY_RETURN) { + } else if (prog->expected_attach_type == BPF_MODIFY_RETURN || + prog->expected_attach_type == BPF_MODIFY_RETURN_MULTI) { if (tgt_prog) { module_put(mod); bpf_log(log, "can't modify return codes of BPF programs\n"); @@ -20922,6 +20931,9 @@ static bool can_be_sleepable(struct bpf_prog *prog) case BPF_TRACE_FEXIT: case BPF_MODIFY_RETURN: case BPF_TRACE_ITER: + case BPF_TRACE_FENTRY_MULTI: + case BPF_TRACE_FEXIT_MULTI: + case BPF_MODIFY_RETURN_MULTI: return true; default: return false; diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c index 6c4d90b24d46..712ae31593e5 100644 --- a/net/core/bpf_sk_storage.c +++ b/net/core/bpf_sk_storage.c @@ -371,6 +371,8 @@ static bool bpf_sk_storage_tracing_allowed(const struct bpf_prog *prog) return true; case BPF_TRACE_FENTRY: case BPF_TRACE_FEXIT: + case BPF_TRACE_FENTRY_MULTI: + case BPF_TRACE_FEXIT_MULTI: btf_vmlinux = bpf_get_btf_vmlinux(); if (IS_ERR_OR_NULL(btf_vmlinux)) return false; diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 85ec7fc799d7..f01c4f463c0d 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -1114,6 +1114,9 @@ enum bpf_attach_type { BPF_CGROUP_UNIX_GETSOCKNAME, BPF_NETKIT_PRIMARY, BPF_NETKIT_PEER, + BPF_TRACE_FENTRY_MULTI, + BPF_TRACE_FEXIT_MULTI, + BPF_MODIFY_RETURN_MULTI, __MAX_BPF_ATTACH_TYPE };
@@ -1134,6 +1137,7 @@ enum bpf_link_type { BPF_LINK_TYPE_TCX = 11, BPF_LINK_TYPE_UPROBE_MULTI = 12, BPF_LINK_TYPE_NETKIT = 13, + BPF_LINK_TYPE_TRACING_MULTI = 14, __MAX_BPF_LINK_TYPE, };
@@ -1726,6 +1730,12 @@ union bpf_attr { */ __u64 cookie; } tracing; + struct { + __u32 cnt; + __aligned_u64 tgt_fds; + __aligned_u64 btf_ids; + __aligned_u64 cookies; + } tracing_multi; struct { __u32 pf; __u32 hooknum;
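For reference, a minimal userspace sketch of driving the new tracing_multi link_create attributes through the bpf_link_create() opts that the libbpf patch of this series adds; the prog fd and BTF ids are placeholders:

#include <bpf/bpf.h>
#include <bpf/libbpf.h>

/* hypothetical helper: attach a loaded fentry-multi prog to two
 * kernel functions identified by their vmlinux BTF ids */
static int attach_fentry_multi(int prog_fd, __u32 id1, __u32 id2)
{
	__u32 btf_ids[2] = { id1, id2 };
	LIBBPF_OPTS(bpf_link_create_opts, opts,
		.tracing_multi.btf_ids = btf_ids,
		.tracing_multi.cnt = 2,
	);

	/* tgt_fds/cookies left unset: ids resolve against vmlinux BTF */
	return bpf_link_create(prog_fd, 0, BPF_TRACE_FENTRY_MULTI, &opts);
}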
Hi Menglong,
kernel test robot noticed the following build errors:
[auto build test ERROR on bpf-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/Menglong-Dong/bpf-tracing-add... base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master patch link: https://lore.kernel.org/r/20240311093526.1010158-7-dongmenglong.8%40bytedanc... patch subject: [PATCH bpf-next v2 6/9] bpf: tracing: add multi-link support config: hexagon-allmodconfig (https://download.01.org/0day-ci/archive/20240312/202403120218.Me518kSk-lkp@i...) compiler: clang version 19.0.0git (https://github.com/llvm/llvm-project 503c55e17037436dcd45ac69dea8967e67e3f5e8) reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240312/202403120218.Me518kSk-lkp@i...)
If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags | Reported-by: kernel test robot lkp@intel.com | Closes: https://lore.kernel.org/oe-kbuild-all/202403120218.Me518kSk-lkp@intel.com/
All errors (new ones prefixed by >>):
In file included from kernel/bpf/syscall.c:4: In file included from include/linux/bpf.h:31: In file included from include/linux/memcontrol.h:13: In file included from include/linux/cgroup.h:26: In file included from include/linux/kernel_stat.h:9: In file included from include/linux/interrupt.h:11: In file included from include/linux/hardirq.h:11: In file included from ./arch/hexagon/include/generated/asm/hardirq.h:1: In file included from include/asm-generic/hardirq.h:17: In file included from include/linux/irq.h:20: In file included from include/linux/io.h:13: In file included from arch/hexagon/include/asm/io.h:328: include/asm-generic/io.h:573:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic] 573 | val = __le32_to_cpu((__le32 __force)__raw_readl(PCI_IOBASE + addr)); | ~~~~~~~~~~ ^ include/uapi/linux/byteorder/little_endian.h:35:51: note: expanded from macro '__le32_to_cpu' 35 | #define __le32_to_cpu(x) ((__force __u32)(__le32)(x)) | ^ In file included from kernel/bpf/syscall.c:4: In file included from include/linux/bpf.h:31: In file included from include/linux/memcontrol.h:13: In file included from include/linux/cgroup.h:26: In file included from include/linux/kernel_stat.h:9: In file included from include/linux/interrupt.h:11: In file included from include/linux/hardirq.h:11: In file included from ./arch/hexagon/include/generated/asm/hardirq.h:1: In file included from include/asm-generic/hardirq.h:17: In file included from include/linux/irq.h:20: In file included from include/linux/io.h:13: In file included from arch/hexagon/include/asm/io.h:328: include/asm-generic/io.h:584:33: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic] 584 | __raw_writeb(value, PCI_IOBASE + addr); | ~~~~~~~~~~ ^ include/asm-generic/io.h:594:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic] 594 | __raw_writew((u16 __force)cpu_to_le16(value), PCI_IOBASE + addr); | ~~~~~~~~~~ ^ include/asm-generic/io.h:604:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic] 604 | __raw_writel((u32 __force)cpu_to_le32(value), PCI_IOBASE + addr); | ~~~~~~~~~~ ^ In file included from kernel/bpf/syscall.c:4: include/linux/bpf.h:751:48: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_arg_type') [-Wenum-enum-conversion] 751 | ARG_PTR_TO_MAP_VALUE_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_MAP_VALUE, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~ include/linux/bpf.h:752:43: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_arg_type') [-Wenum-enum-conversion] 752 | ARG_PTR_TO_MEM_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_MEM, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~ include/linux/bpf.h:753:43: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_arg_type') [-Wenum-enum-conversion] 753 | ARG_PTR_TO_CTX_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_CTX, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~ include/linux/bpf.h:754:45: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_arg_type') [-Wenum-enum-conversion] 754 | ARG_PTR_TO_SOCKET_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_SOCKET, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~ include/linux/bpf.h:755:44: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_arg_type') 
[-Wenum-enum-conversion] 755 | ARG_PTR_TO_STACK_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_STACK, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~ include/linux/bpf.h:756:45: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_arg_type') [-Wenum-enum-conversion] 756 | ARG_PTR_TO_BTF_ID_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_BTF_ID, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~ include/linux/bpf.h:760:38: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_arg_type') [-Wenum-enum-conversion] 760 | ARG_PTR_TO_UNINIT_MEM = MEM_UNINIT | ARG_PTR_TO_MEM, | ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~ include/linux/bpf.h:762:45: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_arg_type') [-Wenum-enum-conversion] 762 | ARG_PTR_TO_FIXED_SIZE_MEM = MEM_FIXED_SIZE | ARG_PTR_TO_MEM, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~ include/linux/bpf.h:785:48: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_return_type') [-Wenum-enum-conversion] 785 | RET_PTR_TO_MAP_VALUE_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_MAP_VALUE, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~ include/linux/bpf.h:786:45: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_return_type') [-Wenum-enum-conversion] 786 | RET_PTR_TO_SOCKET_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_SOCKET, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~ include/linux/bpf.h:787:47: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_return_type') [-Wenum-enum-conversion] 787 | RET_PTR_TO_TCP_SOCK_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_TCP_SOCK, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~ include/linux/bpf.h:788:50: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_return_type') [-Wenum-enum-conversion] 788 | RET_PTR_TO_SOCK_COMMON_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_SOCK_COMMON, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~ include/linux/bpf.h:790:49: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_return_type') [-Wenum-enum-conversion] 790 | RET_PTR_TO_DYNPTR_MEM_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_MEM, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~ include/linux/bpf.h:791:45: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_return_type') [-Wenum-enum-conversion] 791 | RET_PTR_TO_BTF_ID_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_BTF_ID, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~ include/linux/bpf.h:792:43: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_return_type') [-Wenum-enum-conversion] 792 | RET_PTR_TO_BTF_ID_TRUSTED = PTR_TRUSTED | RET_PTR_TO_BTF_ID, | ~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~ include/linux/bpf.h:903:44: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_reg_type') [-Wenum-enum-conversion] 903 | PTR_TO_MAP_VALUE_OR_NULL = PTR_MAYBE_NULL | PTR_TO_MAP_VALUE, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~ include/linux/bpf.h:904:42: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_reg_type') [-Wenum-enum-conversion] 904 | PTR_TO_SOCKET_OR_NULL = PTR_MAYBE_NULL | PTR_TO_SOCKET, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~ include/linux/bpf.h:905:46: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_reg_type') [-Wenum-enum-conversion] 905 | 
PTR_TO_SOCK_COMMON_OR_NULL = PTR_MAYBE_NULL | PTR_TO_SOCK_COMMON, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~ include/linux/bpf.h:906:44: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_reg_type') [-Wenum-enum-conversion] 906 | PTR_TO_TCP_SOCK_OR_NULL = PTR_MAYBE_NULL | PTR_TO_TCP_SOCK, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~ include/linux/bpf.h:907:42: warning: bitwise operation between different enumeration types ('enum bpf_type_flag' and 'enum bpf_reg_type') [-Wenum-enum-conversion] 907 | PTR_TO_BTF_ID_OR_NULL = PTR_MAYBE_NULL | PTR_TO_BTF_ID, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~
kernel/bpf/syscall.c:3538:2: error: call to undeclared function 'bpf_trampoline_multi_unlink_prog'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
3538 | bpf_trampoline_multi_unlink_prog(&multi_link->link); | ^ kernel/bpf/syscall.c:3538:2: note: did you mean 'bpf_trampoline_unlink_prog'? include/linux/bpf.h:1368:19: note: 'bpf_trampoline_unlink_prog' declared here 1368 | static inline int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, | ^
kernel/bpf/syscall.c:3815:8: error: call to undeclared function 'bpf_trampoline_multi_link_prog'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
3815 | err = bpf_trampoline_multi_link_prog(&link->link); | ^ kernel/bpf/syscall.c:3815:8: note: did you mean 'bpf_trampoline_unlink_prog'? include/linux/bpf.h:1368:19: note: 'bpf_trampoline_unlink_prog' declared here 1368 | static inline int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, | ^ kernel/bpf/syscall.c:3821:3: error: call to undeclared function 'bpf_trampoline_multi_unlink_prog'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] 3821 | bpf_trampoline_multi_unlink_prog(&link->link); | ^ kernel/bpf/syscall.c:6204:30: warning: bitwise operation between different enumeration types ('enum bpf_arg_type' and 'enum bpf_type_flag') [-Wenum-enum-conversion] 6204 | .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY, | ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~ 28 warnings and 3 errors generated.
vim +/bpf_trampoline_multi_unlink_prog +3538 kernel/bpf/syscall.c
3532 3533 static void bpf_tracing_multi_link_release(struct bpf_link *link) 3534 { 3535 struct bpf_tracing_multi_link *multi_link = 3536 container_of(link, struct bpf_tracing_multi_link, link.link); 3537
3538 bpf_trampoline_multi_unlink_prog(&multi_link->link);
3539 __bpf_tracing_multi_link_release(multi_link); 3540 } 3541
Hi Menglong,
kernel test robot noticed the following build errors:
[auto build test ERROR on bpf-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/Menglong-Dong/bpf-tracing-add... base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master patch link: https://lore.kernel.org/r/20240311093526.1010158-7-dongmenglong.8%40bytedanc... patch subject: [PATCH bpf-next v2 6/9] bpf: tracing: add multi-link support config: alpha-allyesconfig (https://download.01.org/0day-ci/archive/20240312/202403120515.LMAOyTdG-lkp@i...) compiler: alpha-linux-gcc (GCC) 13.2.0 reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240312/202403120515.LMAOyTdG-lkp@i...)
If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags | Reported-by: kernel test robot lkp@intel.com | Closes: https://lore.kernel.org/oe-kbuild-all/202403120515.LMAOyTdG-lkp@intel.com/
All errors (new ones prefixed by >>):
kernel/bpf/syscall.c: In function 'bpf_tracing_multi_link_release':
kernel/bpf/syscall.c:3538:9: error: implicit declaration of function 'bpf_trampoline_multi_unlink_prog'; did you mean 'bpf_trampoline_unlink_prog'? [-Werror=implicit-function-declaration]
3538 | bpf_trampoline_multi_unlink_prog(&multi_link->link); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | bpf_trampoline_unlink_prog kernel/bpf/syscall.c: In function 'bpf_tracing_prog_attach_multi':
kernel/bpf/syscall.c:3815:15: error: implicit declaration of function 'bpf_trampoline_multi_link_prog'; did you mean 'bpf_trampoline_unlink_prog'? [-Werror=implicit-function-declaration]
3815 | err = bpf_trampoline_multi_link_prog(&link->link); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | bpf_trampoline_unlink_prog cc1: some warnings being treated as errors
vim +3538 kernel/bpf/syscall.c
3532 3533 static void bpf_tracing_multi_link_release(struct bpf_link *link) 3534 { 3535 struct bpf_tracing_multi_link *multi_link = 3536 container_of(link, struct bpf_tracing_multi_link, link.link); 3537
3538 bpf_trampoline_multi_unlink_prog(&multi_link->link);
3539 __bpf_tracing_multi_link_release(multi_link); 3540 } 3541
By default, the kernel btfs that we load during program loading will be freed after the programs are loaded, in bpf_object_load(). However, we still need these btfs for multi-link tracing during attaching. Therefore, we don't free the btfs until the bpf object is closed if any bpf programs of the multi-link tracing type exist.
Meanwhile, introduce the new API bpf_object__free_btfs() to manually free the btfs after attaching.
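A minimal usage sketch, assuming an object file ("tracing_multi.bpf.o" is a placeholder) that contains multi-link tracing programs:

#include <bpf/libbpf.h>

static int load_attach_free(void)
{
	struct bpf_object *obj;

	obj = bpf_object__open_file("tracing_multi.bpf.o", NULL);
	if (!obj)
		return -1;
	if (bpf_object__load(obj))	/* btfs stay alive: multi-link progs exist */
		goto err;
	/* ... attach all fentry.multi/fexit.multi programs here ... */
	bpf_object__free_btfs(obj);	/* optional early free after attaching */
	return 0;			/* bpf_object__close() later also cleans up */
err:
	bpf_object__close(obj);		/* closing would also free the btfs */
	return -1;
}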
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com --- tools/lib/bpf/libbpf.c | 47 ++++++++++++++++++++++++++++++---------- tools/lib/bpf/libbpf.h | 2 ++ tools/lib/bpf/libbpf.map | 1 + 3 files changed, 38 insertions(+), 12 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 567ad367e7aa..fd5428494a7e 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -8267,6 +8267,39 @@ static int bpf_object_prepare_struct_ops(struct bpf_object *obj) return 0; }
+void bpf_object__free_btfs(struct bpf_object *obj) +{ + int i; + + /* clean up module BTFs */ + for (i = 0; i < obj->btf_module_cnt; i++) { + close(obj->btf_modules[i].fd); + btf__free(obj->btf_modules[i].btf); + free(obj->btf_modules[i].name); + } + free(obj->btf_modules); + obj->btf_modules = NULL; + obj->btf_module_cnt = 0; + + /* clean up vmlinux BTF */ + btf__free(obj->btf_vmlinux); + obj->btf_vmlinux = NULL; +} + +static void bpf_object_early_free_btf(struct bpf_object *obj) +{ + struct bpf_program *prog; + + bpf_object__for_each_program(prog, obj) { + if (prog->expected_attach_type == BPF_TRACE_FENTRY_MULTI || + prog->expected_attach_type == BPF_TRACE_FEXIT_MULTI || + prog->expected_attach_type == BPF_MODIFY_RETURN_MULTI) + return; + } + + bpf_object__free_btfs(obj); +} + static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const char *target_btf_path) { int err, i; @@ -8307,18 +8340,7 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch /* clean up fd_array */ zfree(&obj->fd_array);
- /* clean up module BTFs */ - for (i = 0; i < obj->btf_module_cnt; i++) { - close(obj->btf_modules[i].fd); - btf__free(obj->btf_modules[i].btf); - free(obj->btf_modules[i].name); - } - free(obj->btf_modules); - - /* clean up vmlinux BTF */ - btf__free(obj->btf_vmlinux); - obj->btf_vmlinux = NULL; - + bpf_object_early_free_btf(obj); obj->loaded = true; /* doesn't matter if successfully or not */
if (err) @@ -8791,6 +8813,7 @@ void bpf_object__close(struct bpf_object *obj) usdt_manager_free(obj->usdt_man); obj->usdt_man = NULL;
+ bpf_object__free_btfs(obj); bpf_gen__free(obj->gen_loader); bpf_object__elf_finish(obj); bpf_object_unload(obj); diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h index 5723cbbfcc41..c41a909ea4c1 100644 --- a/tools/lib/bpf/libbpf.h +++ b/tools/lib/bpf/libbpf.h @@ -299,6 +299,8 @@ LIBBPF_API struct bpf_program * bpf_object__find_program_by_name(const struct bpf_object *obj, const char *name);
+LIBBPF_API void bpf_object__free_btfs(struct bpf_object *obj); + LIBBPF_API int libbpf_prog_type_by_name(const char *name, enum bpf_prog_type *prog_type, enum bpf_attach_type *expected_attach_type); diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map index 86804fd90dd1..57642b78917f 100644 --- a/tools/lib/bpf/libbpf.map +++ b/tools/lib/bpf/libbpf.map @@ -413,4 +413,5 @@ LIBBPF_1.4.0 { bpf_token_create; btf__new_split; btf_ext__raw_data; + bpf_object__free_btfs; } LIBBPF_1.3.0;
On Mon, Mar 11, 2024 at 2:35 AM Menglong Dong dongmenglong.8@bytedance.com wrote:
By default, the kernel btfs that we load during program loading will be freed after the programs are loaded, in bpf_object_load(). However, we still need these btfs for multi-link tracing during attaching. Therefore, we don't free the btfs until the bpf object is closed if any bpf programs of the multi-link tracing type exist.
Meanwhile, introduce the new API bpf_object__free_btfs() to manually free the btfs after attaching.
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com
tools/lib/bpf/libbpf.c | 47 ++++++++++++++++++++++++++++++---------- tools/lib/bpf/libbpf.h | 2 ++ tools/lib/bpf/libbpf.map | 1 + 3 files changed, 38 insertions(+), 12 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 567ad367e7aa..fd5428494a7e 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -8267,6 +8267,39 @@ static int bpf_object_prepare_struct_ops(struct bpf_object *obj) return 0; }
+void bpf_object__free_btfs(struct bpf_object *obj) +{
int i;
/* clean up module BTFs */
for (i = 0; i < obj->btf_module_cnt; i++) {
close(obj->btf_modules[i].fd);
btf__free(obj->btf_modules[i].btf);
free(obj->btf_modules[i].name);
}
free(obj->btf_modules);
obj->btf_modules = NULL;
obj->btf_module_cnt = 0;
/* clean up vmlinux BTF */
btf__free(obj->btf_vmlinux);
obj->btf_vmlinux = NULL;
+}
+static void bpf_object_early_free_btf(struct bpf_object *obj) +{
struct bpf_program *prog;
bpf_object__for_each_program(prog, obj) {
if (prog->expected_attach_type == BPF_TRACE_FENTRY_MULTI ||
prog->expected_attach_type == BPF_TRACE_FEXIT_MULTI ||
prog->expected_attach_type == BPF_MODIFY_RETURN_MULTI)
return;
}
bpf_object__free_btfs(obj);
+}
static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const char *target_btf_path) { int err, i; @@ -8307,18 +8340,7 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch /* clean up fd_array */ zfree(&obj->fd_array);
/* clean up module BTFs */
for (i = 0; i < obj->btf_module_cnt; i++) {
close(obj->btf_modules[i].fd);
btf__free(obj->btf_modules[i].btf);
free(obj->btf_modules[i].name);
}
free(obj->btf_modules);
/* clean up vmlinux BTF */
btf__free(obj->btf_vmlinux);
obj->btf_vmlinux = NULL;
bpf_object_early_free_btf(obj); obj->loaded = true; /* doesn't matter if successfully or not */ if (err)
@@ -8791,6 +8813,7 @@ void bpf_object__close(struct bpf_object *obj) usdt_manager_free(obj->usdt_man); obj->usdt_man = NULL;
bpf_object__free_btfs(obj); bpf_gen__free(obj->gen_loader); bpf_object__elf_finish(obj); bpf_object_unload(obj);
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h index 5723cbbfcc41..c41a909ea4c1 100644 --- a/tools/lib/bpf/libbpf.h +++ b/tools/lib/bpf/libbpf.h @@ -299,6 +299,8 @@ LIBBPF_API struct bpf_program * bpf_object__find_program_by_name(const struct bpf_object *obj, const char *name);
+LIBBPF_API void bpf_object__free_btfs(struct bpf_object *obj);
It shouldn't be exported. libbpf should clean it up when bpf_object is freed.
On Tue, Mar 12, 2024 at 9:55 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Mon, Mar 11, 2024 at 2:35 AM Menglong Dong dongmenglong.8@bytedance.com wrote:
By default, the kernel btfs that we load during program loading will be freed after the programs are loaded, in bpf_object_load(). However, we still need these btfs for multi-link tracing during attaching. Therefore, we don't free the btfs until the bpf object is closed if any bpf programs of the multi-link tracing type exist.
Meanwhile, introduce the new API bpf_object__free_btfs() to manually free the btfs after attaching.
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com
tools/lib/bpf/libbpf.c | 47 ++++++++++++++++++++++++++++++---------- tools/lib/bpf/libbpf.h | 2 ++ tools/lib/bpf/libbpf.map | 1 + 3 files changed, 38 insertions(+), 12 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 567ad367e7aa..fd5428494a7e 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -8267,6 +8267,39 @@ static int bpf_object_prepare_struct_ops(struct bpf_object *obj) return 0; }
+void bpf_object__free_btfs(struct bpf_object *obj) +{
int i;
/* clean up module BTFs */
for (i = 0; i < obj->btf_module_cnt; i++) {
close(obj->btf_modules[i].fd);
btf__free(obj->btf_modules[i].btf);
free(obj->btf_modules[i].name);
}
free(obj->btf_modules);
obj->btf_modules = NULL;
obj->btf_module_cnt = 0;
/* clean up vmlinux BTF */
btf__free(obj->btf_vmlinux);
obj->btf_vmlinux = NULL;
+}
+static void bpf_object_early_free_btf(struct bpf_object *obj) +{
struct bpf_program *prog;
bpf_object__for_each_program(prog, obj) {
if (prog->expected_attach_type == BPF_TRACE_FENTRY_MULTI ||
prog->expected_attach_type == BPF_TRACE_FEXIT_MULTI ||
prog->expected_attach_type == BPF_MODIFY_RETURN_MULTI)
return;
}
bpf_object__free_btfs(obj);
+}
static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const char *target_btf_path) { int err, i; @@ -8307,18 +8340,7 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch /* clean up fd_array */ zfree(&obj->fd_array);
/* clean up module BTFs */
for (i = 0; i < obj->btf_module_cnt; i++) {
close(obj->btf_modules[i].fd);
btf__free(obj->btf_modules[i].btf);
free(obj->btf_modules[i].name);
}
free(obj->btf_modules);
/* clean up vmlinux BTF */
btf__free(obj->btf_vmlinux);
obj->btf_vmlinux = NULL;
bpf_object_early_free_btf(obj); obj->loaded = true; /* doesn't matter if successfully or not */ if (err)
@@ -8791,6 +8813,7 @@ void bpf_object__close(struct bpf_object *obj) usdt_manager_free(obj->usdt_man); obj->usdt_man = NULL;
bpf_object__free_btfs(obj); bpf_gen__free(obj->gen_loader); bpf_object__elf_finish(obj); bpf_object_unload(obj);
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h index 5723cbbfcc41..c41a909ea4c1 100644 --- a/tools/lib/bpf/libbpf.h +++ b/tools/lib/bpf/libbpf.h @@ -299,6 +299,8 @@ LIBBPF_API struct bpf_program * bpf_object__find_program_by_name(const struct bpf_object *obj, const char *name);
+LIBBPF_API void bpf_object__free_btfs(struct bpf_object *obj);
It shouldn't be exported. libbpf should clean it up when bpf_object is freed.
Yes, libbpf will clean up the btfs when the bpf_object is freed in this commit. And I'm trying to offer a way for users to manually free the btfs early to reduce memory usage. Otherwise, the btfs that we opened will keep existing until we close the bpf_object.
This is optional, I can remove it if you prefer.
Thanks! Menglong Dong
On Mon, Mar 11, 2024 at 7:05 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
+LIBBPF_API void bpf_object__free_btfs(struct bpf_object *obj);
It shouldn't be exported. libbpf should clean it up when bpf_object is freed.
Yes, libbpf will clean up the btfs when the bpf_object is freed in this commit. And I'm trying to offer a way for users to manually free the btfs early to reduce memory usage. Otherwise, the btfs that we opened will keep existing until we close the bpf_object.
This is optional, I can remove it if you prefer.
Let's not extend libbpf api unless we really need to. bpf_program__attach_trace_multi_opts() and *skel*__attach() can probably free them. I don't see a use case where you'd want to keep them afterwards.
On Tue, Mar 12, 2024 at 10:13 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Mon, Mar 11, 2024 at 7:05 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
+LIBBPF_API void bpf_object__free_btfs(struct bpf_object *obj);
It shouldn't be exported. libbpf should clean it up when bpf_object is freed.
Yes, libbpf will clean up the btfs when the bpf_object is freed in this commit. And I'm trying to offer a way for users to manually free the btfs early to reduce memory usage. Otherwise, the btfs that we opened will keep existing until we close the bpf_object.
This is optional, I can remove it if you prefer.
Let's not extend libbpf api unless we really need to. bpf_program__attach_trace_multi_opts() and *skel*__attach() can probably free them.
That's a good idea! Should we add a "bool free_btf" field to struct bpf_trace_multi_opts? bpf_program__attach_trace_multi_opts() can be called multiple times for a bpf_object that has multiple bpf programs of the tracing multi-link type. Or, can we free the btfs automatically once we find that all tracing multi-link programs are attached?
Thanks! Menglong Dong
I don't see a use case where you'd want to keep them afterwards.
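For what it's worth, a rough sketch of the second option raised above (free the btfs automatically once the last tracing multi-link program is attached); is_tracing_multi() and prog_is_attached() are hypothetical helpers that libbpf would need to grow:

static void bpf_object_maybe_free_btfs(struct bpf_object *obj)
{
	struct bpf_program *prog;

	bpf_object__for_each_program(prog, obj) {
		/* only tracing multi-link programs pin the btfs */
		if (!is_tracing_multi(prog->expected_attach_type))
			continue;
		/* libbpf would need to track per-program attach state */
		if (!prog_is_attached(prog))
			return;
	}
	bpf_object__free_btfs(obj);
}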
Add libbpf support for the following attach types:
BPF_TRACE_FENTRY_MULTI BPF_TRACE_FEXIT_MULTI BPF_MODIFY_RETURN_MULTI
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com --- tools/bpf/bpftool/common.c | 3 + tools/lib/bpf/bpf.c | 10 +++ tools/lib/bpf/bpf.h | 6 ++ tools/lib/bpf/libbpf.c | 168 ++++++++++++++++++++++++++++++++++++- tools/lib/bpf/libbpf.h | 14 ++++ tools/lib/bpf/libbpf.map | 1 + 6 files changed, 199 insertions(+), 3 deletions(-)
diff --git a/tools/bpf/bpftool/common.c b/tools/bpf/bpftool/common.c index cc6e6aae2447..ffc85256671d 100644 --- a/tools/bpf/bpftool/common.c +++ b/tools/bpf/bpftool/common.c @@ -1089,6 +1089,9 @@ const char *bpf_attach_type_input_str(enum bpf_attach_type t) case BPF_TRACE_FENTRY: return "fentry"; case BPF_TRACE_FEXIT: return "fexit"; case BPF_MODIFY_RETURN: return "mod_ret"; + case BPF_TRACE_FENTRY_MULTI: return "fentry_multi"; + case BPF_TRACE_FEXIT_MULTI: return "fexit_multi"; + case BPF_MODIFY_RETURN_MULTI: return "mod_ret_multi"; case BPF_SK_REUSEPORT_SELECT: return "sk_skb_reuseport_select"; case BPF_SK_REUSEPORT_SELECT_OR_MIGRATE: return "sk_skb_reuseport_select_or_migrate"; default: return libbpf_bpf_attach_type_str(t); diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c index 97ec005c3c47..63d4734dbae4 100644 --- a/tools/lib/bpf/bpf.c +++ b/tools/lib/bpf/bpf.c @@ -793,6 +793,16 @@ int bpf_link_create(int prog_fd, int target_fd, if (!OPTS_ZEROED(opts, tracing)) return libbpf_err(-EINVAL); break; + case BPF_TRACE_FENTRY_MULTI: + case BPF_TRACE_FEXIT_MULTI: + case BPF_MODIFY_RETURN_MULTI: + attr.link_create.tracing_multi.btf_ids = ptr_to_u64(OPTS_GET(opts, tracing_multi.btf_ids, 0)); + attr.link_create.tracing_multi.tgt_fds = ptr_to_u64(OPTS_GET(opts, tracing_multi.tgt_fds, 0)); + attr.link_create.tracing_multi.cookies = ptr_to_u64(OPTS_GET(opts, tracing_multi.cookies, 0)); + attr.link_create.tracing_multi.cnt = OPTS_GET(opts, tracing_multi.cnt, 0); + if (!OPTS_ZEROED(opts, tracing_multi)) + return libbpf_err(-EINVAL); + break; case BPF_NETFILTER: attr.link_create.netfilter.pf = OPTS_GET(opts, netfilter.pf, 0); attr.link_create.netfilter.hooknum = OPTS_GET(opts, netfilter.hooknum, 0); diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h index df0db2f0cdb7..e28c88d6cfa4 100644 --- a/tools/lib/bpf/bpf.h +++ b/tools/lib/bpf/bpf.h @@ -419,6 +419,12 @@ struct bpf_link_create_opts { struct { __u64 cookie; } tracing; + struct { + __u32 cnt; + const __u32 *btf_ids; + const __u32 *tgt_fds; + const __u64 *cookies; + } tracing_multi; struct { __u32 pf; __u32 hooknum; diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index fd5428494a7e..821214774941 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -132,6 +132,9 @@ static const char * const attach_type_name[] = { [BPF_TRACE_UPROBE_MULTI] = "trace_uprobe_multi", [BPF_NETKIT_PRIMARY] = "netkit_primary", [BPF_NETKIT_PEER] = "netkit_peer", + [BPF_TRACE_FENTRY_MULTI] = "trace_fentry_multi", + [BPF_TRACE_FEXIT_MULTI] = "trace_fexit_multi", + [BPF_MODIFY_RETURN_MULTI] = "modify_return_multi", };
 static const char * const link_type_name[] = {
@@ -381,6 +384,8 @@ enum sec_def_flags {
     SEC_XDP_FRAGS = 16,
     /* Setup proper attach type for usdt probes. */
     SEC_USDT = 32,
+    /* attachment target is multi-link */
+    SEC_ATTACH_BTF_MULTI = 64,
 };
 struct bpf_sec_def {
@@ -7160,9 +7165,9 @@ static int libbpf_prepare_prog_load(struct bpf_program *prog,
     if ((def & SEC_USDT) && kernel_supports(prog->obj, FEAT_UPROBE_MULTI_LINK))
         prog->expected_attach_type = BPF_TRACE_UPROBE_MULTI;

-    if ((def & SEC_ATTACH_BTF) && !prog->attach_btf_id) {
+    if ((def & (SEC_ATTACH_BTF | SEC_ATTACH_BTF_MULTI)) && !prog->attach_btf_id) {
         int btf_obj_fd = 0, btf_type_id = 0, err;
-        const char *attach_name;
+        const char *attach_name, *name_end;

         attach_name = strchr(prog->sec_name, '/');
         if (!attach_name) {
@@ -7181,7 +7186,27 @@ static int libbpf_prepare_prog_load(struct bpf_program *prog,
         }
         attach_name++; /* skip over / */

-        err = libbpf_find_attach_btf_id(prog, attach_name, &btf_obj_fd, &btf_type_id);
+        name_end = strchr(attach_name, ',');
+        /* for multi-link tracing, use the first target symbol during
+         * loading.
+         */
+        if ((def & SEC_ATTACH_BTF_MULTI) && name_end) {
+            int len = name_end - attach_name + 1;
+            char *first_tgt;
+
+            first_tgt = malloc(len);
+            if (!first_tgt)
+                return -ENOMEM;
+            strncpy(first_tgt, attach_name, len);
+            first_tgt[len - 1] = '\0';
+            err = libbpf_find_attach_btf_id(prog, first_tgt, &btf_obj_fd,
+                                            &btf_type_id);
+            free(first_tgt);
+        } else {
+            err = libbpf_find_attach_btf_id(prog, attach_name, &btf_obj_fd,
+                                            &btf_type_id);
+        }
+
         if (err)
             return err;
@@ -9149,6 +9174,7 @@ static int attach_kprobe_multi(const struct bpf_program *prog, long cookie, stru
 static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link);
 static int attach_lsm(const struct bpf_program *prog, long cookie, struct bpf_link **link);
 static int attach_iter(const struct bpf_program *prog, long cookie, struct bpf_link **link);
+static int attach_trace_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link);
 static const struct bpf_sec_def section_defs[] = {
     SEC_DEF("socket", SOCKET_FILTER, 0, SEC_NONE),
@@ -9192,6 +9218,13 @@ static const struct bpf_sec_def section_defs[] = {
     SEC_DEF("fentry.s+", TRACING, BPF_TRACE_FENTRY, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace),
     SEC_DEF("fmod_ret.s+", TRACING, BPF_MODIFY_RETURN, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace),
     SEC_DEF("fexit.s+", TRACING, BPF_TRACE_FEXIT, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace),
+    SEC_DEF("tp_btf+", TRACING, BPF_TRACE_RAW_TP, SEC_ATTACH_BTF, attach_trace),
+    SEC_DEF("fentry.multi+", TRACING, BPF_TRACE_FENTRY_MULTI, SEC_ATTACH_BTF_MULTI, attach_trace_multi),
+    SEC_DEF("fmod_ret.multi+", TRACING, BPF_MODIFY_RETURN_MULTI, SEC_ATTACH_BTF_MULTI, attach_trace_multi),
+    SEC_DEF("fexit.multi+", TRACING, BPF_TRACE_FEXIT_MULTI, SEC_ATTACH_BTF_MULTI, attach_trace_multi),
+    SEC_DEF("fentry.multi.s+", TRACING, BPF_TRACE_FENTRY_MULTI, SEC_ATTACH_BTF_MULTI | SEC_SLEEPABLE, attach_trace_multi),
+    SEC_DEF("fmod_ret.multi.s+", TRACING, BPF_MODIFY_RETURN_MULTI, SEC_ATTACH_BTF_MULTI | SEC_SLEEPABLE, attach_trace_multi),
+    SEC_DEF("fexit.multi.s+", TRACING, BPF_TRACE_FEXIT_MULTI, SEC_ATTACH_BTF_MULTI | SEC_SLEEPABLE, attach_trace_multi),
     SEC_DEF("freplace+", EXT, 0, SEC_ATTACH_BTF, attach_trace),
     SEC_DEF("lsm+", LSM, BPF_LSM_MAC, SEC_ATTACH_BTF, attach_lsm),
     SEC_DEF("lsm.s+", LSM, BPF_LSM_MAC, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_lsm),
@@ -12300,6 +12333,135 @@ static int attach_trace(const struct bpf_program *prog, long cookie, struct bpf_
     return libbpf_get_error(*link);
 }
+struct bpf_link *bpf_program__attach_trace_multi_opts(const struct bpf_program *prog,
+                                                      const struct bpf_trace_multi_opts *opts)
+{
+    LIBBPF_OPTS(bpf_link_create_opts, link_opts);
+    __u32 *btf_ids = NULL, *tgt_fds = NULL;
+    struct bpf_link *link = NULL;
+    char errmsg[STRERR_BUFSIZE];
+    int prog_fd, pfd, cnt, err;
+
+    if (!OPTS_VALID(opts, bpf_trace_multi_opts))
+        return libbpf_err_ptr(-EINVAL);
+
+    prog_fd = bpf_program__fd(prog);
+    if (prog_fd < 0) {
+        pr_warn("prog '%s': can't attach before loaded\n", prog->name);
+        return libbpf_err_ptr(-EINVAL);
+    }
+
+    cnt = OPTS_GET(opts, cnt, 0);
+    if (opts->syms) {
+        int btf_obj_fd, btf_type_id, i;
+
+        if (opts->btf_ids || opts->tgt_fds) {
+            pr_warn("can't set both opts->syms and opts->btf_ids\n");
+            return libbpf_err_ptr(-EINVAL);
+        }
+
+        btf_ids = malloc(sizeof(*btf_ids) * cnt);
+        tgt_fds = malloc(sizeof(*tgt_fds) * cnt);
+        if (!btf_ids || !tgt_fds) {
+            err = -ENOMEM;
+            goto err_free;
+        }
+        for (i = 0; i < cnt; i++) {
+            btf_obj_fd = btf_type_id = 0;
+
+            err = find_kernel_btf_id(prog->obj, opts->syms[i],
+                                     prog->expected_attach_type, &btf_obj_fd,
+                                     &btf_type_id);
+            if (err)
+                goto err_free;
+            btf_ids[i] = btf_type_id;
+            tgt_fds[i] = btf_obj_fd;
+        }
+        link_opts.tracing_multi.btf_ids = btf_ids;
+        link_opts.tracing_multi.tgt_fds = tgt_fds;
+    } else {
+        link_opts.tracing_multi.btf_ids = OPTS_GET(opts, btf_ids, 0);
+        link_opts.tracing_multi.tgt_fds = OPTS_GET(opts, tgt_fds, 0);
+    }
+
+    link = calloc(1, sizeof(*link));
+    if (!link) {
+        err = -ENOMEM;
+        goto err_free;
+    }
+    link->detach = &bpf_link__detach_fd;
+
+    link_opts.tracing_multi.cookies = OPTS_GET(opts, cookies, 0);
+    link_opts.tracing_multi.cnt = cnt;
+
+    pfd = bpf_link_create(prog_fd, 0, bpf_program__expected_attach_type(prog), &link_opts);
+    if (pfd < 0) {
+        err = -errno;
+        pr_warn("prog '%s': failed to attach: %s\n",
+                prog->name, libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
+        goto err_free;
+    }
+    link->fd = pfd;
+
+    free(btf_ids);
+    free(tgt_fds);
+    return link;
+err_free:
+    free(btf_ids);
+    free(tgt_fds);
+    free(link);
+    return libbpf_err_ptr(err);
+}
+
+static int attach_trace_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link)
+{
+    LIBBPF_OPTS(bpf_trace_multi_opts, opts);
+    int i, err, len, cnt = 1;
+    char **syms, *buf, *name;
+    const char *spec;
+
+    spec = strchr(prog->sec_name, '/');
+    if (!spec || !*(++spec))
+        return -EINVAL;
+
+    len = strlen(spec);
+    buf = malloc(len + 1);
+    if (!buf)
+        return -ENOMEM;
+
+    strcpy(buf, spec);
+    for (i = 0; i < len; i++) {
+        if (buf[i] == ',')
+            cnt++;
+    }
+
+    syms = malloc(sizeof(*syms) * cnt);
+    if (!syms) {
+        err = -ENOMEM;
+        goto out_free;
+    }
+
+    opts.syms = (const char **)syms;
+    opts.cnt = cnt;
+    name = buf;
+    err = -EINVAL;
+    while (name) {
+        if (*name == '\0')
+            goto out_free;
+        *(syms++) = name;
+        name = strchr(name, ',');
+        if (name)
+            *(name++) = '\0';
+    }
+
+    *link = bpf_program__attach_trace_multi_opts(prog, &opts);
+    err = libbpf_get_error(*link);
+out_free:
+    free(buf);
+    free(opts.syms);
+    return err;
+}
+
 static int attach_lsm(const struct bpf_program *prog, long cookie, struct bpf_link **link)
 {
     *link = bpf_program__attach_lsm(prog);
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index c41a909ea4c1..9bca44d5adfa 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -790,6 +790,20 @@ bpf_program__attach_xdp(const struct bpf_program *prog, int ifindex);
 LIBBPF_API struct bpf_link *
 bpf_program__attach_freplace(const struct bpf_program *prog,
                              int target_fd, const char *attach_func_name);
+
+struct bpf_trace_multi_opts {
+    /* size of this struct, for forward/backward compatibility */
+    size_t sz;
+    const char **syms;
+    __u32 *btf_ids;
+    __u32 *tgt_fds;
+    __u64 *cookies;
+    size_t cnt;
+};
+#define bpf_trace_multi_opts__last_field cnt
+
+LIBBPF_API struct bpf_link *
+bpf_program__attach_trace_multi_opts(const struct bpf_program *prog,
+                                     const struct bpf_trace_multi_opts *opts);
 struct bpf_netfilter_opts {
     /* size of this struct, for forward/backward compatibility */
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 57642b78917f..94933898df44 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -414,4 +414,5 @@ LIBBPF_1.4.0 {
         btf__new_split;
         btf_ext__raw_data;
         bpf_object__free_btfs;
+        bpf_program__attach_trace_multi_opts;
 } LIBBPF_1.3.0;
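To make the intended usage concrete, here is a minimal caller-side sketch of the programmatic attach path added above. The traced symbols are only examples; prog is assumed to be a loaded program whose expected attach type is BPF_TRACE_FENTRY_MULTI (e.g. from a SEC("fentry.multi/...") section):

#include <errno.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

static struct bpf_link *attach_two_fentries(struct bpf_program *prog)
{
    LIBBPF_OPTS(bpf_trace_multi_opts, opts);
    const char *syms[] = { "bpf_fentry_test1", "bpf_fentry_test2" };
    __u64 cookies[] = { 1, 2 };

    opts.syms = syms;
    opts.cookies = cookies;
    opts.cnt = 2;
    /* Symbols are resolved to (module BTF fd, BTF type id) pairs
     * internally via find_kernel_btf_id(); alternatively the caller
     * can fill opts.btf_ids/opts.tgt_fds directly and leave syms NULL.
     */
    return bpf_program__attach_trace_multi_opts(prog, &opts);
}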
2024-03-11 09:35 UTC+0000 ~ Menglong Dong dongmenglong.8@bytedance.com
Add support for the following attach types:
BPF_TRACE_FENTRY_MULTI
BPF_TRACE_FEXIT_MULTI
BPF_MODIFY_RETURN_MULTI
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com
 tools/bpf/bpftool/common.c |   3 +
 tools/lib/bpf/bpf.c        |  10 +++
 tools/lib/bpf/bpf.h        |   6 ++
 tools/lib/bpf/libbpf.c     | 168 ++++++++++++++++++++++++++++++++++++-
 tools/lib/bpf/libbpf.h     |  14 ++++
 tools/lib/bpf/libbpf.map   |   1 +
 6 files changed, 199 insertions(+), 3 deletions(-)
diff --git a/tools/bpf/bpftool/common.c b/tools/bpf/bpftool/common.c
index cc6e6aae2447..ffc85256671d 100644
--- a/tools/bpf/bpftool/common.c
+++ b/tools/bpf/bpftool/common.c
@@ -1089,6 +1089,9 @@ const char *bpf_attach_type_input_str(enum bpf_attach_type t)
     case BPF_TRACE_FENTRY: return "fentry";
     case BPF_TRACE_FEXIT: return "fexit";
     case BPF_MODIFY_RETURN: return "mod_ret";
+    case BPF_TRACE_FENTRY_MULTI: return "fentry_multi";
+    case BPF_TRACE_FEXIT_MULTI: return "fexit_multi";
+    case BPF_MODIFY_RETURN_MULTI: return "mod_ret_multi";
     case BPF_SK_REUSEPORT_SELECT: return "sk_skb_reuseport_select";
     case BPF_SK_REUSEPORT_SELECT_OR_MIGRATE: return "sk_skb_reuseport_select_or_migrate";
     default: return libbpf_bpf_attach_type_str(t);
Hi, please drop this part in bpftool.
bpf_attach_type_input_str() is used for the legacy attach type names that were in use before bpftool switched to libbpf_bpf_attach_type_str(), and that are still supported today. The names for new attach types should just be retrieved with libbpf_bpf_attach_type_str(). Also, bpf_attach_type_input_str() is only used for attaching cgroup-related programs with "bpftool cgroup (at|de)tach".
Thanks, Quentin
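In other words, bpftool picks up names for new attach types from libbpf directly. With the attach_type_name[] entries added in the libbpf part of this patch (which define BPF_TRACE_FENTRY_MULTI and friends), something like the following sketch would print "trace_fentry_multi" with no bpftool change at all:

#include <stdio.h>
#include <bpf/libbpf.h>

int main(void)
{
    /* libbpf_bpf_attach_type_str() returns the canonical name for an
     * attach type, or NULL if libbpf does not know it.
     */
    const char *name = libbpf_bpf_attach_type_str(BPF_TRACE_FENTRY_MULTI);

    printf("%s\n", name ? name : "(unknown)");
    return 0;
}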
On Mon, Mar 11, 2024 at 11:29 PM Quentin Monnet quentin@isovalent.com wrote:
2024-03-11 09:35 UTC+0000 ~ Menglong Dong dongmenglong.8@bytedance.com
Add support for the following attach types:
BPF_TRACE_FENTRY_MULTI
BPF_TRACE_FEXIT_MULTI
BPF_MODIFY_RETURN_MULTI
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com
 tools/bpf/bpftool/common.c |   3 +
 tools/lib/bpf/bpf.c        |  10 +++
 tools/lib/bpf/bpf.h        |   6 ++
 tools/lib/bpf/libbpf.c     | 168 ++++++++++++++++++++++++++++++++++++-
 tools/lib/bpf/libbpf.h     |  14 ++++
 tools/lib/bpf/libbpf.map   |   1 +
 6 files changed, 199 insertions(+), 3 deletions(-)
diff --git a/tools/bpf/bpftool/common.c b/tools/bpf/bpftool/common.c
index cc6e6aae2447..ffc85256671d 100644
--- a/tools/bpf/bpftool/common.c
+++ b/tools/bpf/bpftool/common.c
@@ -1089,6 +1089,9 @@ const char *bpf_attach_type_input_str(enum bpf_attach_type t)
     case BPF_TRACE_FENTRY: return "fentry";
     case BPF_TRACE_FEXIT: return "fexit";
     case BPF_MODIFY_RETURN: return "mod_ret";
+    case BPF_TRACE_FENTRY_MULTI: return "fentry_multi";
+    case BPF_TRACE_FEXIT_MULTI: return "fexit_multi";
+    case BPF_MODIFY_RETURN_MULTI: return "mod_ret_multi";
     case BPF_SK_REUSEPORT_SELECT: return "sk_skb_reuseport_select";
     case BPF_SK_REUSEPORT_SELECT_OR_MIGRATE: return "sk_skb_reuseport_select_or_migrate";
     default: return libbpf_bpf_attach_type_str(t);
Hi, please drop this part in bpftool.
bpf_attach_type_input_str() is used for the legacy attach type names that were in use before bpftool switched to libbpf_bpf_attach_type_str(), and that are still supported today. The names for new attach types should just be retrieved with libbpf_bpf_attach_type_str(). Also, bpf_attach_type_input_str() is only used for attaching cgroup-related programs with "bpftool cgroup (at|de)tach".
Okay! I was confused by this function, since it duplicates the attach type names, but I understand it now.
Thanks! Menglong Dong
Thanks, Quentin
On Mon, Mar 11, 2024 at 2:35 AM Menglong Dong dongmenglong.8@bytedance.com wrote:
err = libbpf_find_attach_btf_id(prog, attach_name, &btf_obj_fd, &btf_type_id);
name_end = strchr(attach_name, ',');
/* for multi-link tracing, use the first target symbol during
* loading.
*/
if ((def & SEC_ATTACH_BTF_MULTI) && name_end) {
int len = name_end - attach_name + 1;
char *first_tgt;
first_tgt = malloc(len);
if (!first_tgt)
return -ENOMEM;
strncpy(first_tgt, attach_name, len);
first_tgt[len - 1] = '\0';
err = libbpf_find_attach_btf_id(prog, first_tgt, &btf_obj_fd,
&btf_type_id);
free(first_tgt);
} else {
err = libbpf_find_attach_btf_id(prog, attach_name, &btf_obj_fd,
&btf_type_id);
}
Pls use glob_match the way [ku]probe multi are doing instead of exact match.
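For comparison, this is the user-facing behavior being requested: kprobe.multi section specs accept wildcards, so the multi-link tracing equivalent would let one program cover a whole family of functions. A hypothetical section spec (not supported by the patch as posted, which matches each comma-separated symbol exactly):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

/* Hypothetical glob-style target spec, in the spirit of
 * SEC("kprobe.multi/bpf_fentry_test*").
 */
SEC("fentry.multi/bpf_fentry_test*")
int BPF_PROG(fentry_glob_test)
{
    return 0;
}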
On Tue, Mar 12, 2024 at 9:56 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Mon, Mar 11, 2024 at 2:35 AM Menglong Dong dongmenglong.8@bytedance.com wrote:
err = libbpf_find_attach_btf_id(prog, attach_name, &btf_obj_fd, &btf_type_id);
name_end = strchr(attach_name, ',');
/* for multi-link tracing, use the first target symbol during
* loading.
*/
if ((def & SEC_ATTACH_BTF_MULTI) && name_end) {
int len = name_end - attach_name + 1;
char *first_tgt;
first_tgt = malloc(len);
if (!first_tgt)
return -ENOMEM;
strncpy(first_tgt, attach_name, len);
first_tgt[len - 1] = '\0';
err = libbpf_find_attach_btf_id(prog, first_tgt, &btf_obj_fd,
&btf_type_id);
free(first_tgt);
} else {
err = libbpf_find_attach_btf_id(prog, attach_name, &btf_obj_fd,
&btf_type_id);
}
Pls use glob_match the way [ku]probe multi are doing instead of exact match.
Hello,
I'm a bit skeptical about the benefit of glob_match. I have rarely seen a use case where the kernel functions we want to trace share a naming pattern, so exact matching seems more useful.
Can we use both exact and glob match here?
Thanks! Menglong Dong
On Mon, Mar 11, 2024 at 7:44 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
On Tue, Mar 12, 2024 at 9:56 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Mon, Mar 11, 2024 at 2:35 AM Menglong Dong dongmenglong.8@bytedance.com wrote:
err = libbpf_find_attach_btf_id(prog, attach_name, &btf_obj_fd, &btf_type_id);
name_end = strchr(attach_name, ',');
/* for multi-link tracing, use the first target symbol during
* loading.
*/
if ((def & SEC_ATTACH_BTF_MULTI) && name_end) {
int len = name_end - attach_name + 1;
char *first_tgt;
first_tgt = malloc(len);
if (!first_tgt)
return -ENOMEM;
strncpy(first_tgt, attach_name, len);
first_tgt[len - 1] = '\0';
err = libbpf_find_attach_btf_id(prog, first_tgt, &btf_obj_fd,
&btf_type_id);
free(first_tgt);
} else {
err = libbpf_find_attach_btf_id(prog, attach_name, &btf_obj_fd,
&btf_type_id);
}
Pls use glob_match the way [ku]probe multi are doing instead of exact match.
Hello,
I'm a bit skeptical about the benefit of glob_match. I have rarely seen a use case where the kernel functions we want to trace share a naming pattern, so exact matching seems more useful.
Can we use both exact and glob match here?
exact is a subset of glob_match. Pls follow the pattern that [ku]probe multi established in terms of user interface expectations.
On Wed, Mar 13, 2024 at 12:12 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Mon, Mar 11, 2024 at 7:44 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
On Tue, Mar 12, 2024 at 9:56 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Mon, Mar 11, 2024 at 2:35 AM Menglong Dong dongmenglong.8@bytedance.com wrote:
[...]
Pls use glob_match the way [ku]probe multi are doing instead of exact match.
Hello,
I'm a bit skeptical about the benefit of glob_match. I have rarely seen a use case where the kernel functions we want to trace share a naming pattern, so exact matching seems more useful.
Can we use both exact and glob match here?
exact is a subset of glob_match. Pls follow the pattern that [ku]probe multi established in terms of user interface expectations.
Okay!
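For readers unfamiliar with it, the matcher that [ku]probe multi attachment relies on in libbpf supports only '*' (any sequence of characters, including none) and '?' (any single character), which is why exact names are just a degenerate pattern. A self-contained sketch along the lines of that internal helper:

#include <stdbool.h>

/* Match str against pat with kprobe.multi-style semantics: '*' matches
 * any run of characters and '?' matches exactly one character.
 */
static bool glob_match(const char *str, const char *pat)
{
    while (*str && *pat && *pat != '*') {
        if (*pat == '?') {      /* any single character */
            str++;
            pat++;
            continue;
        }
        if (*str != *pat)
            return false;
        str++;
        pat++;
    }
    /* check wildcard */
    if (*pat == '*') {
        while (*pat == '*')     /* collapse consecutive stars */
            pat++;
        if (!*pat)              /* trailing star matches the rest */
            return true;
        while (*str)            /* try every possible tail */
            if (glob_match(str++, pat))
                return true;
    }
    return !*str && !*pat;
}

With these semantics, glob_match("bpf_fentry_test1", "bpf_fentry_test1") and glob_match("bpf_fentry_test1", "bpf_fentry_*") both hold, so an exact name keeps working unchanged.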
In this commit, we add some testcases for the following attach types:
BPF_TRACE_FENTRY_MULTI
BPF_TRACE_FEXIT_MULTI
BPF_MODIFY_RETURN_MULTI
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com
---
 net/bpf/test_run.c                            |   3 +
 .../selftests/bpf/bpf_testmod/bpf_testmod.c   |  49 ++++
 .../bpf/prog_tests/tracing_multi_link.c       | 153 +++++++++++++
 .../selftests/bpf/progs/tracing_multi_test.c  | 209 ++++++++++++++++++
 4 files changed, 414 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/tracing_multi_link.c
 create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_test.c
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 5535f9adc658..126218297984 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -673,6 +673,8 @@ int bpf_prog_test_run_tracing(struct bpf_prog *prog,
     switch (prog->expected_attach_type) {
     case BPF_TRACE_FENTRY:
     case BPF_TRACE_FEXIT:
+    case BPF_TRACE_FENTRY_MULTI:
+    case BPF_TRACE_FEXIT_MULTI:
         if (bpf_fentry_test1(1) != 2 ||
             bpf_fentry_test2(2, 3) != 5 ||
             bpf_fentry_test3(4, 5, 6) != 15 ||
@@ -685,6 +687,7 @@ int bpf_prog_test_run_tracing(struct bpf_prog *prog,
             goto out;
         break;
     case BPF_MODIFY_RETURN:
+    case BPF_MODIFY_RETURN_MULTI:
         ret = bpf_modify_return_test(1, &b);
         if (b != 2)
             side_effect++;
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
index 39ad96a18123..99a941b26cff 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
@@ -98,12 +98,61 @@ bpf_testmod_test_struct_arg_8(u64 a, void *b, short c, int d, void *e,
     return bpf_testmod_test_struct_arg_result;
 }
+noinline int
+bpf_testmod_test_struct_arg_9(struct bpf_testmod_struct_arg_2 a,
+                              struct bpf_testmod_struct_arg_1 b) {
+    bpf_testmod_test_struct_arg_result = a.a + a.b + b.a;
+    return bpf_testmod_test_struct_arg_result;
+}
+
+noinline int
+bpf_testmod_test_struct_arg_10(int a, struct bpf_testmod_struct_arg_2 b) {
+    bpf_testmod_test_struct_arg_result = a + b.a + b.b;
+    return bpf_testmod_test_struct_arg_result;
+}
+
+noinline struct bpf_testmod_struct_arg_2 *
+bpf_testmod_test_struct_arg_11(int a, struct bpf_testmod_struct_arg_2 b, int c) {
+    bpf_testmod_test_struct_arg_result = a + b.a + b.b + c;
+    return (void *)bpf_testmod_test_struct_arg_result;
+}
+
+noinline int
+bpf_testmod_test_struct_arg_12(int a, struct bpf_testmod_struct_arg_2 b, int *c) {
+    bpf_testmod_test_struct_arg_result = a + b.a + b.b + *c;
+    return bpf_testmod_test_struct_arg_result;
+}
+
 noinline int
 bpf_testmod_test_arg_ptr_to_struct(struct bpf_testmod_struct_arg_1 *a) {
     bpf_testmod_test_struct_arg_result = a->a;
     return bpf_testmod_test_struct_arg_result;
 }
+noinline int
+bpf_testmod_test_arg_ptr_1(struct bpf_testmod_struct_arg_1 *a) {
+    bpf_testmod_test_struct_arg_result = a->a;
+    return bpf_testmod_test_struct_arg_result;
+}
+
+noinline int
+bpf_testmod_test_arg_ptr_2(struct bpf_testmod_struct_arg_2 *a) {
+    bpf_testmod_test_struct_arg_result = a->a + a->b;
+    return bpf_testmod_test_struct_arg_result;
+}
+
+noinline int
+bpf_testmod_test_arg_ptr_3(int a, struct bpf_testmod_struct_arg_2 *b) {
+    bpf_testmod_test_struct_arg_result = a + b->a + b->b;
+    return bpf_testmod_test_struct_arg_result;
+}
+
+noinline int
+bpf_testmod_test_arg_ptr_4(struct bpf_testmod_struct_arg_2 *a, int b) {
+    bpf_testmod_test_struct_arg_result = a->a + a->b + b;
+    return bpf_testmod_test_struct_arg_result;
+}
+
 __bpf_kfunc void
 bpf_testmod_test_mod_kfunc(int i)
 {
diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi_link.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi_link.c
new file mode 100644
index 000000000000..61701a5b3494
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi_link.c
@@ -0,0 +1,153 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Bytedance. */
+
+#include <test_progs.h>
+#include "tracing_multi_test.skel.h"
+
+static void test_skel_auto_api(void)
+{
+    struct tracing_multi_test *skel;
+    int err;
+
+    skel = tracing_multi_test__open_and_load();
+    if (!ASSERT_OK_PTR(skel, "tracing_multi_test__open_and_load"))
+        return;
+
+    /* disable all programs that should fail */
+    bpf_program__set_autoattach(skel->progs.fentry_fail_test1, false);
+    bpf_program__set_autoattach(skel->progs.fentry_fail_test2, false);
+    bpf_program__set_autoattach(skel->progs.fentry_fail_test3, false);
+    bpf_program__set_autoattach(skel->progs.fentry_fail_test4, false);
+    bpf_program__set_autoattach(skel->progs.fentry_fail_test5, false);
+    bpf_program__set_autoattach(skel->progs.fentry_fail_test6, false);
+    bpf_program__set_autoattach(skel->progs.fentry_fail_test7, false);
+    bpf_program__set_autoattach(skel->progs.fentry_fail_test8, false);
+
+    bpf_program__set_autoattach(skel->progs.fexit_fail_test1, false);
+    bpf_program__set_autoattach(skel->progs.fexit_fail_test2, false);
+    bpf_program__set_autoattach(skel->progs.fexit_fail_test3, false);
+
+    err = tracing_multi_test__attach(skel);
+    bpf_object__free_btfs(skel->obj);
+    if (!ASSERT_OK(err, "tracing_multi_test__attach"))
+        goto cleanup;
+
+cleanup:
+    tracing_multi_test__destroy(skel);
+}
+
+static void test_skel_manual_api(void)
+{
+    struct tracing_multi_test *skel;
+    struct bpf_link *link;
+    int err;
+
+    skel = tracing_multi_test__open_and_load();
+    if (!ASSERT_OK_PTR(skel, "tracing_multi_test__open_and_load"))
+        return;
+
+#define RUN_TEST(name, success)                        \
+do {                                                   \
+    link = bpf_program__attach(skel->progs.name);      \
+    err = libbpf_get_error(link);                      \
+    if (!ASSERT_OK(success ? err : !err,               \
+                   "bpf_program__attach: " #name))     \
+        goto cleanup;                                  \
+    skel->links.name = err ? NULL : link;              \
+} while (0)
+
+    RUN_TEST(fentry_success_test1, true);
+    RUN_TEST(fentry_success_test2, true);
+    RUN_TEST(fentry_success_test3, true);
+    RUN_TEST(fentry_success_test4, true);
+    RUN_TEST(fentry_success_test5, true);
+
+    RUN_TEST(fexit_success_test1, true);
+    RUN_TEST(fexit_success_test2, true);
+
+    RUN_TEST(fmod_ret_success_test1, true);
+
+    RUN_TEST(fentry_fail_test1, false);
+    RUN_TEST(fentry_fail_test2, false);
+    RUN_TEST(fentry_fail_test3, false);
+    RUN_TEST(fentry_fail_test4, false);
+    RUN_TEST(fentry_fail_test5, false);
+    RUN_TEST(fentry_fail_test6, false);
+    RUN_TEST(fentry_fail_test7, false);
+    RUN_TEST(fentry_fail_test8, false);
+
+    RUN_TEST(fexit_fail_test1, false);
+    RUN_TEST(fexit_fail_test2, false);
+    RUN_TEST(fexit_fail_test3, false);
+
+cleanup:
+    tracing_multi_test__destroy(skel);
+}
+
+static void tracing_multi_test_run(struct tracing_multi_test *skel)
+{
+    LIBBPF_OPTS(bpf_test_run_opts, topts);
+    int err, prog_fd;
+
+    prog_fd = bpf_program__fd(skel->progs.fentry_manual_test1);
+    err = bpf_prog_test_run_opts(prog_fd, &topts);
+    ASSERT_OK(err, "test_run");
+    ASSERT_EQ(topts.retval, 0, "test_run");
+
+    ASSERT_EQ(skel->bss->fentry_test1_result, 1, "fentry_test1_result");
+    ASSERT_EQ(skel->bss->fentry_test2_result, 1, "fentry_test2_result");
+    ASSERT_EQ(skel->bss->fentry_test3_result, 1, "fentry_test3_result");
+    ASSERT_EQ(skel->bss->fentry_test4_result, 1, "fentry_test4_result");
+    ASSERT_EQ(skel->bss->fentry_test5_result, 1, "fentry_test5_result");
+    ASSERT_EQ(skel->bss->fentry_test6_result, 1, "fentry_test6_result");
+    ASSERT_EQ(skel->bss->fentry_test7_result, 1, "fentry_test7_result");
+    ASSERT_EQ(skel->bss->fentry_test8_result, 1, "fentry_test8_result");
+}
+
+static void test_attach_api(void)
+{
+    LIBBPF_OPTS(bpf_trace_multi_opts, opts);
+    struct tracing_multi_test *skel;
+    struct bpf_link *link;
+    const char *syms[8] = {
+        "bpf_fentry_test1",
+        "bpf_fentry_test2",
+        "bpf_fentry_test3",
+        "bpf_fentry_test4",
+        "bpf_fentry_test5",
+        "bpf_fentry_test6",
+        "bpf_fentry_test7",
+        "bpf_fentry_test8",
+    };
+    __u64 cookies[] = {1, 7, 2, 3, 4, 5, 6, 8};
+
+    skel = tracing_multi_test__open_and_load();
+    if (!ASSERT_OK_PTR(skel, "tracing_multi_test__open_and_load"))
+        return;
+
+    opts.syms = syms;
+    opts.cookies = cookies;
+    opts.cnt = ARRAY_SIZE(syms);
+    link = bpf_program__attach_trace_multi_opts(skel->progs.fentry_manual_test1,
+                                                &opts);
+    bpf_object__free_btfs(skel->obj);
+    if (!ASSERT_OK_PTR(link, "bpf_program__attach_trace_multi_opts"))
+        goto cleanup;
+    skel->links.fentry_manual_test1 = link;
+
+    skel->bss->pid = getpid();
+    skel->bss->test_cookie = true;
+    tracing_multi_test_run(skel);
+cleanup:
+    tracing_multi_test__destroy(skel);
+}
+
+void test_tracing_multi_attach(void)
+{
+    if (test__start_subtest("skel_auto_api"))
+        test_skel_auto_api();
+    if (test__start_subtest("skel_manual_api"))
+        test_skel_manual_api();
+    if (test__start_subtest("attach_api"))
+        test_attach_api();
+}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_test.c b/tools/testing/selftests/bpf/progs/tracing_multi_test.c
new file mode 100644
index 000000000000..adfa4c2f6ee3
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_test.c
@@ -0,0 +1,209 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 ByteDance */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
+
+char _license[] SEC("license") = "GPL";
+
+struct bpf_testmod_struct_arg_1 {
+    int a;
+};
+struct bpf_testmod_struct_arg_2 {
+    long a;
+    long b;
+};
+
+__u64 test_result = 0;
+
+int pid = 0;
+int test_cookie = 0;
+
+__u64 fentry_test1_result = 0;
+__u64 fentry_test2_result = 0;
+__u64 fentry_test3_result = 0;
+__u64 fentry_test4_result = 0;
+__u64 fentry_test5_result = 0;
+__u64 fentry_test6_result = 0;
+__u64 fentry_test7_result = 0;
+__u64 fentry_test8_result = 0;
+
+extern const void bpf_fentry_test1 __ksym;
+extern const void bpf_fentry_test2 __ksym;
+extern const void bpf_fentry_test3 __ksym;
+extern const void bpf_fentry_test4 __ksym;
+extern const void bpf_fentry_test5 __ksym;
+extern const void bpf_fentry_test6 __ksym;
+extern const void bpf_fentry_test7 __ksym;
+extern const void bpf_fentry_test8 __ksym;
+
+SEC("fentry.multi/bpf_testmod_test_struct_arg_1,bpf_testmod_test_struct_arg_9")
+int BPF_PROG2(fentry_success_test1, struct bpf_testmod_struct_arg_2, a)
+{
+    test_result = a.a + a.b;
+    return 0;
+}
+
+SEC("fentry.multi/bpf_testmod_test_struct_arg_2,bpf_testmod_test_struct_arg_10")
+int BPF_PROG2(fentry_success_test2, int, a, struct bpf_testmod_struct_arg_2, b)
+{
+    test_result = a + b.a + b.b;
+    return 0;
+}
+
+SEC("fentry.multi/bpf_testmod_test_arg_ptr_2,bpf_testmod_test_arg_ptr_4")
+int BPF_PROG(fentry_success_test3, struct bpf_testmod_struct_arg_2 *a)
+{
+    test_result = a->a + a->b;
+    return 0;
+}
+
+SEC("fentry.multi/bpf_testmod_test_struct_arg_1,bpf_testmod_test_struct_arg_4")
+int BPF_PROG2(fentry_success_test4, struct bpf_testmod_struct_arg_2, a, int, b,
+              int, c)
+{
+    test_result = c;
+    return 0;
+}
+
+SEC("fentry.multi/bpf_testmod_test_struct_arg_1,bpf_testmod_test_struct_arg_2")
+int BPF_PROG2(fentry_success_test5, struct bpf_testmod_struct_arg_2, a, int, b,
+              int, c)
+{
+    test_result = c;
+    return 0;
+}
+
+SEC("fentry.multi/bpf_testmod_test_struct_arg_1,bpf_testmod_test_struct_arg_1")
+int BPF_PROG2(fentry_fail_test1, struct bpf_testmod_struct_arg_2, a)
+{
+    test_result = a.a + a.b;
+    return 0;
+}
+
+SEC("fentry.multi/bpf_testmod_test_struct_arg_1,bpf_testmod_test_struct_arg_2")
+int BPF_PROG2(fentry_fail_test2, struct bpf_testmod_struct_arg_2, a)
+{
+    test_result = a.a + a.b;
+    return 0;
+}
+
+SEC("fentry.multi/bpf_testmod_test_struct_arg_1,bpf_testmod_test_arg_ptr_2")
+int BPF_PROG2(fentry_fail_test3, struct bpf_testmod_struct_arg_2, a)
+{
+    test_result = a.a + a.b;
+    return 0;
+}
+
+SEC("fentry.multi/bpf_testmod_test_struct_arg_2,bpf_testmod_test_struct_arg_2")
+int BPF_PROG2(fentry_fail_test4, int, a, struct bpf_testmod_struct_arg_2, b)
+{
+    test_result = a + b.a + b.b;
+    return 0;
+}
+
+SEC("fentry.multi/bpf_testmod_test_struct_arg_2,bpf_testmod_test_struct_arg_9")
+int BPF_PROG2(fentry_fail_test5, int, a, struct bpf_testmod_struct_arg_2, b)
+{
+    test_result = a + b.a + b.b;
+    return 0;
+}
+
+SEC("fentry.multi/bpf_testmod_test_struct_arg_2,bpf_testmod_test_arg_ptr_3")
+int BPF_PROG2(fentry_fail_test6, int, a, struct bpf_testmod_struct_arg_2, b)
+{
+    test_result = a + b.a + b.b;
+    return 0;
+}
+
+SEC("fentry.multi/bpf_testmod_test_arg_ptr_2,bpf_testmod_test_arg_ptr_3")
+int BPF_PROG(fentry_fail_test7, struct bpf_testmod_struct_arg_2 *a)
+{
+    test_result = a->a + a->b;
+    return 0;
+}
+
+SEC("fentry.multi/bpf_testmod_test_struct_arg_1,bpf_testmod_test_struct_arg_12")
+int BPF_PROG2(fentry_fail_test8, struct bpf_testmod_struct_arg_2, a, int, b,
+              int, c)
+{
+    test_result = c;
+    return 0;
+}
+
+SEC("fexit.multi/bpf_testmod_test_struct_arg_1,bpf_testmod_test_struct_arg_2,bpf_testmod_test_struct_arg_3")
+int BPF_PROG2(fexit_success_test1, struct bpf_testmod_struct_arg_2, a, int, b,
+              int, c, int, retval)
+{
+    test_result = retval;
+    return 0;
+}
+
+SEC("fexit.multi/bpf_testmod_test_struct_arg_2,bpf_testmod_test_struct_arg_12")
+int BPF_PROG2(fexit_success_test2, int, a, struct bpf_testmod_struct_arg_2, b,
+              int, c, int, retval)
+{
+    test_result = a + b.a + b.b + retval;
+    return 0;
+}
+
+SEC("fexit.multi/bpf_testmod_test_struct_arg_1,bpf_testmod_test_struct_arg_4")
+int BPF_PROG2(fexit_fail_test1, struct bpf_testmod_struct_arg_2, a, int, b,
+              int, c, int, retval)
+{
+    test_result = retval;
+    return 0;
+}
+
+SEC("fexit.multi/bpf_testmod_test_struct_arg_2,bpf_testmod_test_struct_arg_10")
+int BPF_PROG2(fexit_fail_test2, int, a, struct bpf_testmod_struct_arg_2, b,
+              int, c, int, retval)
+{
+    test_result = a + b.a + b.b + retval;
+    return 0;
+}
+
+SEC("fexit.multi/bpf_testmod_test_struct_arg_2,bpf_testmod_test_struct_arg_11")
+int BPF_PROG2(fexit_fail_test3, int, a, struct bpf_testmod_struct_arg_2, b,
+              int, c, int, retval)
+{
+    test_result = a + b.a + b.b + retval;
+    return 0;
+}
+
+SEC("fmod_ret.multi/bpf_modify_return_test,bpf_modify_return_test2")
+int BPF_PROG(fmod_ret_success_test1, int a, int *b)
+{
+    return 0;
+}
+
+static void tracing_multi_check(unsigned long long *ctx)
+{
+    if (bpf_get_current_pid_tgid() >> 32 != pid)
+        return;
+
+    __u64 cookie = test_cookie ? bpf_get_attach_cookie(ctx) : 0;
+    __u64 addr = bpf_get_func_ip(ctx);
+
+#define SET(__var, __addr, __cookie) ({                \
+    if (((const void *) addr == __addr) &&             \
+        (!test_cookie || (cookie == __cookie)))        \
+        __var = 1;                                     \
+})
+    SET(fentry_test1_result, &bpf_fentry_test1, 1);
+    SET(fentry_test2_result, &bpf_fentry_test2, 7);
+    SET(fentry_test3_result, &bpf_fentry_test3, 2);
+    SET(fentry_test4_result, &bpf_fentry_test4, 3);
+    SET(fentry_test5_result, &bpf_fentry_test5, 4);
+    SET(fentry_test6_result, &bpf_fentry_test6, 5);
+    SET(fentry_test7_result, &bpf_fentry_test7, 6);
+    SET(fentry_test8_result, &bpf_fentry_test8, 8);
+}
+
+SEC("fentry.multi/bpf_fentry_test1")
+int BPF_PROG(fentry_manual_test1)
+{
+    tracing_multi_check(ctx);
+    return 0;
+}
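One note on the test programs above: they use BPF_PROG2 (from bpf_tracing.h) rather than BPF_PROG because several targets take struct arguments by value, and BPF_PROG2 spells each argument as a (type, name) pair so that such arguments, which may span more than one register, are accessed correctly. A trimmed illustration in the style of the file above:

/* BPF_PROG2 declares typed access to the traced function's arguments;
 * the (type, name) pair form is what allows a struct passed by value
 * (here bpf_testmod_struct_arg_2) to be handled properly.
 */
SEC("fentry.multi/bpf_testmod_test_struct_arg_10")
int BPF_PROG2(struct_arg_example, int, a, struct bpf_testmod_struct_arg_2, b)
{
    test_result = a + b.a + b.b;
    return 0;
}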