Hi,
here comes the v9 of the HID-BPF series.
Again, for a full explanation of HID-BPF, please refer to the last patch in this series (23/23).
This version brings some minor improvements over v7 and v8, focusing only on the reviews I got (v8 was a single-patch update):
- patch 1/24 in v7 was dropped as it is already fixed upstream
- patch 1/23 in v9 is now capable of handling all functions, not just kfuncs (tested with the selftests only)
- some minor nits from Greg's review
- a rebase on top of the current bpf-next tree as the kfunc definition changed (for the better)
Cheers, Benjamin
Benjamin Tissoires (23):
  bpf/verifier: allow all functions to read user provided context
  bpf/verifier: do not clear meta in check_mem_size
  selftests/bpf: add test for accessing ctx from syscall program type
  bpf/verifier: allow kfunc to return an allocated mem
  selftests/bpf: Add tests for kfunc returning a memory pointer
  bpf: prepare for more bpf syscall to be used from kernel and user space.
  libbpf: add map_get_fd_by_id and map_delete_elem in light skeleton
  HID: core: store the unique system identifier in hid_device
  HID: export hid_report_type to uapi
  HID: convert defines of HID class requests into a proper enum
  HID: Kconfig: split HID support and hid-core compilation
  HID: initial BPF implementation
  selftests/bpf: add tests for the HID-bpf initial implementation
  HID: bpf: allocate data memory for device_event BPF programs
  selftests/bpf/hid: add test to change the report size
  HID: bpf: introduce hid_hw_request()
  selftests/bpf: add tests for bpf_hid_hw_request
  HID: bpf: allow to change the report descriptor
  selftests/bpf: add report descriptor fixup tests
  selftests/bpf: Add a test for BPF_F_INSERT_HEAD
  samples/bpf: HID: add new hid_mouse example
  samples/bpf: HID: add Surface Dial example
  Documentation: add HID-BPF docs
 Documentation/hid/hid-bpf.rst | 512 +++++++++
 Documentation/hid/index.rst | 1 +
 drivers/Makefile | 2 +-
 drivers/hid/Kconfig | 20 +-
 drivers/hid/Makefile | 2 +
 drivers/hid/bpf/Kconfig | 17 +
 drivers/hid/bpf/Makefile | 11 +
 drivers/hid/bpf/entrypoints/Makefile | 93 ++
 drivers/hid/bpf/entrypoints/README | 4 +
 drivers/hid/bpf/entrypoints/entrypoints.bpf.c | 66 ++
 .../hid/bpf/entrypoints/entrypoints.lskel.h | 682 ++++++++++++
 drivers/hid/bpf/hid_bpf_dispatch.c | 526 ++++++++++
 drivers/hid/bpf/hid_bpf_dispatch.h | 28 +
 drivers/hid/bpf/hid_bpf_jmp_table.c | 577 ++++++++++
 drivers/hid/hid-core.c | 49 +-
 include/linux/bpf.h | 9 +-
 include/linux/btf.h | 10 +
 include/linux/hid.h | 38 +-
 include/linux/hid_bpf.h | 148 +++
 include/uapi/linux/hid.h | 26 +-
 include/uapi/linux/hid_bpf.h | 25 +
 kernel/bpf/btf.c | 109 +-
 kernel/bpf/syscall.c | 10 +-
 kernel/bpf/verifier.c | 64 +-
 net/bpf/test_run.c | 21 +
 samples/bpf/.gitignore | 2 +
 samples/bpf/Makefile | 27 +
 samples/bpf/hid_mouse.bpf.c | 134 +++
 samples/bpf/hid_mouse.c | 161 +++
 samples/bpf/hid_surface_dial.bpf.c | 161 +++
 samples/bpf/hid_surface_dial.c | 232 ++++
 tools/include/uapi/linux/hid.h | 62 ++
 tools/include/uapi/linux/hid_bpf.h | 25 +
 tools/lib/bpf/skel_internal.h | 23 +
 tools/testing/selftests/bpf/Makefile | 5 +-
 tools/testing/selftests/bpf/config | 3 +
 tools/testing/selftests/bpf/prog_tests/hid.c | 990 ++++++++++++++++++
 .../selftests/bpf/prog_tests/kfunc_call.c | 76 ++
 tools/testing/selftests/bpf/progs/hid.c | 206 ++++
 .../selftests/bpf/progs/kfunc_call_test.c | 125 +++
 40 files changed, 5198 insertions(+), 84 deletions(-)
 create mode 100644 Documentation/hid/hid-bpf.rst
 create mode 100644 drivers/hid/bpf/Kconfig
 create mode 100644 drivers/hid/bpf/Makefile
 create mode 100644 drivers/hid/bpf/entrypoints/Makefile
 create mode 100644 drivers/hid/bpf/entrypoints/README
 create mode 100644 drivers/hid/bpf/entrypoints/entrypoints.bpf.c
 create mode 100644 drivers/hid/bpf/entrypoints/entrypoints.lskel.h
 create mode 100644 drivers/hid/bpf/hid_bpf_dispatch.c
 create mode 100644 drivers/hid/bpf/hid_bpf_dispatch.h
 create mode 100644 drivers/hid/bpf/hid_bpf_jmp_table.c
 create mode 100644 include/linux/hid_bpf.h
 create mode 100644 include/uapi/linux/hid_bpf.h
 create mode 100644 samples/bpf/hid_mouse.bpf.c
 create mode 100644 samples/bpf/hid_mouse.c
 create mode 100644 samples/bpf/hid_surface_dial.bpf.c
 create mode 100644 samples/bpf/hid_surface_dial.c
 create mode 100644 tools/include/uapi/linux/hid.h
 create mode 100644 tools/include/uapi/linux/hid_bpf.h
 create mode 100644 tools/testing/selftests/bpf/prog_tests/hid.c
 create mode 100644 tools/testing/selftests/bpf/progs/hid.c
When a function was trying to access data from context in a syscall eBPF program, the verifier was rejecting the call unless it was accessing the first element. This is because the syscall context is not known at compile time, and so we need to check this when actually accessing it.
Check for the valid memory access if there is no convert_ctx callback, and allow such a situation to happen.

There is a slight hiccup with subprogs: btf_check_subprog_arg_match() will check that the types match, which is a good thing, but to get an accurate result it ignores the fact that the context register may be NULL. This causes env->prog->aux->max_ctx_offset to be set to the size of the context, which is incompatible with a NULL context.
Solve that last problem by storing max_ctx_offset before the type check and restoring it after.
Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com
Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
---
changes in v9:
- rewrote the commit title and description
- made it so all functions can make use of context even if there is no convert_ctx
- remove the is_kfunc field in bpf_call_arg_meta
changes in v8:
- fixup comment
- return -EACCES instead of -EINVAL for consistency
changes in v7:
- renamed access_t into atype
- allow zero-byte read
- check_mem_access() to the correct offset/size
new in v6
---
 kernel/bpf/btf.c      | 11 ++++++++++-
 kernel/bpf/verifier.c | 19 +++++++++++++++++++
 2 files changed, 29 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 903719b89238..386300f52b23 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6443,8 +6443,8 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog,
 {
 	struct bpf_prog *prog = env->prog;
 	struct btf *btf = prog->aux->btf;
+	u32 btf_id, max_ctx_offset;
 	bool is_global;
-	u32 btf_id;
 	int err;
 
 	if (!prog->aux->func_info)
@@ -6457,9 +6457,18 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog,
 	if (prog->aux->func_info_aux[subprog].unreliable)
 		return -EINVAL;
 
+	/* subprogs arguments are not actually accessing the data, we need
+	 * to check for the types if they match.
+	 * Store the max_ctx_offset and restore it after btf_check_func_arg_match()
+	 * given that this function will have a side effect of changing it.
+	 */
+	max_ctx_offset = env->prog->aux->max_ctx_offset;
+
 	is_global = prog->aux->func_info_aux[subprog].linkage == BTF_FUNC_GLOBAL;
 	err = btf_check_func_arg_match(env, btf, btf_id, regs, is_global, 0);
 
+	env->prog->aux->max_ctx_offset = max_ctx_offset;
+
 	/* Compiler optimizations can remove arguments from static functions
 	 * or mismatched type can be passed into a global function.
 	 * In such cases mark the function as unreliable from BTF point of view.
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 2c1f8069f7b7..d694f43ab911 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5229,6 +5229,25 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
 				env,
 				regno, reg->off, access_size,
 				zero_size_allowed, ACCESS_HELPER, meta);
+	case PTR_TO_CTX:
+		/* in case the function doesn't know how to access the context,
+		 * (because we are in a program of type SYSCALL for example), we
+		 * can not statically check its size.
+		 * Dynamically check it now.
+		 */
+		if (!env->ops->convert_ctx_access) {
+			enum bpf_access_type atype = meta && meta->raw_mode ? BPF_WRITE : BPF_READ;
+			int offset = access_size - 1;
+
+			/* Allow zero-byte read from PTR_TO_CTX */
+			if (access_size == 0)
+				return zero_size_allowed ? 0 : -EACCES;
+
+			return check_mem_access(env, env->insn_idx, regno, offset, BPF_B,
+						atype, -1, false);
+		}
+
+		fallthrough;
 	default: /* scalar_value or invalid ptr */
 		/* Allow zero-byte read from NULL, regardless of pointer type */
 		if (zero_size_allowed && access_size == 0 &&
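To illustrate what this change enables, here is a minimal sketch of a BPF_PROG_TYPE_SYSCALL program passing its user-provided context to a kfunc taking a (mem, len) pair. The struct layout and program name are made up for illustration; the actual selftests are added later in the series.

```
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* illustrative context layout, provided by userspace via ctx_in/ctx_size_in */
struct syscall_test_args {
	__u8 data[16];
	__u64 size;
};

/* test kfunc exported by net/bpf/test_run.c */
extern void bpf_kfunc_call_test_mem_len_pass1(void *mem, int len) __ksym;

SEC("syscall")
int pass_ctx_to_kfunc(struct syscall_test_args *args)
{
	/* Before this patch, passing the syscall context as a (mem, len)
	 * argument was rejected at load time; with it, the access is
	 * checked via check_mem_access() and recorded in max_ctx_offset
	 * instead.
	 */
	bpf_kfunc_call_test_mem_len_pass1(args, sizeof(*args));

	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```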
On Wed, Aug 24, 2022 at 6:41 AM Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
When a function was trying to access data from context in a syscall eBPF program, the verifier was rejecting the call unless it was accessing the first element. This is because the syscall context is not known at compile time, and so we need to check this when actually accessing it.
Check for the valid memory access if there is no convert_ctx callback, and allow such situation to happen.
There is a slight hiccup with subprogs. btf_check_subprog_arg_match() will check that the types are matching, which is a good thing, but to have an accurate result, it hides the fact that the context register may be null. This makes env->prog->aux->max_ctx_offset being set to the size of the context, which is incompatible with a NULL context.
Solve that last problem by storing max_ctx_offset before the type check and restoring it after.
Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
changes in v9:
- rewrote the commit title and description
- made it so all functions can make use of context even if there is no convert_ctx
- remove the is_kfunc field in bpf_call_arg_meta
changes in v8:
- fixup comment
- return -EACCESS instead of -EINVAL for consistency
changes in v7:
- renamed access_t into atype
- allow zero-byte read
- check_mem_access() to the correct offset/size
new in v6
kernel/bpf/btf.c | 11 ++++++++++- kernel/bpf/verifier.c | 19 +++++++++++++++++++ 2 files changed, 29 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 903719b89238..386300f52b23 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -6443,8 +6443,8 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog, { struct bpf_prog *prog = env->prog; struct btf *btf = prog->aux->btf;
u32 btf_id, max_ctx_offset; bool is_global;
u32 btf_id; int err; if (!prog->aux->func_info)
@@ -6457,9 +6457,18 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog, if (prog->aux->func_info_aux[subprog].unreliable) return -EINVAL;
/* subprogs arguments are not actually accessing the data, we need
* to check for the types if they match.
* Store the max_ctx_offset and restore it after btf_check_func_arg_match()
* given that this function will have a side effect of changing it.
*/
max_ctx_offset = env->prog->aux->max_ctx_offset;
is_global = prog->aux->func_info_aux[subprog].linkage == BTF_FUNC_GLOBAL; err = btf_check_func_arg_match(env, btf, btf_id, regs, is_global, 0);
env->prog->aux->max_ctx_offset = max_ctx_offset;
I don't understand this. If we pass a ctx into a helper and it's going to access [0..N] bytes from it why do we need to hide it? max_ctx_offset will be used later raw_tp, tp, syscall progs to determine whether it's ok to load them. By hiding the actual size of access somebody can construct a prog that reads out of bounds. How is this related to NULL-ness property?
/* Compiler optimizations can remove arguments from static functions * or mismatched type can be passed into a global function. * In such cases mark the function as unreliable from BTF point of view.
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 2c1f8069f7b7..d694f43ab911 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -5229,6 +5229,25 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno, env, regno, reg->off, access_size, zero_size_allowed, ACCESS_HELPER, meta);
case PTR_TO_CTX:
/* in case the function doesn't know how to access the context,
* (because we are in a program of type SYSCALL for example), we
* can not statically check its size.
* Dynamically check it now.
*/
if (!env->ops->convert_ctx_access) {
enum bpf_access_type atype = meta && meta->raw_mode ? BPF_WRITE : BPF_READ;
int offset = access_size - 1;
/* Allow zero-byte read from PTR_TO_CTX */
if (access_size == 0)
return zero_size_allowed ? 0 : -EACCES;
return check_mem_access(env, env->insn_idx, regno, offset, BPF_B,
atype, -1, false);
}
This part looks good alone. Without max_ctx_offset save/restore.
fallthrough; default: /* scalar_value or invalid ptr */ /* Allow zero-byte read from NULL, regardless of pointer type */ if (zero_size_allowed && access_size == 0 &&
-- 2.36.1
On Fri, 26 Aug 2022 at 03:42, Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Wed, Aug 24, 2022 at 6:41 AM Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
When a function was trying to access data from context in a syscall eBPF program, the verifier was rejecting the call unless it was accessing the first element. This is because the syscall context is not known at compile time, and so we need to check this when actually accessing it.
Check for the valid memory access if there is no convert_ctx callback, and allow such situation to happen.
There is a slight hiccup with subprogs. btf_check_subprog_arg_match() will check that the types are matching, which is a good thing, but to have an accurate result, it hides the fact that the context register may be null. This makes env->prog->aux->max_ctx_offset being set to the size of the context, which is incompatible with a NULL context.
Solve that last problem by storing max_ctx_offset before the type check and restoring it after.
Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
changes in v9:
- rewrote the commit title and description
- made it so all functions can make use of context even if there is no convert_ctx
- remove the is_kfunc field in bpf_call_arg_meta
changes in v8:
- fixup comment
- return -EACCESS instead of -EINVAL for consistency
changes in v7:
- renamed access_t into atype
- allow zero-byte read
- check_mem_access() to the correct offset/size
new in v6
kernel/bpf/btf.c | 11 ++++++++++- kernel/bpf/verifier.c | 19 +++++++++++++++++++ 2 files changed, 29 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 903719b89238..386300f52b23 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -6443,8 +6443,8 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog, { struct bpf_prog *prog = env->prog; struct btf *btf = prog->aux->btf;
u32 btf_id, max_ctx_offset; bool is_global;
u32 btf_id; int err; if (!prog->aux->func_info)
@@ -6457,9 +6457,18 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog, if (prog->aux->func_info_aux[subprog].unreliable) return -EINVAL;
/* subprogs arguments are not actually accessing the data, we need
* to check for the types if they match.
* Store the max_ctx_offset and restore it after btf_check_func_arg_match()
* given that this function will have a side effect of changing it.
*/
max_ctx_offset = env->prog->aux->max_ctx_offset;
is_global = prog->aux->func_info_aux[subprog].linkage == BTF_FUNC_GLOBAL; err = btf_check_func_arg_match(env, btf, btf_id, regs, is_global, 0);
env->prog->aux->max_ctx_offset = max_ctx_offset;
I don't understand this. If we pass a ctx into a helper and it's going to access [0..N] bytes from it why do we need to hide it? max_ctx_offset will be used later raw_tp, tp, syscall progs to determine whether it's ok to load them. By hiding the actual size of access somebody can construct a prog that reads out of bounds. How is this related to NULL-ness property?
Same question, was just typing exactly the same thing.
/* Compiler optimizations can remove arguments from static functions * or mismatched type can be passed into a global function. * In such cases mark the function as unreliable from BTF point of view.
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 2c1f8069f7b7..d694f43ab911 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -5229,6 +5229,25 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno, env, regno, reg->off, access_size, zero_size_allowed, ACCESS_HELPER, meta);
case PTR_TO_CTX:
/* in case the function doesn't know how to access the context,
* (because we are in a program of type SYSCALL for example), we
* can not statically check its size.
* Dynamically check it now.
*/
if (!env->ops->convert_ctx_access) {
enum bpf_access_type atype = meta && meta->raw_mode ? BPF_WRITE : BPF_READ;
int offset = access_size - 1;
/* Allow zero-byte read from PTR_TO_CTX */
if (access_size == 0)
return zero_size_allowed ? 0 : -EACCES;
return check_mem_access(env, env->insn_idx, regno, offset, BPF_B,
atype, -1, false);
}
This part looks good alone. Without max_ctx_offset save/restore.
+1, save/restore would be incorrect.
On Fri, Aug 26, 2022 at 3:51 AM Kumar Kartikeya Dwivedi memxor@gmail.com wrote:
On Fri, 26 Aug 2022 at 03:42, Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Wed, Aug 24, 2022 at 6:41 AM Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
When a function was trying to access data from context in a syscall eBPF program, the verifier was rejecting the call unless it was accessing the first element. This is because the syscall context is not known at compile time, and so we need to check this when actually accessing it.
Check for the valid memory access if there is no convert_ctx callback, and allow such situation to happen.
There is a slight hiccup with subprogs. btf_check_subprog_arg_match() will check that the types are matching, which is a good thing, but to have an accurate result, it hides the fact that the context register may be null. This makes env->prog->aux->max_ctx_offset being set to the size of the context, which is incompatible with a NULL context.
Solve that last problem by storing max_ctx_offset before the type check and restoring it after.
Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
changes in v9:
- rewrote the commit title and description
- made it so all functions can make use of context even if there is no convert_ctx
- remove the is_kfunc field in bpf_call_arg_meta
changes in v8:
- fixup comment
- return -EACCESS instead of -EINVAL for consistency
changes in v7:
- renamed access_t into atype
- allow zero-byte read
- check_mem_access() to the correct offset/size
new in v6
kernel/bpf/btf.c | 11 ++++++++++- kernel/bpf/verifier.c | 19 +++++++++++++++++++ 2 files changed, 29 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 903719b89238..386300f52b23 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -6443,8 +6443,8 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog, { struct bpf_prog *prog = env->prog; struct btf *btf = prog->aux->btf;
u32 btf_id, max_ctx_offset; bool is_global;
u32 btf_id; int err; if (!prog->aux->func_info)
@@ -6457,9 +6457,18 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog, if (prog->aux->func_info_aux[subprog].unreliable) return -EINVAL;
/* subprogs arguments are not actually accessing the data, we need
* to check for the types if they match.
* Store the max_ctx_offset and restore it after btf_check_func_arg_match()
* given that this function will have a side effect of changing it.
*/
max_ctx_offset = env->prog->aux->max_ctx_offset;
is_global = prog->aux->func_info_aux[subprog].linkage == BTF_FUNC_GLOBAL; err = btf_check_func_arg_match(env, btf, btf_id, regs, is_global, 0);
env->prog->aux->max_ctx_offset = max_ctx_offset;
I don't understand this. If we pass a ctx into a helper and it's going to access [0..N] bytes from it why do we need to hide it? max_ctx_offset will be used later raw_tp, tp, syscall progs to determine whether it's ok to load them. By hiding the actual size of access somebody can construct a prog that reads out of bounds. How is this related to NULL-ness property?
Same question, was just typing exactly the same thing.
The test I have that is failing in patch 2/23 is the following, with args being set to NULL by userspace:
SEC("syscall") int kfunc_syscall_test_null(struct syscall_test_args *args) { bpf_kfunc_call_test_mem_len_pass1(args, 0);
return 0; }
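(For reference, bpf_kfunc_call_test_mem_len_pass1() is one of the test kfuncs in net/bpf/test_run.c; roughly, it is an empty function taking a (mem, len) pair, where the __sz suffix on the second argument is what tells the verifier to treat the two arguments as a pointer/size pair:)

```
noinline void bpf_kfunc_call_test_mem_len_pass1(void *mem, int mem__sz)
{
}
```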
Basically: if userspace declares the following:

DECLARE_LIBBPF_OPTS(bpf_test_run_opts, syscall_topts,
		    .ctx_in = NULL,
		    .ctx_size_in = 0,
);
The verifier is happy with the current released kernel: kfunc_syscall_test_null() never dereferences the ctx pointer; it just passes it around to bpf_kfunc_call_test_mem_len_pass1(), which in turn is also happy because it says it is not accessing the data at all (0-size memory parameter).
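(And the userspace side of such a test looks roughly like this; the skeleton and program names below are placeholders:)

```
DECLARE_LIBBPF_OPTS(bpf_test_run_opts, syscall_topts,
	.ctx_in = NULL,
	.ctx_size_in = 0,
);
int prog_fd = bpf_program__fd(skel->progs.kfunc_syscall_test_null);
int err = bpf_prog_test_run_opts(prog_fd, &syscall_topts);
/* err != 0 means the kernel refused to run the program with a NULL,
 * zero-sized context */
```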
In the current code, check_helper_mem_access() actually returns -EINVAL, but doesn't change max_ctx_offset (it's still at the value of 0 here). The program is now marked as unreliable, but the verifier goes on.
When adding this patch, if we declare a syscall eBPF program (or any other program type that doesn't have env->ops->convert_ctx_access), the previous test fails, because the patch now requires the syscall program to have a valid ctx pointer: btf_check_func_arg_match() now calls check_mem_access(), which basically validates the fact that the program can dereference the ctx.
So now, without the max_ctx_offset store/restore, the verifier enforces that the provided ctx is not null.
What I expected was that if userspace passed a NULL context but the eBPF program dereferenced it (or had a subprog or function call that dereferenced it), then max_ctx_offset would still be set to the proper value because of that internal dereference, and so the verifier would reject the call to the eBPF program with -EINVAL.
If I add another test with the following eBPF prog (with ctx_in being set to NULL by userspace):
SEC("syscall") int kfunc_syscall_test_null_fail(struct syscall_test_args *args) { bpf_kfunc_call_test_mem_len_pass1(args, sizeof(*args));
return 0; }
Then the call of the program actually fails with -EINVAL, even with this patch.
But again, if setting a NULL ctx with a 0 size from userspace is not considered valid, then we can just drop that hunk and add a test to enforce it.
Cheers, Benjamin
/* Compiler optimizations can remove arguments from static functions * or mismatched type can be passed into a global function. * In such cases mark the function as unreliable from BTF point of view.
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 2c1f8069f7b7..d694f43ab911 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -5229,6 +5229,25 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno, env, regno, reg->off, access_size, zero_size_allowed, ACCESS_HELPER, meta);
case PTR_TO_CTX:
/* in case the function doesn't know how to access the context,
* (because we are in a program of type SYSCALL for example), we
* can not statically check its size.
* Dynamically check it now.
*/
if (!env->ops->convert_ctx_access) {
enum bpf_access_type atype = meta && meta->raw_mode ? BPF_WRITE : BPF_READ;
int offset = access_size - 1;
/* Allow zero-byte read from PTR_TO_CTX */
if (access_size == 0)
return zero_size_allowed ? 0 : -EACCES;
return check_mem_access(env, env->insn_idx, regno, offset, BPF_B,
atype, -1, false);
}
This part looks good alone. Without max_ctx_offset save/restore.
+1, save/restore would be incorrect.
On Tue, Aug 30, 2022 at 7:29 AM Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
On Fri, Aug 26, 2022 at 3:51 AM Kumar Kartikeya Dwivedi memxor@gmail.com wrote:
On Fri, 26 Aug 2022 at 03:42, Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Wed, Aug 24, 2022 at 6:41 AM Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
When a function was trying to access data from context in a syscall eBPF program, the verifier was rejecting the call unless it was accessing the first element. This is because the syscall context is not known at compile time, and so we need to check this when actually accessing it.
Check for the valid memory access if there is no convert_ctx callback, and allow such situation to happen.
There is a slight hiccup with subprogs. btf_check_subprog_arg_match() will check that the types are matching, which is a good thing, but to have an accurate result, it hides the fact that the context register may be null. This makes env->prog->aux->max_ctx_offset being set to the size of the context, which is incompatible with a NULL context.
Solve that last problem by storing max_ctx_offset before the type check and restoring it after.
Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
changes in v9:
- rewrote the commit title and description
- made it so all functions can make use of context even if there is no convert_ctx
- remove the is_kfunc field in bpf_call_arg_meta
changes in v8:
- fixup comment
- return -EACCESS instead of -EINVAL for consistency
changes in v7:
- renamed access_t into atype
- allow zero-byte read
- check_mem_access() to the correct offset/size
new in v6
kernel/bpf/btf.c | 11 ++++++++++- kernel/bpf/verifier.c | 19 +++++++++++++++++++ 2 files changed, 29 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 903719b89238..386300f52b23 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -6443,8 +6443,8 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog, { struct bpf_prog *prog = env->prog; struct btf *btf = prog->aux->btf;
u32 btf_id, max_ctx_offset; bool is_global;
u32 btf_id; int err; if (!prog->aux->func_info)
@@ -6457,9 +6457,18 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog, if (prog->aux->func_info_aux[subprog].unreliable) return -EINVAL;
/* subprogs arguments are not actually accessing the data, we need
* to check for the types if they match.
* Store the max_ctx_offset and restore it after btf_check_func_arg_match()
* given that this function will have a side effect of changing it.
*/
max_ctx_offset = env->prog->aux->max_ctx_offset;
is_global = prog->aux->func_info_aux[subprog].linkage == BTF_FUNC_GLOBAL; err = btf_check_func_arg_match(env, btf, btf_id, regs, is_global, 0);
env->prog->aux->max_ctx_offset = max_ctx_offset;
I don't understand this. If we pass a ctx into a helper and it's going to access [0..N] bytes from it why do we need to hide it? max_ctx_offset will be used later raw_tp, tp, syscall progs to determine whether it's ok to load them. By hiding the actual size of access somebody can construct a prog that reads out of bounds. How is this related to NULL-ness property?
Same question, was just typing exactly the same thing.
The test I have that is failing in patch 2/23 is the following, with args being set to NULL by userspace:
SEC("syscall") int kfunc_syscall_test_null(struct syscall_test_args *args) { bpf_kfunc_call_test_mem_len_pass1(args, 0);
return 0;
}
Basically: if userspace declares the following: DECLARE_LIBBPF_OPTS(bpf_test_run_opts, syscall_topts, .ctx_in = NULL, .ctx_size_in = 0, );
The verifier is happy with the current released kernel: kfunc_syscall_test_fail() never dereferences the ctx pointer, it just passes it around to bpf_kfunc_call_test_mem_len_pass1(), which in turn is also happy because it says it is not accessing the data at all (0 size memory parameter).
In the current code, check_helper_mem_access() actually returns -EINVAL, but doesn't change max_ctx_offset (it's still at the value of 0 here). The program is now marked as unreliable, but the verifier goes on.
When adding this patch, if we declare a syscall eBPF (or any other function that doesn't have env->ops->convert_ctx_access), the previous "test" is failing because this ensures the syscall program has to have a valid ctx pointer. btf_check_func_arg_match() now calls check_mem_access() which basically validates the fact that the program can dereference the ctx.
So now, without the max_ctx_offset store/restore, the verifier enforces that the provided ctx is not null.
What I thought that would happen was that if we were to pass a NULL context from userspace, but the eBPF program dereferences it (or in that case have a subprog or a function call that dereferences it), then max_ctx_offset would still be set to the proper value because of that internal dereference, and so the verifier would reject with -EINVAL the call to the eBPF program.
If I add another test that has the following ebpf prog (with ctx_in being set to NULL by the userspace):
SEC("syscall") int kfunc_syscall_test_null_fail(struct syscall_test_args *args) { bpf_kfunc_call_test_mem_len_pass1(args, sizeof(*args));
return 0;
}
Then the call of the program is actually failing with -EINVAL, even with this patch.
But again, if setting from userspace a ctx of NULL with a 0 size is not considered as valid, then we can just drop that hunk and add a test to enforce it.
PTR_TO_CTX in the verifier always means a valid pointer. All code paths in the verifier assume that it's not NULL: pointer to skb, to xdp, to pt_regs, etc. The syscall prog type is a little bit special, since it makes sense not to pass any argument to such a prog. So ctx_size_in == 0 is enforced after the verification:

	if (ctx_size_in < prog->aux->max_ctx_offset ||
	    ctx_size_in > U16_MAX)
		return -EINVAL;

The verifier should be able to proceed assuming ctx != NULL and remember the max ctx offset. If max_ctx_offset == 4 and ctx_size_in == 0 then it doesn't matter whether the actual 'ctx' pointer is NULL or points to valid memory. So it's ok for the verifier to assume ctx != NULL everywhere.
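(For context, that enforcement sits in bpf_prog_test_run_syscall() in net/bpf/test_run.c; a paraphrased sketch of how a NULL ctx with ctx_size_in == 0 is handled at run time:)

```
/* paraphrased sketch of bpf_prog_test_run_syscall(), net/bpf/test_run.c */
if (ctx_size_in < prog->aux->max_ctx_offset || ctx_size_in > U16_MAX)
	return -EINVAL;

if (ctx_size_in) {
	ctx = memdup_user(ctx_in, ctx_size_in);
	if (IS_ERR(ctx))
		return PTR_ERR(ctx);
}
/* with ctx_size_in == 0, the program runs with ctx == NULL, which is only
 * safe because max_ctx_offset == 0 guarantees it never dereferences it */
```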
Back to the issue at hand. With this patch the line:

	bpf_kfunc_call_test_mem_len_pass1(args, sizeof(*args));

will be seen as access_size == sizeof(*args), right? So this part:

+	if (access_size == 0)
+		return zero_size_allowed ? 0 : -EACCES;
will be skipped and the newly added check_mem_access() will call check_ctx_access() which will call syscall_prog_is_valid_access() and it will say that any off < U16_MAX is fine and will simply record max max_ctx_offset. The ctx_size_in < prog->aux->max_ctx_offset check is done later.
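(syscall_prog_is_valid_access() itself is tiny; roughly, paraphrased from kernel/bpf/syscall.c:)

```
static bool syscall_prog_is_valid_access(int off, int size,
					 enum bpf_access_type type,
					 const struct bpf_prog *prog,
					 struct bpf_insn_access_aux *info)
{
	/* any aligned access below U16_MAX is accepted at load time; the
	 * real bound is enforced at run time by comparing ctx_size_in
	 * against the recorded max_ctx_offset */
	if (off < 0 || off >= U16_MAX)
		return false;
	if (off % size != 0)
		return false;
	return true;
}
```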
So when you're saying: "call of the program is actually failing with -EINVAL" that's the check you're referring to?
If so, everything works as expected. The verifier thinks that bpf_kfunc_call_test_mem_len_pass1() can read that many bytes from args, so it has to reject running the loaded prog in bpf_prog_test_run_syscall().
So what are you trying to achieve? Make the verifier understand that ctx can be NULL? If so, that is probably a huge undertaking. Something else?
On Wed, Aug 31, 2022 at 6:37 PM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Tue, Aug 30, 2022 at 7:29 AM Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
On Fri, Aug 26, 2022 at 3:51 AM Kumar Kartikeya Dwivedi memxor@gmail.com wrote:
On Fri, 26 Aug 2022 at 03:42, Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Wed, Aug 24, 2022 at 6:41 AM Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
When a function was trying to access data from context in a syscall eBPF program, the verifier was rejecting the call unless it was accessing the first element. This is because the syscall context is not known at compile time, and so we need to check this when actually accessing it.
Check for the valid memory access if there is no convert_ctx callback, and allow such situation to happen.
There is a slight hiccup with subprogs. btf_check_subprog_arg_match() will check that the types are matching, which is a good thing, but to have an accurate result, it hides the fact that the context register may be null. This makes env->prog->aux->max_ctx_offset being set to the size of the context, which is incompatible with a NULL context.
Solve that last problem by storing max_ctx_offset before the type check and restoring it after.
Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
changes in v9:
- rewrote the commit title and description
- made it so all functions can make use of context even if there is no convert_ctx
- remove the is_kfunc field in bpf_call_arg_meta
changes in v8:
- fixup comment
- return -EACCESS instead of -EINVAL for consistency
changes in v7:
- renamed access_t into atype
- allow zero-byte read
- check_mem_access() to the correct offset/size
new in v6
kernel/bpf/btf.c | 11 ++++++++++- kernel/bpf/verifier.c | 19 +++++++++++++++++++ 2 files changed, 29 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 903719b89238..386300f52b23 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -6443,8 +6443,8 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog, { struct bpf_prog *prog = env->prog; struct btf *btf = prog->aux->btf;
u32 btf_id, max_ctx_offset; bool is_global;
u32 btf_id; int err; if (!prog->aux->func_info)
@@ -6457,9 +6457,18 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog, if (prog->aux->func_info_aux[subprog].unreliable) return -EINVAL;
/* subprogs arguments are not actually accessing the data, we need
* to check for the types if they match.
* Store the max_ctx_offset and restore it after btf_check_func_arg_match()
* given that this function will have a side effect of changing it.
*/
max_ctx_offset = env->prog->aux->max_ctx_offset;
is_global = prog->aux->func_info_aux[subprog].linkage == BTF_FUNC_GLOBAL; err = btf_check_func_arg_match(env, btf, btf_id, regs, is_global, 0);
env->prog->aux->max_ctx_offset = max_ctx_offset;
I don't understand this. If we pass a ctx into a helper and it's going to access [0..N] bytes from it why do we need to hide it? max_ctx_offset will be used later raw_tp, tp, syscall progs to determine whether it's ok to load them. By hiding the actual size of access somebody can construct a prog that reads out of bounds. How is this related to NULL-ness property?
Same question, was just typing exactly the same thing.
The test I have that is failing in patch 2/23 is the following, with args being set to NULL by userspace:
SEC("syscall") int kfunc_syscall_test_null(struct syscall_test_args *args) { bpf_kfunc_call_test_mem_len_pass1(args, 0);
return 0;
}
Basically: if userspace declares the following: DECLARE_LIBBPF_OPTS(bpf_test_run_opts, syscall_topts, .ctx_in = NULL, .ctx_size_in = 0, );
The verifier is happy with the current released kernel: kfunc_syscall_test_fail() never dereferences the ctx pointer, it just passes it around to bpf_kfunc_call_test_mem_len_pass1(), which in turn is also happy because it says it is not accessing the data at all (0 size memory parameter).
In the current code, check_helper_mem_access() actually returns -EINVAL, but doesn't change max_ctx_offset (it's still at the value of 0 here). The program is now marked as unreliable, but the verifier goes on.
When adding this patch, if we declare a syscall eBPF (or any other function that doesn't have env->ops->convert_ctx_access), the previous "test" is failing because this ensures the syscall program has to have a valid ctx pointer. btf_check_func_arg_match() now calls check_mem_access() which basically validates the fact that the program can dereference the ctx.
So now, without the max_ctx_offset store/restore, the verifier enforces that the provided ctx is not null.
What I thought that would happen was that if we were to pass a NULL context from userspace, but the eBPF program dereferences it (or in that case have a subprog or a function call that dereferences it), then max_ctx_offset would still be set to the proper value because of that internal dereference, and so the verifier would reject with -EINVAL the call to the eBPF program.
If I add another test that has the following ebpf prog (with ctx_in being set to NULL by the userspace):
SEC("syscall") int kfunc_syscall_test_null_fail(struct syscall_test_args *args) { bpf_kfunc_call_test_mem_len_pass1(args, sizeof(*args));
return 0;
}
Then the call of the program is actually failing with -EINVAL, even with this patch.
But again, if setting from userspace a ctx of NULL with a 0 size is not considered as valid, then we can just drop that hunk and add a test to enforce it.
PTR_TO_CTX in the verifier always means valid pointer. All code paths in the verifier assumes that it's not NULL. Pointer to skb, to xdp, to pt_regs, etc. The syscall prog type is little bit special, since it makes sense not to pass any argument to such prog. So ctx_size_in == 0 is enforced after the verification: if (ctx_size_in < prog->aux->max_ctx_offset || ctx_size_in > U16_MAX) return -EINVAL; The verifier should be able to proceed assuming ctx != NULL and remember max max_ctx_offset. If max_ctx_offset == 4 and ctx_size_in == 0 then it doesn't matter whether the actual 'ctx' pointer is NULL or points to a valid memory. So it's ok for the verifier to assume ctx != NULL everywhere.
Ok, thanks for the detailed explanation.
Back to the issue at hand. With this patch the line: bpf_kfunc_call_test_mem_len_pass1(args, sizeof(*args)); will be seen as access_size == sizeof(*args), right? So this part:
if (access_size == 0)
return zero_size_allowed ? 0 : -EACCES;
will be skipped and the newly added check_mem_access() will call check_ctx_access() which will call syscall_prog_is_valid_access() and it will say that any off < U16_MAX is fine and will simply record max max_ctx_offset. The ctx_size_in < prog->aux->max_ctx_offset check is done later.
Yep, this is correct and this is working now, with a proper error (and no, this is not the error I am trying to fix, see below):
eBPF prog:
```
SEC("?syscall")
int kfunc_syscall_test_null_fail(struct syscall_test_args *args)
{
	bpf_kfunc_call_test_mem_len_pass1(args, sizeof(*args));

	return 0;
}
```
before this patch (1/23):
* with ctx not NULL:
libbpf: prog 'kfunc_syscall_test_null_fail': BPF program load failed: Invalid argument
R1 type=ctx expected=fp
arg#0 arg#1 memory, len pair leads to invalid memory access
=> this is not correct, we expect the program to be loaded (and it is expected, this is the bug that is fixed)
* Same result with ctx being NULL from the caller
With just the hunk in kernel/bpf/verifier.c (so without touching max_ctx_offset):
* with ctx not NULL: program is loaded, and executed correctly
* with ctx being NULL: program is now loaded, but execution returns -EINVAL, as expected
So this case is fully solved by just the hunk in verifier.c
With the full patch: same results, with or without ctx being set to NULL, so no side effects.
So when you're saying: "call of the program is actually failing with -EINVAL" that's the check you're referring to?
No. I am referring to the following eBPF program:
```
SEC("syscall")
int kfunc_syscall_test_null(struct syscall_test_args *args)
{
	return 0;
}
```
(no calls, just the declaration of a program)
This one is supposed to be loaded and properly run whatever the context is, right?
However, without the hunk in the btf.c file (max_ctx_offset), we have the following (ctx is set to NULL by userspace):

verify_success:FAIL:kfunc_syscall_test_null unexpected error: -22 (errno 22)
The reason is that the verifier calls btf_check_subprog_arg_match() on programs too, considers that ctx is not NULL, and bumps the max_ctx_offset value.
If so, everything works as expected.
Not exactly, we can not call a syscall program with a null context without this hunk.
The verifier thinks that bpf_kfunc_call_test_mem_len_pass1() can read that many bytes from args, so it has to reject running the loaded prog in bpf_prog_test_run_syscall().
Yes, that part works. I am focusing on the program declaration.
So what are you trying to achieve ?
See above :)
Make the verifier understand that ctx can be NULL ?
Nope. I am fine with the way it is. But any eBPF (sub)prog is checked against btf_check_subprog_arg_match(), which in turn marks all of these programs as accessing the entire ctx, even if the ctx is NULL when that case is valid.
If so that is a probably huge undertaking. Something else?
Hopefully this is clearer now.
Cheers, Benjamin
On Wed, Aug 31, 2022 at 10:56 AM Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
On Wed, Aug 31, 2022 at 6:37 PM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Tue, Aug 30, 2022 at 7:29 AM Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
On Fri, Aug 26, 2022 at 3:51 AM Kumar Kartikeya Dwivedi memxor@gmail.com wrote:
On Fri, 26 Aug 2022 at 03:42, Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Wed, Aug 24, 2022 at 6:41 AM Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
When a function was trying to access data from context in a syscall eBPF program, the verifier was rejecting the call unless it was accessing the first element. This is because the syscall context is not known at compile time, and so we need to check this when actually accessing it.
Check for the valid memory access if there is no convert_ctx callback, and allow such situation to happen.
There is a slight hiccup with subprogs. btf_check_subprog_arg_match() will check that the types are matching, which is a good thing, but to have an accurate result, it hides the fact that the context register may be null. This makes env->prog->aux->max_ctx_offset being set to the size of the context, which is incompatible with a NULL context.
Solve that last problem by storing max_ctx_offset before the type check and restoring it after.
Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
changes in v9:
- rewrote the commit title and description
- made it so all functions can make use of context even if there is no convert_ctx
- remove the is_kfunc field in bpf_call_arg_meta
changes in v8:
- fixup comment
- return -EACCESS instead of -EINVAL for consistency
changes in v7:
- renamed access_t into atype
- allow zero-byte read
- check_mem_access() to the correct offset/size
new in v6
kernel/bpf/btf.c | 11 ++++++++++- kernel/bpf/verifier.c | 19 +++++++++++++++++++ 2 files changed, 29 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 903719b89238..386300f52b23 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -6443,8 +6443,8 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog, { struct bpf_prog *prog = env->prog; struct btf *btf = prog->aux->btf;
u32 btf_id, max_ctx_offset; bool is_global;
u32 btf_id; int err; if (!prog->aux->func_info)
@@ -6457,9 +6457,18 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog, if (prog->aux->func_info_aux[subprog].unreliable) return -EINVAL;
/* subprogs arguments are not actually accessing the data, we need
* to check for the types if they match.
* Store the max_ctx_offset and restore it after btf_check_func_arg_match()
* given that this function will have a side effect of changing it.
*/
max_ctx_offset = env->prog->aux->max_ctx_offset;
is_global = prog->aux->func_info_aux[subprog].linkage == BTF_FUNC_GLOBAL; err = btf_check_func_arg_match(env, btf, btf_id, regs, is_global, 0);
env->prog->aux->max_ctx_offset = max_ctx_offset;
I don't understand this. If we pass a ctx into a helper and it's going to access [0..N] bytes from it why do we need to hide it? max_ctx_offset will be used later raw_tp, tp, syscall progs to determine whether it's ok to load them. By hiding the actual size of access somebody can construct a prog that reads out of bounds. How is this related to NULL-ness property?
Same question, was just typing exactly the same thing.
The test I have that is failing in patch 2/23 is the following, with args being set to NULL by userspace:
SEC("syscall") int kfunc_syscall_test_null(struct syscall_test_args *args) { bpf_kfunc_call_test_mem_len_pass1(args, 0);
return 0;
}
Basically: if userspace declares the following: DECLARE_LIBBPF_OPTS(bpf_test_run_opts, syscall_topts, .ctx_in = NULL, .ctx_size_in = 0, );
The verifier is happy with the current released kernel: kfunc_syscall_test_fail() never dereferences the ctx pointer, it just passes it around to bpf_kfunc_call_test_mem_len_pass1(), which in turn is also happy because it says it is not accessing the data at all (0 size memory parameter).
In the current code, check_helper_mem_access() actually returns -EINVAL, but doesn't change max_ctx_offset (it's still at the value of 0 here). The program is now marked as unreliable, but the verifier goes on.
When adding this patch, if we declare a syscall eBPF (or any other function that doesn't have env->ops->convert_ctx_access), the previous "test" is failing because this ensures the syscall program has to have a valid ctx pointer. btf_check_func_arg_match() now calls check_mem_access() which basically validates the fact that the program can dereference the ctx.
So now, without the max_ctx_offset store/restore, the verifier enforces that the provided ctx is not null.
What I thought that would happen was that if we were to pass a NULL context from userspace, but the eBPF program dereferences it (or in that case have a subprog or a function call that dereferences it), then max_ctx_offset would still be set to the proper value because of that internal dereference, and so the verifier would reject with -EINVAL the call to the eBPF program.
If I add another test that has the following ebpf prog (with ctx_in being set to NULL by the userspace):
SEC("syscall") int kfunc_syscall_test_null_fail(struct syscall_test_args *args) { bpf_kfunc_call_test_mem_len_pass1(args, sizeof(*args));
return 0;
}
Then the call of the program is actually failing with -EINVAL, even with this patch.
But again, if setting from userspace a ctx of NULL with a 0 size is not considered as valid, then we can just drop that hunk and add a test to enforce it.
PTR_TO_CTX in the verifier always means valid pointer. All code paths in the verifier assumes that it's not NULL. Pointer to skb, to xdp, to pt_regs, etc. The syscall prog type is little bit special, since it makes sense not to pass any argument to such prog. So ctx_size_in == 0 is enforced after the verification: if (ctx_size_in < prog->aux->max_ctx_offset || ctx_size_in > U16_MAX) return -EINVAL; The verifier should be able to proceed assuming ctx != NULL and remember max max_ctx_offset. If max_ctx_offset == 4 and ctx_size_in == 0 then it doesn't matter whether the actual 'ctx' pointer is NULL or points to a valid memory. So it's ok for the verifier to assume ctx != NULL everywhere.
Ok, thanks for the detailed explanation.
Back to the issue at hand. With this patch the line: bpf_kfunc_call_test_mem_len_pass1(args, sizeof(*args)); will be seen as access_size == sizeof(*args), right? So this part:
if (access_size == 0)
return zero_size_allowed ? 0 : -EACCES;
will be skipped and the newly added check_mem_access() will call check_ctx_access() which will call syscall_prog_is_valid_access() and it will say that any off < U16_MAX is fine and will simply record max max_ctx_offset. The ctx_size_in < prog->aux->max_ctx_offset check is done later.
Yep, this is correct and this is working now, with a proper error (and no, this is not the error I am trying to fix, see below):
eBPF prog:
SEC("?syscall") int kfunc_syscall_test_null_fail(struct syscall_test_args *args) { bpf_kfunc_call_test_mem_len_pass1(args, sizeof(*args)); return 0; }
before this patch (1/23):
- with ctx not NULL:
libbpf: prog 'kfunc_syscall_test_null_fail': BPF program load failed: Invalid argument R1 type=ctx expected=fp arg#0 arg#1 memory, len pair leads to invalid memory access
=> this is not correct, we expect the program to be loaded (and it is expected, this is the bug that is fixed)
- Same result with ctx being NULL from the caller
With just the hunk in kernel/bpf/verifier.c (so without touching max_ctx_offset:
- with ctx not NULL:
program is loaded, and executed correctly
- with ctx being NULL:
program is now loaded, but execution returns -EINVAL, as expected
So this case is fully solved by just the hunk in verifier.c
With the full patch: same results, with or without ctx being set to NULL, so no side effects.
So when you're saying: "call of the program is actually failing with -EINVAL" that's the check you're referring to?
No. I am referring to the following eBPF program:
SEC("syscall") int kfunc_syscall_test_null(struct syscall_test_args *args) { return 0; }
(no calls, just the declaration of a program)
This one is supposed to be loaded and properly run whatever the context is, right?
Got it. Yes. Indeed. The if (!env->ops->convert_ctx_access) hunk alone would break existing progs because of side effect of max_ctx_offset. We have this unfortunate bit of code:

	ret = btf_check_subprog_arg_match(env, subprog, regs);
	if (ret == -EFAULT)
		/* unlikely verifier bug. abort.
		 * ret == 0 and ret < 0 are sadly acceptable for
		 * main() function due to backward compatibility.
		 * Like socket filter program may be written as:
		 * int bpf_prog(struct pt_regs *ctx)
		 * and never dereference that ctx in the program.
		 * 'struct pt_regs' is a type mismatch for socket
		 * filter that should be using 'struct __sk_buff'.
		 */
		goto out;
because btf_check_subprog_arg_match() is used both to match arguments when calling into a function and when the verifier just starts to analyze a function. Before this patch btf_check_subprog_arg_match() would just return -EINVAL on your example above and proceed, but with the patch the non-zero max_ctx_offset will disallow execution later and break things.

I think we need to clean up this bit of code. Just save/restore of max_ctx_offset isn't going to work. How about adding a flag to btf_check_subprog_arg_match() to indicate whether the verifier is processing a 'call' insn or just starting to process a function body, and then do if (ptr_to_mem_ok && processing_call)? Still feels like a hack. Maybe btf_check_func_arg_match() needs to be split to disambiguate calling vs processing the body? And maybe clean up the rest of that function? Like, all of the if (is_kfunc) parts apply only to the 'calling' case. Other ideas?
On Thu, Sep 1, 2022 at 6:15 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Wed, Aug 31, 2022 at 10:56 AM Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
On Wed, Aug 31, 2022 at 6:37 PM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Tue, Aug 30, 2022 at 7:29 AM Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
On Fri, Aug 26, 2022 at 3:51 AM Kumar Kartikeya Dwivedi memxor@gmail.com wrote:
On Fri, 26 Aug 2022 at 03:42, Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
On Wed, Aug 24, 2022 at 6:41 AM Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
> [...]
> +	/* subprogs arguments are not actually accessing the data, we need
> +	 * to check for the types if they match.
> +	 * Store the max_ctx_offset and restore it after btf_check_func_arg_match()
> +	 * given that this function will have a side effect of changing it.
> +	 */
> +	max_ctx_offset = env->prog->aux->max_ctx_offset;
> +
> 	is_global = prog->aux->func_info_aux[subprog].linkage == BTF_FUNC_GLOBAL;
> 	err = btf_check_func_arg_match(env, btf, btf_id, regs, is_global, 0);
>
> +	env->prog->aux->max_ctx_offset = max_ctx_offset;
I don't understand this. If we pass a ctx into a helper and it's going to access [0..N] bytes from it why do we need to hide it? max_ctx_offset will be used later raw_tp, tp, syscall progs to determine whether it's ok to load them. By hiding the actual size of access somebody can construct a prog that reads out of bounds. How is this related to NULL-ness property?
Same question, was just typing exactly the same thing.
The test I have that is failing in patch 2/23 is the following, with args being set to NULL by userspace:
SEC("syscall") int kfunc_syscall_test_null(struct syscall_test_args *args) { bpf_kfunc_call_test_mem_len_pass1(args, 0);
return 0;
}
Basically: if userspace declares the following: DECLARE_LIBBPF_OPTS(bpf_test_run_opts, syscall_topts, .ctx_in = NULL, .ctx_size_in = 0, );
The verifier is happy with the current released kernel: kfunc_syscall_test_fail() never dereferences the ctx pointer, it just passes it around to bpf_kfunc_call_test_mem_len_pass1(), which in turn is also happy because it says it is not accessing the data at all (0 size memory parameter).
In the current code, check_helper_mem_access() actually returns -EINVAL, but doesn't change max_ctx_offset (it's still at the value of 0 here). The program is now marked as unreliable, but the verifier goes on.
When adding this patch, if we declare a syscall eBPF (or any other function that doesn't have env->ops->convert_ctx_access), the previous "test" is failing because this ensures the syscall program has to have a valid ctx pointer. btf_check_func_arg_match() now calls check_mem_access() which basically validates the fact that the program can dereference the ctx.
So now, without the max_ctx_offset store/restore, the verifier enforces that the provided ctx is not null.
What I thought that would happen was that if we were to pass a NULL context from userspace, but the eBPF program dereferences it (or in that case have a subprog or a function call that dereferences it), then max_ctx_offset would still be set to the proper value because of that internal dereference, and so the verifier would reject with -EINVAL the call to the eBPF program.
If I add another test that has the following ebpf prog (with ctx_in being set to NULL by the userspace):
SEC("syscall") int kfunc_syscall_test_null_fail(struct syscall_test_args *args) { bpf_kfunc_call_test_mem_len_pass1(args, sizeof(*args));
return 0;
}
Then the call of the program is actually failing with -EINVAL, even with this patch.
But again, if setting from userspace a ctx of NULL with a 0 size is not considered as valid, then we can just drop that hunk and add a test to enforce it.
PTR_TO_CTX in the verifier always means valid pointer. All code paths in the verifier assumes that it's not NULL. Pointer to skb, to xdp, to pt_regs, etc. The syscall prog type is little bit special, since it makes sense not to pass any argument to such prog. So ctx_size_in == 0 is enforced after the verification: if (ctx_size_in < prog->aux->max_ctx_offset || ctx_size_in > U16_MAX) return -EINVAL; The verifier should be able to proceed assuming ctx != NULL and remember max max_ctx_offset. If max_ctx_offset == 4 and ctx_size_in == 0 then it doesn't matter whether the actual 'ctx' pointer is NULL or points to a valid memory. So it's ok for the verifier to assume ctx != NULL everywhere.
Ok, thanks for the detailed explanation.
Back to the issue at hand. With this patch the line: bpf_kfunc_call_test_mem_len_pass1(args, sizeof(*args)); will be seen as access_size == sizeof(*args), right? So this part:
if (access_size == 0)
return zero_size_allowed ? 0 : -EACCES;
will be skipped and the newly added check_mem_access() will call check_ctx_access() which will call syscall_prog_is_valid_access() and it will say that any off < U16_MAX is fine and will simply record max max_ctx_offset. The ctx_size_in < prog->aux->max_ctx_offset check is done later.
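For reference, the relevant bits look roughly like this (paraphrased from the kernel sources from memory, so details may differ slightly from the exact tree):

/* kernel/bpf/syscall.c (paraphrased): any aligned offset below U16_MAX
 * is considered a valid ctx access for a syscall program.
 */
static bool syscall_prog_is_valid_access(int off, int size,
					 enum bpf_access_type type,
					 const struct bpf_prog *prog,
					 struct bpf_insn_access_aux *info)
{
	if (off < 0 || off >= U16_MAX)
		return false;
	if (off % size != 0)
		return false;
	return true;
}

/* kernel/bpf/verifier.c, check_ctx_access() (paraphrased): on success,
 * remember the highest ctx offset the program may touch.
 */
env->prog->aux->max_ctx_offset = max_t(u32, env->prog->aux->max_ctx_offset,
				       off + size);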
Yep, this is correct and this is working now, with a proper error (and no, this is not the error I am trying to fix, see below):
eBPF prog:
SEC("?syscall") int kfunc_syscall_test_null_fail(struct syscall_test_args *args) { bpf_kfunc_call_test_mem_len_pass1(args, sizeof(*args)); return 0; }
before this patch (1/23):
- with ctx not NULL:
libbpf: prog 'kfunc_syscall_test_null_fail': BPF program load failed: Invalid argument R1 type=ctx expected=fp arg#0 arg#1 memory, len pair leads to invalid memory access
=> this is not correct, we expect the program to be loaded (and this failure is expected: it is the bug that this patch fixes)
- Same result with ctx being NULL from the caller
With just the hunk in kernel/bpf/verifier.c (so without touching max_ctx_offset):
- with ctx not NULL:
program is loaded, and executed correctly
- with ctx being NULL:
program is now loaded, but execution returns -EINVAL, as expected
So this case is fully solved by just the hunk in verifier.c
With the full patch: same results, with or without ctx being set to NULL, so no side effects.
So when you're saying: "call of the program is actually failing with -EINVAL" that's the check you're referring to?
No. I am referring to the following eBPF program:
SEC("syscall") int kfunc_syscall_test_null(struct syscall_test_args *args) { return 0; }
(no calls, just the declaration of a program)
This one is supposed to be loaded and properly run whatever the context is, right?
Got it. Yes. Indeed. The if (!env->ops->convert_ctx_access) hunk alone would break existing progs because of the side effect on max_ctx_offset. We have this unfortunate bit of code:

	ret = btf_check_subprog_arg_match(env, subprog, regs);
	if (ret == -EFAULT)
		/* unlikely verifier bug. abort.
		 * ret == 0 and ret < 0 are sadly acceptable for
		 * main() function due to backward compatibility.
		 * Like socket filter program may be written as:
		 * int bpf_prog(struct pt_regs *ctx)
		 * and never dereference that ctx in the program.
		 * 'struct pt_regs' is a type mismatch for socket
		 * filter that should be using 'struct __sk_buff'.
		 */
		goto out;
because btf_check_subprog_arg_match() is used both to match arguments when calling into a function and when the verifier just starts to analyze a function. Before this patch btf_check_subprog_arg_match() would just return EINVAL on your above example and proceed, but with the patch the non-zero max_ctx_offset will disallow execution later and break things. I think we need to clean up this bit of code. Just save/restore of max_ctx_offset isn't going to work. How about adding a flag to btf_check_subprog_arg_match to indicate whether the verifier is processing a 'call' insn or just starting to process a function body, and then do if (ptr_to_mem_ok && processing_call)? Still feels like a hack. Maybe btf_check_func_arg_match() needs to be split to disambiguate calling vs processing the body?
Just to be sure I understand the problem correctly: btf_check_subprog_arg_match() is called twice only in verifier.c

- first time (in do_check_common()):

	/* 1st arg to a function */
	regs[BPF_REG_1].type = PTR_TO_CTX;
	mark_reg_known_zero(env, regs, BPF_REG_1);
	ret = btf_check_subprog_arg_match(env, subprog, regs);
AFAICT this call is the "starting to process a function body" case, so we should only check whether the function definition matches the BTF (i.e. whether the program is correctly defined or not), and it should not have side effects like changing max_ctx_offset.
- second time (in __check_func_call()):

	func_info_aux = env->prog->aux->func_info_aux;
	if (func_info_aux)
		is_global = func_info_aux[subprog].linkage == BTF_FUNC_GLOBAL;
	err = btf_check_subprog_arg_match(env, subprog, caller->regs);
This time we are in the "processing 'call' insn" part, and this is where we also need to ensure that the register we access is correctly set, so max_ctx_offset needs to be updated.
If the above is correct, then yes, it would make sense to me to have 2 distinct functions: one to check only the argument types (does the function definition in the program match the BTF), and one to check its use. Behind the scenes, btf_check_subprog_arg_match() calls btf_check_func_arg_match(), which is the one function where argument type checking is entangled with actually assessing that the provided values are correct.
I can try to split that btf_check_func_arg_match() into 2 distinct functions, though I am not sure I'll get it right. Maybe the hack about having "processing_call" for btf_check_func_arg_match() only will be good enough as a first step towards a better solution?
And maybe clean up the rest of that function? Like all of the if (is_kfunc) code applies only to the 'calling' case. Other ideas?
I was trying to understand the problem most of today, and the only other thing I could think of was "why is the assumption that PTR_TO_CTX is not NULL actually required?". But again, this question is "valid" in the function declaration part, but not in the caller insn part. So I think splitting btf_check_subprog_arg_match() in 2 is probably the best.
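To make the idea concrete, here is a minimal sketch of what the "processing_call" split could look like (an illustration of the suggestion above, not the actual patch; the function names, signatures and call sites are assumptions):

/* Sketch only: the worker takes a "processing_call" flag; type-only
 * checking at the start of a function body passes false, while handling
 * a real 'call' insn passes true, so only the latter performs the value
 * checks that can bump max_ctx_offset.
 */
static int btf_check_func_arg_match(struct bpf_verifier_env *env,
				    const struct btf *btf, u32 func_id,
				    struct bpf_reg_state *regs,
				    bool ptr_to_mem_ok,
				    bool processing_call);

/* do_check_common(): starting to verify a function body */
int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog,
				struct bpf_reg_state *regs)
{
	/* ... resolve btf, btf_id and is_global as today ... */
	return btf_check_func_arg_match(env, btf, btf_id, regs, is_global,
					false /* processing_call */);
}

/* __check_func_call(): handling an actual 'call' insn */
int btf_check_subprog_call(struct bpf_verifier_env *env, int subprog,
			   struct bpf_reg_state *regs)
{
	/* ... same resolution ... */
	return btf_check_func_arg_match(env, btf, btf_id, regs, is_global,
					true /* processing_call */);
}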
Cheers, Benjamin
On Thu, 1 Sept 2022 at 18:48, Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
[...] If the above is correct, then yes, it would make sense to me to have 2 distinct functions: one to check only the argument types (does the function definition in the program match the BTF), and one to check its use. Behind the scenes, btf_check_subprog_arg_match() calls btf_check_func_arg_match(), which is the one function where argument type checking is entangled with actually assessing that the provided values are correct.
I can try to split that btf_check_func_arg_match() into 2 distinct functions, though I am not sure I'll get it right.
FYI, I've already split them into separate functions in my tree, because it had become super ugly at this point with all the new support; I refactored it to add the linked list helpers support using kfuncs (which requires some special handling for the args). So I think you can just leave it with a "processing_call" check in for your series for now.
Maybe the hack about having "processing_call" for btf_check_func_arg_match() only will be good enough as a first step towards a better solution?
And maybe clean up the rest of that function? Like all of the if (is_kfunc) code applies only to the 'calling' case. Other ideas?
I was trying to understand the problem most of today, and the only other thing I could think of was "why is the assumption that PTR_TO_CTX is not NULL actually required?". But again, this question is "valid" in the function declaration part, but not in the caller insn part. So I think splitting btf_check_subprog_arg_match() in 2 is probably the best.
Cheers, Benjamin
On Fri, Sep 2, 2022 at 5:50 AM Kumar Kartikeya Dwivedi memxor@gmail.com wrote:
On Thu, 1 Sept 2022 at 18:48, Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
[...] If the above is correct, then yes, it would make sense to me to have 2 distinct functions: one to check only the argument types (does the function definition in the program match the BTF), and one to check its use. Behind the scenes, btf_check_subprog_arg_match() calls btf_check_func_arg_match(), which is the one function where argument type checking is entangled with actually assessing that the provided values are correct.
I can try to split that btf_check_func_arg_match() into 2 distinct functions, though I am not sure I'll get it right.
FYI, I've already split them into separate functions in my tree, because it had become super ugly at this point with all the new support; I refactored it to add the linked list helpers support using kfuncs (which requires some special handling for the args). So I think you can just leave it with a "processing_call" check in for your series for now.
Great, thanks a lot. Actually, writing the patch today with "processing_call" was really easy now that I had turned the problem over in my head a lot yesterday.
I am about to send v10 with the reviews addressed.
Cheers, Benjamin
The purpose of this clear is to prevent meta->raw_mode from being evaluated as true, but it also prevents forwarding any other data to the other callees.
Only switch raw_mode back to false so we don't entirely clear meta.
Acked-by: Yonghong Song yhs@fb.com Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
---
no changes in v9
no changes in v8
no changes in v7
new in v6
---
 kernel/bpf/verifier.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index d694f43ab911..13190487fb12 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -5287,7 +5287,7 @@ static int check_mem_size_reg(struct bpf_verifier_env *env, * initialize all the memory that the helper could * just partially fill up. */ - meta = NULL; + meta->raw_mode = false;
if (reg->smin_value < 0) { verbose(env, "R%d min value is negative, either use unsigned or 'var &= const'\n",
On Wed, 24 Aug 2022 at 15:41, Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
The purpose of this clear is to prevent meta->raw_mode from being evaluated as true, but it also prevents forwarding any other data to the other callees.
Only switch raw_mode back to false so we don't entirely clear meta.
Acked-by: Yonghong Song yhs@fb.com Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
no changes in v9
no changes in v8
no changes in v7
new in v6
kernel/bpf/verifier.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index d694f43ab911..13190487fb12 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -5287,7 +5287,7 @@ static int check_mem_size_reg(struct bpf_verifier_env *env, * initialize all the memory that the helper could * just partially fill up. */
meta = NULL;
meta->raw_mode = false;
But this is adding a side effect: the caller's meta->raw_mode becomes false, which the caller may not expect...
if (reg->smin_value < 0) { verbose(env, "R%d min value is negative, either use unsigned or 'var &= const'\n",
-- 2.36.1
On Fri, Aug 26, 2022 at 3:55 AM Kumar Kartikeya Dwivedi memxor@gmail.com wrote:
On Wed, 24 Aug 2022 at 15:41, Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
The purpose of this clear is to prevent meta->raw_mode from being evaluated as true, but it also prevents forwarding any other data to the other callees.
Only switch raw_mode back to false so we don't entirely clear meta.
Acked-by: Yonghong Song yhs@fb.com Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
no changes in v9
no changes in v8
no changes in v7
new in v6
kernel/bpf/verifier.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index d694f43ab911..13190487fb12 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -5287,7 +5287,7 @@ static int check_mem_size_reg(struct bpf_verifier_env *env, * initialize all the memory that the helper could * just partially fill up. */
meta = NULL;
meta->raw_mode = false;
But this is adding a side effect: the caller's meta->raw_mode becomes false, which the caller may not expect...
Turns out that I don't need that patch anymore because I am not checking against is_kfunc in the previous patch. So dropping it from the next revision.
Cheers, Benjamin
if (reg->smin_value < 0) { verbose(env, "R%d min value is negative, either use unsigned or 'var &= const'\n",
-- 2.36.1
We need to also export the kfunc set to the syscall program type, and then add a couple of eBPF programs that test those calls.
The first one checks for valid accesses, and the second one is OK from a static analysis point of view but fails at run time because we are trying to access memory outside of the allocated buffer.
Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
---
no changes in v9
no changes in v8
changes in v7:
- add 1 more case to ensure we can read the entire sizeof(ctx)
- add a test case for when the context is NULL
new in v6
---
 net/bpf/test_run.c | 1 +
 .../selftests/bpf/prog_tests/kfunc_call.c | 28 +++++++++++++++
 .../selftests/bpf/progs/kfunc_call_test.c | 36 +++++++++++++++++++
 3 files changed, 65 insertions(+)
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index 25d8ecf105aa..f16baf977a21 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -1634,6 +1634,7 @@ static int __init bpf_prog_test_run_init(void)
ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_prog_test_kfunc_set); ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_prog_test_kfunc_set); + ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SYSCALL, &bpf_prog_test_kfunc_set); return ret ?: register_btf_id_dtor_kfuncs(bpf_prog_test_dtor_kfunc, ARRAY_SIZE(bpf_prog_test_dtor_kfunc), THIS_MODULE); diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c index eede7c304f86..1edad012fe01 100644 --- a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c +++ b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c @@ -9,10 +9,22 @@
#include "cap_helpers.h"
+struct syscall_test_args { + __u8 data[16]; + size_t size; +}; + static void test_main(void) { struct kfunc_call_test_lskel *skel; int prog_fd, err; + struct syscall_test_args args = { + .size = 10, + }; + DECLARE_LIBBPF_OPTS(bpf_test_run_opts, syscall_topts, + .ctx_in = &args, + .ctx_size_in = sizeof(args), + ); LIBBPF_OPTS(bpf_test_run_opts, topts, .data_in = &pkt_v4, .data_size_in = sizeof(pkt_v4), @@ -38,6 +50,22 @@ static void test_main(void) ASSERT_OK(err, "bpf_prog_test_run(test_ref_btf_id)"); ASSERT_EQ(topts.retval, 0, "test_ref_btf_id-retval");
+ prog_fd = skel->progs.kfunc_syscall_test.prog_fd; + err = bpf_prog_test_run_opts(prog_fd, &syscall_topts); + ASSERT_OK(err, "bpf_prog_test_run(syscall_test)"); + + prog_fd = skel->progs.kfunc_syscall_test_fail.prog_fd; + err = bpf_prog_test_run_opts(prog_fd, &syscall_topts); + ASSERT_ERR(err, "bpf_prog_test_run(syscall_test_fail)"); + + syscall_topts.ctx_in = NULL; + syscall_topts.ctx_size_in = 0; + + prog_fd = skel->progs.kfunc_syscall_test_null.prog_fd; + err = bpf_prog_test_run_opts(prog_fd, &syscall_topts); + ASSERT_OK(err, "bpf_prog_test_run(syscall_test_null)"); + ASSERT_EQ(syscall_topts.retval, 0, "syscall_test_null-retval"); + kfunc_call_test_lskel__destroy(skel); }
diff --git a/tools/testing/selftests/bpf/progs/kfunc_call_test.c b/tools/testing/selftests/bpf/progs/kfunc_call_test.c index 5aecbb9fdc68..da7ae0ef9100 100644 --- a/tools/testing/selftests/bpf/progs/kfunc_call_test.c +++ b/tools/testing/selftests/bpf/progs/kfunc_call_test.c @@ -92,4 +92,40 @@ int kfunc_call_test_pass(struct __sk_buff *skb) return 0; }
+struct syscall_test_args { + __u8 data[16]; + size_t size; +}; + +SEC("syscall") +int kfunc_syscall_test(struct syscall_test_args *args) +{ + const int size = args->size; + + if (size > sizeof(args->data)) + return -7; /* -E2BIG */ + + bpf_kfunc_call_test_mem_len_pass1(&args->data, sizeof(args->data)); + bpf_kfunc_call_test_mem_len_pass1(&args->data, sizeof(*args)); + bpf_kfunc_call_test_mem_len_pass1(&args->data, size); + + return 0; +} + +SEC("syscall") +int kfunc_syscall_test_null(struct syscall_test_args *args) +{ + bpf_kfunc_call_test_mem_len_pass1(args, 0); + + return 0; +} + +SEC("syscall") +int kfunc_syscall_test_fail(struct syscall_test_args *args) +{ + bpf_kfunc_call_test_mem_len_pass1(&args->data, sizeof(*args) + 1); + + return 0; +} + char _license[] SEC("license") = "GPL";
On Wed, 24 Aug 2022 at 15:41, Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
We need to also export the kfunc set to the syscall program type, and then add a couple of eBPF programs that test those calls.
The first one checks for valid accesses, and the second one is OK from a static analysis point of view but fails at run time because we are trying to access memory outside of the allocated buffer.
Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
no changes in v9
no changes in v8
changes in v7:
- add 1 more case to ensure we can read the entire sizeof(ctx)
- add a test case for when the context is NULL
new in v6
net/bpf/test_run.c | 1 + .../selftests/bpf/prog_tests/kfunc_call.c | 28 +++++++++++++++ .../selftests/bpf/progs/kfunc_call_test.c | 36 +++++++++++++++++++ 3 files changed, 65 insertions(+)
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index 25d8ecf105aa..f16baf977a21 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -1634,6 +1634,7 @@ static int __init bpf_prog_test_run_init(void)
ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_prog_test_kfunc_set); ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_prog_test_kfunc_set);
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SYSCALL, &bpf_prog_test_kfunc_set); return ret ?: register_btf_id_dtor_kfuncs(bpf_prog_test_dtor_kfunc, ARRAY_SIZE(bpf_prog_test_dtor_kfunc), THIS_MODULE);
diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c index eede7c304f86..1edad012fe01 100644 --- a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c +++ b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c @@ -9,10 +9,22 @@
#include "cap_helpers.h"
+struct syscall_test_args {
__u8 data[16];
size_t size;
+};
static void test_main(void) { struct kfunc_call_test_lskel *skel; int prog_fd, err;
struct syscall_test_args args = {
.size = 10,
};
DECLARE_LIBBPF_OPTS(bpf_test_run_opts, syscall_topts,
.ctx_in = &args,
.ctx_size_in = sizeof(args),
); LIBBPF_OPTS(bpf_test_run_opts, topts, .data_in = &pkt_v4, .data_size_in = sizeof(pkt_v4),
@@ -38,6 +50,22 @@ static void test_main(void) ASSERT_OK(err, "bpf_prog_test_run(test_ref_btf_id)"); ASSERT_EQ(topts.retval, 0, "test_ref_btf_id-retval");
prog_fd = skel->progs.kfunc_syscall_test.prog_fd;
err = bpf_prog_test_run_opts(prog_fd, &syscall_topts);
ASSERT_OK(err, "bpf_prog_test_run(syscall_test)");
prog_fd = skel->progs.kfunc_syscall_test_fail.prog_fd;
err = bpf_prog_test_run_opts(prog_fd, &syscall_topts);
ASSERT_ERR(err, "bpf_prog_test_run(syscall_test_fail)");
It would be better to assert on the verifier error string, to make sure we continue actually testing the error we care about and not something else.
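One way to do that is roughly the following sketch (not part of the series): it assumes a libbpf (non-light) skeleton as introduced later in the series, uses one of the expected-to-fail programs added later in this thread, and the expected substring is a placeholder, not the real verifier message.

/* Sketch: give the failing program its own log buffer so the test can
 * assert on the verifier message, not only on the error code.
 */
static char log_buf[8192];

static void check_expected_verifier_error(void)
{
	struct kfunc_call_test *skel;
	int err;

	skel = kfunc_call_test__open();
	if (!ASSERT_OK_PTR(skel, "kfunc_call_test__open"))
		return;

	bpf_program__set_autoload(skel->progs.kfunc_call_test_get_mem_fail1, true);
	bpf_program__set_log_buf(skel->progs.kfunc_call_test_get_mem_fail1,
				 log_buf, sizeof(log_buf));

	err = kfunc_call_test__load(skel);
	ASSERT_ERR(err, "kfunc_call_test__load");
	/* placeholder string: adjust to the actual verifier error */
	ASSERT_HAS_SUBSTR(log_buf, "invalid memory access", "verifier log");

	kfunc_call_test__destroy(skel);
}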
syscall_topts.ctx_in = NULL;
syscall_topts.ctx_size_in = 0;
prog_fd = skel->progs.kfunc_syscall_test_null.prog_fd;
err = bpf_prog_test_run_opts(prog_fd, &syscall_topts);
ASSERT_OK(err, "bpf_prog_test_run(syscall_test_null)");
ASSERT_EQ(syscall_topts.retval, 0, "syscall_test_null-retval");
kfunc_call_test_lskel__destroy(skel);
}
diff --git a/tools/testing/selftests/bpf/progs/kfunc_call_test.c b/tools/testing/selftests/bpf/progs/kfunc_call_test.c index 5aecbb9fdc68..da7ae0ef9100 100644 --- a/tools/testing/selftests/bpf/progs/kfunc_call_test.c +++ b/tools/testing/selftests/bpf/progs/kfunc_call_test.c @@ -92,4 +92,40 @@ int kfunc_call_test_pass(struct __sk_buff *skb) return 0; }
+struct syscall_test_args {
__u8 data[16];
size_t size;
+};
+SEC("syscall") +int kfunc_syscall_test(struct syscall_test_args *args) +{
const int size = args->size;
if (size > sizeof(args->data))
return -7; /* -E2BIG */
bpf_kfunc_call_test_mem_len_pass1(&args->data, sizeof(args->data));
bpf_kfunc_call_test_mem_len_pass1(&args->data, sizeof(*args));
bpf_kfunc_call_test_mem_len_pass1(&args->data, size);
return 0;
+}
+SEC("syscall") +int kfunc_syscall_test_null(struct syscall_test_args *args) +{
bpf_kfunc_call_test_mem_len_pass1(args, 0);
Where is it testing 'NULL'? It is testing zero_size_allowed.
return 0;
+}
+SEC("syscall") +int kfunc_syscall_test_fail(struct syscall_test_args *args) +{
bpf_kfunc_call_test_mem_len_pass1(&args->data, sizeof(*args) + 1);
return 0;
+}
char _license[] SEC("license") = "GPL";
2.36.1
For drivers (outside of networking), the incoming data is not statically defined in a struct. Most of the time the data buffer is kzalloc-ed, and thus we cannot rely on eBPF and BTF to explore the data.

This commit allows a kfunc to return arbitrary memory previously allocated by the driver. An interesting extra point is that the kfunc can mark the exported memory region as read-only or read/write.

So, when a kfunc is not returning a pointer to a struct but to a plain type, we can consider it to be valid allocated memory assuming that:
- one of the arguments is either called rdonly_buf_size or rdwr_buf_size
- and this argument is a const from the caller point of view
We can then use this parameter as the size of the allocated memory.
The memory is either read-only or read-write based on the name of the size parameter.
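As an illustration of this convention, a kfunc on the kernel side could look like the following (made-up names, not part of this patch; the real test kfuncs come later in the series):

/* Illustration only: because the size parameter is named "rdwr_buf_size"
 * and must be a constant at the BPF call site, the verifier can mark the
 * returned pointer as a read/write PTR_TO_MEM of that many bytes.
 * Naming it "rdonly_buf_size" would make the region read-only instead.
 * Such a kfunc would typically be registered with KF_RET_NULL.
 */
struct example_dev {
	u8 *buf;	/* kzalloc-ed by the driver */
	int buf_size;
};

noinline u8 *example_get_buf(struct example_dev *dev, const int rdwr_buf_size)
{
	if (rdwr_buf_size > dev->buf_size)
		return NULL;

	return dev->buf;
}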
Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
---
changes in v9:
- updated to match upstream (replaced kfunc_flag by a field in kfunc_meta)
no changes in v8
changes in v7:
- ensures btf_type_is_struct_ptr() checks for a ptr first (squashed from next commit)
- remove multiple_ref_obj_id need
- use btf_type_skip_modifiers instead of manually doing it in btf_type_is_struct_ptr()
- s/strncmp/strcmp/ in btf_is_kfunc_arg_mem_size()
- check for tnum_is_const when retrieving the size value
- have only one check for "Ensure only one argument is referenced PTR_TO_BTF_ID"
- add some more context to the commit message

changes in v6:
- code review from Kartikeya:
- remove comment change that had no reasons to be
- remove handling of PTR_TO_MEM with kfunc releases
- introduce struct bpf_kfunc_arg_meta
- do rdonly/rdwr_buf_size check in btf_check_kfunc_arg_match
- reverted most of the changes in verifier.c
- make sure kfunc acquire is using a struct pointer, not just a plain pointer
- also forward ref_obj_id to PTR_TO_MEM in kfunc to not use after free the allocated memory

changes in v5:
- updated PTR_TO_MEM comment in btf.c to match upstream
- make it read-only or read-write based on the name of size
new in v4
change btf.h
fix allow kfunc to return an allocated mem
---
 include/linux/bpf.h | 9 +++-
 include/linux/btf.h | 10 +++++
 kernel/bpf/btf.c | 98 ++++++++++++++++++++++++++++++++++---------
 kernel/bpf/verifier.c | 43 +++++++++++++------
 4 files changed, 128 insertions(+), 32 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 39bd36359c1e..90dd218e0199 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1932,13 +1932,20 @@ int btf_distill_func_proto(struct bpf_verifier_log *log, const char *func_name, struct btf_func_model *m);
+struct bpf_kfunc_arg_meta { + u64 r0_size; + bool r0_rdonly; + int ref_obj_id; + u32 flags; +}; + struct bpf_reg_state; int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog, struct bpf_reg_state *regs); int btf_check_kfunc_arg_match(struct bpf_verifier_env *env, const struct btf *btf, u32 func_id, struct bpf_reg_state *regs, - u32 kfunc_flags); + struct bpf_kfunc_arg_meta *meta); int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog, struct bpf_reg_state *reg); int btf_check_type_match(struct bpf_verifier_log *log, const struct bpf_prog *prog, diff --git a/include/linux/btf.h b/include/linux/btf.h index ad93c2d9cc1c..1fcc833a8690 100644 --- a/include/linux/btf.h +++ b/include/linux/btf.h @@ -441,4 +441,14 @@ static inline int register_btf_id_dtor_kfuncs(const struct btf_id_dtor_kfunc *dt } #endif
+static inline bool btf_type_is_struct_ptr(struct btf *btf, const struct btf_type *t) +{ + if (!btf_type_is_ptr(t)) + return false; + + t = btf_type_skip_modifiers(btf, t->type, NULL); + + return btf_type_is_struct(t); +} + #endif diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 386300f52b23..c0057ad1088f 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -6166,11 +6166,36 @@ static bool is_kfunc_arg_mem_size(const struct btf *btf, return true; }
+static bool btf_is_kfunc_arg_mem_size(const struct btf *btf, + const struct btf_param *arg, + const struct bpf_reg_state *reg, + const char *name) +{ + int len, target_len = strlen(name); + const struct btf_type *t; + const char *param_name; + + t = btf_type_skip_modifiers(btf, arg->type, NULL); + if (!btf_type_is_scalar(t) || reg->type != SCALAR_VALUE) + return false; + + param_name = btf_name_by_offset(btf, arg->name_off); + if (str_is_empty(param_name)) + return false; + len = strlen(param_name); + if (len != target_len) + return false; + if (strcmp(param_name, name)) + return false; + + return true; +} + static int btf_check_func_arg_match(struct bpf_verifier_env *env, const struct btf *btf, u32 func_id, struct bpf_reg_state *regs, bool ptr_to_mem_ok, - u32 kfunc_flags) + struct bpf_kfunc_arg_meta *kfunc_meta) { enum bpf_prog_type prog_type = resolve_prog_type(env->prog); bool rel = false, kptr_get = false, trusted_arg = false; @@ -6207,12 +6232,12 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, return -EINVAL; }
- if (is_kfunc) { + if (is_kfunc && kfunc_meta) { /* Only kfunc can be release func */ - rel = kfunc_flags & KF_RELEASE; - kptr_get = kfunc_flags & KF_KPTR_GET; - trusted_arg = kfunc_flags & KF_TRUSTED_ARGS; - sleepable = kfunc_flags & KF_SLEEPABLE; + rel = kfunc_meta->flags & KF_RELEASE; + kptr_get = kfunc_meta->flags & KF_KPTR_GET; + trusted_arg = kfunc_meta->flags & KF_TRUSTED_ARGS; + sleepable = kfunc_meta->flags & KF_SLEEPABLE; }
/* check that BTF function arguments match actual types that the @@ -6225,6 +6250,35 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
t = btf_type_skip_modifiers(btf, args[i].type, NULL); if (btf_type_is_scalar(t)) { + if (is_kfunc && kfunc_meta) { + bool is_buf_size = false; + + /* check for any const scalar parameter of name "rdonly_buf_size" + * or "rdwr_buf_size" + */ + if (btf_is_kfunc_arg_mem_size(btf, &args[i], reg, + "rdonly_buf_size")) { + kfunc_meta->r0_rdonly = true; + is_buf_size = true; + } else if (btf_is_kfunc_arg_mem_size(btf, &args[i], reg, + "rdwr_buf_size")) + is_buf_size = true; + + if (is_buf_size) { + if (kfunc_meta->r0_size) { + bpf_log(log, "2 or more rdonly/rdwr_buf_size parameters for kfunc"); + return -EINVAL; + } + + if (!tnum_is_const(reg->var_off)) { + bpf_log(log, "R%d is not a const\n", regno); + return -EINVAL; + } + + kfunc_meta->r0_size = reg->var_off.value; + } + } + if (reg->type == SCALAR_VALUE) continue; bpf_log(log, "R%d is not a scalar\n", regno); @@ -6255,6 +6309,19 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, if (ret < 0) return ret;
+ if (is_kfunc && reg->type == PTR_TO_BTF_ID) { + /* Ensure only one argument is referenced PTR_TO_BTF_ID */ + if (reg->ref_obj_id) { + if (ref_obj_id) { + bpf_log(log, "verifier internal error: more than one arg with ref_obj_id R%d %u %u\n", + regno, reg->ref_obj_id, ref_obj_id); + return -EFAULT; + } + ref_regno = regno; + ref_obj_id = reg->ref_obj_id; + } + } + /* kptr_get is only true for kfunc */ if (i == 0 && kptr_get) { struct bpf_map_value_off_desc *off_desc; @@ -6327,16 +6394,6 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, if (reg->type == PTR_TO_BTF_ID) { reg_btf = reg->btf; reg_ref_id = reg->btf_id; - /* Ensure only one argument is referenced PTR_TO_BTF_ID */ - if (reg->ref_obj_id) { - if (ref_obj_id) { - bpf_log(log, "verifier internal error: more than one arg with ref_obj_id R%d %u %u\n", - regno, reg->ref_obj_id, ref_obj_id); - return -EFAULT; - } - ref_regno = regno; - ref_obj_id = reg->ref_obj_id; - } } else { reg_btf = btf_vmlinux; reg_ref_id = *reg2btf_ids[base_type(reg->type)]; @@ -6427,6 +6484,9 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, return -EINVAL; }
+ if (kfunc_meta && ref_obj_id) + kfunc_meta->ref_obj_id = ref_obj_id; + /* returns argument register number > 0 in case of reference release kfunc */ return rel ? ref_regno : 0; } @@ -6465,7 +6525,7 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog, max_ctx_offset = env->prog->aux->max_ctx_offset;
is_global = prog->aux->func_info_aux[subprog].linkage == BTF_FUNC_GLOBAL; - err = btf_check_func_arg_match(env, btf, btf_id, regs, is_global, 0); + err = btf_check_func_arg_match(env, btf, btf_id, regs, is_global, NULL);
env->prog->aux->max_ctx_offset = max_ctx_offset;
@@ -6481,9 +6541,9 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog, int btf_check_kfunc_arg_match(struct bpf_verifier_env *env, const struct btf *btf, u32 func_id, struct bpf_reg_state *regs, - u32 kfunc_flags) + struct bpf_kfunc_arg_meta *meta) { - return btf_check_func_arg_match(env, btf, func_id, regs, true, kfunc_flags); + return btf_check_func_arg_match(env, btf, func_id, regs, true, meta); }
/* Convert BTF of a function into bpf_reg_state if possible diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 13190487fb12..cd50850e139d 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -7576,6 +7576,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, { const struct btf_type *t, *func, *func_proto, *ptr_type; struct bpf_reg_state *regs = cur_regs(env); + struct bpf_kfunc_arg_meta meta = { 0 }; const char *func_name, *ptr_type_name; u32 i, nargs, func_id, ptr_type_id; int err, insn_idx = *insn_idx_p; @@ -7610,8 +7611,10 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
acq = *kfunc_flags & KF_ACQUIRE;
+ meta.flags = *kfunc_flags; + /* Check the arguments */ - err = btf_check_kfunc_arg_match(env, desc_btf, func_id, regs, *kfunc_flags); + err = btf_check_kfunc_arg_match(env, desc_btf, func_id, regs, &meta); if (err < 0) return err; /* In case of release function, we get register number of refcounted @@ -7632,7 +7635,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, /* Check return type */ t = btf_type_skip_modifiers(desc_btf, func_proto->type, NULL);
- if (acq && !btf_type_is_ptr(t)) { + if (acq && !btf_type_is_struct_ptr(desc_btf, t)) { verbose(env, "acquire kernel function does not return PTR_TO_BTF_ID\n"); return -EINVAL; } @@ -7644,17 +7647,33 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, ptr_type = btf_type_skip_modifiers(desc_btf, t->type, &ptr_type_id); if (!btf_type_is_struct(ptr_type)) { - ptr_type_name = btf_name_by_offset(desc_btf, - ptr_type->name_off); - verbose(env, "kernel function %s returns pointer type %s %s is not supported\n", - func_name, btf_type_str(ptr_type), - ptr_type_name); - return -EINVAL; + if (!meta.r0_size) { + ptr_type_name = btf_name_by_offset(desc_btf, + ptr_type->name_off); + verbose(env, + "kernel function %s returns pointer type %s %s is not supported\n", + func_name, + btf_type_str(ptr_type), + ptr_type_name); + return -EINVAL; + } + + mark_reg_known_zero(env, regs, BPF_REG_0); + regs[BPF_REG_0].type = PTR_TO_MEM; + regs[BPF_REG_0].mem_size = meta.r0_size; + + if (meta.r0_rdonly) + regs[BPF_REG_0].type |= MEM_RDONLY; + + /* Ensures we don't access the memory after a release_reference() */ + if (meta.ref_obj_id) + regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id; + } else { + mark_reg_known_zero(env, regs, BPF_REG_0); + regs[BPF_REG_0].btf = desc_btf; + regs[BPF_REG_0].type = PTR_TO_BTF_ID; + regs[BPF_REG_0].btf_id = ptr_type_id; } - mark_reg_known_zero(env, regs, BPF_REG_0); - regs[BPF_REG_0].btf = desc_btf; - regs[BPF_REG_0].type = PTR_TO_BTF_ID; - regs[BPF_REG_0].btf_id = ptr_type_id; if (*kfunc_flags & KF_RET_NULL) { regs[BPF_REG_0].type |= PTR_MAYBE_NULL; /* For mark_ptr_or_null_reg, see 93c230e3f5bd6 */
On Wed, 24 Aug 2022 at 15:41, Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
For drivers (outside of networking), the incoming data is not statically defined in a struct. Most of the time the data buffer is kzalloc-ed, and thus we cannot rely on eBPF and BTF to explore the data.
This commit allows a kfunc to return arbitrary memory previously allocated by the driver. An interesting extra point is that the kfunc can mark the exported memory region as read-only or read/write.
So, when a kfunc is not returning a pointer to a struct but to a plain type, we can consider it to be valid allocated memory assuming that:
- one of the arguments is either called rdonly_buf_size or rdwr_buf_size
- and this argument is a const from the caller point of view
We can then use this parameter as the size of the allocated memory.
The memory is either read-only or read-write based on the name of the size parameter.
Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
changes in v9:
- updated to match upstream (replaced kfunc_flag by a field in kfunc_meta)
no changes in v8
changes in v7:
- ensures btf_type_is_struct_ptr() checks for a ptr first (squashed from next commit)
- remove multiple_ref_obj_id need
- use btf_type_skip_modifiers instead of manually doing it in btf_type_is_struct_ptr()
- s/strncmp/strcmp/ in btf_is_kfunc_arg_mem_size()
- check for tnum_is_const when retrieving the size value
- have only one check for "Ensure only one argument is referenced PTR_TO_BTF_ID"
- add some more context to the commit message
changes in v6:
- code review from Kartikeya:
- remove comment change that had no reasons to be
- remove handling of PTR_TO_MEM with kfunc releases
- introduce struct bpf_kfunc_arg_meta
- do rdonly/rdwr_buf_size check in btf_check_kfunc_arg_match
- reverted most of the changes in verifier.c
- make sure kfunc acquire is using a struct pointer, not just a plain pointer
- also forward ref_obj_id to PTR_TO_MEM in kfunc to not use after free the allocated memory
changes in v5:
- updated PTR_TO_MEM comment in btf.c to match upstream
- make it read-only or read-write based on the name of size
new in v4
change btf.h
fix allow kfunc to return an allocated mem
include/linux/bpf.h | 9 +++- include/linux/btf.h | 10 +++++ kernel/bpf/btf.c | 98 ++++++++++++++++++++++++++++++++++--------- kernel/bpf/verifier.c | 43 +++++++++++++------ 4 files changed, 128 insertions(+), 32 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 39bd36359c1e..90dd218e0199 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1932,13 +1932,20 @@ int btf_distill_func_proto(struct bpf_verifier_log *log, const char *func_name, struct btf_func_model *m); [...]
static int btf_check_func_arg_match(struct bpf_verifier_env *env, const struct btf *btf, u32 func_id, struct bpf_reg_state *regs, bool ptr_to_mem_ok,
u32 kfunc_flags)
struct bpf_kfunc_arg_meta *kfunc_meta)
{ enum bpf_prog_type prog_type = resolve_prog_type(env->prog); bool rel = false, kptr_get = false, trusted_arg = false; @@ -6207,12 +6232,12 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, return -EINVAL; }
if (is_kfunc) {
if (is_kfunc && kfunc_meta) { /* Only kfunc can be release func */
rel = kfunc_flags & KF_RELEASE;
kptr_get = kfunc_flags & KF_KPTR_GET;
trusted_arg = kfunc_flags & KF_TRUSTED_ARGS;
sleepable = kfunc_flags & KF_SLEEPABLE;
rel = kfunc_meta->flags & KF_RELEASE;
kptr_get = kfunc_meta->flags & KF_KPTR_GET;
trusted_arg = kfunc_meta->flags & KF_TRUSTED_ARGS;
sleepable = kfunc_meta->flags & KF_SLEEPABLE; } /* check that BTF function arguments match actual types that the
@@ -6225,6 +6250,35 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
t = btf_type_skip_modifiers(btf, args[i].type, NULL); if (btf_type_is_scalar(t)) {
if (is_kfunc && kfunc_meta) {
bool is_buf_size = false;
/* check for any const scalar parameter of name "rdonly_buf_size"
* or "rdwr_buf_size"
*/
if (btf_is_kfunc_arg_mem_size(btf, &args[i], reg,
"rdonly_buf_size")) {
kfunc_meta->r0_rdonly = true;
is_buf_size = true;
} else if (btf_is_kfunc_arg_mem_size(btf, &args[i], reg,
"rdwr_buf_size"))
is_buf_size = true;
if (is_buf_size) {
if (kfunc_meta->r0_size) {
bpf_log(log, "2 or more rdonly/rdwr_buf_size parameters for kfunc");
return -EINVAL;
}
if (!tnum_is_const(reg->var_off)) {
bpf_log(log, "R%d is not a const\n", regno);
return -EINVAL;
}
kfunc_meta->r0_size = reg->var_off.value;
Sorry for not pointing it out before, but you will need a call to mark_chain_precision here after this, since the value of the scalar is being used to decide the size of the returned pointer.
}
}
if (reg->type == SCALAR_VALUE) continue; bpf_log(log, "R%d is not a scalar\n", regno);
@@ -6255,6 +6309,19 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, if (ret < 0) return ret;
if (is_kfunc && reg->type == PTR_TO_BTF_ID) {
I think you can drop this extra check (reg->type == PTR_TO_BTF_ID); this condition of only one ref_obj_id should hold regardless of the type.
[...]
On Fri, Aug 26, 2022 at 3:25 AM Kumar Kartikeya Dwivedi memxor@gmail.com wrote:
On Wed, 24 Aug 2022 at 15:41, Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
For drivers (outside of networking), the incoming data is not statically defined in a struct. Most of the time the data buffer is kzalloc-ed, and thus we cannot rely on eBPF and BTF to explore the data.
This commit allows a kfunc to return arbitrary memory previously allocated by the driver. An interesting extra point is that the kfunc can mark the exported memory region as read-only or read/write.
So, when a kfunc is not returning a pointer to a struct but to a plain type, we can consider it to be valid allocated memory assuming that:
- one of the arguments is either called rdonly_buf_size or rdwr_buf_size
- and this argument is a const from the caller point of view
We can then use this parameter as the size of the allocated memory.
The memory is either read-only or read-write based on the name of the size parameter.
Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
changes in v9:
- updated to match upstream (replaced kfunc_flag by a field in kfunc_meta)
no changes in v8
changes in v7:
- ensures btf_type_is_struct_ptr() checks for a ptr first (squashed from next commit)
- remove multiple_ref_obj_id need
- use btf_type_skip_modifiers instead of manually doing it in btf_type_is_struct_ptr()
- s/strncmp/strcmp/ in btf_is_kfunc_arg_mem_size()
- check for tnum_is_const when retrieving the size value
- have only one check for "Ensure only one argument is referenced PTR_TO_BTF_ID"
- add some more context to the commit message
changes in v6:
- code review from Kartikeya:
- remove comment change that had no reasons to be
- remove handling of PTR_TO_MEM with kfunc releases
- introduce struct bpf_kfunc_arg_meta
- do rdonly/rdwr_buf_size check in btf_check_kfunc_arg_match
- reverted most of the changes in verifier.c
- make sure kfunc acquire is using a struct pointer, not just a plain pointer
- also forward ref_obj_id to PTR_TO_MEM in kfunc to not use after free the allocated memory
changes in v5:
- updated PTR_TO_MEM comment in btf.c to match upstream
- make it read-only or read-write based on the name of size
new in v4
change btf.h
fix allow kfunc to return an allocated mem
include/linux/bpf.h | 9 +++- include/linux/btf.h | 10 +++++ kernel/bpf/btf.c | 98 ++++++++++++++++++++++++++++++++++--------- kernel/bpf/verifier.c | 43 +++++++++++++------ 4 files changed, 128 insertions(+), 32 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 39bd36359c1e..90dd218e0199 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1932,13 +1932,20 @@ int btf_distill_func_proto(struct bpf_verifier_log *log, const char *func_name, struct btf_func_model *m); [...]
static int btf_check_func_arg_match(struct bpf_verifier_env *env, const struct btf *btf, u32 func_id, struct bpf_reg_state *regs, bool ptr_to_mem_ok,
u32 kfunc_flags)
struct bpf_kfunc_arg_meta *kfunc_meta)
{ enum bpf_prog_type prog_type = resolve_prog_type(env->prog); bool rel = false, kptr_get = false, trusted_arg = false; @@ -6207,12 +6232,12 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, return -EINVAL; }
if (is_kfunc) {
if (is_kfunc && kfunc_meta) { /* Only kfunc can be release func */
rel = kfunc_flags & KF_RELEASE;
kptr_get = kfunc_flags & KF_KPTR_GET;
trusted_arg = kfunc_flags & KF_TRUSTED_ARGS;
sleepable = kfunc_flags & KF_SLEEPABLE;
rel = kfunc_meta->flags & KF_RELEASE;
kptr_get = kfunc_meta->flags & KF_KPTR_GET;
trusted_arg = kfunc_meta->flags & KF_TRUSTED_ARGS;
sleepable = kfunc_meta->flags & KF_SLEEPABLE; } /* check that BTF function arguments match actual types that the
@@ -6225,6 +6250,35 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
t = btf_type_skip_modifiers(btf, args[i].type, NULL); if (btf_type_is_scalar(t)) {
if (is_kfunc && kfunc_meta) {
bool is_buf_size = false;
/* check for any const scalar parameter of name "rdonly_buf_size"
* or "rdwr_buf_size"
*/
if (btf_is_kfunc_arg_mem_size(btf, &args[i], reg,
"rdonly_buf_size")) {
kfunc_meta->r0_rdonly = true;
is_buf_size = true;
} else if (btf_is_kfunc_arg_mem_size(btf, &args[i], reg,
"rdwr_buf_size"))
is_buf_size = true;
if (is_buf_size) {
if (kfunc_meta->r0_size) {
bpf_log(log, "2 or more rdonly/rdwr_buf_size parameters for kfunc");
return -EINVAL;
}
if (!tnum_is_const(reg->var_off)) {
bpf_log(log, "R%d is not a const\n", regno);
return -EINVAL;
}
kfunc_meta->r0_size = reg->var_off.value;
Sorry for not pointing it out before, but you will need a call to mark_chain_precision here after this, since the value of the scalar is being used to decide the size of the returned pointer.
No worries.
I do however have a couple of questions (I have strictly no idea what mark_chain_precision does):
- which register number should I pass to mark_chain_precision()? r0 or regno (the one with the constant)?
- mark_chain_precision() is declared static in verifier.c. Should I export it so btf.c can have access to it, or can I delay the call to mark_chain_precision() in verifier.c when I set regs[BPF_REG_0].mem_size?
}
}
if (reg->type == SCALAR_VALUE) continue; bpf_log(log, "R%d is not a scalar\n", regno);
@@ -6255,6 +6309,19 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, if (ret < 0) return ret;
if (is_kfunc && reg->type == PTR_TO_BTF_ID) {
I think you can drop this extra check (reg->type == PTR_TO_BTF_ID); this condition of only one ref_obj_id should hold regardless of the type.
Ack.
Cheers, Benjamin
[...]
On Wed, 31 Aug 2022 at 07:50, Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
On Fri, Aug 26, 2022 at 3:25 AM Kumar Kartikeya Dwivedi memxor@gmail.com wrote:
On Wed, 24 Aug 2022 at 15:41, Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
For drivers (outside of networking), the incoming data is not statically defined in a struct. Most of the time the data buffer is kzalloc-ed, and thus we cannot rely on eBPF and BTF to explore the data.
This commit allows a kfunc to return arbitrary memory previously allocated by the driver. An interesting extra point is that the kfunc can mark the exported memory region as read-only or read/write.
So, when a kfunc is not returning a pointer to a struct but to a plain type, we can consider it to be valid allocated memory assuming that:
- one of the arguments is either called rdonly_buf_size or rdwr_buf_size
- and this argument is a const from the caller point of view
We can then use this parameter as the size of the allocated memory.
The memory is either read-only or read-write based on the name of the size parameter.
Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
changes in v9:
- updated to match upstream (replaced kfunc_flag by a field in kfunc_meta)
no changes in v8
changes in v7:
- ensures btf_type_is_struct_ptr() checks for a ptr first (squashed from next commit)
- remove multiple_ref_obj_id need
- use btf_type_skip_modifiers instead of manually doing it in btf_type_is_struct_ptr()
- s/strncmp/strcmp/ in btf_is_kfunc_arg_mem_size()
- check for tnum_is_const when retrieving the size value
- have only one check for "Ensure only one argument is referenced PTR_TO_BTF_ID"
- add some more context to the commit message
changes in v6:
- code review from Kartikeya:
- remove comment change that had no reasons to be
- remove handling of PTR_TO_MEM with kfunc releases
- introduce struct bpf_kfunc_arg_meta
- do rdonly/rdwr_buf_size check in btf_check_kfunc_arg_match
- reverted most of the changes in verifier.c
- make sure kfunc acquire is using a struct pointer, not just a plain pointer
- also forward ref_obj_id to PTR_TO_MEM in kfunc to not use after free the allocated memory
changes in v5:
- updated PTR_TO_MEM comment in btf.c to match upstream
- make it read-only or read-write based on the name of size
new in v4
change btf.h
fix allow kfunc to return an allocated mem
include/linux/bpf.h | 9 +++- include/linux/btf.h | 10 +++++ kernel/bpf/btf.c | 98 ++++++++++++++++++++++++++++++++++--------- kernel/bpf/verifier.c | 43 +++++++++++++------ 4 files changed, 128 insertions(+), 32 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 39bd36359c1e..90dd218e0199 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1932,13 +1932,20 @@ int btf_distill_func_proto(struct bpf_verifier_log *log, const char *func_name, struct btf_func_model *m); [...]
static int btf_check_func_arg_match(struct bpf_verifier_env *env, const struct btf *btf, u32 func_id, struct bpf_reg_state *regs, bool ptr_to_mem_ok,
u32 kfunc_flags)
struct bpf_kfunc_arg_meta *kfunc_meta)
{ enum bpf_prog_type prog_type = resolve_prog_type(env->prog); bool rel = false, kptr_get = false, trusted_arg = false; @@ -6207,12 +6232,12 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, return -EINVAL; }
if (is_kfunc) {
if (is_kfunc && kfunc_meta) { /* Only kfunc can be release func */
rel = kfunc_flags & KF_RELEASE;
kptr_get = kfunc_flags & KF_KPTR_GET;
trusted_arg = kfunc_flags & KF_TRUSTED_ARGS;
sleepable = kfunc_flags & KF_SLEEPABLE;
rel = kfunc_meta->flags & KF_RELEASE;
kptr_get = kfunc_meta->flags & KF_KPTR_GET;
trusted_arg = kfunc_meta->flags & KF_TRUSTED_ARGS;
sleepable = kfunc_meta->flags & KF_SLEEPABLE; } /* check that BTF function arguments match actual types that the
@@ -6225,6 +6250,35 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
t = btf_type_skip_modifiers(btf, args[i].type, NULL); if (btf_type_is_scalar(t)) {
if (is_kfunc && kfunc_meta) {
bool is_buf_size = false;
/* check for any const scalar parameter of name "rdonly_buf_size"
* or "rdwr_buf_size"
*/
if (btf_is_kfunc_arg_mem_size(btf, &args[i], reg,
"rdonly_buf_size")) {
kfunc_meta->r0_rdonly = true;
is_buf_size = true;
} else if (btf_is_kfunc_arg_mem_size(btf, &args[i], reg,
"rdwr_buf_size"))
is_buf_size = true;
if (is_buf_size) {
if (kfunc_meta->r0_size) {
bpf_log(log, "2 or more rdonly/rdwr_buf_size parameters for kfunc");
return -EINVAL;
}
if (!tnum_is_const(reg->var_off)) {
bpf_log(log, "R%d is not a const\n", regno);
return -EINVAL;
}
kfunc_meta->r0_size = reg->var_off.value;
Sorry for not pointing it out before, but you will need a call to mark_chain_precision here after this, since the value of the scalar is being used to decide the size of the returned pointer.
No worries.
I do however have a couple of questions (I have strictly no idea what mark_chain_precision does):
See this patch for some background: https://lore.kernel.org/bpf/20220823185300.406-2-memxor@gmail.com Same case here, it is setting the size of r0 PTR_TO_MEM.
- which register number should I pass to mark_chain_precision()? r0 or regno (the one with the constant)?
Yes, regno, i.e. the one with the constant.
- mark_chain_precision() is declared static in verifier.c. Should I
export it so btf.c can have access to it, or can I delay the call to mark_chain_precision() in verifier.c when I set regs[BPF_REG_0].mem_size?
Yes, but then you have to remember the regno you have to call it for. So it might be easier to just make it non-static and call it in btf.c.
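In other words, roughly something like this right after the size is recorded (a sketch of the suggestion; exporting mark_chain_precision() from verifier.c is the assumption here):

/* in btf_check_func_arg_match(), after the constant size is recorded */
kfunc_meta->r0_size = reg->var_off.value;

/* the constant decides the size of the returned PTR_TO_MEM, so the
 * instructions that produced it must be marked precise.
 */
ret = mark_chain_precision(env, regno);
if (ret)
	return ret;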
}
}
if (reg->type == SCALAR_VALUE) continue; bpf_log(log, "R%d is not a scalar\n", regno);
@@ -6255,6 +6309,19 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, if (ret < 0) return ret;
if (is_kfunc && reg->type == PTR_TO_BTF_ID) {
I think you can drop this extra check (reg->type == PTR_TO_BTF_ID); this condition of only one ref_obj_id should hold regardless of the type.
Ack.
Cheers, Benjamin
[...]
We add 2 new kfuncs that follow the RET_PTR_TO_MEM capability from the previous commit. Then we test them in selftests: the first tests exercise the valid cases and do not fail, while the later ones are rejected by the verifier at load time because they are intentionally wrong.
To work around that, we mark the failing ones as not autoloaded (with SEC("?tc")), and we manually enable them one by one, ensuring the verifier rejects them.
To be able to use bpf_program__set_autoload() from libbpf, we need to use a plain skeleton, not a light-skeleton, and this is why we also change the Makefile to generate both for kfunc_call_test.c
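The resulting flow looks roughly like this (a condensed sketch of the selftest added below):

/* Condensed sketch of the autoload dance used below: SEC("?tc") programs
 * are not loaded by default, so each failing case is enabled on its own
 * and the load is expected to be rejected by the verifier.
 */
struct kfunc_call_test *skel;
int err;

skel = kfunc_call_test__open();
if (!ASSERT_OK_PTR(skel, "skel"))
	return;

bpf_program__set_autoload(skel->progs.kfunc_call_test_get_mem_fail1, true);
err = kfunc_call_test__load(skel);
ASSERT_ERR(err, "load(kfunc_call_test_get_mem_fail1)");

kfunc_call_test__destroy(skel);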
Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
---
changes in v9:
- updated to match upstream (net/bpf/test_run.c id sets is now using flags)
no changes in v8
changes in v7:
- removed stray include/linux/btf.h change
new in v6
---
 net/bpf/test_run.c | 20 +++++
 tools/testing/selftests/bpf/Makefile | 5 +-
 .../selftests/bpf/prog_tests/kfunc_call.c | 48 ++++++++++
 .../selftests/bpf/progs/kfunc_call_test.c | 89 +++++++++++++++++++
 4 files changed, 160 insertions(+), 2 deletions(-)
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index f16baf977a21..6accd57d4ded 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -606,6 +606,24 @@ noinline void bpf_kfunc_call_memb1_release(struct prog_test_member1 *p) WARN_ON_ONCE(1); }
+static int *__bpf_kfunc_call_test_get_mem(struct prog_test_ref_kfunc *p, const int size) +{ + if (size > 2 * sizeof(int)) + return NULL; + + return (int *)p; +} + +noinline int *bpf_kfunc_call_test_get_rdwr_mem(struct prog_test_ref_kfunc *p, const int rdwr_buf_size) +{ + return __bpf_kfunc_call_test_get_mem(p, rdwr_buf_size); +} + +noinline int *bpf_kfunc_call_test_get_rdonly_mem(struct prog_test_ref_kfunc *p, const int rdonly_buf_size) +{ + return __bpf_kfunc_call_test_get_mem(p, rdonly_buf_size); +} + noinline struct prog_test_ref_kfunc * bpf_kfunc_call_test_kptr_get(struct prog_test_ref_kfunc **pp, int a, int b) { @@ -712,6 +730,8 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_memb_acquire, KF_ACQUIRE | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_kfunc_call_test_release, KF_RELEASE) BTF_ID_FLAGS(func, bpf_kfunc_call_memb_release, KF_RELEASE) BTF_ID_FLAGS(func, bpf_kfunc_call_memb1_release, KF_RELEASE) +BTF_ID_FLAGS(func, bpf_kfunc_call_test_get_rdwr_mem, KF_RET_NULL) +BTF_ID_FLAGS(func, bpf_kfunc_call_test_get_rdonly_mem, KF_RET_NULL) BTF_ID_FLAGS(func, bpf_kfunc_call_test_kptr_get, KF_ACQUIRE | KF_RET_NULL | KF_KPTR_GET) BTF_ID_FLAGS(func, bpf_kfunc_call_test_pass_ctx) BTF_ID_FLAGS(func, bpf_kfunc_call_test_pass1) diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index 8d59ec7f4c2d..0905315ff86d 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -350,11 +350,12 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \ test_subskeleton.skel.h test_subskeleton_lib.skel.h \ test_usdt.skel.h
-LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \ +LSKELS := fentry_test.c fexit_test.c fexit_sleep.c \ test_ringbuf.c atomics.c trace_printk.c trace_vprintk.c \ map_ptr_kern.c core_kern.c core_kern_overflow.c # Generate both light skeleton and libbpf skeleton for these -LSKELS_EXTRA := test_ksyms_module.c test_ksyms_weak.c kfunc_call_test_subprog.c +LSKELS_EXTRA := test_ksyms_module.c test_ksyms_weak.c kfunc_call_test.c \ + kfunc_call_test_subprog.c SKEL_BLACKLIST += $$(LSKELS)
test_static_linked.skel.h-deps := test_static_linked1.o test_static_linked2.o diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c index 1edad012fe01..590417d48962 100644 --- a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c +++ b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c @@ -2,6 +2,7 @@ /* Copyright (c) 2021 Facebook */ #include <test_progs.h> #include <network_helpers.h> +#include "kfunc_call_test.skel.h" #include "kfunc_call_test.lskel.h" #include "kfunc_call_test_subprog.skel.h" #include "kfunc_call_test_subprog.lskel.h" @@ -53,10 +54,12 @@ static void test_main(void) prog_fd = skel->progs.kfunc_syscall_test.prog_fd; err = bpf_prog_test_run_opts(prog_fd, &syscall_topts); ASSERT_OK(err, "bpf_prog_test_run(syscall_test)"); + ASSERT_EQ(syscall_topts.retval, 0, "syscall_test-retval");
prog_fd = skel->progs.kfunc_syscall_test_fail.prog_fd; err = bpf_prog_test_run_opts(prog_fd, &syscall_topts); ASSERT_ERR(err, "bpf_prog_test_run(syscall_test_fail)"); + ASSERT_EQ(syscall_topts.retval, 0, "syscall_test_fail-retval");
syscall_topts.ctx_in = NULL; syscall_topts.ctx_size_in = 0; @@ -147,6 +150,48 @@ static void test_destructive(void) cap_enable_effective(save_caps, NULL); }
+static void test_get_mem(void) +{ + struct kfunc_call_test *skel; + int prog_fd, err; + LIBBPF_OPTS(bpf_test_run_opts, topts, + .data_in = &pkt_v4, + .data_size_in = sizeof(pkt_v4), + .repeat = 1, + ); + + skel = kfunc_call_test__open_and_load(); + if (!ASSERT_OK_PTR(skel, "skel")) + return; + + prog_fd = bpf_program__fd(skel->progs.kfunc_call_test_get_mem); + err = bpf_prog_test_run_opts(prog_fd, &topts); + ASSERT_OK(err, "bpf_prog_test_run(test_get_mem)"); + ASSERT_EQ(topts.retval, 42, "test_get_mem-retval"); + + kfunc_call_test__destroy(skel); + + /* start the various failing tests */ + skel = kfunc_call_test__open(); + if (!ASSERT_OK_PTR(skel, "skel")) + return; + + bpf_program__set_autoload(skel->progs.kfunc_call_test_get_mem_fail1, true); + err = kfunc_call_test__load(skel); + ASSERT_ERR(err, "load(kfunc_call_test_get_mem_fail1)"); + kfunc_call_test__destroy(skel); + + skel = kfunc_call_test__open(); + if (!ASSERT_OK_PTR(skel, "skel")) + return; + + bpf_program__set_autoload(skel->progs.kfunc_call_test_get_mem_fail2, true); + err = kfunc_call_test__load(skel); + ASSERT_ERR(err, "load(kfunc_call_test_get_mem_fail2)"); + + kfunc_call_test__destroy(skel); +} + void test_kfunc_call(void) { if (test__start_subtest("main")) @@ -160,4 +205,7 @@ void test_kfunc_call(void)
if (test__start_subtest("destructive")) test_destructive(); + + if (test__start_subtest("get_mem")) + test_get_mem(); } diff --git a/tools/testing/selftests/bpf/progs/kfunc_call_test.c b/tools/testing/selftests/bpf/progs/kfunc_call_test.c index da7ae0ef9100..b4a98d17c2b6 100644 --- a/tools/testing/selftests/bpf/progs/kfunc_call_test.c +++ b/tools/testing/selftests/bpf/progs/kfunc_call_test.c @@ -14,6 +14,8 @@ extern void bpf_kfunc_call_test_pass1(struct prog_test_pass1 *p) __ksym; extern void bpf_kfunc_call_test_pass2(struct prog_test_pass2 *p) __ksym; extern void bpf_kfunc_call_test_mem_len_pass1(void *mem, int len) __ksym; extern void bpf_kfunc_call_test_mem_len_fail2(__u64 *mem, int len) __ksym; +extern int *bpf_kfunc_call_test_get_rdwr_mem(struct prog_test_ref_kfunc *p, const int rdwr_buf_size) __ksym; +extern int *bpf_kfunc_call_test_get_rdonly_mem(struct prog_test_ref_kfunc *p, const int rdonly_buf_size) __ksym;
SEC("tc") int kfunc_call_test2(struct __sk_buff *skb) @@ -128,4 +130,91 @@ int kfunc_syscall_test_fail(struct syscall_test_args *args) return 0; }
+SEC("tc") +int kfunc_call_test_get_mem(struct __sk_buff *skb) +{ + struct prog_test_ref_kfunc *pt; + unsigned long s = 0; + int *p = NULL; + int ret = 0; + + pt = bpf_kfunc_call_test_acquire(&s); + if (pt) { + if (pt->a != 42 || pt->b != 108) + ret = -1; + + p = bpf_kfunc_call_test_get_rdwr_mem(pt, 2 * sizeof(int)); + if (p) { + p[0] = 42; + ret = p[1]; /* 108 */ + } else { + ret = -1; + } + + if (ret >= 0) { + p = bpf_kfunc_call_test_get_rdonly_mem(pt, 2 * sizeof(int)); + if (p) + ret = p[0]; /* 42 */ + else + ret = -1; + } + + bpf_kfunc_call_test_release(pt); + } + return ret; +} + +SEC("?tc") +int kfunc_call_test_get_mem_fail1(struct __sk_buff *skb) +{ + struct prog_test_ref_kfunc *pt; + unsigned long s = 0; + int *p = NULL; + int ret = 0; + + pt = bpf_kfunc_call_test_acquire(&s); + if (pt) { + if (pt->a != 42 || pt->b != 108) + ret = -1; + + p = bpf_kfunc_call_test_get_rdonly_mem(pt, 2 * sizeof(int)); + if (p) + p[0] = 42; /* this is a read-only buffer, so -EACCES */ + else + ret = -1; + + bpf_kfunc_call_test_release(pt); + } + return ret; +} + +SEC("?tc") +int kfunc_call_test_get_mem_fail2(struct __sk_buff *skb) +{ + struct prog_test_ref_kfunc *pt; + unsigned long s = 0; + int *p = NULL; + int ret = 0; + + pt = bpf_kfunc_call_test_acquire(&s); + if (pt) { + if (pt->a != 42 || pt->b != 108) + ret = -1; + + p = bpf_kfunc_call_test_get_rdwr_mem(pt, 2 * sizeof(int)); + if (p) { + p[0] = 42; + ret = p[1]; /* 108 */ + } else { + ret = -1; + } + + bpf_kfunc_call_test_release(pt); + } + if (p) + ret = p[0]; /* p is not valid anymore */ + + return ret; +} + char _license[] SEC("license") = "GPL";
On Wed, 24 Aug 2022 at 15:41, Benjamin Tissoires benjamin.tissoires@redhat.com wrote:
We add 2 new kfuncs that follow the RET_PTR_TO_MEM capability from the previous commit. Then we test them in selftests: the first tests exercise the valid cases and must not fail, while the later ones are deliberately wrong and must prevent the program from being loaded.
To work around that, we mark the failing ones as not autoloaded (with SEC("?tc")), and we manually enable them one by one, ensuring the verifier rejects them.
To be able to use bpf_program__set_autoload() from libbpf, we need a plain skeleton, not a light skeleton, which is why we also change the Makefile to generate both for kfunc_call_test.c.
Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
changes in v9:
- updated to match upstream (the kfunc id set in net/bpf/test_run.c now uses flags)
no changes in v8
changes in v7:
- removed stray include/linux/btf.h change
new in v6
net/bpf/test_run.c | 20 +++++ tools/testing/selftests/bpf/Makefile | 5 +- .../selftests/bpf/prog_tests/kfunc_call.c | 48 ++++++++++ .../selftests/bpf/progs/kfunc_call_test.c | 89 +++++++++++++++++++ 4 files changed, 160 insertions(+), 2 deletions(-)
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index f16baf977a21..6accd57d4ded 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -606,6 +606,24 @@ noinline void bpf_kfunc_call_memb1_release(struct prog_test_member1 *p) WARN_ON_ONCE(1); }
+static int *__bpf_kfunc_call_test_get_mem(struct prog_test_ref_kfunc *p, const int size) +{
if (size > 2 * sizeof(int))
return NULL;
return (int *)p;
+}
+noinline int *bpf_kfunc_call_test_get_rdwr_mem(struct prog_test_ref_kfunc *p, const int rdwr_buf_size) +{
return __bpf_kfunc_call_test_get_mem(p, rdwr_buf_size);
+}
+noinline int *bpf_kfunc_call_test_get_rdonly_mem(struct prog_test_ref_kfunc *p, const int rdonly_buf_size) +{
return __bpf_kfunc_call_test_get_mem(p, rdonly_buf_size);
+}
noinline struct prog_test_ref_kfunc * bpf_kfunc_call_test_kptr_get(struct prog_test_ref_kfunc **pp, int a, int b) { @@ -712,6 +730,8 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_memb_acquire, KF_ACQUIRE | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_kfunc_call_test_release, KF_RELEASE) BTF_ID_FLAGS(func, bpf_kfunc_call_memb_release, KF_RELEASE) BTF_ID_FLAGS(func, bpf_kfunc_call_memb1_release, KF_RELEASE) +BTF_ID_FLAGS(func, bpf_kfunc_call_test_get_rdwr_mem, KF_RET_NULL) +BTF_ID_FLAGS(func, bpf_kfunc_call_test_get_rdonly_mem, KF_RET_NULL) BTF_ID_FLAGS(func, bpf_kfunc_call_test_kptr_get, KF_ACQUIRE | KF_RET_NULL | KF_KPTR_GET) BTF_ID_FLAGS(func, bpf_kfunc_call_test_pass_ctx) BTF_ID_FLAGS(func, bpf_kfunc_call_test_pass1) diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index 8d59ec7f4c2d..0905315ff86d 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -350,11 +350,12 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \ test_subskeleton.skel.h test_subskeleton_lib.skel.h \ test_usdt.skel.h
-LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \ +LSKELS := fentry_test.c fexit_test.c fexit_sleep.c \ test_ringbuf.c atomics.c trace_printk.c trace_vprintk.c \ map_ptr_kern.c core_kern.c core_kern_overflow.c # Generate both light skeleton and libbpf skeleton for these -LSKELS_EXTRA := test_ksyms_module.c test_ksyms_weak.c kfunc_call_test_subprog.c +LSKELS_EXTRA := test_ksyms_module.c test_ksyms_weak.c kfunc_call_test.c \
kfunc_call_test_subprog.c
SKEL_BLACKLIST += $$(LSKELS)
test_static_linked.skel.h-deps := test_static_linked1.o test_static_linked2.o diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c index 1edad012fe01..590417d48962 100644 --- a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c +++ b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c @@ -2,6 +2,7 @@ /* Copyright (c) 2021 Facebook */ #include <test_progs.h> #include <network_helpers.h> +#include "kfunc_call_test.skel.h" #include "kfunc_call_test.lskel.h" #include "kfunc_call_test_subprog.skel.h" #include "kfunc_call_test_subprog.lskel.h" @@ -53,10 +54,12 @@ static void test_main(void) prog_fd = skel->progs.kfunc_syscall_test.prog_fd; err = bpf_prog_test_run_opts(prog_fd, &syscall_topts); ASSERT_OK(err, "bpf_prog_test_run(syscall_test)");
ASSERT_EQ(syscall_topts.retval, 0, "syscall_test-retval"); prog_fd = skel->progs.kfunc_syscall_test_fail.prog_fd; err = bpf_prog_test_run_opts(prog_fd, &syscall_topts); ASSERT_ERR(err, "bpf_prog_test_run(syscall_test_fail)");
ASSERT_EQ(syscall_topts.retval, 0, "syscall_test_fail-retval"); syscall_topts.ctx_in = NULL; syscall_topts.ctx_size_in = 0;
@@ -147,6 +150,48 @@ static void test_destructive(void) cap_enable_effective(save_caps, NULL); }
+static void test_get_mem(void) +{
struct kfunc_call_test *skel;
int prog_fd, err;
LIBBPF_OPTS(bpf_test_run_opts, topts,
.data_in = &pkt_v4,
.data_size_in = sizeof(pkt_v4),
.repeat = 1,
);
skel = kfunc_call_test__open_and_load();
if (!ASSERT_OK_PTR(skel, "skel"))
return;
prog_fd = bpf_program__fd(skel->progs.kfunc_call_test_get_mem);
err = bpf_prog_test_run_opts(prog_fd, &topts);
ASSERT_OK(err, "bpf_prog_test_run(test_get_mem)");
ASSERT_EQ(topts.retval, 42, "test_get_mem-retval");
kfunc_call_test__destroy(skel);
/* start the various failing tests */
skel = kfunc_call_test__open();
if (!ASSERT_OK_PTR(skel, "skel"))
return;
bpf_program__set_autoload(skel->progs.kfunc_call_test_get_mem_fail1, true);
err = kfunc_call_test__load(skel);
ASSERT_ERR(err, "load(kfunc_call_test_get_mem_fail1)");
kfunc_call_test__destroy(skel);
skel = kfunc_call_test__open();
if (!ASSERT_OK_PTR(skel, "skel"))
return;
bpf_program__set_autoload(skel->progs.kfunc_call_test_get_mem_fail2, true);
err = kfunc_call_test__load(skel);
ASSERT_ERR(err, "load(kfunc_call_test_get_mem_fail2)");
We should match the verifier error string. See e.g. how dynptr tests work. Also it would be better to split failure and success tests into separate objects.
kfunc_call_test__destroy(skel);
+}
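Regarding the note above about matching the verifier error string: a minimal sketch of that pattern, assuming a placeholder expected message (the real string has to come from the verifier rejection of each program), could look like this:

	static char log_buf[64 * 1024];
	struct kfunc_call_test *skel;
	int err;

	skel = kfunc_call_test__open();
	if (!ASSERT_OK_PTR(skel, "skel"))
		return;

	/* enable only the expected-to-fail program and capture its
	 * verifier log so the error message can be checked
	 */
	bpf_program__set_autoload(skel->progs.kfunc_call_test_get_mem_fail1, true);
	bpf_program__set_log_buf(skel->progs.kfunc_call_test_get_mem_fail1,
				 log_buf, sizeof(log_buf));
	err = kfunc_call_test__load(skel);
	if (ASSERT_ERR(err, "load(kfunc_call_test_get_mem_fail1)"))
		ASSERT_HAS_SUBSTR(log_buf, "<expected verifier message>",
				  "expected_msg");
	kfunc_call_test__destroy(skel);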
void test_kfunc_call(void) { if (test__start_subtest("main")) @@ -160,4 +205,7 @@ void test_kfunc_call(void)
if (test__start_subtest("destructive")) test_destructive();
if (test__start_subtest("get_mem"))
test_get_mem();
} diff --git a/tools/testing/selftests/bpf/progs/kfunc_call_test.c b/tools/testing/selftests/bpf/progs/kfunc_call_test.c index da7ae0ef9100..b4a98d17c2b6 100644 --- a/tools/testing/selftests/bpf/progs/kfunc_call_test.c +++ b/tools/testing/selftests/bpf/progs/kfunc_call_test.c @@ -14,6 +14,8 @@ extern void bpf_kfunc_call_test_pass1(struct prog_test_pass1 *p) __ksym; extern void bpf_kfunc_call_test_pass2(struct prog_test_pass2 *p) __ksym; extern void bpf_kfunc_call_test_mem_len_pass1(void *mem, int len) __ksym; extern void bpf_kfunc_call_test_mem_len_fail2(__u64 *mem, int len) __ksym; +extern int *bpf_kfunc_call_test_get_rdwr_mem(struct prog_test_ref_kfunc *p, const int rdwr_buf_size) __ksym; +extern int *bpf_kfunc_call_test_get_rdonly_mem(struct prog_test_ref_kfunc *p, const int rdonly_buf_size) __ksym;
SEC("tc") int kfunc_call_test2(struct __sk_buff *skb) @@ -128,4 +130,91 @@ int kfunc_syscall_test_fail(struct syscall_test_args *args) return 0; }
+SEC("tc") +int kfunc_call_test_get_mem(struct __sk_buff *skb) +{
struct prog_test_ref_kfunc *pt;
unsigned long s = 0;
int *p = NULL;
int ret = 0;
pt = bpf_kfunc_call_test_acquire(&s);
if (pt) {
if (pt->a != 42 || pt->b != 108)
ret = -1;
No need to test this I think.
p = bpf_kfunc_call_test_get_rdwr_mem(pt, 2 * sizeof(int));
if (p) {
p[0] = 42;
ret = p[1]; /* 108 */
} else {
ret = -1;
}
if (ret >= 0) {
p = bpf_kfunc_call_test_get_rdonly_mem(pt, 2 * sizeof(int));
if (p)
ret = p[0]; /* 42 */
else
ret = -1;
}
bpf_kfunc_call_test_release(pt);
}
return ret;
+}
+SEC("?tc") +int kfunc_call_test_get_mem_fail1(struct __sk_buff *skb) +{
struct prog_test_ref_kfunc *pt;
unsigned long s = 0;
int *p = NULL;
int ret = 0;
pt = bpf_kfunc_call_test_acquire(&s);
if (pt) {
if (pt->a != 42 || pt->b != 108)
ret = -1;
p = bpf_kfunc_call_test_get_rdonly_mem(pt, 2 * sizeof(int));
if (p)
p[0] = 42; /* this is a read-only buffer, so -EACCES */
else
ret = -1;
bpf_kfunc_call_test_release(pt);
}
return ret;
+}
+SEC("?tc") +int kfunc_call_test_get_mem_fail2(struct __sk_buff *skb) +{
struct prog_test_ref_kfunc *pt;
unsigned long s = 0;
int *p = NULL;
int ret = 0;
pt = bpf_kfunc_call_test_acquire(&s);
if (pt) {
if (pt->a != 42 || pt->b != 108)
ret = -1;
p = bpf_kfunc_call_test_get_rdwr_mem(pt, 2 * sizeof(int));
if (p) {
p[0] = 42;
ret = p[1]; /* 108 */
} else {
ret = -1;
}
bpf_kfunc_call_test_release(pt);
}
if (p)
ret = p[0]; /* p is not valid anymore */
Great that this ref_obj_id transfer is tested. A few more small test cases come to mind (a sketch of the first one follows the quoted file below):
- an oob access to the returned ptr_to_mem, to ensure the size is set correctly.
- a failure when the size is not 'const', since this is not going through check_mem_size_reg.
- an incorrect acquire kfunc type inside the kernel, so that on use its return type is not a struct pointer and the verifier complains about it.
return ret;
+}
char _license[] SEC("license") = "GPL";
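As an illustration of the first suggested extra case above (out-of-bounds access to the returned memory), a sketch could look like the following; the program name is made up and no specific verifier message is asserted here:

	SEC("?tc")
	int kfunc_call_test_get_mem_fail_oob(struct __sk_buff *skb)
	{
		struct prog_test_ref_kfunc *pt;
		unsigned long s = 0;
		int *p;
		int ret = 0;

		pt = bpf_kfunc_call_test_acquire(&s);
		if (pt) {
			p = bpf_kfunc_call_test_get_rdwr_mem(pt, 2 * sizeof(int));
			if (p)
				/* the buffer holds 2 ints, so index 2 is out of
				 * bounds and the verifier must reject the load
				 */
				ret = p[2];
			else
				ret = -1;

			bpf_kfunc_call_test_release(pt);
		}
		return ret;
	}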
2.36.1
Add BPF_MAP_GET_FD_BY_ID and BPF_MAP_DELETE_ELEM to the list of bpf syscall commands callable from a BPF program through bpf_sys_bpf().
Only BPF_MAP_DELETE_ELEM needs to be amended so that its key pointer can be accessed either from user space or from the kernel.
Acked-by: Yonghong Song yhs@fb.com Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
---
no changes in v9
no changes in v8
no changes in v7
changes in v6: - commit description change
new in v5 --- kernel/bpf/syscall.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index a4d40d98428a..4e9d4622aef7 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -1437,9 +1437,9 @@ static int map_update_elem(union bpf_attr *attr, bpfptr_t uattr)
#define BPF_MAP_DELETE_ELEM_LAST_FIELD key
-static int map_delete_elem(union bpf_attr *attr) +static int map_delete_elem(union bpf_attr *attr, bpfptr_t uattr) { - void __user *ukey = u64_to_user_ptr(attr->key); + bpfptr_t ukey = make_bpfptr(attr->key, uattr.is_kernel); int ufd = attr->map_fd; struct bpf_map *map; struct fd f; @@ -1459,7 +1459,7 @@ static int map_delete_elem(union bpf_attr *attr) goto err_put; }
- key = __bpf_copy_key(ukey, map->key_size); + key = ___bpf_copy_key(ukey, map->key_size); if (IS_ERR(key)) { err = PTR_ERR(key); goto err_put; @@ -4941,7 +4941,7 @@ static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size) err = map_update_elem(&attr, uattr); break; case BPF_MAP_DELETE_ELEM: - err = map_delete_elem(&attr); + err = map_delete_elem(&attr, uattr); break; case BPF_MAP_GET_NEXT_KEY: err = map_get_next_key(&attr); @@ -5073,8 +5073,10 @@ BPF_CALL_3(bpf_sys_bpf, int, cmd, union bpf_attr *, attr, u32, attr_size) { switch (cmd) { case BPF_MAP_CREATE: + case BPF_MAP_DELETE_ELEM: case BPF_MAP_UPDATE_ELEM: case BPF_MAP_FREEZE: + case BPF_MAP_GET_FD_BY_ID: case BPF_PROG_LOAD: case BPF_BTF_LOAD: case BPF_LINK_CREATE:
This allows for better control over maps from the kernel when preloading eBPF programs.
Acked-by: Yonghong Song yhs@fb.com Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
---
no changes in v9
no changes in v8
no changes in v7
no changes in v6
new in v5 --- tools/lib/bpf/skel_internal.h | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+)
diff --git a/tools/lib/bpf/skel_internal.h b/tools/lib/bpf/skel_internal.h index 00c5f94b43be..1e82ab06c3eb 100644 --- a/tools/lib/bpf/skel_internal.h +++ b/tools/lib/bpf/skel_internal.h @@ -251,6 +251,29 @@ static inline int skel_map_update_elem(int fd, const void *key, return skel_sys_bpf(BPF_MAP_UPDATE_ELEM, &attr, attr_sz); }
+static inline int skel_map_delete_elem(int fd, const void *key) +{ + const size_t attr_sz = offsetofend(union bpf_attr, flags); + union bpf_attr attr; + + memset(&attr, 0, attr_sz); + attr.map_fd = fd; + attr.key = (long)key; + + return skel_sys_bpf(BPF_MAP_DELETE_ELEM, &attr, attr_sz); +} + +static inline int skel_map_get_fd_by_id(__u32 id) +{ + const size_t attr_sz = offsetofend(union bpf_attr, flags); + union bpf_attr attr; + + memset(&attr, 0, attr_sz); + attr.map_id = id; + + return skel_sys_bpf(BPF_MAP_GET_FD_BY_ID, &attr, attr_sz); +} + static inline int skel_raw_tracepoint_open(const char *name, int prog_fd) { const size_t attr_sz = offsetofend(union bpf_attr, raw_tracepoint.prog_fd);
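As a usage illustration only (the map id and the key are hypothetical here), a light-skeleton loader could now do something like:

	/* look up a map by id, delete one entry, then close the fd again */
	int fd = skel_map_get_fd_by_id(map_id);

	if (fd >= 0) {
		__u64 key = (__u64)prog;	/* whatever key type the map uses */

		skel_map_delete_elem(fd, &key);
		skel_closenz(fd);
	}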
This unique identifier is currently only used to ensure uniqueness in sysfs. However, it could be handy for userspace to refer to a specific hid_device by this id: with the change below, it is the value printed as the trailing ".%04X" part of the sysfs device name.

Two use cases come to mind: LEDs (and their naming convention), and HID-BPF.
Reviewed-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
---
no changes in v9
no changes in v8
no changes in v7
no changes in v6
new in v5 --- drivers/hid/hid-core.c | 4 +++- include/linux/hid.h | 2 ++ 2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c index b7f5566e338d..a00dd43db8bf 100644 --- a/drivers/hid/hid-core.c +++ b/drivers/hid/hid-core.c @@ -2739,10 +2739,12 @@ int hid_add_device(struct hid_device *hdev) hid_warn(hdev, "bad device descriptor (%d)\n", ret); }
+ hdev->id = atomic_inc_return(&id); + /* XXX hack, any other cleaner solution after the driver core * is converted to allow more than 20 bytes as the device name? */ dev_set_name(&hdev->dev, "%04X:%04X:%04X.%04X", hdev->bus, - hdev->vendor, hdev->product, atomic_inc_return(&id)); + hdev->vendor, hdev->product, hdev->id);
hid_debug_register(hdev, dev_name(&hdev->dev)); ret = device_add(&hdev->dev); diff --git a/include/linux/hid.h b/include/linux/hid.h index 4363a63b9775..a43dd17bc78f 100644 --- a/include/linux/hid.h +++ b/include/linux/hid.h @@ -658,6 +658,8 @@ struct hid_device { /* device report descriptor */ struct list_head debug_list; spinlock_t debug_list_lock; wait_queue_head_t debug_wait; + + unsigned int id; /* system unique id */ };
#define to_hid_device(pdev) \
When dealing with eBPF, we need access to the report type. Currently our values differ from the USB/HID standard (we use 0, 1, 2 where the spec says 1, 2, 3), making it impossible for users to know the exact value short of hardcoding it themselves.
And instead of bare defines, convert them into a proper enum.
Note that we also need to change the ll_driver API, but given that this has a wider impact outside of this tree, we leave it as a TODO for the future.
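For instance, a caller now reads naturally with the typed report type (a sketch only, with an illustrative report id and buffer size):

	/* sketch: fetch feature report 1 from hdev (illustrative only) */
	static int fetch_feature(struct hid_device *hdev)
	{
		__u8 *buf = kzalloc(8, GFP_KERNEL);
		int ret;

		if (!buf)
			return -ENOMEM;

		ret = hid_hw_raw_request(hdev, 1, buf, 8,
					 HID_FEATURE_REPORT, HID_REQ_GET_REPORT);
		kfree(buf);
		return ret;
	}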
Reviewed-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
---
no changes in v9
no changes in v8
no changes in v7
changes in v6: - add missing change for hid_hw_raw_request()
new in v5 --- drivers/hid/hid-core.c | 13 +++++++------ include/linux/hid.h | 24 ++++++++---------------- include/uapi/linux/hid.h | 12 ++++++++++++ 3 files changed, 27 insertions(+), 22 deletions(-)
diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c index a00dd43db8bf..ab98754522d9 100644 --- a/drivers/hid/hid-core.c +++ b/drivers/hid/hid-core.c @@ -55,7 +55,7 @@ MODULE_PARM_DESC(ignore_special_drivers, "Ignore any special drivers and handle */
struct hid_report *hid_register_report(struct hid_device *device, - unsigned int type, unsigned int id, + enum hid_report_type type, unsigned int id, unsigned int application) { struct hid_report_enum *report_enum = device->report_enum + type; @@ -967,7 +967,7 @@ static const char * const hid_report_names[] = { * parsing. */ struct hid_report *hid_validate_values(struct hid_device *hid, - unsigned int type, unsigned int id, + enum hid_report_type type, unsigned int id, unsigned int field_index, unsigned int report_counts) { @@ -1954,8 +1954,8 @@ int __hid_request(struct hid_device *hid, struct hid_report *report, } EXPORT_SYMBOL_GPL(__hid_request);
-int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size, - int interrupt) +int hid_report_raw_event(struct hid_device *hid, enum hid_report_type type, u8 *data, u32 size, + int interrupt) { struct hid_report_enum *report_enum = hid->report_enum + type; struct hid_report *report; @@ -2019,7 +2019,8 @@ EXPORT_SYMBOL_GPL(hid_report_raw_event); * * This is data entry for lower layers. */ -int hid_input_report(struct hid_device *hid, int type, u8 *data, u32 size, int interrupt) +int hid_input_report(struct hid_device *hid, enum hid_report_type type, u8 *data, u32 size, + int interrupt) { struct hid_report_enum *report_enum; struct hid_driver *hdrv; @@ -2377,7 +2378,7 @@ EXPORT_SYMBOL_GPL(hid_hw_request); */ int hid_hw_raw_request(struct hid_device *hdev, unsigned char reportnum, __u8 *buf, - size_t len, unsigned char rtype, int reqtype) + size_t len, enum hid_report_type rtype, int reqtype) { if (len < 1 || len > HID_MAX_BUFFER_SIZE || !buf) return -EINVAL; diff --git a/include/linux/hid.h b/include/linux/hid.h index a43dd17bc78f..b1a33dbbc78e 100644 --- a/include/linux/hid.h +++ b/include/linux/hid.h @@ -314,15 +314,6 @@ struct hid_item { #define HID_BAT_ABSOLUTESTATEOFCHARGE 0x00850065
#define HID_VD_ASUS_CUSTOM_MEDIA_KEYS 0xff310076 -/* - * HID report types --- Ouch! HID spec says 1 2 3! - */ - -#define HID_INPUT_REPORT 0 -#define HID_OUTPUT_REPORT 1 -#define HID_FEATURE_REPORT 2 - -#define HID_REPORT_TYPES 3
/* * HID connect requests @@ -509,7 +500,7 @@ struct hid_report { struct list_head hidinput_list; struct list_head field_entry_list; /* ordered list of input fields */ unsigned int id; /* id of this report */ - unsigned int type; /* report type */ + enum hid_report_type type; /* report type */ unsigned int application; /* application usage for this report */ struct hid_field *field[HID_MAX_FIELDS]; /* fields of the report */ struct hid_field_entry *field_entries; /* allocated memory of input field_entry */ @@ -926,7 +917,8 @@ extern int hidinput_connect(struct hid_device *hid, unsigned int force); extern void hidinput_disconnect(struct hid_device *);
int hid_set_field(struct hid_field *, unsigned, __s32); -int hid_input_report(struct hid_device *, int type, u8 *, u32, int); +int hid_input_report(struct hid_device *hid, enum hid_report_type type, u8 *data, u32 size, + int interrupt); struct hid_field *hidinput_get_led_field(struct hid_device *hid); unsigned int hidinput_count_leds(struct hid_device *hid); __s32 hidinput_calc_abs_res(const struct hid_field *field, __u16 code); @@ -935,11 +927,11 @@ int __hid_request(struct hid_device *hid, struct hid_report *rep, int reqtype); u8 *hid_alloc_report_buf(struct hid_report *report, gfp_t flags); struct hid_device *hid_allocate_device(void); struct hid_report *hid_register_report(struct hid_device *device, - unsigned int type, unsigned int id, + enum hid_report_type type, unsigned int id, unsigned int application); int hid_parse_report(struct hid_device *hid, __u8 *start, unsigned size); struct hid_report *hid_validate_values(struct hid_device *hid, - unsigned int type, unsigned int id, + enum hid_report_type type, unsigned int id, unsigned int field_index, unsigned int report_counts);
@@ -1111,7 +1103,7 @@ void hid_hw_request(struct hid_device *hdev, struct hid_report *report, int reqtype); int hid_hw_raw_request(struct hid_device *hdev, unsigned char reportnum, __u8 *buf, - size_t len, unsigned char rtype, int reqtype); + size_t len, enum hid_report_type rtype, int reqtype); int hid_hw_output_report(struct hid_device *hdev, __u8 *buf, size_t len);
/** @@ -1184,8 +1176,8 @@ static inline u32 hid_report_len(struct hid_report *report) return DIV_ROUND_UP(report->size, 8) + (report->id > 0); }
-int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size, - int interrupt); +int hid_report_raw_event(struct hid_device *hid, enum hid_report_type type, u8 *data, u32 size, + int interrupt);
/* HID quirks API */ unsigned long hid_lookup_quirk(const struct hid_device *hdev); diff --git a/include/uapi/linux/hid.h b/include/uapi/linux/hid.h index b34492a87a8a..b25b0bacaff2 100644 --- a/include/uapi/linux/hid.h +++ b/include/uapi/linux/hid.h @@ -42,6 +42,18 @@ #define USB_INTERFACE_PROTOCOL_KEYBOARD 1 #define USB_INTERFACE_PROTOCOL_MOUSE 2
+/* + * HID report types --- Ouch! HID spec says 1 2 3! + */ + +enum hid_report_type { + HID_INPUT_REPORT = 0, + HID_OUTPUT_REPORT = 1, + HID_FEATURE_REPORT = 2, + + HID_REPORT_TYPES, +}; + /* * HID class requests */
This allows the type to be exported in BTF and thus in the automatically generated vmlinux.h. It also adds some static checks on the users when we change the ll_driver API (see note below).
Note that we also need to change the ll_driver API, but given that this has a wider impact outside of this tree, we leave it as a TODO for the future.
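As a sketch of what this buys on the eBPF side (assuming a vmlinux.h freshly generated from a kernel carrying these two enum patches):

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>

	/* illustrative only: with the BTF export, the symbolic names are
	 * available, no need to hardcode 2 (feature report) or 0x09 (SET_REPORT)
	 */
	static __always_inline bool is_feature_set(enum hid_report_type rtype,
						   enum hid_class_request reqtype)
	{
		return rtype == HID_FEATURE_REPORT && reqtype == HID_REQ_SET_REPORT;
	}

	char LICENSE[] SEC("license") = "GPL";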
Reviewed-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
---
no changes in v9
no changes in v8
no changes in v7
new in v6 --- drivers/hid/hid-core.c | 6 +++--- include/linux/hid.h | 9 +++++---- include/uapi/linux/hid.h | 14 ++++++++------ 3 files changed, 16 insertions(+), 13 deletions(-)
diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c index ab98754522d9..aff37d6f587c 100644 --- a/drivers/hid/hid-core.c +++ b/drivers/hid/hid-core.c @@ -1921,7 +1921,7 @@ static struct hid_report *hid_get_report(struct hid_report_enum *report_enum, * DO NOT USE in hid drivers directly, but through hid_hw_request instead. */ int __hid_request(struct hid_device *hid, struct hid_report *report, - int reqtype) + enum hid_class_request reqtype) { char *buf; int ret; @@ -2353,7 +2353,7 @@ EXPORT_SYMBOL_GPL(hid_hw_close); * @reqtype: hid request type */ void hid_hw_request(struct hid_device *hdev, - struct hid_report *report, int reqtype) + struct hid_report *report, enum hid_class_request reqtype) { if (hdev->ll_driver->request) return hdev->ll_driver->request(hdev, report, reqtype); @@ -2378,7 +2378,7 @@ EXPORT_SYMBOL_GPL(hid_hw_request); */ int hid_hw_raw_request(struct hid_device *hdev, unsigned char reportnum, __u8 *buf, - size_t len, enum hid_report_type rtype, int reqtype) + size_t len, enum hid_report_type rtype, enum hid_class_request reqtype) { if (len < 1 || len > HID_MAX_BUFFER_SIZE || !buf) return -EINVAL; diff --git a/include/linux/hid.h b/include/linux/hid.h index b1a33dbbc78e..8677ae38599e 100644 --- a/include/linux/hid.h +++ b/include/linux/hid.h @@ -923,7 +923,7 @@ struct hid_field *hidinput_get_led_field(struct hid_device *hid); unsigned int hidinput_count_leds(struct hid_device *hid); __s32 hidinput_calc_abs_res(const struct hid_field *field, __u16 code); void hid_output_report(struct hid_report *report, __u8 *data); -int __hid_request(struct hid_device *hid, struct hid_report *rep, int reqtype); +int __hid_request(struct hid_device *hid, struct hid_report *rep, enum hid_class_request reqtype); u8 *hid_alloc_report_buf(struct hid_report *report, gfp_t flags); struct hid_device *hid_allocate_device(void); struct hid_report *hid_register_report(struct hid_device *device, @@ -1100,10 +1100,11 @@ void hid_hw_stop(struct hid_device *hdev); int __must_check hid_hw_open(struct hid_device *hdev); void hid_hw_close(struct hid_device *hdev); void hid_hw_request(struct hid_device *hdev, - struct hid_report *report, int reqtype); + struct hid_report *report, enum hid_class_request reqtype); int hid_hw_raw_request(struct hid_device *hdev, unsigned char reportnum, __u8 *buf, - size_t len, enum hid_report_type rtype, int reqtype); + size_t len, enum hid_report_type rtype, + enum hid_class_request reqtype); int hid_hw_output_report(struct hid_device *hdev, __u8 *buf, size_t len);
/** @@ -1131,7 +1132,7 @@ static inline int hid_hw_power(struct hid_device *hdev, int level) * @reqtype: hid request type */ static inline int hid_hw_idle(struct hid_device *hdev, int report, int idle, - int reqtype) + enum hid_class_request reqtype) { if (hdev->ll_driver->idle) return hdev->ll_driver->idle(hdev, report, idle, reqtype); diff --git a/include/uapi/linux/hid.h b/include/uapi/linux/hid.h index b25b0bacaff2..a4dcb34386e3 100644 --- a/include/uapi/linux/hid.h +++ b/include/uapi/linux/hid.h @@ -58,12 +58,14 @@ enum hid_report_type { * HID class requests */
-#define HID_REQ_GET_REPORT 0x01 -#define HID_REQ_GET_IDLE 0x02 -#define HID_REQ_GET_PROTOCOL 0x03 -#define HID_REQ_SET_REPORT 0x09 -#define HID_REQ_SET_IDLE 0x0A -#define HID_REQ_SET_PROTOCOL 0x0B +enum hid_class_request { + HID_REQ_GET_REPORT = 0x01, + HID_REQ_GET_IDLE = 0x02, + HID_REQ_GET_PROTOCOL = 0x03, + HID_REQ_SET_REPORT = 0x09, + HID_REQ_SET_IDLE = 0x0A, + HID_REQ_SET_PROTOCOL = 0x0B, +};
/* * HID class descriptor types
Currently, we descend into drivers/hid/ based on the value of CONFIG_HID.
However, that symbol is a tristate, meaning that it can be built as a module.
As per the kbuild documentation, if we descend into a subdirectory through an obj-m entry, nothing inside that subdirectory can be built into vmlinux; doing so is considered a bug.
To make things more friendly to HID-BPF, split HID (the HID bus core option) from HID_SUPPORT (do we want any kind of HID support in the system?), and make this new config a boolean.
Reviewed-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
---
no changes in v9
no changes in v8
new in v7 --- drivers/Makefile | 2 +- drivers/hid/Kconfig | 20 +++++++++++--------- 2 files changed, 12 insertions(+), 10 deletions(-)
diff --git a/drivers/Makefile b/drivers/Makefile index 057857258bfd..a24e6be80764 100644 --- a/drivers/Makefile +++ b/drivers/Makefile @@ -137,7 +137,7 @@ obj-$(CONFIG_CRYPTO) += crypto/ obj-$(CONFIG_SUPERH) += sh/ obj-y += clocksource/ obj-$(CONFIG_DCA) += dca/ -obj-$(CONFIG_HID) += hid/ +obj-$(CONFIG_HID_SUPPORT) += hid/ obj-$(CONFIG_PPC_PS3) += ps3/ obj-$(CONFIG_OF) += of/ obj-$(CONFIG_SSB) += ssb/ diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig index 6ce92830b5d1..4f24e42372dc 100644 --- a/drivers/hid/Kconfig +++ b/drivers/hid/Kconfig @@ -2,12 +2,18 @@ # # HID driver configuration # -menu "HID support" - depends on INPUT +menuconfig HID_SUPPORT + bool "HID bus support" + default y + depends on INPUT + help + This option adds core support for human interface device (HID). + You will also need drivers from the following menu to make use of it. + +if HID_SUPPORT
config HID - tristate "HID bus support" - depends on INPUT + tristate "HID bus core support" default y help A human interface device (HID) is a type of computer device that @@ -24,8 +30,6 @@ config HID
If unsure, say Y.
-if HID - config HID_BATTERY_STRENGTH bool "Battery level reporting for HID devices" depends on HID @@ -1324,8 +1328,6 @@ config HID_KUNIT_TEST
endmenu
-endif # HID - source "drivers/hid/usbhid/Kconfig"
source "drivers/hid/i2c-hid/Kconfig" @@ -1336,4 +1338,4 @@ source "drivers/hid/amd-sfh-hid/Kconfig"
source "drivers/hid/surface-hid/Kconfig"
-endmenu +endif # HID_SUPPORT
Declare an entry point that can use fmod_ret BPF programs, and also an API to access and change the incoming data.
A simpler implementation would consist of just calling hid_bpf_device_event() for any incoming event and letting users deal with the fact that they will be called for every event of every device.
The goal of HID-BPF is to partially replace drivers, so this situation can be problematic because we might have programs which will step on each other's toes.
For that, we add a new API, hid_bpf_attach_prog(), that can be called from a syscall, and we manually deal with a jump table in hid-bpf.
Whenever we add a program to the jump table (in other words, when we attach a program to a HID device), we keep the number of times we added this program to the jump table, so we can release it once there are no other users.
HID devices have an RCU protected list of available programs in the jump table, and those programs are called one after the other thanks to bpf_tail_call().
To detect users losing their fds on the programs we attached, we add 2 tracing facilities: on bpf_prog_release() (for when an fd is closed) and on bpf_free_inode() (for when a pinned program gets unpinned).
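To make the flow above concrete, a rough sketch of a user program and its attach step might look like the following. This is a sketch only, not taken from this patch: the kfunc signature and SEC names follow the description above and the documentation patch, and struct attach_args is made up for the example.

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	/* kfunc added by this patch (see include/linux/hid_bpf.h) */
	extern int hid_bpf_attach_prog(unsigned int hid_id, int prog_fd,
				       __u32 flags) __ksym;

	/* called for each incoming event of the device(s) it gets attached to,
	 * through the fmod_ret entry point described above
	 */
	SEC("fmod_ret/hid_bpf_device_event")
	int BPF_PROG(filter_event, struct hid_bpf_ctx *hctx)
	{
		/* inspect/modify the incoming data through the ctx accessors;
		 * returning 0 lets the event continue through the HID stack
		 */
		return 0;
	}

	/* the attach itself is done from a SEC("syscall") program; the args
	 * struct is defined by the caller and is purely illustrative here
	 */
	struct attach_args {
		int prog_fd;
		unsigned int hid_id;
		int retval;
	};

	SEC("syscall")
	int attach_prog(struct attach_args *ctx)
	{
		ctx->retval = hid_bpf_attach_prog(ctx->hid_id, ctx->prog_fd,
						  0 /* flags */);
		return 0;
	}

	char LICENSE[] SEC("license") = "GPL";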
Reviewed-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
---
changes in v9:
- fixed the kfunc declaration according to the latest upstream changes
- tiny change in the SPDX header of include/linux/hid_bpf.h
- removal of one obsolete comment in drivers/hid/bpf/Kconfig
no changes in v8
changes in v7:
- generate the entrypoints bpf lskel through a bootstrapped bpftool for efficiency
- fix a warning detected by the kernel test robot lkp@intel.com where CONFIG_BPF was used instead of CONFIG_HID_BPF
- declare __hid_bpf_tail_call in hid_bpf.h to silence a warning
- fix the static declarations of the __init and __exit functions
- changed the default Kconfig to be `default HID_SUPPORT` and do not select HID
changes in v6:
- use BTF_ID to get the btf_id of hid_bpf_device_event instead of loading/unloading a dummy eBPF program
changes in v5:
- all the HID bpf operations are in their dedicated module
- a bpf program is preloaded on startup so that subsequent calls can go through bpf_tail_call()
- make hid_bpf_ctx more compact
- add a dedicated hid_bpf_attach_prog() API
- store the list of progs in each hdev
- monitor the calls to bpf_prog_release to automatically release attached progs when there are no other users
- add kernel docs directly where the functions are defined
new-ish in v4: - far from complete, but gives an overview of what we can do now. --- drivers/hid/Kconfig | 2 + drivers/hid/Makefile | 2 + drivers/hid/bpf/Kconfig | 17 + drivers/hid/bpf/Makefile | 11 + drivers/hid/bpf/entrypoints/Makefile | 93 +++ drivers/hid/bpf/entrypoints/README | 4 + drivers/hid/bpf/entrypoints/entrypoints.bpf.c | 66 ++ .../hid/bpf/entrypoints/entrypoints.lskel.h | 682 ++++++++++++++++++ drivers/hid/bpf/hid_bpf_dispatch.c | 223 ++++++ drivers/hid/bpf/hid_bpf_dispatch.h | 27 + drivers/hid/bpf/hid_bpf_jmp_table.c | 568 +++++++++++++++ drivers/hid/hid-core.c | 15 + include/linux/hid.h | 5 + include/linux/hid_bpf.h | 102 +++ include/uapi/linux/hid_bpf.h | 25 + tools/include/uapi/linux/hid.h | 62 ++ tools/include/uapi/linux/hid_bpf.h | 25 + 17 files changed, 1929 insertions(+) create mode 100644 drivers/hid/bpf/Kconfig create mode 100644 drivers/hid/bpf/Makefile create mode 100644 drivers/hid/bpf/entrypoints/Makefile create mode 100644 drivers/hid/bpf/entrypoints/README create mode 100644 drivers/hid/bpf/entrypoints/entrypoints.bpf.c create mode 100644 drivers/hid/bpf/entrypoints/entrypoints.lskel.h create mode 100644 drivers/hid/bpf/hid_bpf_dispatch.c create mode 100644 drivers/hid/bpf/hid_bpf_dispatch.h create mode 100644 drivers/hid/bpf/hid_bpf_jmp_table.c create mode 100644 include/linux/hid_bpf.h create mode 100644 include/uapi/linux/hid_bpf.h create mode 100644 tools/include/uapi/linux/hid.h create mode 100644 tools/include/uapi/linux/hid_bpf.h
diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig index 4f24e42372dc..1a2a65c68205 100644 --- a/drivers/hid/Kconfig +++ b/drivers/hid/Kconfig @@ -1328,6 +1328,8 @@ config HID_KUNIT_TEST
endmenu
+source "drivers/hid/bpf/Kconfig" + source "drivers/hid/usbhid/Kconfig"
source "drivers/hid/i2c-hid/Kconfig" diff --git a/drivers/hid/Makefile b/drivers/hid/Makefile index b0bef8098139..e3ac587b9a21 100644 --- a/drivers/hid/Makefile +++ b/drivers/hid/Makefile @@ -5,6 +5,8 @@ hid-y := hid-core.o hid-input.o hid-quirks.o hid-$(CONFIG_DEBUG_FS) += hid-debug.o
+obj-$(CONFIG_HID_BPF) += bpf/ + obj-$(CONFIG_HID) += hid.o obj-$(CONFIG_UHID) += uhid.o
diff --git a/drivers/hid/bpf/Kconfig b/drivers/hid/bpf/Kconfig new file mode 100644 index 000000000000..298634fc3335 --- /dev/null +++ b/drivers/hid/bpf/Kconfig @@ -0,0 +1,17 @@ +# SPDX-License-Identifier: GPL-2.0-only +menu "HID-BPF support" + +config HID_BPF + bool "HID-BPF support" + default HID_SUPPORT + depends on BPF && BPF_SYSCALL + help + This option allows to support eBPF programs on the HID subsystem. + eBPF programs can fix HID devices in a lighter way than a full + kernel patch and allow a lot more flexibility. + + For documentation, see Documentation/hid/hid-bpf.rst + + If unsure, say Y. + +endmenu diff --git a/drivers/hid/bpf/Makefile b/drivers/hid/bpf/Makefile new file mode 100644 index 000000000000..cf55120cf7d6 --- /dev/null +++ b/drivers/hid/bpf/Makefile @@ -0,0 +1,11 @@ +# SPDX-License-Identifier: GPL-2.0 +# +# Makefile for HID-BPF +# + +LIBBPF_INCLUDE = $(srctree)/tools/lib + +obj-$(CONFIG_HID_BPF) += hid_bpf.o +CFLAGS_hid_bpf_dispatch.o += -I$(LIBBPF_INCLUDE) +CFLAGS_hid_bpf_jmp_table.o += -I$(LIBBPF_INCLUDE) +hid_bpf-objs += hid_bpf_dispatch.o hid_bpf_jmp_table.o diff --git a/drivers/hid/bpf/entrypoints/Makefile b/drivers/hid/bpf/entrypoints/Makefile new file mode 100644 index 000000000000..a12edcfa4fe3 --- /dev/null +++ b/drivers/hid/bpf/entrypoints/Makefile @@ -0,0 +1,93 @@ +# SPDX-License-Identifier: GPL-2.0 +OUTPUT := .output +abs_out := $(abspath $(OUTPUT)) + +CLANG ?= clang +LLC ?= llc +LLVM_STRIP ?= llvm-strip + +TOOLS_PATH := $(abspath ../../../../tools) +BPFTOOL_SRC := $(TOOLS_PATH)/bpf/bpftool +BPFTOOL_OUTPUT := $(abs_out)/bpftool +DEFAULT_BPFTOOL := $(BPFTOOL_OUTPUT)/bootstrap/bpftool +BPFTOOL ?= $(DEFAULT_BPFTOOL) + +LIBBPF_SRC := $(TOOLS_PATH)/lib/bpf +LIBBPF_OUTPUT := $(abs_out)/libbpf +LIBBPF_DESTDIR := $(LIBBPF_OUTPUT) +LIBBPF_INCLUDE := $(LIBBPF_DESTDIR)/include +BPFOBJ := $(LIBBPF_OUTPUT)/libbpf.a + +INCLUDES := -I$(OUTPUT) -I$(LIBBPF_INCLUDE) -I$(TOOLS_PATH)/include/uapi +CFLAGS := -g -Wall + +VMLINUX_BTF_PATHS ?= $(if $(O),$(O)/vmlinux) \ + $(if $(KBUILD_OUTPUT),$(KBUILD_OUTPUT)/vmlinux) \ + ../../../../vmlinux \ + /sys/kernel/btf/vmlinux \ + /boot/vmlinux-$(shell uname -r) +VMLINUX_BTF ?= $(abspath $(firstword $(wildcard $(VMLINUX_BTF_PATHS)))) +ifeq ($(VMLINUX_BTF),) +$(error Cannot find a vmlinux for VMLINUX_BTF at any of "$(VMLINUX_BTF_PATHS)") +endif + +ifeq ($(V),1) +Q = +msg = +else +Q = @ +msg = @printf ' %-8s %s%s\n' "$(1)" "$(notdir $(2))" "$(if $(3), $(3))"; +MAKEFLAGS += --no-print-directory +submake_extras := feature_display=0 +endif + +.DELETE_ON_ERROR: + +.PHONY: all clean + +all: entrypoints.lskel.h + +clean: + $(call msg,CLEAN) + $(Q)rm -rf $(OUTPUT) entrypoints + +entrypoints.lskel.h: $(OUTPUT)/entrypoints.bpf.o | $(BPFTOOL) + $(call msg,GEN-SKEL,$@) + $(Q)$(BPFTOOL) gen skeleton -L $< > $@ + + +$(OUTPUT)/entrypoints.bpf.o: entrypoints.bpf.c $(OUTPUT)/vmlinux.h $(BPFOBJ) | $(OUTPUT) + $(call msg,BPF,$@) + $(Q)$(CLANG) -g -O2 -target bpf $(INCLUDES) \ + -c $(filter %.c,$^) -o $@ && \ + $(LLVM_STRIP) -g $@ + +$(OUTPUT)/vmlinux.h: $(VMLINUX_BTF) $(BPFTOOL) | $(INCLUDE_DIR) +ifeq ($(VMLINUX_H),) + $(call msg,GEN,,$@) + $(Q)$(BPFTOOL) btf dump file $(VMLINUX_BTF) format c > $@ +else + $(call msg,CP,,$@) + $(Q)cp "$(VMLINUX_H)" $@ +endif + +$(OUTPUT) $(LIBBPF_OUTPUT) $(BPFTOOL_OUTPUT): + $(call msg,MKDIR,$@) + $(Q)mkdir -p $@ + +$(BPFOBJ): $(wildcard $(LIBBPF_SRC)/*.[ch] $(LIBBPF_SRC)/Makefile) | $(LIBBPF_OUTPUT) + $(Q)$(MAKE) $(submake_extras) -C $(LIBBPF_SRC) \ + OUTPUT=$(abspath $(dir $@))/ prefix= \ + DESTDIR=$(LIBBPF_DESTDIR) 
$(abspath $@) install_headers + +ifeq ($(CROSS_COMPILE),) +$(DEFAULT_BPFTOOL): $(BPFOBJ) | $(BPFTOOL_OUTPUT) + $(Q)$(MAKE) $(submake_extras) -C $(BPFTOOL_SRC) \ + OUTPUT=$(BPFTOOL_OUTPUT)/ \ + LIBBPF_BOOTSTRAP_OUTPUT=$(LIBBPF_OUTPUT)/ \ + LIBBPF_BOOTSTRAP_DESTDIR=$(LIBBPF_DESTDIR)/ bootstrap +else +$(DEFAULT_BPFTOOL): | $(BPFTOOL_OUTPUT) + $(Q)$(MAKE) $(submake_extras) -C $(BPFTOOL_SRC) \ + OUTPUT=$(BPFTOOL_OUTPUT)/ bootstrap +endif diff --git a/drivers/hid/bpf/entrypoints/README b/drivers/hid/bpf/entrypoints/README new file mode 100644 index 000000000000..147e0d41509f --- /dev/null +++ b/drivers/hid/bpf/entrypoints/README @@ -0,0 +1,4 @@ +WARNING: +If you change "entrypoints.bpf.c" do "make -j" in this directory to rebuild "entrypoints.skel.h". +Make sure to have clang 10 installed. +See Documentation/bpf/bpf_devel_QA.rst diff --git a/drivers/hid/bpf/entrypoints/entrypoints.bpf.c b/drivers/hid/bpf/entrypoints/entrypoints.bpf.c new file mode 100644 index 000000000000..41dd66d5fc7a --- /dev/null +++ b/drivers/hid/bpf/entrypoints/entrypoints.bpf.c @@ -0,0 +1,66 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2022 Benjamin Tissoires */ + +#include ".output/vmlinux.h" +#include <bpf/bpf_helpers.h> +#include <bpf/bpf_tracing.h> + +#define HID_BPF_MAX_PROGS 1024 + +extern bool call_hid_bpf_prog_release(u64 prog, int table_cnt) __ksym; + +struct { + __uint(type, BPF_MAP_TYPE_PROG_ARRAY); + __uint(max_entries, HID_BPF_MAX_PROGS); + __uint(key_size, sizeof(__u32)); + __uint(value_size, sizeof(__u32)); +} hid_jmp_table SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, HID_BPF_MAX_PROGS * HID_BPF_PROG_TYPE_MAX); + __type(key, void *); + __type(value, __u8); +} progs_map SEC(".maps"); + +SEC("fmod_ret/__hid_bpf_tail_call") +int BPF_PROG(hid_tail_call, struct hid_bpf_ctx *hctx) +{ + bpf_tail_call(ctx, &hid_jmp_table, hctx->index); + + return 0; +} + +static void release_prog(u64 prog) +{ + u8 *value; + + value = bpf_map_lookup_elem(&progs_map, &prog); + if (!value) + return; + + if (call_hid_bpf_prog_release(prog, *value)) + bpf_map_delete_elem(&progs_map, &prog); +} + +SEC("fexit/bpf_prog_release") +int BPF_PROG(hid_prog_release, struct inode *inode, struct file *filp) +{ + u64 prog = (u64)filp->private_data; + + release_prog(prog); + + return 0; +} + +SEC("fexit/bpf_free_inode") +int BPF_PROG(hid_free_inode, struct inode *inode) +{ + u64 prog = (u64)inode->i_private; + + release_prog(prog); + + return 0; +} + +char LICENSE[] SEC("license") = "GPL"; diff --git a/drivers/hid/bpf/entrypoints/entrypoints.lskel.h b/drivers/hid/bpf/entrypoints/entrypoints.lskel.h new file mode 100644 index 000000000000..d6a6045a06fe --- /dev/null +++ b/drivers/hid/bpf/entrypoints/entrypoints.lskel.h @@ -0,0 +1,682 @@ +/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */ +/* THIS FILE IS AUTOGENERATED BY BPFTOOL! 
*/ +#ifndef __ENTRYPOINTS_BPF_SKEL_H__ +#define __ENTRYPOINTS_BPF_SKEL_H__ + +#include <bpf/skel_internal.h> + +struct entrypoints_bpf { + struct bpf_loader_ctx ctx; + struct { + struct bpf_map_desc hid_jmp_table; + struct bpf_map_desc progs_map; + } maps; + struct { + struct bpf_prog_desc hid_tail_call; + struct bpf_prog_desc hid_prog_release; + struct bpf_prog_desc hid_free_inode; + } progs; + struct { + int hid_tail_call_fd; + int hid_prog_release_fd; + int hid_free_inode_fd; + } links; +}; + +static inline int +entrypoints_bpf__hid_tail_call__attach(struct entrypoints_bpf *skel) +{ + int prog_fd = skel->progs.hid_tail_call.prog_fd; + int fd = skel_raw_tracepoint_open(NULL, prog_fd); + + if (fd > 0) + skel->links.hid_tail_call_fd = fd; + return fd; +} + +static inline int +entrypoints_bpf__hid_prog_release__attach(struct entrypoints_bpf *skel) +{ + int prog_fd = skel->progs.hid_prog_release.prog_fd; + int fd = skel_raw_tracepoint_open(NULL, prog_fd); + + if (fd > 0) + skel->links.hid_prog_release_fd = fd; + return fd; +} + +static inline int +entrypoints_bpf__hid_free_inode__attach(struct entrypoints_bpf *skel) +{ + int prog_fd = skel->progs.hid_free_inode.prog_fd; + int fd = skel_raw_tracepoint_open(NULL, prog_fd); + + if (fd > 0) + skel->links.hid_free_inode_fd = fd; + return fd; +} + +static inline int +entrypoints_bpf__attach(struct entrypoints_bpf *skel) +{ + int ret = 0; + + ret = ret < 0 ? ret : entrypoints_bpf__hid_tail_call__attach(skel); + ret = ret < 0 ? ret : entrypoints_bpf__hid_prog_release__attach(skel); + ret = ret < 0 ? ret : entrypoints_bpf__hid_free_inode__attach(skel); + return ret < 0 ? ret : 0; +} + +static inline void +entrypoints_bpf__detach(struct entrypoints_bpf *skel) +{ + skel_closenz(skel->links.hid_tail_call_fd); + skel_closenz(skel->links.hid_prog_release_fd); + skel_closenz(skel->links.hid_free_inode_fd); +} +static void +entrypoints_bpf__destroy(struct entrypoints_bpf *skel) +{ + if (!skel) + return; + entrypoints_bpf__detach(skel); + skel_closenz(skel->progs.hid_tail_call.prog_fd); + skel_closenz(skel->progs.hid_prog_release.prog_fd); + skel_closenz(skel->progs.hid_free_inode.prog_fd); + skel_closenz(skel->maps.hid_jmp_table.map_fd); + skel_closenz(skel->maps.progs_map.map_fd); + skel_free(skel); +} +static inline struct entrypoints_bpf * +entrypoints_bpf__open(void) +{ + struct entrypoints_bpf *skel; + + skel = skel_alloc(sizeof(*skel)); + if (!skel) + goto cleanup; + skel->ctx.sz = (void *)&skel->links - (void *)skel; + return skel; +cleanup: + entrypoints_bpf__destroy(skel); + return NULL; +} + +static inline int +entrypoints_bpf__load(struct entrypoints_bpf *skel) +{ + struct bpf_load_and_run_opts opts = {}; + int err; + + opts.ctx = (struct bpf_loader_ctx *)skel; + opts.data_sz = 11504; + opts.data = (void *)"\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x9f\xeb\x01\0\ +\x18\0\0\0\0\0\0\0\x78\x14\0\0\x78\x14\0\0\xf4\x0c\0\0\0\0\0\0\0\0\0\x02\x03\0\ +\0\0\x01\0\0\0\0\0\0\x01\x04\0\0\0\x20\0\0\x01\0\0\0\0\0\0\0\x03\0\0\0\0\x02\0\ +\0\0\x04\0\0\0\x03\0\0\0\x05\0\0\0\0\0\0\x01\x04\0\0\0\x20\0\0\0\0\0\0\0\0\0\0\ +\x02\x06\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x02\0\0\0\x04\0\0\0\0\x04\0\0\0\0\0\0\ +\0\0\0\x02\x08\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x02\0\0\0\x04\0\0\0\x04\0\0\0\0\ +\0\0\0\x04\0\0\x04\x20\0\0\0\x19\0\0\0\x01\0\0\0\0\0\0\0\x1e\0\0\0\x05\0\0\0\ +\x40\0\0\0\x2a\0\0\0\x07\0\0\0\x80\0\0\0\x33\0\0\0\x07\0\0\0\xc0\0\0\0\x3e\0\0\ +\0\0\0\0\x0e\x09\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\x02\x0c\0\0\0\0\0\0\0\0\0\0\x03\ +\0\0\0\0\x02\0\0\0\x04\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\x02\x0e\0\0\0\0\0\0\0\0\0\ +\0\x03\0\0\0\0\x02\0\0\0\x04\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\x02\x10\0\0\0\0\0\0\ +\0\0\0\0\x02\0\0\0\0\0\0\0\0\0\0\0\x02\x12\0\0\0\x4c\0\0\0\0\0\0\x08\x13\0\0\0\ +\x51\0\0\0\0\0\0\x01\x01\0\0\0\x08\0\0\0\0\0\0\0\x04\0\0\x04\x20\0\0\0\x19\0\0\ +\0\x0b\0\0\0\0\0\0\0\x1e\0\0\0\x0d\0\0\0\x40\0\0\0\x5f\0\0\0\x0f\0\0\0\x80\0\0\ +\0\x63\0\0\0\x11\0\0\0\xc0\0\0\0\x69\0\0\0\0\0\0\x0e\x14\0\0\0\x01\0\0\0\0\0\0\ +\0\0\0\0\x02\x17\0\0\0\x73\0\0\0\0\0\0\x01\x08\0\0\0\x40\0\0\0\0\0\0\0\x01\0\0\ +\x0d\x02\0\0\0\x86\0\0\0\x16\0\0\0\x8a\0\0\0\x01\0\0\x0c\x18\0\0\0\x30\x01\0\0\ +\x05\0\0\x04\x20\0\0\0\x3c\x01\0\0\x1b\0\0\0\0\0\0\0\x42\x01\0\0\x1d\0\0\0\x40\ +\0\0\0\x46\x01\0\0\x1b\0\0\0\x80\0\0\0\x55\x01\0\0\x1f\0\0\0\xa0\0\0\0\0\0\0\0\ +\x20\0\0\0\xc0\0\0\0\x61\x01\0\0\0\0\0\x08\x1c\0\0\0\x67\x01\0\0\0\0\0\x01\x04\ 
+\0\0\0\x20\0\0\0\0\0\0\0\0\0\0\x02\x1e\0\0\0\0\0\0\0\0\0\0\x0a\xb5\0\0\0\x74\ +\x01\0\0\x04\0\0\x06\x04\0\0\0\x84\x01\0\0\0\0\0\0\x95\x01\0\0\x01\0\0\0\xa7\ +\x01\0\0\x02\0\0\0\xba\x01\0\0\x03\0\0\0\0\0\0\0\x02\0\0\x05\x04\0\0\0\xcb\x01\ +\0\0\x21\0\0\0\0\0\0\0\xd2\x01\0\0\x21\0\0\0\0\0\0\0\xd7\x01\0\0\0\0\0\x08\x02\ +\0\0\0\0\0\0\0\x01\0\0\x0d\x02\0\0\0\x86\0\0\0\x16\0\0\0\x13\x02\0\0\x01\0\0\ +\x0c\x22\0\0\0\x82\x02\0\0\x14\0\0\x04\xc8\x01\0\0\x87\x02\0\0\x25\0\0\0\0\0\0\ +\0\x8b\x02\0\0\x2c\0\0\0\x80\0\0\0\x92\x02\0\0\x2f\0\0\0\0\x01\0\0\x9a\x02\0\0\ +\x30\0\0\0\x40\x01\0\0\x9f\x02\0\0\x32\0\0\0\x80\x01\0\0\xa6\x02\0\0\x59\0\0\0\ +\x80\x03\0\0\xae\x02\0\0\x1c\0\0\0\xc0\x03\0\0\xb6\x02\0\0\x5f\0\0\0\xe0\x03\0\ +\0\xbd\x02\0\0\x60\0\0\0\0\x04\0\0\xc8\x02\0\0\x63\0\0\0\x80\x08\0\0\xce\x02\0\ +\0\x65\0\0\0\xc0\x08\0\0\xd6\x02\0\0\x73\0\0\0\x80\x0b\0\0\xdd\x02\0\0\x75\0\0\ +\0\xc0\x0b\0\0\xe2\x02\0\0\x76\0\0\0\xc0\x0c\0\0\xec\x02\0\0\x10\0\0\0\0\x0d\0\ +\0\xf7\x02\0\0\x10\0\0\0\x40\x0d\0\0\x04\x03\0\0\x78\0\0\0\x80\x0d\0\0\x09\x03\ +\0\0\x79\0\0\0\xc0\x0d\0\0\x13\x03\0\0\x7a\0\0\0\0\x0e\0\0\x1c\x03\0\0\x7a\0\0\ +\0\x20\x0e\0\0\0\0\0\0\x02\0\0\x05\x10\0\0\0\x25\x03\0\0\x26\0\0\0\0\0\0\0\x2e\ +\x03\0\0\x28\0\0\0\0\0\0\0\x39\x03\0\0\x01\0\0\x04\x08\0\0\0\x44\x03\0\0\x27\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\x02\x26\0\0\0\x49\x03\0\0\x02\0\0\x04\x10\0\0\0\x44\ +\x03\0\0\x29\0\0\0\0\0\0\0\x57\x03\0\0\x2a\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\ +\x28\0\0\0\0\0\0\0\0\0\0\x02\x2b\0\0\0\0\0\0\0\x01\0\0\x0d\0\0\0\0\0\0\0\0\x29\ +\0\0\0\x5c\x03\0\0\x02\0\0\x04\x10\0\0\0\x61\x03\0\0\x2d\0\0\0\0\0\0\0\x65\x03\ +\0\0\x2e\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\xbe\0\0\0\0\0\0\0\0\0\0\x02\xb1\0\0\ +\0\0\0\0\0\0\0\0\x02\x81\0\0\0\0\0\0\0\0\0\0\x02\x31\0\0\0\0\0\0\0\0\0\0\x0a\ +\xb3\0\0\0\x6c\x03\0\0\0\0\0\x08\x33\0\0\0\x77\x03\0\0\x01\0\0\x04\x40\0\0\0\0\ +\0\0\0\x34\0\0\0\0\0\0\0\0\0\0\0\x02\0\0\x05\x40\0\0\0\x80\x03\0\0\x35\0\0\0\0\ +\0\0\0\0\0\0\0\x57\0\0\0\0\0\0\0\x86\x03\0\0\x05\0\0\x04\x40\0\0\0\x93\x03\0\0\ +\x36\0\0\0\0\0\0\0\x9c\x03\0\0\x1c\0\0\0\x20\0\0\0\xa2\x03\0\0\x1c\0\0\0\x40\0\ +\0\0\xac\x03\0\0\x10\0\0\0\x80\0\0\0\xb2\x03\0\0\x41\0\0\0\xc0\0\0\0\xba\x03\0\ +\0\0\0\0\x08\x37\0\0\0\xca\x03\0\0\x01\0\0\x04\x04\0\0\0\0\0\0\0\x38\0\0\0\0\0\ +\0\0\0\0\0\0\x03\0\0\x05\x04\0\0\0\xd4\x03\0\0\x39\0\0\0\0\0\0\0\0\0\0\0\x3b\0\ +\0\0\0\0\0\0\0\0\0\0\x3d\0\0\0\0\0\0\0\xd8\x03\0\0\0\0\0\x08\x3a\0\0\0\0\0\0\0\ +\x01\0\0\x04\x04\0\0\0\xe1\x03\0\0\x02\0\0\0\0\0\0\0\0\0\0\0\x02\0\0\x04\x02\0\ +\0\0\xe9\x03\0\0\x3c\0\0\0\0\0\0\0\xf0\x03\0\0\x3c\0\0\0\x08\0\0\0\xf8\x03\0\0\ +\0\0\0\x08\x12\0\0\0\0\0\0\0\x02\0\0\x04\x04\0\0\0\xfb\x03\0\0\x3e\0\0\0\0\0\0\ +\0\x0a\x04\0\0\x3e\0\0\0\x10\0\0\0\x0f\x04\0\0\0\0\0\x08\x3f\0\0\0\x13\x04\0\0\ +\0\0\0\x08\x40\0\0\0\x19\x04\0\0\0\0\0\x01\x02\0\0\0\x10\0\0\0\x28\x04\0\0\x06\ +\0\0\x04\x28\0\0\0\x5f\0\0\0\x42\0\0\0\0\0\0\0\x34\x04\0\0\x56\0\0\0\x40\0\0\0\ +\x40\x04\0\0\x53\0\0\0\xc0\0\0\0\x45\x04\0\0\x3c\0\0\0\0\x01\0\0\x55\x04\0\0\ +\x3c\0\0\0\x08\x01\0\0\x65\x04\0\0\x3c\0\0\0\x10\x01\0\0\0\0\0\0\0\0\0\x02\xb7\ +\0\0\0\0\0\0\0\0\0\0\x02\x44\0\0\0\x6f\x04\0\0\x0e\0\0\x04\xc0\0\0\0\x7a\x04\0\ +\0\x45\0\0\0\0\0\0\0\x85\x04\0\0\x48\0\0\0\x80\0\0\0\x90\x04\0\0\x48\0\0\0\0\ +\x01\0\0\x9c\x04\0\0\x48\0\0\0\x80\x01\0\0\x5f\0\0\0\x4a\0\0\0\0\x02\0\0\xa9\ +\x04\0\0\x1c\0\0\0\x40\x02\0\0\xb2\x04\0\0\x1c\0\0\0\x60\x02\0\0\xbd\x04\0\0\ +\x4c\0\0\0\x80\x02\0\0\xc8\x04\0\0\x52\0\0\0\xc0\x02\0\0\xd5\x04\0\0\x02\0\0\0\ +\x40\x05\0\0\x40\x04\0\0\x53\0\0\0\x80\x05\0\0\x55\x04\0\0\x3c\0\0\0\xc0\x05\0\ 
+\0\x45\x04\0\0\x3c\0\0\0\xc8\x05\0\0\x65\x04\0\0\x3c\0\0\0\xd0\x05\0\0\xe2\x04\ +\0\0\x02\0\0\x04\x10\0\0\0\x44\x03\0\0\x46\0\0\0\0\0\0\0\xed\x04\0\0\x47\0\0\0\ +\x40\0\0\0\0\0\0\0\0\0\0\x02\x45\0\0\0\0\0\0\0\0\0\0\x02\x46\0\0\0\xf3\x04\0\0\ +\x02\0\0\x04\x10\0\0\0\x44\x03\0\0\x49\0\0\0\0\0\0\0\xfd\x04\0\0\x49\0\0\0\x40\ +\0\0\0\0\0\0\0\0\0\0\x02\x48\0\0\0\0\0\0\0\0\0\0\x02\x4b\0\0\0\0\0\0\0\0\0\0\ +\x0a\xb8\0\0\0\x02\x05\0\0\0\0\0\x01\x08\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\x4e\ +\0\0\0\0\0\0\0\0\0\0\x0a\x4f\0\0\0\x10\x05\0\0\x04\0\0\x04\x18\0\0\0\x7a\x04\0\ +\0\x45\0\0\0\0\0\0\0\x1b\x05\0\0\x50\0\0\0\x80\0\0\0\x20\x05\0\0\x50\0\0\0\xa0\ +\0\0\0\x2b\x05\0\0\x51\0\0\0\xc0\0\0\0\x33\x05\0\0\0\0\0\x08\x1b\0\0\0\0\0\0\0\ +\0\0\0\x03\0\0\0\0\x4c\0\0\0\x04\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x4d\0\ +\0\0\x04\0\0\0\x0a\0\0\0\0\0\0\0\0\0\0\x02\x54\0\0\0\0\0\0\0\0\0\0\x0a\x55\0\0\ +\0\x37\x05\0\0\0\0\0\x01\x01\0\0\0\x08\0\0\x01\0\0\0\0\0\0\0\x03\0\0\0\0\x43\0\ +\0\0\x04\0\0\0\x02\0\0\0\0\0\0\0\x02\0\0\x04\x40\0\0\0\x3c\x05\0\0\x58\0\0\0\0\ +\0\0\0\xb2\x03\0\0\x41\0\0\0\xc0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x3c\0\0\0\x04\ +\0\0\0\x18\0\0\0\x46\x05\0\0\0\0\0\x08\x5a\0\0\0\x54\x05\0\0\0\0\0\x08\x5b\0\0\ +\0\0\0\0\0\x01\0\0\x04\x08\0\0\0\xe1\x03\0\0\x5c\0\0\0\0\0\0\0\x5f\x05\0\0\0\0\ +\0\x08\x5d\0\0\0\x63\x05\0\0\0\0\0\x08\x5e\0\0\0\x69\x05\0\0\0\0\0\x01\x08\0\0\ +\0\x40\0\0\x01\x73\x05\0\0\0\0\0\x08\x1c\0\0\0\x7b\x05\0\0\x06\0\0\x04\x90\0\0\ +\0\xac\x03\0\0\x59\0\0\0\0\0\0\0\x81\x05\0\0\x61\0\0\0\x40\0\0\0\x8b\x05\0\0\ +\x62\0\0\0\x40\x02\0\0\x8f\x05\0\0\x48\0\0\0\x80\x02\0\0\x9c\x03\0\0\x10\0\0\0\ +\0\x03\0\0\xb2\x03\0\0\x41\0\0\0\x40\x03\0\0\x99\x05\0\0\0\0\0\x08\x35\0\0\0\ +\xa8\x05\0\0\x01\0\0\x04\x04\0\0\0\x0a\x04\0\0\x39\0\0\0\0\0\0\0\xbe\x05\0\0\0\ +\0\0\x08\x64\0\0\0\xc5\x05\0\0\0\0\0\x08\x5e\0\0\0\xd5\x05\0\0\x06\0\0\x04\x58\ +\0\0\0\xe1\x05\0\0\x66\0\0\0\0\0\0\0\xe6\x05\0\0\x6d\0\0\0\0\x02\0\0\xea\x05\0\ +\0\x6e\0\0\0\x40\x02\0\0\xf3\x05\0\0\x6f\0\0\0\x60\x02\0\0\xf7\x05\0\0\x6f\0\0\ +\0\x80\x02\0\0\xfc\x05\0\0\x02\0\0\0\xa0\x02\0\0\x03\x06\0\0\0\0\0\x08\x67\0\0\ +\0\0\0\0\0\x05\0\0\x04\x40\0\0\0\x93\x03\0\0\x68\0\0\0\0\0\0\0\x9c\x03\0\0\x1c\ +\0\0\0\x40\0\0\0\xa2\x03\0\0\x1c\0\0\0\x60\0\0\0\xac\x03\0\0\x10\0\0\0\x80\0\0\ +\0\xb2\x03\0\0\x41\0\0\0\xc0\0\0\0\x0c\x06\0\0\0\0\0\x08\x69\0\0\0\x1a\x06\0\0\ +\x02\0\0\x04\x08\0\0\0\0\0\0\0\x6a\0\0\0\0\0\0\0\x81\x05\0\0\x36\0\0\0\x20\0\0\ +\0\0\0\0\0\x02\0\0\x05\x04\0\0\0\x22\x06\0\0\x39\0\0\0\0\0\0\0\0\0\0\0\x6b\0\0\ +\0\0\0\0\0\0\0\0\0\x02\0\0\x04\x04\0\0\0\x27\x06\0\0\x3c\0\0\0\0\0\0\0\x2f\x06\ +\0\0\x6c\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x3c\0\0\0\x04\0\0\0\x03\0\0\ +\0\0\0\0\0\0\0\0\x02\xb9\0\0\0\xea\x05\0\0\x05\0\0\x06\x04\0\0\0\x38\x06\0\0\0\ +\0\0\0\x44\x06\0\0\x01\0\0\0\x51\x06\0\0\x02\0\0\0\x5e\x06\0\0\x03\0\0\0\x6a\ +\x06\0\0\x04\0\0\0\x76\x06\0\0\0\0\0\x08\x70\0\0\0\0\0\0\0\x01\0\0\x04\x04\0\0\ +\0\xd4\x03\0\0\x71\0\0\0\0\0\0\0\x7d\x06\0\0\0\0\0\x08\x72\0\0\0\x83\x06\0\0\0\ +\0\0\x08\x1c\0\0\0\0\0\0\0\0\0\0\x02\x74\0\0\0\0\0\0\0\0\0\0\x0a\xb0\0\0\0\x94\ +\x06\0\0\x06\0\0\x04\x20\0\0\0\xa2\x06\0\0\x4c\0\0\0\0\0\0\0\xd2\x01\0\0\x1c\0\ +\0\0\x40\0\0\0\xa8\x06\0\0\x1c\0\0\0\x60\0\0\0\xb3\x06\0\0\x1c\0\0\0\x80\0\0\0\ +\xbc\x06\0\0\x1c\0\0\0\xa0\0\0\0\xc6\x06\0\0\x63\0\0\0\xc0\0\0\0\xcf\x06\0\0\0\ +\0\0\x08\x77\0\0\0\xd3\x06\0\0\0\0\0\x08\x17\0\0\0\0\0\0\0\0\0\0\x02\x96\0\0\0\ +\0\0\0\0\0\0\0\x02\x9b\0\0\0\xd9\x06\0\0\0\0\0\x08\x50\0\0\0\0\0\0\0\x02\0\0\ +\x0d\x7c\0\0\0\xe9\x0c\0\0\x76\0\0\0\xe9\x0c\0\0\x02\0\0\0\xa2\x07\0\0\0\0\0\ 
+\x08\x7d\0\0\0\xa7\x07\0\0\0\0\0\x01\x01\0\0\0\x08\0\0\x04\xad\x07\0\0\x01\0\0\ +\x0c\x7b\0\0\0\0\0\0\0\x01\0\0\x0d\x02\0\0\0\x86\0\0\0\x16\0\0\0\xc7\x07\0\0\ +\x01\0\0\x0c\x7f\0\0\0\x1d\x08\0\0\x34\0\0\x04\x78\x04\0\0\x23\x08\0\0\x82\0\0\ +\0\0\0\0\0\x2a\x08\0\0\x40\0\0\0\x10\0\0\0\x34\x08\0\0\x6f\0\0\0\x20\0\0\0\x3a\ +\x08\0\0\x83\0\0\0\x40\0\0\0\x40\x08\0\0\x1c\0\0\0\x60\0\0\0\x48\x08\0\0\x87\0\ +\0\0\x80\0\0\0\x4e\x08\0\0\x87\0\0\0\xc0\0\0\0\x5c\x08\0\0\x88\0\0\0\0\x01\0\0\ +\x61\x08\0\0\x8a\0\0\0\x40\x01\0\0\x66\x08\0\0\x79\0\0\0\x80\x01\0\0\x70\x08\0\ +\0\x10\0\0\0\xc0\x01\0\0\x7b\x08\0\0\x4c\0\0\0\0\x02\0\0\0\0\0\0\x8b\0\0\0\x40\ +\x02\0\0\x81\x08\0\0\x8d\0\0\0\x60\x02\0\0\x88\x08\0\0\x63\0\0\0\x80\x02\0\0\ +\x8f\x08\0\0\x8f\0\0\0\xc0\x02\0\0\x97\x08\0\0\x8f\0\0\0\x40\x03\0\0\x9f\x08\0\ +\0\x8f\0\0\0\xc0\x03\0\0\xa7\x08\0\0\x32\0\0\0\x40\x04\0\0\xae\x08\0\0\x40\0\0\ +\0\x40\x06\0\0\xb6\x08\0\0\x3c\0\0\0\x50\x06\0\0\xc0\x08\0\0\x3c\0\0\0\x58\x06\ +\0\0\xcd\x08\0\0\x92\0\0\0\x80\x06\0\0\xd6\x08\0\0\x4c\0\0\0\xc0\x06\0\0\xde\ +\x08\0\0\x93\0\0\0\0\x07\0\0\xe6\x08\0\0\x4c\0\0\0\xc0\x0b\0\0\xf3\x08\0\0\x4c\ +\0\0\0\0\x0c\0\0\x05\x09\0\0\x45\0\0\0\x40\x0c\0\0\x0c\x09\0\0\x48\0\0\0\xc0\ +\x0c\0\0\x16\x09\0\0\x94\0\0\0\x40\x0d\0\0\x1b\x09\0\0\x02\0\0\0\x80\x0d\0\0\ +\x2b\x09\0\0\x3e\0\0\0\xa0\x0d\0\0\x3d\x09\0\0\x3e\0\0\0\xb0\x0d\0\0\x4e\x09\0\ +\0\x48\0\0\0\xc0\x0d\0\0\x54\x09\0\0\x48\0\0\0\x40\x0e\0\0\x5e\x09\0\0\x48\0\0\ +\0\xc0\x0e\0\0\0\0\0\0\x95\0\0\0\x40\x0f\0\0\x68\x09\0\0\x5a\0\0\0\xc0\x0f\0\0\ +\x72\x09\0\0\x5a\0\0\0\0\x10\0\0\x7d\x09\0\0\x39\0\0\0\x40\x10\0\0\x85\x09\0\0\ +\x39\0\0\0\x60\x10\0\0\x91\x09\0\0\x39\0\0\0\x80\x10\0\0\x9e\x09\0\0\x39\0\0\0\ +\xa0\x10\0\0\0\0\0\0\x97\0\0\0\xc0\x10\0\0\xaa\x09\0\0\x9a\0\0\0\0\x11\0\0\xb2\ +\x09\0\0\x9b\0\0\0\x40\x11\0\0\xb9\x09\0\0\x48\0\0\0\x40\x22\0\0\0\0\0\0\xa3\0\ +\0\0\xc0\x22\0\0\xc3\x09\0\0\x1b\0\0\0\0\x23\0\0\xd0\x09\0\0\x1b\0\0\0\x20\x23\ +\0\0\xe0\x09\0\0\xa7\0\0\0\x40\x23\0\0\xf1\x09\0\0\x10\0\0\0\x80\x23\0\0\xfb\ +\x09\0\0\0\0\0\x08\x40\0\0\0\x03\x0a\0\0\0\0\0\x08\x84\0\0\0\0\0\0\0\x01\0\0\ +\x04\x04\0\0\0\xd4\x03\0\0\x85\0\0\0\0\0\0\0\x0a\x0a\0\0\0\0\0\x08\x86\0\0\0\ +\x10\x0a\0\0\0\0\0\x08\x1c\0\0\0\0\0\0\0\0\0\0\x02\xbb\0\0\0\0\0\0\0\0\0\0\x02\ +\x89\0\0\0\0\0\0\0\0\0\0\x0a\xb6\0\0\0\0\0\0\0\0\0\0\x02\xbd\0\0\0\0\0\0\0\x02\ +\0\0\x05\x04\0\0\0\x21\x0a\0\0\x8c\0\0\0\0\0\0\0\x29\x0a\0\0\x1c\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\x0a\x1c\0\0\0\x33\x0a\0\0\0\0\0\x08\x8e\0\0\0\x39\x0a\0\0\0\0\0\ +\x08\x50\0\0\0\x48\x0a\0\0\x02\0\0\x04\x10\0\0\0\x53\x0a\0\0\x90\0\0\0\0\0\0\0\ +\x5a\x0a\0\0\x91\0\0\0\x40\0\0\0\x62\x0a\0\0\0\0\0\x08\x5d\0\0\0\x6b\x0a\0\0\0\ +\0\0\x01\x08\0\0\0\x40\0\0\x01\x70\x0a\0\0\0\0\0\x08\x76\0\0\0\x79\x0a\0\0\x07\ +\0\0\x04\x98\0\0\0\x86\x0a\0\0\x59\0\0\0\0\0\0\0\xac\x03\0\0\x59\0\0\0\x40\0\0\ +\0\x8b\x05\0\0\x62\0\0\0\x80\0\0\0\x81\x05\0\0\x61\0\0\0\xc0\0\0\0\x8f\x05\0\0\ +\x48\0\0\0\xc0\x02\0\0\x9c\x03\0\0\x10\0\0\0\x40\x03\0\0\xb2\x03\0\0\x41\0\0\0\ +\x80\x03\0\0\0\0\0\0\0\0\0\x02\xae\0\0\0\0\0\0\0\x02\0\0\x05\x10\0\0\0\x8c\x0a\ +\0\0\x96\0\0\0\0\0\0\0\x95\x0a\0\0\x28\0\0\0\0\0\0\0\x9b\x0a\0\0\x01\0\0\x04\ +\x08\0\0\0\xa6\x0a\0\0\x46\0\0\0\0\0\0\0\0\0\0\0\x02\0\0\x05\x08\0\0\0\xac\x0a\ +\0\0\x30\0\0\0\0\0\0\0\xb2\x0a\0\0\x98\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x02\x99\0\0\ +\0\0\0\0\0\x01\0\0\x0d\0\0\0\0\0\0\0\0\x2f\0\0\0\0\0\0\0\0\0\0\x02\xb2\0\0\0\ +\xbd\x0a\0\0\x0f\0\0\x04\x20\x02\0\0\xcb\x0a\0\0\x2f\0\0\0\0\0\0\0\xd0\x0a\0\0\ +\x9c\0\0\0\x40\0\0\0\xd8\x0a\0\0\x93\0\0\0\xc0\x02\0\0\xe8\x0a\0\0\x9d\0\0\0\ 
+\x80\x07\0\0\xf1\x0a\0\0\x39\0\0\0\xa0\x07\0\0\x01\x0b\0\0\x9e\0\0\0\xc0\x07\0\ +\0\x08\x0b\0\0\x93\0\0\0\x40\x08\0\0\x15\x0b\0\0\x4c\0\0\0\0\x0d\0\0\x1d\x0b\0\ +\0\x4c\0\0\0\x40\x0d\0\0\x2d\x0b\0\0\xa1\0\0\0\x80\x0d\0\0\x33\x0b\0\0\x4c\0\0\ +\0\xc0\x0d\0\0\x39\x0b\0\0\x7a\0\0\0\0\x0e\0\0\x40\x0b\0\0\x32\0\0\0\x40\x0e\0\ +\0\x4d\x0b\0\0\x48\0\0\0\x40\x10\0\0\xf7\x02\0\0\x10\0\0\0\xc0\x10\0\0\x5a\x0b\ +\0\0\x03\0\0\x04\x50\0\0\0\x61\x0b\0\0\x32\0\0\0\0\0\0\0\x69\x0b\0\0\x9d\0\0\0\ +\0\x02\0\0\x72\x0b\0\0\x10\0\0\0\x40\x02\0\0\x7a\x0b\0\0\0\0\0\x08\x1c\0\0\0\ +\x80\x0b\0\0\x02\0\0\x04\x10\0\0\0\x8f\x0b\0\0\x9f\0\0\0\0\0\0\0\x97\x0b\0\0\ +\xa0\0\0\0\x40\0\0\0\x8f\x0b\0\0\x01\0\0\x04\x08\0\0\0\xa3\x0b\0\0\xa0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\x02\xbc\0\0\0\0\0\0\0\0\0\0\x02\xa2\0\0\0\0\0\0\0\0\0\0\ +\x0a\xad\0\0\0\0\0\0\0\x04\0\0\x05\x08\0\0\0\xab\x0b\0\0\xa4\0\0\0\0\0\0\0\xb2\ +\x0b\0\0\xa5\0\0\0\0\0\0\0\xb9\x0b\0\0\xa6\0\0\0\0\0\0\0\xc0\x0b\0\0\x1c\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\x02\xba\0\0\0\0\0\0\0\0\0\0\x02\xaf\0\0\0\0\0\0\0\0\0\0\ +\x02\x55\0\0\0\0\0\0\0\0\0\0\x02\xb4\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x55\0\0\0\ +\x04\0\0\0\x04\0\0\0\xf2\x0b\0\0\0\0\0\x0e\xa8\0\0\0\x01\0\0\0\xfa\x0b\0\0\x01\ +\0\0\x0f\x04\0\0\0\xbf\0\0\0\0\0\0\0\x04\0\0\0\x01\x0c\0\0\x02\0\0\x0f\x40\0\0\ +\0\x0a\0\0\0\0\0\0\0\x20\0\0\0\x15\0\0\0\x20\0\0\0\x20\0\0\0\x07\x0c\0\0\x01\0\ +\0\x0f\x04\0\0\0\xa9\0\0\0\0\0\0\0\x04\0\0\0\x0f\x0c\0\0\0\0\0\x07\0\0\0\0\x28\ +\x0c\0\0\0\0\0\x07\0\0\0\0\x36\x0c\0\0\0\0\0\x07\0\0\0\0\x3b\x0c\0\0\0\0\0\x07\ +\0\0\0\0\x65\x03\0\0\0\0\0\x07\0\0\0\0\x40\x0c\0\0\0\0\0\x07\0\0\0\0\x52\x0c\0\ +\0\0\0\0\x07\0\0\0\0\x62\x0c\0\0\0\0\0\x07\0\0\0\0\x7a\x0c\0\0\0\0\0\x07\0\0\0\ +\0\x85\x0c\0\0\0\0\0\x07\0\0\0\0\x96\x0c\0\0\0\0\0\x07\0\0\0\0\xa5\x0c\0\0\0\0\ +\0\x07\0\0\0\0\xe6\x05\0\0\0\0\0\x07\0\0\0\0\xba\x0c\0\0\0\0\0\x07\0\0\0\0\xca\ +\x0c\0\0\0\0\0\x07\0\0\0\0\xa3\x0b\0\0\0\0\0\x07\0\0\0\0\xd4\x0c\0\0\0\0\0\x07\ +\0\0\0\0\xe0\x0c\0\0\0\0\0\x07\0\0\0\0\xe9\x0c\0\0\0\0\0\x0e\x02\0\0\0\x01\0\0\ +\0\0\x69\x6e\x74\0\x5f\x5f\x41\x52\x52\x41\x59\x5f\x53\x49\x5a\x45\x5f\x54\x59\ +\x50\x45\x5f\x5f\0\x74\x79\x70\x65\0\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\ +\x73\0\x6b\x65\x79\x5f\x73\x69\x7a\x65\0\x76\x61\x6c\x75\x65\x5f\x73\x69\x7a\ +\x65\0\x68\x69\x64\x5f\x6a\x6d\x70\x5f\x74\x61\x62\x6c\x65\0\x5f\x5f\x75\x38\0\ +\x75\x6e\x73\x69\x67\x6e\x65\x64\x20\x63\x68\x61\x72\0\x6b\x65\x79\0\x76\x61\ +\x6c\x75\x65\0\x70\x72\x6f\x67\x73\x5f\x6d\x61\x70\0\x75\x6e\x73\x69\x67\x6e\ +\x65\x64\x20\x6c\x6f\x6e\x67\x20\x6c\x6f\x6e\x67\0\x63\x74\x78\0\x68\x69\x64\ +\x5f\x74\x61\x69\x6c\x5f\x63\x61\x6c\x6c\0\x66\x6d\x6f\x64\x5f\x72\x65\x74\x2f\ +\x5f\x5f\x68\x69\x64\x5f\x62\x70\x66\x5f\x74\x61\x69\x6c\x5f\x63\x61\x6c\x6c\0\ +\x2f\x68\x6f\x6d\x65\x2f\x62\x74\x69\x73\x73\x6f\x69\x72\x2f\x53\x72\x63\x2f\ +\x68\x69\x64\x2f\x64\x72\x69\x76\x65\x72\x73\x2f\x68\x69\x64\x2f\x62\x70\x66\ +\x2f\x65\x6e\x74\x72\x79\x70\x6f\x69\x6e\x74\x73\x2f\x65\x6e\x74\x72\x79\x70\ +\x6f\x69\x6e\x74\x73\x2e\x62\x70\x66\x2e\x63\0\x69\x6e\x74\x20\x42\x50\x46\x5f\ +\x50\x52\x4f\x47\x28\x68\x69\x64\x5f\x74\x61\x69\x6c\x5f\x63\x61\x6c\x6c\x2c\ +\x20\x73\x74\x72\x75\x63\x74\x20\x68\x69\x64\x5f\x62\x70\x66\x5f\x63\x74\x78\ +\x20\x2a\x68\x63\x74\x78\x29\0\x68\x69\x64\x5f\x62\x70\x66\x5f\x63\x74\x78\0\ +\x69\x6e\x64\x65\x78\0\x68\x69\x64\0\x61\x6c\x6c\x6f\x63\x61\x74\x65\x64\x5f\ +\x73\x69\x7a\x65\0\x72\x65\x70\x6f\x72\x74\x5f\x74\x79\x70\x65\0\x5f\x5f\x75\ +\x33\x32\0\x75\x6e\x73\x69\x67\x6e\x65\x64\x20\x69\x6e\x74\0\x68\x69\x64\x5f\ 
+\x72\x65\x70\x6f\x72\x74\x5f\x74\x79\x70\x65\0\x48\x49\x44\x5f\x49\x4e\x50\x55\ +\x54\x5f\x52\x45\x50\x4f\x52\x54\0\x48\x49\x44\x5f\x4f\x55\x54\x50\x55\x54\x5f\ +\x52\x45\x50\x4f\x52\x54\0\x48\x49\x44\x5f\x46\x45\x41\x54\x55\x52\x45\x5f\x52\ +\x45\x50\x4f\x52\x54\0\x48\x49\x44\x5f\x52\x45\x50\x4f\x52\x54\x5f\x54\x59\x50\ +\x45\x53\0\x72\x65\x74\x76\x61\x6c\0\x73\x69\x7a\x65\0\x5f\x5f\x73\x33\x32\0\ +\x30\x3a\x30\0\x09\x62\x70\x66\x5f\x74\x61\x69\x6c\x5f\x63\x61\x6c\x6c\x28\x63\ +\x74\x78\x2c\x20\x26\x68\x69\x64\x5f\x6a\x6d\x70\x5f\x74\x61\x62\x6c\x65\x2c\ +\x20\x68\x63\x74\x78\x2d\x3e\x69\x6e\x64\x65\x78\x29\x3b\0\x68\x69\x64\x5f\x70\ +\x72\x6f\x67\x5f\x72\x65\x6c\x65\x61\x73\x65\0\x66\x65\x78\x69\x74\x2f\x62\x70\ +\x66\x5f\x70\x72\x6f\x67\x5f\x72\x65\x6c\x65\x61\x73\x65\0\x69\x6e\x74\x20\x42\ +\x50\x46\x5f\x50\x52\x4f\x47\x28\x68\x69\x64\x5f\x70\x72\x6f\x67\x5f\x72\x65\ +\x6c\x65\x61\x73\x65\x2c\x20\x73\x74\x72\x75\x63\x74\x20\x69\x6e\x6f\x64\x65\ +\x20\x2a\x69\x6e\x6f\x64\x65\x2c\x20\x73\x74\x72\x75\x63\x74\x20\x66\x69\x6c\ +\x65\x20\x2a\x66\x69\x6c\x70\x29\0\x66\x69\x6c\x65\0\x66\x5f\x75\0\x66\x5f\x70\ +\x61\x74\x68\0\x66\x5f\x69\x6e\x6f\x64\x65\0\x66\x5f\x6f\x70\0\x66\x5f\x6c\x6f\ +\x63\x6b\0\x66\x5f\x63\x6f\x75\x6e\x74\0\x66\x5f\x66\x6c\x61\x67\x73\0\x66\x5f\ +\x6d\x6f\x64\x65\0\x66\x5f\x70\x6f\x73\x5f\x6c\x6f\x63\x6b\0\x66\x5f\x70\x6f\ +\x73\0\x66\x5f\x6f\x77\x6e\x65\x72\0\x66\x5f\x63\x72\x65\x64\0\x66\x5f\x72\x61\ +\0\x66\x5f\x76\x65\x72\x73\x69\x6f\x6e\0\x66\x5f\x73\x65\x63\x75\x72\x69\x74\ +\x79\0\x70\x72\x69\x76\x61\x74\x65\x5f\x64\x61\x74\x61\0\x66\x5f\x65\x70\0\x66\ +\x5f\x6d\x61\x70\x70\x69\x6e\x67\0\x66\x5f\x77\x62\x5f\x65\x72\x72\0\x66\x5f\ +\x73\x62\x5f\x65\x72\x72\0\x66\x75\x5f\x6c\x6c\x69\x73\x74\0\x66\x75\x5f\x72\ +\x63\x75\x68\x65\x61\x64\0\x6c\x6c\x69\x73\x74\x5f\x6e\x6f\x64\x65\0\x6e\x65\ +\x78\x74\0\x63\x61\x6c\x6c\x62\x61\x63\x6b\x5f\x68\x65\x61\x64\0\x66\x75\x6e\ +\x63\0\x70\x61\x74\x68\0\x6d\x6e\x74\0\x64\x65\x6e\x74\x72\x79\0\x73\x70\x69\ +\x6e\x6c\x6f\x63\x6b\x5f\x74\0\x73\x70\x69\x6e\x6c\x6f\x63\x6b\0\x72\x6c\x6f\ +\x63\x6b\0\x72\x61\x77\x5f\x73\x70\x69\x6e\x6c\x6f\x63\x6b\0\x72\x61\x77\x5f\ +\x6c\x6f\x63\x6b\0\x6d\x61\x67\x69\x63\0\x6f\x77\x6e\x65\x72\x5f\x63\x70\x75\0\ +\x6f\x77\x6e\x65\x72\0\x64\x65\x70\x5f\x6d\x61\x70\0\x61\x72\x63\x68\x5f\x73\ +\x70\x69\x6e\x6c\x6f\x63\x6b\x5f\x74\0\x71\x73\x70\x69\x6e\x6c\x6f\x63\x6b\0\ +\x76\x61\x6c\0\x61\x74\x6f\x6d\x69\x63\x5f\x74\0\x63\x6f\x75\x6e\x74\x65\x72\0\ +\x6c\x6f\x63\x6b\x65\x64\0\x70\x65\x6e\x64\x69\x6e\x67\0\x75\x38\0\x6c\x6f\x63\ +\x6b\x65\x64\x5f\x70\x65\x6e\x64\x69\x6e\x67\0\x74\x61\x69\x6c\0\x75\x31\x36\0\ +\x5f\x5f\x75\x31\x36\0\x75\x6e\x73\x69\x67\x6e\x65\x64\x20\x73\x68\x6f\x72\x74\ +\0\x6c\x6f\x63\x6b\x64\x65\x70\x5f\x6d\x61\x70\0\x63\x6c\x61\x73\x73\x5f\x63\ +\x61\x63\x68\x65\0\x6e\x61\x6d\x65\0\x77\x61\x69\x74\x5f\x74\x79\x70\x65\x5f\ +\x6f\x75\x74\x65\x72\0\x77\x61\x69\x74\x5f\x74\x79\x70\x65\x5f\x69\x6e\x6e\x65\ +\x72\0\x6c\x6f\x63\x6b\x5f\x74\x79\x70\x65\0\x6c\x6f\x63\x6b\x5f\x63\x6c\x61\ +\x73\x73\0\x68\x61\x73\x68\x5f\x65\x6e\x74\x72\x79\0\x6c\x6f\x63\x6b\x5f\x65\ +\x6e\x74\x72\x79\0\x6c\x6f\x63\x6b\x73\x5f\x61\x66\x74\x65\x72\0\x6c\x6f\x63\ +\x6b\x73\x5f\x62\x65\x66\x6f\x72\x65\0\x73\x75\x62\x63\x6c\x61\x73\x73\0\x64\ +\x65\x70\x5f\x67\x65\x6e\x5f\x69\x64\0\x75\x73\x61\x67\x65\x5f\x6d\x61\x73\x6b\ +\0\x75\x73\x61\x67\x65\x5f\x74\x72\x61\x63\x65\x73\0\x6e\x61\x6d\x65\x5f\x76\ +\x65\x72\x73\x69\x6f\x6e\0\x68\x6c\x69\x73\x74\x5f\x6e\x6f\x64\x65\0\x70\x70\ 
+\x72\x65\x76\0\x6c\x69\x73\x74\x5f\x68\x65\x61\x64\0\x70\x72\x65\x76\0\x75\x6e\ +\x73\x69\x67\x6e\x65\x64\x20\x6c\x6f\x6e\x67\0\x6c\x6f\x63\x6b\x5f\x74\x72\x61\ +\x63\x65\0\x68\x61\x73\x68\0\x6e\x72\x5f\x65\x6e\x74\x72\x69\x65\x73\0\x65\x6e\ +\x74\x72\x69\x65\x73\0\x75\x33\x32\0\x63\x68\x61\x72\0\x5f\x5f\x70\x61\x64\x64\ +\x69\x6e\x67\0\x61\x74\x6f\x6d\x69\x63\x5f\x6c\x6f\x6e\x67\x5f\x74\0\x61\x74\ +\x6f\x6d\x69\x63\x36\x34\x5f\x74\0\x73\x36\x34\0\x5f\x5f\x73\x36\x34\0\x6c\x6f\ +\x6e\x67\x20\x6c\x6f\x6e\x67\0\x66\x6d\x6f\x64\x65\x5f\x74\0\x6d\x75\x74\x65\ +\x78\0\x77\x61\x69\x74\x5f\x6c\x6f\x63\x6b\0\x6f\x73\x71\0\x77\x61\x69\x74\x5f\ +\x6c\x69\x73\x74\0\x72\x61\x77\x5f\x73\x70\x69\x6e\x6c\x6f\x63\x6b\x5f\x74\0\ +\x6f\x70\x74\x69\x6d\x69\x73\x74\x69\x63\x5f\x73\x70\x69\x6e\x5f\x71\x75\x65\ +\x75\x65\0\x6c\x6f\x66\x66\x5f\x74\0\x5f\x5f\x6b\x65\x72\x6e\x65\x6c\x5f\x6c\ +\x6f\x66\x66\x5f\x74\0\x66\x6f\x77\x6e\x5f\x73\x74\x72\x75\x63\x74\0\x6c\x6f\ +\x63\x6b\0\x70\x69\x64\0\x70\x69\x64\x5f\x74\x79\x70\x65\0\x75\x69\x64\0\x65\ +\x75\x69\x64\0\x73\x69\x67\x6e\x75\x6d\0\x72\x77\x6c\x6f\x63\x6b\x5f\x74\0\x61\ +\x72\x63\x68\x5f\x72\x77\x6c\x6f\x63\x6b\x5f\x74\0\x71\x72\x77\x6c\x6f\x63\x6b\ +\0\x63\x6e\x74\x73\0\x77\x6c\x6f\x63\x6b\x65\x64\0\x5f\x5f\x6c\x73\x74\x61\x74\ +\x65\0\x50\x49\x44\x54\x59\x50\x45\x5f\x50\x49\x44\0\x50\x49\x44\x54\x59\x50\ +\x45\x5f\x54\x47\x49\x44\0\x50\x49\x44\x54\x59\x50\x45\x5f\x50\x47\x49\x44\0\ +\x50\x49\x44\x54\x59\x50\x45\x5f\x53\x49\x44\0\x50\x49\x44\x54\x59\x50\x45\x5f\ +\x4d\x41\x58\0\x6b\x75\x69\x64\x5f\x74\0\x75\x69\x64\x5f\x74\0\x5f\x5f\x6b\x65\ +\x72\x6e\x65\x6c\x5f\x75\x69\x64\x33\x32\x5f\x74\0\x66\x69\x6c\x65\x5f\x72\x61\ +\x5f\x73\x74\x61\x74\x65\0\x73\x74\x61\x72\x74\0\x61\x73\x79\x6e\x63\x5f\x73\ +\x69\x7a\x65\0\x72\x61\x5f\x70\x61\x67\x65\x73\0\x6d\x6d\x61\x70\x5f\x6d\x69\ +\x73\x73\0\x70\x72\x65\x76\x5f\x70\x6f\x73\0\x75\x36\x34\0\x5f\x5f\x75\x36\x34\ +\0\x65\x72\x72\x73\x65\x71\x5f\x74\0\x30\x3a\x31\x35\0\x09\x75\x36\x34\x20\x70\ +\x72\x6f\x67\x20\x3d\x20\x28\x75\x36\x34\x29\x66\x69\x6c\x70\x2d\x3e\x70\x72\ +\x69\x76\x61\x74\x65\x5f\x64\x61\x74\x61\x3b\0\x09\x76\x61\x6c\x75\x65\x20\x3d\ +\x20\x62\x70\x66\x5f\x6d\x61\x70\x5f\x6c\x6f\x6f\x6b\x75\x70\x5f\x65\x6c\x65\ +\x6d\x28\x26\x70\x72\x6f\x67\x73\x5f\x6d\x61\x70\x2c\x20\x26\x70\x72\x6f\x67\ +\x29\x3b\0\x09\x69\x66\x20\x28\x21\x76\x61\x6c\x75\x65\x29\0\x09\x69\x66\x20\ +\x28\x63\x61\x6c\x6c\x5f\x68\x69\x64\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\x5f\ +\x72\x65\x6c\x65\x61\x73\x65\x28\x70\x72\x6f\x67\x2c\x20\x2a\x76\x61\x6c\x75\ +\x65\x29\x29\0\x09\x09\x62\x70\x66\x5f\x6d\x61\x70\x5f\x64\x65\x6c\x65\x74\x65\ +\x5f\x65\x6c\x65\x6d\x28\x26\x70\x72\x6f\x67\x73\x5f\x6d\x61\x70\x2c\x20\x26\ +\x70\x72\x6f\x67\x29\x3b\0\x62\x6f\x6f\x6c\0\x5f\x42\x6f\x6f\x6c\0\x63\x61\x6c\ +\x6c\x5f\x68\x69\x64\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\x5f\x72\x65\x6c\x65\ +\x61\x73\x65\0\x68\x69\x64\x5f\x66\x72\x65\x65\x5f\x69\x6e\x6f\x64\x65\0\x66\ +\x65\x78\x69\x74\x2f\x62\x70\x66\x5f\x66\x72\x65\x65\x5f\x69\x6e\x6f\x64\x65\0\ +\x69\x6e\x74\x20\x42\x50\x46\x5f\x50\x52\x4f\x47\x28\x68\x69\x64\x5f\x66\x72\ +\x65\x65\x5f\x69\x6e\x6f\x64\x65\x2c\x20\x73\x74\x72\x75\x63\x74\x20\x69\x6e\ +\x6f\x64\x65\x20\x2a\x69\x6e\x6f\x64\x65\x29\0\x69\x6e\x6f\x64\x65\0\x69\x5f\ +\x6d\x6f\x64\x65\0\x69\x5f\x6f\x70\x66\x6c\x61\x67\x73\0\x69\x5f\x75\x69\x64\0\ +\x69\x5f\x67\x69\x64\0\x69\x5f\x66\x6c\x61\x67\x73\0\x69\x5f\x61\x63\x6c\0\x69\ +\x5f\x64\x65\x66\x61\x75\x6c\x74\x5f\x61\x63\x6c\0\x69\x5f\x6f\x70\0\x69\x5f\ 
+\x73\x62\0\x69\x5f\x6d\x61\x70\x70\x69\x6e\x67\0\x69\x5f\x73\x65\x63\x75\x72\ +\x69\x74\x79\0\x69\x5f\x69\x6e\x6f\0\x69\x5f\x72\x64\x65\x76\0\x69\x5f\x73\x69\ +\x7a\x65\0\x69\x5f\x61\x74\x69\x6d\x65\0\x69\x5f\x6d\x74\x69\x6d\x65\0\x69\x5f\ +\x63\x74\x69\x6d\x65\0\x69\x5f\x6c\x6f\x63\x6b\0\x69\x5f\x62\x79\x74\x65\x73\0\ +\x69\x5f\x62\x6c\x6b\x62\x69\x74\x73\0\x69\x5f\x77\x72\x69\x74\x65\x5f\x68\x69\ +\x6e\x74\0\x69\x5f\x62\x6c\x6f\x63\x6b\x73\0\x69\x5f\x73\x74\x61\x74\x65\0\x69\ +\x5f\x72\x77\x73\x65\x6d\0\x64\x69\x72\x74\x69\x65\x64\x5f\x77\x68\x65\x6e\0\ +\x64\x69\x72\x74\x69\x65\x64\x5f\x74\x69\x6d\x65\x5f\x77\x68\x65\x6e\0\x69\x5f\ +\x68\x61\x73\x68\0\x69\x5f\x69\x6f\x5f\x6c\x69\x73\x74\0\x69\x5f\x77\x62\0\x69\ +\x5f\x77\x62\x5f\x66\x72\x6e\x5f\x77\x69\x6e\x6e\x65\x72\0\x69\x5f\x77\x62\x5f\ +\x66\x72\x6e\x5f\x61\x76\x67\x5f\x74\x69\x6d\x65\0\x69\x5f\x77\x62\x5f\x66\x72\ +\x6e\x5f\x68\x69\x73\x74\x6f\x72\x79\0\x69\x5f\x6c\x72\x75\0\x69\x5f\x73\x62\ +\x5f\x6c\x69\x73\x74\0\x69\x5f\x77\x62\x5f\x6c\x69\x73\x74\0\x69\x5f\x76\x65\ +\x72\x73\x69\x6f\x6e\0\x69\x5f\x73\x65\x71\x75\x65\x6e\x63\x65\0\x69\x5f\x63\ +\x6f\x75\x6e\x74\0\x69\x5f\x64\x69\x6f\x5f\x63\x6f\x75\x6e\x74\0\x69\x5f\x77\ +\x72\x69\x74\x65\x63\x6f\x75\x6e\x74\0\x69\x5f\x72\x65\x61\x64\x63\x6f\x75\x6e\ +\x74\0\x69\x5f\x66\x6c\x63\x74\x78\0\x69\x5f\x64\x61\x74\x61\0\x69\x5f\x64\x65\ +\x76\x69\x63\x65\x73\0\x69\x5f\x67\x65\x6e\x65\x72\x61\x74\x69\x6f\x6e\0\x69\ +\x5f\x66\x73\x6e\x6f\x74\x69\x66\x79\x5f\x6d\x61\x73\x6b\0\x69\x5f\x66\x73\x6e\ +\x6f\x74\x69\x66\x79\x5f\x6d\x61\x72\x6b\x73\0\x69\x5f\x70\x72\x69\x76\x61\x74\ +\x65\0\x75\x6d\x6f\x64\x65\x5f\x74\0\x6b\x67\x69\x64\x5f\x74\0\x67\x69\x64\x5f\ +\x74\0\x5f\x5f\x6b\x65\x72\x6e\x65\x6c\x5f\x67\x69\x64\x33\x32\x5f\x74\0\x69\ +\x5f\x6e\x6c\x69\x6e\x6b\0\x5f\x5f\x69\x5f\x6e\x6c\x69\x6e\x6b\0\x64\x65\x76\ +\x5f\x74\0\x5f\x5f\x6b\x65\x72\x6e\x65\x6c\x5f\x64\x65\x76\x5f\x74\0\x74\x69\ +\x6d\x65\x73\x70\x65\x63\x36\x34\0\x74\x76\x5f\x73\x65\x63\0\x74\x76\x5f\x6e\ +\x73\x65\x63\0\x74\x69\x6d\x65\x36\x34\x5f\x74\0\x6c\x6f\x6e\x67\0\x62\x6c\x6b\ +\x63\x6e\x74\x5f\x74\0\x72\x77\x5f\x73\x65\x6d\x61\x70\x68\x6f\x72\x65\0\x63\ +\x6f\x75\x6e\x74\0\x69\x5f\x64\x65\x6e\x74\x72\x79\0\x69\x5f\x72\x63\x75\0\x68\ +\x6c\x69\x73\x74\x5f\x68\x65\x61\x64\0\x66\x69\x72\x73\x74\0\x69\x5f\x66\x6f\ +\x70\0\x66\x72\x65\x65\x5f\x69\x6e\x6f\x64\x65\0\x61\x64\x64\x72\x65\x73\x73\ +\x5f\x73\x70\x61\x63\x65\0\x68\x6f\x73\x74\0\x69\x5f\x70\x61\x67\x65\x73\0\x69\ +\x6e\x76\x61\x6c\x69\x64\x61\x74\x65\x5f\x6c\x6f\x63\x6b\0\x67\x66\x70\x5f\x6d\ +\x61\x73\x6b\0\x69\x5f\x6d\x6d\x61\x70\x5f\x77\x72\x69\x74\x61\x62\x6c\x65\0\ +\x69\x5f\x6d\x6d\x61\x70\0\x69\x5f\x6d\x6d\x61\x70\x5f\x72\x77\x73\x65\x6d\0\ +\x6e\x72\x70\x61\x67\x65\x73\0\x77\x72\x69\x74\x65\x62\x61\x63\x6b\x5f\x69\x6e\ +\x64\x65\x78\0\x61\x5f\x6f\x70\x73\0\x66\x6c\x61\x67\x73\0\x77\x62\x5f\x65\x72\ +\x72\0\x70\x72\x69\x76\x61\x74\x65\x5f\x6c\x6f\x63\x6b\0\x70\x72\x69\x76\x61\ +\x74\x65\x5f\x6c\x69\x73\x74\0\x78\x61\x72\x72\x61\x79\0\x78\x61\x5f\x6c\x6f\ +\x63\x6b\0\x78\x61\x5f\x66\x6c\x61\x67\x73\0\x78\x61\x5f\x68\x65\x61\x64\0\x67\ +\x66\x70\x5f\x74\0\x72\x62\x5f\x72\x6f\x6f\x74\x5f\x63\x61\x63\x68\x65\x64\0\ +\x72\x62\x5f\x72\x6f\x6f\x74\0\x72\x62\x5f\x6c\x65\x66\x74\x6d\x6f\x73\x74\0\ +\x72\x62\x5f\x6e\x6f\x64\x65\0\x69\x5f\x70\x69\x70\x65\0\x69\x5f\x63\x64\x65\ +\x76\0\x69\x5f\x6c\x69\x6e\x6b\0\x69\x5f\x64\x69\x72\x5f\x73\x65\x71\0\x30\x3a\ +\x35\x31\0\x09\x75\x36\x34\x20\x70\x72\x6f\x67\x20\x3d\x20\x28\x75\x36\x34\x29\ 
+\x69\x6e\x6f\x64\x65\x2d\x3e\x69\x5f\x70\x72\x69\x76\x61\x74\x65\x3b\0\x4c\x49\ +\x43\x45\x4e\x53\x45\0\x2e\x6b\x73\x79\x6d\x73\0\x2e\x6d\x61\x70\x73\0\x6c\x69\ +\x63\x65\x6e\x73\x65\0\x61\x64\x64\x72\x65\x73\x73\x5f\x73\x70\x61\x63\x65\x5f\ +\x6f\x70\x65\x72\x61\x74\x69\x6f\x6e\x73\0\x62\x64\x69\x5f\x77\x72\x69\x74\x65\ +\x62\x61\x63\x6b\0\x63\x64\x65\x76\0\x63\x72\x65\x64\0\x66\x69\x6c\x65\x5f\x6c\ +\x6f\x63\x6b\x5f\x63\x6f\x6e\x74\x65\x78\x74\0\x66\x69\x6c\x65\x5f\x6f\x70\x65\ +\x72\x61\x74\x69\x6f\x6e\x73\0\x66\x73\x6e\x6f\x74\x69\x66\x79\x5f\x6d\x61\x72\ +\x6b\x5f\x63\x6f\x6e\x6e\x65\x63\x74\x6f\x72\0\x68\x69\x64\x5f\x64\x65\x76\x69\ +\x63\x65\0\x69\x6e\x6f\x64\x65\x5f\x6f\x70\x65\x72\x61\x74\x69\x6f\x6e\x73\0\ +\x6c\x6f\x63\x6b\x5f\x63\x6c\x61\x73\x73\x5f\x6b\x65\x79\0\x6c\x6f\x63\x6b\x64\ +\x65\x70\x5f\x73\x75\x62\x63\x6c\x61\x73\x73\x5f\x6b\x65\x79\0\x70\x69\x70\x65\ +\x5f\x69\x6e\x6f\x64\x65\x5f\x69\x6e\x66\x6f\0\x70\x6f\x73\x69\x78\x5f\x61\x63\ +\x6c\0\x73\x75\x70\x65\x72\x5f\x62\x6c\x6f\x63\x6b\0\x76\x66\x73\x6d\x6f\x75\ +\x6e\x74\0\x64\x75\x6d\x6d\x79\x5f\x6b\x73\x79\x6d\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\x84\x21\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\x04\0\0\0\x04\0\0\ +\0\0\x04\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x68\x69\x64\x5f\x6a\x6d\x70\x5f\x74\x61\ +\x62\x6c\x65\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x01\ +\0\0\0\x08\0\0\0\x01\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x70\x72\x6f\x67\ +\x73\x5f\x6d\x61\x70\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x10\0\0\0\x12\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\x47\x50\x4c\0\0\0\0\0\x79\x12\0\0\0\0\0\0\x61\x23\0\0\0\0\0\0\ +\x18\x52\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x85\0\0\0\x0c\0\0\0\xb7\0\0\0\0\0\0\0\x95\ +\0\0\0\0\0\0\0\0\0\0\0\x19\0\0\0\0\0\0\0\xb5\0\0\0\xfa\0\0\0\x05\x6c\0\0\x01\0\ +\0\0\xb5\0\0\0\xe1\x01\0\0\x02\x74\0\0\x05\0\0\0\xb5\0\0\0\xfa\0\0\0\x05\x6c\0\ +\0\x08\0\0\0\x1a\0\0\0\xdd\x01\0\0\0\0\0\0\x1a\0\0\0\x07\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x68\x69\x64\ +\x5f\x74\x61\x69\x6c\x5f\x63\x61\x6c\x6c\0\0\0\0\0\0\0\x1a\0\0\0\0\0\0\0\x08\0\ +\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\x10\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\x01\0\0\0\0\ +\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x10\0\0\0\0\0\0\0\x5f\x5f\x68\ +\x69\x64\x5f\x62\x70\x66\x5f\x74\x61\x69\x6c\x5f\x63\x61\x6c\x6c\0\0\0\0\0\x47\ +\x50\x4c\0\0\0\0\0\x79\x11\x08\0\0\0\0\0\x79\x11\xa8\x01\0\0\0\0\x7b\x1a\xf8\ +\xff\0\0\0\0\xbf\xa2\0\0\0\0\0\0\x07\x02\0\0\xf8\xff\xff\xff\x18\x51\0\0\x01\0\ +\0\0\0\0\0\0\0\0\0\0\x85\0\0\0\x01\0\0\0\x15\0\x09\0\0\0\0\0\x71\x02\0\0\0\0\0\ +\0\x79\xa1\xf8\xff\0\0\0\0\x85\x20\0\0\0\0\0\0\x15\0\x05\0\0\0\0\0\xbf\xa2\0\0\ +\0\0\0\0\x07\x02\0\0\xf8\xff\xff\xff\x18\x51\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\x85\ +\0\0\0\x03\0\0\0\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\0\0\0\0\x23\0\0\0\0\0\0\0\ +\xb5\0\0\0\x3b\x02\0\0\x05\xbc\0\0\x01\0\0\0\xb5\0\0\0\xe7\x06\0\0\x18\xc4\0\0\ +\x04\0\0\0\xb5\0\0\0\0\0\0\0\0\0\0\0\x05\0\0\0\xb5\0\0\0\x0c\x07\0\0\x0a\x98\0\ +\0\x08\0\0\0\xb5\0\0\0\x3d\x07\0\0\x06\x9c\0\0\x09\0\0\0\xb5\0\0\0\x4a\x07\0\0\ +\x26\xa8\0\0\x0a\0\0\0\xb5\0\0\0\x4a\x07\0\0\x20\xa8\0\0\x0b\0\0\0\xb5\0\0\0\ +\x4a\x07\0\0\x06\xa8\0\0\x0c\0\0\0\xb5\0\0\0\x4a\x07\0\0\x06\xa8\0\0\x0e\0\0\0\ +\xb5\0\0\0\0\0\0\0\0\0\0\0\x0f\0\0\0\xb5\0\0\0\x78\x07\0\0\x03\xac\0\0\x12\0\0\ +\0\xb5\0\0\0\x3b\x02\0\0\x05\xbc\0\0\x08\0\0\0\x24\0\0\0\xe2\x06\0\0\0\0\0\0\ +\x1a\0\0\0\x14\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\x68\x69\x64\x5f\x70\x72\x6f\x67\x5f\x72\x65\x6c\x65\x61\ 
+\x73\0\0\0\0\0\x19\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\x10\0\0\0\ +\0\0\0\0\0\0\0\0\x0c\0\0\0\x01\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\x10\0\0\0\0\0\0\0\x62\x70\x66\x5f\x70\x72\x6f\x67\x5f\x72\x65\x6c\x65\ +\x61\x73\x65\0\0\0\0\0\0\0\0\x63\x61\x6c\x6c\x5f\x68\x69\x64\x5f\x62\x70\x66\ +\x5f\x70\x72\x6f\x67\x5f\x72\x65\x6c\x65\x61\x73\x65\0\0\0\0\0\0\0\x47\x50\x4c\ +\0\0\0\0\0\x79\x11\0\0\0\0\0\0\x79\x11\x70\x04\0\0\0\0\x7b\x1a\xf8\xff\0\0\0\0\ +\xbf\xa2\0\0\0\0\0\0\x07\x02\0\0\xf8\xff\xff\xff\x18\x51\0\0\x01\0\0\0\0\0\0\0\ +\0\0\0\0\x85\0\0\0\x01\0\0\0\x15\0\x09\0\0\0\0\0\x71\x02\0\0\0\0\0\0\x79\xa1\ +\xf8\xff\0\0\0\0\x85\x20\0\0\0\0\0\0\x15\0\x05\0\0\0\0\0\xbf\xa2\0\0\0\0\0\0\ +\x07\x02\0\0\xf8\xff\xff\xff\x18\x51\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\x85\0\0\0\ +\x03\0\0\0\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\0\0\0\0\x80\0\0\0\0\0\0\0\xb5\0\ +\0\0\xeb\x07\0\0\x05\xe4\0\0\x01\0\0\0\xb5\0\0\0\xcf\x0b\0\0\x19\xec\0\0\x04\0\ +\0\0\xb5\0\0\0\0\0\0\0\0\0\0\0\x05\0\0\0\xb5\0\0\0\x0c\x07\0\0\x0a\x98\0\0\x08\ +\0\0\0\xb5\0\0\0\x3d\x07\0\0\x06\x9c\0\0\x09\0\0\0\xb5\0\0\0\x4a\x07\0\0\x26\ +\xa8\0\0\x0a\0\0\0\xb5\0\0\0\x4a\x07\0\0\x20\xa8\0\0\x0b\0\0\0\xb5\0\0\0\x4a\ +\x07\0\0\x06\xa8\0\0\x0c\0\0\0\xb5\0\0\0\x4a\x07\0\0\x06\xa8\0\0\x0e\0\0\0\xb5\ +\0\0\0\0\0\0\0\0\0\0\0\x0f\0\0\0\xb5\0\0\0\x78\x07\0\0\x03\xac\0\0\x12\0\0\0\ +\xb5\0\0\0\xeb\x07\0\0\x05\xe4\0\0\x08\0\0\0\x81\0\0\0\xca\x0b\0\0\0\0\0\0\x1a\ +\0\0\0\x14\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\x68\x69\x64\x5f\x66\x72\x65\x65\x5f\x69\x6e\x6f\x64\x65\0\0\ +\0\0\0\0\x19\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\x10\0\0\0\0\0\0\ +\0\0\0\0\0\x0c\0\0\0\x01\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\x10\0\0\0\0\0\0\0\x62\x70\x66\x5f\x66\x72\x65\x65\x5f\x69\x6e\x6f\x64\x65\0\ +\0\x63\x61\x6c\x6c\x5f\x68\x69\x64\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\x5f\x72\ +\x65\x6c\x65\x61\x73\x65\0\0\0\0\0\0\0"; + opts.insns_sz = 3152; + opts.insns = (void *)"\ +\xbf\x16\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\x01\0\0\x78\xff\xff\xff\xb7\x02\0\ +\0\x88\0\0\0\xb7\x03\0\0\0\0\0\0\x85\0\0\0\x71\0\0\0\x05\0\x1d\0\0\0\0\0\x61\ +\xa1\x78\xff\0\0\0\0\xd5\x01\x01\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\x61\xa1\x7c\xff\ +\0\0\0\0\xd5\x01\x01\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\x61\xa1\x80\xff\0\0\0\0\xd5\ +\x01\x01\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\x61\xa1\x84\xff\0\0\0\0\xd5\x01\x01\0\0\ +\0\0\0\x85\0\0\0\xa8\0\0\0\x61\xa1\x88\xff\0\0\0\0\xd5\x01\x01\0\0\0\0\0\x85\0\ +\0\0\xa8\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x61\x01\0\0\0\0\0\0\xd5\x01\ +\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\x18\x60\0\0\0\0\0\0\0\0\ +\0\0\x04\0\0\0\x61\x01\0\0\0\0\0\0\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\ +\x85\0\0\0\xa8\0\0\0\xbf\x70\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\x61\x60\x08\0\0\0\0\ +\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xa0\x26\0\0\x63\x01\0\0\0\0\0\0\x61\x60\x0c\0\0\ +\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x9c\x26\0\0\x63\x01\0\0\0\0\0\0\x79\x60\x10\ +\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x90\x26\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\ +\0\0\0\0\0\0\0\0\0\0\0\x05\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x88\x26\0\0\x7b\x01\ +\0\0\0\0\0\0\xb7\x01\0\0\x12\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x88\x26\0\0\xb7\ +\x03\0\0\x1c\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\xcb\xff\0\0\ +\0\0\x63\x7a\x78\xff\0\0\0\0\x61\x60\x1c\0\0\0\0\0\x15\0\x03\0\0\0\0\0\x18\x61\ +\0\0\0\0\0\0\0\0\0\0\xb4\x26\0\0\x63\x01\0\0\0\0\0\0\xb7\x01\0\0\0\0\0\0\x18\ +\x62\0\0\0\0\0\0\0\0\0\0\xa8\x26\0\0\xb7\x03\0\0\x48\0\0\0\x85\0\0\0\xa6\0\0\0\ 
+\xbf\x07\0\0\0\0\0\0\xc5\x07\xbe\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\x63\x71\0\0\0\0\0\0\x61\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x20\ +\x27\0\0\x63\x01\0\0\0\0\0\0\x61\x60\x2c\0\0\0\0\0\x15\0\x03\0\0\0\0\0\x18\x61\ +\0\0\0\0\0\0\0\0\0\0\xfc\x26\0\0\x63\x01\0\0\0\0\0\0\xb7\x01\0\0\0\0\0\0\x18\ +\x62\0\0\0\0\0\0\0\0\0\0\xf0\x26\0\0\xb7\x03\0\0\x48\0\0\0\x85\0\0\0\xa6\0\0\0\ +\xbf\x07\0\0\0\0\0\0\xc5\x07\xab\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x04\0\ +\0\0\x63\x71\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x38\x27\0\0\x18\x61\0\0\0\ +\0\0\0\0\0\0\0\xd0\x27\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x40\ +\x27\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xc8\x27\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\ +\0\0\0\0\0\0\0\0\0\x78\x27\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x10\x28\0\0\x7b\x01\ +\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x80\x27\0\0\x18\x61\0\0\0\0\0\0\0\0\0\ +\0\x20\x28\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xb0\x27\0\0\x18\ +\x61\0\0\0\0\0\0\0\0\0\0\x40\x28\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x38\x28\0\0\x7b\x01\0\0\0\0\0\0\x61\ +\x60\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xd8\x27\0\0\x63\x01\0\0\0\0\0\0\ +\x61\x60\x0c\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xdc\x27\0\0\x63\x01\0\0\0\0\ +\0\0\x79\x60\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xe0\x27\0\0\x7b\x01\0\0\ +\0\0\0\0\x61\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x08\x28\0\0\x63\ +\x01\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x50\x28\0\0\xb7\x02\0\0\x14\0\0\0\ +\xb7\x03\0\0\x0c\0\0\0\xb7\x04\0\0\0\0\0\0\x85\0\0\0\xa7\0\0\0\xbf\x07\0\0\0\0\ +\0\0\xc5\x07\x72\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xc0\x27\0\0\x63\x70\ +\x6c\0\0\0\0\0\x77\x07\0\0\x20\0\0\0\x63\x70\x70\0\0\0\0\0\xb7\x01\0\0\x05\0\0\ +\0\x18\x62\0\0\0\0\0\0\0\0\0\0\xc0\x27\0\0\xb7\x03\0\0\x8c\0\0\0\x85\0\0\0\xa6\ +\0\0\0\xbf\x07\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x30\x28\0\0\x61\x01\0\0\ +\0\0\0\0\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\xc5\x07\ +\x60\xff\0\0\0\0\x63\x7a\x80\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x68\x28\0\ +\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xf8\x29\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\ +\0\0\0\0\0\0\x70\x28\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xf0\x29\0\0\x7b\x01\0\0\0\ +\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x10\x29\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x38\ +\x2a\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x18\x29\0\0\x18\x61\0\ +\0\0\0\0\0\0\0\0\0\x48\x2a\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\ +\xd8\x29\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x68\x2a\0\0\x7b\x01\0\0\0\0\0\0\x18\ +\x60\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x60\x2a\0\0\x7b\ +\x01\0\0\0\0\0\0\x61\x60\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\0\x2a\0\0\ +\x63\x01\0\0\0\0\0\0\x61\x60\x0c\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x04\x2a\ +\0\0\x63\x01\0\0\0\0\0\0\x79\x60\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x08\ +\x2a\0\0\x7b\x01\0\0\0\0\0\0\x61\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\ +\0\x30\x2a\0\0\x63\x01\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x78\x2a\0\0\xb7\ +\x02\0\0\x11\0\0\0\xb7\x03\0\0\x0c\0\0\0\xb7\x04\0\0\0\0\0\0\x85\0\0\0\xa7\0\0\ +\0\xbf\x07\0\0\0\0\0\0\xc5\x07\x29\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xe8\ +\x29\0\0\x63\x70\x6c\0\0\0\0\0\x77\x07\0\0\x20\0\0\0\x63\x70\x70\0\0\0\0\0\x18\ +\x68\0\0\0\0\0\0\0\0\0\0\xc8\x28\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x90\x2a\0\0\ +\xb7\x02\0\0\x1a\0\0\0\xb7\x03\0\0\x0c\0\0\0\xb7\x04\0\0\0\0\0\0\x85\0\0\0\xa7\ +\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\x1a\xff\0\0\0\0\x75\x07\x03\0\0\0\0\0\x62\ 
+\x08\x04\0\0\0\0\0\x6a\x08\x02\0\0\0\0\0\x05\0\x0a\0\0\0\0\0\x63\x78\x04\0\0\0\ +\0\0\xbf\x79\0\0\0\0\0\0\x77\x09\0\0\x20\0\0\0\x55\x09\x02\0\0\0\0\0\x6a\x08\ +\x02\0\0\0\0\0\x05\0\x04\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\x63\ +\x90\0\0\0\0\0\0\x6a\x08\x02\0\x40\0\0\0\xb7\x01\0\0\x05\0\0\0\x18\x62\0\0\0\0\ +\0\0\0\0\0\0\xe8\x29\0\0\xb7\x03\0\0\x8c\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\0\0\ +\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\x61\x01\0\0\0\0\0\0\xd5\x01\x02\ +\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\ +\x58\x2a\0\0\x61\x01\0\0\0\0\0\0\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\ +\0\0\0\xa8\0\0\0\xc5\x07\xf9\xfe\0\0\0\0\x63\x7a\x84\xff\0\0\0\0\x18\x60\0\0\0\ +\0\0\0\0\0\0\0\xb0\x2a\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x40\x2c\0\0\x7b\x01\0\0\ +\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xb8\x2a\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\ +\x38\x2c\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x58\x2b\0\0\x18\ +\x61\0\0\0\0\0\0\0\0\0\0\x80\x2c\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\ +\0\0\0\x60\x2b\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x90\x2c\0\0\x7b\x01\0\0\0\0\0\0\ +\x18\x60\0\0\0\0\0\0\0\0\0\0\x20\x2c\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xb0\x2c\0\ +\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\ +\0\0\0\0\xa8\x2c\0\0\x7b\x01\0\0\0\0\0\0\x61\x60\x08\0\0\0\0\0\x18\x61\0\0\0\0\ +\0\0\0\0\0\0\x48\x2c\0\0\x63\x01\0\0\0\0\0\0\x61\x60\x0c\0\0\0\0\0\x18\x61\0\0\ +\0\0\0\0\0\0\0\0\x4c\x2c\0\0\x63\x01\0\0\0\0\0\0\x79\x60\x10\0\0\0\0\0\x18\x61\ +\0\0\0\0\0\0\0\0\0\0\x50\x2c\0\0\x7b\x01\0\0\0\0\0\0\x61\xa0\x78\xff\0\0\0\0\ +\x18\x61\0\0\0\0\0\0\0\0\0\0\x78\x2c\0\0\x63\x01\0\0\0\0\0\0\x18\x61\0\0\0\0\0\ +\0\0\0\0\0\xc0\x2c\0\0\xb7\x02\0\0\x0f\0\0\0\xb7\x03\0\0\x0c\0\0\0\xb7\x04\0\0\ +\0\0\0\0\x85\0\0\0\xa7\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\xc2\xfe\0\0\0\0\x18\ +\x60\0\0\0\0\0\0\0\0\0\0\x30\x2c\0\0\x63\x70\x6c\0\0\0\0\0\x77\x07\0\0\x20\0\0\ +\0\x63\x70\x70\0\0\0\0\0\x18\x68\0\0\0\0\0\0\0\0\0\0\x10\x2b\0\0\x18\x61\0\0\0\ +\0\0\0\0\0\0\0\xd0\x2c\0\0\xb7\x02\0\0\x1a\0\0\0\xb7\x03\0\0\x0c\0\0\0\xb7\x04\ +\0\0\0\0\0\0\x85\0\0\0\xa7\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\xb3\xfe\0\0\0\0\ +\x75\x07\x03\0\0\0\0\0\x62\x08\x04\0\0\0\0\0\x6a\x08\x02\0\0\0\0\0\x05\0\x0a\0\ +\0\0\0\0\x63\x78\x04\0\0\0\0\0\xbf\x79\0\0\0\0\0\0\x77\x09\0\0\x20\0\0\0\x55\ +\x09\x02\0\0\0\0\0\x6a\x08\x02\0\0\0\0\0\x05\0\x04\0\0\0\0\0\x18\x60\0\0\0\0\0\ +\0\0\0\0\0\0\x01\0\0\x63\x90\0\0\0\0\0\0\x6a\x08\x02\0\x40\0\0\0\xb7\x01\0\0\ +\x05\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x30\x2c\0\0\xb7\x03\0\0\x8c\0\0\0\x85\0\ +\0\0\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\x61\ +\x01\0\0\0\0\0\0\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\ +\x18\x60\0\0\0\0\0\0\0\0\0\0\xa0\x2c\0\0\x61\x01\0\0\0\0\0\0\xd5\x01\x02\0\0\0\ +\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\xc5\x07\x92\xfe\0\0\0\0\x63\x7a\ +\x88\xff\0\0\0\0\x61\xa1\x78\xff\0\0\0\0\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\ +\0\0\x85\0\0\0\xa8\0\0\0\x61\xa0\x80\xff\0\0\0\0\x63\x06\x38\0\0\0\0\0\x61\xa0\ +\x84\xff\0\0\0\0\x63\x06\x3c\0\0\0\0\0\x61\xa0\x88\xff\0\0\0\0\x63\x06\x40\0\0\ +\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x61\x10\0\0\0\0\0\0\x63\x06\x18\0\0\ +\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x04\0\0\0\x61\x10\0\0\0\0\0\0\x63\x06\x28\0\ +\0\0\0\0\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0"; + err = bpf_load_and_run(&opts); + if (err < 0) + return err; + return 0; +} + +static inline struct entrypoints_bpf * +entrypoints_bpf__open_and_load(void) +{ + struct entrypoints_bpf *skel; + + skel = 
entrypoints_bpf__open(); + if (!skel) + return NULL; + if (entrypoints_bpf__load(skel)) { + entrypoints_bpf__destroy(skel); + return NULL; + } + return skel; +} + +__attribute__((unused)) static void +entrypoints_bpf__assert(struct entrypoints_bpf *s __attribute__((unused))) +{ +#ifdef __cplusplus +#define _Static_assert static_assert +#endif +#ifdef __cplusplus +#undef _Static_assert +#endif +} + +#endif /* __ENTRYPOINTS_BPF_SKEL_H__ */ diff --git a/drivers/hid/bpf/hid_bpf_dispatch.c b/drivers/hid/bpf/hid_bpf_dispatch.c new file mode 100644 index 000000000000..600b00fdf6c1 --- /dev/null +++ b/drivers/hid/bpf/hid_bpf_dispatch.c @@ -0,0 +1,223 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* + * HID-BPF support for Linux + * + * Copyright (c) 2022 Benjamin Tissoires + */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt +#include <linux/bitops.h> +#include <linux/btf.h> +#include <linux/btf_ids.h> +#include <linux/filter.h> +#include <linux/hid.h> +#include <linux/hid_bpf.h> +#include <linux/init.h> +#include <linux/kfifo.h> +#include <linux/module.h> +#include <linux/workqueue.h> +#include "hid_bpf_dispatch.h" +#include "entrypoints/entrypoints.lskel.h" + +struct hid_bpf_ops *hid_bpf_ops; +EXPORT_SYMBOL(hid_bpf_ops); + +/** + * hid_bpf_device_event - Called whenever an event is coming in from the device + * + * @ctx: The HID-BPF context + * + * @return %0 on success and keep processing; a negative error code to interrupt + * the processing of this event + * + * Declare an %fmod_ret tracing bpf program to this function and attach this + * program through hid_bpf_attach_prog() to have this helper called for + * any incoming event from the device itself. + * + * The function is called while on IRQ context, so we can not sleep. + */ +/* never used by the kernel but declared so we can load and attach a tracepoint */ +__weak noinline int hid_bpf_device_event(struct hid_bpf_ctx *ctx) +{ + return 0; +} +ALLOW_ERROR_INJECTION(hid_bpf_device_event, ERRNO); + +int +dispatch_hid_bpf_device_event(struct hid_device *hdev, enum hid_report_type type, u8 *data, + u32 size, int interrupt) +{ + struct hid_bpf_ctx_kern ctx_kern = { + .ctx = { + .hid = hdev, + .report_type = type, + .size = size, + }, + .data = data, + }; + + if (type >= HID_REPORT_TYPES) + return -EINVAL; + + return hid_bpf_prog_run(hdev, HID_BPF_PROG_TYPE_DEVICE_EVENT, &ctx_kern); +} +EXPORT_SYMBOL_GPL(dispatch_hid_bpf_device_event); + +/** + * hid_bpf_get_data - Get the kernel memory pointer associated with the context @ctx + * + * @ctx: The HID-BPF context + * @offset: The offset within the memory + * @rdwr_buf_size: the const size of the buffer + * + * @returns %NULL on error, an %__u8 memory pointer on success + */ +noinline __u8 * +hid_bpf_get_data(struct hid_bpf_ctx *ctx, unsigned int offset, const size_t rdwr_buf_size) +{ + struct hid_bpf_ctx_kern *ctx_kern; + + if (!ctx) + return NULL; + + ctx_kern = container_of(ctx, struct hid_bpf_ctx_kern, ctx); + + if (rdwr_buf_size + offset > ctx->size) + return NULL; + + return ctx_kern->data + offset; +} + +/* + * The following set contains all functions we agree BPF programs + * can use. 
+ */ +BTF_SET8_START(hid_bpf_kfunc_ids) +BTF_ID_FLAGS(func, call_hid_bpf_prog_release) +BTF_ID_FLAGS(func, hid_bpf_get_data, KF_RET_NULL) +BTF_SET8_END(hid_bpf_kfunc_ids) + +static const struct btf_kfunc_id_set hid_bpf_kfunc_set = { + .owner = THIS_MODULE, + .set = &hid_bpf_kfunc_ids, +}; + +static int device_match_id(struct device *dev, const void *id) +{ + struct hid_device *hdev = to_hid_device(dev); + + return hdev->id == *(int *)id; +} + +/** + * hid_bpf_attach_prog - Attach the given @prog_fd to the given HID device + * + * @hid_id: the system unique identifier of the HID device + * @prog_fd: an fd in the user process representing the program to attach + * @flags: any logical OR combination of &enum hid_bpf_attach_flags + * + * @returns %0 on success, an error code otherwise. + */ +/* called from syscall */ +noinline int +hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, __u32 flags) +{ + struct hid_device *hdev; + struct device *dev; + int prog_type = hid_bpf_get_prog_attach_type(prog_fd); + + if (!hid_bpf_ops) + return -EINVAL; + + if (prog_type < 0) + return prog_type; + + if (prog_type >= HID_BPF_PROG_TYPE_MAX) + return -EINVAL; + + if ((flags & ~HID_BPF_FLAG_MASK)) + return -EINVAL; + + dev = bus_find_device(hid_bpf_ops->bus_type, NULL, &hid_id, device_match_id); + if (!dev) + return -EINVAL; + + hdev = to_hid_device(dev); + + return __hid_bpf_attach_prog(hdev, prog_type, prog_fd, flags); +} + +/* for syscall HID-BPF */ +BTF_SET8_START(hid_bpf_syscall_kfunc_ids) +BTF_ID_FLAGS(func, hid_bpf_attach_prog) +BTF_SET8_END(hid_bpf_syscall_kfunc_ids) + +static const struct btf_kfunc_id_set hid_bpf_syscall_kfunc_set = { + .owner = THIS_MODULE, + .set = &hid_bpf_syscall_kfunc_ids, +}; + +void hid_bpf_destroy_device(struct hid_device *hdev) +{ + if (!hdev) + return; + + /* mark the device as destroyed in bpf so we don't reattach it */ + hdev->bpf.destroyed = true; + + __hid_bpf_destroy_device(hdev); +} +EXPORT_SYMBOL_GPL(hid_bpf_destroy_device); + +void hid_bpf_device_init(struct hid_device *hdev) +{ + spin_lock_init(&hdev->bpf.progs_lock); +} +EXPORT_SYMBOL_GPL(hid_bpf_device_init); + +static int __init hid_bpf_init(void) +{ + int err; + + /* Note: if we exit with an error any time here, we would entirely break HID, which + * is probably not something we want. So we log an error and return success. + * + * This is not a big deal: the syscall allowing to attach a BPF program to a HID device + * will not be available, so nobody will be able to use the functionality. 
+ */ + + err = register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &hid_bpf_kfunc_set); + if (err) { + pr_warn("error while setting HID BPF tracing kfuncs: %d", err); + return 0; + } + + err = hid_bpf_preload_skel(); + if (err) { + pr_warn("error while preloading HID BPF dispatcher: %d", err); + return 0; + } + + /* register syscalls after we are sure we can load our preloaded bpf program */ + err = register_btf_kfunc_id_set(BPF_PROG_TYPE_SYSCALL, &hid_bpf_syscall_kfunc_set); + if (err) { + pr_warn("error while setting HID BPF syscall kfuncs: %d", err); + return 0; + } + + return 0; +} + +static void __exit hid_bpf_exit(void) +{ + /* HID depends on us, so if we hit that code, we are guaranteed that hid + * has been removed and thus we do not need to clear the HID devices + */ + hid_bpf_free_links_and_skel(); +} + +late_initcall(hid_bpf_init); +module_exit(hid_bpf_exit); +MODULE_AUTHOR("Benjamin Tissoires"); +MODULE_LICENSE("GPL"); diff --git a/drivers/hid/bpf/hid_bpf_dispatch.h b/drivers/hid/bpf/hid_bpf_dispatch.h new file mode 100644 index 000000000000..98c378e18b2b --- /dev/null +++ b/drivers/hid/bpf/hid_bpf_dispatch.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +#ifndef _BPF_HID_BPF_DISPATCH_H +#define _BPF_HID_BPF_DISPATCH_H + +#include <linux/hid.h> + +struct hid_bpf_ctx_kern { + struct hid_bpf_ctx ctx; + u8 *data; +}; + +int hid_bpf_preload_skel(void); +void hid_bpf_free_links_and_skel(void); +int hid_bpf_get_prog_attach_type(int prog_fd); +int __hid_bpf_attach_prog(struct hid_device *hdev, enum hid_bpf_prog_type prog_type, int prog_fd, + __u32 flags); +void __hid_bpf_destroy_device(struct hid_device *hdev); +int hid_bpf_prog_run(struct hid_device *hdev, enum hid_bpf_prog_type type, + struct hid_bpf_ctx_kern *ctx_kern); + +struct bpf_prog; + +/* HID-BPF internal kfuncs API */ +bool call_hid_bpf_prog_release(u64 prog, int table_cnt); + +#endif diff --git a/drivers/hid/bpf/hid_bpf_jmp_table.c b/drivers/hid/bpf/hid_bpf_jmp_table.c new file mode 100644 index 000000000000..05225ff3cc27 --- /dev/null +++ b/drivers/hid/bpf/hid_bpf_jmp_table.c @@ -0,0 +1,568 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* + * HID-BPF support for Linux + * + * Copyright (c) 2022 Benjamin Tissoires + */ + +#include <linux/bitops.h> +#include <linux/btf.h> +#include <linux/btf_ids.h> +#include <linux/circ_buf.h> +#include <linux/filter.h> +#include <linux/hid.h> +#include <linux/hid_bpf.h> +#include <linux/init.h> +#include <linux/module.h> +#include <linux/workqueue.h> +#include "hid_bpf_dispatch.h" +#include "entrypoints/entrypoints.lskel.h" + +#define HID_BPF_MAX_PROGS 1024 /* keep this in sync with preloaded bpf, + * needs to be a power of 2 as we use it as + * a circular buffer + */ + +#define NEXT(idx) (((idx) + 1) & (HID_BPF_MAX_PROGS - 1)) +#define PREV(idx) (((idx) - 1) & (HID_BPF_MAX_PROGS - 1)) + +/* + * represents one attached program stored in the hid jump table + */ +struct hid_bpf_prog_entry { + struct bpf_prog *prog; + struct hid_device *hdev; + enum hid_bpf_prog_type type; + u16 idx; +}; + +struct hid_bpf_jmp_table { + struct bpf_map *map; + struct bpf_map *prog_keys; + struct hid_bpf_prog_entry entries[HID_BPF_MAX_PROGS]; /* compacted list, circular buffer */ + int tail, head; + struct bpf_prog *progs[HID_BPF_MAX_PROGS]; /* idx -> progs mapping */ + unsigned long enabled[BITS_TO_LONGS(HID_BPF_MAX_PROGS)]; +}; + +#define FOR_ENTRIES(__i, __start, __end) \ + for (__i = __start; CIRC_CNT(__end, __i, HID_BPF_MAX_PROGS); __i = NEXT(__i)) + +static struct hid_bpf_jmp_table 
jmp_table; + +static DEFINE_MUTEX(hid_bpf_attach_lock); /* held when attaching/detaching programs */ + +static void hid_bpf_release_progs(struct work_struct *work); + +static DECLARE_WORK(release_work, hid_bpf_release_progs); + +BTF_ID_LIST(hid_bpf_btf_ids) +BTF_ID(func, hid_bpf_device_event) /* HID_BPF_PROG_TYPE_DEVICE_EVENT */ + +static int hid_bpf_max_programs(enum hid_bpf_prog_type type) +{ + switch (type) { + case HID_BPF_PROG_TYPE_DEVICE_EVENT: + return HID_BPF_MAX_PROGS_PER_DEV; + default: + return -EINVAL; + } +} + +static int hid_bpf_program_count(struct hid_device *hdev, + struct bpf_prog *prog, + enum hid_bpf_prog_type type) +{ + int i, n = 0; + + if (type >= HID_BPF_PROG_TYPE_MAX) + return -EINVAL; + + FOR_ENTRIES(i, jmp_table.tail, jmp_table.head) { + struct hid_bpf_prog_entry *entry = &jmp_table.entries[i]; + + if (type != HID_BPF_PROG_TYPE_UNDEF && entry->type != type) + continue; + + if (hdev && entry->hdev != hdev) + continue; + + if (prog && entry->prog != prog) + continue; + + n++; + } + + return n; +} + +__weak noinline int __hid_bpf_tail_call(struct hid_bpf_ctx *ctx) +{ + return 0; +} +ALLOW_ERROR_INJECTION(__hid_bpf_tail_call, ERRNO); + +int hid_bpf_prog_run(struct hid_device *hdev, enum hid_bpf_prog_type type, + struct hid_bpf_ctx_kern *ctx_kern) +{ + struct hid_bpf_prog_list *prog_list; + int i, idx, err = 0; + + rcu_read_lock(); + prog_list = rcu_dereference(hdev->bpf.progs[type]); + + if (!prog_list) + goto out_unlock; + + for (i = 0; i < prog_list->prog_cnt; i++) { + idx = prog_list->prog_idx[i]; + + if (!test_bit(idx, jmp_table.enabled)) + continue; + + ctx_kern->ctx.index = idx; + err = __hid_bpf_tail_call(&ctx_kern->ctx); + if (err) + break; + } + + out_unlock: + rcu_read_unlock(); + + return err; +} + +/* + * assign the list of programs attached to a given hid device. + */ +static void __hid_bpf_set_hdev_progs(struct hid_device *hdev, struct hid_bpf_prog_list *new_list, + enum hid_bpf_prog_type type) +{ + struct hid_bpf_prog_list *old_list; + + spin_lock(&hdev->bpf.progs_lock); + old_list = rcu_dereference_protected(hdev->bpf.progs[type], + lockdep_is_held(&hdev->bpf.progs_lock)); + rcu_assign_pointer(hdev->bpf.progs[type], new_list); + spin_unlock(&hdev->bpf.progs_lock); + synchronize_rcu(); + + kfree(old_list); +} + +/* + * allocate and populate the list of programs attached to a given hid device. + * + * Must be called under lock. 
+ */ +static int hid_bpf_populate_hdev(struct hid_device *hdev, enum hid_bpf_prog_type type) +{ + struct hid_bpf_prog_list *new_list; + int i; + + if (type >= HID_BPF_PROG_TYPE_MAX || !hdev) + return -EINVAL; + + if (hdev->bpf.destroyed) + return 0; + + new_list = kzalloc(sizeof(*new_list), GFP_KERNEL); + if (!new_list) + return -ENOMEM; + + FOR_ENTRIES(i, jmp_table.tail, jmp_table.head) { + struct hid_bpf_prog_entry *entry = &jmp_table.entries[i]; + + if (entry->type == type && entry->hdev == hdev && + test_bit(entry->idx, jmp_table.enabled)) + new_list->prog_idx[new_list->prog_cnt++] = entry->idx; + } + + __hid_bpf_set_hdev_progs(hdev, new_list, type); + + return 0; +} + +static void __hid_bpf_do_release_prog(int map_fd, unsigned int idx) +{ + skel_map_delete_elem(map_fd, &idx); + jmp_table.progs[idx] = NULL; +} + +static void hid_bpf_release_progs(struct work_struct *work) +{ + int i, j, n, map_fd = -1; + + if (!jmp_table.map) + return; + + /* retrieve a fd of our prog_array map in BPF */ + map_fd = skel_map_get_fd_by_id(jmp_table.map->id); + if (map_fd < 0) + return; + + mutex_lock(&hid_bpf_attach_lock); /* protects against attaching new programs */ + + /* detach unused progs from HID devices */ + FOR_ENTRIES(i, jmp_table.tail, jmp_table.head) { + struct hid_bpf_prog_entry *entry = &jmp_table.entries[i]; + enum hid_bpf_prog_type type; + struct hid_device *hdev; + + if (test_bit(entry->idx, jmp_table.enabled)) + continue; + + /* we have an attached prog */ + if (entry->hdev) { + hdev = entry->hdev; + type = entry->type; + + hid_bpf_populate_hdev(hdev, type); + + /* mark all other disabled progs from hdev of the given type as detached */ + FOR_ENTRIES(j, i, jmp_table.head) { + struct hid_bpf_prog_entry *next; + + next = &jmp_table.entries[j]; + + if (test_bit(next->idx, jmp_table.enabled)) + continue; + + if (next->hdev == hdev && next->type == type) + next->hdev = NULL; + } + } + } + + /* remove all unused progs from the jump table */ + FOR_ENTRIES(i, jmp_table.tail, jmp_table.head) { + struct hid_bpf_prog_entry *entry = &jmp_table.entries[i]; + + if (test_bit(entry->idx, jmp_table.enabled)) + continue; + + if (entry->prog) + __hid_bpf_do_release_prog(map_fd, entry->idx); + } + + /* compact the entry list */ + n = jmp_table.tail; + FOR_ENTRIES(i, jmp_table.tail, jmp_table.head) { + struct hid_bpf_prog_entry *entry = &jmp_table.entries[i]; + + if (!test_bit(entry->idx, jmp_table.enabled)) + continue; + + jmp_table.entries[n] = jmp_table.entries[i]; + n = NEXT(n); + } + + jmp_table.head = n; + + mutex_unlock(&hid_bpf_attach_lock); + + if (map_fd >= 0) + close_fd(map_fd); +} + +static void hid_bpf_release_prog_at(int idx) +{ + int map_fd = -1; + + /* retrieve a fd of our prog_array map in BPF */ + map_fd = skel_map_get_fd_by_id(jmp_table.map->id); + if (map_fd < 0) + return; + + __hid_bpf_do_release_prog(map_fd, idx); + + close(map_fd); +} + +/* + * Insert the given BPF program represented by its fd in the jmp table. + * Returns the index in the jump table or a negative error. 
+ */ +static int hid_bpf_insert_prog(int prog_fd, struct bpf_prog *prog) +{ + int i, cnt, index = -1, map_fd = -1, progs_map_fd = -1, err = -EINVAL; + + /* retrieve a fd of our prog_array map in BPF */ + map_fd = skel_map_get_fd_by_id(jmp_table.map->id); + /* take an fd for the table of progs we monitor with SEC("fexit/bpf_prog_release") */ + progs_map_fd = skel_map_get_fd_by_id(jmp_table.prog_keys->id); + + if (map_fd < 0 || progs_map_fd < 0) { + err = -EINVAL; + goto out; + } + + cnt = 0; + /* find the first available index in the jmp_table + * and count how many time this program has been inserted + */ + for (i = 0; i < HID_BPF_MAX_PROGS; i++) { + if (!jmp_table.progs[i] && index < 0) { + /* mark the index as used */ + jmp_table.progs[i] = prog; + index = i; + __set_bit(i, jmp_table.enabled); + cnt++; + } else { + if (jmp_table.progs[i] == prog) + cnt++; + } + } + if (index < 0) { + err = -ENOMEM; + goto out; + } + + /* insert the program in the jump table */ + err = skel_map_update_elem(map_fd, &index, &prog_fd, 0); + if (err) + goto out; + + /* insert the program in the prog list table */ + err = skel_map_update_elem(progs_map_fd, &prog, &cnt, 0); + if (err) + goto out; + + /* return the index */ + err = index; + + out: + if (err < 0) + __hid_bpf_do_release_prog(map_fd, index); + if (map_fd >= 0) + close_fd(map_fd); + if (progs_map_fd >= 0) + close_fd(progs_map_fd); + return err; +} + +int hid_bpf_get_prog_attach_type(int prog_fd) +{ + struct bpf_prog *prog = NULL; + int i; + int prog_type = HID_BPF_PROG_TYPE_UNDEF; + + prog = bpf_prog_get(prog_fd); + if (IS_ERR(prog)) + return PTR_ERR(prog); + + for (i = 0; i < HID_BPF_PROG_TYPE_MAX; i++) { + if (hid_bpf_btf_ids[i] == prog->aux->attach_btf_id) { + prog_type = i; + break; + } + } + + bpf_prog_put(prog); + + return prog_type; +} + +/* called from syscall */ +noinline int +__hid_bpf_attach_prog(struct hid_device *hdev, enum hid_bpf_prog_type prog_type, + int prog_fd, __u32 flags) +{ + struct bpf_prog *prog = NULL; + struct hid_bpf_prog_entry *prog_entry; + int cnt, err = -EINVAL, prog_idx = -1; + + /* take a ref on the prog itself */ + prog = bpf_prog_get(prog_fd); + if (IS_ERR(prog)) + return PTR_ERR(prog); + + mutex_lock(&hid_bpf_attach_lock); + + /* do not attach too many programs to a given HID device */ + cnt = hid_bpf_program_count(hdev, NULL, prog_type); + if (cnt < 0) { + err = cnt; + goto out_unlock; + } + + if (cnt >= hid_bpf_max_programs(prog_type)) { + err = -E2BIG; + goto out_unlock; + } + + prog_idx = hid_bpf_insert_prog(prog_fd, prog); + /* if the jmp table is full, abort */ + if (prog_idx < 0) { + err = prog_idx; + goto out_unlock; + } + + if (flags & HID_BPF_FLAG_INSERT_HEAD) { + /* take the previous prog_entry slot */ + jmp_table.tail = PREV(jmp_table.tail); + prog_entry = &jmp_table.entries[jmp_table.tail]; + } else { + /* take the next prog_entry slot */ + prog_entry = &jmp_table.entries[jmp_table.head]; + jmp_table.head = NEXT(jmp_table.head); + } + + /* we steal the ref here */ + prog_entry->prog = prog; + prog_entry->idx = prog_idx; + prog_entry->hdev = hdev; + prog_entry->type = prog_type; + + /* finally store the index in the device list */ + err = hid_bpf_populate_hdev(hdev, prog_type); + if (err) + hid_bpf_release_prog_at(prog_idx); + + out_unlock: + mutex_unlock(&hid_bpf_attach_lock); + + /* we only use prog as a key in the various tables, so we don't need to actually + * increment the ref count. 
+ */ + bpf_prog_put(prog); + + return err; +} + +void __hid_bpf_destroy_device(struct hid_device *hdev) +{ + int type, i; + struct hid_bpf_prog_list *prog_list; + + rcu_read_lock(); + + for (type = 0; type < HID_BPF_PROG_TYPE_MAX; type++) { + prog_list = rcu_dereference(hdev->bpf.progs[type]); + + if (!prog_list) + continue; + + for (i = 0; i < prog_list->prog_cnt; i++) + __clear_bit(prog_list->prog_idx[i], jmp_table.enabled); + } + + rcu_read_unlock(); + + for (type = 0; type < HID_BPF_PROG_TYPE_MAX; type++) + __hid_bpf_set_hdev_progs(hdev, NULL, type); + + /* schedule release of all detached progs */ + schedule_work(&release_work); +} + +noinline bool +call_hid_bpf_prog_release(u64 prog_key, int table_cnt) +{ + /* compare with how many refs are left in the bpf program */ + struct bpf_prog *prog = (struct bpf_prog *)prog_key; + int idx; + + if (!prog) + return false; + + if (atomic64_read(&prog->aux->refcnt) != table_cnt) + return false; + + /* we don't need locking here because the entries in the progs table + * are stable: + * if there are other users (and the progs entries might change), we + * would return in the statement above. + */ + for (idx = 0; idx < HID_BPF_MAX_PROGS; idx++) { + if (jmp_table.progs[idx] == prog) { + __clear_bit(idx, jmp_table.enabled); + break; + } + } + if (idx >= HID_BPF_MAX_PROGS) { + /* should never happen if we get our refcount right */ + idx = -1; + } + + /* schedule release of all detached progs */ + schedule_work(&release_work); + return idx >= 0; +} + +#define HID_BPF_PROGS_COUNT 3 + +static struct bpf_link *links[HID_BPF_PROGS_COUNT]; +static struct entrypoints_bpf *skel; + +void hid_bpf_free_links_and_skel(void) +{ + int i; + + /* the following is enough to release all programs attached to hid */ + if (jmp_table.prog_keys) + bpf_map_put_with_uref(jmp_table.prog_keys); + + if (jmp_table.map) + bpf_map_put_with_uref(jmp_table.map); + + for (i = 0; i < ARRAY_SIZE(links); i++) { + if (!IS_ERR_OR_NULL(links[i])) + bpf_link_put(links[i]); + } + entrypoints_bpf__destroy(skel); +} + +#define ATTACH_AND_STORE_LINK(__name) do { \ + err = entrypoints_bpf__##__name##__attach(skel); \ + if (err) \ + goto out; \ + \ + links[idx] = bpf_link_get_from_fd(skel->links.__name##_fd); \ + if (IS_ERR(links[idx])) { \ + err = PTR_ERR(links[idx]); \ + goto out; \ + } \ + \ + /* Avoid taking over stdin/stdout/stderr of init process. Zeroing out \ + * makes skel_closenz() a no-op later in iterators_bpf__destroy(). 
\ + */ \ + close_fd(skel->links.__name##_fd); \ + skel->links.__name##_fd = 0; \ + idx++; \ +} while (0) + +int hid_bpf_preload_skel(void) +{ + int err, idx = 0; + + skel = entrypoints_bpf__open(); + if (!skel) + return -ENOMEM; + + err = entrypoints_bpf__load(skel); + if (err) + goto out; + + jmp_table.map = bpf_map_get_with_uref(skel->maps.hid_jmp_table.map_fd); + if (IS_ERR(jmp_table.map)) { + err = PTR_ERR(jmp_table.map); + goto out; + } + + jmp_table.prog_keys = bpf_map_get_with_uref(skel->maps.progs_map.map_fd); + if (IS_ERR(jmp_table.prog_keys)) { + err = PTR_ERR(jmp_table.prog_keys); + goto out; + } + + ATTACH_AND_STORE_LINK(hid_tail_call); + ATTACH_AND_STORE_LINK(hid_prog_release); + ATTACH_AND_STORE_LINK(hid_free_inode); + + return 0; +out: + hid_bpf_free_links_and_skel(); + return err; +} diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c index aff37d6f587c..0d0bd8fc69c7 100644 --- a/drivers/hid/hid-core.c +++ b/drivers/hid/hid-core.c @@ -2040,6 +2040,10 @@ int hid_input_report(struct hid_device *hid, enum hid_report_type type, u8 *data report_enum = hid->report_enum + type; hdrv = hid->driver;
+ ret = dispatch_hid_bpf_device_event(hid, type, data, size, interrupt); + if (ret) + goto unlock; + if (!size) { dbg_hid("empty report\n"); ret = -1; @@ -2789,6 +2793,8 @@ struct hid_device *hid_allocate_device(void) sema_init(&hdev->driver_input_lock, 1); mutex_init(&hdev->ll_open_lock);
+ hid_bpf_device_init(hdev); + return hdev; } EXPORT_SYMBOL_GPL(hid_allocate_device); @@ -2815,6 +2821,7 @@ static void hid_remove_device(struct hid_device *hdev) */ void hid_destroy_device(struct hid_device *hdev) { + hid_bpf_destroy_device(hdev); hid_remove_device(hdev); put_device(&hdev->dev); } @@ -2901,6 +2908,11 @@ int hid_check_keys_pressed(struct hid_device *hid) } EXPORT_SYMBOL_GPL(hid_check_keys_pressed);
+static struct hid_bpf_ops hid_ops = { + .owner = THIS_MODULE, + .bus_type = &hid_bus_type, +}; + static int __init hid_init(void) { int ret; @@ -2915,6 +2927,8 @@ static int __init hid_init(void) goto err; }
+ hid_bpf_ops = &hid_ops; + ret = hidraw_init(); if (ret) goto err_bus; @@ -2930,6 +2944,7 @@ static int __init hid_init(void)
static void __exit hid_exit(void) { + hid_bpf_ops = NULL; hid_debug_exit(); hidraw_exit(); bus_unregister(&hid_bus_type); diff --git a/include/linux/hid.h b/include/linux/hid.h index 8677ae38599e..cd3c52fae7b1 100644 --- a/include/linux/hid.h +++ b/include/linux/hid.h @@ -26,6 +26,7 @@ #include <linux/mutex.h> #include <linux/power_supply.h> #include <uapi/linux/hid.h> +#include <linux/hid_bpf.h>
/* * We parse each description item into this structure. Short items data @@ -651,6 +652,10 @@ struct hid_device { /* device report descriptor */ wait_queue_head_t debug_wait;
unsigned int id; /* system unique id */ + +#ifdef CONFIG_BPF + struct hid_bpf bpf; /* hid-bpf data */ +#endif /* CONFIG_BPF */ };
#define to_hid_device(pdev) \ diff --git a/include/linux/hid_bpf.h b/include/linux/hid_bpf.h new file mode 100644 index 000000000000..5d53b12c6ea0 --- /dev/null +++ b/include/linux/hid_bpf.h @@ -0,0 +1,102 @@ +/* SPDX-License-Identifier: GPL-2.0+ */ + +#ifndef __HID_BPF_H +#define __HID_BPF_H + +#include <linux/spinlock.h> +#include <uapi/linux/hid.h> +#include <uapi/linux/hid_bpf.h> + +struct hid_device; + +/* + * The following is the HID BPF API. + * + * It should be treated as UAPI, so extra care is required + * when making change to this file. + */ + +/** + * struct hid_bpf_ctx - User accessible data for all HID programs + * + * ``data`` is not directly accessible from the context. We need to issue + * a call to ``hid_bpf_get_data()`` in order to get a pointer to that field. + * + * All of these fields are currently read-only. + * + * @index: program index in the jump table. No special meaning (a smaller index + * doesn't mean the program will be executed before another program with + * a bigger index). + * @hid: the ``struct hid_device`` representing the device itself + * @report_type: used for ``hid_bpf_device_event()`` + * @size: Valid data in the data field. + * + * Programs can get the available valid size in data by fetching this field. + */ +struct hid_bpf_ctx { + __u32 index; + const struct hid_device *hid; + enum hid_report_type report_type; + __s32 size; +}; + +/* Following functions are tracepoints that BPF programs can attach to */ +int hid_bpf_device_event(struct hid_bpf_ctx *ctx); + +/* Following functions are kfunc that we export to BPF programs */ +/* only available in tracing */ +__u8 *hid_bpf_get_data(struct hid_bpf_ctx *ctx, unsigned int offset, const size_t __sz); + +/* only available in syscall */ +int hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, __u32 flags); + +/* + * Below is HID internal + */ + +/* internal function to call eBPF programs, not to be used by anybody */ +int __hid_bpf_tail_call(struct hid_bpf_ctx *ctx); + +#define HID_BPF_MAX_PROGS_PER_DEV 64 +#define HID_BPF_FLAG_MASK (((HID_BPF_FLAG_MAX - 1) << 1) - 1) + +/* types of HID programs to attach to */ +enum hid_bpf_prog_type { + HID_BPF_PROG_TYPE_UNDEF = -1, + HID_BPF_PROG_TYPE_DEVICE_EVENT, /* an event is emitted from the device */ + HID_BPF_PROG_TYPE_MAX, +}; + +struct hid_bpf_ops { + struct module *owner; + struct bus_type *bus_type; +}; + +extern struct hid_bpf_ops *hid_bpf_ops; + +struct hid_bpf_prog_list { + u16 prog_idx[HID_BPF_MAX_PROGS_PER_DEV]; + u8 prog_cnt; +}; + +/* stored in each device */ +struct hid_bpf { + struct hid_bpf_prog_list __rcu *progs[HID_BPF_PROG_TYPE_MAX]; /* attached BPF progs */ + bool destroyed; /* prevents the assignment of any progs */ + + spinlock_t progs_lock; /* protects RCU update of progs */ +}; + +#ifdef CONFIG_HID_BPF +int dispatch_hid_bpf_device_event(struct hid_device *hid, enum hid_report_type type, u8 *data, + u32 size, int interrupt); +void hid_bpf_destroy_device(struct hid_device *hid); +void hid_bpf_device_init(struct hid_device *hid); +#else /* CONFIG_HID_BPF */ +static inline int dispatch_hid_bpf_device_event(struct hid_device *hid, enum hid_report_type type, u8 *data, + u32 size, int interrupt) { return 0; } +static inline void hid_bpf_destroy_device(struct hid_device *hid) {} +static inline void hid_bpf_device_init(struct hid_device *hid) {} +#endif /* CONFIG_HID_BPF */ + +#endif /* __HID_BPF_H */ diff --git a/include/uapi/linux/hid_bpf.h b/include/uapi/linux/hid_bpf.h new file mode 100644 index 000000000000..ba8caf9b60ee --- /dev/null +++ 
b/include/uapi/linux/hid_bpf.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ + +#ifndef _UAPI_HID_BPF_H +#define _UAPI_HID_BPF_H + +#include <linux/const.h> +#include <linux/hid.h> + +/** + * enum hid_bpf_attach_flags - flags used when attaching a HIF-BPF program + * + * @HID_BPF_FLAG_NONE: no specific flag is used, the kernel choses where to + * insert the program + * @HID_BPF_FLAG_INSERT_HEAD: insert the given program before any other program + * currently attached to the device. This doesn't + * guarantee that this program will always be first + * @HID_BPF_FLAG_MAX: sentinel value, not to be used by the callers + */ +enum hid_bpf_attach_flags { + HID_BPF_FLAG_NONE = 0, + HID_BPF_FLAG_INSERT_HEAD = _BITUL(0), + HID_BPF_FLAG_MAX, +}; + +#endif /* _UAPI_HID_BPF_H */ diff --git a/tools/include/uapi/linux/hid.h b/tools/include/uapi/linux/hid.h new file mode 100644 index 000000000000..3e63bea3b3e2 --- /dev/null +++ b/tools/include/uapi/linux/hid.h @@ -0,0 +1,62 @@ +/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */ +/* + * Copyright (c) 1999 Andreas Gal + * Copyright (c) 2000-2001 Vojtech Pavlik + * Copyright (c) 2006-2007 Jiri Kosina + */ +#ifndef _UAPI__HID_H +#define _UAPI__HID_H + + + +/* + * USB HID (Human Interface Device) interface class code + */ + +#define USB_INTERFACE_CLASS_HID 3 + +/* + * USB HID interface subclass and protocol codes + */ + +#define USB_INTERFACE_SUBCLASS_BOOT 1 +#define USB_INTERFACE_PROTOCOL_KEYBOARD 1 +#define USB_INTERFACE_PROTOCOL_MOUSE 2 + +/* + * HID report types --- Ouch! HID spec says 1 2 3! + */ + +enum hid_report_type { + HID_INPUT_REPORT = 0, + HID_OUTPUT_REPORT = 1, + HID_FEATURE_REPORT = 2, + + HID_REPORT_TYPES, +}; + +/* + * HID class requests + */ + +enum hid_class_request { + HID_REQ_GET_REPORT = 0x01, + HID_REQ_GET_IDLE = 0x02, + HID_REQ_GET_PROTOCOL = 0x03, + HID_REQ_SET_REPORT = 0x09, + HID_REQ_SET_IDLE = 0x0A, + HID_REQ_SET_PROTOCOL = 0x0B, +}; + +/* + * HID class descriptor types + */ + +#define HID_DT_HID (USB_TYPE_CLASS | 0x01) +#define HID_DT_REPORT (USB_TYPE_CLASS | 0x02) +#define HID_DT_PHYSICAL (USB_TYPE_CLASS | 0x03) + +#define HID_MAX_DESCRIPTOR_SIZE 4096 + + +#endif /* _UAPI__HID_H */ diff --git a/tools/include/uapi/linux/hid_bpf.h b/tools/include/uapi/linux/hid_bpf.h new file mode 100644 index 000000000000..ba8caf9b60ee --- /dev/null +++ b/tools/include/uapi/linux/hid_bpf.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ + +#ifndef _UAPI_HID_BPF_H +#define _UAPI_HID_BPF_H + +#include <linux/const.h> +#include <linux/hid.h> + +/** + * enum hid_bpf_attach_flags - flags used when attaching a HIF-BPF program + * + * @HID_BPF_FLAG_NONE: no specific flag is used, the kernel choses where to + * insert the program + * @HID_BPF_FLAG_INSERT_HEAD: insert the given program before any other program + * currently attached to the device. This doesn't + * guarantee that this program will always be first + * @HID_BPF_FLAG_MAX: sentinel value, not to be used by the callers + */ +enum hid_bpf_attach_flags { + HID_BPF_FLAG_NONE = 0, + HID_BPF_FLAG_INSERT_HEAD = _BITUL(0), + HID_BPF_FLAG_MAX, +}; + +#endif /* _UAPI_HID_BPF_H */
The test is pretty basic:
- create a virtual uhid device that no userspace will like (to not mess
  up the running system)
- attach a BPF prog to it
- open the matching hidraw node
- inject one event and check (see the sketch below):
  * that the BPF program can do something on the event stream
  * can modify the event stream
- add another test where we attach/detach BPF programs to see if we get
  errors
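For reference, the device_event program the test plays with boils down to
something like the sketch below. This is illustrative only: the program
name and the exact byte tweak are assumptions on my side, since the real
tools/testing/selftests/bpf/progs/hid.c is only partially quoted in this
part of the series. It only relies on APIs introduced earlier in the
series (the fmod_ret entry point hid_bpf_device_event() and the
hid_bpf_get_data() kfunc).

/* Illustrative sketch, not part of this patch. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* kfunc exported by drivers/hid/bpf/hid_bpf_dispatch.c */
extern __u8 *hid_bpf_get_data(struct hid_bpf_ctx *ctx,
                              unsigned int offset, const size_t __sz) __ksym;

SEC("fmod_ret/hid_bpf_device_event")
int BPF_PROG(hid_first_event, struct hid_bpf_ctx *hctx)
{
        /* ask the kernel for a verified pointer to the first 3 bytes */
        __u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 3 /* size */);

        if (!data)
                return 0; /* out-of-bounds request, let the event through */

        /* "modify the event stream": tweak one byte of report ID 1 */
        if (data[0] == 1)
                data[1] += 5;

        return 0;
}

char _license[] SEC("license") = "GPL";

Returning 0 keeps the event flowing to hid-core; a negative return value
makes dispatch_hid_bpf_device_event() interrupt the processing, so
hid-core never sees the event.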
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
---
changes in v9:
- kept the selftest config list alphabetically ordered
no changes in v8
no changes in v7
no changes in v6
changes in v5:
- use of the HID device system id instead of fd
- attach to HID device with the new API
- add attach/detach test
changes in v4:
- manually retrieve the hidraw node from the sysfs (we can't get it for
  free from BPF)
- use the new API
changes in v3:
- squashed "hid: rely on uhid event to know if a test device is ready"
  into this one
- add selftests bpf VM config changes
- s/hidraw_ino/hidraw_number/
changes in v2:
- split the series by bpf/libbpf/hid/selftests and samples
---
 tools/testing/selftests/bpf/config           |   3 +
 tools/testing/selftests/bpf/prog_tests/hid.c | 669 +++++++++++++++++++
 tools/testing/selftests/bpf/progs/hid.c      |  45 ++
 3 files changed, 717 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/hid.c
 create mode 100644 tools/testing/selftests/bpf/progs/hid.c
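(Illustrative note, not part of the patch.) For readers following
prep_test() in the diff below: the attach itself is done by running a
SEC("syscall") BPF program which calls the new hid_bpf_attach_prog()
kfunc, driven from userspace through BPF_PROG_RUN. A rough sketch, with
the BPF-side program name assumed (the shared struct layout matches the
attach_prog_args structure used by the test):

/* BPF side (roughly what progs/hid.c provides): */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct attach_prog_args {
        int prog_fd;
        unsigned int hid;
        int retval;
};

extern int hid_bpf_attach_prog(unsigned int hid_id, int prog_fd,
                               __u32 flags) __ksym;

SEC("syscall")
int attach_prog(struct attach_prog_args *ctx)
{
        /* flags can also carry HID_BPF_FLAG_INSERT_HEAD */
        ctx->retval = hid_bpf_attach_prog(ctx->hid, ctx->prog_fd, 0);
        return 0;
}

/* Userspace side (prog_tests/hid.c): fill args.hid with the id found in
 * sysfs, args.prog_fd with the fd of the tracing program to attach, then
 * run the syscall program once:
 *
 *      DECLARE_LIBBPF_OPTS(bpf_test_run_opts, tattr,
 *              .ctx_in = &args,
 *              .ctx_size_in = sizeof(args),
 *      );
 *      err = bpf_prog_test_run_opts(attach_fd, &tattr);
 *
 * args.retval then contains the return value of hid_bpf_attach_prog().
 */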
diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config index 3fc46f9cfb22..8dc41058a8f8 100644 --- a/tools/testing/selftests/bpf/config +++ b/tools/testing/selftests/bpf/config @@ -15,6 +15,8 @@ CONFIG_FPROBE=y CONFIG_FTRACE_SYSCALLS=y CONFIG_FUNCTION_TRACER=y CONFIG_GENEVE=y +CONFIG_HID=y +CONFIG_HIDRAW=y CONFIG_IKCONFIG=y CONFIG_IKCONFIG_PROC=y CONFIG_IMA=y @@ -61,6 +63,7 @@ CONFIG_RC_CORE=y CONFIG_SECURITY=y CONFIG_SECURITYFS=y CONFIG_TEST_BPF=m +CONFIG_UHID=y CONFIG_USERFAULTFD=y CONFIG_VXLAN=y CONFIG_XDP_SOCKETS=y diff --git a/tools/testing/selftests/bpf/prog_tests/hid.c b/tools/testing/selftests/bpf/prog_tests/hid.c new file mode 100644 index 000000000000..719d220c8d86 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/hid.c @@ -0,0 +1,669 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2022 Red Hat */ +#include <test_progs.h> +#include <testing_helpers.h> +#include "hid.skel.h" + +#include <fcntl.h> +#include <fnmatch.h> +#include <dirent.h> +#include <poll.h> +#include <stdbool.h> +#include <linux/uhid.h> + +static unsigned char rdesc[] = { + 0x06, 0x00, 0xff, /* Usage Page (Vendor Defined Page 1) */ + 0x09, 0x21, /* Usage (Vendor Usage 0x21) */ + 0xa1, 0x01, /* COLLECTION (Application) */ + 0x09, 0x01, /* Usage (Vendor Usage 0x01) */ + 0xa1, 0x00, /* COLLECTION (Physical) */ + 0x85, 0x01, /* REPORT_ID (1) */ + 0x06, 0x00, 0xff, /* Usage Page (Vendor Defined Page 1) */ + 0x19, 0x01, /* USAGE_MINIMUM (1) */ + 0x29, 0x03, /* USAGE_MAXIMUM (3) */ + 0x15, 0x00, /* LOGICAL_MINIMUM (0) */ + 0x25, 0x01, /* LOGICAL_MAXIMUM (1) */ + 0x95, 0x03, /* REPORT_COUNT (3) */ + 0x75, 0x01, /* REPORT_SIZE (1) */ + 0x81, 0x02, /* INPUT (Data,Var,Abs) */ + 0x95, 0x01, /* REPORT_COUNT (1) */ + 0x75, 0x05, /* REPORT_SIZE (5) */ + 0x81, 0x01, /* INPUT (Cnst,Var,Abs) */ + 0x05, 0x01, /* USAGE_PAGE (Generic Desktop) */ + 0x09, 0x30, /* USAGE (X) */ + 0x09, 0x31, /* USAGE (Y) */ + 0x15, 0x81, /* LOGICAL_MINIMUM (-127) */ + 0x25, 0x7f, /* LOGICAL_MAXIMUM (127) */ + 0x75, 0x10, /* REPORT_SIZE (16) */ + 0x95, 0x02, /* REPORT_COUNT (2) */ + 0x81, 0x06, /* INPUT (Data,Var,Rel) */ + + 0x06, 0x00, 0xff, /* Usage Page (Vendor Defined Page 1) */ + 0x19, 0x01, /* USAGE_MINIMUM (1) */ + 0x29, 0x03, /* USAGE_MAXIMUM (3) */ + 0x15, 0x00, /* LOGICAL_MINIMUM (0) */ + 0x25, 0x01, /* LOGICAL_MAXIMUM (1) */ + 0x95, 0x03, /* REPORT_COUNT (3) */ + 0x75, 0x01, /* REPORT_SIZE (1) */ + 0x91, 0x02, /* Output (Data,Var,Abs) */ + 0x95, 0x01, /* REPORT_COUNT (1) */ + 0x75, 0x05, /* REPORT_SIZE (5) */ + 0x91, 0x01, /* Output (Cnst,Var,Abs) */ + + 0x06, 0x00, 0xff, /* Usage Page (Vendor Defined Page 1) */ + 0x19, 0x06, /* USAGE_MINIMUM (6) */ + 0x29, 0x08, /* USAGE_MAXIMUM (8) */ + 0x15, 0x00, /* LOGICAL_MINIMUM (0) */ + 0x25, 0x01, /* LOGICAL_MAXIMUM (1) */ + 0x95, 0x03, /* REPORT_COUNT (3) */ + 0x75, 0x01, /* REPORT_SIZE (1) */ + 0xb1, 0x02, /* Feature (Data,Var,Abs) */ + 0x95, 0x01, /* REPORT_COUNT (1) */ + 0x75, 0x05, /* REPORT_SIZE (5) */ + 0x91, 0x01, /* Output (Cnst,Var,Abs) */ + + 0xc0, /* END_COLLECTION */ + 0xc0, /* END_COLLECTION */ +}; + +struct attach_prog_args { + int prog_fd; + unsigned int hid; + int retval; +}; + +static pthread_mutex_t uhid_started_mtx = PTHREAD_MUTEX_INITIALIZER; +static pthread_cond_t uhid_started = PTHREAD_COND_INITIALIZER; + +/* no need to protect uhid_stopped, only one thread accesses it */ +static bool uhid_stopped; + +static int uhid_write(int fd, const struct uhid_event *ev) +{ + ssize_t ret; + + ret = write(fd, ev, sizeof(*ev)); + if (ret < 0) { 
+ fprintf(stderr, "Cannot write to uhid: %m\n"); + return -errno; + } else if (ret != sizeof(*ev)) { + fprintf(stderr, "Wrong size written to uhid: %zd != %zu\n", + ret, sizeof(ev)); + return -EFAULT; + } else { + return 0; + } +} + +static int create(int fd, int rand_nb) +{ + struct uhid_event ev; + char buf[25]; + + sprintf(buf, "test-uhid-device-%d", rand_nb); + + memset(&ev, 0, sizeof(ev)); + ev.type = UHID_CREATE; + strcpy((char *)ev.u.create.name, buf); + ev.u.create.rd_data = rdesc; + ev.u.create.rd_size = sizeof(rdesc); + ev.u.create.bus = BUS_USB; + ev.u.create.vendor = 0x0001; + ev.u.create.product = 0x0a37; + ev.u.create.version = 0; + ev.u.create.country = 0; + + sprintf(buf, "%d", rand_nb); + strcpy((char *)ev.u.create.phys, buf); + + return uhid_write(fd, &ev); +} + +static void destroy(int fd) +{ + struct uhid_event ev; + + memset(&ev, 0, sizeof(ev)); + ev.type = UHID_DESTROY; + + uhid_write(fd, &ev); +} + +static int uhid_event(int fd) +{ + struct uhid_event ev; + ssize_t ret; + + memset(&ev, 0, sizeof(ev)); + ret = read(fd, &ev, sizeof(ev)); + if (ret == 0) { + fprintf(stderr, "Read HUP on uhid-cdev\n"); + return -EFAULT; + } else if (ret < 0) { + fprintf(stderr, "Cannot read uhid-cdev: %m\n"); + return -errno; + } else if (ret != sizeof(ev)) { + fprintf(stderr, "Invalid size read from uhid-dev: %zd != %zu\n", + ret, sizeof(ev)); + return -EFAULT; + } + + switch (ev.type) { + case UHID_START: + pthread_mutex_lock(&uhid_started_mtx); + pthread_cond_signal(&uhid_started); + pthread_mutex_unlock(&uhid_started_mtx); + + fprintf(stderr, "UHID_START from uhid-dev\n"); + break; + case UHID_STOP: + uhid_stopped = true; + + fprintf(stderr, "UHID_STOP from uhid-dev\n"); + break; + case UHID_OPEN: + fprintf(stderr, "UHID_OPEN from uhid-dev\n"); + break; + case UHID_CLOSE: + fprintf(stderr, "UHID_CLOSE from uhid-dev\n"); + break; + case UHID_OUTPUT: + fprintf(stderr, "UHID_OUTPUT from uhid-dev\n"); + break; + case UHID_GET_REPORT: + fprintf(stderr, "UHID_GET_REPORT from uhid-dev\n"); + break; + case UHID_SET_REPORT: + fprintf(stderr, "UHID_SET_REPORT from uhid-dev\n"); + break; + default: + fprintf(stderr, "Invalid event from uhid-dev: %u\n", ev.type); + } + + return 0; +} + +static void *read_uhid_events_thread(void *arg) +{ + int fd = *(int *)arg; + struct pollfd pfds[1]; + int ret = 0; + + pfds[0].fd = fd; + pfds[0].events = POLLIN; + + uhid_stopped = false; + + while (!uhid_stopped) { + ret = poll(pfds, 1, 100); + if (ret < 0) { + fprintf(stderr, "Cannot poll for fds: %m\n"); + break; + } + if (pfds[0].revents & POLLIN) { + ret = uhid_event(fd); + if (ret) + break; + } + } + + return (void *)(long)ret; +} + +static int uhid_start_listener(pthread_t *tid, int uhid_fd) +{ + int fd = uhid_fd; + + pthread_mutex_lock(&uhid_started_mtx); + if (CHECK_FAIL(pthread_create(tid, NULL, read_uhid_events_thread, + (void *)&fd))) { + pthread_mutex_unlock(&uhid_started_mtx); + close(fd); + return -EIO; + } + pthread_cond_wait(&uhid_started, &uhid_started_mtx); + pthread_mutex_unlock(&uhid_started_mtx); + + return 0; +} + +static int send_event(int fd, u8 *buf, size_t size) +{ + struct uhid_event ev; + + if (size > sizeof(ev.u.input.data)) + return -E2BIG; + + memset(&ev, 0, sizeof(ev)); + ev.type = UHID_INPUT2; + ev.u.input2.size = size; + + memcpy(ev.u.input2.data, buf, size); + + return uhid_write(fd, &ev); +} + +static int setup_uhid(int rand_nb) +{ + int fd; + const char *path = "/dev/uhid"; + int ret; + + fd = open(path, O_RDWR | O_CLOEXEC); + if (!ASSERT_GE(fd, 0, "open uhid-cdev")) + 
return -EPERM; + + ret = create(fd, rand_nb); + if (!ASSERT_OK(ret, "create uhid device")) { + close(fd); + return -EPERM; + } + + return fd; +} + +static bool match_sysfs_device(int dev_id, const char *workdir, struct dirent *dir) +{ + const char *target = "0003:0001:0A37.*"; + char phys[512]; + char uevent[1024]; + char temp[512]; + int fd, nread; + bool found = false; + + if (fnmatch(target, dir->d_name, 0)) + return false; + + /* we found the correct VID/PID, now check for phys */ + sprintf(uevent, "%s/%s/uevent", workdir, dir->d_name); + + fd = open(uevent, O_RDONLY | O_NONBLOCK); + if (fd < 0) + return false; + + sprintf(phys, "PHYS=%d", dev_id); + + nread = read(fd, temp, ARRAY_SIZE(temp)); + if (nread > 0 && (strstr(temp, phys)) != NULL) + found = true; + + close(fd); + + return found; +} + +static int get_hid_id(int dev_id) +{ + const char *workdir = "/sys/devices/virtual/misc/uhid"; + const char *str_id; + DIR *d; + struct dirent *dir; + int found = -1; + + /* it would be nice to be able to use nftw, but the no_alu32 target doesn't support it */ + + d = opendir(workdir); + if (d) { + while ((dir = readdir(d)) != NULL) { + if (!match_sysfs_device(dev_id, workdir, dir)) + continue; + + str_id = dir->d_name + sizeof("0003:0001:0A37."); + found = (int)strtol(str_id, NULL, 16); + + break; + } + closedir(d); + } + + return found; +} + +static int get_hidraw(int dev_id) +{ + const char *workdir = "/sys/devices/virtual/misc/uhid"; + char sysfs[1024]; + DIR *d, *subd; + struct dirent *dir, *subdir; + int i, found = -1; + + /* retry 5 times in case the system is loaded */ + for (i = 5; i > 0; i--) { + usleep(10); + d = opendir(workdir); + + if (!d) + continue; + + while ((dir = readdir(d)) != NULL) { + if (!match_sysfs_device(dev_id, workdir, dir)) + continue; + + sprintf(sysfs, "%s/%s/hidraw", workdir, dir->d_name); + + subd = opendir(sysfs); + if (!subd) + continue; + + while ((subdir = readdir(subd)) != NULL) { + if (fnmatch("hidraw*", subdir->d_name, 0)) + continue; + + found = atoi(subdir->d_name + strlen("hidraw")); + } + + closedir(subd); + + if (found > 0) + break; + } + closedir(d); + } + + return found; +} + +static int open_hidraw(int dev_id) +{ + int hidraw_number; + char hidraw_path[64] = { 0 }; + + hidraw_number = get_hidraw(dev_id); + if (hidraw_number < 0) + return hidraw_number; + + /* open hidraw node to check the other side of the pipe */ + sprintf(hidraw_path, "/dev/hidraw%d", hidraw_number); + return open(hidraw_path, O_RDWR | O_NONBLOCK); +} + +struct test_params { + struct hid *skel; + int hidraw_fd; +}; + +static int prep_test(int dev_id, const char *prog_name, struct test_params *test_data) +{ + struct hid *hid_skel = NULL; + struct bpf_program *prog = NULL; + char buf[64] = {0}; + int hidraw_fd = -1; + int hid_id, attach_fd, err = -EINVAL; + struct attach_prog_args args = { + .retval = -1, + }; + DECLARE_LIBBPF_OPTS(bpf_test_run_opts, tattr, + .ctx_in = &args, + .ctx_size_in = sizeof(args), + ); + + /* locate the uevent file of the created device */ + hid_id = get_hid_id(dev_id); + if (!ASSERT_GE(hid_id, 0, "locate uhid device id")) + goto cleanup; + + args.hid = hid_id; + + hid_skel = hid__open(); + if (!ASSERT_OK_PTR(hid_skel, "hid_skel_open")) + goto cleanup; + + prog = bpf_object__find_program_by_name(*hid_skel->skeleton->obj, prog_name); + if (!ASSERT_OK_PTR(prog, "find_prog_by_name")) + goto cleanup; + + bpf_program__set_autoload(prog, true); + + err = hid__load(hid_skel); + if (!ASSERT_OK(err, "hid_skel_load")) + goto cleanup; + + attach_fd = 
bpf_program__fd(hid_skel->progs.attach_prog); + if (!ASSERT_GE(attach_fd, 0, "locate attach_prog")) { + err = attach_fd; + goto cleanup; + } + + args.prog_fd = bpf_program__fd(prog); + err = bpf_prog_test_run_opts(attach_fd, &tattr); + snprintf(buf, sizeof(buf), "attach_hid(%s)", prog_name); + if (!ASSERT_EQ(args.retval, 0, buf)) + goto cleanup; + + hidraw_fd = open_hidraw(dev_id); + if (!ASSERT_GE(hidraw_fd, 0, "open_hidraw")) + goto cleanup; + + test_data->skel = hid_skel; + test_data->hidraw_fd = hidraw_fd; + + return 0; + + cleanup: + if (hidraw_fd >= 0) + close(hidraw_fd); + + hid__destroy(hid_skel); + + memset(test_data, 0, sizeof(*test_data)); + + return err; +} + +static void cleanup_test(struct test_params *test_data) +{ + if (!test_data) + return; + + if (test_data->hidraw_fd) + close(test_data->hidraw_fd); + + hid__destroy(test_data->skel); + + memset(test_data, 0, sizeof(*test_data)); +} + +/* + * Attach hid_first_event to the given uhid device, + * retrieve and open the matching hidraw node, + * inject one event in the uhid device, + * check that the program sees it and can change the data + */ +static int test_hid_raw_event(int uhid_fd, int dev_id) +{ + struct hid *hid_skel = NULL; + struct test_params params; + int err, hidraw_fd = -1; + u8 buf[10] = {0}; + int ret = -1; + + err = prep_test(dev_id, "hid_first_event", ¶ms); + if (!ASSERT_EQ(err, 0, "prep_test(hid_first_event)")) + goto cleanup; + + hid_skel = params.skel; + hidraw_fd = params.hidraw_fd; + + /* check that the program is correctly loaded */ + ASSERT_EQ(hid_skel->data->callback_check, 52, "callback_check1"); + ASSERT_EQ(hid_skel->data->callback2_check, 52, "callback2_check1"); + + /* inject one event */ + buf[0] = 1; + buf[1] = 42; + send_event(uhid_fd, buf, 6); + + /* check that hid_first_event() was executed */ + ASSERT_EQ(hid_skel->data->callback_check, 42, "callback_check1"); + + /* read the data from hidraw */ + memset(buf, 0, sizeof(buf)); + err = read(hidraw_fd, buf, sizeof(buf)); + if (!ASSERT_EQ(err, 6, "read_hidraw")) + goto cleanup; + + if (!ASSERT_EQ(buf[0], 1, "hid_first_event")) + goto cleanup; + + if (!ASSERT_EQ(buf[2], 47, "hid_first_event")) + goto cleanup; + + /* inject another event */ + memset(buf, 0, sizeof(buf)); + buf[0] = 1; + buf[1] = 47; + send_event(uhid_fd, buf, 6); + + /* check that hid_first_event() was executed */ + ASSERT_EQ(hid_skel->data->callback_check, 47, "callback_check1"); + + /* read the data from hidraw */ + memset(buf, 0, sizeof(buf)); + err = read(hidraw_fd, buf, sizeof(buf)); + if (!ASSERT_EQ(err, 6, "read_hidraw")) + goto cleanup; + + if (!ASSERT_EQ(buf[2], 52, "hid_first_event")) + goto cleanup; + + ret = 0; + +cleanup: + cleanup_test(¶ms); + + return ret; +} + +/* + * Ensures that we can attach/detach programs + */ +static int test_attach_detach(int uhid_fd, int dev_id) +{ + struct test_params params; + int err, hidraw_fd = -1; + u8 buf[10] = {0}; + int ret = -1; + + err = prep_test(dev_id, "hid_first_event", ¶ms); + if (!ASSERT_EQ(err, 0, "prep_test(hid_first_event)")) + goto cleanup; + + /* inject one event */ + buf[0] = 1; + buf[1] = 42; + send_event(uhid_fd, buf, 6); + + /* read the data from hidraw */ + memset(buf, 0, sizeof(buf)); + err = read(params.hidraw_fd, buf, sizeof(buf)); + if (!ASSERT_EQ(err, 6, "read_hidraw_with_bpf")) + goto cleanup; + + if (!ASSERT_EQ(buf[0], 1, "hid_first_event")) + goto cleanup; + + if (!ASSERT_EQ(buf[2], 47, "hid_first_event")) + goto cleanup; + + /* pin the program and immediately unpin it */ +#define PIN_PATH 
"/sys/fs/bpf/hid_first_event" + bpf_program__pin(params.skel->progs.hid_first_event, PIN_PATH); + remove(PIN_PATH); +#undef PIN_PATH + usleep(100000); + + /* detach the program */ + cleanup_test(¶ms); + + hidraw_fd = open_hidraw(dev_id); + if (!ASSERT_GE(hidraw_fd, 0, "open_hidraw")) + goto cleanup; + + /* inject another event */ + memset(buf, 0, sizeof(buf)); + buf[0] = 1; + buf[1] = 47; + send_event(uhid_fd, buf, 6); + + /* read the data from hidraw */ + memset(buf, 0, sizeof(buf)); + err = read(hidraw_fd, buf, sizeof(buf)); + if (!ASSERT_EQ(err, 6, "read_hidraw_no_bpf")) + goto cleanup; + + if (!ASSERT_EQ(buf[0], 1, "event_no_bpf")) + goto cleanup; + + if (!ASSERT_EQ(buf[1], 47, "event_no_bpf")) + goto cleanup; + + if (!ASSERT_EQ(buf[2], 0, "event_no_bpf")) + goto cleanup; + + /* re-attach our program */ + + err = prep_test(dev_id, "hid_first_event", ¶ms); + if (!ASSERT_EQ(err, 0, "prep_test(hid_first_event)")) + goto cleanup; + + /* inject one event */ + memset(buf, 0, sizeof(buf)); + buf[0] = 1; + buf[1] = 42; + send_event(uhid_fd, buf, 6); + + /* read the data from hidraw */ + memset(buf, 0, sizeof(buf)); + err = read(params.hidraw_fd, buf, sizeof(buf)); + if (!ASSERT_EQ(err, 6, "read_hidraw")) + goto cleanup; + + if (!ASSERT_EQ(buf[0], 1, "hid_first_event")) + goto cleanup; + + if (!ASSERT_EQ(buf[2], 47, "hid_first_event")) + goto cleanup; + + ret = 0; + +cleanup: + if (hidraw_fd >= 0) + close(hidraw_fd); + + cleanup_test(¶ms); + + return ret; +} + +void serial_test_hid_bpf(void) +{ + int err, uhid_fd; + void *uhid_err; + time_t t; + pthread_t tid; + int dev_id; + + /* initialize random number generator */ + srand((unsigned int)time(&t)); + + dev_id = rand() % 1024; + + uhid_fd = setup_uhid(dev_id); + if (!ASSERT_GE(uhid_fd, 0, "setup uhid")) + return; + + err = uhid_start_listener(&tid, uhid_fd); + ASSERT_OK(err, "uhid_start_listener"); + + /* start the tests! */ + err = test_hid_raw_event(uhid_fd, dev_id); + ASSERT_OK(err, "hid"); + err = test_attach_detach(uhid_fd, dev_id); + ASSERT_OK(err, "hid_attach_detach"); + + destroy(uhid_fd); + + pthread_join(tid, &uhid_err); + err = (int)(long)uhid_err; + CHECK_FAIL(err); +} diff --git a/tools/testing/selftests/bpf/progs/hid.c b/tools/testing/selftests/bpf/progs/hid.c new file mode 100644 index 000000000000..fc0a4241643a --- /dev/null +++ b/tools/testing/selftests/bpf/progs/hid.c @@ -0,0 +1,45 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2022 Red hat */ +#include "vmlinux.h" +#include <bpf/bpf_helpers.h> +#include <bpf/bpf_tracing.h> + +char _license[] SEC("license") = "GPL"; + +extern __u8 *hid_bpf_get_data(struct hid_bpf_ctx *ctx, + unsigned int offset, + const size_t __sz) __ksym; +extern int hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, u32 flags) __ksym; + +struct attach_prog_args { + int prog_fd; + unsigned int hid; + int retval; +}; + +__u64 callback_check = 52; +__u64 callback2_check = 52; + +SEC("?fmod_ret/hid_bpf_device_event") +int BPF_PROG(hid_first_event, struct hid_bpf_ctx *hid_ctx) +{ + __u8 *rw_data = hid_bpf_get_data(hid_ctx, 0 /* offset */, 3 /* size */); + + if (!rw_data) + return 0; /* EPERM check */ + + callback_check = rw_data[1]; + + rw_data[2] = rw_data[1] + 5; + + return 0; +} + +SEC("syscall") +int attach_prog(struct attach_prog_args *ctx) +{ + ctx->retval = hid_bpf_attach_prog(ctx->hid, + ctx->prog_fd, + 0); + return 0; +}
We also need to be able to change the size of the report: shrinking it is easy because the incoming buffer is already big enough, but growing it is harder.
Pre-allocate a buffer that is big enough to handle all reports of the device, and use that as the primary buffer for BPF programs. To allow a program to change the size of the data, the device_event API now expects the BPF program to return the new size of the event (0 keeps the incoming size, and a negative error code aborts the processing of the event).
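For illustration, a minimal sketch of a program using this convention (the program name, the 9-byte size and the 0x42 value are made up for the example and are not part of this series):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

extern __u8 *hid_bpf_get_data(struct hid_bpf_ctx *ctx,
                              unsigned int offset,
                              const size_t __sz) __ksym;

SEC("?fmod_ret/hid_bpf_device_event")
int BPF_PROG(grow_report, struct hid_bpf_ctx *hid_ctx)
{
        /* request 9 bytes of the pre-allocated per-device buffer */
        __u8 *data = hid_bpf_get_data(hid_ctx, 0 /* offset */, 9 /* size */);

        if (!data)
                return 0; /* EPERM check */

        /* bytes beyond the incoming report come from the zeroed buffer */
        data[8] = 0x42;

        return 9; /* the event is now forwarded as a 9-byte report */
}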
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
---
no changes in v9
no changes in v8
no changes in v7
no changes in v6
new-ish in v5
---
 drivers/hid/bpf/hid_bpf_dispatch.c | 116 +++++++++++++++++++++++++---
 drivers/hid/bpf/hid_bpf_jmp_table.c | 4 +-
 drivers/hid/hid-core.c | 12 ++-
 include/linux/hid_bpf.h | 37 +++++++--
 4 files changed, 151 insertions(+), 18 deletions(-)
diff --git a/drivers/hid/bpf/hid_bpf_dispatch.c b/drivers/hid/bpf/hid_bpf_dispatch.c index 600b00fdf6c1..33c6b3df6472 100644 --- a/drivers/hid/bpf/hid_bpf_dispatch.c +++ b/drivers/hid/bpf/hid_bpf_dispatch.c @@ -28,8 +28,9 @@ EXPORT_SYMBOL(hid_bpf_ops); * * @ctx: The HID-BPF context * - * @return %0 on success and keep processing; a negative error code to interrupt - * the processing of this event + * @return %0 on success and keep processing; a positive value to change the + * incoming size buffer; a negative error code to interrupt the processing + * of this event * * Declare an %fmod_ret tracing bpf program to this function and attach this * program through hid_bpf_attach_prog() to have this helper called for @@ -44,23 +45,43 @@ __weak noinline int hid_bpf_device_event(struct hid_bpf_ctx *ctx) } ALLOW_ERROR_INJECTION(hid_bpf_device_event, ERRNO);
-int +u8 * dispatch_hid_bpf_device_event(struct hid_device *hdev, enum hid_report_type type, u8 *data, - u32 size, int interrupt) + u32 *size, int interrupt) { struct hid_bpf_ctx_kern ctx_kern = { .ctx = { .hid = hdev, .report_type = type, - .size = size, + .allocated_size = hdev->bpf.allocated_data, + .size = *size, }, - .data = data, + .data = hdev->bpf.device_data, }; + int ret;
if (type >= HID_REPORT_TYPES) - return -EINVAL; + return ERR_PTR(-EINVAL); + + /* no program has been attached yet */ + if (!hdev->bpf.device_data) + return data; + + memset(ctx_kern.data, 0, hdev->bpf.allocated_data); + memcpy(ctx_kern.data, data, *size); + + ret = hid_bpf_prog_run(hdev, HID_BPF_PROG_TYPE_DEVICE_EVENT, &ctx_kern); + if (ret < 0) + return ERR_PTR(ret); + + if (ret) { + if (ret > ctx_kern.ctx.allocated_size) + return ERR_PTR(-EINVAL);
- return hid_bpf_prog_run(hdev, HID_BPF_PROG_TYPE_DEVICE_EVENT, &ctx_kern); + *size = ret; + } + + return ctx_kern.data; } EXPORT_SYMBOL_GPL(dispatch_hid_bpf_device_event);
@@ -83,7 +104,7 @@ hid_bpf_get_data(struct hid_bpf_ctx *ctx, unsigned int offset, const size_t rdwr
ctx_kern = container_of(ctx, struct hid_bpf_ctx_kern, ctx);
- if (rdwr_buf_size + offset > ctx->size) + if (rdwr_buf_size + offset > ctx->allocated_size) return NULL;
return ctx_kern->data + offset; @@ -110,6 +131,51 @@ static int device_match_id(struct device *dev, const void *id) return hdev->id == *(int *)id; }
+static int __hid_bpf_allocate_data(struct hid_device *hdev, u8 **data, u32 *size) +{ + u8 *alloc_data; + unsigned int i, j, max_report_len = 0; + size_t alloc_size = 0; + + /* compute the maximum report length for this device */ + for (i = 0; i < HID_REPORT_TYPES; i++) { + struct hid_report_enum *report_enum = hdev->report_enum + i; + + for (j = 0; j < HID_MAX_IDS; j++) { + struct hid_report *report = report_enum->report_id_hash[j]; + + if (report) + max_report_len = max(max_report_len, hid_report_len(report)); + } + } + + /* + * Give us a little bit of extra space and some predictability in the + * buffer length we create. This way, we can tell users that they can + * work on chunks of 64 bytes of memory without having the bpf verifier + * scream at them. + */ + alloc_size = DIV_ROUND_UP(max_report_len, 64) * 64; + + alloc_data = kzalloc(alloc_size, GFP_KERNEL); + if (!alloc_data) + return -ENOMEM; + + *data = alloc_data; + *size = alloc_size; + + return 0; +} + +static int hid_bpf_allocate_event_data(struct hid_device *hdev) +{ + /* hdev->bpf.device_data is already allocated, abort */ + if (hdev->bpf.device_data) + return 0; + + return __hid_bpf_allocate_data(hdev, &hdev->bpf.device_data, &hdev->bpf.allocated_data); +} + /** * hid_bpf_attach_prog - Attach the given @prog_fd to the given HID device * @@ -125,7 +191,7 @@ hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, __u32 flags) { struct hid_device *hdev; struct device *dev; - int prog_type = hid_bpf_get_prog_attach_type(prog_fd); + int err, prog_type = hid_bpf_get_prog_attach_type(prog_fd);
if (!hid_bpf_ops) return -EINVAL; @@ -145,6 +211,12 @@ hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, __u32 flags)
hdev = to_hid_device(dev);
+ if (prog_type == HID_BPF_PROG_TYPE_DEVICE_EVENT) { + err = hid_bpf_allocate_event_data(hdev); + if (err) + return err; + } + return __hid_bpf_attach_prog(hdev, prog_type, prog_fd, flags); }
@@ -158,6 +230,30 @@ static const struct btf_kfunc_id_set hid_bpf_syscall_kfunc_set = { .set = &hid_bpf_syscall_kfunc_ids, };
+int hid_bpf_connect_device(struct hid_device *hdev) +{ + struct hid_bpf_prog_list *prog_list; + + rcu_read_lock(); + prog_list = rcu_dereference(hdev->bpf.progs[HID_BPF_PROG_TYPE_DEVICE_EVENT]); + rcu_read_unlock(); + + /* only allocate BPF data if there are programs attached */ + if (!prog_list) + return 0; + + return hid_bpf_allocate_event_data(hdev); +} +EXPORT_SYMBOL_GPL(hid_bpf_connect_device); + +void hid_bpf_disconnect_device(struct hid_device *hdev) +{ + kfree(hdev->bpf.device_data); + hdev->bpf.device_data = NULL; + hdev->bpf.allocated_data = 0; +} +EXPORT_SYMBOL_GPL(hid_bpf_disconnect_device); + void hid_bpf_destroy_device(struct hid_device *hdev) { if (!hdev) diff --git a/drivers/hid/bpf/hid_bpf_jmp_table.c b/drivers/hid/bpf/hid_bpf_jmp_table.c index 05225ff3cc27..0f20deab81ff 100644 --- a/drivers/hid/bpf/hid_bpf_jmp_table.c +++ b/drivers/hid/bpf/hid_bpf_jmp_table.c @@ -123,8 +123,10 @@ int hid_bpf_prog_run(struct hid_device *hdev, enum hid_bpf_prog_type type,
ctx_kern->ctx.index = idx; err = __hid_bpf_tail_call(&ctx_kern->ctx); - if (err) + if (err < 0) break; + if (err) + ctx_kern->ctx.retval = err; }
out_unlock: diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c index 0d0bd8fc69c7..cadd21a6f995 100644 --- a/drivers/hid/hid-core.c +++ b/drivers/hid/hid-core.c @@ -2040,9 +2040,11 @@ int hid_input_report(struct hid_device *hid, enum hid_report_type type, u8 *data report_enum = hid->report_enum + type; hdrv = hid->driver;
- ret = dispatch_hid_bpf_device_event(hid, type, data, size, interrupt); - if (ret) + data = dispatch_hid_bpf_device_event(hid, type, data, &size, interrupt); + if (IS_ERR(data)) { + ret = PTR_ERR(data); goto unlock; + }
if (!size) { dbg_hid("empty report\n"); @@ -2157,6 +2159,10 @@ int hid_connect(struct hid_device *hdev, unsigned int connect_mask) int len; int ret;
+ ret = hid_bpf_connect_device(hdev); + if (ret) + return ret; + if (hdev->quirks & HID_QUIRK_HIDDEV_FORCE) connect_mask |= (HID_CONNECT_HIDDEV_FORCE | HID_CONNECT_HIDDEV); if (hdev->quirks & HID_QUIRK_HIDINPUT_FORCE) @@ -2258,6 +2264,8 @@ void hid_disconnect(struct hid_device *hdev) if (hdev->claimed & HID_CLAIMED_HIDRAW) hidraw_disconnect(hdev); hdev->claimed = 0; + + hid_bpf_disconnect_device(hdev); } EXPORT_SYMBOL_GPL(hid_disconnect);
diff --git a/include/linux/hid_bpf.h b/include/linux/hid_bpf.h index 5d53b12c6ea0..1707e4492d7a 100644 --- a/include/linux/hid_bpf.h +++ b/include/linux/hid_bpf.h @@ -29,15 +29,32 @@ struct hid_device; * a bigger index). * @hid: the ``struct hid_device`` representing the device itself * @report_type: used for ``hid_bpf_device_event()`` + * @allocated_size: Allocated size of data. + * + * This is how much memory is available and can be requested + * by the HID program. + * Note that for ``HID_BPF_RDESC_FIXUP``, that memory is set to + * ``4096`` (4 KB) * @size: Valid data in the data field. * * Programs can get the available valid size in data by fetching this field. + * Programs can also change this value by returning a positive number in the + * program. + * To discard the event, return a negative error code. + * + * ``size`` must always be less or equal than ``allocated_size`` (it is enforced + * once all BPF programs have been run). + * @retval: Return value of the previous program. */ struct hid_bpf_ctx { __u32 index; const struct hid_device *hid; + __u32 allocated_size; enum hid_report_type report_type; - __s32 size; + union { + __s32 retval; + __s32 size; + }; };
/* Following functions are tracepoints that BPF programs can attach to */ @@ -81,6 +98,12 @@ struct hid_bpf_prog_list {
/* stored in each device */ struct hid_bpf { + u8 *device_data; /* allocated when a bpf program of type + * SEC(f.../hid_bpf_device_event) has been attached + * to this HID device + */ + u32 allocated_data; + struct hid_bpf_prog_list __rcu *progs[HID_BPF_PROG_TYPE_MAX]; /* attached BPF progs */ bool destroyed; /* prevents the assignment of any progs */
@@ -88,13 +111,17 @@ struct hid_bpf { };
#ifdef CONFIG_HID_BPF -int dispatch_hid_bpf_device_event(struct hid_device *hid, enum hid_report_type type, u8 *data, - u32 size, int interrupt); +u8 *dispatch_hid_bpf_device_event(struct hid_device *hid, enum hid_report_type type, u8 *data, + u32 *size, int interrupt); +int hid_bpf_connect_device(struct hid_device *hdev); +void hid_bpf_disconnect_device(struct hid_device *hdev); void hid_bpf_destroy_device(struct hid_device *hid); void hid_bpf_device_init(struct hid_device *hid); #else /* CONFIG_HID_BPF */ -static inline int dispatch_hid_bpf_device_event(struct hid_device *hid, enum hid_report_type type, u8 *data, - u32 size, int interrupt) { return 0; } +static inline u8 *dispatch_hid_bpf_device_event(struct hid_device *hid, enum hid_report_type type, u8 *data, + u32 *size, int interrupt) { return 0; } +static inline int hid_bpf_connect_device(struct hid_device *hdev) { return 0; } +static inline void hid_bpf_disconnect_device(struct hid_device *hdev) {} static inline void hid_bpf_destroy_device(struct hid_device *hid) {} static inline void hid_bpf_device_init(struct hid_device *hid) {} #endif /* CONFIG_HID_BPF */
Use a different report with a bigger size and ensure we are doing things properly.
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
---
no changes in v9
no changes in v8
no changes in v7
no changes in v6
new in v5
---
 tools/testing/selftests/bpf/prog_tests/hid.c | 60 ++++++++++++++++++++
 tools/testing/selftests/bpf/progs/hid.c | 15 ++++-
 2 files changed, 74 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/hid.c b/tools/testing/selftests/bpf/prog_tests/hid.c index 719d220c8d86..47bc0a30c275 100644 --- a/tools/testing/selftests/bpf/prog_tests/hid.c +++ b/tools/testing/selftests/bpf/prog_tests/hid.c @@ -17,6 +17,17 @@ static unsigned char rdesc[] = { 0xa1, 0x01, /* COLLECTION (Application) */ 0x09, 0x01, /* Usage (Vendor Usage 0x01) */ 0xa1, 0x00, /* COLLECTION (Physical) */ + 0x85, 0x02, /* REPORT_ID (2) */ + 0x19, 0x01, /* USAGE_MINIMUM (1) */ + 0x29, 0x08, /* USAGE_MAXIMUM (3) */ + 0x15, 0x00, /* LOGICAL_MINIMUM (0) */ + 0x25, 0xff, /* LOGICAL_MAXIMUM (255) */ + 0x95, 0x08, /* REPORT_COUNT (8) */ + 0x75, 0x08, /* REPORT_SIZE (8) */ + 0x81, 0x02, /* INPUT (Data,Var,Abs) */ + 0xc0, /* END_COLLECTION */ + 0x09, 0x01, /* Usage (Vendor Usage 0x01) */ + 0xa1, 0x00, /* COLLECTION (Physical) */ 0x85, 0x01, /* REPORT_ID (1) */ 0x06, 0x00, 0xff, /* Usage Page (Vendor Defined Page 1) */ 0x19, 0x01, /* USAGE_MINIMUM (1) */ @@ -635,6 +646,53 @@ static int test_attach_detach(int uhid_fd, int dev_id) return ret; }
+/* + * Attach hid_change_report_id to the given uhid device, + * retrieve and open the matching hidraw node, + * inject one event in the uhid device, + * check that the program sees it and can change the data + */ +static int test_hid_change_report(int uhid_fd, int dev_id) +{ + struct test_params params; + int err, hidraw_fd = -1; + u8 buf[10] = {0}; + int ret = -1; + + err = prep_test(dev_id, "hid_change_report_id", ¶ms); + if (!ASSERT_EQ(err, 0, "prep_test(hid_change_report_id)")) + goto cleanup; + + hidraw_fd = params.hidraw_fd; + + /* inject one event */ + buf[0] = 1; + buf[1] = 42; + send_event(uhid_fd, buf, 6); + + /* read the data from hidraw */ + memset(buf, 0, sizeof(buf)); + err = read(hidraw_fd, buf, sizeof(buf)); + if (!ASSERT_EQ(err, 9, "read_hidraw")) + goto cleanup; + + if (!ASSERT_EQ(buf[0], 2, "hid_change_report_id")) + goto cleanup; + + if (!ASSERT_EQ(buf[1], 42, "hid_change_report_id")) + goto cleanup; + + if (!ASSERT_EQ(buf[2], 0, "leftovers_from_previous_test")) + goto cleanup; + + ret = 0; + +cleanup: + cleanup_test(¶ms); + + return ret; +} + void serial_test_hid_bpf(void) { int err, uhid_fd; @@ -660,6 +718,8 @@ void serial_test_hid_bpf(void) ASSERT_OK(err, "hid"); err = test_attach_detach(uhid_fd, dev_id); ASSERT_OK(err, "hid_attach_detach"); + err = test_hid_change_report(uhid_fd, dev_id); + ASSERT_OK(err, "hid_change_report");
destroy(uhid_fd);
diff --git a/tools/testing/selftests/bpf/progs/hid.c b/tools/testing/selftests/bpf/progs/hid.c index fc0a4241643a..ee7529c47ad8 100644 --- a/tools/testing/selftests/bpf/progs/hid.c +++ b/tools/testing/selftests/bpf/progs/hid.c @@ -32,7 +32,20 @@ int BPF_PROG(hid_first_event, struct hid_bpf_ctx *hid_ctx)
rw_data[2] = rw_data[1] + 5;
- return 0; + return hid_ctx->size; +} + +SEC("?fmod_ret/hid_bpf_device_event") +int BPF_PROG(hid_change_report_id, struct hid_bpf_ctx *hid_ctx) +{ + __u8 *rw_data = hid_bpf_get_data(hid_ctx, 0 /* offset */, 3 /* size */); + + if (!rw_data) + return 0; /* EPERM check */ + + rw_data[0] = 2; + + return 9; }
SEC("syscall")
hid_bpf_hw_request() cannot be called in IRQ context, so it is only available from SEC("syscall") programs. For consistency, this function requires a HID-BPF context to work with, so we also provide a helper to create one based on the HID unique ID.
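As an illustration, a sketch of the intended usage from a SEC("syscall") program (the program name and the argument struct are made up for the example; the kfunc prototypes are the ones exported by this patch, and the hid_report_type/hid_class_request enum values are assumed to be visible through vmlinux.h):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

extern struct hid_bpf_ctx *hid_bpf_allocate_context(unsigned int hid_id) __ksym;
extern void hid_bpf_release_context(struct hid_bpf_ctx *ctx) __ksym;
extern int hid_bpf_hw_request(struct hid_bpf_ctx *ctx,
                              __u8 *buf, size_t buf__sz,
                              enum hid_report_type rtype,
                              enum hid_class_request reqtype) __ksym;

/* hypothetical argument struct, filled in by the userspace caller */
struct set_feature_args {
        __u8 data[4];           /* data[0] holds the report ID */
        unsigned int hid;       /* system unique identifier of the HID device */
        int retval;
};

SEC("syscall")
int set_feature(struct set_feature_args *args)
{
        struct hid_bpf_ctx *ctx;

        ctx = hid_bpf_allocate_context(args->hid);
        if (!ctx)
                return 0; /* EPERM check */

        args->retval = hid_bpf_hw_request(ctx, args->data, sizeof(args->data),
                                          HID_FEATURE_REPORT, HID_REQ_SET_REPORT);

        hid_bpf_release_context(ctx);

        return 0;
}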
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
---
changes in v9:
- fixed kfunc declaration according to latest upstream changes
no changes in v8
changes in v7:
- hid_bpf_allocate_context: remove unused variable
- ensures buf is not NULL
changes in v6:
- rename parameter size into buf__sz to teach the verifier about the actual buffer size used by the call
- remove the allocated data in the user created context, it's not used
new-ish in v5
---
 drivers/hid/bpf/hid_bpf_dispatch.c | 132 +++++++++++++++++++++++++++++
 drivers/hid/hid-core.c | 2 +
 include/linux/hid_bpf.h | 13 ++-
 3 files changed, 146 insertions(+), 1 deletion(-)
diff --git a/drivers/hid/bpf/hid_bpf_dispatch.c b/drivers/hid/bpf/hid_bpf_dispatch.c index 33c6b3df6472..85186640e5af 100644 --- a/drivers/hid/bpf/hid_bpf_dispatch.c +++ b/drivers/hid/bpf/hid_bpf_dispatch.c @@ -220,9 +220,141 @@ hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, __u32 flags) return __hid_bpf_attach_prog(hdev, prog_type, prog_fd, flags); }
+/** + * hid_bpf_allocate_context - Allocate a context to the given HID device + * + * @hid_id: the system unique identifier of the HID device + * + * @returns A pointer to &struct hid_bpf_ctx on success, %NULL on error. + */ +noinline struct hid_bpf_ctx * +hid_bpf_allocate_context(unsigned int hid_id) +{ + struct hid_device *hdev; + struct hid_bpf_ctx_kern *ctx_kern = NULL; + struct device *dev; + + if (!hid_bpf_ops) + return NULL; + + dev = bus_find_device(hid_bpf_ops->bus_type, NULL, &hid_id, device_match_id); + if (!dev) + return NULL; + + hdev = to_hid_device(dev); + + ctx_kern = kzalloc(sizeof(*ctx_kern), GFP_KERNEL); + if (!ctx_kern) + return NULL; + + ctx_kern->ctx.hid = hdev; + + return &ctx_kern->ctx; +} + +/** + * hid_bpf_release_context - Release the previously allocated context @ctx + * + * @ctx: the HID-BPF context to release + * + */ +noinline void +hid_bpf_release_context(struct hid_bpf_ctx *ctx) +{ + struct hid_bpf_ctx_kern *ctx_kern; + + if (!ctx) + return; + + ctx_kern = container_of(ctx, struct hid_bpf_ctx_kern, ctx); + + kfree(ctx_kern); +} + +/** + * hid_bpf_hw_request - Communicate with a HID device + * + * @ctx: the HID-BPF context previously allocated in hid_bpf_allocate_context() + * @buf: a %PTR_TO_MEM buffer + * @buf__sz: the size of the data to transfer + * @rtype: the type of the report (%HID_INPUT_REPORT, %HID_FEATURE_REPORT, %HID_OUTPUT_REPORT) + * @reqtype: the type of the request (%HID_REQ_GET_REPORT, %HID_REQ_SET_REPORT, ...) + * + * @returns %0 on success, a negative error code otherwise. + */ +noinline int +hid_bpf_hw_request(struct hid_bpf_ctx *ctx, __u8 *buf, size_t buf__sz, + enum hid_report_type rtype, enum hid_class_request reqtype) +{ + struct hid_device *hdev = (struct hid_device *)ctx->hid; /* discard const */ + struct hid_report *report; + struct hid_report_enum *report_enum; + u8 *dma_data; + u32 report_len; + int ret; + + /* check arguments */ + if (!ctx || !hid_bpf_ops || !buf) + return -EINVAL; + + switch (rtype) { + case HID_INPUT_REPORT: + case HID_OUTPUT_REPORT: + case HID_FEATURE_REPORT: + break; + default: + return -EINVAL; + } + + switch (reqtype) { + case HID_REQ_GET_REPORT: + case HID_REQ_GET_IDLE: + case HID_REQ_GET_PROTOCOL: + case HID_REQ_SET_REPORT: + case HID_REQ_SET_IDLE: + case HID_REQ_SET_PROTOCOL: + break; + default: + return -EINVAL; + } + + if (buf__sz < 1) + return -EINVAL; + + report_enum = hdev->report_enum + rtype; + report = hid_bpf_ops->hid_get_report(report_enum, buf); + if (!report) + return -EINVAL; + + report_len = hid_report_len(report); + + if (buf__sz > report_len) + buf__sz = report_len; + + dma_data = kmemdup(buf, buf__sz, GFP_KERNEL); + if (!dma_data) + return -ENOMEM; + + ret = hid_bpf_ops->hid_hw_raw_request(hdev, + dma_data[0], + dma_data, + buf__sz, + rtype, + reqtype); + + if (ret > 0) + memcpy(buf, dma_data, ret); + + kfree(dma_data); + return ret; +} + /* for syscall HID-BPF */ BTF_SET8_START(hid_bpf_syscall_kfunc_ids) BTF_ID_FLAGS(func, hid_bpf_attach_prog) +BTF_ID_FLAGS(func, hid_bpf_allocate_context, KF_ACQUIRE | KF_RET_NULL) +BTF_ID_FLAGS(func, hid_bpf_release_context, KF_RELEASE) +BTF_ID_FLAGS(func, hid_bpf_hw_request) BTF_SET8_END(hid_bpf_syscall_kfunc_ids)
static const struct btf_kfunc_id_set hid_bpf_syscall_kfunc_set = { diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c index cadd21a6f995..9d86f6fb5a45 100644 --- a/drivers/hid/hid-core.c +++ b/drivers/hid/hid-core.c @@ -2917,6 +2917,8 @@ int hid_check_keys_pressed(struct hid_device *hid) EXPORT_SYMBOL_GPL(hid_check_keys_pressed);
static struct hid_bpf_ops hid_ops = { + .hid_get_report = hid_get_report, + .hid_hw_raw_request = hid_hw_raw_request, .owner = THIS_MODULE, .bus_type = &hid_bus_type, }; diff --git a/include/linux/hid_bpf.h b/include/linux/hid_bpf.h index 1707e4492d7a..ef76894f7705 100644 --- a/include/linux/hid_bpf.h +++ b/include/linux/hid_bpf.h @@ -61,11 +61,15 @@ struct hid_bpf_ctx { int hid_bpf_device_event(struct hid_bpf_ctx *ctx);
/* Following functions are kfunc that we export to BPF programs */ -/* only available in tracing */ +/* available everywhere in HID-BPF */ __u8 *hid_bpf_get_data(struct hid_bpf_ctx *ctx, unsigned int offset, const size_t __sz);
/* only available in syscall */ int hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, __u32 flags); +int hid_bpf_hw_request(struct hid_bpf_ctx *ctx, __u8 *buf, size_t buf__sz, + enum hid_report_type rtype, enum hid_class_request reqtype); +struct hid_bpf_ctx *hid_bpf_allocate_context(unsigned int hid_id); +void hid_bpf_release_context(struct hid_bpf_ctx *ctx);
/* * Below is HID internal @@ -84,7 +88,14 @@ enum hid_bpf_prog_type { HID_BPF_PROG_TYPE_MAX, };
+struct hid_report_enum; + struct hid_bpf_ops { + struct hid_report *(*hid_get_report)(struct hid_report_enum *report_enum, const u8 *data); + int (*hid_hw_raw_request)(struct hid_device *hdev, + unsigned char reportnum, __u8 *buf, + size_t len, enum hid_report_type rtype, + enum hid_class_request reqtype); struct module *owner; struct bus_type *bus_type; };
Add tests for the newly implemented function. We only test the GET_REPORT path here because the other requests are pure HID protocol and do not affect the outcome of the BPF hook test.
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
---
no changes in v9
no changes in v8
no changes in v7
changes in v6:
- fixed copy/paste in prog_tests when calling ASSERT_OK
- removed the need for memcpy now that kfuncs can access ctx
changes in v5:
- use the new hid_bpf_allocate_context() API
- remove the need for ctx_in for syscall TEST_RUN
changes in v3:
- use the new hid_get_data API
- directly use HID_FEATURE_REPORT and HID_REQ_GET_REPORT from uapi
changes in v2:
- split the series by bpf/libbpf/hid/selftests and samples
---
 tools/testing/selftests/bpf/prog_tests/hid.c | 114 ++++++++++++++++---
 tools/testing/selftests/bpf/progs/hid.c | 43 +++++++
 2 files changed, 139 insertions(+), 18 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/hid.c b/tools/testing/selftests/bpf/prog_tests/hid.c index 47bc0a30c275..19172d3e0f44 100644 --- a/tools/testing/selftests/bpf/prog_tests/hid.c +++ b/tools/testing/selftests/bpf/prog_tests/hid.c @@ -77,12 +77,23 @@ static unsigned char rdesc[] = { 0xc0, /* END_COLLECTION */ };
+static u8 feature_data[] = { 1, 2 }; + struct attach_prog_args { int prog_fd; unsigned int hid; int retval; };
+struct hid_hw_request_syscall_args { + __u8 data[10]; + unsigned int hid; + int retval; + size_t size; + enum hid_report_type type; + __u8 request_type; +}; + static pthread_mutex_t uhid_started_mtx = PTHREAD_MUTEX_INITIALIZER; static pthread_cond_t uhid_started = PTHREAD_COND_INITIALIZER;
@@ -142,7 +153,7 @@ static void destroy(int fd)
static int uhid_event(int fd) { - struct uhid_event ev; + struct uhid_event ev, answer; ssize_t ret;
memset(&ev, 0, sizeof(ev)); @@ -183,6 +194,15 @@ static int uhid_event(int fd) break; case UHID_GET_REPORT: fprintf(stderr, "UHID_GET_REPORT from uhid-dev\n"); + + answer.type = UHID_GET_REPORT_REPLY; + answer.u.get_report_reply.id = ev.u.get_report.id; + answer.u.get_report_reply.err = ev.u.get_report.rnum == 1 ? 0 : -EIO; + answer.u.get_report_reply.size = sizeof(feature_data); + memcpy(answer.u.get_report_reply.data, feature_data, sizeof(feature_data)); + + uhid_write(fd, &answer); + break; case UHID_SET_REPORT: fprintf(stderr, "UHID_SET_REPORT from uhid-dev\n"); @@ -391,6 +411,7 @@ static int open_hidraw(int dev_id) struct test_params { struct hid *skel; int hidraw_fd; + int hid_id; };
static int prep_test(int dev_id, const char *prog_name, struct test_params *test_data) @@ -419,27 +440,33 @@ static int prep_test(int dev_id, const char *prog_name, struct test_params *test if (!ASSERT_OK_PTR(hid_skel, "hid_skel_open")) goto cleanup;
- prog = bpf_object__find_program_by_name(*hid_skel->skeleton->obj, prog_name); - if (!ASSERT_OK_PTR(prog, "find_prog_by_name")) - goto cleanup; + if (prog_name) { + prog = bpf_object__find_program_by_name(*hid_skel->skeleton->obj, prog_name); + if (!ASSERT_OK_PTR(prog, "find_prog_by_name")) + goto cleanup;
- bpf_program__set_autoload(prog, true); + bpf_program__set_autoload(prog, true);
- err = hid__load(hid_skel); - if (!ASSERT_OK(err, "hid_skel_load")) - goto cleanup; + err = hid__load(hid_skel); + if (!ASSERT_OK(err, "hid_skel_load")) + goto cleanup;
- attach_fd = bpf_program__fd(hid_skel->progs.attach_prog); - if (!ASSERT_GE(attach_fd, 0, "locate attach_prog")) { - err = attach_fd; - goto cleanup; - } + attach_fd = bpf_program__fd(hid_skel->progs.attach_prog); + if (!ASSERT_GE(attach_fd, 0, "locate attach_prog")) { + err = attach_fd; + goto cleanup; + }
- args.prog_fd = bpf_program__fd(prog); - err = bpf_prog_test_run_opts(attach_fd, &tattr); - snprintf(buf, sizeof(buf), "attach_hid(%s)", prog_name); - if (!ASSERT_EQ(args.retval, 0, buf)) - goto cleanup; + args.prog_fd = bpf_program__fd(prog); + err = bpf_prog_test_run_opts(attach_fd, &tattr); + snprintf(buf, sizeof(buf), "attach_hid(%s)", prog_name); + if (!ASSERT_EQ(args.retval, 0, buf)) + goto cleanup; + } else { + err = hid__load(hid_skel); + if (!ASSERT_OK(err, "hid_skel_load")) + goto cleanup; + }
hidraw_fd = open_hidraw(dev_id); if (!ASSERT_GE(hidraw_fd, 0, "open_hidraw")) @@ -447,6 +474,7 @@ static int prep_test(int dev_id, const char *prog_name, struct test_params *test
test_data->skel = hid_skel; test_data->hidraw_fd = hidraw_fd; + test_data->hid_id = hid_id;
return 0;
@@ -693,6 +721,54 @@ static int test_hid_change_report(int uhid_fd, int dev_id) return ret; }
+/* + * Attach hid_user_raw_request to the given uhid device, + * call the bpf program from userspace + * check that the program is called and does the expected. + */ +static int test_hid_user_raw_request_call(int uhid_fd, int dev_id) +{ + struct test_params params; + int err, prog_fd; + int ret = -1; + struct hid_hw_request_syscall_args args = { + .retval = -1, + .type = HID_FEATURE_REPORT, + .request_type = HID_REQ_GET_REPORT, + .size = 10, + }; + DECLARE_LIBBPF_OPTS(bpf_test_run_opts, tattrs, + .ctx_in = &args, + .ctx_size_in = sizeof(args), + ); + + err = prep_test(dev_id, NULL, ¶ms); + if (!ASSERT_EQ(err, 0, "prep_test()")) + goto cleanup; + + args.hid = params.hid_id; + args.data[0] = 1; /* report ID */ + + prog_fd = bpf_program__fd(params.skel->progs.hid_user_raw_request); + + err = bpf_prog_test_run_opts(prog_fd, &tattrs); + if (!ASSERT_EQ(err, 0, "bpf_prog_test_run_opts")) + goto cleanup; + + if (!ASSERT_EQ(args.retval, 2, "bpf_prog_test_run_opts_retval")) + goto cleanup; + + if (!ASSERT_EQ(args.data[1], 2, "hid_user_raw_request_check_in")) + goto cleanup; + + ret = 0; + +cleanup: + cleanup_test(¶ms); + + return ret; +} + void serial_test_hid_bpf(void) { int err, uhid_fd; @@ -720,6 +796,8 @@ void serial_test_hid_bpf(void) ASSERT_OK(err, "hid_attach_detach"); err = test_hid_change_report(uhid_fd, dev_id); ASSERT_OK(err, "hid_change_report"); + err = test_hid_user_raw_request_call(uhid_fd, dev_id); + ASSERT_OK(err, "hid_user_raw_request");
destroy(uhid_fd);
diff --git a/tools/testing/selftests/bpf/progs/hid.c b/tools/testing/selftests/bpf/progs/hid.c index ee7529c47ad8..fde76f63927b 100644 --- a/tools/testing/selftests/bpf/progs/hid.c +++ b/tools/testing/selftests/bpf/progs/hid.c @@ -10,6 +10,13 @@ extern __u8 *hid_bpf_get_data(struct hid_bpf_ctx *ctx, unsigned int offset, const size_t __sz) __ksym; extern int hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, u32 flags) __ksym; +extern struct hid_bpf_ctx *hid_bpf_allocate_context(unsigned int hid_id) __ksym; +extern void hid_bpf_release_context(struct hid_bpf_ctx *ctx) __ksym; +extern int hid_bpf_hw_request(struct hid_bpf_ctx *ctx, + __u8 *data, + size_t buf__sz, + enum hid_report_type type, + enum hid_class_request reqtype) __ksym;
struct attach_prog_args { int prog_fd; @@ -56,3 +63,39 @@ int attach_prog(struct attach_prog_args *ctx) 0); return 0; } + +struct hid_hw_request_syscall_args { + /* data needs to come at offset 0 so we can use it in calls */ + __u8 data[10]; + unsigned int hid; + int retval; + size_t size; + enum hid_report_type type; + __u8 request_type; +}; + +SEC("syscall") +int hid_user_raw_request(struct hid_hw_request_syscall_args *args) +{ + struct hid_bpf_ctx *ctx; + const size_t size = args->size; + int i, ret = 0; + + if (size > sizeof(args->data)) + return -7; /* -E2BIG */ + + ctx = hid_bpf_allocate_context(args->hid); + if (!ctx) + return -1; /* EPERM check */ + + ret = hid_bpf_hw_request(ctx, + args->data, + size, + args->type, + args->request_type); + args->retval = ret; + + hid_bpf_release_context(ctx); + + return 0; +}
Add a new tracepoint hid_bpf_rdesc_fixup() so we can trigger a report descriptor fixup in the bpf world.
Whenever the program gets attached or detached, the device is reconnected, meaning that userspace will see it disappear and reappear with the new report descriptor.
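For illustration, a minimal fixup program could look like the sketch below (vmlinux.h/bpf_helpers includes and the license declaration are omitted; the program name and the patched byte are arbitrary and not part of this series):

extern __u8 *hid_bpf_get_data(struct hid_bpf_ctx *ctx,
                              unsigned int offset,
                              const size_t __sz) __ksym;

SEC("?fmod_ret/hid_bpf_rdesc_fixup")
int BPF_PROG(example_rdesc_fixup, struct hid_bpf_ctx *hid_ctx)
{
        /* for rdesc fixups the buffer is always 4096 bytes (HID_MAX_DESCRIPTOR_SIZE) */
        __u8 *data = hid_bpf_get_data(hid_ctx, 0 /* offset */, 4096 /* size */);

        if (!data)
                return 0; /* EPERM check */

        /* purely illustrative: overwrite one byte of the descriptor */
        data[1] = 0x02;

        return 0; /* 0 keeps the original descriptor size */
}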
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
---
no changes in v9
no changes in v8
no changes in v7
changes in v6: - use BTF_ID to get the btf_id of hid_bpf_rdesc_fixup
changes in v5: - adapted for new API
not in v4
changes in v3:
- ensure the ctx.size is properly bounded by allocated size
- s/link_attached/post_link_attach/
- removed the switch statement with only one case
changes in v2:
- split the series by bpf/libbpf/hid/selftests and samples
---
 drivers/hid/bpf/hid_bpf_dispatch.c | 77 ++++++++++++++++++++++++++++-
 drivers/hid/bpf/hid_bpf_dispatch.h | 1 +
 drivers/hid/bpf/hid_bpf_jmp_table.c | 7 +++
 drivers/hid/hid-core.c | 3 +-
 include/linux/hid_bpf.h | 8 +++
 5 files changed, 94 insertions(+), 2 deletions(-)
diff --git a/drivers/hid/bpf/hid_bpf_dispatch.c b/drivers/hid/bpf/hid_bpf_dispatch.c index 85186640e5af..830ae2cfca14 100644 --- a/drivers/hid/bpf/hid_bpf_dispatch.c +++ b/drivers/hid/bpf/hid_bpf_dispatch.c @@ -85,6 +85,63 @@ dispatch_hid_bpf_device_event(struct hid_device *hdev, enum hid_report_type type } EXPORT_SYMBOL_GPL(dispatch_hid_bpf_device_event);
+/** + * hid_bpf_rdesc_fixup - Called when the probe function parses the report + * descriptor of the HID device + * + * @ctx: The HID-BPF context + * + * @return 0 on success and keep processing; a positive value to change the + * incoming size buffer; a negative error code to interrupt the processing + * of this event + * + * Declare an %fmod_ret tracing bpf program to this function and attach this + * program through hid_bpf_attach_prog() to have this helper called before any + * parsing of the report descriptor by HID. + */ +/* never used by the kernel but declared so we can load and attach a tracepoint */ +__weak noinline int hid_bpf_rdesc_fixup(struct hid_bpf_ctx *ctx) +{ + return 0; +} +ALLOW_ERROR_INJECTION(hid_bpf_rdesc_fixup, ERRNO); + +u8 *call_hid_bpf_rdesc_fixup(struct hid_device *hdev, u8 *rdesc, unsigned int *size) +{ + int ret; + struct hid_bpf_ctx_kern ctx_kern = { + .ctx = { + .hid = hdev, + .size = *size, + .allocated_size = HID_MAX_DESCRIPTOR_SIZE, + }, + }; + + ctx_kern.data = kmemdup(rdesc, ctx_kern.ctx.allocated_size, GFP_KERNEL); + if (!ctx_kern.data) + goto ignore_bpf; + + ret = hid_bpf_prog_run(hdev, HID_BPF_PROG_TYPE_RDESC_FIXUP, &ctx_kern); + if (ret < 0) + goto ignore_bpf; + + if (ret) { + if (ret > ctx_kern.ctx.allocated_size) + goto ignore_bpf; + + *size = ret; + } + + rdesc = krealloc(ctx_kern.data, *size, GFP_KERNEL); + + return rdesc; + + ignore_bpf: + kfree(ctx_kern.data); + return kmemdup(rdesc, *size, GFP_KERNEL); +} +EXPORT_SYMBOL_GPL(call_hid_bpf_rdesc_fixup); + /** * hid_bpf_get_data - Get the kernel memory pointer associated with the context @ctx * @@ -176,6 +233,14 @@ static int hid_bpf_allocate_event_data(struct hid_device *hdev) return __hid_bpf_allocate_data(hdev, &hdev->bpf.device_data, &hdev->bpf.allocated_data); }
+int hid_bpf_reconnect(struct hid_device *hdev) +{ + if (!test_and_set_bit(ffs(HID_STAT_REPROBED), &hdev->status)) + return device_reprobe(&hdev->dev); + + return 0; +} + /** * hid_bpf_attach_prog - Attach the given @prog_fd to the given HID device * @@ -217,7 +282,17 @@ hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, __u32 flags) return err; }
- return __hid_bpf_attach_prog(hdev, prog_type, prog_fd, flags); + err = __hid_bpf_attach_prog(hdev, prog_type, prog_fd, flags); + if (err) + return err; + + if (prog_type == HID_BPF_PROG_TYPE_RDESC_FIXUP) { + err = hid_bpf_reconnect(hdev); + if (err) + return err; + } + + return 0; }
/** diff --git a/drivers/hid/bpf/hid_bpf_dispatch.h b/drivers/hid/bpf/hid_bpf_dispatch.h index 98c378e18b2b..1d1d5bcccbd7 100644 --- a/drivers/hid/bpf/hid_bpf_dispatch.h +++ b/drivers/hid/bpf/hid_bpf_dispatch.h @@ -18,6 +18,7 @@ int __hid_bpf_attach_prog(struct hid_device *hdev, enum hid_bpf_prog_type prog_t void __hid_bpf_destroy_device(struct hid_device *hdev); int hid_bpf_prog_run(struct hid_device *hdev, enum hid_bpf_prog_type type, struct hid_bpf_ctx_kern *ctx_kern); +int hid_bpf_reconnect(struct hid_device *hdev);
struct bpf_prog;
diff --git a/drivers/hid/bpf/hid_bpf_jmp_table.c b/drivers/hid/bpf/hid_bpf_jmp_table.c index 0f20deab81ff..3a5ab70d1a95 100644 --- a/drivers/hid/bpf/hid_bpf_jmp_table.c +++ b/drivers/hid/bpf/hid_bpf_jmp_table.c @@ -59,12 +59,15 @@ static DECLARE_WORK(release_work, hid_bpf_release_progs);
BTF_ID_LIST(hid_bpf_btf_ids) BTF_ID(func, hid_bpf_device_event) /* HID_BPF_PROG_TYPE_DEVICE_EVENT */ +BTF_ID(func, hid_bpf_rdesc_fixup) /* HID_BPF_PROG_TYPE_RDESC_FIXUP */
static int hid_bpf_max_programs(enum hid_bpf_prog_type type) { switch (type) { case HID_BPF_PROG_TYPE_DEVICE_EVENT: return HID_BPF_MAX_PROGS_PER_DEV; + case HID_BPF_PROG_TYPE_RDESC_FIXUP: + return 1; default: return -EINVAL; } @@ -234,6 +237,10 @@ static void hid_bpf_release_progs(struct work_struct *work) if (next->hdev == hdev && next->type == type) next->hdev = NULL; } + + /* if type was rdesc fixup, reconnect device */ + if (type == HID_BPF_PROG_TYPE_RDESC_FIXUP) + hid_bpf_reconnect(hdev); } }
diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c index 9d86f6fb5a45..af67a527e0b1 100644 --- a/drivers/hid/hid-core.c +++ b/drivers/hid/hid-core.c @@ -1218,7 +1218,8 @@ int hid_open_report(struct hid_device *device) return -ENODEV; size = device->dev_rsize;
- buf = kmemdup(start, size, GFP_KERNEL); + /* call_hid_bpf_rdesc_fixup() ensures we work on a copy of rdesc */ + buf = call_hid_bpf_rdesc_fixup(device, start, &size); if (buf == NULL) return -ENOMEM;
diff --git a/include/linux/hid_bpf.h b/include/linux/hid_bpf.h index ef76894f7705..2fad6bb489ca 100644 --- a/include/linux/hid_bpf.h +++ b/include/linux/hid_bpf.h @@ -59,6 +59,7 @@ struct hid_bpf_ctx {
/* Following functions are tracepoints that BPF programs can attach to */ int hid_bpf_device_event(struct hid_bpf_ctx *ctx); +int hid_bpf_rdesc_fixup(struct hid_bpf_ctx *ctx);
/* Following functions are kfunc that we export to BPF programs */ /* available everywhere in HID-BPF */ @@ -85,6 +86,7 @@ int __hid_bpf_tail_call(struct hid_bpf_ctx *ctx); enum hid_bpf_prog_type { HID_BPF_PROG_TYPE_UNDEF = -1, HID_BPF_PROG_TYPE_DEVICE_EVENT, /* an event is emitted from the device */ + HID_BPF_PROG_TYPE_RDESC_FIXUP, HID_BPF_PROG_TYPE_MAX, };
@@ -128,6 +130,7 @@ int hid_bpf_connect_device(struct hid_device *hdev); void hid_bpf_disconnect_device(struct hid_device *hdev); void hid_bpf_destroy_device(struct hid_device *hid); void hid_bpf_device_init(struct hid_device *hid); +u8 *call_hid_bpf_rdesc_fixup(struct hid_device *hdev, u8 *rdesc, unsigned int *size); #else /* CONFIG_HID_BPF */ static inline u8 *dispatch_hid_bpf_device_event(struct hid_device *hid, enum hid_report_type type, u8 *data, u32 *size, int interrupt) { return 0; } @@ -135,6 +138,11 @@ static inline int hid_bpf_connect_device(struct hid_device *hdev) { return 0; } static inline void hid_bpf_disconnect_device(struct hid_device *hdev) {} static inline void hid_bpf_destroy_device(struct hid_device *hid) {} static inline void hid_bpf_device_init(struct hid_device *hid) {} +static inline u8 *call_hid_bpf_rdesc_fixup(struct hid_device *hdev, u8 *rdesc, unsigned int *size) +{ + return kmemdup(rdesc, *size, GFP_KERNEL); +} + #endif /* CONFIG_HID_BPF */
#endif /* __HID_BPF_H */
Simple report descriptor override in HID: replace part of the report descriptor with a static definition stored in the BPF program.
Note that this test should be run last because we disconnect/reconnect the device, which changes the underlying uhid device.
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
---
no changes in v9
no changes in v8
no changes in v7
no changes in v6
changes in v5: - amended for the new API
not in v4
changes in v3: - added a comment to mention that this test needs to be run last
changes in v2:
- split the series by bpf/libbpf/hid/selftests and samples
---
 tools/testing/selftests/bpf/prog_tests/hid.c | 76 ++++++++++++++++++++
 tools/testing/selftests/bpf/progs/hid.c | 53 ++++++++++++
 2 files changed, 129 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/hid.c b/tools/testing/selftests/bpf/prog_tests/hid.c index 19172d3e0f44..9dc5f0038472 100644 --- a/tools/testing/selftests/bpf/prog_tests/hid.c +++ b/tools/testing/selftests/bpf/prog_tests/hid.c @@ -9,6 +9,7 @@ #include <dirent.h> #include <poll.h> #include <stdbool.h> +#include <linux/hidraw.h> #include <linux/uhid.h>
static unsigned char rdesc[] = { @@ -769,6 +770,73 @@ static int test_hid_user_raw_request_call(int uhid_fd, int dev_id) return ret; }
+/* + * Attach hid_rdesc_fixup to the given uhid device, + * retrieve and open the matching hidraw node, + * check that the hidraw report descriptor has been updated. + */ +static int test_rdesc_fixup(void) +{ + struct hidraw_report_descriptor rpt_desc = {0}; + struct test_params params; + int err, uhid_fd, desc_size, hidraw_fd = -1, ret = -1; + int dev_id; + void *uhid_err; + pthread_t tid; + bool started = false; + + dev_id = rand() % 1024; + + uhid_fd = setup_uhid(dev_id); + if (!ASSERT_GE(uhid_fd, 0, "setup uhid")) + return uhid_fd; + + err = prep_test(dev_id, "hid_rdesc_fixup", ¶ms); + if (!ASSERT_EQ(err, 0, "prep_test(hid_rdesc_fixup)")) + goto cleanup; + + err = uhid_start_listener(&tid, uhid_fd); + ASSERT_OK(err, "uhid_start_listener"); + + started = true; + + hidraw_fd = params.hidraw_fd; + + /* check that hid_rdesc_fixup() was executed */ + ASSERT_EQ(params.skel->data->callback2_check, 0x21, "callback_check2"); + + /* read the exposed report descriptor from hidraw */ + err = ioctl(hidraw_fd, HIDIOCGRDESCSIZE, &desc_size); + if (!ASSERT_GE(err, 0, "HIDIOCGRDESCSIZE")) + goto cleanup; + + /* ensure the new size of the rdesc is bigger than the old one */ + if (!ASSERT_GT(desc_size, sizeof(rdesc), "new_rdesc_size")) + goto cleanup; + + rpt_desc.size = desc_size; + err = ioctl(hidraw_fd, HIDIOCGRDESC, &rpt_desc); + if (!ASSERT_GE(err, 0, "HIDIOCGRDESC")) + goto cleanup; + + if (!ASSERT_EQ(rpt_desc.value[4], 0x42, "hid_rdesc_fixup")) + goto cleanup; + + ret = 0; + +cleanup: + cleanup_test(¶ms); + + if (started) { + destroy(uhid_fd); + pthread_join(tid, &uhid_err); + err = (int)(long)uhid_err; + CHECK_FAIL(err); + } + + return ret; +} + void serial_test_hid_bpf(void) { int err, uhid_fd; @@ -799,6 +867,14 @@ void serial_test_hid_bpf(void) err = test_hid_user_raw_request_call(uhid_fd, dev_id); ASSERT_OK(err, "hid_user_raw_request");
+ /* + * this test should be run last because we disconnect/reconnect + * the device, meaning that it changes the overall uhid device + * and messes up with the thread that reads uhid events. + */ + err = test_rdesc_fixup(); + ASSERT_OK(err, "hid_rdesc_fixup"); + destroy(uhid_fd);
pthread_join(tid, &uhid_err); diff --git a/tools/testing/selftests/bpf/progs/hid.c b/tools/testing/selftests/bpf/progs/hid.c index fde76f63927b..815ff94321c9 100644 --- a/tools/testing/selftests/bpf/progs/hid.c +++ b/tools/testing/selftests/bpf/progs/hid.c @@ -99,3 +99,56 @@ int hid_user_raw_request(struct hid_hw_request_syscall_args *args)
return 0; } + +static const __u8 rdesc[] = { + 0x05, 0x01, /* USAGE_PAGE (Generic Desktop) */ + 0x09, 0x32, /* USAGE (Z) */ + 0x95, 0x01, /* REPORT_COUNT (1) */ + 0x81, 0x06, /* INPUT (Data,Var,Rel) */ + + 0x06, 0x00, 0xff, /* Usage Page (Vendor Defined Page 1) */ + 0x19, 0x01, /* USAGE_MINIMUM (1) */ + 0x29, 0x03, /* USAGE_MAXIMUM (3) */ + 0x15, 0x00, /* LOGICAL_MINIMUM (0) */ + 0x25, 0x01, /* LOGICAL_MAXIMUM (1) */ + 0x95, 0x03, /* REPORT_COUNT (3) */ + 0x75, 0x01, /* REPORT_SIZE (1) */ + 0x91, 0x02, /* Output (Data,Var,Abs) */ + 0x95, 0x01, /* REPORT_COUNT (1) */ + 0x75, 0x05, /* REPORT_SIZE (5) */ + 0x91, 0x01, /* Output (Cnst,Var,Abs) */ + + 0x06, 0x00, 0xff, /* Usage Page (Vendor Defined Page 1) */ + 0x19, 0x06, /* USAGE_MINIMUM (6) */ + 0x29, 0x08, /* USAGE_MAXIMUM (8) */ + 0x15, 0x00, /* LOGICAL_MINIMUM (0) */ + 0x25, 0x01, /* LOGICAL_MAXIMUM (1) */ + 0x95, 0x03, /* REPORT_COUNT (3) */ + 0x75, 0x01, /* REPORT_SIZE (1) */ + 0xb1, 0x02, /* Feature (Data,Var,Abs) */ + 0x95, 0x01, /* REPORT_COUNT (1) */ + 0x75, 0x05, /* REPORT_SIZE (5) */ + 0x91, 0x01, /* Output (Cnst,Var,Abs) */ + + 0xc0, /* END_COLLECTION */ + 0xc0, /* END_COLLECTION */ +}; + +SEC("?fmod_ret/hid_bpf_rdesc_fixup") +int BPF_PROG(hid_rdesc_fixup, struct hid_bpf_ctx *hid_ctx) +{ + __u8 *data = hid_bpf_get_data(hid_ctx, 0 /* offset */, 4096 /* size */); + + if (!data) + return 0; /* EPERM check */ + + callback2_check = data[4]; + + /* insert rdesc at offset 73 */ + __builtin_memcpy(&data[73], rdesc, sizeof(rdesc)); + + /* Change Usage Vendor globally */ + data[4] = 0x42; + + return sizeof(rdesc) + 73; +}
Insert 3 programs to check that the ordering is enforced: they are attached in the order '2', '1', '3', but '1' (attached with HID_BPF_FLAG_INSERT_HEAD) is supposed to be executed first.
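For reference, a trimmed sketch of how one of these programs is pushed at the head of the list through the attach_prog SEC("syscall") entry point (this mirrors the selftest below; it assumes the hid skeleton is already loaded with the hid_test_insert* programs set to autoload, and the function name is made up for the example):

struct attach_prog_args {
        int prog_fd;
        unsigned int hid;
        unsigned int flags;
        int retval;
};

static int attach_insert_head(struct hid *skel, int hid_id)
{
        struct attach_prog_args args = {
                .prog_fd = bpf_program__fd(skel->progs.hid_test_insert1),
                .hid = hid_id,
                .flags = HID_BPF_FLAG_INSERT_HEAD, /* run before already attached programs */
                .retval = -1,
        };
        DECLARE_LIBBPF_OPTS(bpf_test_run_opts, tattr,
                .ctx_in = &args,
                .ctx_size_in = sizeof(args),
        );
        int err;

        err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.attach_prog), &tattr);
        if (err)
                return err;

        return args.retval; /* 0 when the attach succeeded */
}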
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
---
no changes in v9
no changes in v8
no changes in v7
changes in v6: - fixed copy/paste in ASSERT_OK and test execution order
changes in v5: - use the new API
not in v4
changes in v3: - use the new hid_get_data API
new in v2
---
 tools/testing/selftests/bpf/prog_tests/hid.c | 107 +++++++++++++++++++
 tools/testing/selftests/bpf/progs/hid.c | 54 +++++++++-
 2 files changed, 160 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/hid.c b/tools/testing/selftests/bpf/prog_tests/hid.c index 9dc5f0038472..a86e16554e68 100644 --- a/tools/testing/selftests/bpf/prog_tests/hid.c +++ b/tools/testing/selftests/bpf/prog_tests/hid.c @@ -9,6 +9,7 @@ #include <dirent.h> #include <poll.h> #include <stdbool.h> +#include <linux/hid_bpf.h> #include <linux/hidraw.h> #include <linux/uhid.h>
@@ -83,6 +84,7 @@ static u8 feature_data[] = { 1, 2 }; struct attach_prog_args { int prog_fd; unsigned int hid; + unsigned int flags; int retval; };
@@ -770,6 +772,109 @@ static int test_hid_user_raw_request_call(int uhid_fd, int dev_id) return ret; }
+/* + * Attach hid_insert{0,1,2} to the given uhid device, + * retrieve and open the matching hidraw node, + * inject one event in the uhid device, + * check that the programs have been inserted in the correct order. + */ +static int test_hid_attach_flags(int uhid_fd, int dev_id) +{ + struct hid *hid_skel = NULL; + u8 buf[64] = {0}; + int hidraw_fd = -1; + int hid_id, attach_fd, err = -EINVAL; + struct attach_prog_args args = { + .retval = -1, + }; + DECLARE_LIBBPF_OPTS(bpf_test_run_opts, tattr, + .ctx_in = &args, + .ctx_size_in = sizeof(args), + ); + + /* locate the uevent file of the created device */ + hid_id = get_hid_id(dev_id); + if (!ASSERT_GE(hid_id, 0, "locate uhid device id")) + goto cleanup; + + args.hid = hid_id; + + hid_skel = hid__open(); + if (!ASSERT_OK_PTR(hid_skel, "hid_skel_open")) + goto cleanup; + + bpf_program__set_autoload(hid_skel->progs.hid_test_insert1, true); + bpf_program__set_autoload(hid_skel->progs.hid_test_insert2, true); + bpf_program__set_autoload(hid_skel->progs.hid_test_insert3, true); + + err = hid__load(hid_skel); + if (!ASSERT_OK(err, "hid_skel_load")) + goto cleanup; + + attach_fd = bpf_program__fd(hid_skel->progs.attach_prog); + if (!ASSERT_GE(attach_fd, 0, "locate attach_prog")) { + err = attach_fd; + goto cleanup; + } + + /* attach hid_test_insert2 program */ + args.prog_fd = bpf_program__fd(hid_skel->progs.hid_test_insert2); + args.flags = HID_BPF_FLAG_NONE; + args.retval = 1; + err = bpf_prog_test_run_opts(attach_fd, &tattr); + if (!ASSERT_EQ(args.retval, 0, "attach_hid_test_insert2")) + goto cleanup; + + /* then attach hid_test_insert1 program before the previous*/ + args.prog_fd = bpf_program__fd(hid_skel->progs.hid_test_insert1); + args.flags = HID_BPF_FLAG_INSERT_HEAD; + args.retval = 1; + err = bpf_prog_test_run_opts(attach_fd, &tattr); + if (!ASSERT_EQ(args.retval, 0, "attach_hid_test_insert1")) + goto cleanup; + + /* finally attach hid_test_insert3 at the end */ + args.prog_fd = bpf_program__fd(hid_skel->progs.hid_test_insert3); + args.flags = HID_BPF_FLAG_NONE; + args.retval = 1; + err = bpf_prog_test_run_opts(attach_fd, &tattr); + if (!ASSERT_EQ(args.retval, 0, "attach_hid_test_insert3")) + goto cleanup; + + hidraw_fd = open_hidraw(dev_id); + if (!ASSERT_GE(hidraw_fd, 0, "open_hidraw")) + goto cleanup; + + /* inject one event */ + buf[0] = 1; + send_event(uhid_fd, buf, 6); + + /* read the data from hidraw */ + memset(buf, 0, sizeof(buf)); + err = read(hidraw_fd, buf, sizeof(buf)); + if (!ASSERT_EQ(err, 6, "read_hidraw")) + goto cleanup; + + if (!ASSERT_EQ(buf[1], 1, "hid_test_insert1")) + goto cleanup; + + if (!ASSERT_EQ(buf[2], 2, "hid_test_insert2")) + goto cleanup; + + if (!ASSERT_EQ(buf[3], 3, "hid_test_insert3")) + goto cleanup; + + err = 0; + + cleanup: + if (hidraw_fd >= 0) + close(hidraw_fd); + + hid__destroy(hid_skel); + + return err; +} + /* * Attach hid_rdesc_fixup to the given uhid device, * retrieve and open the matching hidraw node, @@ -866,6 +971,8 @@ void serial_test_hid_bpf(void) ASSERT_OK(err, "hid_change_report"); err = test_hid_user_raw_request_call(uhid_fd, dev_id); ASSERT_OK(err, "hid_user_raw_request"); + err = test_hid_attach_flags(uhid_fd, dev_id); + ASSERT_OK(err, "hid_attach_flags");
/* * this test should be run last because we disconnect/reconnect diff --git a/tools/testing/selftests/bpf/progs/hid.c b/tools/testing/selftests/bpf/progs/hid.c index 815ff94321c9..eb869a80c254 100644 --- a/tools/testing/selftests/bpf/progs/hid.c +++ b/tools/testing/selftests/bpf/progs/hid.c @@ -21,6 +21,7 @@ extern int hid_bpf_hw_request(struct hid_bpf_ctx *ctx, struct attach_prog_args { int prog_fd; unsigned int hid; + unsigned int flags; int retval; };
@@ -60,7 +61,7 @@ int attach_prog(struct attach_prog_args *ctx) { ctx->retval = hid_bpf_attach_prog(ctx->hid, ctx->prog_fd, - 0); + ctx->flags); return 0; }
@@ -152,3 +153,54 @@ int BPF_PROG(hid_rdesc_fixup, struct hid_bpf_ctx *hid_ctx)
return sizeof(rdesc) + 73; } + +SEC("?fmod_ret/hid_bpf_device_event") +int BPF_PROG(hid_test_insert1, struct hid_bpf_ctx *hid_ctx) +{ + __u8 *data = hid_bpf_get_data(hid_ctx, 0 /* offset */, 4 /* size */); + + if (!data) + return 0; /* EPERM check */ + + /* we need to be run first */ + if (data[2] || data[3]) + return -1; + + data[1] = 1; + + return 0; +} + +SEC("?fmod_ret/hid_bpf_device_event") +int BPF_PROG(hid_test_insert2, struct hid_bpf_ctx *hid_ctx) +{ + __u8 *data = hid_bpf_get_data(hid_ctx, 0 /* offset */, 4 /* size */); + + if (!data) + return 0; /* EPERM check */ + + /* after insert0 and before insert2 */ + if (!data[1] || data[3]) + return -1; + + data[2] = 2; + + return 0; +} + +SEC("?fmod_ret/hid_bpf_device_event") +int BPF_PROG(hid_test_insert3, struct hid_bpf_ctx *hid_ctx) +{ + __u8 *data = hid_bpf_get_data(hid_ctx, 0 /* offset */, 4 /* size */); + + if (!data) + return 0; /* EPERM check */ + + /* at the end */ + if (!data[1] || !data[2]) + return -1; + + data[3] = 3; + + return 0; +}
Everything should already be available in the selftests part of the tree, but an example that does not rely on uhid and hidraw will be easier for users to follow.
This example will probably only ever work on the Etekcity Scroll 6E, because the various raw values need to be adapted to the actual device.
On that device, the X and Y axes will be swapped and inverted; on any other device, chances are high that the device will not work until Ctrl-C is hit.
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
---
changes in v9: - amended the usage part
no changes in v8
changes in v7: - remove unnecessary __must_check definition
changes in v6: - clean up code by removing old comments
changes in v5: - bring back the same features as v3, with the new API
changes in v4: - dropped the not-yet-implemented rdesc_fixup - use the new API
changes in v3: - use the new hid_get_data API - add a comment for the report descriptor fixup to explain what is done
changes in v2: - split the series by bpf/libbpf/hid/selftests and samples
fix hid_mouse --- samples/bpf/.gitignore | 1 + samples/bpf/Makefile | 23 ++++++ samples/bpf/hid_mouse.bpf.c | 134 ++++++++++++++++++++++++++++++ samples/bpf/hid_mouse.c | 161 ++++++++++++++++++++++++++++++++++++ 4 files changed, 319 insertions(+) create mode 100644 samples/bpf/hid_mouse.bpf.c create mode 100644 samples/bpf/hid_mouse.c
diff --git a/samples/bpf/.gitignore b/samples/bpf/.gitignore index 0e7bfdbff80a..65440bd618b2 100644 --- a/samples/bpf/.gitignore +++ b/samples/bpf/.gitignore @@ -2,6 +2,7 @@ cpustat fds_example hbm +hid_mouse ibumad lathist lwt_len_hist diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile index 727da3c5879b..a965bbfaca47 100644 --- a/samples/bpf/Makefile +++ b/samples/bpf/Makefile @@ -57,6 +57,8 @@ tprogs-y += xdp_redirect_map tprogs-y += xdp_redirect tprogs-y += xdp_monitor
+tprogs-y += hid_mouse + # Libbpf dependencies LIBBPF_SRC = $(TOOLS_PATH)/lib/bpf LIBBPF_OUTPUT = $(abspath $(BPF_SAMPLES_PATH))/libbpf @@ -119,6 +121,8 @@ xdp_redirect-objs := xdp_redirect_user.o $(XDP_SAMPLE) xdp_monitor-objs := xdp_monitor_user.o $(XDP_SAMPLE) xdp_router_ipv4-objs := xdp_router_ipv4_user.o $(XDP_SAMPLE)
+hid_mouse-objs := hid_mouse.o + # Tell kbuild to always build the programs always-y := $(tprogs-y) always-y += sockex1_kern.o @@ -338,6 +342,8 @@ $(obj)/hbm_out_kern.o: $(src)/hbm.h $(src)/hbm_kern.h $(obj)/hbm.o: $(src)/hbm.h $(obj)/hbm_edt_kern.o: $(src)/hbm.h $(src)/hbm_kern.h
+$(obj)/hid_mouse.o: $(obj)/hid_mouse.skel.h + # Override includes for xdp_sample_user.o because $(srctree)/usr/include in # TPROGS_CFLAGS causes conflicts XDP_SAMPLE_CFLAGS += -Wall -O2 \ @@ -422,6 +428,23 @@ $(BPF_SKELS_LINKED): $(BPF_OBJS_LINKED) $(BPFTOOL) @echo " BPF GEN-SKEL" $(@:.skel.h=) $(Q)$(BPFTOOL) gen skeleton $(@:.skel.h=.lbpf.o) name $(notdir $(@:.skel.h=)) > $@
+# Generate BPF skeletons for non XDP progs +OTHER_BPF_SKELS := hid_mouse.skel.h + +hid_mouse.skel.h-deps := hid_mouse.bpf.o + +OTHER_BPF_SRCS_LINKED := $(patsubst %.skel.h,%.bpf.c, $(OTHER_BPF_SKELS)) +OTHER_BPF_OBJS_LINKED := $(patsubst %.bpf.c,$(obj)/%.bpf.o, $(OTHER_BPF_SRCS_LINKED)) +OTHER_BPF_SKELS_LINKED := $(addprefix $(obj)/,$(OTHER_BPF_SKELS)) + +$(OTHER_BPF_SKELS_LINKED): $(OTHER_BPF_OBJS_LINKED) $(BPFTOOL) + @echo " BPF GEN-OBJ " $(@:.skel.h=) + $(Q)$(BPFTOOL) gen object $(@:.skel.h=.lbpf.o) $(addprefix $(obj)/,$($(@F)-deps)) + @echo " BPF GEN-SKEL" $(@:.skel.h=) + $(Q)$(BPFTOOL) gen skeleton $(@:.skel.h=.lbpf.o) name $(notdir $(@:.skel.h=_lskel)) > $@ +# $(call msg,GEN-SKEL,$@) +# $(Q)$(BPFTOOL) gen skeleton $< > $@ + # asm/sysreg.h - inline assembly used by it is incompatible with llvm. # But, there is no easy way to fix it, so just exclude it since it is # useless for BPF samples. diff --git a/samples/bpf/hid_mouse.bpf.c b/samples/bpf/hid_mouse.bpf.c new file mode 100644 index 000000000000..0113e603f7a7 --- /dev/null +++ b/samples/bpf/hid_mouse.bpf.c @@ -0,0 +1,134 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include "vmlinux.h" +#include <bpf/bpf_helpers.h> +#include <bpf/bpf_tracing.h> + +/* following are kfuncs exported by HID for HID-BPF */ +extern int hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, u32 flags) __ksym; +extern __u8 *hid_bpf_get_data(struct hid_bpf_ctx *ctx, + unsigned int offset, + const size_t __sz) __ksym; +extern void hid_bpf_data_release(__u8 *data) __ksym; +extern int hid_bpf_hw_request(struct hid_bpf_ctx *ctx) __ksym; + +struct attach_prog_args { + int prog_fd; + unsigned int hid; + int retval; +}; + +SEC("syscall") +int attach_prog(struct attach_prog_args *ctx) +{ + ctx->retval = hid_bpf_attach_prog(ctx->hid, + ctx->prog_fd, + 0); + return 0; +} + +SEC("fmod_ret/hid_bpf_device_event") +int BPF_PROG(hid_y_event, struct hid_bpf_ctx *hctx) +{ + s16 y; + __u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 9 /* size */); + + if (!data) + return 0; /* EPERM check */ + + bpf_printk("event: size: %d", hctx->size); + bpf_printk("incoming event: %02x %02x %02x", + data[0], + data[1], + data[2]); + bpf_printk(" %02x %02x %02x", + data[3], + data[4], + data[5]); + bpf_printk(" %02x %02x %02x", + data[6], + data[7], + data[8]); + + y = data[3] | (data[4] << 8); + + y = -y; + + data[3] = y & 0xFF; + data[4] = (y >> 8) & 0xFF; + + bpf_printk("modified event: %02x %02x %02x", + data[0], + data[1], + data[2]); + bpf_printk(" %02x %02x %02x", + data[3], + data[4], + data[5]); + bpf_printk(" %02x %02x %02x", + data[6], + data[7], + data[8]); + + return 0; +} + +SEC("fmod_ret/hid_bpf_device_event") +int BPF_PROG(hid_x_event, struct hid_bpf_ctx *hctx) +{ + s16 x; + __u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 9 /* size */); + + if (!data) + return 0; /* EPERM check */ + + x = data[1] | (data[2] << 8); + + x = -x; + + data[1] = x & 0xFF; + data[2] = (x >> 8) & 0xFF; + return 0; +} + +SEC("fmod_ret/hid_bpf_rdesc_fixup") +int BPF_PROG(hid_rdesc_fixup, struct hid_bpf_ctx *hctx) +{ + __u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 4096 /* size */); + + if (!data) + return 0; /* EPERM check */ + + bpf_printk("rdesc: %02x %02x %02x", + data[0], + data[1], + data[2]); + bpf_printk(" %02x %02x %02x", + data[3], + data[4], + data[5]); + bpf_printk(" %02x %02x %02x ...", + data[6], + data[7], + data[8]); + + /* + * The original report descriptor contains: + * + * 0x05, 0x01, // Usage Page (Generic Desktop) 30 + * 0x16, 0x01, 0x80, // Logical Minimum (-32767) 32 + 
* 0x26, 0xff, 0x7f, // Logical Maximum (32767) 35 + * 0x09, 0x30, // Usage (X) 38 + * 0x09, 0x31, // Usage (Y) 40 + * + * So byte 39 contains Usage X and byte 41 Usage Y. + * + * We simply swap the axes here. + */ + data[39] = 0x31; + data[41] = 0x30; + + return 0; +} + +char _license[] SEC("license") = "GPL"; diff --git a/samples/bpf/hid_mouse.c b/samples/bpf/hid_mouse.c new file mode 100644 index 000000000000..bea3650787c5 --- /dev/null +++ b/samples/bpf/hid_mouse.c @@ -0,0 +1,161 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2022 Benjamin Tissoires + * + * This is a pure HID-BPF example, and should be considered as such: + * on the Etekcity Scroll 6E, the X and Y axes will be swapped and + * inverted. On any other device... Not sure what this will do. + * + * This C main file is generic though. To adapt the code and test, users + * must amend only the .bpf.c file, which this program will load any + * eBPF program it finds. + */ + +#include <assert.h> +#include <errno.h> +#include <fcntl.h> +#include <libgen.h> +#include <signal.h> +#include <stdbool.h> +#include <stdio.h> +#include <stdlib.h> +#include <string.h> +#include <sys/resource.h> +#include <unistd.h> + +#include <linux/bpf.h> +#include <linux/errno.h> + +#include "bpf_util.h" +#include <bpf/bpf.h> +#include <bpf/libbpf.h> + +#include "hid_mouse.skel.h" + +static bool running = true; + +struct attach_prog_args { + int prog_fd; + unsigned int hid; + int retval; +}; + +static void int_exit(int sig) +{ + running = false; + exit(0); +} + +static void usage(const char *prog) +{ + fprintf(stderr, + "%s: %s /sys/bus/hid/devices/0BUS:0VID:0PID:00ID\n\n", + __func__, prog); + fprintf(stderr, + "This program will upload and attach a HID-BPF program to the given device.\n" + "On the Etekcity Scroll 6E, the X and Y axis will be inverted, but on any other\n" + "device, chances are high that the device will not be working anymore\n\n" + "consider this as a demo and adapt the eBPF program to your needs\n" + "Hit Ctrl-C to unbind the program and reset the device\n"); +} + +static int get_hid_id(const char *path) +{ + const char *str_id, *dir; + char uevent[1024]; + int fd; + + memset(uevent, 0, sizeof(uevent)); + snprintf(uevent, sizeof(uevent) - 1, "%s/uevent", path); + + fd = open(uevent, O_RDONLY | O_NONBLOCK); + if (fd < 0) + return -ENOENT; + + close(fd); + + dir = basename((char *)path); + + str_id = dir + sizeof("0003:0001:0A37."); + return (int)strtol(str_id, NULL, 16); +} + +int main(int argc, char **argv) +{ + struct hid_mouse_lskel *skel; + struct bpf_program *prog; + int err; + const char *optstr = ""; + const char *sysfs_path; + int opt, hid_id, attach_fd; + struct attach_prog_args args = { + .retval = -1, + }; + DECLARE_LIBBPF_OPTS(bpf_test_run_opts, tattr, + .ctx_in = &args, + .ctx_size_in = sizeof(args), + ); + + while ((opt = getopt(argc, argv, optstr)) != -1) { + switch (opt) { + default: + usage(basename(argv[0])); + return 1; + } + } + + if (optind == argc) { + usage(basename(argv[0])); + return 1; + } + + sysfs_path = argv[optind]; + if (!sysfs_path) { + perror("sysfs"); + return 1; + } + + skel = hid_mouse_lskel__open_and_load(); + if (!skel) { + fprintf(stderr, "%s %s:%d", __func__, __FILE__, __LINE__); + return -1; + } + + hid_id = get_hid_id(sysfs_path); + + if (hid_id < 0) { + fprintf(stderr, "can not open HID device: %m\n"); + return 1; + } + args.hid = hid_id; + + attach_fd = bpf_program__fd(skel->progs.attach_prog); + if (attach_fd < 0) { + fprintf(stderr, "can't locate attach prog: %m\n"); + 
return 1; + } + + bpf_object__for_each_program(prog, *skel->skeleton->obj) { + /* ignore syscalls */ + if (bpf_program__get_type(prog) != BPF_PROG_TYPE_TRACING) + continue; + + args.retval = -1; + args.prog_fd = bpf_program__fd(prog); + err = bpf_prog_test_run_opts(attach_fd, &tattr); + if (err) { + fprintf(stderr, "can't attach prog to hid device %d: %m (err: %d)\n", + hid_id, err); + return 1; + } + } + + signal(SIGINT, int_exit); + signal(SIGTERM, int_exit); + + while (running) + sleep(1); + + hid_mouse_lskel__destroy(skel); + + return 0; +}
On Wed, Aug 24, 2022 at 3:42 PM Benjamin Tissoires <benjamin.tissoires@redhat.com> wrote:
Everything should already be available in the selftests part of the tree, but an example that does not rely on uhid and hidraw will be easier for users to follow.
This example will probably only ever work on the Etekcity Scroll 6E, because the various raw values need to be adapted to the actual device.
On that device, the X and Y axes will be swapped and inverted; on any other device, chances are high that the device will not work until Ctrl-C is hit.
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Sorry, I realized that there are two 21/23 and two 22/23 patches... This one should be disregarded, as there are minor improvements in the other 21/23 :(
The good ones are the two starting with "samples/bpf: HID:".
Cheers, Benjamin
Everything should already be available in the selftests part of the tree, but an example that does not rely on uhid and hidraw will be easier for users to follow.
This example will probably only ever work on the Etekcity Scroll 6E, because the various raw values need to be adapted to the actual device.
On that device, the X and Y axes will be swapped and inverted; on any other device, chances are high that the device will not work until Ctrl-C is hit.
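To give a feel for the device-specific part, here is a stripped-down sketch of the hid_y_event program carried in the patch below (the debug printks are omitted; the byte offsets match the Etekcity Scroll 6E report layout and would need to be adapted for any other device):

SEC("fmod_ret/hid_bpf_device_event")
int BPF_PROG(hid_y_event, struct hid_bpf_ctx *hctx)
{
	/* request a read/write pointer to the first 9 bytes of the report */
	__u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 9 /* size */);
	s16 y;

	if (!data)
		return 0; /* EPERM check */

	/* bytes 3-4 carry the 16-bit little-endian Y delta on this device */
	y = data[3] | (data[4] << 8);
	y = -y;
	data[3] = y & 0xFF;
	data[4] = (y >> 8) & 0xFF;

	return 0;
}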
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
---
changes in v9: - amended the usage part - changed the title of the commit
no changes in v8
changes in v7: - remove unnecessary __must_check definition
changes in v6: - clean up code by removing old comments
changes in v5: - bring back the same features as v3, with the new API
changes in v4: - dropped the not-yet-implemented rdesc_fixup - use the new API
changes in v3: - use the new hid_get_data API - add a comment for the report descriptor fixup to explain what is done
changes in v2: - split the series by bpf/libbpf/hid/selftests and samples --- samples/bpf/.gitignore | 1 + samples/bpf/Makefile | 23 ++++++ samples/bpf/hid_mouse.bpf.c | 134 ++++++++++++++++++++++++++++++ samples/bpf/hid_mouse.c | 161 ++++++++++++++++++++++++++++++++++++ 4 files changed, 319 insertions(+) create mode 100644 samples/bpf/hid_mouse.bpf.c create mode 100644 samples/bpf/hid_mouse.c
diff --git a/samples/bpf/.gitignore b/samples/bpf/.gitignore index 0e7bfdbff80a..65440bd618b2 100644 --- a/samples/bpf/.gitignore +++ b/samples/bpf/.gitignore @@ -2,6 +2,7 @@ cpustat fds_example hbm +hid_mouse ibumad lathist lwt_len_hist diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile index 727da3c5879b..a965bbfaca47 100644 --- a/samples/bpf/Makefile +++ b/samples/bpf/Makefile @@ -57,6 +57,8 @@ tprogs-y += xdp_redirect_map tprogs-y += xdp_redirect tprogs-y += xdp_monitor
+tprogs-y += hid_mouse + # Libbpf dependencies LIBBPF_SRC = $(TOOLS_PATH)/lib/bpf LIBBPF_OUTPUT = $(abspath $(BPF_SAMPLES_PATH))/libbpf @@ -119,6 +121,8 @@ xdp_redirect-objs := xdp_redirect_user.o $(XDP_SAMPLE) xdp_monitor-objs := xdp_monitor_user.o $(XDP_SAMPLE) xdp_router_ipv4-objs := xdp_router_ipv4_user.o $(XDP_SAMPLE)
+hid_mouse-objs := hid_mouse.o + # Tell kbuild to always build the programs always-y := $(tprogs-y) always-y += sockex1_kern.o @@ -338,6 +342,8 @@ $(obj)/hbm_out_kern.o: $(src)/hbm.h $(src)/hbm_kern.h $(obj)/hbm.o: $(src)/hbm.h $(obj)/hbm_edt_kern.o: $(src)/hbm.h $(src)/hbm_kern.h
+$(obj)/hid_mouse.o: $(obj)/hid_mouse.skel.h + # Override includes for xdp_sample_user.o because $(srctree)/usr/include in # TPROGS_CFLAGS causes conflicts XDP_SAMPLE_CFLAGS += -Wall -O2 \ @@ -422,6 +428,23 @@ $(BPF_SKELS_LINKED): $(BPF_OBJS_LINKED) $(BPFTOOL) @echo " BPF GEN-SKEL" $(@:.skel.h=) $(Q)$(BPFTOOL) gen skeleton $(@:.skel.h=.lbpf.o) name $(notdir $(@:.skel.h=)) > $@
+# Generate BPF skeletons for non XDP progs +OTHER_BPF_SKELS := hid_mouse.skel.h + +hid_mouse.skel.h-deps := hid_mouse.bpf.o + +OTHER_BPF_SRCS_LINKED := $(patsubst %.skel.h,%.bpf.c, $(OTHER_BPF_SKELS)) +OTHER_BPF_OBJS_LINKED := $(patsubst %.bpf.c,$(obj)/%.bpf.o, $(OTHER_BPF_SRCS_LINKED)) +OTHER_BPF_SKELS_LINKED := $(addprefix $(obj)/,$(OTHER_BPF_SKELS)) + +$(OTHER_BPF_SKELS_LINKED): $(OTHER_BPF_OBJS_LINKED) $(BPFTOOL) + @echo " BPF GEN-OBJ " $(@:.skel.h=) + $(Q)$(BPFTOOL) gen object $(@:.skel.h=.lbpf.o) $(addprefix $(obj)/,$($(@F)-deps)) + @echo " BPF GEN-SKEL" $(@:.skel.h=) + $(Q)$(BPFTOOL) gen skeleton $(@:.skel.h=.lbpf.o) name $(notdir $(@:.skel.h=_lskel)) > $@ +# $(call msg,GEN-SKEL,$@) +# $(Q)$(BPFTOOL) gen skeleton $< > $@ + # asm/sysreg.h - inline assembly used by it is incompatible with llvm. # But, there is no easy way to fix it, so just exclude it since it is # useless for BPF samples. diff --git a/samples/bpf/hid_mouse.bpf.c b/samples/bpf/hid_mouse.bpf.c new file mode 100644 index 000000000000..0113e603f7a7 --- /dev/null +++ b/samples/bpf/hid_mouse.bpf.c @@ -0,0 +1,134 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include "vmlinux.h" +#include <bpf/bpf_helpers.h> +#include <bpf/bpf_tracing.h> + +/* following are kfuncs exported by HID for HID-BPF */ +extern int hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, u32 flags) __ksym; +extern __u8 *hid_bpf_get_data(struct hid_bpf_ctx *ctx, + unsigned int offset, + const size_t __sz) __ksym; +extern void hid_bpf_data_release(__u8 *data) __ksym; +extern int hid_bpf_hw_request(struct hid_bpf_ctx *ctx) __ksym; + +struct attach_prog_args { + int prog_fd; + unsigned int hid; + int retval; +}; + +SEC("syscall") +int attach_prog(struct attach_prog_args *ctx) +{ + ctx->retval = hid_bpf_attach_prog(ctx->hid, + ctx->prog_fd, + 0); + return 0; +} + +SEC("fmod_ret/hid_bpf_device_event") +int BPF_PROG(hid_y_event, struct hid_bpf_ctx *hctx) +{ + s16 y; + __u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 9 /* size */); + + if (!data) + return 0; /* EPERM check */ + + bpf_printk("event: size: %d", hctx->size); + bpf_printk("incoming event: %02x %02x %02x", + data[0], + data[1], + data[2]); + bpf_printk(" %02x %02x %02x", + data[3], + data[4], + data[5]); + bpf_printk(" %02x %02x %02x", + data[6], + data[7], + data[8]); + + y = data[3] | (data[4] << 8); + + y = -y; + + data[3] = y & 0xFF; + data[4] = (y >> 8) & 0xFF; + + bpf_printk("modified event: %02x %02x %02x", + data[0], + data[1], + data[2]); + bpf_printk(" %02x %02x %02x", + data[3], + data[4], + data[5]); + bpf_printk(" %02x %02x %02x", + data[6], + data[7], + data[8]); + + return 0; +} + +SEC("fmod_ret/hid_bpf_device_event") +int BPF_PROG(hid_x_event, struct hid_bpf_ctx *hctx) +{ + s16 x; + __u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 9 /* size */); + + if (!data) + return 0; /* EPERM check */ + + x = data[1] | (data[2] << 8); + + x = -x; + + data[1] = x & 0xFF; + data[2] = (x >> 8) & 0xFF; + return 0; +} + +SEC("fmod_ret/hid_bpf_rdesc_fixup") +int BPF_PROG(hid_rdesc_fixup, struct hid_bpf_ctx *hctx) +{ + __u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 4096 /* size */); + + if (!data) + return 0; /* EPERM check */ + + bpf_printk("rdesc: %02x %02x %02x", + data[0], + data[1], + data[2]); + bpf_printk(" %02x %02x %02x", + data[3], + data[4], + data[5]); + bpf_printk(" %02x %02x %02x ...", + data[6], + data[7], + data[8]); + + /* + * The original report descriptor contains: + * + * 0x05, 0x01, // Usage Page (Generic Desktop) 30 + * 0x16, 0x01, 0x80, // Logical Minimum (-32767) 32 + 
* 0x26, 0xff, 0x7f, // Logical Maximum (32767) 35 + * 0x09, 0x30, // Usage (X) 38 + * 0x09, 0x31, // Usage (Y) 40 + * + * So byte 39 contains Usage X and byte 41 Usage Y. + * + * We simply swap the axes here. + */ + data[39] = 0x31; + data[41] = 0x30; + + return 0; +} + +char _license[] SEC("license") = "GPL"; diff --git a/samples/bpf/hid_mouse.c b/samples/bpf/hid_mouse.c new file mode 100644 index 000000000000..bea3650787c5 --- /dev/null +++ b/samples/bpf/hid_mouse.c @@ -0,0 +1,161 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2022 Benjamin Tissoires + * + * This is a pure HID-BPF example, and should be considered as such: + * on the Etekcity Scroll 6E, the X and Y axes will be swapped and + * inverted. On any other device... Not sure what this will do. + * + * This C main file is generic though. To adapt the code and test, users + * must amend only the .bpf.c file, which this program will load any + * eBPF program it finds. + */ + +#include <assert.h> +#include <errno.h> +#include <fcntl.h> +#include <libgen.h> +#include <signal.h> +#include <stdbool.h> +#include <stdio.h> +#include <stdlib.h> +#include <string.h> +#include <sys/resource.h> +#include <unistd.h> + +#include <linux/bpf.h> +#include <linux/errno.h> + +#include "bpf_util.h" +#include <bpf/bpf.h> +#include <bpf/libbpf.h> + +#include "hid_mouse.skel.h" + +static bool running = true; + +struct attach_prog_args { + int prog_fd; + unsigned int hid; + int retval; +}; + +static void int_exit(int sig) +{ + running = false; + exit(0); +} + +static void usage(const char *prog) +{ + fprintf(stderr, + "%s: %s /sys/bus/hid/devices/0BUS:0VID:0PID:00ID\n\n", + __func__, prog); + fprintf(stderr, + "This program will upload and attach a HID-BPF program to the given device.\n" + "On the Etekcity Scroll 6E, the X and Y axis will be inverted, but on any other\n" + "device, chances are high that the device will not be working anymore\n\n" + "consider this as a demo and adapt the eBPF program to your needs\n" + "Hit Ctrl-C to unbind the program and reset the device\n"); +} + +static int get_hid_id(const char *path) +{ + const char *str_id, *dir; + char uevent[1024]; + int fd; + + memset(uevent, 0, sizeof(uevent)); + snprintf(uevent, sizeof(uevent) - 1, "%s/uevent", path); + + fd = open(uevent, O_RDONLY | O_NONBLOCK); + if (fd < 0) + return -ENOENT; + + close(fd); + + dir = basename((char *)path); + + str_id = dir + sizeof("0003:0001:0A37."); + return (int)strtol(str_id, NULL, 16); +} + +int main(int argc, char **argv) +{ + struct hid_mouse_lskel *skel; + struct bpf_program *prog; + int err; + const char *optstr = ""; + const char *sysfs_path; + int opt, hid_id, attach_fd; + struct attach_prog_args args = { + .retval = -1, + }; + DECLARE_LIBBPF_OPTS(bpf_test_run_opts, tattr, + .ctx_in = &args, + .ctx_size_in = sizeof(args), + ); + + while ((opt = getopt(argc, argv, optstr)) != -1) { + switch (opt) { + default: + usage(basename(argv[0])); + return 1; + } + } + + if (optind == argc) { + usage(basename(argv[0])); + return 1; + } + + sysfs_path = argv[optind]; + if (!sysfs_path) { + perror("sysfs"); + return 1; + } + + skel = hid_mouse_lskel__open_and_load(); + if (!skel) { + fprintf(stderr, "%s %s:%d", __func__, __FILE__, __LINE__); + return -1; + } + + hid_id = get_hid_id(sysfs_path); + + if (hid_id < 0) { + fprintf(stderr, "can not open HID device: %m\n"); + return 1; + } + args.hid = hid_id; + + attach_fd = bpf_program__fd(skel->progs.attach_prog); + if (attach_fd < 0) { + fprintf(stderr, "can't locate attach prog: %m\n"); + 
return 1; + } + + bpf_object__for_each_program(prog, *skel->skeleton->obj) { + /* ignore syscalls */ + if (bpf_program__get_type(prog) != BPF_PROG_TYPE_TRACING) + continue; + + args.retval = -1; + args.prog_fd = bpf_program__fd(prog); + err = bpf_prog_test_run_opts(attach_fd, &tattr); + if (err) { + fprintf(stderr, "can't attach prog to hid device %d: %m (err: %d)\n", + hid_id, err); + return 1; + } + } + + signal(SIGINT, int_exit); + signal(SIGTERM, int_exit); + + while (running) + sleep(1); + + hid_mouse_lskel__destroy(skel); + + return 0; +}
Add a more complete HID-BPF example.
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
---
changes in v9: - extend the usage section - add sleep while waiting
no changes in v8
changes in v7: - remove unnecessary __must_check definition
new in v6
fix surface dial --- samples/bpf/.gitignore | 1 + samples/bpf/Makefile | 6 +- samples/bpf/hid_surface_dial.bpf.c | 161 ++++++++++++++++++++ samples/bpf/hid_surface_dial.c | 232 +++++++++++++++++++++++++++++ 4 files changed, 399 insertions(+), 1 deletion(-) create mode 100644 samples/bpf/hid_surface_dial.bpf.c create mode 100644 samples/bpf/hid_surface_dial.c
diff --git a/samples/bpf/.gitignore b/samples/bpf/.gitignore index 65440bd618b2..6a1079d3d064 100644 --- a/samples/bpf/.gitignore +++ b/samples/bpf/.gitignore @@ -3,6 +3,7 @@ cpustat fds_example hbm hid_mouse +hid_surface_dial ibumad lathist lwt_len_hist diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile index a965bbfaca47..5f5aa7b32565 100644 --- a/samples/bpf/Makefile +++ b/samples/bpf/Makefile @@ -58,6 +58,7 @@ tprogs-y += xdp_redirect tprogs-y += xdp_monitor
tprogs-y += hid_mouse +tprogs-y += hid_surface_dial
# Libbpf dependencies LIBBPF_SRC = $(TOOLS_PATH)/lib/bpf @@ -122,6 +123,7 @@ xdp_monitor-objs := xdp_monitor_user.o $(XDP_SAMPLE) xdp_router_ipv4-objs := xdp_router_ipv4_user.o $(XDP_SAMPLE)
hid_mouse-objs := hid_mouse.o +hid_surface_dial-objs := hid_surface_dial.o
# Tell kbuild to always build the programs always-y := $(tprogs-y) @@ -343,6 +345,7 @@ $(obj)/hbm.o: $(src)/hbm.h $(obj)/hbm_edt_kern.o: $(src)/hbm.h $(src)/hbm_kern.h
$(obj)/hid_mouse.o: $(obj)/hid_mouse.skel.h +$(obj)/hid_surface_dial.o: $(obj)/hid_surface_dial.skel.h
# Override includes for xdp_sample_user.o because $(srctree)/usr/include in # TPROGS_CFLAGS causes conflicts @@ -429,9 +432,10 @@ $(BPF_SKELS_LINKED): $(BPF_OBJS_LINKED) $(BPFTOOL) $(Q)$(BPFTOOL) gen skeleton $(@:.skel.h=.lbpf.o) name $(notdir $(@:.skel.h=)) > $@
# Generate BPF skeletons for non XDP progs -OTHER_BPF_SKELS := hid_mouse.skel.h +OTHER_BPF_SKELS := hid_mouse.skel.h hid_surface_dial.skel.h
hid_mouse.skel.h-deps := hid_mouse.bpf.o +hid_surface_dial.skel.h-deps := hid_surface_dial.bpf.o
OTHER_BPF_SRCS_LINKED := $(patsubst %.skel.h,%.bpf.c, $(OTHER_BPF_SKELS)) OTHER_BPF_OBJS_LINKED := $(patsubst %.bpf.c,$(obj)/%.bpf.o, $(OTHER_BPF_SRCS_LINKED)) diff --git a/samples/bpf/hid_surface_dial.bpf.c b/samples/bpf/hid_surface_dial.bpf.c new file mode 100644 index 000000000000..16c821d3decf --- /dev/null +++ b/samples/bpf/hid_surface_dial.bpf.c @@ -0,0 +1,161 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2022 Benjamin Tissoires + */ + +#include "vmlinux.h" +#include <bpf/bpf_helpers.h> +#include <bpf/bpf_tracing.h> + +#define HID_UP_BUTTON 0x0009 +#define HID_GD_WHEEL 0x0038 + +/* following are kfuncs exported by HID for HID-BPF */ +extern __u8 *hid_bpf_get_data(struct hid_bpf_ctx *ctx, + unsigned int offset, + const size_t __sz) __ksym; +extern int hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, u32 flags) __ksym; +extern struct hid_bpf_ctx *hid_bpf_allocate_context(unsigned int hid_id) __ksym; +extern void hid_bpf_release_context(struct hid_bpf_ctx *ctx) __ksym; +extern int hid_bpf_hw_request(struct hid_bpf_ctx *ctx, + __u8 *data, + size_t buf__sz, + enum hid_report_type type, + enum hid_class_request reqtype) __ksym; + +struct attach_prog_args { + int prog_fd; + unsigned int hid; + int retval; +}; + +SEC("syscall") +int attach_prog(struct attach_prog_args *ctx) +{ + ctx->retval = hid_bpf_attach_prog(ctx->hid, + ctx->prog_fd, + 0); + return 0; +} + +SEC("fmod_ret/hid_bpf_device_event") +int BPF_PROG(hid_event, struct hid_bpf_ctx *hctx) +{ + __u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 9 /* size */); + + if (!data) + return 0; /* EPERM check */ + + /* Touch */ + data[1] &= 0xfd; + + /* X */ + data[4] = 0; + data[5] = 0; + + /* Y */ + data[6] = 0; + data[7] = 0; + + return 0; +} + +/* 72 == 360 / 5 -> 1 report every 5 degrees */ +int resolution = 72; +int physical = 5; + +struct haptic_syscall_args { + unsigned int hid; + int retval; +}; + +static __u8 haptic_data[8]; + +SEC("syscall") +int set_haptic(struct haptic_syscall_args *args) +{ + struct hid_bpf_ctx *ctx; + const size_t size = sizeof(haptic_data); + u16 *res; + int ret; + + if (size > sizeof(haptic_data)) + return -7; /* -E2BIG */ + + ctx = hid_bpf_allocate_context(args->hid); + if (!ctx) + return -1; /* EPERM check */ + + haptic_data[0] = 1; /* report ID */ + + ret = hid_bpf_hw_request(ctx, haptic_data, size, HID_FEATURE_REPORT, HID_REQ_GET_REPORT); + + bpf_printk("probed/remove event ret value: %d", ret); + bpf_printk("buf: %02x %02x %02x", + haptic_data[0], + haptic_data[1], + haptic_data[2]); + bpf_printk(" %02x %02x %02x", + haptic_data[3], + haptic_data[4], + haptic_data[5]); + bpf_printk(" %02x %02x", + haptic_data[6], + haptic_data[7]); + + /* whenever resolution multiplier is not 3600, we have the fixed report descriptor */ + res = (u16 *)&haptic_data[1]; + if (*res != 3600) { +// haptic_data[1] = 72; /* resolution multiplier */ +// haptic_data[2] = 0; /* resolution multiplier */ +// haptic_data[3] = 0; /* Repeat Count */ + haptic_data[4] = 3; /* haptic Auto Trigger */ +// haptic_data[5] = 5; /* Waveform Cutoff Time */ +// haptic_data[6] = 80; /* Retrigger Period */ +// haptic_data[7] = 0; /* Retrigger Period */ + } else { + haptic_data[4] = 0; + } + + ret = hid_bpf_hw_request(ctx, haptic_data, size, HID_FEATURE_REPORT, HID_REQ_SET_REPORT); + + bpf_printk("set haptic ret value: %d -> %d", ret, haptic_data[4]); + + args->retval = ret; + + hid_bpf_release_context(ctx); + + return 0; +} + +/* Convert REL_DIAL into REL_WHEEL */ +SEC("fmod_ret/hid_bpf_rdesc_fixup") +int 
BPF_PROG(hid_rdesc_fixup, struct hid_bpf_ctx *hctx) +{ + __u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 4096 /* size */); + __u16 *res, *phys; + + if (!data) + return 0; /* EPERM check */ + + /* Convert TOUCH into a button */ + data[31] = HID_UP_BUTTON; + data[33] = 2; + + /* Convert REL_DIAL into REL_WHEEL */ + data[45] = HID_GD_WHEEL; + + /* Change Resolution Multiplier */ + phys = (__u16 *)&data[61]; + *phys = physical; + res = (__u16 *)&data[66]; + *res = resolution; + + /* Convert X,Y from Abs to Rel */ + data[88] = 0x06; + data[98] = 0x06; + + return 0; +} + +char _license[] SEC("license") = "GPL"; +u32 _version SEC("version") = 1; diff --git a/samples/bpf/hid_surface_dial.c b/samples/bpf/hid_surface_dial.c new file mode 100644 index 000000000000..c700bb0afa81 --- /dev/null +++ b/samples/bpf/hid_surface_dial.c @@ -0,0 +1,232 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2022 Benjamin Tissoires + * + * This program will morph the Microsoft Surface Dial into a mouse, + * and depending on the chosen resolution enable or not the haptic feedback: + * - a resolution (-r) of 3600 will report 3600 "ticks" in one full rotation + * wihout haptic feedback + * - any other resolution will report N "ticks" in a full rotation with haptic + * feedback + * + * A good default for low resolution haptic scrolling is 72 (1 "tick" every 5 + * degrees), and set to 3600 for smooth scrolling. + */ + +#include <assert.h> +#include <errno.h> +#include <fcntl.h> +#include <libgen.h> +#include <signal.h> +#include <stdbool.h> +#include <stdio.h> +#include <stdlib.h> +#include <string.h> +#include <sys/resource.h> +#include <unistd.h> + +#include <linux/bpf.h> +#include <linux/errno.h> + +#include "bpf_util.h" +#include <bpf/bpf.h> +#include <bpf/libbpf.h> + +#include "hid_surface_dial.skel.h" + +static bool running = true; + +struct attach_prog_args { + int prog_fd; + unsigned int hid; + int retval; +}; + +struct haptic_syscall_args { + unsigned int hid; + int retval; +}; + +static void int_exit(int sig) +{ + running = false; + exit(0); +} + +static void usage(const char *prog) +{ + fprintf(stderr, + "%s: %s [OPTIONS] /sys/bus/hid/devices/0BUS:0VID:0PID:00ID\n\n" + " OPTIONS:\n" + " -r N\t set the given resolution to the device (number of ticks per 360°)\n\n", + __func__, prog); + fprintf(stderr, + "This program will morph the Microsoft Surface Dial into a mouse,\n" + "and depending on the chosen resolution enable or not the haptic feedback:\n" + "- a resolution (-r) of 3600 will report 3600 'ticks' in one full rotation\n" + " wihout haptic feedback\n" + "- any other resolution will report N 'ticks' in a full rotation with haptic\n" + " feedback\n" + "\n" + "A good default for low resolution haptic scrolling is 72 (1 'tick' every 5\n" + "degrees), and set to 3600 for smooth scrolling.\n"); +} + +static int get_hid_id(const char *path) +{ + const char *str_id, *dir; + char uevent[1024]; + int fd; + + memset(uevent, 0, sizeof(uevent)); + snprintf(uevent, sizeof(uevent) - 1, "%s/uevent", path); + + fd = open(uevent, O_RDONLY | O_NONBLOCK); + if (fd < 0) + return -ENOENT; + + close(fd); + + dir = basename((char *)path); + + str_id = dir + sizeof("0003:0001:0A37."); + return (int)strtol(str_id, NULL, 16); +} + +static int attach_prog(struct hid_surface_dial_lskel *skel, struct bpf_program *prog, int hid_id) +{ + struct attach_prog_args args = { + .hid = hid_id, + .retval = -1, + }; + int attach_fd, err; + DECLARE_LIBBPF_OPTS(bpf_test_run_opts, tattr, + .ctx_in = &args, + .ctx_size_in = 
sizeof(args), + ); + + attach_fd = bpf_program__fd(skel->progs.attach_prog); + if (attach_fd < 0) { + fprintf(stderr, "can't locate attach prog: %m\n"); + return 1; + } + + args.prog_fd = bpf_program__fd(prog); + err = bpf_prog_test_run_opts(attach_fd, &tattr); + if (err) { + fprintf(stderr, "can't attach prog to hid device %d: %m (err: %d)\n", + hid_id, err); + return 1; + } + return 0; +} + +static int set_haptic(struct hid_surface_dial_lskel *skel, int hid_id) +{ + struct haptic_syscall_args args = { + .hid = hid_id, + .retval = -1, + }; + int haptic_fd, err; + DECLARE_LIBBPF_OPTS(bpf_test_run_opts, tattr, + .ctx_in = &args, + .ctx_size_in = sizeof(args), + ); + + haptic_fd = bpf_program__fd(skel->progs.set_haptic); + if (haptic_fd < 0) { + fprintf(stderr, "can't locate haptic prog: %m\n"); + return 1; + } + + err = bpf_prog_test_run_opts(haptic_fd, &tattr); + if (err) { + fprintf(stderr, "can't set haptic configuration to hid device %d: %m (err: %d)\n", + hid_id, err); + return 1; + } + return 0; +} + +int main(int argc, char **argv) +{ + struct hid_surface_dial_lskel *skel; + struct bpf_program *prog; + const char *optstr = "r:"; + const char *sysfs_path; + int opt, hid_id, resolution = 72; + + while ((opt = getopt(argc, argv, optstr)) != -1) { + switch (opt) { + case 'r': + { + char *endp = NULL; + long l = -1; + + if (optarg) { + l = strtol(optarg, &endp, 10); + if (endp && *endp) + l = -1; + } + + if (l < 0) { + fprintf(stderr, + "invalid r option %s - expecting a number\n", + optarg ? optarg : ""); + exit(EXIT_FAILURE); + }; + + resolution = (int) l; + break; + } + default: + usage(basename(argv[0])); + return 1; + } + } + + if (optind == argc) { + usage(basename(argv[0])); + return 1; + } + + sysfs_path = argv[optind]; + if (!sysfs_path) { + perror("sysfs"); + return 1; + } + + skel = hid_surface_dial_lskel__open_and_load(); + if (!skel) { + fprintf(stderr, "%s %s:%d", __func__, __FILE__, __LINE__); + return -1; + } + + hid_id = get_hid_id(sysfs_path); + if (hid_id < 0) { + fprintf(stderr, "can not open HID device: %m\n"); + return 1; + } + + skel->data->resolution = resolution; + skel->data->physical = (int)(resolution / 72); + + bpf_object__for_each_program(prog, *skel->skeleton->obj) { + /* ignore syscalls */ + if (bpf_program__get_type(prog) != BPF_PROG_TYPE_TRACING) + continue; + + attach_prog(skel, prog, hid_id); + } + + signal(SIGINT, int_exit); + signal(SIGTERM, int_exit); + + set_haptic(skel, hid_id); + + while (running) + sleep(1); + + hid_surface_dial_lskel__destroy(skel); + + return 0; +}
Add a more complete HID-BPF example.
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
---
changes in v9: - extend the usage section - add sleep while waiting - changed the title of the commit
no changes in v8
changes in v7: - remove unnecessary __must_check definition
new in v6 --- samples/bpf/.gitignore | 1 + samples/bpf/Makefile | 6 +- samples/bpf/hid_surface_dial.bpf.c | 161 ++++++++++++++++++++ samples/bpf/hid_surface_dial.c | 232 +++++++++++++++++++++++++++++ 4 files changed, 399 insertions(+), 1 deletion(-) create mode 100644 samples/bpf/hid_surface_dial.bpf.c create mode 100644 samples/bpf/hid_surface_dial.c
diff --git a/samples/bpf/.gitignore b/samples/bpf/.gitignore index 65440bd618b2..6a1079d3d064 100644 --- a/samples/bpf/.gitignore +++ b/samples/bpf/.gitignore @@ -3,6 +3,7 @@ cpustat fds_example hbm hid_mouse +hid_surface_dial ibumad lathist lwt_len_hist diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile index a965bbfaca47..5f5aa7b32565 100644 --- a/samples/bpf/Makefile +++ b/samples/bpf/Makefile @@ -58,6 +58,7 @@ tprogs-y += xdp_redirect tprogs-y += xdp_monitor
tprogs-y += hid_mouse +tprogs-y += hid_surface_dial
# Libbpf dependencies LIBBPF_SRC = $(TOOLS_PATH)/lib/bpf @@ -122,6 +123,7 @@ xdp_monitor-objs := xdp_monitor_user.o $(XDP_SAMPLE) xdp_router_ipv4-objs := xdp_router_ipv4_user.o $(XDP_SAMPLE)
hid_mouse-objs := hid_mouse.o +hid_surface_dial-objs := hid_surface_dial.o
# Tell kbuild to always build the programs always-y := $(tprogs-y) @@ -343,6 +345,7 @@ $(obj)/hbm.o: $(src)/hbm.h $(obj)/hbm_edt_kern.o: $(src)/hbm.h $(src)/hbm_kern.h
$(obj)/hid_mouse.o: $(obj)/hid_mouse.skel.h +$(obj)/hid_surface_dial.o: $(obj)/hid_surface_dial.skel.h
# Override includes for xdp_sample_user.o because $(srctree)/usr/include in # TPROGS_CFLAGS causes conflicts @@ -429,9 +432,10 @@ $(BPF_SKELS_LINKED): $(BPF_OBJS_LINKED) $(BPFTOOL) $(Q)$(BPFTOOL) gen skeleton $(@:.skel.h=.lbpf.o) name $(notdir $(@:.skel.h=)) > $@
# Generate BPF skeletons for non XDP progs -OTHER_BPF_SKELS := hid_mouse.skel.h +OTHER_BPF_SKELS := hid_mouse.skel.h hid_surface_dial.skel.h
hid_mouse.skel.h-deps := hid_mouse.bpf.o +hid_surface_dial.skel.h-deps := hid_surface_dial.bpf.o
OTHER_BPF_SRCS_LINKED := $(patsubst %.skel.h,%.bpf.c, $(OTHER_BPF_SKELS)) OTHER_BPF_OBJS_LINKED := $(patsubst %.bpf.c,$(obj)/%.bpf.o, $(OTHER_BPF_SRCS_LINKED)) diff --git a/samples/bpf/hid_surface_dial.bpf.c b/samples/bpf/hid_surface_dial.bpf.c new file mode 100644 index 000000000000..16c821d3decf --- /dev/null +++ b/samples/bpf/hid_surface_dial.bpf.c @@ -0,0 +1,161 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2022 Benjamin Tissoires + */ + +#include "vmlinux.h" +#include <bpf/bpf_helpers.h> +#include <bpf/bpf_tracing.h> + +#define HID_UP_BUTTON 0x0009 +#define HID_GD_WHEEL 0x0038 + +/* following are kfuncs exported by HID for HID-BPF */ +extern __u8 *hid_bpf_get_data(struct hid_bpf_ctx *ctx, + unsigned int offset, + const size_t __sz) __ksym; +extern int hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, u32 flags) __ksym; +extern struct hid_bpf_ctx *hid_bpf_allocate_context(unsigned int hid_id) __ksym; +extern void hid_bpf_release_context(struct hid_bpf_ctx *ctx) __ksym; +extern int hid_bpf_hw_request(struct hid_bpf_ctx *ctx, + __u8 *data, + size_t buf__sz, + enum hid_report_type type, + enum hid_class_request reqtype) __ksym; + +struct attach_prog_args { + int prog_fd; + unsigned int hid; + int retval; +}; + +SEC("syscall") +int attach_prog(struct attach_prog_args *ctx) +{ + ctx->retval = hid_bpf_attach_prog(ctx->hid, + ctx->prog_fd, + 0); + return 0; +} + +SEC("fmod_ret/hid_bpf_device_event") +int BPF_PROG(hid_event, struct hid_bpf_ctx *hctx) +{ + __u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 9 /* size */); + + if (!data) + return 0; /* EPERM check */ + + /* Touch */ + data[1] &= 0xfd; + + /* X */ + data[4] = 0; + data[5] = 0; + + /* Y */ + data[6] = 0; + data[7] = 0; + + return 0; +} + +/* 72 == 360 / 5 -> 1 report every 5 degrees */ +int resolution = 72; +int physical = 5; + +struct haptic_syscall_args { + unsigned int hid; + int retval; +}; + +static __u8 haptic_data[8]; + +SEC("syscall") +int set_haptic(struct haptic_syscall_args *args) +{ + struct hid_bpf_ctx *ctx; + const size_t size = sizeof(haptic_data); + u16 *res; + int ret; + + if (size > sizeof(haptic_data)) + return -7; /* -E2BIG */ + + ctx = hid_bpf_allocate_context(args->hid); + if (!ctx) + return -1; /* EPERM check */ + + haptic_data[0] = 1; /* report ID */ + + ret = hid_bpf_hw_request(ctx, haptic_data, size, HID_FEATURE_REPORT, HID_REQ_GET_REPORT); + + bpf_printk("probed/remove event ret value: %d", ret); + bpf_printk("buf: %02x %02x %02x", + haptic_data[0], + haptic_data[1], + haptic_data[2]); + bpf_printk(" %02x %02x %02x", + haptic_data[3], + haptic_data[4], + haptic_data[5]); + bpf_printk(" %02x %02x", + haptic_data[6], + haptic_data[7]); + + /* whenever resolution multiplier is not 3600, we have the fixed report descriptor */ + res = (u16 *)&haptic_data[1]; + if (*res != 3600) { +// haptic_data[1] = 72; /* resolution multiplier */ +// haptic_data[2] = 0; /* resolution multiplier */ +// haptic_data[3] = 0; /* Repeat Count */ + haptic_data[4] = 3; /* haptic Auto Trigger */ +// haptic_data[5] = 5; /* Waveform Cutoff Time */ +// haptic_data[6] = 80; /* Retrigger Period */ +// haptic_data[7] = 0; /* Retrigger Period */ + } else { + haptic_data[4] = 0; + } + + ret = hid_bpf_hw_request(ctx, haptic_data, size, HID_FEATURE_REPORT, HID_REQ_SET_REPORT); + + bpf_printk("set haptic ret value: %d -> %d", ret, haptic_data[4]); + + args->retval = ret; + + hid_bpf_release_context(ctx); + + return 0; +} + +/* Convert REL_DIAL into REL_WHEEL */ +SEC("fmod_ret/hid_bpf_rdesc_fixup") +int 
BPF_PROG(hid_rdesc_fixup, struct hid_bpf_ctx *hctx) +{ + __u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 4096 /* size */); + __u16 *res, *phys; + + if (!data) + return 0; /* EPERM check */ + + /* Convert TOUCH into a button */ + data[31] = HID_UP_BUTTON; + data[33] = 2; + + /* Convert REL_DIAL into REL_WHEEL */ + data[45] = HID_GD_WHEEL; + + /* Change Resolution Multiplier */ + phys = (__u16 *)&data[61]; + *phys = physical; + res = (__u16 *)&data[66]; + *res = resolution; + + /* Convert X,Y from Abs to Rel */ + data[88] = 0x06; + data[98] = 0x06; + + return 0; +} + +char _license[] SEC("license") = "GPL"; +u32 _version SEC("version") = 1; diff --git a/samples/bpf/hid_surface_dial.c b/samples/bpf/hid_surface_dial.c new file mode 100644 index 000000000000..c700bb0afa81 --- /dev/null +++ b/samples/bpf/hid_surface_dial.c @@ -0,0 +1,232 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2022 Benjamin Tissoires + * + * This program will morph the Microsoft Surface Dial into a mouse, + * and depending on the chosen resolution enable or not the haptic feedback: + * - a resolution (-r) of 3600 will report 3600 "ticks" in one full rotation + * wihout haptic feedback + * - any other resolution will report N "ticks" in a full rotation with haptic + * feedback + * + * A good default for low resolution haptic scrolling is 72 (1 "tick" every 5 + * degrees), and set to 3600 for smooth scrolling. + */ + +#include <assert.h> +#include <errno.h> +#include <fcntl.h> +#include <libgen.h> +#include <signal.h> +#include <stdbool.h> +#include <stdio.h> +#include <stdlib.h> +#include <string.h> +#include <sys/resource.h> +#include <unistd.h> + +#include <linux/bpf.h> +#include <linux/errno.h> + +#include "bpf_util.h" +#include <bpf/bpf.h> +#include <bpf/libbpf.h> + +#include "hid_surface_dial.skel.h" + +static bool running = true; + +struct attach_prog_args { + int prog_fd; + unsigned int hid; + int retval; +}; + +struct haptic_syscall_args { + unsigned int hid; + int retval; +}; + +static void int_exit(int sig) +{ + running = false; + exit(0); +} + +static void usage(const char *prog) +{ + fprintf(stderr, + "%s: %s [OPTIONS] /sys/bus/hid/devices/0BUS:0VID:0PID:00ID\n\n" + " OPTIONS:\n" + " -r N\t set the given resolution to the device (number of ticks per 360°)\n\n", + __func__, prog); + fprintf(stderr, + "This program will morph the Microsoft Surface Dial into a mouse,\n" + "and depending on the chosen resolution enable or not the haptic feedback:\n" + "- a resolution (-r) of 3600 will report 3600 'ticks' in one full rotation\n" + " wihout haptic feedback\n" + "- any other resolution will report N 'ticks' in a full rotation with haptic\n" + " feedback\n" + "\n" + "A good default for low resolution haptic scrolling is 72 (1 'tick' every 5\n" + "degrees), and set to 3600 for smooth scrolling.\n"); +} + +static int get_hid_id(const char *path) +{ + const char *str_id, *dir; + char uevent[1024]; + int fd; + + memset(uevent, 0, sizeof(uevent)); + snprintf(uevent, sizeof(uevent) - 1, "%s/uevent", path); + + fd = open(uevent, O_RDONLY | O_NONBLOCK); + if (fd < 0) + return -ENOENT; + + close(fd); + + dir = basename((char *)path); + + str_id = dir + sizeof("0003:0001:0A37."); + return (int)strtol(str_id, NULL, 16); +} + +static int attach_prog(struct hid_surface_dial_lskel *skel, struct bpf_program *prog, int hid_id) +{ + struct attach_prog_args args = { + .hid = hid_id, + .retval = -1, + }; + int attach_fd, err; + DECLARE_LIBBPF_OPTS(bpf_test_run_opts, tattr, + .ctx_in = &args, + .ctx_size_in = 
sizeof(args), + ); + + attach_fd = bpf_program__fd(skel->progs.attach_prog); + if (attach_fd < 0) { + fprintf(stderr, "can't locate attach prog: %m\n"); + return 1; + } + + args.prog_fd = bpf_program__fd(prog); + err = bpf_prog_test_run_opts(attach_fd, &tattr); + if (err) { + fprintf(stderr, "can't attach prog to hid device %d: %m (err: %d)\n", + hid_id, err); + return 1; + } + return 0; +} + +static int set_haptic(struct hid_surface_dial_lskel *skel, int hid_id) +{ + struct haptic_syscall_args args = { + .hid = hid_id, + .retval = -1, + }; + int haptic_fd, err; + DECLARE_LIBBPF_OPTS(bpf_test_run_opts, tattr, + .ctx_in = &args, + .ctx_size_in = sizeof(args), + ); + + haptic_fd = bpf_program__fd(skel->progs.set_haptic); + if (haptic_fd < 0) { + fprintf(stderr, "can't locate haptic prog: %m\n"); + return 1; + } + + err = bpf_prog_test_run_opts(haptic_fd, &tattr); + if (err) { + fprintf(stderr, "can't set haptic configuration to hid device %d: %m (err: %d)\n", + hid_id, err); + return 1; + } + return 0; +} + +int main(int argc, char **argv) +{ + struct hid_surface_dial_lskel *skel; + struct bpf_program *prog; + const char *optstr = "r:"; + const char *sysfs_path; + int opt, hid_id, resolution = 72; + + while ((opt = getopt(argc, argv, optstr)) != -1) { + switch (opt) { + case 'r': + { + char *endp = NULL; + long l = -1; + + if (optarg) { + l = strtol(optarg, &endp, 10); + if (endp && *endp) + l = -1; + } + + if (l < 0) { + fprintf(stderr, + "invalid r option %s - expecting a number\n", + optarg ? optarg : ""); + exit(EXIT_FAILURE); + }; + + resolution = (int) l; + break; + } + default: + usage(basename(argv[0])); + return 1; + } + } + + if (optind == argc) { + usage(basename(argv[0])); + return 1; + } + + sysfs_path = argv[optind]; + if (!sysfs_path) { + perror("sysfs"); + return 1; + } + + skel = hid_surface_dial_lskel__open_and_load(); + if (!skel) { + fprintf(stderr, "%s %s:%d", __func__, __FILE__, __LINE__); + return -1; + } + + hid_id = get_hid_id(sysfs_path); + if (hid_id < 0) { + fprintf(stderr, "can not open HID device: %m\n"); + return 1; + } + + skel->data->resolution = resolution; + skel->data->physical = (int)(resolution / 72); + + bpf_object__for_each_program(prog, *skel->skeleton->obj) { + /* ignore syscalls */ + if (bpf_program__get_type(prog) != BPF_PROG_TYPE_TRACING) + continue; + + attach_prog(skel, prog, hid_id); + } + + signal(SIGINT, int_exit); + signal(SIGTERM, int_exit); + + set_haptic(skel, hid_id); + + while (running) + sleep(1); + + hid_surface_dial_lskel__destroy(skel); + + return 0; +}
Gives a primer on HID-BPF.
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
---
no changes in v9
no changes in v8
no changes in v7
changes in v6: - amended the example now that we can directly use the data from the syscall context
changes in v5: - amended for new API - reworded most of the sentences (thanks to Peter Hutterer for the review)
changes in v4: - fixed typos
new in v3 --- Documentation/hid/hid-bpf.rst | 512 ++++++++++++++++++++++++++++++++++ Documentation/hid/index.rst | 1 + 2 files changed, 513 insertions(+) create mode 100644 Documentation/hid/hid-bpf.rst
diff --git a/Documentation/hid/hid-bpf.rst b/Documentation/hid/hid-bpf.rst new file mode 100644 index 000000000000..75e65c135925 --- /dev/null +++ b/Documentation/hid/hid-bpf.rst @@ -0,0 +1,512 @@ +.. SPDX-License-Identifier: GPL-2.0 + +======= +HID-BPF +======= + +HID is a standard protocol for input devices, but some devices may require +custom tweaks, traditionally done with a kernel driver fix. Using the eBPF +capabilities instead speeds up development and adds new capabilities to the +existing HID interfaces. + +.. contents:: + :local: + :depth: 2 + + +When (and why) to use HID-BPF +============================= + +We can enumerate several use cases for when using HID-BPF is better than +using a standard kernel driver fix: + +Dead zone of a joystick +----------------------- + +Assuming you have a joystick that is getting older, it is common to see it +wobbling around its neutral point. This is usually filtered at the application +level by adding a *dead zone* for this specific axis. + +With HID-BPF, we can apply this filtering in the kernel directly so userspace +does not get woken up when nothing else is happening on the input controller. + +Of course, given that this dead zone is specific to an individual device, we +can not create a generic fix for all joysticks of the same model. Adding a custom +kernel API for this (e.g. by adding a sysfs entry) does not guarantee this new +kernel API will be broadly adopted and maintained. + +HID-BPF allows the userspace program to load the program itself, ensuring we +only load the custom API when we have a user. + +Simple fixup of report descriptor +--------------------------------- + +In the HID tree, half of the drivers only fix one key or one byte +in the report descriptor. These fixes all require a kernel patch and the +subsequent shepherding into a release, a long and painful process for users. + +We can reduce this burden by providing an eBPF program instead. Once such a +program has been verified by the user, we can embed the source code into the +kernel tree and ship the eBPF program and load it directly instead of loading +a specific kernel module for it. + +Note: distribution of eBPF programs and their inclusion in the kernel is not +yet fully implemented. + +Add a new feature that requires a new kernel API +------------------------------------------------ + +An example of such a feature is support for Universal Stylus Interface (USI) pens. +Basically, USI pens require a new kernel API because there are new +channels of communication that our HID and input stack do not support. +Instead of using hidraw or creating new sysfs entries or ioctls, we can rely +on eBPF to have the kernel API controlled by the consumer and to not +impact performance by waking up userspace every time there is an +event. + +Morph a device into something else and control that from userspace +------------------------------------------------------------------ + +The kernel has a relatively static mapping of HID items to evdev bits. +It cannot decide to dynamically transform a given device into something else +as it does not have the required context and any such transformation cannot be +undone (or even discovered) by userspace. + +However, some devices are useless with that static way of defining devices. For +example, the Microsoft Surface Dial is a pushbutton with haptic feedback that +is barely usable as of today. + +With eBPF, userspace can morph that device into a mouse, and convert the dial +events into wheel events. 
Also, the userspace program can set/unset the haptic +feedback depending on the context. For example, if a menu is visible on the +screen, we likely need to have a haptic click every 15 degrees. But when +scrolling in a web page, the user experience is better when the device emits +events at the highest resolution. + +Firewall +-------- + +What if we want to prevent other users from accessing a specific feature of a +device? (think a possibly broken firmware update entry point) + +With eBPF, we can intercept any HID command emitted to the device and +validate it or not. + +This also allows syncing the state between userspace and the +kernel/bpf program because we can intercept any incoming command. + +Tracing +------- + +The last usage is tracing events and all the fun we can have with BPF to summarize +and analyze events. + +Right now, tracing relies on hidraw. It works well except for a couple +of issues: + +1. if the driver doesn't export a hidraw node, we can't trace anything + (eBPF will be a "god-mode" there, so this may raise some eyebrows) +2. hidraw doesn't catch other processes' requests to the device, which + means that we have cases where we need to add printks to the kernel + to understand what is happening. + +High-level view of HID-BPF +========================== + +The main idea behind HID-BPF is that it works at the level of an array of bytes. +Thus, all of the parsing of the HID report and the HID report descriptor +must be implemented in the userspace component that loads the eBPF +program. + +For example, in the dead zone joystick from above, knowing which fields +in the data stream need to be set to ``0`` must be computed by userspace. + +A corollary of this is that HID-BPF doesn't know about the other subsystems +available in the kernel. *You can not directly emit input events through the +input API from eBPF*. + +When a BPF program needs to emit input events, it needs to talk HID, and rely +on the HID kernel processing to translate the HID data into input events. + +Available types of programs +=========================== + +HID-BPF is built "on top" of BPF, meaning that we use the tracing method to +declare our programs. + +HID-BPF has the following attachment types available: + +1. event processing/filtering with ``SEC("fmod_ret/hid_bpf_device_event")`` in libbpf +2. actions coming from userspace with ``SEC("syscall")`` in libbpf +3. change of the report descriptor with ``SEC("fmod_ret/hid_bpf_rdesc_fixup")`` in libbpf + +A ``hid_bpf_device_event`` program is called when an event is received from +the device. Thus we are in IRQ context and can act on the data or notify userspace. +And given that we are in IRQ context, we can not talk back to the device. + +A ``syscall`` means that userspace called the ``BPF_PROG_RUN`` facility of the bpf syscall. +This time, we can do any operations allowed by HID-BPF, and talking to the device is +allowed. + +Last, ``hid_bpf_rdesc_fixup`` is different from the others as there can be only one +BPF program of this type. This is called on ``probe`` from the driver and allows changing +the report descriptor from the BPF program. Once a ``hid_bpf_rdesc_fixup`` +program has been loaded, it is not possible to overwrite it unless the program which inserted it allows us to, by pinning the program and closing all of its fds pointing to it. + +Developer API: +============== + +User API data structures available in programs: +----------------------------------------------- + +.. kernel-doc:: include/uapi/linux/hid_bpf.h +..
kernel-doc:: include/linux/hid_bpf.h + +Available tracing functions to attach a HID-BPF program: +-------------------------------------------------------- + +.. kernel-doc:: drivers/hid/bpf/hid_bpf_dispatch.c + :functions: hid_bpf_device_event hid_bpf_rdesc_fixup + +Available API that can be used in all HID-BPF programs: +------------------------------------------------------- + +.. kernel-doc:: drivers/hid/bpf/hid_bpf_dispatch.c + :functions: hid_bpf_get_data + +Available API that can be used in syscall HID-BPF programs: +----------------------------------------------------------- + +.. kernel-doc:: drivers/hid/bpf/hid_bpf_dispatch.c + :functions: hid_bpf_attach_prog hid_bpf_hw_request hid_bpf_allocate_context hid_bpf_release_context + +General overview of a HID-BPF program +===================================== + +Accessing the data attached to the context +------------------------------------------ + +The ``struct hid_bpf_ctx`` doesn't export the ``data`` fields directly and to access +it, a bpf program needs to first call :c:func:`hid_bpf_get_data`. + +``offset`` can be any integer, but ``size`` needs to be constant, known at compile +time. + +This allows the following: + +1. for a given device, if we know that the report length will always be of a certain value, + we can request the ``data`` pointer to point at the full report length. + + The kernel will ensure we are using a correct size and offset and eBPF will ensure + the code will not attempt to read or write outside of the boundaries:: + + __u8 *data = hid_bpf_get_data(ctx, 0 /* offset */, 256 /* size */); + + if (!data) + return 0; /* ensure data is correct, now the verifier knows we + * have 256 bytes available */ + + bpf_printk("hello world: %02x %02x %02x", data[0], data[128], data[255]); + +2. if the report length is variable, but we know the value of ``X`` is always a 16-bit + integer, we can then have a pointer to that value only:: + + __u16 *x = hid_bpf_get_data(ctx, offset, sizeof(*x)); + + if (!x) + return 0; /* something went wrong */ + + *x += 1; /* increment X by one */ + +Effect of a HID-BPF program +--------------------------- + +For all HID-BPF attachment types except for :c:func:`hid_bpf_rdesc_fixup`, several eBPF +programs can be attached to the same device. + +Unless ``HID_BPF_FLAG_INSERT_HEAD`` is added to the flags while attaching the +program, the new program is appended at the end of the list. +``HID_BPF_FLAG_INSERT_HEAD`` will insert the new program at the beginning of the +list which is useful for e.g. tracing where we need to get the unprocessed events +from the device. + +Note that if there are multiple programs using the ``HID_BPF_FLAG_INSERT_HEAD`` flag, +only the most recently loaded one is actually the first in the list. + +``SEC("fmod_ret/hid_bpf_device_event")`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Whenever a matching event is raised, the eBPF programs are called one after the other +and are working on the same data buffer. + +If a program changes the data associated with the context, the next one will see +the modified data but it will have *no* idea of what the original data was. + +Once all the programs are run and return ``0`` or a positive value, the rest of the +HID stack will work on the modified data, with the ``size`` field of the last hid_bpf_ctx +being the new size of the input stream of data. + +A BPF program returning a negative error discards the event, i.e. this event will not be +processed by the HID stack. Clients (hidraw, input, LEDs) will **not** see this event. 
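As an illustration of the two rules above (programs sharing the same data buffer, and a negative return value discarding the event), here is a minimal sketch of such a filter. It is not taken from the samples; the 4-byte size and the report ID are made-up values::

    SEC("?fmod_ret/hid_bpf_device_event")
    int BPF_PROG(drop_vendor_report, struct hid_bpf_ctx *hctx)
    {
        /* ask for the first 4 bytes, enough to look at the report ID */
        __u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 4 /* size */);

        if (!data)
            return 0; /* EPERM check */

        /* returning a negative error discards the event: none of the
         * clients (hidraw, input, LEDs) will see it
         */
        if (data[0] == 0x42 /* hypothetical vendor report ID */)
            return -1;

        return 0;
    }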
+ +``SEC("syscall")`` +~~~~~~~~~~~~~~~~~~ + +``syscall`` programs are not attached to a given device. To tell which device we are working +with, userspace needs to refer to the device by its unique system id (the last 4 numbers +in the sysfs path: ``/sys/bus/hid/devices/xxxx:yyyy:zzzz:0000``). + +To retrieve a context associated with the device, the program must call +:c:func:`hid_bpf_allocate_context` and must release it with :c:func:`hid_bpf_release_context` +before returning. +Once the context is retrieved, one can also request a pointer to kernel memory with +:c:func:`hid_bpf_get_data`. This memory is big enough to support all input/output/feature +reports of the given device. + +``SEC("fmod_ret/hid_bpf_rdesc_fixup")`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The ``hid_bpf_rdesc_fixup`` program works in a similar manner to +``.report_fixup`` of ``struct hid_driver``. + +When the device is probed, the kernel sets the data buffer of the context with the +content of the report descriptor. The memory associated with that buffer is +``HID_MAX_DESCRIPTOR_SIZE`` (currently 4kB). + +The eBPF program can modify the data buffer at will and the kernel uses the +modified content and size as the report descriptor. + +Whenever a ``SEC("fmod_ret/hid_bpf_rdesc_fixup")`` program is attached (if no +program was attached before), the kernel immediately disconnects the HID device +and does a reprobe. + +In the same way, when the ``SEC("fmod_ret/hid_bpf_rdesc_fixup")`` program is +detached, the kernel issues a disconnect on the device. + +There is no ``detach`` facility in HID-BPF. Detaching a program happens when +all the user space file descriptors pointing at a program are closed. +Thus, if we need to replace a report descriptor fixup, some cooperation is +required from the owner of the original report descriptor fixup. +The previous owner will likely pin the program in the bpffs, and we can then +replace it through normal bpf operations. + +Attaching a bpf program to a device +=================================== + +``libbpf`` does not export any helper to attach a HID-BPF program. +Users need to use a dedicated ``syscall`` program which will call +``hid_bpf_attach_prog(hid_id, program_fd, flags)``. + +``hid_id`` is the unique system ID of the HID device (the last 4 numbers in the +sysfs path: ``/sys/bus/hid/devices/xxxx:yyyy:zzzz:0000``). + +``program_fd`` is the opened file descriptor of the program to attach. + +``flags`` is of type ``enum hid_bpf_attach_flags``. + +We can not rely on hidraw to bind a BPF program to a HID device. hidraw is an +artefact of the processing of the HID device, and is not stable. Some drivers +even disable it, which removes the tracing capabilities on those devices +(where it is interesting to get the non-hidraw traces). + +On the other hand, the ``hid_id`` is stable for the entire life of the HID device, +even if we change its report descriptor. + +Given that hidraw is not stable when the device disconnects/reconnects, we recommend +accessing the current report descriptor of the device through sysfs. +This is available at ``/sys/bus/hid/devices/BUS:VID:PID.000N/report_descriptor`` as a +binary stream. + +Parsing the report descriptor is the responsibility of the BPF programmer or the userspace +component that loads the eBPF program. 
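For instance, the loading component could fetch that binary stream with a small helper along these lines (a plain userspace sketch; the function name and buffer handling are ours, not part of the series)::

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* read the current report descriptor of a device from sysfs;
     * sysfs_path is e.g. /sys/bus/hid/devices/BUS:VID:PID.000N
     */
    static ssize_t read_report_descriptor(const char *sysfs_path,
                                          unsigned char *buf, size_t buf_size)
    {
        char rdesc_path[1024];
        ssize_t size;
        int fd;

        snprintf(rdesc_path, sizeof(rdesc_path), "%s/report_descriptor",
                 sysfs_path);

        fd = open(rdesc_path, O_RDONLY);
        if (fd < 0)
            return -1;

        /* the descriptor is at most HID_MAX_DESCRIPTOR_SIZE (4kB) bytes */
        size = read(fd, buf, buf_size);
        close(fd);

        return size;
    }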
+ +An (almost) complete example of a BPF enhanced HID device +========================================================= + +*Foreword: for most parts, this could be implemented as a kernel driver* + +Let's imagine we have a new tablet device that has some haptic capabilities +to simulate the surface the user is scratching on. This device would also have +a specific 3-position switch to toggle between *pencil on paper*, *crayon on a wall* +and *brush on a painting canvas*. To make things even better, we can control the +physical position of the switch through a feature report. + +And of course, the switch relies on some userspace component to control the +haptic feature of the device itself. + +Filtering events +---------------- + +The first step consists of filtering events from the device. Given that the switch +position is actually reported in the flow of the pen events, using hidraw to implement +that filtering would mean that we wake up userspace for every single event. + +This is OK for libinput, but having an external library that is just interested in +one byte in the report is less than ideal. + +For that, we can create a basic skeleton for our BPF program:: + + #include "vmlinux.h" + #include <bpf/bpf_helpers.h> + #include <bpf/bpf_tracing.h> + + /* HID programs need to be GPL */ + char _license[] SEC("license") = "GPL"; + + /* HID-BPF kfunc API definitions */ + extern __u8 *hid_bpf_get_data(struct hid_bpf_ctx *ctx, + unsigned int offset, + const size_t __sz) __ksym; + extern int hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, u32 flags) __ksym; + + struct { + __uint(type, BPF_MAP_TYPE_RINGBUF); + __uint(max_entries, 4096 * 64); + } ringbuf SEC(".maps"); + + struct attach_prog_args { + int prog_fd; + unsigned int hid; + unsigned int flags; + int retval; + }; + + SEC("syscall") + int attach_prog(struct attach_prog_args *ctx) + { + ctx->retval = hid_bpf_attach_prog(ctx->hid, + ctx->prog_fd, + ctx->flags); + return 0; + } + + __u8 current_value = 0; + + SEC("?fmod_ret/hid_bpf_device_event") + int BPF_PROG(filter_switch, struct hid_bpf_ctx *hid_ctx) + { + __u8 *data = hid_bpf_get_data(hid_ctx, 0 /* offset */, 192 /* size */); + __u8 *buf; + + if (!data) + return 0; /* EPERM check */ + + if (current_value != data[152]) { + buf = bpf_ringbuf_reserve(&ringbuf, 1, 0); + if (!buf) + return 0; + + *buf = data[152]; + + bpf_ringbuf_submit(buf, 0); + + current_value = data[152]; + } + + return 0; + } + +To attach ``filter_switch``, userspace needs to call the ``attach_prog`` syscall +program first:: + + static int attach_filter(struct hid *hid_skel, int hid_id) + { + int err, prog_fd; + struct attach_prog_args args = { + .hid = hid_id, + }; + DECLARE_LIBBPF_OPTS(bpf_test_run_opts, tattrs, + .ctx_in = &args, + .ctx_size_in = sizeof(args), + ); + + args.prog_fd = bpf_program__fd(hid_skel->progs.filter_switch); + + prog_fd = bpf_program__fd(hid_skel->progs.attach_prog); + + err = bpf_prog_test_run_opts(prog_fd, &tattrs); + return err; + } + +Our userspace program can now listen to notifications on the ring buffer, and +is woken up only when the value changes. + +Controlling the device +---------------------- + +To be able to change the haptic feedback from the tablet, the userspace program +needs to emit a feature report on the device itself. 
+ +Instead of using hidraw for that, we can create a ``SEC("syscall")`` program +that talks to the device:: + + /* some more HID-BPF kfunc API definitions */ + extern struct hid_bpf_ctx *hid_bpf_allocate_context(unsigned int hid_id) __ksym; + extern void hid_bpf_release_context(struct hid_bpf_ctx *ctx) __ksym; + extern int hid_bpf_hw_request(struct hid_bpf_ctx *ctx, + __u8 *data, + size_t len, + enum hid_report_type type, + enum hid_class_request reqtype) __ksym; + + + struct hid_send_haptics_args { + /* data needs to come at offset 0 so we can do a memcpy into it */ + __u8 data[10]; + unsigned int hid; + }; + + SEC("syscall") + int send_haptic(struct hid_send_haptics_args *args) + { + struct hid_bpf_ctx *ctx; + int ret = 0; + + ctx = hid_bpf_allocate_context(args->hid); + if (!ctx) + return 0; /* EPERM check */ + + ret = hid_bpf_hw_request(ctx, + args->data, + 10, + HID_FEATURE_REPORT, + HID_REQ_SET_REPORT); + + hid_bpf_release_context(ctx); + + return ret; + } + +And then userspace needs to call that program directly:: + + static int set_haptic(struct hid *hid_skel, int hid_id, __u8 haptic_value) + { + int err, prog_fd; + struct hid_send_haptics_args args = { + .hid = hid_id, + }; + DECLARE_LIBBPF_OPTS(bpf_test_run_opts, tattrs, + .ctx_in = &args, + .ctx_size_in = sizeof(args), + ); + + args.data[0] = 0x02; /* report ID of the feature on our device */ + args.data[1] = haptic_value; + + prog_fd = bpf_program__fd(hid_skel->progs.send_haptic); + + err = bpf_prog_test_run_opts(prog_fd, &tattrs); + return err; + } + +Now our userspace program is aware of the haptic state and can control it. The +program could make this state further available to other userspace programs +(e.g. via a DBus API). + +The interesting bit here is that we did not create a new kernel API for this, +which means that if there is a bug in our implementation, we can change the +interface with the kernel at will, because the userspace application is +responsible for its own usage. diff --git a/Documentation/hid/index.rst b/Documentation/hid/index.rst index e50f513c579c..b2028f382f11 100644 --- a/Documentation/hid/index.rst +++ b/Documentation/hid/index.rst @@ -11,6 +11,7 @@ Human Interface Devices (HID) hidraw hid-sensor hid-transport + hid-bpf
uhid
Hello:
This series was applied to bpf/bpf-next.git (master) by Alexei Starovoitov ast@kernel.org:
On Wed, 24 Aug 2022 15:40:30 +0200 you wrote:
Hi,
here comes the v9 of the HID-BPF series.
Again, for a full explanation of HID-BPF, please refer to the last patch in this series (23/23).
[...]
Here is the summary with links: - [bpf-next,v9,01/23] bpf/verifier: allow all functions to read user provided context (no matching commit) - [bpf-next,v9,02/23] bpf/verifier: do not clear meta in check_mem_size (no matching commit) - [bpf-next,v9,03/23] selftests/bpf: add test for accessing ctx from syscall program type (no matching commit) - [bpf-next,v9,04/23] bpf/verifier: allow kfunc to return an allocated mem (no matching commit) - [bpf-next,v9,05/23] selftests/bpf: Add tests for kfunc returning a memory pointer (no matching commit) - [bpf-next,v9,06/23] bpf: prepare for more bpf syscall to be used from kernel and user space. https://git.kernel.org/bpf/bpf-next/c/b88df6979682 - [bpf-next,v9,07/23] libbpf: add map_get_fd_by_id and map_delete_elem in light skeleton https://git.kernel.org/bpf/bpf-next/c/343949e10798 - [bpf-next,v9,08/23] HID: core: store the unique system identifier in hid_device (no matching commit) - [bpf-next,v9,09/23] HID: export hid_report_type to uapi (no matching commit) - [bpf-next,v9,10/23] HID: convert defines of HID class requests into a proper enum (no matching commit) - [bpf-next,v9,11/23] HID: Kconfig: split HID support and hid-core compilation (no matching commit) - [bpf-next,v9,12/23] HID: initial BPF implementation (no matching commit) - [bpf-next,v9,13/23] selftests/bpf: add tests for the HID-bpf initial implementation (no matching commit) - [bpf-next,v9,14/23] HID: bpf: allocate data memory for device_event BPF programs (no matching commit) - [bpf-next,v9,15/23] selftests/bpf/hid: add test to change the report size (no matching commit) - [bpf-next,v9,16/23] HID: bpf: introduce hid_hw_request() (no matching commit) - [bpf-next,v9,17/23] selftests/bpf: add tests for bpf_hid_hw_request (no matching commit) - [bpf-next,v9,18/23] HID: bpf: allow to change the report descriptor (no matching commit) - [bpf-next,v9,19/23] selftests/bpf: add report descriptor fixup tests (no matching commit) - [bpf-next,v9,20/23] selftests/bpf: Add a test for BPF_F_INSERT_HEAD (no matching commit) - [bpf-next,v9,21/23] samples/bpf: add new hid_mouse example (no matching commit) - [bpf-next,v9,22/23] HID: bpf: add Surface Dial example (no matching commit) - [bpf-next,v9,23/23] Documentation: add HID-BPF docs (no matching commit)
You are awesome, thank you!