From: Roberto Sassu roberto.sassu@huawei.com
One desirable security feature is the ability to restrict the import of data into a system based on data authenticity. If data import can be restricted, it becomes possible to enforce a system-wide policy based on the signing keys the system owner trusts.
This feature is widely used in the kernel. For example, if the restriction is enabled, kernel modules can be loaded only if they are signed with a key whose public part is in the primary or secondary keyring.
The same capability is useful for eBPF as well, for example to authenticate the data an eBPF program bases its security decisions on.
After a discussion on the eBPF mailing list, it was decided that the stated goal should be accomplished by introducing four new kfuncs: bpf_lookup_user_key() and bpf_lookup_system_key(), for retrieving a keyring with keys trusted for signature verification, respectively from its serial and from a pre-determined ID; bpf_key_put(), to release the reference obtained with the former two kfuncs; and bpf_verify_pkcs7_signature(), for verifying PKCS#7 signatures.
Besides the key serial, bpf_lookup_user_key() also accepts key lookup flags that influence the behavior of the lookup. bpf_lookup_system_key() accepts the pre-determined IDs defined in include/linux/verification.h.
bpf_key_put() accepts the new bpf_key structure, which was introduced to record whether its other member, a key pointer, is valid or not. The reason is that verify_pkcs7_signature() also accepts invalid pointers, set to a pre-determined ID, to select a system-defined keyring, while key_put() must be called only for valid key pointers.
Since the two key lookup functions allocate memory and one increments a key reference count, they must be used in conjunction with bpf_key_put(). The latter must be called only if the lookup functions returned a non-NULL pointer. The verifier denies the execution of eBPF programs that don't respect this rule.
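As an illustration, here is a minimal sketch of the expected acquire/release pattern on the eBPF side (the kfunc prototypes mirror the kernel-side definitions introduced later in the series; the attach point and the way the serial reaches the program are placeholders):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

extern struct bpf_key *bpf_lookup_user_key(__u32 serial, __u64 flags) __ksym;
extern void bpf_key_put(struct bpf_key *key) __ksym;

char _license[] SEC("license") = "GPL";

__u32 monitored_serial;			/* placeholder, set by user space */

SEC("lsm.s/bpf")
int BPF_PROG(lookup_put, int cmd, union bpf_attr *attr, unsigned int size)
{
	struct bpf_key *bkey;

	bkey = bpf_lookup_user_key(monitored_serial, 0);
	if (!bkey)			/* call bpf_key_put() only on non-NULL */
		return 0;

	/* ... use the key with other key-related kfuncs ... */

	bpf_key_put(bkey);		/* mandatory, or the verifier rejects the program */
	return 0;
}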
The two key lookup functions are alternatives; which one to use depends on the use case. While bpf_lookup_user_key() provides great flexibility, it seems suboptimal in terms of security guarantees: even if the eBPF program is assumed to be trusted, the serial used to obtain the key pointer might come from untrusted user space, which might not choose a key that the system administrator approves for enforcing a mandatory policy.
bpf_lookup_system_key() instead provides much stronger guarantees, especially if the pre-determined ID is not passed by user space but is hardcoded in the eBPF program, and that program is signed. In this case, bpf_verify_pkcs7_signature() will always perform signature verification with keys that the system administrator approves, i.e. those in the primary, secondary or platform keyring.
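For instance, a sketch of this stronger pattern, with the keyring ID hardcoded in the program (same preamble and declaration style as the previous sketch; the attach point is again a placeholder):

extern struct bpf_key *bpf_lookup_system_key(__u64 id) __ksym;
extern void bpf_key_put(struct bpf_key *key) __ksym;

SEC("lsm.s/bpf")
int BPF_PROG(trusted_keyring, int cmd, union bpf_attr *attr, unsigned int size)
{
	/* 0 selects the primary keyring; the VERIFY_USE_SECONDARY_KEYRING
	 * and VERIFY_USE_PLATFORM_KEYRING IDs from
	 * include/linux/verification.h select the other trusted keyrings.
	 */
	struct bpf_key *bkey = bpf_lookup_system_key(0);

	if (!bkey)
		return 0;

	/* ... pass bkey to bpf_verify_pkcs7_signature() ... */

	bpf_key_put(bkey);
	return 0;
}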
Nevertheless, key permission checks still need to be done accurately. Since bpf_lookup_user_key() cannot determine how a key will be used by other kfuncs, it has to defer the permission check to the kfunc that actually uses the key. It does so by calling lookup_user_key() with KEY_DEFER_PERM_CHECK as the needed permission. Later, bpf_verify_pkcs7_signature(), if called, completes the permission check by calling key_validate(). It does not need to call key_task_permission() with the KEY_NEED_SEARCH permission, as that is already done elsewhere by the key subsystem. Future kfuncs using the bpf_key structure need to implement the proper checks as well.
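To make the split of responsibilities explicit, here is a condensed, illustrative sketch of the two sides (the function names are placeholders; the real code lives in the kfuncs introduced by patches 6 and 7):

#include <linux/err.h>
#include <linux/key.h>

/* bpf_lookup_user_key() side: the intended use of the key is unknown at
 * this point, so the permission check is deferred.
 */
static struct key *lookup_side(key_serial_t serial, unsigned long flags)
{
	key_ref_t key_ref = lookup_user_key(serial, flags, KEY_DEFER_PERM_CHECK);

	return IS_ERR(key_ref) ? NULL : key_ref_to_ptr(key_ref);
}

/* bpf_verify_pkcs7_signature() side: complete the deferred check before
 * using the key; key_task_permission() with KEY_NEED_SEARCH is already
 * done elsewhere by the key subsystem.
 */
static int usage_side(struct key *key)
{
	int ret = key_validate(key);

	if (ret < 0)
		return ret;

	/* ... verify_pkcs7_signature(..., key, ...) ... */
	return 0;
}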
Finally, bpf_verify_pkcs7_signature() accepts the data and signature to verify as eBPF dynamic pointers, to minimize the number of kfunc parameters, and the keyring containing the keys for signature verification as a bpf_key structure, as returned by one of the two key lookup functions.
bpf_lookup_user_key() and bpf_verify_pkcs7_signature() can be called only from sleepable programs, because of memory allocation and crypto operations. For example, the lsm.s/bpf attach point is suitable, while fexit/array_map_update_elem is not.
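Putting the pieces together, a sketch loosely modeled on the selftest added later in the series (buffer sizes, the global variable names and the way the data and signature reach the program are illustrative):

#include "vmlinux.h"
#include <errno.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

extern struct bpf_key *bpf_lookup_system_key(__u64 id) __ksym;
extern void bpf_key_put(struct bpf_key *key) __ksym;
extern int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_ptr,
				      struct bpf_dynptr *sig_ptr,
				      struct bpf_key *trusted_keyring) __ksym;

char _license[] SEC("license") = "GPL";

__u8 data[4096];			/* filled by user space */
__u8 sig[1024];
__u32 data_len, sig_len;

SEC("lsm.s/bpf")
int BPF_PROG(verify, int cmd, union bpf_attr *attr, unsigned int size)
{
	struct bpf_dynptr data_ptr, sig_ptr;
	struct bpf_key *trusted_keyring;
	__u32 dlen = data_len, slen = sig_len;
	int ret;

	if (dlen > sizeof(data) || slen > sizeof(sig))
		return 0;

	/* Wrap the buffers in local dynptrs, currently the only dynptr
	 * type accepted by kfuncs.
	 */
	if (bpf_dynptr_from_mem(data, dlen, 0, &data_ptr))
		return 0;
	if (bpf_dynptr_from_mem(sig, slen, 0, &sig_ptr))
		return 0;

	trusted_keyring = bpf_lookup_system_key(0);	/* primary keyring */
	if (!trusted_keyring)
		return 0;

	ret = bpf_verify_pkcs7_signature(&data_ptr, &sig_ptr, trusted_keyring);

	bpf_key_put(trusted_keyring);

	return ret ? -EPERM : 0;
}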
The correctness of the implementation and usage of the new kfuncs is checked by the newly introduced tests.
For the sake of completeness, the patch set includes a patch from another author on which it depends. It is organized as follows.
Patch 1 from KP Singh allows kfuncs to be used by LSM programs. Patch 2 splits is_dynptr_reg_valid_init() and introduces is_dynptr_type_expected(), so that the cause of a negative result of a dynamic pointer check can be determined more precisely. Patch 3 allows dynamic pointers to be used as kfunc parameters. Patch 4 exports bpf_dynptr_get_size(), to obtain the real size of the data carried by a dynamic pointer. Patch 5 makes some key-related definitions available to new eBPF kfuncs and programs. Patch 6 introduces the bpf_lookup_*_key() and bpf_key_put() kfuncs. Patch 7 introduces the bpf_verify_pkcs7_signature() kfunc. Patch 8 changes the testing kernel configuration to compile everything as built-in. Finally, patches 9-12 introduce the tests.
Changelog
v15:
 - Add kfunc_dynptr_param test to deny list for s390x

v14:
 - Explain that is_dynptr_type_expected() will be useful also for BTF (suggested by Joanne)
 - Rename KEY_LOOKUP_FLAGS_ALL to KEY_LOOKUP_ALL (suggested by Jarkko)
 - Swap declaration of spi and dynptr_type in is_dynptr_type_expected() (suggested by Joanne)
 - Reimplement kfunc dynptr tests with a regular eBPF program instead of executing them with test_verifier (suggested by Joanne)
 - Make key lookup flags as enum so that they are automatically exported through BTF (suggested by Alexei)

v13:
 - Split is_dynptr_reg_valid_init() and introduce is_dynptr_type_expected() to see if the dynamic pointer type passed as argument to a kfunc is supported (suggested by Kumar)
 - Add forward declaration of struct key in include/linux/bpf.h (suggested by Song)
 - Declare mask for key lookup flags, remove key_lookup_flags_check() (suggested by Jarkko and KP)
 - Allow only certain dynamic pointer types (currently, local) to be passed as argument to kfuncs (suggested by Kumar)
 - For each dynamic pointer parameter in kfunc, additionally check if the passed pointer is to the stack (suggested by Kumar)
 - Split the validity/initialization and dynamic pointer type check also in the verifier, and adjust the expected error message in the test (a test for an unexpected dynptr type passed to a helper cannot be added due to missing suitable helpers, but this case has been tested manually)
 - Add verifier tests to check the dynamic pointers passed as argument to kfuncs (suggested by Kumar)

v12:
 - Put lookup_key and verify_pkcs7_sig tests in deny list for s390x (JIT does not support calling kernel function)

v11:
 - Move stringify_struct() macro to include/linux/btf.h (suggested by Daniel)
 - Change kernel configuration options in tools/testing/selftests/bpf/config* from =m to =y

v10:
 - Introduce key_lookup_flags_check() and system_keyring_id_check() inline functions to check parameters (suggested by KP)
 - Fix descriptions and comment of key-related kfuncs (suggested by KP)
 - Register kfunc set only once (suggested by Alexei)
 - Move needed kernel options to the architecture-independent configuration for testing

v9:
 - Drop patch to introduce KF_SLEEPABLE kfunc flag (already merged)
 - Rename valid_ptr member of bpf_key to has_ref (suggested by Daniel)
 - Check dynamic pointers in kfunc definition with bpf_dynptr_kern struct definition instead of string, to detect structure renames (suggested by Daniel)
 - Explicitly say that we permit initialized dynamic pointers in kfunc definition (suggested by Daniel)
 - Remove noinline __weak from kfuncs definition (reported by Daniel)
 - Simplify key lookup flags check in bpf_lookup_user_key() (suggested by Daniel)
 - Explain the reason for deferring key permission check (suggested by Daniel)
 - Allocate memory with GFP_ATOMIC in bpf_lookup_system_key(), and remove KF_SLEEPABLE kfunc flag from kfunc declaration (suggested by Daniel)
 - Define only one kfunc set and remove the loop for registration (suggested by Alexei)

v8:
 - Define the new bpf_key structure to carry the key pointer and whether that pointer is valid or not (suggested by Daniel)
 - Drop patch to mark a kfunc parameter with the __maybe_null suffix
 - Improve documentation of kfuncs
 - Introduce bpf_lookup_system_key() to obtain a key pointer suitable for verify_pkcs7_signature() (suggested by Daniel)
 - Use the new kfunc registration API
 - Drop patch to test the __maybe_null suffix
 - Add tests for bpf_lookup_system_key()

v7:
 - Add support for using dynamic and NULL pointers in kfunc (suggested by Alexei)
 - Add new kfunc-related tests

v6:
 - Switch back to key lookup helpers + signature verification (until v5), and defer permission check from bpf_lookup_user_key() to bpf_verify_pkcs7_signature()
 - Add additional key lookup test to illustrate the usage of the KEY_LOOKUP_CREATE flag and validate the flags (suggested by Daniel)
 - Make description of flags of bpf_lookup_user_key() more user-friendly (suggested by Daniel)
 - Fix validation of flags parameter in bpf_lookup_user_key() (reported by Daniel)
 - Rename bpf_verify_pkcs7_signature() keyring-related parameters to user_keyring and system_keyring to make their purpose more clear
 - Accept keyring-related parameters of bpf_verify_pkcs7_signature() as alternatives (suggested by KP)
 - Replace unsigned long type with u64 in helper declaration (suggested by Daniel)
 - Extend the bpf_verify_pkcs7_signature() test by calling the helper without data, by ensuring that the helper enforces the keyring-related parameters as alternatives, by ensuring that the helper rejects inaccessible and expired keyrings, and by checking all system keyrings
 - Move bpf_lookup_user_key() and bpf_key_put() usage tests to ref_tracking.c (suggested by John)
 - Call bpf_lookup_user_key() and bpf_key_put() only in sleepable programs

v5:
 - Move KEY_LOOKUP_ to include/linux/key.h for validation of bpf_verify_pkcs7_signature() parameter
 - Remove bpf_lookup_user_key() and bpf_key_put() helpers, and the corresponding tests
 - Replace struct key parameter of bpf_verify_pkcs7_signature() with the keyring serial and lookup flags
 - Call lookup_user_key() and key_put() in bpf_verify_pkcs7_signature() code, to ensure that the retrieved key is used according to the permission requested at lookup time
 - Clarified keyring precedence in the description of bpf_verify_pkcs7_signature() (suggested by John)
 - Remove newline in the second argument of ASSERT_
 - Fix helper prototype regular expression in bpf_doc.py

v4:
 - Remove bpf_request_key_by_id(), don't return an invalid pointer that other helpers can use
 - Pass the keyring ID (without ULONG_MAX, suggested by Alexei) to bpf_verify_pkcs7_signature()
 - Introduce bpf_lookup_user_key() and bpf_key_put() helpers (suggested by Alexei)
 - Add lookup_key_norelease test, to ensure that the verifier blocks eBPF programs which don't decrement the key reference count
 - Parse raw PKCS#7 signature instead of module-style signature in the verify_pkcs7_signature test (suggested by Alexei)
 - Parse kernel module in user space and pass raw PKCS#7 signature to the eBPF program for signature verification

v3:
 - Rename bpf_verify_signature() back to bpf_verify_pkcs7_signature() to avoid managing different parameters for each signature verification function in one helper (suggested by Daniel)
 - Use dynamic pointers and export bpf_dynptr_get_size() (suggested by Alexei)
 - Introduce bpf_request_key_by_id() to give more flexibility to the caller of bpf_verify_pkcs7_signature() to retrieve the appropriate keyring (suggested by Alexei)
 - Fix test by reordering the gcc command line, always compile sign-file
 - Improve helper support check mechanism in the test

v2:
 - Rename bpf_verify_pkcs7_signature() to a more generic bpf_verify_signature() and pass the signature type (suggested by KP)
 - Move the helper and prototype declaration under #ifdef so that user space can probe for support for the helper (suggested by Daniel)
 - Describe better the keyring types (suggested by Daniel)
 - Include linux/bpf.h instead of vmlinux.h to avoid implicit or redeclaration
 - Make the test selfcontained (suggested by Alexei)

v1:
 - Don't define new map flag but introduce simple wrapper of verify_pkcs7_signature() (suggested by Alexei and KP)
KP Singh (1):
  bpf: Allow kfuncs to be used in LSM programs

Roberto Sassu (11):
  bpf: Move dynptr type check to is_dynptr_type_expected()
  btf: Allow dynamic pointer parameters in kfuncs
  bpf: Export bpf_dynptr_get_size()
  KEYS: Move KEY_LOOKUP_ to include/linux/key.h and define KEY_LOOKUP_ALL
  bpf: Add bpf_lookup_*_key() and bpf_key_put() kfuncs
  bpf: Add bpf_verify_pkcs7_signature() kfunc
  selftests/bpf: Compile kernel with everything as built-in
  selftests/bpf: Add verifier tests for bpf_lookup_*_key() and bpf_key_put()
  selftests/bpf: Add additional tests for bpf_lookup_*_key()
  selftests/bpf: Add test for bpf_verify_pkcs7_signature() kfunc
  selftests/bpf: Add tests for dynamic pointers parameters in kfuncs
 include/linux/bpf.h                           |   9 +
 include/linux/bpf_verifier.h                  |   5 +
 include/linux/btf.h                           |   9 +
 include/linux/key.h                           |   6 +
 include/linux/verification.h                  |   8 +
 kernel/bpf/btf.c                              |  34 ++
 kernel/bpf/helpers.c                          |   2 +-
 kernel/bpf/verifier.c                         |  35 +-
 kernel/trace/bpf_trace.c                      | 180 ++++++++
 security/keys/internal.h                      |   2 -
 tools/testing/selftests/bpf/DENYLIST.s390x    |   3 +
 tools/testing/selftests/bpf/Makefile          |  14 +-
 tools/testing/selftests/bpf/config            |  32 +-
 tools/testing/selftests/bpf/config.x86_64     |   7 +-
 .../testing/selftests/bpf/prog_tests/dynptr.c |   2 +-
 .../bpf/prog_tests/kfunc_dynptr_param.c       | 103 +++++
 .../selftests/bpf/prog_tests/lookup_key.c     | 112 +++++
 .../bpf/prog_tests/verify_pkcs7_sig.c         | 399 ++++++++++++++++++
 .../bpf/progs/test_kfunc_dynptr_param.c       |  57 +++
 .../selftests/bpf/progs/test_lookup_key.c     |  46 ++
 .../bpf/progs/test_verify_pkcs7_sig.c         | 100 +++++
 tools/testing/selftests/bpf/test_verifier.c   |   3 +-
 .../selftests/bpf/verifier/ref_tracking.c     | 139 ++++++
 .../testing/selftests/bpf/verify_sig_setup.sh | 104 +++++
 24 files changed, 1376 insertions(+), 35 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/lookup_key.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/verify_pkcs7_sig.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_lookup_key.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_verify_pkcs7_sig.c
 create mode 100755 tools/testing/selftests/bpf/verify_sig_setup.sh
From: KP Singh kpsingh@kernel.org
In preparation for the addition of new kfuncs, allow kfuncs defined in the tracing subsystem to be used in LSM programs by mapping the LSM program type to the TRACING hook.
Signed-off-by: KP Singh kpsingh@kernel.org
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
---
 kernel/bpf/btf.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 903719b89238..e49b3b6d48ad 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -7243,6 +7243,7 @@ static int bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
 	case BPF_PROG_TYPE_STRUCT_OPS:
 		return BTF_KFUNC_HOOK_STRUCT_OPS;
 	case BPF_PROG_TYPE_TRACING:
+	case BPF_PROG_TYPE_LSM:
 		return BTF_KFUNC_HOOK_TRACING;
 	case BPF_PROG_TYPE_SYSCALL:
 		return BTF_KFUNC_HOOK_SYSCALL;
On Mon, 5 Sept 2022 at 16:34, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
From: KP Singh kpsingh@kernel.org
In preparation for the addition of new kfuncs, allow kfuncs defined in the tracing subsystem to be used in LSM programs by mapping the LSM program type to the TRACING hook.
Signed-off-by: KP Singh kpsingh@kernel.org Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com
From: Roberto Sassu roberto.sassu@huawei.com
Move dynptr type check to is_dynptr_type_expected() from is_dynptr_reg_valid_init(), so that callers can better determine the cause of a negative result (dynamic pointer not valid/initialized, dynamic pointer of the wrong type). It will be useful for example for BTF, to restrict which dynamic pointer types can be passed to kfuncs, as initially only the local type will be supported.
Also, splitting makes the code more readable, since checking the dynamic pointer type is not necessarily related to validity and initialization.
Split the validity/initialization and dynamic pointer type check also in the verifier, and adjust the expected error message in the test (a test for an unexpected dynptr type passed to a helper cannot be added due to missing suitable helpers, but this case has been tested manually).
Cc: Joanne Koong joannelkoong@gmail.com
Cc: Kumar Kartikeya Dwivedi memxor@gmail.com
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
---
 kernel/bpf/verifier.c                         | 35 ++++++++++++++-----
 .../testing/selftests/bpf/prog_tests/dynptr.c |  2 +-
 2 files changed, 28 insertions(+), 9 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 068b20ed34d2..10b3c0a81d09 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -779,8 +779,8 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_ return true; }
-static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg, - enum bpf_arg_type arg_type) +static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, + struct bpf_reg_state *reg) { struct bpf_func_state *state = func(env, reg); int spi = get_spi(reg->off); @@ -796,11 +796,24 @@ static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, struct bpf_re return false; }
+ return true; +} + +static bool is_dynptr_type_expected(struct bpf_verifier_env *env, + struct bpf_reg_state *reg, + enum bpf_arg_type arg_type) +{ + struct bpf_func_state *state = func(env, reg); + enum bpf_dynptr_type dynptr_type; + int spi = get_spi(reg->off); + /* ARG_PTR_TO_DYNPTR takes any type of dynptr */ if (arg_type == ARG_PTR_TO_DYNPTR) return true;
- return state->stack[spi].spilled_ptr.dynptr.type == arg_to_dynptr_type(arg_type); + dynptr_type = arg_to_dynptr_type(arg_type); + + return state->stack[spi].spilled_ptr.dynptr.type == dynptr_type; }
/* The reg state of a pointer or a bounded scalar was saved when @@ -6050,21 +6063,27 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg, }
meta->uninit_dynptr_regno = regno; - } else if (!is_dynptr_reg_valid_init(env, reg, arg_type)) { + } else if (!is_dynptr_reg_valid_init(env, reg)) { + verbose(env, + "Expected an initialized dynptr as arg #%d\n", + arg + 1); + return -EINVAL; + } else if (!is_dynptr_type_expected(env, reg, arg_type)) { const char *err_extra = "";
switch (arg_type & DYNPTR_TYPE_FLAG_MASK) { case DYNPTR_TYPE_LOCAL: - err_extra = "local "; + err_extra = "local"; break; case DYNPTR_TYPE_RINGBUF: - err_extra = "ringbuf "; + err_extra = "ringbuf"; break; default: + err_extra = "<unknown>"; break; } - - verbose(env, "Expected an initialized %sdynptr as arg #%d\n", + verbose(env, + "Expected a dynptr of type %s as arg #%d\n", err_extra, arg + 1); return -EINVAL; } diff --git a/tools/testing/selftests/bpf/prog_tests/dynptr.c b/tools/testing/selftests/bpf/prog_tests/dynptr.c index bcf80b9f7c27..8fc4e6c02bfd 100644 --- a/tools/testing/selftests/bpf/prog_tests/dynptr.c +++ b/tools/testing/selftests/bpf/prog_tests/dynptr.c @@ -30,7 +30,7 @@ static struct { {"invalid_helper2", "Expected an initialized dynptr as arg #3"}, {"invalid_write1", "Expected an initialized dynptr as arg #1"}, {"invalid_write2", "Expected an initialized dynptr as arg #3"}, - {"invalid_write3", "Expected an initialized ringbuf dynptr as arg #1"}, + {"invalid_write3", "Expected an initialized dynptr as arg #1"}, {"invalid_write4", "arg 1 is an unacquired reference"}, {"invalid_read1", "invalid read from stack"}, {"invalid_read2", "cannot pass in dynptr at an offset"},
On Mon, 5 Sept 2022 at 16:34, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
From: Roberto Sassu roberto.sassu@huawei.com
Move dynptr type check to is_dynptr_type_expected() from is_dynptr_reg_valid_init(), so that callers can better determine the cause of a negative result (dynamic pointer not valid/initialized, dynamic pointer of the wrong type). It will be useful for example for BTF, to restrict which dynamic pointer types can be passed to kfuncs, as initially only the local type will be supported.
Also, splitting makes the code more readable, since checking the dynamic pointer type is not necessarily related to validity and initialization.
Split the validity/initialization and dynamic pointer type check also in the verifier, and adjust the expected error message in the test (a test for an unexpected dynptr type passed to a helper cannot be added due to missing suitable helpers, but this case has been tested manually).
Cc: Joanne Koong joannelkoong@gmail.com Cc: Kumar Kartikeya Dwivedi memxor@gmail.com Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
I'm not sure the split is _really_ needed, but I don't feel strongly about it and defer to Joanne and others. Otherwise, Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com
From: Roberto Sassu roberto.sassu@huawei.com
Allow dynamic pointers (struct bpf_dynptr_kern *) to be specified as parameters in kfuncs. Also, ensure that dynamic pointers passed as argument are valid and initialized, are a pointer to the stack, and of the type local. More dynamic pointer types can be supported in the future.
To properly detect whether a parameter is of the desired type, introduce the stringify_struct() macro to compare the returned structure name with the desired name. In addition, protect against structure renames, by halting the build with BUILD_BUG_ON(), so that developers have to revisit the code.
To check if a dynamic pointer passed to the kfunc is valid and initialized, and if its type is local, export the existing functions is_dynptr_reg_valid_init() and is_dynptr_type_expected().
Cc: Joanne Koong joannelkoong@gmail.com
Cc: Kumar Kartikeya Dwivedi memxor@gmail.com
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
---
 include/linux/bpf_verifier.h |  5 +++++
 include/linux/btf.h          |  9 +++++++++
 kernel/bpf/btf.c             | 33 +++++++++++++++++++++++++++++++++
 kernel/bpf/verifier.c        | 10 +++++-----
 4 files changed, 52 insertions(+), 5 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 1fdddbf3546b..dd58dfccd025 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -571,6 +571,11 @@ int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state u32 regno); int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno, u32 mem_size); +bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, + struct bpf_reg_state *reg); +bool is_dynptr_type_expected(struct bpf_verifier_env *env, + struct bpf_reg_state *reg, + enum bpf_arg_type arg_type);
/* this lives here instead of in bpf.h because it needs to dereference tgt_prog */ static inline u64 bpf_trampoline_compute_key(const struct bpf_prog *tgt_prog, diff --git a/include/linux/btf.h b/include/linux/btf.h index ad93c2d9cc1c..f546d368ac5d 100644 --- a/include/linux/btf.h +++ b/include/linux/btf.h @@ -52,6 +52,15 @@ #define KF_SLEEPABLE (1 << 5) /* kfunc may sleep */ #define KF_DESTRUCTIVE (1 << 6) /* kfunc performs destructive actions */
+/* + * Return the name of the passed struct, if exists, or halt the build if for + * example the structure gets renamed. In this way, developers have to revisit + * the code using that structure name, and update it accordingly. + */ +#define stringify_struct(x) \ + ({ BUILD_BUG_ON(sizeof(struct x) < 0); \ + __stringify(x); }) + struct btf; struct btf_member; struct btf_type; diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index e49b3b6d48ad..4266bff5fada 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -6362,15 +6362,20 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
if (is_kfunc) { bool arg_mem_size = i + 1 < nargs && is_kfunc_arg_mem_size(btf, &args[i + 1], ®s[regno + 1]); + bool arg_dynptr = btf_type_is_struct(ref_t) && + !strcmp(ref_tname, + stringify_struct(bpf_dynptr_kern));
/* Permit pointer to mem, but only when argument * type is pointer to scalar, or struct composed * (recursively) of scalars. * When arg_mem_size is true, the pointer can be * void *. + * Also permit initialized local dynamic pointers. */ if (!btf_type_is_scalar(ref_t) && !__btf_type_is_scalar_struct(log, btf, ref_t, 0) && + !arg_dynptr && (arg_mem_size ? !btf_type_is_void(ref_t) : 1)) { bpf_log(log, "arg#%d pointer type %s %s must point to %sscalar, or struct with scalar\n", @@ -6378,6 +6383,34 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, return -EINVAL; }
+ if (arg_dynptr) { + if (reg->type != PTR_TO_STACK) { + bpf_log(log, "arg#%d pointer type %s %s not to stack\n", + i, btf_type_str(ref_t), + ref_tname); + return -EINVAL; + } + + if (!is_dynptr_reg_valid_init(env, reg)) { + bpf_log(log, + "arg#%d pointer type %s %s must be valid and initialized\n", + i, btf_type_str(ref_t), + ref_tname); + return -EINVAL; + } + + if (!is_dynptr_type_expected(env, reg, + ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_LOCAL)) { + bpf_log(log, + "arg#%d pointer type %s %s points to unsupported dynamic pointer type\n", + i, btf_type_str(ref_t), + ref_tname); + return -EINVAL; + } + + continue; + } + /* Check for mem, len pair */ if (arg_mem_size) { if (check_kfunc_mem_size_reg(env, ®s[regno + 1], regno + 1)) { diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 10b3c0a81d09..8f02729074c6 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -779,8 +779,8 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_ return true; }
-static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, - struct bpf_reg_state *reg) +bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, + struct bpf_reg_state *reg) { struct bpf_func_state *state = func(env, reg); int spi = get_spi(reg->off); @@ -799,9 +799,9 @@ static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, return true; }
-static bool is_dynptr_type_expected(struct bpf_verifier_env *env, - struct bpf_reg_state *reg, - enum bpf_arg_type arg_type) +bool is_dynptr_type_expected(struct bpf_verifier_env *env, + struct bpf_reg_state *reg, + enum bpf_arg_type arg_type) { struct bpf_func_state *state = func(env, reg); enum bpf_dynptr_type dynptr_type;
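As an illustration of what this enables, a hypothetical kfunc (not part of this series, the name is made up) taking a local dynptr parameter; it also uses the bpf_dynptr_get_size() export from the next patch:

#include <linux/bpf.h>
#include <linux/errno.h>

__diag_push();
__diag_ignore_all("-Wmissing-prototypes",
		  "kfuncs which will be used in BPF programs");

/* Hypothetical example: act on data passed by an eBPF program through a
 * local dynptr.
 */
int bpf_example_measure_data(struct bpf_dynptr_kern *data_ptr)
{
	u32 size = bpf_dynptr_get_size(data_ptr);

	if (!data_ptr->data || !size)
		return -EINVAL;

	/* ... operate on the buffer at data_ptr->data, of length size ... */

	return 0;
}

__diag_pop();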
On Mon, 5 Sept 2022 at 16:34, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
From: Roberto Sassu roberto.sassu@huawei.com
Allow dynamic pointers (struct bpf_dynptr_kern *) to be specified as parameters in kfuncs. Also, ensure that dynamic pointers passed as argument are valid and initialized, are a pointer to the stack, and of the type local. More dynamic pointer types can be supported in the future.
To properly detect whether a parameter is of the desired type, introduce the stringify_struct() macro to compare the returned structure name with the desired name. In addition, protect against structure renames, by halting the build with BUILD_BUG_ON(), so that developers have to revisit the code.
To check if a dynamic pointer passed to the kfunc is valid and initialized, and if its type is local, export the existing functions is_dynptr_reg_valid_init() and is_dynptr_type_expected().
Cc: Joanne Koong joannelkoong@gmail.com Cc: Kumar Kartikeya Dwivedi memxor@gmail.com Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com
From: Roberto Sassu roberto.sassu@huawei.com
Export bpf_dynptr_get_size(), so that kernel code dealing with eBPF dynamic pointers can obtain the real size of data carried by this data structure.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
Reviewed-by: Joanne Koong joannelkoong@gmail.com
Acked-by: KP Singh kpsingh@kernel.org
---
 include/linux/bpf.h  | 1 +
 kernel/bpf/helpers.c | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 9c1674973e03..9dbd7c3f8929 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2585,6 +2585,7 @@ void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data,
 		     enum bpf_dynptr_type type, u32 offset, u32 size);
 void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr);
 int bpf_dynptr_check_size(u32 size);
+u32 bpf_dynptr_get_size(struct bpf_dynptr_kern *ptr);
 
 #ifdef CONFIG_BPF_LSM
 void bpf_cgroup_atype_get(u32 attach_btf_id, int cgroup_atype);
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index fc08035f14ed..824864ac82d1 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1408,7 +1408,7 @@ static void bpf_dynptr_set_type(struct bpf_dynptr_kern *ptr, enum bpf_dynptr_typ
 	ptr->size |= type << DYNPTR_TYPE_SHIFT;
 }
 
-static u32 bpf_dynptr_get_size(struct bpf_dynptr_kern *ptr)
+u32 bpf_dynptr_get_size(struct bpf_dynptr_kern *ptr)
 {
 	return ptr->size & DYNPTR_SIZE_MASK;
 }
On Mon, 5 Sept 2022 at 16:35, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
From: Roberto Sassu roberto.sassu@huawei.com
Export bpf_dynptr_get_size(), so that kernel code dealing with eBPF dynamic pointers can obtain the real size of data carried by this data structure.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com Reviewed-by: Joanne Koong joannelkoong@gmail.com Acked-by: KP Singh kpsingh@kernel.org
Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com
Hi,
On 9/5/2022 10:33 PM, Roberto Sassu wrote:
From: Roberto Sassu roberto.sassu@huawei.com
Export bpf_dynptr_get_size(), so that kernel code dealing with eBPF dynamic pointers can obtain the real size of data carried by this data structure.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com Reviewed-by: Joanne Koong joannelkoong@gmail.com Acked-by: KP Singh kpsingh@kernel.org
include/linux/bpf.h | 1 + kernel/bpf/helpers.c | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-)
SNIP
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index fc08035f14ed..824864ac82d1 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1408,7 +1408,7 @@ static void bpf_dynptr_set_type(struct bpf_dynptr_kern *ptr, enum bpf_dynptr_typ
 	ptr->size |= type << DYNPTR_TYPE_SHIFT;
 }
 
-static u32 bpf_dynptr_get_size(struct bpf_dynptr_kern *ptr)
+u32 bpf_dynptr_get_size(struct bpf_dynptr_kern *ptr)
 {
 	return ptr->size & DYNPTR_SIZE_MASK;
 }
qp-trie also needs it. But considering bpf_dynptr_get_size() is just one line, would moving it and the related definitions into bpf.h be a better choice?
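Something like the following, I suppose (just a sketch of the suggestion, not a patch; DYNPTR_SIZE_MASK currently lives in kernel/bpf/helpers.c and would have to move to include/linux/bpf.h as well):

/* include/linux/bpf.h */
#define DYNPTR_SIZE_MASK	0xFFFFFF

static inline u32 bpf_dynptr_get_size(struct bpf_dynptr_kern *ptr)
{
	return ptr->size & DYNPTR_SIZE_MASK;
}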
From: Roberto Sassu roberto.sassu@huawei.com
In preparation for the patch that introduces the bpf_lookup_user_key() eBPF kfunc, move KEY_LOOKUP_ definitions to include/linux/key.h, to be able to validate the kfunc parameters. Add them to enum key_lookup_flag, so that all the current ones and the ones defined in the future are automatically exported through BTF and available to eBPF programs.
Also, add KEY_LOOKUP_ALL to the enum, to facilitate checking whether a variable contains only defined flags.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
Reviewed-by: KP Singh kpsingh@kernel.org
Acked-by: Jarkko Sakkinen jarkko@kernel.org
---
 include/linux/key.h      | 6 ++++++
 security/keys/internal.h | 2 --
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/linux/key.h b/include/linux/key.h
index 7febc4881363..d84171f90cbd 100644
--- a/include/linux/key.h
+++ b/include/linux/key.h
@@ -88,6 +88,12 @@ enum key_need_perm {
 	KEY_DEFER_PERM_CHECK,	/* Special: permission check is deferred */
 };
 
+enum key_lookup_flag {
+	KEY_LOOKUP_CREATE = 0x01,	/* Create special keyrings if they don't exist */
+	KEY_LOOKUP_PARTIAL = 0x02,	/* Permit partially constructed keys to be found */
+	KEY_LOOKUP_ALL = (KEY_LOOKUP_CREATE | KEY_LOOKUP_PARTIAL),	/* OR of previous flags */
+};
+
 struct seq_file;
 struct user_struct;
 struct signal_struct;
diff --git a/security/keys/internal.h b/security/keys/internal.h
index 9b9cf3b6fcbb..3c1e7122076b 100644
--- a/security/keys/internal.h
+++ b/security/keys/internal.h
@@ -165,8 +165,6 @@ extern struct key *request_key_and_link(struct key_type *type,
 
 extern bool lookup_user_key_possessed(const struct key *key,
 				      const struct key_match_data *match_data);
-#define KEY_LOOKUP_CREATE	0x01
-#define KEY_LOOKUP_PARTIAL	0x02
 
 extern long join_session_keyring(const char *name);
 extern void key_change_session_keyring(struct callback_head *twork);
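For reference, this is the kind of check KEY_LOOKUP_ALL is meant to enable (a condensed sketch; the helper name here is hypothetical, and the kfunc added later in the series open-codes the same test in bpf_lookup_user_key()):

#include <linux/key.h>

/* Reject any flags combination containing bits outside the defined set. */
static bool key_lookup_flags_are_valid(u64 flags)
{
	return !(flags & ~KEY_LOOKUP_ALL);
}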
On Mon, Sep 05, 2022 at 04:33:11PM +0200, Roberto Sassu wrote:
From: Roberto Sassu roberto.sassu@huawei.com
In preparation for the patch that introduces the bpf_lookup_user_key() eBPF kfunc, move KEY_LOOKUP_ definitions to include/linux/key.h, to be able to validate the kfunc parameters. Add them to enum key_lookup_flag, so that all the current ones and the ones defined in the future are automatically exported through BTF and available to eBPF programs.
Also, add KEY_LOOKUP_ALL to the enum, to facilitate checking whether a variable contains only defined flags.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com Reviewed-by: KP Singh kpsingh@kernel.org Acked-by: Jarkko Sakkinen jarkko@kernel.org
You should remove ack if there is any substantial change.
include/linux/key.h | 6 ++++++ security/keys/internal.h | 2 -- 2 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/include/linux/key.h b/include/linux/key.h index 7febc4881363..d84171f90cbd 100644 --- a/include/linux/key.h +++ b/include/linux/key.h @@ -88,6 +88,12 @@ enum key_need_perm { KEY_DEFER_PERM_CHECK, /* Special: permission check is deferred */ }; +enum key_lookup_flag {
- KEY_LOOKUP_CREATE = 0x01, /* Create special keyrings if they don't exist */
- KEY_LOOKUP_PARTIAL = 0x02, /* Permit partially constructed keys to be found */
- KEY_LOOKUP_ALL = (KEY_LOOKUP_CREATE | KEY_LOOKUP_PARTIAL), /* OR of previous flags */
Drop the comments (should be reviewed separately + out of context).
+};
struct seq_file; struct user_struct; struct signal_struct; diff --git a/security/keys/internal.h b/security/keys/internal.h index 9b9cf3b6fcbb..3c1e7122076b 100644 --- a/security/keys/internal.h +++ b/security/keys/internal.h @@ -165,8 +165,6 @@ extern struct key *request_key_and_link(struct key_type *type, extern bool lookup_user_key_possessed(const struct key *key, const struct key_match_data *match_data); -#define KEY_LOOKUP_CREATE 0x01 -#define KEY_LOOKUP_PARTIAL 0x02 extern long join_session_keyring(const char *name); extern void key_change_session_keyring(struct callback_head *twork); -- 2.25.1
BR, Jarkko
On Tue, 2022-09-06 at 00:38 +0300, Jarkko Sakkinen wrote:
On Mon, Sep 05, 2022 at 04:33:11PM +0200, Roberto Sassu wrote:
From: Roberto Sassu roberto.sassu@huawei.com
In preparation for the patch that introduces the bpf_lookup_user_key() eBPF kfunc, move KEY_LOOKUP_ definitions to include/linux/key.h, to be able to validate the kfunc parameters. Add them to enum key_lookup_flag, so that all the current ones and the ones defined in the future are automatically exported through BTF and available to eBPF programs.
Also, add KEY_LOOKUP_ALL to the enum, to facilitate checking whether a variable contains only defined flags.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com Reviewed-by: KP Singh kpsingh@kernel.org Acked-by: Jarkko Sakkinen jarkko@kernel.org
You should remove ack if there is any substantial change.
Yes, sorry. I thought you were fine with the change due to:
https://lore.kernel.org/bpf/YxF4H9MTDj+PnJ+V@kernel.org/
Drop the comments (should be reviewed separately + out of context).
The same style is used for many definitions in include/linux/key.h
No problem removing them; please just let me know where they should go instead. The eBPF maintainers have often asked me to add a description to the code to explain how new definitions should be used.
Thanks
Roberto
On Tue, Sep 06, 2022 at 09:08:23AM +0200, Roberto Sassu wrote:
On Tue, 2022-09-06 at 00:38 +0300, Jarkko Sakkinen wrote:
On Mon, Sep 05, 2022 at 04:33:11PM +0200, Roberto Sassu wrote:
From: Roberto Sassu roberto.sassu@huawei.com
In preparation for the patch that introduces the bpf_lookup_user_key() eBPF kfunc, move KEY_LOOKUP_ definitions to include/linux/key.h, to be able to validate the kfunc parameters. Add them to enum key_lookup_flag, so that all the current ones and the ones defined in the future are automatically exported through BTF and available to eBPF programs.
Also, add KEY_LOOKUP_ALL to the enum, to facilitate checking whether a variable contains only defined flags.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com Reviewed-by: KP Singh kpsingh@kernel.org Acked-by: Jarkko Sakkinen jarkko@kernel.org
You should remove ack if there is any substantial change.
Yes, sorry. I thought you were fine with the change due to:
It was the documentation part, not really the enum change.
BR, Jarkko
On Tue, 2022-09-06 at 13:37 +0300, Jarkko Sakkinen wrote:
On Tue, Sep 06, 2022 at 09:08:23AM +0200, Roberto Sassu wrote:
On Tue, 2022-09-06 at 00:38 +0300, Jarkko Sakkinen wrote:
On Mon, Sep 05, 2022 at 04:33:11PM +0200, Roberto Sassu wrote:
From: Roberto Sassu roberto.sassu@huawei.com
In preparation for the patch that introduces the bpf_lookup_user_key() eBPF kfunc, move KEY_LOOKUP_ definitions to include/linux/key.h, to be able to validate the kfunc parameters. Add them to enum key_lookup_flag, so that all the current ones and the ones defined in the future are automatically exported through BTF and available to eBPF programs.
Also, add KEY_LOOKUP_ALL to the enum, to facilitate checking whether a variable contains only defined flags.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com Reviewed-by: KP Singh kpsingh@kernel.org Acked-by: Jarkko Sakkinen jarkko@kernel.org
You should remove ack if there is any substantial change.
Yes, sorry. I thought you were fine with the change due to:
It was the documentation part, not really the enum change.
Ok, so if I remove the documentation I can keep your ack?
Thanks
Roberto
On Tue, Sep 06, 2022 at 01:04:35PM +0200, Roberto Sassu wrote:
On Tue, 2022-09-06 at 13:37 +0300, Jarkko Sakkinen wrote:
On Tue, Sep 06, 2022 at 09:08:23AM +0200, Roberto Sassu wrote:
On Tue, 2022-09-06 at 00:38 +0300, Jarkko Sakkinen wrote:
On Mon, Sep 05, 2022 at 04:33:11PM +0200, Roberto Sassu wrote:
From: Roberto Sassu roberto.sassu@huawei.com
In preparation for the patch that introduces the bpf_lookup_user_key() eBPF kfunc, move KEY_LOOKUP_ definitions to include/linux/key.h, to be able to validate the kfunc parameters. Add them to enum key_lookup_flag, so that all the current ones and the ones defined in the future are automatically exported through BTF and available to eBPF programs.
Also, add KEY_LOOKUP_ALL to the enum, to facilitate checking whether a variable contains only defined flags.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com Reviewed-by: KP Singh kpsingh@kernel.org Acked-by: Jarkko Sakkinen jarkko@kernel.org
You should remove ack if there is any substantial change.
Yes, sorry. I thought you were fine with the change due to:
It was the documentation part, not really the enum change.
Ok, so if I remove the documentation I can keep your ack?
Can you show the updated patch (e.g. as an attachment)?
BR, Jarkko
From: Roberto Sassu roberto.sassu@huawei.com
In preparation for the patch that introduces the bpf_lookup_user_key() eBPF kfunc, move KEY_LOOKUP_ definitions to include/linux/key.h, to be able to validate the kfunc parameters. Add them to enum key_lookup_flag, so that all the current ones and the ones defined in the future are automatically exported through BTF and available to eBPF programs.
Also, add KEY_LOOKUP_ALL to the enum, with the logical OR of currently defined flags as value, to facilitate checking whether a variable contains only those flags.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
---
 include/linux/key.h      | 6 ++++++
 security/keys/internal.h | 2 --
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/linux/key.h b/include/linux/key.h
index 7febc4881363..d27477faf00d 100644
--- a/include/linux/key.h
+++ b/include/linux/key.h
@@ -88,6 +88,12 @@ enum key_need_perm {
 	KEY_DEFER_PERM_CHECK,	/* Special: permission check is deferred */
 };
 
+enum key_lookup_flag {
+	KEY_LOOKUP_CREATE = 0x01,
+	KEY_LOOKUP_PARTIAL = 0x02,
+	KEY_LOOKUP_ALL = (KEY_LOOKUP_CREATE | KEY_LOOKUP_PARTIAL),
+};
+
 struct seq_file;
 struct user_struct;
 struct signal_struct;
diff --git a/security/keys/internal.h b/security/keys/internal.h
index 9b9cf3b6fcbb..3c1e7122076b 100644
--- a/security/keys/internal.h
+++ b/security/keys/internal.h
@@ -165,8 +165,6 @@ extern struct key *request_key_and_link(struct key_type *type,
 
 extern bool lookup_user_key_possessed(const struct key *key,
 				      const struct key_match_data *match_data);
-#define KEY_LOOKUP_CREATE	0x01
-#define KEY_LOOKUP_PARTIAL	0x02
 
 extern long join_session_keyring(const char *name);
 extern void key_change_session_keyring(struct callback_head *twork);
On Tue, Sep 06, 2022 at 02:15:06PM +0200, Roberto Sassu wrote:
From: Roberto Sassu roberto.sassu@huawei.com
In preparation for the patch that introduces the bpf_lookup_user_key() eBPF kfunc, move KEY_LOOKUP_ definitions to include/linux/key.h, to be able to validate the kfunc parameters. Add them to enum key_lookup_flag, so that all the current ones and the ones defined in the future are automatically exported through BTF and available to eBPF programs.
Also, add KEY_LOOKUP_ALL to the enum, with the logical OR of currently defined flags as value, to facilitate checking whether a variable contains only those flags.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
Thanks,
Acked-by: Jarkko Sakkinen jarkko@kernel.org
BR, Jarkko
On Tue, 2022-09-06 at 15:26 +0300, Jarkko Sakkinen wrote:
On Tue, Sep 06, 2022 at 02:15:06PM +0200, Roberto Sassu wrote:
From: Roberto Sassu roberto.sassu@huawei.com
In preparation for the patch that introduces the bpf_lookup_user_key() eBPF kfunc, move KEY_LOOKUP_ definitions to include/linux/key.h, to be able to validate the kfunc parameters. Add them to enum key_lookup_flag, so that all the current ones and the ones defined in the future are automatically exported through BTF and available to eBPF programs.
Also, add KEY_LOOKUP_ALL to the enum, with the logical OR of currently defined flags as value, to facilitate checking whether a variable contains only those flags.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
Thanks,
Acked-by: Jarkko Sakkinen jarkko@kernel.org
Thanks Jarkko.
KP, ok also for you?
Thanks
Roberto
From: Roberto Sassu roberto.sassu@huawei.com
Add the bpf_lookup_user_key(), bpf_lookup_system_key() and bpf_key_put() kfuncs, to respectively search a key with a given key handle serial number and flags, obtain a key from a pre-determined ID defined in include/linux/verification.h, and cleanup.
Introduce system_keyring_id_check() to validate the keyring ID parameter of bpf_lookup_system_key().
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
---
 include/linux/bpf.h          |   8 +++
 include/linux/verification.h |   8 +++
 kernel/trace/bpf_trace.c     | 135 +++++++++++++++++++++++++++++++++++
 3 files changed, 151 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 9dbd7c3f8929..f3db614aece6 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -2595,4 +2595,12 @@ static inline void bpf_cgroup_atype_get(u32 attach_btf_id, int cgroup_atype) {} static inline void bpf_cgroup_atype_put(int cgroup_atype) {} #endif /* CONFIG_BPF_LSM */
+struct key; + +#ifdef CONFIG_KEYS +struct bpf_key { + struct key *key; + bool has_ref; +}; +#endif /* CONFIG_KEYS */ #endif /* _LINUX_BPF_H */ diff --git a/include/linux/verification.h b/include/linux/verification.h index a655923335ae..f34e50ebcf60 100644 --- a/include/linux/verification.h +++ b/include/linux/verification.h @@ -17,6 +17,14 @@ #define VERIFY_USE_SECONDARY_KEYRING ((struct key *)1UL) #define VERIFY_USE_PLATFORM_KEYRING ((struct key *)2UL)
+static inline int system_keyring_id_check(u64 id) +{ + if (id > (unsigned long)VERIFY_USE_PLATFORM_KEYRING) + return -EINVAL; + + return 0; +} + /* * The use to which an asymmetric key is being put. */ diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 68e5cdd24cef..7a7023704ac2 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -20,6 +20,8 @@ #include <linux/fprobe.h> #include <linux/bsearch.h> #include <linux/sort.h> +#include <linux/key.h> +#include <linux/verification.h>
#include <net/bpf_sk_storage.h>
@@ -1181,6 +1183,139 @@ static const struct bpf_func_proto bpf_get_func_arg_cnt_proto = { .arg1_type = ARG_PTR_TO_CTX, };
+#ifdef CONFIG_KEYS +__diag_push(); +__diag_ignore_all("-Wmissing-prototypes", + "kfuncs which will be used in BPF programs"); + +/** + * bpf_lookup_user_key - lookup a key by its serial + * @serial: key handle serial number + * @flags: lookup-specific flags + * + * Search a key with a given *serial* and the provided *flags*. + * If found, increment the reference count of the key by one, and + * return it in the bpf_key structure. + * + * The bpf_key structure must be passed to bpf_key_put() when done + * with it, so that the key reference count is decremented and the + * bpf_key structure is freed. + * + * Permission checks are deferred to the time the key is used by + * one of the available key-specific kfuncs. + * + * Set *flags* with KEY_LOOKUP_CREATE, to attempt creating a requested + * special keyring (e.g. session keyring), if it doesn't yet exist. + * Set *flags* with KEY_LOOKUP_PARTIAL, to lookup a key without waiting + * for the key construction, and to retrieve uninstantiated keys (keys + * without data attached to them). + * + * Return: a bpf_key pointer with a valid key pointer if the key is found, a + * NULL pointer otherwise. + */ +struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags) +{ + key_ref_t key_ref; + struct bpf_key *bkey; + + if (flags & ~KEY_LOOKUP_ALL) + return NULL; + + /* + * Permission check is deferred until the key is used, as the + * intent of the caller is unknown here. + */ + key_ref = lookup_user_key(serial, flags, KEY_DEFER_PERM_CHECK); + if (IS_ERR(key_ref)) + return NULL; + + bkey = kmalloc(sizeof(*bkey), GFP_ATOMIC); + if (!bkey) { + key_put(key_ref_to_ptr(key_ref)); + return NULL; + } + + bkey->key = key_ref_to_ptr(key_ref); + bkey->has_ref = true; + + return bkey; +} + +/** + * bpf_lookup_system_key - lookup a key by a system-defined ID + * @id: key ID + * + * Obtain a bpf_key structure with a key pointer set to the passed key ID. + * The key pointer is marked as invalid, to prevent bpf_key_put() from + * attempting to decrement the key reference count on that pointer. The key + * pointer set in such way is currently understood only by + * verify_pkcs7_signature(). + * + * Set *id* to one of the values defined in include/linux/verification.h: + * 0 for the primary keyring (immutable keyring of system keys); + * VERIFY_USE_SECONDARY_KEYRING for both the primary and secondary keyring + * (where keys can be added only if they are vouched for by existing keys + * in those keyrings); VERIFY_USE_PLATFORM_KEYRING for the platform + * keyring (primarily used by the integrity subsystem to verify a kexec'ed + * kerned image and, possibly, the initramfs signature). + * + * Return: a bpf_key pointer with an invalid key pointer set from the + * pre-determined ID on success, a NULL pointer otherwise + */ +struct bpf_key *bpf_lookup_system_key(u64 id) +{ + struct bpf_key *bkey; + + if (system_keyring_id_check(id) < 0) + return NULL; + + bkey = kmalloc(sizeof(*bkey), GFP_ATOMIC); + if (!bkey) + return NULL; + + bkey->key = (struct key *)(unsigned long)id; + bkey->has_ref = false; + + return bkey; +} + +/** + * bpf_key_put - decrement key reference count if key is valid and free bpf_key + * @bkey: bpf_key structure + * + * Decrement the reference count of the key inside *bkey*, if the pointer + * is valid, and free *bkey*. 
+ */ +void bpf_key_put(struct bpf_key *bkey) +{ + if (bkey->has_ref) + key_put(bkey->key); + + kfree(bkey); +} + +__diag_pop(); + +BTF_SET8_START(key_sig_kfunc_set) +BTF_ID_FLAGS(func, bpf_lookup_user_key, KF_ACQUIRE | KF_RET_NULL | KF_SLEEPABLE) +BTF_ID_FLAGS(func, bpf_lookup_system_key, KF_ACQUIRE | KF_RET_NULL) +BTF_ID_FLAGS(func, bpf_key_put, KF_RELEASE) +BTF_SET8_END(key_sig_kfunc_set) + +static const struct btf_kfunc_id_set bpf_key_sig_kfunc_set = { + .owner = THIS_MODULE, + .set = &key_sig_kfunc_set, +}; + +static int __init bpf_key_sig_kfuncs_init(void) +{ + return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, + &bpf_key_sig_kfunc_set); +} + +late_initcall(bpf_key_sig_kfuncs_init); +#endif /* CONFIG_KEYS */ + static const struct bpf_func_proto * bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) {
On Mon, 5 Sept 2022 at 16:35, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
From: Roberto Sassu roberto.sassu@huawei.com
Add the bpf_lookup_user_key(), bpf_lookup_system_key() and bpf_key_put() kfuncs, to respectively search a key with a given key handle serial number and flags, obtain a key from a pre-determined ID defined in include/linux/verification.h, and cleanup.
Introduce system_keyring_id_check() to validate the keyring ID parameter of bpf_lookup_system_key().
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
With a small nit below, Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com
include/linux/bpf.h | 8 +++ include/linux/verification.h | 8 +++ kernel/trace/bpf_trace.c | 135 +++++++++++++++++++++++++++++++++++ 3 files changed, 151 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 9dbd7c3f8929..f3db614aece6 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -2595,4 +2595,12 @@ static inline void bpf_cgroup_atype_get(u32 attach_btf_id, int cgroup_atype) {} static inline void bpf_cgroup_atype_put(int cgroup_atype) {} #endif /* CONFIG_BPF_LSM */
+struct key;
+#ifdef CONFIG_KEYS +struct bpf_key {
struct key *key;
bool has_ref;
+}; +#endif /* CONFIG_KEYS */ #endif /* _LINUX_BPF_H */ diff --git a/include/linux/verification.h b/include/linux/verification.h index a655923335ae..f34e50ebcf60 100644 --- a/include/linux/verification.h +++ b/include/linux/verification.h @@ -17,6 +17,14 @@ #define VERIFY_USE_SECONDARY_KEYRING ((struct key *)1UL) #define VERIFY_USE_PLATFORM_KEYRING ((struct key *)2UL)
+static inline int system_keyring_id_check(u64 id) +{
if (id > (unsigned long)VERIFY_USE_PLATFORM_KEYRING)
return -EINVAL;
return 0;
+}
/*
 * The use to which an asymmetric key is being put.
*/ diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 68e5cdd24cef..7a7023704ac2 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -20,6 +20,8 @@ #include <linux/fprobe.h> #include <linux/bsearch.h> #include <linux/sort.h> +#include <linux/key.h> +#include <linux/verification.h>
#include <net/bpf_sk_storage.h>
@@ -1181,6 +1183,139 @@ static const struct bpf_func_proto bpf_get_func_arg_cnt_proto = { .arg1_type = ARG_PTR_TO_CTX, };
+#ifdef CONFIG_KEYS +__diag_push(); +__diag_ignore_all("-Wmissing-prototypes",
"kfuncs which will be used in BPF programs");
+/**
+ * bpf_lookup_user_key - lookup a key by its serial
+ * @serial: key handle serial number
+ * @flags: lookup-specific flags
+ *
+ * Search a key with a given *serial* and the provided *flags*.
+ * If found, increment the reference count of the key by one, and
+ * return it in the bpf_key structure.
+ *
+ * The bpf_key structure must be passed to bpf_key_put() when done
+ * with it, so that the key reference count is decremented and the
+ * bpf_key structure is freed.
+ *
+ * Permission checks are deferred to the time the key is used by
+ * one of the available key-specific kfuncs.
+ *
+ * Set *flags* with KEY_LOOKUP_CREATE, to attempt creating a requested
+ * special keyring (e.g. session keyring), if it doesn't yet exist.
+ * Set *flags* with KEY_LOOKUP_PARTIAL, to lookup a key without waiting
+ * for the key construction, and to retrieve uninstantiated keys (keys
+ * without data attached to them).
+ *
+ * Return: a bpf_key pointer with a valid key pointer if the key is found, a
+ * NULL pointer otherwise.
+ */
+struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags) +{
key_ref_t key_ref;
struct bpf_key *bkey;
if (flags & ~KEY_LOOKUP_ALL)
return NULL;
/*
* Permission check is deferred until the key is used, as the
* intent of the caller is unknown here.
*/
key_ref = lookup_user_key(serial, flags, KEY_DEFER_PERM_CHECK);
if (IS_ERR(key_ref))
return NULL;
bkey = kmalloc(sizeof(*bkey), GFP_ATOMIC);
Since this function (due to lookup_user_key) is sleepable, do we really need GFP_ATOMIC here?
if (!bkey) {
key_put(key_ref_to_ptr(key_ref));
return NULL;
}
bkey->key = key_ref_to_ptr(key_ref);
bkey->has_ref = true;
return bkey;
+}
+/**
+ * bpf_lookup_system_key - lookup a key by a system-defined ID
+ * @id: key ID
+ *
+ * Obtain a bpf_key structure with a key pointer set to the passed key ID.
+ * The key pointer is marked as invalid, to prevent bpf_key_put() from
+ * attempting to decrement the key reference count on that pointer. The key
+ * pointer set in such way is currently understood only by
+ * verify_pkcs7_signature().
+ *
+ * Set *id* to one of the values defined in include/linux/verification.h:
+ * 0 for the primary keyring (immutable keyring of system keys);
+ * VERIFY_USE_SECONDARY_KEYRING for both the primary and secondary keyring
+ * (where keys can be added only if they are vouched for by existing keys
+ * in those keyrings); VERIFY_USE_PLATFORM_KEYRING for the platform
+ * keyring (primarily used by the integrity subsystem to verify a kexec'ed
+ * kernel image and, possibly, the initramfs signature).
+ *
+ * Return: a bpf_key pointer with an invalid key pointer set from the
+ * pre-determined ID on success, a NULL pointer otherwise
+ */
+struct bpf_key *bpf_lookup_system_key(u64 id) +{
struct bpf_key *bkey;
if (system_keyring_id_check(id) < 0)
return NULL;
bkey = kmalloc(sizeof(*bkey), GFP_ATOMIC);
if (!bkey)
return NULL;
bkey->key = (struct key *)(unsigned long)id;
bkey->has_ref = false;
return bkey;
+}
+/**
+ * bpf_key_put - decrement key reference count if key is valid and free bpf_key
+ * @bkey: bpf_key structure
+ *
+ * Decrement the reference count of the key inside *bkey*, if the pointer
+ * is valid, and free *bkey*.
+ */
+void bpf_key_put(struct bpf_key *bkey) +{
if (bkey->has_ref)
key_put(bkey->key);
kfree(bkey);
+}
+__diag_pop();
+BTF_SET8_START(key_sig_kfunc_set) +BTF_ID_FLAGS(func, bpf_lookup_user_key, KF_ACQUIRE | KF_RET_NULL | KF_SLEEPABLE) +BTF_ID_FLAGS(func, bpf_lookup_system_key, KF_ACQUIRE | KF_RET_NULL) +BTF_ID_FLAGS(func, bpf_key_put, KF_RELEASE) +BTF_SET8_END(key_sig_kfunc_set)
+static const struct btf_kfunc_id_set bpf_key_sig_kfunc_set = {
.owner = THIS_MODULE,
.set = &key_sig_kfunc_set,
+};
+static int __init bpf_key_sig_kfuncs_init(void) +{
return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
&bpf_key_sig_kfunc_set);
+}
+late_initcall(bpf_key_sig_kfuncs_init); +#endif /* CONFIG_KEYS */
static const struct bpf_func_proto * bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) { -- 2.25.1
On Tue, 2022-09-06 at 04:43 +0200, Kumar Kartikeya Dwivedi wrote:
On Mon, 5 Sept 2022 at 16:35, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
From: Roberto Sassu roberto.sassu@huawei.com
Add the bpf_lookup_user_key(), bpf_lookup_system_key() and bpf_key_put() kfuncs, to respectively search a key with a given key handle serial number and flags, obtain a key from a pre-determined ID defined in include/linux/verification.h, and cleanup.
Introduce system_keyring_id_check() to validate the keyring ID parameter of bpf_lookup_system_key().
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
With a small nit below, Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com
include/linux/bpf.h | 8 +++ include/linux/verification.h | 8 +++ kernel/trace/bpf_trace.c | 135 +++++++++++++++++++++++++++++++++++ 3 files changed, 151 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 9dbd7c3f8929..f3db614aece6 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -2595,4 +2595,12 @@ static inline void bpf_cgroup_atype_get(u32 attach_btf_id, int cgroup_atype) {} static inline void bpf_cgroup_atype_put(int cgroup_atype) {} #endif /* CONFIG_BPF_LSM */
+struct key;
+#ifdef CONFIG_KEYS +struct bpf_key {
struct key *key;
bool has_ref;
+}; +#endif /* CONFIG_KEYS */ #endif /* _LINUX_BPF_H */ diff --git a/include/linux/verification.h b/include/linux/verification.h index a655923335ae..f34e50ebcf60 100644 --- a/include/linux/verification.h +++ b/include/linux/verification.h @@ -17,6 +17,14 @@ #define VERIFY_USE_SECONDARY_KEYRING ((struct key *)1UL) #define VERIFY_USE_PLATFORM_KEYRING ((struct key *)2UL)
+static inline int system_keyring_id_check(u64 id) +{
if (id > (unsigned long)VERIFY_USE_PLATFORM_KEYRING)
return -EINVAL;
return 0;
+}
/*
 * The use to which an asymmetric key is being put.
*/ diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 68e5cdd24cef..7a7023704ac2 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -20,6 +20,8 @@ #include <linux/fprobe.h> #include <linux/bsearch.h> #include <linux/sort.h> +#include <linux/key.h> +#include <linux/verification.h>
#include <net/bpf_sk_storage.h>
@@ -1181,6 +1183,139 @@ static const struct bpf_func_proto bpf_get_func_arg_cnt_proto = { .arg1_type = ARG_PTR_TO_CTX, };
+#ifdef CONFIG_KEYS +__diag_push(); +__diag_ignore_all("-Wmissing-prototypes",
"kfuncs which will be used in BPF programs");
+/**
+ * bpf_lookup_user_key - lookup a key by its serial
+ * @serial: key handle serial number
+ * @flags: lookup-specific flags
+ *
+ * Search a key with a given *serial* and the provided *flags*.
+ * If found, increment the reference count of the key by one, and
+ * return it in the bpf_key structure.
+ *
+ * The bpf_key structure must be passed to bpf_key_put() when done
+ * with it, so that the key reference count is decremented and the
+ * bpf_key structure is freed.
+ *
+ * Permission checks are deferred to the time the key is used by
+ * one of the available key-specific kfuncs.
+ *
+ * Set *flags* with KEY_LOOKUP_CREATE, to attempt creating a requested
+ * special keyring (e.g. session keyring), if it doesn't yet exist.
+ * Set *flags* with KEY_LOOKUP_PARTIAL, to lookup a key without waiting
+ * for the key construction, and to retrieve uninstantiated keys (keys
+ * without data attached to them).
+ *
+ * Return: a bpf_key pointer with a valid key pointer if the key is found, a
+ * NULL pointer otherwise.
+ */
+struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags) +{
key_ref_t key_ref;
struct bpf_key *bkey;
if (flags & ~KEY_LOOKUP_ALL)
return NULL;
/*
 * Permission check is deferred until the key is used, as the
 * intent of the caller is unknown here.
 */
key_ref = lookup_user_key(serial, flags, KEY_DEFER_PERM_CHECK);
if (IS_ERR(key_ref))
return NULL;
bkey = kmalloc(sizeof(*bkey), GFP_ATOMIC);
Since this function (due to lookup_user_key) is sleepable, do we really need GFP_ATOMIC here?
Daniel suggested it for bpf_lookup_system_key(), so that the kfunc does not have to be sleepable. For symmetry, I did the same to bpf_lookup_user_key(). Will switch back to GFP_KERNEL.
Thanks
Roberto
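As a rough illustration of the change agreed above (my sketch, not the final patch): bpf_lookup_user_key() is already registered with KF_SLEEPABLE, and lookup_user_key() can sleep, so the wrapper allocation can simply use GFP_KERNEL:

struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags)
{
	key_ref_t key_ref;
	struct bpf_key *bkey;

	if (flags & ~KEY_LOOKUP_ALL)
		return NULL;

	/*
	 * Permission check is deferred until the key is used, as the
	 * intent of the caller is unknown here.
	 */
	key_ref = lookup_user_key(serial, flags, KEY_DEFER_PERM_CHECK);
	if (IS_ERR(key_ref))
		return NULL;

	/* Sleepable kfunc: a regular allocation is sufficient. */
	bkey = kmalloc(sizeof(*bkey), GFP_KERNEL);
	if (!bkey) {
		key_put(key_ref_to_ptr(key_ref));
		return NULL;
	}

	bkey->key = key_ref_to_ptr(key_ref);
	bkey->has_ref = true;

	return bkey;
}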
On Tue, Sep 6, 2022 at 1:01 AM Roberto Sassu roberto.sassu@huaweicloud.com wrote:
+struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags) +{
key_ref_t key_ref;
struct bpf_key *bkey;
if (flags & ~KEY_LOOKUP_ALL)
return NULL;
/*
 * Permission check is deferred until the key is used, as the
 * intent of the caller is unknown here.
 */
key_ref = lookup_user_key(serial, flags, KEY_DEFER_PERM_CHECK);
if (IS_ERR(key_ref))
return NULL;
bkey = kmalloc(sizeof(*bkey), GFP_ATOMIC);
Since this function (due to lookup_user_key) is sleepable, do we really need GFP_ATOMIC here?
Daniel suggested it for bpf_lookup_system_key(), so that the kfunc does not have to be sleepable.
Hold on. It has to be sleepable. Just take a look at what lookup_user_key is doing inside.
For symmetry, I did the same to bpf_lookup_user_key(). Will switch back to GFP_KERNEL.
Thanks
Roberto
On Tue, 2022-09-06 at 11:45 -0700, Alexei Starovoitov wrote:
On Tue, Sep 6, 2022 at 1:01 AM Roberto Sassu roberto.sassu@huaweicloud.com wrote:
+struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags) +{
key_ref_t key_ref;
struct bpf_key *bkey;
if (flags & ~KEY_LOOKUP_ALL)
return NULL;
/*
 * Permission check is deferred until the key is used, as the
 * intent of the caller is unknown here.
 */
key_ref = lookup_user_key(serial, flags, KEY_DEFER_PERM_CHECK);
if (IS_ERR(key_ref))
return NULL;
bkey = kmalloc(sizeof(*bkey), GFP_ATOMIC);
Since this function (due to lookup_user_key) is sleepable, do we really need GFP_ATOMIC here?
Daniel suggested it for bpf_lookup_system_key(), so that the kfunc does not have to be sleepable.
Hold on. It has to be sleepable. Just take a look at what lookup_user_key is doing inside.
https://lore.kernel.org/bpf/2b1d62ad-af4b-4694-ecc8-639fbd821a05@iogearbox.n...
Roberto
From: Roberto Sassu roberto.sassu@huawei.com
Add the bpf_verify_pkcs7_signature() kfunc, to give eBPF security modules the ability to check the validity of a signature against supplied data, by using user-provided or system-provided keys as trust anchor.
The new kfunc makes it possible to enforce mandatory policies, as eBPF programs might be allowed to make security decisions only based on data sources the system administrator approves.
The caller should provide the data to be verified and the signature as eBPF dynamic pointers (to minimize the number of parameters) and a bpf_key structure containing a reference to the keyring with keys trusted for signature verification, obtained from bpf_lookup_user_key() or bpf_lookup_system_key().
For bpf_key structures obtained from the former lookup function, bpf_verify_pkcs7_signature() completes the permission check deferred by that function by calling key_validate(). key_task_permission() is already called by the PKCS#7 code.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com Acked-by: KP Singh kpsingh@kernel.org --- kernel/trace/bpf_trace.c | 45 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 45 insertions(+)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 7a7023704ac2..8e2c026b0a58 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -1294,12 +1294,57 @@ void bpf_key_put(struct bpf_key *bkey) kfree(bkey); }
+#ifdef CONFIG_SYSTEM_DATA_VERIFICATION +/** + * bpf_verify_pkcs7_signature - verify a PKCS#7 signature + * @data_ptr: data to verify + * @sig_ptr: signature of the data + * @trusted_keyring: keyring with keys trusted for signature verification + * + * Verify the PKCS#7 signature *sig_ptr* against the supplied *data_ptr* + * with keys in a keyring referenced by *trusted_keyring*. + * + * Return: 0 on success, a negative value on error. + */ +int bpf_verify_pkcs7_signature(struct bpf_dynptr_kern *data_ptr, + struct bpf_dynptr_kern *sig_ptr, + struct bpf_key *trusted_keyring) +{ + int ret; + + if (trusted_keyring->has_ref) { + /* + * Do the permission check deferred in bpf_lookup_user_key(). + * See bpf_lookup_user_key() for more details. + * + * A call to key_task_permission() here would be redundant, as + * it is already done by keyring_search() called by + * find_asymmetric_key(). + */ + ret = key_validate(trusted_keyring->key); + if (ret < 0) + return ret; + } + + return verify_pkcs7_signature(data_ptr->data, + bpf_dynptr_get_size(data_ptr), + sig_ptr->data, + bpf_dynptr_get_size(sig_ptr), + trusted_keyring->key, + VERIFYING_UNSPECIFIED_SIGNATURE, NULL, + NULL); +} +#endif /* CONFIG_SYSTEM_DATA_VERIFICATION */ + __diag_pop();
BTF_SET8_START(key_sig_kfunc_set) BTF_ID_FLAGS(func, bpf_lookup_user_key, KF_ACQUIRE | KF_RET_NULL | KF_SLEEPABLE) BTF_ID_FLAGS(func, bpf_lookup_system_key, KF_ACQUIRE | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_key_put, KF_RELEASE) +#ifdef CONFIG_SYSTEM_DATA_VERIFICATION +BTF_ID_FLAGS(func, bpf_verify_pkcs7_signature, KF_SLEEPABLE) +#endif BTF_SET8_END(key_sig_kfunc_set)
static const struct btf_kfunc_id_set bpf_key_sig_kfunc_set = {
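To make the intended call pattern concrete, here is a hedged BPF-side sketch (illustration only, not part of the posted series; the program name, buffer names and fixed sizes are made up). It assumes the kfuncs are declared as __ksym externs, as the selftests later in this series do; a real program would pass the exact data and signature lengths rather than full buffer sizes:

// SPDX-License-Identifier: GPL-2.0
/* Illustrative only: a sleepable LSM program that verifies a PKCS#7
 * signature over a data buffer against the primary system keyring.
 */
#include "vmlinux.h"
#include <errno.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

__u8 data[4096];	/* data to authenticate (filled by user space) */
__u8 sig[1024];		/* PKCS#7 signature of the data */

extern struct bpf_key *bpf_lookup_system_key(__u64 id) __ksym;
extern void bpf_key_put(struct bpf_key *key) __ksym;
extern int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_ptr,
				      struct bpf_dynptr *sig_ptr,
				      struct bpf_key *trusted_keyring) __ksym;

SEC("lsm.s/bpf")
int BPF_PROG(check_sig, int cmd, union bpf_attr *attr, unsigned int size)
{
	struct bpf_dynptr data_ptr, sig_ptr;
	struct bpf_key *trusted_keyring;
	int ret;

	/* Wrap the buffers in dynptrs, as required by the kfunc. */
	bpf_dynptr_from_mem(data, sizeof(data), 0, &data_ptr);
	bpf_dynptr_from_mem(sig, sizeof(sig), 0, &sig_ptr);

	/* 0 selects the primary (built-in trusted) keyring. */
	trusted_keyring = bpf_lookup_system_key(0);
	if (!trusted_keyring)
		return -ENOENT;

	ret = bpf_verify_pkcs7_signature(&data_ptr, &sig_ptr, trusted_keyring);

	bpf_key_put(trusted_keyring);

	return ret;
}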
On Mon, 5 Sept 2022 at 16:35, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
From: Roberto Sassu roberto.sassu@huawei.com
Add the bpf_verify_pkcs7_signature() kfunc, to give eBPF security modules the ability to check the validity of a signature against supplied data, by using user-provided or system-provided keys as trust anchor.
The new kfunc makes it possible to enforce mandatory policies, as eBPF programs might be allowed to make security decisions only based on data sources the system administrator approves.
The caller should provide the data to be verified and the signature as eBPF dynamic pointers (to minimize the number of parameters) and a bpf_key structure containing a reference to the keyring with keys trusted for signature verification, obtained from bpf_lookup_user_key() or bpf_lookup_system_key().
For bpf_key structures obtained from the former lookup function, bpf_verify_pkcs7_signature() completes the permission check deferred by that function by calling key_validate(). key_task_permission() is already called by the PKCS#7 code.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com Acked-by: KP Singh kpsingh@kernel.org
kernel/trace/bpf_trace.c | 45 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 45 insertions(+)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 7a7023704ac2..8e2c026b0a58 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -1294,12 +1294,57 @@ void bpf_key_put(struct bpf_key *bkey) kfree(bkey); }
+#ifdef CONFIG_SYSTEM_DATA_VERIFICATION
+/**
+ * bpf_verify_pkcs7_signature - verify a PKCS#7 signature
+ * @data_ptr: data to verify
+ * @sig_ptr: signature of the data
+ * @trusted_keyring: keyring with keys trusted for signature verification
+ *
+ * Verify the PKCS#7 signature *sig_ptr* against the supplied *data_ptr*
+ * with keys in a keyring referenced by *trusted_keyring*.
+ *
+ * Return: 0 on success, a negative value on error.
+ */
+int bpf_verify_pkcs7_signature(struct bpf_dynptr_kern *data_ptr,
struct bpf_dynptr_kern *sig_ptr,
struct bpf_key *trusted_keyring)
+{
int ret;
if (trusted_keyring->has_ref) {
/*
* Do the permission check deferred in bpf_lookup_user_key().
* See bpf_lookup_user_key() for more details.
*
* A call to key_task_permission() here would be redundant, as
* it is already done by keyring_search() called by
* find_asymmetric_key().
*/
ret = key_validate(trusted_keyring->key);
if (ret < 0)
return ret;
}
return verify_pkcs7_signature(data_ptr->data,
bpf_dynptr_get_size(data_ptr),
sig_ptr->data,
bpf_dynptr_get_size(sig_ptr),
Missing check for data_ptr->data == NULL before making this call? Same for sig_ptr.
trusted_keyring->key,
VERIFYING_UNSPECIFIED_SIGNATURE, NULL,
NULL);
+} +#endif /* CONFIG_SYSTEM_DATA_VERIFICATION */
__diag_pop();
BTF_SET8_START(key_sig_kfunc_set) BTF_ID_FLAGS(func, bpf_lookup_user_key, KF_ACQUIRE | KF_RET_NULL | KF_SLEEPABLE) BTF_ID_FLAGS(func, bpf_lookup_system_key, KF_ACQUIRE | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_key_put, KF_RELEASE) +#ifdef CONFIG_SYSTEM_DATA_VERIFICATION +BTF_ID_FLAGS(func, bpf_verify_pkcs7_signature, KF_SLEEPABLE) +#endif BTF_SET8_END(key_sig_kfunc_set)
static const struct btf_kfunc_id_set bpf_key_sig_kfunc_set = {
2.25.1
On Tue, 2022-09-06 at 04:57 +0200, Kumar Kartikeya Dwivedi wrote:
On Mon, 5 Sept 2022 at 16:35, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
From: Roberto Sassu roberto.sassu@huawei.com
Add the bpf_verify_pkcs7_signature() kfunc, to give eBPF security modules the ability to check the validity of a signature against supplied data, by using user-provided or system-provided keys as trust anchor.
The new kfunc makes it possible to enforce mandatory policies, as eBPF programs might be allowed to make security decisions only based on data sources the system administrator approves.
The caller should provide the data to be verified and the signature as eBPF dynamic pointers (to minimize the number of parameters) and a bpf_key structure containing a reference to the keyring with keys trusted for signature verification, obtained from bpf_lookup_user_key() or bpf_lookup_system_key().
For bpf_key structures obtained from the former lookup function, bpf_verify_pkcs7_signature() completes the permission check deferred by that function by calling key_validate(). key_task_permission() is already called by the PKCS#7 code.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com Acked-by: KP Singh kpsingh@kernel.org
kernel/trace/bpf_trace.c | 45 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 45 insertions(+)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 7a7023704ac2..8e2c026b0a58 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -1294,12 +1294,57 @@ void bpf_key_put(struct bpf_key *bkey) kfree(bkey); }
+#ifdef CONFIG_SYSTEM_DATA_VERIFICATION
+/**
+ * bpf_verify_pkcs7_signature - verify a PKCS#7 signature
+ * @data_ptr: data to verify
+ * @sig_ptr: signature of the data
+ * @trusted_keyring: keyring with keys trusted for signature verification
+ *
+ * Verify the PKCS#7 signature *sig_ptr* against the supplied *data_ptr*
+ * with keys in a keyring referenced by *trusted_keyring*.
+ *
+ * Return: 0 on success, a negative value on error.
+ */
+int bpf_verify_pkcs7_signature(struct bpf_dynptr_kern *data_ptr,
struct bpf_dynptr_kern *sig_ptr,
struct bpf_key *trusted_keyring)
+{
int ret;
if (trusted_keyring->has_ref) {
/*
 * Do the permission check deferred in bpf_lookup_user_key().
 * See bpf_lookup_user_key() for more details.
 *
 * A call to key_task_permission() here would be redundant, as
 * it is already done by keyring_search() called by
 * find_asymmetric_key().
 */
ret = key_validate(trusted_keyring->key);
if (ret < 0)
return ret;
}
return verify_pkcs7_signature(data_ptr->data,
			      bpf_dynptr_get_size(data_ptr),
			      sig_ptr->data,
			      bpf_dynptr_get_size(sig_ptr),
Missing check for data_ptr->data == NULL before making this call? Same for sig_ptr.
Patch 3 requires the dynptrs to be initialized. Isn't enough?
Thanks
Roberto
On Tue, 6 Sept 2022 at 10:08, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
On Tue, 2022-09-06 at 04:57 +0200, Kumar Kartikeya Dwivedi wrote:
On Mon, 5 Sept 2022 at 16:35, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
From: Roberto Sassu roberto.sassu@huawei.com
Add the bpf_verify_pkcs7_signature() kfunc, to give eBPF security modules the ability to check the validity of a signature against supplied data, by using user-provided or system-provided keys as trust anchor.
The new kfunc makes it possible to enforce mandatory policies, as eBPF programs might be allowed to make security decisions only based on data sources the system administrator approves.
The caller should provide the data to be verified and the signature as eBPF dynamic pointers (to minimize the number of parameters) and a bpf_key structure containing a reference to the keyring with keys trusted for signature verification, obtained from bpf_lookup_user_key() or bpf_lookup_system_key().
For bpf_key structures obtained from the former lookup function, bpf_verify_pkcs7_signature() completes the permission check deferred by that function by calling key_validate(). key_task_permission() is already called by the PKCS#7 code.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com Acked-by: KP Singh kpsingh@kernel.org
kernel/trace/bpf_trace.c | 45 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 45 insertions(+)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 7a7023704ac2..8e2c026b0a58 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -1294,12 +1294,57 @@ void bpf_key_put(struct bpf_key *bkey) kfree(bkey); }
+#ifdef CONFIG_SYSTEM_DATA_VERIFICATION
+/**
+ * bpf_verify_pkcs7_signature - verify a PKCS#7 signature
+ * @data_ptr: data to verify
+ * @sig_ptr: signature of the data
+ * @trusted_keyring: keyring with keys trusted for signature verification
+ *
+ * Verify the PKCS#7 signature *sig_ptr* against the supplied *data_ptr*
+ * with keys in a keyring referenced by *trusted_keyring*.
+ *
+ * Return: 0 on success, a negative value on error.
+ */
+int bpf_verify_pkcs7_signature(struct bpf_dynptr_kern *data_ptr,
struct bpf_dynptr_kern *sig_ptr,
struct bpf_key *trusted_keyring)
+{
int ret;
if (trusted_keyring->has_ref) {
/*
 * Do the permission check deferred in bpf_lookup_user_key().
 * See bpf_lookup_user_key() for more details.
 *
 * A call to key_task_permission() here would be redundant, as
 * it is already done by keyring_search() called by
 * find_asymmetric_key().
 */
ret = key_validate(trusted_keyring->key);
if (ret < 0)
return ret;
}
return verify_pkcs7_signature(data_ptr->data,
			      bpf_dynptr_get_size(data_ptr),
			      sig_ptr->data,
			      bpf_dynptr_get_size(sig_ptr),
Missing check for data_ptr->data == NULL before making this call? Same for sig_ptr.
Patch 3 requires the dynptrs to be initialized. Isn't enough?
No, it seems even initialized dynptr can be NULL at runtime. Look at both ringbuf_submit_dynptr and ringbuf_discard_dynptr. The verifier won't know after ringbuf_reserve_dynptr whether it set it to NULL or some valid pointer.
dynptr_init is basically that stack slot is now STACK_DYNPTR, it says nothing more about the dynptr.
As far as testing this goes, you can pass invalid parameters to ringbuf_reserve_dynptr to have it set to NULL, then make sure your helper returns an error at runtime for it.
Thanks
Roberto
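A hedged sketch of the case described above (illustration only; the map and program names are made up): a dynptr the verifier treats as initialized can still carry a NULL data pointer at runtime when the ring buffer reservation fails:

// SPDX-License-Identifier: GPL-2.0
/* Illustrative only: asking bpf_ringbuf_reserve_dynptr() for more bytes
 * than the ring buffer can hold makes the reservation fail; the dynptr
 * slot is still initialized for the verifier, but its data pointer is
 * NULL at runtime.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 4096);
} ringbuf SEC(".maps");

SEC("lsm.s/bpf")
int BPF_PROG(null_dynptr, int cmd, union bpf_attr *attr, unsigned int size)
{
	struct bpf_dynptr ptr;

	/* Over-sized reservation: fails at runtime, data stays NULL. */
	bpf_ringbuf_reserve_dynptr(&ringbuf, 4096 * 2, 0, &ptr);

	/* Any kfunc consuming &ptr here must handle a NULL data pointer. */

	/* The verifier still requires the dynptr to be released. */
	bpf_ringbuf_discard_dynptr(&ptr, 0);

	return 0;
}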
On Wed, 2022-09-07 at 04:28 +0200, Kumar Kartikeya Dwivedi wrote:
On Tue, 6 Sept 2022 at 10:08, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
On Tue, 2022-09-06 at 04:57 +0200, Kumar Kartikeya Dwivedi wrote:
On Mon, 5 Sept 2022 at 16:35, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
From: Roberto Sassu roberto.sassu@huawei.com
Add the bpf_verify_pkcs7_signature() kfunc, to give eBPF security modules the ability to check the validity of a signature against supplied data, by using user-provided or system-provided keys as trust anchor.
The new kfunc makes it possible to enforce mandatory policies, as eBPF programs might be allowed to make security decisions only based on data sources the system administrator approves.
The caller should provide the data to be verified and the signature as eBPF dynamic pointers (to minimize the number of parameters) and a bpf_key structure containing a reference to the keyring with keys trusted for signature verification, obtained from bpf_lookup_user_key() or bpf_lookup_system_key().
For bpf_key structures obtained from the former lookup function, bpf_verify_pkcs7_signature() completes the permission check deferred by that function by calling key_validate(). key_task_permission() is already called by the PKCS#7 code.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com Acked-by: KP Singh kpsingh@kernel.org
kernel/trace/bpf_trace.c | 45 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 45 insertions(+)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 7a7023704ac2..8e2c026b0a58 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -1294,12 +1294,57 @@ void bpf_key_put(struct bpf_key *bkey) kfree(bkey); }
+#ifdef CONFIG_SYSTEM_DATA_VERIFICATION
+/**
+ * bpf_verify_pkcs7_signature - verify a PKCS#7 signature
+ * @data_ptr: data to verify
+ * @sig_ptr: signature of the data
+ * @trusted_keyring: keyring with keys trusted for signature verification
+ *
+ * Verify the PKCS#7 signature *sig_ptr* against the supplied *data_ptr*
+ * with keys in a keyring referenced by *trusted_keyring*.
+ *
+ * Return: 0 on success, a negative value on error.
+ */
+int bpf_verify_pkcs7_signature(struct bpf_dynptr_kern *data_ptr,
struct bpf_dynptr_kern *sig_ptr,
struct bpf_key *trusted_keyring)
+{
int ret;
if (trusted_keyring->has_ref) {
/*
 * Do the permission check deferred in bpf_lookup_user_key().
 * See bpf_lookup_user_key() for more details.
 *
 * A call to key_task_permission() here would be redundant, as
 * it is already done by keyring_search() called by
 * find_asymmetric_key().
 */
ret = key_validate(trusted_keyring->key);
if (ret < 0)
return ret;
}
return verify_pkcs7_signature(data_ptr->data,
			      bpf_dynptr_get_size(data_ptr),
			      sig_ptr->data,
			      bpf_dynptr_get_size(sig_ptr),
Missing check for data_ptr->data == NULL before making this call? Same for sig_ptr.
Patch 3 requires the dynptrs to be initialized. Isn't enough?
No, it seems even initialized dynptr can be NULL at runtime. Look at both ringbuf_submit_dynptr and ringbuf_discard_dynptr. The verifier won't know after ringbuf_reserve_dynptr whether it set it to NULL or some valid pointer.
dynptr_init is basically that stack slot is now STACK_DYNPTR, it says nothing more about the dynptr.
As far as testing this goes, you can pass invalid parameters to ringbuf_reserve_dynptr to have it set to NULL, then make sure your helper returns an error at runtime for it.
I see, thanks.
I did a quick test. Pass 1 as flags argument to bpf_dynptr_from_mem() (not supported), and see how bpf_verify_pkcs7_signature() handles it.
Everything seems good, the ASN1 parser called by pkcs7_parse_message() correctly handles zero length.
So, I will add just this test, right?
Thanks
Roberto
On Wed, 7 Sept 2022 at 14:20, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
On Wed, 2022-09-07 at 04:28 +0200, Kumar Kartikeya Dwivedi wrote:
On Tue, 6 Sept 2022 at 10:08, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
On Tue, 2022-09-06 at 04:57 +0200, Kumar Kartikeya Dwivedi wrote:
On Mon, 5 Sept 2022 at 16:35, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
From: Roberto Sassu roberto.sassu@huawei.com
Add the bpf_verify_pkcs7_signature() kfunc, to give eBPF security modules the ability to check the validity of a signature against supplied data, by using user-provided or system-provided keys as trust anchor.
The new kfunc makes it possible to enforce mandatory policies, as eBPF programs might be allowed to make security decisions only based on data sources the system administrator approves.
The caller should provide the data to be verified and the signature as eBPF dynamic pointers (to minimize the number of parameters) and a bpf_key structure containing a reference to the keyring with keys trusted for signature verification, obtained from bpf_lookup_user_key() or bpf_lookup_system_key().
For bpf_key structures obtained from the former lookup function, bpf_verify_pkcs7_signature() completes the permission check deferred by that function by calling key_validate(). key_task_permission() is already called by the PKCS#7 code.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com Acked-by: KP Singh kpsingh@kernel.org
kernel/trace/bpf_trace.c | 45 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 45 insertions(+)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 7a7023704ac2..8e2c026b0a58 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -1294,12 +1294,57 @@ void bpf_key_put(struct bpf_key *bkey) kfree(bkey); }
+#ifdef CONFIG_SYSTEM_DATA_VERIFICATION
+/**
+ * bpf_verify_pkcs7_signature - verify a PKCS#7 signature
+ * @data_ptr: data to verify
+ * @sig_ptr: signature of the data
+ * @trusted_keyring: keyring with keys trusted for signature verification
+ *
+ * Verify the PKCS#7 signature *sig_ptr* against the supplied *data_ptr*
+ * with keys in a keyring referenced by *trusted_keyring*.
+ *
+ * Return: 0 on success, a negative value on error.
+ */
+int bpf_verify_pkcs7_signature(struct bpf_dynptr_kern *data_ptr,
struct bpf_dynptr_kern *sig_ptr,
struct bpf_key *trusted_keyring)
+{
int ret;
if (trusted_keyring->has_ref) {
/*
 * Do the permission check deferred in bpf_lookup_user_key().
 * See bpf_lookup_user_key() for more details.
 *
 * A call to key_task_permission() here would be redundant, as
 * it is already done by keyring_search() called by
 * find_asymmetric_key().
 */
ret = key_validate(trusted_keyring->key);
if (ret < 0)
return ret;
}
return verify_pkcs7_signature(data_ptr->data,
			      bpf_dynptr_get_size(data_ptr),
			      sig_ptr->data,
			      bpf_dynptr_get_size(sig_ptr),
Missing check for data_ptr->data == NULL before making this call? Same for sig_ptr.
Patch 3 requires the dynptrs to be initialized. Isn't enough?
No, it seems even initialized dynptr can be NULL at runtime. Look at both ringbuf_submit_dynptr and ringbuf_discard_dynptr. The verifier won't know after ringbuf_reserve_dynptr whether it set it to NULL or some valid pointer.
dynptr_init is basically that stack slot is now STACK_DYNPTR, it says nothing more about the dynptr.
As far as testing this goes, you can pass invalid parameters to ringbuf_reserve_dynptr to have it set to NULL, then make sure your helper returns an error at runtime for it.
I see, thanks.
I did a quick test. Pass 1 as flags argument to bpf_dynptr_from_mem() (not supported), and see how bpf_verify_pkcs7_signature() handles it.
Everything seems good, the ASN1 parser called by pkcs7_parse_message() correctly handles zero length.
So, I will add just this test, right?
Yeah, if it handles it correctly, just adding a test to make sure it stays that way in the future would be fine.
From: Roberto Sassu roberto.sassu@huawei.com
Since the eBPF CI does not support kernel modules, change the kernel config to compile everything as built-in.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com Acked-by: Daniel Müller deso@posteo.net --- tools/testing/selftests/bpf/config | 26 +++++++++++------------ tools/testing/selftests/bpf/config.x86_64 | 2 +- 2 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config index 3fc46f9cfb22..0fdd11e6b742 100644 --- a/tools/testing/selftests/bpf/config +++ b/tools/testing/selftests/bpf/config @@ -7,9 +7,9 @@ CONFIG_BPF_LSM=y CONFIG_BPF_STREAM_PARSER=y CONFIG_BPF_SYSCALL=y CONFIG_CGROUP_BPF=y -CONFIG_CRYPTO_HMAC=m -CONFIG_CRYPTO_SHA256=m -CONFIG_CRYPTO_USER_API_HASH=m +CONFIG_CRYPTO_HMAC=y +CONFIG_CRYPTO_SHA256=y +CONFIG_CRYPTO_USER_API_HASH=y CONFIG_DYNAMIC_FTRACE=y CONFIG_FPROBE=y CONFIG_FTRACE_SYSCALLS=y @@ -24,30 +24,30 @@ CONFIG_IP_NF_FILTER=y CONFIG_IP_NF_RAW=y CONFIG_IP_NF_TARGET_SYNPROXY=y CONFIG_IPV6=y -CONFIG_IPV6_FOU=m -CONFIG_IPV6_FOU_TUNNEL=m +CONFIG_IPV6_FOU=y +CONFIG_IPV6_FOU_TUNNEL=y CONFIG_IPV6_GRE=y CONFIG_IPV6_SEG6_BPF=y -CONFIG_IPV6_SIT=m +CONFIG_IPV6_SIT=y CONFIG_IPV6_TUNNEL=y CONFIG_LIRC=y CONFIG_LWTUNNEL=y CONFIG_MPLS=y -CONFIG_MPLS_IPTUNNEL=m -CONFIG_MPLS_ROUTING=m +CONFIG_MPLS_IPTUNNEL=y +CONFIG_MPLS_ROUTING=y CONFIG_MPTCP=y CONFIG_NET_CLS_ACT=y CONFIG_NET_CLS_BPF=y -CONFIG_NET_CLS_FLOWER=m -CONFIG_NET_FOU=m +CONFIG_NET_CLS_FLOWER=y +CONFIG_NET_FOU=y CONFIG_NET_FOU_IP_TUNNELS=y CONFIG_NET_IPGRE=y CONFIG_NET_IPGRE_DEMUX=y CONFIG_NET_IPIP=y -CONFIG_NET_MPLS_GSO=m +CONFIG_NET_MPLS_GSO=y CONFIG_NET_SCH_INGRESS=y CONFIG_NET_SCHED=y -CONFIG_NETDEVSIM=m +CONFIG_NETDEVSIM=y CONFIG_NETFILTER=y CONFIG_NETFILTER_SYNPROXY=y CONFIG_NETFILTER_XT_CONNMARK=y @@ -60,7 +60,7 @@ CONFIG_NF_DEFRAG_IPV6=y CONFIG_RC_CORE=y CONFIG_SECURITY=y CONFIG_SECURITYFS=y -CONFIG_TEST_BPF=m +CONFIG_TEST_BPF=y CONFIG_USERFAULTFD=y CONFIG_VXLAN=y CONFIG_XDP_SOCKETS=y diff --git a/tools/testing/selftests/bpf/config.x86_64 b/tools/testing/selftests/bpf/config.x86_64 index f0859a1d37ab..ce70c9509204 100644 --- a/tools/testing/selftests/bpf/config.x86_64 +++ b/tools/testing/selftests/bpf/config.x86_64 @@ -47,7 +47,7 @@ CONFIG_CPU_IDLE_GOV_LADDER=y CONFIG_CPUSETS=y CONFIG_CRC_T10DIF=y CONFIG_CRYPTO_BLAKE2B=y -CONFIG_CRYPTO_DEV_VIRTIO=m +CONFIG_CRYPTO_DEV_VIRTIO=y CONFIG_CRYPTO_SEQIV=y CONFIG_CRYPTO_XXHASH=y CONFIG_DCB=y
On Mon, 5 Sept 2022 at 16:35, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
From: Roberto Sassu roberto.sassu@huawei.com
Since the eBPF CI does not support kernel modules, change the kernel config to compile everything as built-in.
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com Acked-by: Daniel Müller deso@posteo.net
Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com
tools/testing/selftests/bpf/config | 26 +++++++++++------------ tools/testing/selftests/bpf/config.x86_64 | 2 +- 2 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config index 3fc46f9cfb22..0fdd11e6b742 100644 --- a/tools/testing/selftests/bpf/config +++ b/tools/testing/selftests/bpf/config @@ -7,9 +7,9 @@ CONFIG_BPF_LSM=y CONFIG_BPF_STREAM_PARSER=y CONFIG_BPF_SYSCALL=y CONFIG_CGROUP_BPF=y -CONFIG_CRYPTO_HMAC=m -CONFIG_CRYPTO_SHA256=m -CONFIG_CRYPTO_USER_API_HASH=m +CONFIG_CRYPTO_HMAC=y +CONFIG_CRYPTO_SHA256=y +CONFIG_CRYPTO_USER_API_HASH=y CONFIG_DYNAMIC_FTRACE=y CONFIG_FPROBE=y CONFIG_FTRACE_SYSCALLS=y @@ -24,30 +24,30 @@ CONFIG_IP_NF_FILTER=y CONFIG_IP_NF_RAW=y CONFIG_IP_NF_TARGET_SYNPROXY=y CONFIG_IPV6=y -CONFIG_IPV6_FOU=m -CONFIG_IPV6_FOU_TUNNEL=m +CONFIG_IPV6_FOU=y +CONFIG_IPV6_FOU_TUNNEL=y CONFIG_IPV6_GRE=y CONFIG_IPV6_SEG6_BPF=y -CONFIG_IPV6_SIT=m +CONFIG_IPV6_SIT=y CONFIG_IPV6_TUNNEL=y CONFIG_LIRC=y CONFIG_LWTUNNEL=y CONFIG_MPLS=y -CONFIG_MPLS_IPTUNNEL=m -CONFIG_MPLS_ROUTING=m +CONFIG_MPLS_IPTUNNEL=y +CONFIG_MPLS_ROUTING=y CONFIG_MPTCP=y CONFIG_NET_CLS_ACT=y CONFIG_NET_CLS_BPF=y -CONFIG_NET_CLS_FLOWER=m -CONFIG_NET_FOU=m +CONFIG_NET_CLS_FLOWER=y +CONFIG_NET_FOU=y CONFIG_NET_FOU_IP_TUNNELS=y CONFIG_NET_IPGRE=y CONFIG_NET_IPGRE_DEMUX=y CONFIG_NET_IPIP=y -CONFIG_NET_MPLS_GSO=m +CONFIG_NET_MPLS_GSO=y CONFIG_NET_SCH_INGRESS=y CONFIG_NET_SCHED=y -CONFIG_NETDEVSIM=m +CONFIG_NETDEVSIM=y CONFIG_NETFILTER=y CONFIG_NETFILTER_SYNPROXY=y CONFIG_NETFILTER_XT_CONNMARK=y @@ -60,7 +60,7 @@ CONFIG_NF_DEFRAG_IPV6=y CONFIG_RC_CORE=y CONFIG_SECURITY=y CONFIG_SECURITYFS=y -CONFIG_TEST_BPF=m +CONFIG_TEST_BPF=y CONFIG_USERFAULTFD=y CONFIG_VXLAN=y CONFIG_XDP_SOCKETS=y diff --git a/tools/testing/selftests/bpf/config.x86_64 b/tools/testing/selftests/bpf/config.x86_64 index f0859a1d37ab..ce70c9509204 100644 --- a/tools/testing/selftests/bpf/config.x86_64 +++ b/tools/testing/selftests/bpf/config.x86_64 @@ -47,7 +47,7 @@ CONFIG_CPU_IDLE_GOV_LADDER=y CONFIG_CPUSETS=y CONFIG_CRC_T10DIF=y CONFIG_CRYPTO_BLAKE2B=y -CONFIG_CRYPTO_DEV_VIRTIO=m +CONFIG_CRYPTO_DEV_VIRTIO=y CONFIG_CRYPTO_SEQIV=y CONFIG_CRYPTO_XXHASH=y CONFIG_DCB=y -- 2.25.1
From: Roberto Sassu roberto.sassu@huawei.com
Add verifier tests for bpf_lookup_*_key() and bpf_key_put(), to ensure that acquired key references stored in the bpf_key structure are released, that a non-NULL bpf_key pointer is passed to bpf_key_put(), and that key references are not leaked.
Also, slightly modify test_verifier.c, to find the BTF ID of the attach point for the LSM program type (currently, it is done only for TRACING).
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com --- tools/testing/selftests/bpf/config | 1 + tools/testing/selftests/bpf/test_verifier.c | 3 +- .../selftests/bpf/verifier/ref_tracking.c | 139 ++++++++++++++++++ 3 files changed, 142 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config index 0fdd11e6b742..add5a5a919b4 100644 --- a/tools/testing/selftests/bpf/config +++ b/tools/testing/selftests/bpf/config @@ -30,6 +30,7 @@ CONFIG_IPV6_GRE=y CONFIG_IPV6_SEG6_BPF=y CONFIG_IPV6_SIT=y CONFIG_IPV6_TUNNEL=y +CONFIG_KEYS=y CONFIG_LIRC=y CONFIG_LWTUNNEL=y CONFIG_MPLS=y diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c index f9d553fbf68a..2dbcbf363c18 100644 --- a/tools/testing/selftests/bpf/test_verifier.c +++ b/tools/testing/selftests/bpf/test_verifier.c @@ -1498,7 +1498,8 @@ static void do_test_single(struct bpf_test *test, bool unpriv, opts.log_level = DEFAULT_LIBBPF_LOG_LEVEL; opts.prog_flags = pflags;
- if (prog_type == BPF_PROG_TYPE_TRACING && test->kfunc) { + if ((prog_type == BPF_PROG_TYPE_TRACING || + prog_type == BPF_PROG_TYPE_LSM) && test->kfunc) { int attach_btf_id;
attach_btf_id = libbpf_find_vmlinux_btf_id(test->kfunc, diff --git a/tools/testing/selftests/bpf/verifier/ref_tracking.c b/tools/testing/selftests/bpf/verifier/ref_tracking.c index 57a83d763ec1..f18ce867271f 100644 --- a/tools/testing/selftests/bpf/verifier/ref_tracking.c +++ b/tools/testing/selftests/bpf/verifier/ref_tracking.c @@ -84,6 +84,145 @@ .errstr = "Unreleased reference", .result = REJECT, }, +{ + "reference tracking: acquire/release user key reference", + .insns = { + BPF_MOV64_IMM(BPF_REG_1, -3), + BPF_MOV64_IMM(BPF_REG_2, 0), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), + BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_LSM, + .kfunc = "bpf", + .expected_attach_type = BPF_LSM_MAC, + .flags = BPF_F_SLEEPABLE, + .fixup_kfunc_btf_id = { + { "bpf_lookup_user_key", 2 }, + { "bpf_key_put", 5 }, + }, + .result = ACCEPT, +}, +{ + "reference tracking: acquire/release system key reference", + .insns = { + BPF_MOV64_IMM(BPF_REG_1, 1), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), + BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_LSM, + .kfunc = "bpf", + .expected_attach_type = BPF_LSM_MAC, + .flags = BPF_F_SLEEPABLE, + .fixup_kfunc_btf_id = { + { "bpf_lookup_system_key", 1 }, + { "bpf_key_put", 4 }, + }, + .result = ACCEPT, +}, +{ + "reference tracking: release user key reference without check", + .insns = { + BPF_MOV64_IMM(BPF_REG_1, -3), + BPF_MOV64_IMM(BPF_REG_2, 0), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), + BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_LSM, + .kfunc = "bpf", + .expected_attach_type = BPF_LSM_MAC, + .flags = BPF_F_SLEEPABLE, + .errstr = "arg#0 pointer type STRUCT bpf_key must point to scalar, or struct with scalar", + .fixup_kfunc_btf_id = { + { "bpf_lookup_user_key", 2 }, + { "bpf_key_put", 4 }, + }, + .result = REJECT, +}, +{ + "reference tracking: release system key reference without check", + .insns = { + BPF_MOV64_IMM(BPF_REG_1, 1), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), + BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_LSM, + .kfunc = "bpf", + .expected_attach_type = BPF_LSM_MAC, + .flags = BPF_F_SLEEPABLE, + .errstr = "arg#0 pointer type STRUCT bpf_key must point to scalar, or struct with scalar", + .fixup_kfunc_btf_id = { + { "bpf_lookup_system_key", 1 }, + { "bpf_key_put", 3 }, + }, + .result = REJECT, +}, +{ + "reference tracking: release with NULL key pointer", + .insns = { + BPF_MOV64_IMM(BPF_REG_1, 0), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_LSM, + .kfunc = "bpf", + .expected_attach_type = BPF_LSM_MAC, + .flags = BPF_F_SLEEPABLE, + .errstr = "arg#0 pointer type STRUCT bpf_key must point to scalar, or struct with scalar", + .fixup_kfunc_btf_id = { + { "bpf_key_put", 1 }, + }, + .result = REJECT, +}, +{ + 
"reference tracking: leak potential reference to user key", + .insns = { + BPF_MOV64_IMM(BPF_REG_1, -3), + BPF_MOV64_IMM(BPF_REG_2, 0), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_LSM, + .kfunc = "bpf", + .expected_attach_type = BPF_LSM_MAC, + .flags = BPF_F_SLEEPABLE, + .errstr = "Unreleased reference", + .fixup_kfunc_btf_id = { + { "bpf_lookup_user_key", 2 }, + }, + .result = REJECT, +}, +{ + "reference tracking: leak potential reference to system key", + .insns = { + BPF_MOV64_IMM(BPF_REG_1, 1), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_LSM, + .kfunc = "bpf", + .expected_attach_type = BPF_LSM_MAC, + .flags = BPF_F_SLEEPABLE, + .errstr = "Unreleased reference", + .fixup_kfunc_btf_id = { + { "bpf_lookup_system_key", 1 }, + }, + .result = REJECT, +}, { "reference tracking: release reference without check", .insns = {
On Mon, 5 Sept 2022 at 16:36, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
From: Roberto Sassu roberto.sassu@huawei.com
Add verifier tests for bpf_lookup_*_key() and bpf_key_put(), to ensure that acquired key references stored in the bpf_key structure are released, that a non-NULL bpf_key pointer is passed to bpf_key_put(), and that key references are not leaked.
Also, slightly modify test_verifier.c, to find the BTF ID of the attach point for the LSM program type (currently, it is done only for TRACING).
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com
tools/testing/selftests/bpf/config | 1 + tools/testing/selftests/bpf/test_verifier.c | 3 +- .../selftests/bpf/verifier/ref_tracking.c | 139 ++++++++++++++++++ 3 files changed, 142 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config index 0fdd11e6b742..add5a5a919b4 100644 --- a/tools/testing/selftests/bpf/config +++ b/tools/testing/selftests/bpf/config @@ -30,6 +30,7 @@ CONFIG_IPV6_GRE=y CONFIG_IPV6_SEG6_BPF=y CONFIG_IPV6_SIT=y CONFIG_IPV6_TUNNEL=y +CONFIG_KEYS=y CONFIG_LIRC=y CONFIG_LWTUNNEL=y CONFIG_MPLS=y diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c index f9d553fbf68a..2dbcbf363c18 100644 --- a/tools/testing/selftests/bpf/test_verifier.c +++ b/tools/testing/selftests/bpf/test_verifier.c @@ -1498,7 +1498,8 @@ static void do_test_single(struct bpf_test *test, bool unpriv, opts.log_level = DEFAULT_LIBBPF_LOG_LEVEL; opts.prog_flags = pflags;
if (prog_type == BPF_PROG_TYPE_TRACING && test->kfunc) {
if ((prog_type == BPF_PROG_TYPE_TRACING ||
prog_type == BPF_PROG_TYPE_LSM) && test->kfunc) { int attach_btf_id; attach_btf_id = libbpf_find_vmlinux_btf_id(test->kfunc,
diff --git a/tools/testing/selftests/bpf/verifier/ref_tracking.c b/tools/testing/selftests/bpf/verifier/ref_tracking.c index 57a83d763ec1..f18ce867271f 100644 --- a/tools/testing/selftests/bpf/verifier/ref_tracking.c +++ b/tools/testing/selftests/bpf/verifier/ref_tracking.c @@ -84,6 +84,145 @@ .errstr = "Unreleased reference", .result = REJECT, }, +{
"reference tracking: acquire/release user key reference",
.insns = {
BPF_MOV64_IMM(BPF_REG_1, -3),
BPF_MOV64_IMM(BPF_REG_2, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_LSM,
.kfunc = "bpf",
.expected_attach_type = BPF_LSM_MAC,
.flags = BPF_F_SLEEPABLE,
.fixup_kfunc_btf_id = {
{ "bpf_lookup_user_key", 2 },
{ "bpf_key_put", 5 },
},
.result = ACCEPT,
+}, +{
"reference tracking: acquire/release system key reference",
.insns = {
BPF_MOV64_IMM(BPF_REG_1, 1),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_LSM,
.kfunc = "bpf",
.expected_attach_type = BPF_LSM_MAC,
.flags = BPF_F_SLEEPABLE,
.fixup_kfunc_btf_id = {
{ "bpf_lookup_system_key", 1 },
{ "bpf_key_put", 4 },
},
.result = ACCEPT,
+}, +{
"reference tracking: release user key reference without check",
.insns = {
BPF_MOV64_IMM(BPF_REG_1, -3),
BPF_MOV64_IMM(BPF_REG_2, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_LSM,
.kfunc = "bpf",
.expected_attach_type = BPF_LSM_MAC,
.flags = BPF_F_SLEEPABLE,
.errstr = "arg#0 pointer type STRUCT bpf_key must point to scalar, or struct with scalar",
.fixup_kfunc_btf_id = {
{ "bpf_lookup_user_key", 2 },
{ "bpf_key_put", 4 },
},
.result = REJECT,
+}, +{
"reference tracking: release system key reference without check",
.insns = {
BPF_MOV64_IMM(BPF_REG_1, 1),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_LSM,
.kfunc = "bpf",
.expected_attach_type = BPF_LSM_MAC,
.flags = BPF_F_SLEEPABLE,
.errstr = "arg#0 pointer type STRUCT bpf_key must point to scalar, or struct with scalar",
.fixup_kfunc_btf_id = {
{ "bpf_lookup_system_key", 1 },
{ "bpf_key_put", 3 },
},
.result = REJECT,
+}, +{
"reference tracking: release with NULL key pointer",
.insns = {
BPF_MOV64_IMM(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_LSM,
.kfunc = "bpf",
.expected_attach_type = BPF_LSM_MAC,
.flags = BPF_F_SLEEPABLE,
.errstr = "arg#0 pointer type STRUCT bpf_key must point to scalar, or struct with scalar",
.fixup_kfunc_btf_id = {
{ "bpf_key_put", 1 },
},
.result = REJECT,
+}, +{
"reference tracking: leak potential reference to user key",
.insns = {
BPF_MOV64_IMM(BPF_REG_1, -3),
BPF_MOV64_IMM(BPF_REG_2, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_LSM,
.kfunc = "bpf",
.expected_attach_type = BPF_LSM_MAC,
.flags = BPF_F_SLEEPABLE,
.errstr = "Unreleased reference",
.fixup_kfunc_btf_id = {
{ "bpf_lookup_user_key", 2 },
},
.result = REJECT,
+}, +{
"reference tracking: leak potential reference to system key",
.insns = {
BPF_MOV64_IMM(BPF_REG_1, 1),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_LSM,
.kfunc = "bpf",
.expected_attach_type = BPF_LSM_MAC,
.flags = BPF_F_SLEEPABLE,
.errstr = "Unreleased reference",
.fixup_kfunc_btf_id = {
{ "bpf_lookup_system_key", 1 },
},
.result = REJECT,
+}, { "reference tracking: release reference without check", .insns = { -- 2.25.1
From: Roberto Sassu roberto.sassu@huawei.com
Add a test to ensure that bpf_lookup_user_key() creates a referenced special keyring when the KEY_LOOKUP_CREATE flag is passed to this function.
Ensure that the kfunc rejects invalid flags.
Ensure that a keyring can be obtained from bpf_lookup_system_key() when one of the pre-determined keyring IDs is provided.
The test is currently blacklisted for s390x (JIT does not support calling kernel function).
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com --- tools/testing/selftests/bpf/DENYLIST.s390x | 1 + .../selftests/bpf/prog_tests/lookup_key.c | 112 ++++++++++++++++++ .../selftests/bpf/progs/test_lookup_key.c | 46 +++++++ 3 files changed, 159 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/lookup_key.c create mode 100644 tools/testing/selftests/bpf/progs/test_lookup_key.c
diff --git a/tools/testing/selftests/bpf/DENYLIST.s390x b/tools/testing/selftests/bpf/DENYLIST.s390x index ba02b559ca68..8c90f6485827 100644 --- a/tools/testing/selftests/bpf/DENYLIST.s390x +++ b/tools/testing/selftests/bpf/DENYLIST.s390x @@ -69,3 +69,4 @@ setget_sockopt # attach unexpected error: -524 cb_refs # expected error message unexpected error: -524 (trampoline) cgroup_hierarchical_stats # JIT does not support calling kernel function (kfunc) htab_update # failed to attach: ERROR: strerror_r(-524)=22 (trampoline) +lookup_key # JIT does not support calling kernel function (kfunc) diff --git a/tools/testing/selftests/bpf/prog_tests/lookup_key.c b/tools/testing/selftests/bpf/prog_tests/lookup_key.c new file mode 100644 index 000000000000..2e0cde729dc7 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/lookup_key.c @@ -0,0 +1,112 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Copyright (C) 2022 Huawei Technologies Duesseldorf GmbH + * + * Author: Roberto Sassu roberto.sassu@huawei.com + */ + +#include <linux/keyctl.h> +#include <test_progs.h> + +#include "test_lookup_key.skel.h" + +#define KEY_LOOKUP_CREATE 0x01 +#define KEY_LOOKUP_PARTIAL 0x02 + +static bool kfunc_not_supported; + +static int libbpf_print_cb(enum libbpf_print_level level, const char *fmt, + va_list args) +{ + char *func; + + if (strcmp(fmt, "libbpf: extern (func ksym) '%s': not found in kernel or module BTFs\n")) + return 0; + + func = va_arg(args, char *); + + if (strcmp(func, "bpf_lookup_user_key") && strcmp(func, "bpf_key_put") && + strcmp(func, "bpf_lookup_system_key")) + return 0; + + kfunc_not_supported = true; + return 0; +} + +void test_lookup_key(void) +{ + libbpf_print_fn_t old_print_cb; + struct test_lookup_key *skel; + u32 next_id; + int ret; + + skel = test_lookup_key__open(); + if (!ASSERT_OK_PTR(skel, "test_lookup_key__open")) + return; + + old_print_cb = libbpf_set_print(libbpf_print_cb); + ret = test_lookup_key__load(skel); + libbpf_set_print(old_print_cb); + + if (ret < 0 && kfunc_not_supported) { + printf("%s:SKIP:bpf_lookup_*_key(), bpf_key_put() kfuncs not supported\n", + __func__); + test__skip(); + goto close_prog; + } + + if (!ASSERT_OK(ret, "test_lookup_key__load")) + goto close_prog; + + ret = test_lookup_key__attach(skel); + if (!ASSERT_OK(ret, "test_lookup_key__attach")) + goto close_prog; + + skel->bss->monitored_pid = getpid(); + skel->bss->key_serial = KEY_SPEC_THREAD_KEYRING; + + /* The thread-specific keyring does not exist, this test fails. */ + skel->bss->flags = 0; + + ret = bpf_prog_get_next_id(0, &next_id); + if (!ASSERT_LT(ret, 0, "bpf_prog_get_next_id")) + goto close_prog; + + /* Force creation of the thread-specific keyring, this test succeeds. */ + skel->bss->flags = KEY_LOOKUP_CREATE; + + ret = bpf_prog_get_next_id(0, &next_id); + if (!ASSERT_OK(ret, "bpf_prog_get_next_id")) + goto close_prog; + + /* Pass both lookup flags for parameter validation. */ + skel->bss->flags = KEY_LOOKUP_CREATE | KEY_LOOKUP_PARTIAL; + + ret = bpf_prog_get_next_id(0, &next_id); + if (!ASSERT_OK(ret, "bpf_prog_get_next_id")) + goto close_prog; + + /* Pass invalid flags. 
*/ + skel->bss->flags = UINT64_MAX; + + ret = bpf_prog_get_next_id(0, &next_id); + if (!ASSERT_LT(ret, 0, "bpf_prog_get_next_id")) + goto close_prog; + + skel->bss->key_serial = 0; + skel->bss->key_id = 1; + + ret = bpf_prog_get_next_id(0, &next_id); + if (!ASSERT_OK(ret, "bpf_prog_get_next_id")) + goto close_prog; + + skel->bss->key_id = UINT32_MAX; + + ret = bpf_prog_get_next_id(0, &next_id); + ASSERT_LT(ret, 0, "bpf_prog_get_next_id"); + +close_prog: + skel->bss->monitored_pid = 0; + test_lookup_key__destroy(skel); +} diff --git a/tools/testing/selftests/bpf/progs/test_lookup_key.c b/tools/testing/selftests/bpf/progs/test_lookup_key.c new file mode 100644 index 000000000000..c73776990ae3 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/test_lookup_key.c @@ -0,0 +1,46 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Copyright (C) 2022 Huawei Technologies Duesseldorf GmbH + * + * Author: Roberto Sassu roberto.sassu@huawei.com + */ + +#include "vmlinux.h" +#include <errno.h> +#include <bpf/bpf_helpers.h> +#include <bpf/bpf_tracing.h> + +char _license[] SEC("license") = "GPL"; + +__u32 monitored_pid; +__u32 key_serial; +__u32 key_id; +__u64 flags; + +extern struct bpf_key *bpf_lookup_user_key(__u32 serial, __u64 flags) __ksym; +extern struct bpf_key *bpf_lookup_system_key(__u64 id) __ksym; +extern void bpf_key_put(struct bpf_key *key) __ksym; + +SEC("lsm.s/bpf") +int BPF_PROG(bpf, int cmd, union bpf_attr *attr, unsigned int size) +{ + struct bpf_key *bkey; + __u32 pid; + + pid = bpf_get_current_pid_tgid() >> 32; + if (pid != monitored_pid) + return 0; + + if (key_serial) + bkey = bpf_lookup_user_key(key_serial, flags); + else + bkey = bpf_lookup_system_key(key_id); + + if (!bkey) + return -ENOENT; + + bpf_key_put(bkey); + + return 0; +}
From: Roberto Sassu roberto.sassu@huawei.com
Perform several tests to ensure the correct implementation of the bpf_verify_pkcs7_signature() kfunc.
Do the tests with data signed with a generated testing key (by using sign-file from scripts/) and with the tcp_bic.ko kernel module if it is found in the system. The test does not fail if tcp_bic.ko is not found.
First, perform an unsuccessful signature verification without data.
Second, perform a successful signature verification with the session keyring and a new one created for testing.
Then, ensure that permission and validation checks are done properly on the keyring provided to bpf_verify_pkcs7_signature(), even though those checks were deferred at the time the keyring was retrieved with bpf_lookup_user_key(). The tests expect an error if the Search permission is removed from the keyring, or if the keyring has expired.
Finally, perform a successful and unsuccessful signature verification with the keyrings with pre-determined IDs (the last test fails because the key is not in the platform keyring).
The test is currently in the deny list for s390x (JIT does not support calling kernel function).
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com --- tools/testing/selftests/bpf/DENYLIST.s390x | 1 + tools/testing/selftests/bpf/Makefile | 14 +- tools/testing/selftests/bpf/config | 5 + tools/testing/selftests/bpf/config.x86_64 | 5 - .../bpf/prog_tests/verify_pkcs7_sig.c | 399 ++++++++++++++++++ .../bpf/progs/test_verify_pkcs7_sig.c | 100 +++++ .../testing/selftests/bpf/verify_sig_setup.sh | 104 +++++ 7 files changed, 620 insertions(+), 8 deletions(-) create mode 100644 tools/testing/selftests/bpf/prog_tests/verify_pkcs7_sig.c create mode 100644 tools/testing/selftests/bpf/progs/test_verify_pkcs7_sig.c create mode 100755 tools/testing/selftests/bpf/verify_sig_setup.sh
diff --git a/tools/testing/selftests/bpf/DENYLIST.s390x b/tools/testing/selftests/bpf/DENYLIST.s390x index 8c90f6485827..4e305baa5277 100644 --- a/tools/testing/selftests/bpf/DENYLIST.s390x +++ b/tools/testing/selftests/bpf/DENYLIST.s390x @@ -70,3 +70,4 @@ cb_refs # expected error message unexpected err cgroup_hierarchical_stats # JIT does not support calling kernel function (kfunc) htab_update # failed to attach: ERROR: strerror_r(-524)=22 (trampoline) lookup_key # JIT does not support calling kernel function (kfunc) +verify_pkcs7_sig # JIT does not support calling kernel function (kfunc) diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index c10adecb5a73..d2dc38bdc984 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -14,6 +14,7 @@ BPFTOOLDIR := $(TOOLSDIR)/bpf/bpftool APIDIR := $(TOOLSINCDIR)/uapi GENDIR := $(abspath ../../../../include/generated) GENHDR := $(GENDIR)/autoconf.h +HOSTPKG_CONFIG := pkg-config
ifneq ($(wildcard $(GENHDR)),) GENFLAGS := -DHAVE_GENHDR @@ -75,7 +76,7 @@ TEST_PROGS := test_kmod.sh \ test_xsk.sh
TEST_PROGS_EXTENDED := with_addr.sh \ - with_tunnels.sh ima_setup.sh \ + with_tunnels.sh ima_setup.sh verify_sig_setup.sh \ test_xdp_vlan.sh test_bpftool.py
# Compile but not part of 'make run_tests' @@ -84,7 +85,7 @@ TEST_GEN_PROGS_EXTENDED = test_sock_addr test_skb_cgroup_id_user \ test_lirc_mode2_user xdping test_cpp runqslower bench bpf_testmod.ko \ xskxceiver xdp_redirect_multi xdp_synproxy
-TEST_CUSTOM_PROGS = $(OUTPUT)/urandom_read +TEST_CUSTOM_PROGS = $(OUTPUT)/urandom_read $(OUTPUT)/sign-file
# Emit succinct information message describing current building step # $1 - generic step name (e.g., CC, LINK, etc); @@ -189,6 +190,12 @@ $(OUTPUT)/urandom_read: urandom_read.c urandom_read_aux.c $(OUTPUT)/liburandom_r -fuse-ld=$(LLD) -Wl,-znoseparate-code \ -Wl,-rpath=. -Wl,--build-id=sha1 -o $@
+$(OUTPUT)/sign-file: ../../../../scripts/sign-file.c + $(call msg,SIGN-FILE,,$@) + $(Q)$(CC) $(shell $(HOSTPKG_CONFIG)--cflags libcrypto 2> /dev/null) \ + $< -o $@ \ + $(shell $(HOSTPKG_CONFIG) --libs libcrypto 2> /dev/null || echo -lcrypto) + $(OUTPUT)/bpf_testmod.ko: $(VMLINUX_BTF) $(wildcard bpf_testmod/Makefile bpf_testmod/*.[ch]) $(call msg,MOD,,$@) $(Q)$(RM) bpf_testmod/bpf_testmod.ko # force re-compilation @@ -515,7 +522,8 @@ TRUNNER_EXTRA_SOURCES := test_progs.c cgroup_helpers.c trace_helpers.c \ TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read $(OUTPUT)/bpf_testmod.ko \ $(OUTPUT)/liburandom_read.so \ $(OUTPUT)/xdp_synproxy \ - ima_setup.sh \ + $(OUTPUT)/sign-file \ + ima_setup.sh verify_sig_setup.sh \ $(wildcard progs/btf_dump_test_case_*.c) TRUNNER_BPF_BUILD_RULE := CLANG_BPF_BUILD_RULE TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS) -DENABLE_ATOMICS_TESTS diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config index add5a5a919b4..905a9be8d0a2 100644 --- a/tools/testing/selftests/bpf/config +++ b/tools/testing/selftests/bpf/config @@ -33,6 +33,11 @@ CONFIG_IPV6_TUNNEL=y CONFIG_KEYS=y CONFIG_LIRC=y CONFIG_LWTUNNEL=y +CONFIG_MODULE_SIG=y +CONFIG_MODULE_SRCVERSION_ALL=y +CONFIG_MODULE_UNLOAD=y +CONFIG_MODULES=y +CONFIG_MODVERSIONS=y CONFIG_MPLS=y CONFIG_MPLS_IPTUNNEL=y CONFIG_MPLS_ROUTING=y diff --git a/tools/testing/selftests/bpf/config.x86_64 b/tools/testing/selftests/bpf/config.x86_64 index ce70c9509204..21ce5ea4304e 100644 --- a/tools/testing/selftests/bpf/config.x86_64 +++ b/tools/testing/selftests/bpf/config.x86_64 @@ -145,11 +145,6 @@ CONFIG_MCORE2=y CONFIG_MEMCG=y CONFIG_MEMORY_FAILURE=y CONFIG_MINIX_SUBPARTITION=y -CONFIG_MODULE_SIG=y -CONFIG_MODULE_SRCVERSION_ALL=y -CONFIG_MODULE_UNLOAD=y -CONFIG_MODULES=y -CONFIG_MODVERSIONS=y CONFIG_NAMESPACES=y CONFIG_NET=y CONFIG_NET_9P=y diff --git a/tools/testing/selftests/bpf/prog_tests/verify_pkcs7_sig.c b/tools/testing/selftests/bpf/prog_tests/verify_pkcs7_sig.c new file mode 100644 index 000000000000..20be68d4cce4 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/verify_pkcs7_sig.c @@ -0,0 +1,399 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Copyright (C) 2022 Huawei Technologies Duesseldorf GmbH + * + * Author: Roberto Sassu roberto.sassu@huawei.com + */ + +#include <stdio.h> +#include <errno.h> +#include <stdlib.h> +#include <unistd.h> +#include <endian.h> +#include <limits.h> +#include <sys/stat.h> +#include <sys/wait.h> +#include <sys/mman.h> +#include <linux/keyctl.h> +#include <test_progs.h> + +#include "test_verify_pkcs7_sig.skel.h" + +#define MAX_DATA_SIZE (1024 * 1024) +#define MAX_SIG_SIZE 1024 + +#define VERIFY_USE_SECONDARY_KEYRING (1UL) +#define VERIFY_USE_PLATFORM_KEYRING (2UL) + +/* In stripped ARM and x86-64 modules, ~ is surprisingly rare. */ +#define MODULE_SIG_STRING "~Module signature appended~\n" + +/* + * Module signature information block. 
+ * + * The constituents of the signature section are, in order: + * + * - Signer's name + * - Key identifier + * - Signature data + * - Information block + */ +struct module_signature { + u8 algo; /* Public-key crypto algorithm [0] */ + u8 hash; /* Digest algorithm [0] */ + u8 id_type; /* Key identifier type [PKEY_ID_PKCS7] */ + u8 signer_len; /* Length of signer's name [0] */ + u8 key_id_len; /* Length of key identifier [0] */ + u8 __pad[3]; + __be32 sig_len; /* Length of signature data */ +}; + +struct data { + u8 data[MAX_DATA_SIZE]; + u32 data_len; + u8 sig[MAX_SIG_SIZE]; + u32 sig_len; +}; + +static bool kfunc_not_supported; + +static int libbpf_print_cb(enum libbpf_print_level level, const char *fmt, + va_list args) +{ + if (strcmp(fmt, "libbpf: extern (func ksym) '%s': not found in kernel or module BTFs\n")) + return 0; + + if (strcmp(va_arg(args, char *), "bpf_verify_pkcs7_signature")) + return 0; + + kfunc_not_supported = true; + return 0; +} + +static int _run_setup_process(const char *setup_dir, const char *cmd) +{ + int child_pid, child_status; + + child_pid = fork(); + if (child_pid == 0) { + execlp("./verify_sig_setup.sh", "./verify_sig_setup.sh", cmd, + setup_dir, NULL); + exit(errno); + + } else if (child_pid > 0) { + waitpid(child_pid, &child_status, 0); + return WEXITSTATUS(child_status); + } + + return -EINVAL; +} + +static int populate_data_item_str(const char *tmp_dir, struct data *data_item) +{ + struct stat st; + char data_template[] = "/tmp/dataXXXXXX"; + char path[PATH_MAX]; + int ret, fd, child_status, child_pid; + + data_item->data_len = 4; + memcpy(data_item->data, "test", data_item->data_len); + + fd = mkstemp(data_template); + if (fd == -1) + return -errno; + + ret = write(fd, data_item->data, data_item->data_len); + + close(fd); + + if (ret != data_item->data_len) { + ret = -EIO; + goto out; + } + + child_pid = fork(); + + if (child_pid == -1) { + ret = -errno; + goto out; + } + + if (child_pid == 0) { + snprintf(path, sizeof(path), "%s/signing_key.pem", tmp_dir); + + return execlp("./sign-file", "./sign-file", "-d", "sha256", + path, path, data_template, NULL); + } + + waitpid(child_pid, &child_status, 0); + + ret = WEXITSTATUS(child_status); + if (ret) + goto out; + + snprintf(path, sizeof(path), "%s.p7s", data_template); + + ret = stat(path, &st); + if (ret == -1) { + ret = -errno; + goto out; + } + + if (st.st_size > sizeof(data_item->sig)) { + ret = -EINVAL; + goto out_sig; + } + + data_item->sig_len = st.st_size; + + fd = open(path, O_RDONLY); + if (fd == -1) { + ret = -errno; + goto out_sig; + } + + ret = read(fd, data_item->sig, data_item->sig_len); + + close(fd); + + if (ret != data_item->sig_len) { + ret = -EIO; + goto out_sig; + } + + ret = 0; +out_sig: + unlink(path); +out: + unlink(data_template); + return ret; +} + +static int populate_data_item_mod(struct data *data_item) +{ + char mod_path[PATH_MAX], *mod_path_ptr; + struct stat st; + void *mod; + FILE *fp; + struct module_signature ms; + int ret, fd, modlen, marker_len, sig_len; + + data_item->data_len = 0; + + if (stat("/lib/modules", &st) == -1) + return 0; + + /* Requires CONFIG_TCP_CONG_BIC=m. 
*/ + fp = popen("find /lib/modules/$(uname -r) -name tcp_bic.ko", "r"); + if (!fp) + return 0; + + mod_path_ptr = fgets(mod_path, sizeof(mod_path), fp); + pclose(fp); + + if (!mod_path_ptr) + return 0; + + mod_path_ptr = strchr(mod_path, '\n'); + if (!mod_path_ptr) + return 0; + + *mod_path_ptr = '\0'; + + if (stat(mod_path, &st) == -1) + return 0; + + modlen = st.st_size; + marker_len = sizeof(MODULE_SIG_STRING) - 1; + + fd = open(mod_path, O_RDONLY); + if (fd == -1) + return -errno; + + mod = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0); + + close(fd); + + if (mod == MAP_FAILED) + return -errno; + + if (strncmp(mod + modlen - marker_len, MODULE_SIG_STRING, marker_len)) { + ret = -EINVAL; + goto out; + } + + modlen -= marker_len; + + memcpy(&ms, mod + (modlen - sizeof(ms)), sizeof(ms)); + + sig_len = __be32_to_cpu(ms.sig_len); + modlen -= sig_len + sizeof(ms); + + if (modlen > sizeof(data_item->data)) { + ret = -E2BIG; + goto out; + } + + memcpy(data_item->data, mod, modlen); + data_item->data_len = modlen; + + if (sig_len > sizeof(data_item->sig)) { + ret = -E2BIG; + goto out; + } + + memcpy(data_item->sig, mod + modlen, sig_len); + data_item->sig_len = sig_len; + ret = 0; +out: + munmap(mod, st.st_size); + return ret; +} + +void test_verify_pkcs7_sig(void) +{ + libbpf_print_fn_t old_print_cb; + char tmp_dir_template[] = "/tmp/verify_sigXXXXXX"; + char *tmp_dir; + struct test_verify_pkcs7_sig *skel = NULL; + struct bpf_map *map; + struct data data; + int ret, zero = 0; + + /* Trigger creation of session keyring. */ + syscall(__NR_request_key, "keyring", "_uid.0", NULL, + KEY_SPEC_SESSION_KEYRING); + + tmp_dir = mkdtemp(tmp_dir_template); + if (!ASSERT_OK_PTR(tmp_dir, "mkdtemp")) + return; + + ret = _run_setup_process(tmp_dir, "setup"); + if (!ASSERT_OK(ret, "_run_setup_process")) + goto close_prog; + + skel = test_verify_pkcs7_sig__open(); + if (!ASSERT_OK_PTR(skel, "test_verify_pkcs7_sig__open")) + goto close_prog; + + old_print_cb = libbpf_set_print(libbpf_print_cb); + ret = test_verify_pkcs7_sig__load(skel); + libbpf_set_print(old_print_cb); + + if (ret < 0 && kfunc_not_supported) { + printf( + "%s:SKIP:bpf_verify_pkcs7_signature() kfunc not supported\n", + __func__); + test__skip(); + goto close_prog; + } + + if (!ASSERT_OK(ret, "test_verify_pkcs7_sig__load")) + goto close_prog; + + ret = test_verify_pkcs7_sig__attach(skel); + if (!ASSERT_OK(ret, "test_verify_pkcs7_sig__attach")) + goto close_prog; + + map = bpf_object__find_map_by_name(skel->obj, "data_input"); + if (!ASSERT_OK_PTR(map, "data_input not found")) + goto close_prog; + + skel->bss->monitored_pid = getpid(); + + /* Test without data and signature. */ + skel->bss->user_keyring_serial = KEY_SPEC_SESSION_KEYRING; + + ret = bpf_map_update_elem(bpf_map__fd(map), &zero, &data, BPF_ANY); + if (!ASSERT_LT(ret, 0, "bpf_map_update_elem data_input")) + goto close_prog; + + /* Test successful signature verification with session keyring. */ + ret = populate_data_item_str(tmp_dir, &data); + if (!ASSERT_OK(ret, "populate_data_item_str")) + goto close_prog; + + ret = bpf_map_update_elem(bpf_map__fd(map), &zero, &data, BPF_ANY); + if (!ASSERT_OK(ret, "bpf_map_update_elem data_input")) + goto close_prog; + + /* Test successful signature verification with testing keyring. 
*/ + skel->bss->user_keyring_serial = syscall(__NR_request_key, "keyring", + "ebpf_testing_keyring", NULL, + KEY_SPEC_SESSION_KEYRING); + + ret = bpf_map_update_elem(bpf_map__fd(map), &zero, &data, BPF_ANY); + if (!ASSERT_OK(ret, "bpf_map_update_elem data_input")) + goto close_prog; + + /* + * Ensure key_task_permission() is called and rejects the keyring + * (no Search permission). + */ + syscall(__NR_keyctl, KEYCTL_SETPERM, skel->bss->user_keyring_serial, + 0x37373737); + + ret = bpf_map_update_elem(bpf_map__fd(map), &zero, &data, BPF_ANY); + if (!ASSERT_LT(ret, 0, "bpf_map_update_elem data_input")) + goto close_prog; + + syscall(__NR_keyctl, KEYCTL_SETPERM, skel->bss->user_keyring_serial, + 0x3f3f3f3f); + + /* + * Ensure key_validate() is called and rejects the keyring (key expired) + */ + syscall(__NR_keyctl, KEYCTL_SET_TIMEOUT, + skel->bss->user_keyring_serial, 1); + sleep(1); + + ret = bpf_map_update_elem(bpf_map__fd(map), &zero, &data, BPF_ANY); + if (!ASSERT_LT(ret, 0, "bpf_map_update_elem data_input")) + goto close_prog; + + skel->bss->user_keyring_serial = KEY_SPEC_SESSION_KEYRING; + + /* Test with corrupted data (signature verification should fail). */ + data.data[0] = 'a'; + ret = bpf_map_update_elem(bpf_map__fd(map), &zero, &data, BPF_ANY); + if (!ASSERT_LT(ret, 0, "bpf_map_update_elem data_input")) + goto close_prog; + + ret = populate_data_item_mod(&data); + if (!ASSERT_OK(ret, "populate_data_item_mod")) + goto close_prog; + + /* Test signature verification with system keyrings. */ + if (data.data_len) { + skel->bss->user_keyring_serial = 0; + skel->bss->system_keyring_id = 0; + + ret = bpf_map_update_elem(bpf_map__fd(map), &zero, &data, + BPF_ANY); + if (!ASSERT_OK(ret, "bpf_map_update_elem data_input")) + goto close_prog; + + skel->bss->system_keyring_id = VERIFY_USE_SECONDARY_KEYRING; + + ret = bpf_map_update_elem(bpf_map__fd(map), &zero, &data, + BPF_ANY); + if (!ASSERT_OK(ret, "bpf_map_update_elem data_input")) + goto close_prog; + + skel->bss->system_keyring_id = VERIFY_USE_PLATFORM_KEYRING; + + ret = bpf_map_update_elem(bpf_map__fd(map), &zero, &data, + BPF_ANY); + ASSERT_LT(ret, 0, "bpf_map_update_elem data_input"); + } + +close_prog: + _run_setup_process(tmp_dir, "cleanup"); + + if (!skel) + return; + + skel->bss->monitored_pid = 0; + test_verify_pkcs7_sig__destroy(skel); +} diff --git a/tools/testing/selftests/bpf/progs/test_verify_pkcs7_sig.c b/tools/testing/selftests/bpf/progs/test_verify_pkcs7_sig.c new file mode 100644 index 000000000000..4ceab545d99a --- /dev/null +++ b/tools/testing/selftests/bpf/progs/test_verify_pkcs7_sig.c @@ -0,0 +1,100 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Copyright (C) 2022 Huawei Technologies Duesseldorf GmbH + * + * Author: Roberto Sassu roberto.sassu@huawei.com + */ + +#include "vmlinux.h" +#include <errno.h> +#include <bpf/bpf_helpers.h> +#include <bpf/bpf_tracing.h> + +#define MAX_DATA_SIZE (1024 * 1024) +#define MAX_SIG_SIZE 1024 + +typedef __u8 u8; +typedef __u16 u16; +typedef __u32 u32; +typedef __u64 u64; + +struct bpf_dynptr { + __u64 :64; + __u64 :64; +} __attribute__((aligned(8))); + +extern struct bpf_key *bpf_lookup_user_key(__u32 serial, __u64 flags) __ksym; +extern struct bpf_key *bpf_lookup_system_key(__u64 id) __ksym; +extern void bpf_key_put(struct bpf_key *key) __ksym; +extern int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_ptr, + struct bpf_dynptr *sig_ptr, + struct bpf_key *trusted_keyring) __ksym; + +u32 monitored_pid; +u32 user_keyring_serial; +u64 system_keyring_id; + +struct data { + u8 
data[MAX_DATA_SIZE]; + u32 data_len; + u8 sig[MAX_SIG_SIZE]; + u32 sig_len; +}; + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, __u32); + __type(value, struct data); +} data_input SEC(".maps"); + +char _license[] SEC("license") = "GPL"; + +SEC("lsm.s/bpf") +int BPF_PROG(bpf, int cmd, union bpf_attr *attr, unsigned int size) +{ + struct bpf_dynptr data_ptr, sig_ptr; + struct data *data_val; + struct bpf_key *trusted_keyring; + u32 pid; + u64 value; + int ret, zero = 0; + + pid = bpf_get_current_pid_tgid() >> 32; + if (pid != monitored_pid) + return 0; + + data_val = bpf_map_lookup_elem(&data_input, &zero); + if (!data_val) + return 0; + + bpf_probe_read(&value, sizeof(value), &attr->value); + + bpf_copy_from_user(data_val, sizeof(struct data), + (void *)(unsigned long)value); + + if (data_val->data_len > sizeof(data_val->data)) + return -EINVAL; + + bpf_dynptr_from_mem(data_val->data, data_val->data_len, 0, &data_ptr); + + if (data_val->sig_len > sizeof(data_val->sig)) + return -EINVAL; + + bpf_dynptr_from_mem(data_val->sig, data_val->sig_len, 0, &sig_ptr); + + if (user_keyring_serial) + trusted_keyring = bpf_lookup_user_key(user_keyring_serial, 0); + else + trusted_keyring = bpf_lookup_system_key(system_keyring_id); + + if (!trusted_keyring) + return -ENOENT; + + ret = bpf_verify_pkcs7_signature(&data_ptr, &sig_ptr, trusted_keyring); + + bpf_key_put(trusted_keyring); + + return ret; +} diff --git a/tools/testing/selftests/bpf/verify_sig_setup.sh b/tools/testing/selftests/bpf/verify_sig_setup.sh new file mode 100755 index 000000000000..ba08922b4a27 --- /dev/null +++ b/tools/testing/selftests/bpf/verify_sig_setup.sh @@ -0,0 +1,104 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +set -e +set -u +set -o pipefail + +VERBOSE="${SELFTESTS_VERBOSE:=0}" +LOG_FILE="$(mktemp /tmp/verify_sig_setup.log.XXXXXX)" + +x509_genkey_content="\ +[ req ] +default_bits = 2048 +distinguished_name = req_distinguished_name +prompt = no +string_mask = utf8only +x509_extensions = myexts + +[ req_distinguished_name ] +CN = eBPF Signature Verification Testing Key + +[ myexts ] +basicConstraints=critical,CA:FALSE +keyUsage=digitalSignature +subjectKeyIdentifier=hash +authorityKeyIdentifier=keyid +" + +usage() +{ + echo "Usage: $0 <setup|cleanup <existing_tmp_dir>" + exit 1 +} + +setup() +{ + local tmp_dir="$1" + + echo "${x509_genkey_content}" > ${tmp_dir}/x509.genkey + + openssl req -new -nodes -utf8 -sha256 -days 36500 \ + -batch -x509 -config ${tmp_dir}/x509.genkey \ + -outform PEM -out ${tmp_dir}/signing_key.pem \ + -keyout ${tmp_dir}/signing_key.pem 2>&1 + + openssl x509 -in ${tmp_dir}/signing_key.pem -out \ + ${tmp_dir}/signing_key.der -outform der + + key_id=$(cat ${tmp_dir}/signing_key.der | keyctl padd asymmetric ebpf_testing_key @s) + + keyring_id=$(keyctl newring ebpf_testing_keyring @s) + keyctl link $key_id $keyring_id +} + +cleanup() { + local tmp_dir="$1" + + keyctl unlink $(keyctl search @s asymmetric ebpf_testing_key) @s + keyctl unlink $(keyctl search @s keyring ebpf_testing_keyring) @s + rm -rf ${tmp_dir} +} + +catch() +{ + local exit_code="$1" + local log_file="$2" + + if [[ "${exit_code}" -ne 0 ]]; then + cat "${log_file}" >&3 + fi + + rm -f "${log_file}" + exit ${exit_code} +} + +main() +{ + [[ $# -ne 2 ]] && usage + + local action="$1" + local tmp_dir="$2" + + [[ ! 
-d "${tmp_dir}" ]] && echo "Directory ${tmp_dir} doesn't exist" && exit 1 + + if [[ "${action}" == "setup" ]]; then + setup "${tmp_dir}" + elif [[ "${action}" == "cleanup" ]]; then + cleanup "${tmp_dir}" + else + echo "Unknown action: ${action}" + exit 1 + fi +} + +trap 'catch "$?" "${LOG_FILE}"' EXIT + +if [[ "${VERBOSE}" -eq 0 ]]; then + # Save the stderr to 3 so that we can output back to + # it incase of an error. + exec 3>&2 1>"${LOG_FILE}" 2>&1 +fi + +main "$@" +rm -f "${LOG_FILE}"
From: Roberto Sassu roberto.sassu@huawei.com
Add tests to ensure that only supported dynamic pointer types are accepted, that the passed argument is actually a dynamic pointer, and that the passed argument is a pointer to the stack.
The tests are currently in the deny list for s390x (JIT does not support calling kernel function).
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com --- tools/testing/selftests/bpf/DENYLIST.s390x | 1 + .../bpf/prog_tests/kfunc_dynptr_param.c | 103 ++++++++++++++++++ .../bpf/progs/test_kfunc_dynptr_param.c | 57 ++++++++++ 3 files changed, 161 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c create mode 100644 tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
diff --git a/tools/testing/selftests/bpf/DENYLIST.s390x b/tools/testing/selftests/bpf/DENYLIST.s390x index 4e305baa5277..9a6dc3671c65 100644 --- a/tools/testing/selftests/bpf/DENYLIST.s390x +++ b/tools/testing/selftests/bpf/DENYLIST.s390x @@ -71,3 +71,4 @@ cgroup_hierarchical_stats # JIT does not support calling kernel f htab_update # failed to attach: ERROR: strerror_r(-524)=22 (trampoline) lookup_key # JIT does not support calling kernel function (kfunc) verify_pkcs7_sig # JIT does not support calling kernel function (kfunc) +kfunc_dynptr_param # JIT does not support calling kernel function (kfunc) diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c b/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c new file mode 100644 index 000000000000..ea655a5c9d8b --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c @@ -0,0 +1,103 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Copyright (c) 2022 Facebook + * Copyright (C) 2022 Huawei Technologies Duesseldorf GmbH + * + * Author: Roberto Sassu roberto.sassu@huawei.com + */ + +#include <test_progs.h> +#include "test_kfunc_dynptr_param.skel.h" + +static size_t log_buf_sz = 1048576; /* 1 MB */ +static char obj_log_buf[1048576]; + +static struct { + const char *prog_name; + const char *expected_err_msg; +} kfunc_dynptr_tests[] = { + {"dynptr_type_not_supp", + "arg#0 pointer type STRUCT bpf_dynptr_kern points to unsupported dynamic pointer type"}, + {"not_valid_dynptr", + "arg#0 pointer type STRUCT bpf_dynptr_kern must be valid and initialized"}, + {"not_ptr_to_stack", "arg#0 pointer type STRUCT bpf_dynptr_kern not to stack"}, +}; + +static bool kfunc_not_supported; + +static int libbpf_print_cb(enum libbpf_print_level level, const char *fmt, + va_list args) +{ + if (strcmp(fmt, "libbpf: extern (func ksym) '%s': not found in kernel or module BTFs\n")) + return 0; + + if (strcmp(va_arg(args, char *), "bpf_verify_pkcs7_signature")) + return 0; + + kfunc_not_supported = true; + return 0; +} + +static void verify_fail(const char *prog_name, const char *expected_err_msg) +{ + struct test_kfunc_dynptr_param *skel; + LIBBPF_OPTS(bpf_object_open_opts, opts); + libbpf_print_fn_t old_print_cb; + struct bpf_program *prog; + int err; + + opts.kernel_log_buf = obj_log_buf; + opts.kernel_log_size = log_buf_sz; + opts.kernel_log_level = 1; + + skel = test_kfunc_dynptr_param__open_opts(&opts); + if (!ASSERT_OK_PTR(skel, "test_kfunc_dynptr_param__open_opts")) + goto cleanup; + + prog = bpf_object__find_program_by_name(skel->obj, prog_name); + if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name")) + goto cleanup; + + bpf_program__set_autoload(prog, true); + + bpf_map__set_max_entries(skel->maps.ringbuf, getpagesize()); + + kfunc_not_supported = false; + + old_print_cb = libbpf_set_print(libbpf_print_cb); + err = test_kfunc_dynptr_param__load(skel); + libbpf_set_print(old_print_cb); + + if (err < 0 && kfunc_not_supported) { + fprintf(stderr, + "%s:SKIP:bpf_verify_pkcs7_signature() kfunc not supported\n", + __func__); + test__skip(); + goto cleanup; + } + + if (!ASSERT_ERR(err, "unexpected load success")) + goto cleanup; + + if (!ASSERT_OK_PTR(strstr(obj_log_buf, expected_err_msg), "expected_err_msg")) { + fprintf(stderr, "Expected err_msg: %s\n", expected_err_msg); + fprintf(stderr, "Verifier output: %s\n", obj_log_buf); + } + +cleanup: + test_kfunc_dynptr_param__destroy(skel); +} + +void test_kfunc_dynptr_param(void) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(kfunc_dynptr_tests); i++) { + if 
(!test__start_subtest(kfunc_dynptr_tests[i].prog_name)) + continue; + + verify_fail(kfunc_dynptr_tests[i].prog_name, + kfunc_dynptr_tests[i].expected_err_msg); + } +} diff --git a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c new file mode 100644 index 000000000000..2f09f91a1576 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c @@ -0,0 +1,57 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Copyright (C) 2022 Huawei Technologies Duesseldorf GmbH + * + * Author: Roberto Sassu roberto.sassu@huawei.com + */ + +#include "vmlinux.h" +#include <errno.h> +#include <bpf/bpf_helpers.h> +#include <bpf/bpf_tracing.h> + +struct bpf_dynptr { + __u64 :64; + __u64 :64; +} __attribute__((aligned(8))); + +extern int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_ptr, + struct bpf_dynptr *sig_ptr, + struct bpf_key *trusted_keyring) __ksym; + +struct { + __uint(type, BPF_MAP_TYPE_RINGBUF); +} ringbuf SEC(".maps"); + +char _license[] SEC("license") = "GPL"; + +SEC("?lsm.s/bpf") +int BPF_PROG(dynptr_type_not_supp, int cmd, union bpf_attr *attr, + unsigned int size) +{ + char write_data[64] = "hello there, world!!"; + struct bpf_dynptr ptr; + + bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(write_data), 0, &ptr); + + return bpf_verify_pkcs7_signature(&ptr, &ptr, NULL); +} + +SEC("?lsm.s/bpf") +int BPF_PROG(not_valid_dynptr, int cmd, union bpf_attr *attr, unsigned int size) +{ + unsigned long val; + + return bpf_verify_pkcs7_signature((struct bpf_dynptr *)&val, + (struct bpf_dynptr *)&val, NULL); +} + +SEC("?lsm.s/bpf") +int BPF_PROG(not_ptr_to_stack, int cmd, union bpf_attr *attr, unsigned int size) +{ + unsigned long val; + + return bpf_verify_pkcs7_signature((struct bpf_dynptr *)val, + (struct bpf_dynptr *)val, NULL); +}
On Mon, 5 Sept 2022 at 16:36, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
From: Roberto Sassu roberto.sassu@huawei.com
Add tests to ensure that only supported dynamic pointer types are accepted, that the passed argument is actually a dynamic pointer, and that the passed argument is a pointer to the stack.
The tests are currently in the deny list for s390x (JIT does not support calling kernel function).
[...]
Please also include a test where you cause the dynptr to be set to NULL, e.g. by passing invalid stuff to ringbuf_reserve_dynptr, and then try to pass it to bpf_verify_pkcs7_signature().
On Tue, 2022-09-06 at 05:15 +0200, Kumar Kartikeya Dwivedi wrote:
On Mon, 5 Sept 2022 at 16:36, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
From: Roberto Sassu roberto.sassu@huawei.com
Add tests to ensure that only supported dynamic pointer types are accepted, that the passed argument is actually a dynamic pointer, and that the passed argument is a pointer to the stack.
The tests are currently in the deny list for s390x (JIT does not support calling kernel function).
[...]
Please also include a test where you cause the dynptr to be set to NULL, e.g. by passing invalid stuff to ringbuf_reserve_dynptr, and then try to pass it to bpf_verify_pkcs7_signature().
Uhm, bpf_ringbuf_reserve_dynptr() expects a valid map. How else can I achieve it?
Thanks
Roberto
On Tue, 6 Sept 2022 at 10:31, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
On Tue, 2022-09-06 at 05:15 +0200, Kumar Kartikeya Dwivedi wrote:
On Mon, 5 Sept 2022 at 16:36, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
From: Roberto Sassu roberto.sassu@huawei.com
Add tests to ensure that only supported dynamic pointer types are accepted, that the passed argument is actually a dynamic pointer, and that the passed argument is a pointer to the stack.
The tests are currently in the deny list for s390x (JIT does not support calling kernel function).
[...]
Please also include a test where you cause the dynptr to be set to NULL, e.g. by passing invalid stuff to ringbuf_reserve_dynptr, and then try to pass it to bpf_verify_pkcs7_signature().
Uhm, bpf_ringbuf_reserve_dynptr() expects a valid map. How else can I achieve it?
So? Just define a ringbuf map and pass it in? I'm missing why that is undesirable. It also needs to be a runtime test, not a verifier test; I probably replied to the wrong patch.
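For reference, the next revision (posted below) ends up exercising the NULL-data case at runtime without going through the ringbuf. A condensed sketch of its dynptr_data_null program follows; array_map, the err global, the bpf_dynptr definition and the kfunc extern declarations are assumed to be as in the full program in that patch, and the pid gate is omitted:

SEC("lsm.s/bpf")
int BPF_PROG(dynptr_data_null, int cmd, union bpf_attr *attr, unsigned int size)
{
	struct bpf_key *trusted_keyring;
	struct bpf_dynptr ptr;
	__u32 *value;
	int zero = 0;

	value = bpf_map_lookup_elem(&array_map, &zero);
	if (!value)
		return 0;

	/* Invalid flags: bpf_dynptr_from_mem() fails with -EINVAL and
	 * leaves the dynptr with NULL data.
	 */
	if (bpf_dynptr_from_mem(value, sizeof(*value), 1, &ptr) != -EINVAL)
		return 0;

	trusted_keyring = bpf_lookup_system_key(0);
	if (!trusted_keyring)
		return 0;

	/* The kfunc must handle the NULL-data dynptr; the updated selftest
	 * expects err == -EBADMSG here.
	 */
	err = bpf_verify_pkcs7_signature(&ptr, &ptr, trusted_keyring);

	bpf_key_put(trusted_keyring);

	return 0;
}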
From: Roberto Sassu roberto.sassu@huawei.com
Add tests to ensure that only supported dynamic pointer types are accepted, that the passed argument is actually a dynamic pointer, that the passed argument is a pointer to the stack, and that bpf_verify_pkcs7_signature() correctly handles dynamic pointers with data set to NULL.
The tests are currently in the deny list for s390x (JIT does not support calling kernel function).
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com --- tools/testing/selftests/bpf/DENYLIST.s390x | 1 + .../bpf/prog_tests/kfunc_dynptr_param.c | 164 ++++++++++++++++++ .../bpf/progs/test_kfunc_dynptr_param.c | 99 +++++++++++ 3 files changed, 264 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c create mode 100644 tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
diff --git a/tools/testing/selftests/bpf/DENYLIST.s390x b/tools/testing/selftests/bpf/DENYLIST.s390x index 4e305baa5277..9a6dc3671c65 100644 --- a/tools/testing/selftests/bpf/DENYLIST.s390x +++ b/tools/testing/selftests/bpf/DENYLIST.s390x @@ -71,3 +71,4 @@ cgroup_hierarchical_stats # JIT does not support calling kernel f htab_update # failed to attach: ERROR: strerror_r(-524)=22 (trampoline) lookup_key # JIT does not support calling kernel function (kfunc) verify_pkcs7_sig # JIT does not support calling kernel function (kfunc) +kfunc_dynptr_param # JIT does not support calling kernel function (kfunc) diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c b/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c new file mode 100644 index 000000000000..c210657d4d0a --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c @@ -0,0 +1,164 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Copyright (c) 2022 Facebook + * Copyright (C) 2022 Huawei Technologies Duesseldorf GmbH + * + * Author: Roberto Sassu roberto.sassu@huawei.com + */ + +#include <test_progs.h> +#include "test_kfunc_dynptr_param.skel.h" + +static size_t log_buf_sz = 1048576; /* 1 MB */ +static char obj_log_buf[1048576]; + +static struct { + const char *prog_name; + const char *expected_verifier_err_msg; + int expected_runtime_err; +} kfunc_dynptr_tests[] = { + {"dynptr_type_not_supp", + "arg#0 pointer type STRUCT bpf_dynptr_kern points to unsupported dynamic pointer type", 0}, + {"not_valid_dynptr", + "arg#0 pointer type STRUCT bpf_dynptr_kern must be valid and initialized", 0}, + {"not_ptr_to_stack", "arg#0 pointer type STRUCT bpf_dynptr_kern not to stack", 0}, + {"dynptr_data_null", NULL, -EBADMSG}, +}; + +static bool kfunc_not_supported; + +static int libbpf_print_cb(enum libbpf_print_level level, const char *fmt, + va_list args) +{ + if (strcmp(fmt, "libbpf: extern (func ksym) '%s': not found in kernel or module BTFs\n")) + return 0; + + if (strcmp(va_arg(args, char *), "bpf_verify_pkcs7_signature")) + return 0; + + kfunc_not_supported = true; + return 0; +} + +static void verify_fail(const char *prog_name, const char *expected_err_msg) +{ + struct test_kfunc_dynptr_param *skel; + LIBBPF_OPTS(bpf_object_open_opts, opts); + libbpf_print_fn_t old_print_cb; + struct bpf_program *prog; + int err; + + opts.kernel_log_buf = obj_log_buf; + opts.kernel_log_size = log_buf_sz; + opts.kernel_log_level = 1; + + skel = test_kfunc_dynptr_param__open_opts(&opts); + if (!ASSERT_OK_PTR(skel, "test_kfunc_dynptr_param__open_opts")) + goto cleanup; + + prog = bpf_object__find_program_by_name(skel->obj, prog_name); + if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name")) + goto cleanup; + + bpf_program__set_autoload(prog, true); + + bpf_map__set_max_entries(skel->maps.ringbuf, getpagesize()); + + kfunc_not_supported = false; + + old_print_cb = libbpf_set_print(libbpf_print_cb); + err = test_kfunc_dynptr_param__load(skel); + libbpf_set_print(old_print_cb); + + if (err < 0 && kfunc_not_supported) { + fprintf(stderr, + "%s:SKIP:bpf_verify_pkcs7_signature() kfunc not supported\n", + __func__); + test__skip(); + goto cleanup; + } + + if (!ASSERT_ERR(err, "unexpected load success")) + goto cleanup; + + if (!ASSERT_OK_PTR(strstr(obj_log_buf, expected_err_msg), "expected_err_msg")) { + fprintf(stderr, "Expected err_msg: %s\n", expected_err_msg); + fprintf(stderr, "Verifier output: %s\n", obj_log_buf); + } + +cleanup: + test_kfunc_dynptr_param__destroy(skel); +} + +static void verify_success(const char 
*prog_name, int expected_runtime_err) +{ + struct test_kfunc_dynptr_param *skel; + libbpf_print_fn_t old_print_cb; + struct bpf_program *prog; + struct bpf_link *link; + __u32 next_id; + int err; + + skel = test_kfunc_dynptr_param__open(); + if (!ASSERT_OK_PTR(skel, "test_kfunc_dynptr_param__open")) + return; + + skel->bss->pid = getpid(); + + bpf_map__set_max_entries(skel->maps.ringbuf, getpagesize()); + + kfunc_not_supported = false; + + old_print_cb = libbpf_set_print(libbpf_print_cb); + err = test_kfunc_dynptr_param__load(skel); + libbpf_set_print(old_print_cb); + + if (err < 0 && kfunc_not_supported) { + fprintf(stderr, + "%s:SKIP:bpf_verify_pkcs7_signature() kfunc not supported\n", + __func__); + test__skip(); + goto cleanup; + } + + if (!ASSERT_OK(err, "test_kfunc_dynptr_param__load")) + goto cleanup; + + prog = bpf_object__find_program_by_name(skel->obj, prog_name); + if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name")) + goto cleanup; + + link = bpf_program__attach(prog); + if (!ASSERT_OK_PTR(link, "bpf_program__attach")) + goto cleanup; + + err = bpf_prog_get_next_id(0, &next_id); + + bpf_link__destroy(link); + + if (!ASSERT_OK(err, "bpf_prog_get_next_id")) + goto cleanup; + + ASSERT_EQ(skel->bss->err, expected_runtime_err, "err"); + +cleanup: + test_kfunc_dynptr_param__destroy(skel); +} + +void test_kfunc_dynptr_param(void) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(kfunc_dynptr_tests); i++) { + if (!test__start_subtest(kfunc_dynptr_tests[i].prog_name)) + continue; + + if (kfunc_dynptr_tests[i].expected_verifier_err_msg) + verify_fail(kfunc_dynptr_tests[i].prog_name, + kfunc_dynptr_tests[i].expected_verifier_err_msg); + else + verify_success(kfunc_dynptr_tests[i].prog_name, + kfunc_dynptr_tests[i].expected_runtime_err); + } +} diff --git a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c new file mode 100644 index 000000000000..d7d39ea009a7 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c @@ -0,0 +1,99 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Copyright (C) 2022 Huawei Technologies Duesseldorf GmbH + * + * Author: Roberto Sassu roberto.sassu@huawei.com + */ + +#include "vmlinux.h" +#include <errno.h> +#include <bpf/bpf_helpers.h> +#include <bpf/bpf_tracing.h> + +struct bpf_dynptr { + __u64 :64; + __u64 :64; +} __attribute__((aligned(8))); + +extern struct bpf_key *bpf_lookup_system_key(__u64 id) __ksym; +extern void bpf_key_put(struct bpf_key *key) __ksym; +extern int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_ptr, + struct bpf_dynptr *sig_ptr, + struct bpf_key *trusted_keyring) __ksym; + +struct { + __uint(type, BPF_MAP_TYPE_RINGBUF); +} ringbuf SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, __u32); + __type(value, __u32); +} array_map SEC(".maps"); + +int err, pid; + +char _license[] SEC("license") = "GPL"; + +SEC("?lsm.s/bpf") +int BPF_PROG(dynptr_type_not_supp, int cmd, union bpf_attr *attr, + unsigned int size) +{ + char write_data[64] = "hello there, world!!"; + struct bpf_dynptr ptr; + + bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(write_data), 0, &ptr); + + return bpf_verify_pkcs7_signature(&ptr, &ptr, NULL); +} + +SEC("?lsm.s/bpf") +int BPF_PROG(not_valid_dynptr, int cmd, union bpf_attr *attr, unsigned int size) +{ + unsigned long val; + + return bpf_verify_pkcs7_signature((struct bpf_dynptr *)&val, + (struct bpf_dynptr *)&val, NULL); +} + +SEC("?lsm.s/bpf") +int BPF_PROG(not_ptr_to_stack, 
int cmd, union bpf_attr *attr, unsigned int size) +{ + unsigned long val; + + return bpf_verify_pkcs7_signature((struct bpf_dynptr *)val, + (struct bpf_dynptr *)val, NULL); +} + +SEC("lsm.s/bpf") +int BPF_PROG(dynptr_data_null, int cmd, union bpf_attr *attr, unsigned int size) +{ + struct bpf_key *trusted_keyring; + struct bpf_dynptr ptr; + __u32 *value; + int ret, zero = 0; + + if (bpf_get_current_pid_tgid() >> 32 != pid) + return 0; + + value = bpf_map_lookup_elem(&array_map, &zero); + if (!value) + return 0; + + /* Pass invalid flags. */ + ret = bpf_dynptr_from_mem(value, sizeof(*value), 1, &ptr); + if (ret != -EINVAL) + return 0; + + trusted_keyring = bpf_lookup_system_key(0); + if (!trusted_keyring) + return 0; + + err = bpf_verify_pkcs7_signature(&ptr, &ptr, trusted_keyring); + + bpf_key_put(trusted_keyring); + + return 0; +}
On Wed, 7 Sept 2022 at 17:00, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
From: Roberto Sassu roberto.sassu@huawei.com
Add tests to ensure that only supported dynamic pointer types are accepted, that the passed argument is actually a dynamic pointer, that the passed argument is a pointer to the stack, and that bpf_verify_pkcs7_signature() correctly handles dynamic pointers with data set to NULL.
The tests are currently in the deny list for s390x (JIT does not support calling kernel function).
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
Just a minor nit: you could probably use an invalid flags value other than 1, since most likely the next valid flag value will be 1, which would require changing this again. LGTM otherwise.
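A hypothetical way to address the nit (not necessarily what was finally applied) would be to pick a flags value whose bits are unlikely to ever be assigned a meaning, e.g.:

	/* A value with only high bits set should remain invalid even if
	 * bit 0 becomes a valid flag later.
	 */
	ret = bpf_dynptr_from_mem(value, sizeof(*value), 1ULL << 63, &ptr);
	if (ret != -EINVAL)
		return 0;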
Acked-by: Kumar Kartikeya Dwivedi memxor@gmail.com
On Mon, 5 Sept 2022 at 16:34, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
From: Roberto Sassu roberto.sassu@huawei.com
One of the desirable features in security is the ability to restrict import of data to a given system based on data authenticity. If data import can be restricted, it would be possible to enforce a system-wide policy based on the signing keys the system owner trusts.
This feature is widely used in the kernel. For example, if the restriction is enabled, kernel modules can be plugged in only if they are signed with a key whose public part is in the primary or secondary keyring.
For eBPF, it can be useful as well. For example, it might be useful to authenticate data an eBPF program makes security decisions on.
[...]
CI is crashing with NULL deref for test_progs-no_alu32 with llvm-16, but I don't think the problem is in this series. This is most likely unrelated to BPF, as the crash happens inside kernel/time/tick-sched.c:tick_nohz_restart_sched_tick.
This was the same case in https://lore.kernel.org/bpf/CAP01T74steDfP6O8QOshoto3e3RnHhKtAeTbnrPBZS3YJXj....
So, https://github.com/kernel-patches/bpf/runs/8194263557?check_suite_focus=true and https://github.com/kernel-patches/bpf/runs/7982907380?check_suite_focus=true
look similar to me, and may not be related to BPF. They only trigger during runs compiled using LLVM 16, so maybe some compiler transformation is surfacing the problem?
On Mon, 2022-09-05 at 21:26 +0200, Kumar Kartikeya Dwivedi wrote:
On Mon, 5 Sept 2022 at 16:34, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
[...]
So, https://github.com/kernel-patches/bpf/runs/8194263557?check_suite_focus=true and https://github.com/kernel-patches/bpf/runs/7982907380?check_suite_focus=true
look similar to me, and may not be related to BPF. They only trigger during runs compiled using LLVM 16, so maybe some compiler transformation is surfacing the problem?
Yes, I saw that too. Not sure what the cause could be.
Thanks
Roberto
On Tue, 2022-09-06 at 09:35 +0200, Roberto Sassu wrote:
[...]
Yes, I saw that too. Not sure what the cause could be.
Another occurrence, this time with gcc:
https://github.com/robertosassu/vmtest/runs/8230071814?check_suite_focus=tru...
Roberto
On Wed, 7 Sept 2022 at 16:49, Roberto Sassu roberto.sassu@huaweicloud.com wrote:
[...]
Another occurrence, this time with gcc:
https://github.com/robertosassu/vmtest/runs/8230071814?check_suite_focus=tru...
... and it seems like this run does not even have your patches, right?
Roberto
On Wed, 2022-09-07 at 16:57 +0200, Kumar Kartikeya Dwivedi wrote:
[...]
... and it seems like this run does not even have your patches, right?
Uhm, the kernel patches are there. The tests, except the verifier ones, weren't successfully applied, probably due to the deny list.
One thing the failures seem to have in common is when the panic happens: test_progs has just reached verif_twfw. I will try to execute this test and the earlier ones to reproduce the panic locally.
Roberto