One of the desirable security features is the ability to restrict the import of data into a given system based on data authenticity. If data import can be restricted, it becomes possible to enforce a system-wide policy based on the signing keys the system owner trusts.
This feature is widely used in the kernel. For example, if the restriction is enabled, kernel modules can be loaded only if they are signed with a key whose public part is in the primary or secondary keyring.
For eBPF, it can be useful as well. For example, it might be useful to authenticate data an eBPF program makes security decisions on.
The initial idea for this feature was to provide a helper that eBPF programs could call to authenticate data whenever necessary. However, such a helper could only be used in sleepable programs (due to the crypto operations involved). Furthermore, data authentication would have been the responsibility of the eBPF programs themselves.
The proposed implementation instead shifts the responsibility for data authentication to the eBPF subsystem, upon request by the user. Users who want this behavior just set a new map flag called BPF_F_VERIFY_ELEM. The eBPF subsystem then ensures that only authenticated data can be added to the map. The check is performed during the execution of the bpf() system call when the command is BPF_MAP_UPDATE_ELEM or BPF_MAP_UPDATE_BATCH. Since mmapped memory regions are not verified, usage of the BPF_F_MMAPABLE map flag is forbidden when BPF_F_VERIFY_ELEM is set.
An advantage of shifting the responsibility for data authentication to the eBPF subsystem is that the feature can be offered to any kind of eBPF program, not only sleepable ones.
When the new map flag BPF_F_VERIFY_ELEM is set, users have to provide a map value in the following format:
+-------------------------------+---------------+-----+-----------------+
| verified data+sig size (be32) | verified data | sig | unverified data |
+-------------------------------+---------------+-----+-----------------+
This is mostly the same format adopted for kernel modules, with the exception of the first field, which is needed because the map value size is fixed and the size of the signed portion cannot otherwise be determined. More details can be found in patch 1.
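For illustration, a minimal user-space sketch of the expected flow (file names, the value size and error handling are assumptions, not part of the patches): sign the data with scripts/sign-file, prepend the size of the signed blob in big endian, and update the map as usual:

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>		/* htonl() for the big-endian size field */
#include <bpf/bpf.h>

#define VALUE_SIZE 1024		/* must match the map's value_size */

/* "signed.bin" is assumed to be the data file after running
 * scripts/sign-file on it, i.e. with the signature appended.
 */
static int update_signed_elem(int map_fd, int key, const char *signed_path)
{
	unsigned char value[VALUE_SIZE] = { 0 };
	FILE *f = fopen(signed_path, "rb");
	size_t signed_len;

	if (!f)
		return -1;

	/* verified data + appended signature, as produced by sign-file */
	signed_len = fread(value + sizeof(uint32_t), 1,
			   VALUE_SIZE - sizeof(uint32_t), f);
	fclose(f);

	/* first field: size of "verified data + sig", big endian */
	*(uint32_t *)value = htonl(signed_len);

	/* the kernel rejects the update if the signature does not verify */
	return bpf_map_update_elem(map_fd, &key, value, BPF_ANY);
}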
Since the kernel already parses the format above, it was convenient to also introduce a new helper, called bpf_map_verified_data_size(), to return the size of the verified data to the caller. This is done in patch 2.
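As a rough sketch of the eBPF side (the program type and map layout are only illustrative; the flag and helper names are the ones introduced by patches 1 and 2), a program could query how many bytes of a looked-up value are covered by the signature before acting on them:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

struct data {
	__u8 payload[1024];
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(map_flags, BPF_F_VERIFY_ELEM);	/* new flag from patch 1 */
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, struct data);
} signed_map SEC(".maps");

SEC("lsm/file_open")
int BPF_PROG(check_file_open, struct file *file)
{
	__u32 key = 0;
	struct data *val;
	long size;

	val = bpf_map_lookup_elem(&signed_map, &key);
	if (!val)
		return 0;

	/* new helper from patch 2: returns the length of the verified data,
	 * which starts right after the 4-byte big-endian size field
	 */
	size = bpf_map_verified_data_size(val, sizeof(*val));
	if (size < 0)
		return 0;

	/* ... base the policy decision only on the first "size" verified bytes ... */
	return 0;
}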
Finally, the added functionality is tested in patch 3.
Roberto Sassu (3):
  bpf: Add BPF_F_VERIFY_ELEM to require signature verification on map values
  bpf: Introduce bpf_map_verified_data_size() helper
  bpf: Add tests for signed map values
 include/linux/bpf.h                           |   7 +
 include/uapi/linux/bpf.h                      |  11 +
 kernel/bpf/arraymap.c                         |   2 +-
 kernel/bpf/helpers.c                          |  15 ++
 kernel/bpf/syscall.c                          |  70 ++++++
 tools/include/uapi/linux/bpf.h                |  11 +
 .../bpf/prog_tests/test_map_value_sig.c       | 212 ++++++++++++++++++
 .../selftests/bpf/progs/map_value_sig.c       |  50 +++++
 8 files changed, 377 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_map_value_sig.c
 create mode 100644 tools/testing/selftests/bpf/progs/map_value_sig.c
In some cases, it is desirable to ensure that a map contains data from authenticated sources, for example if map data are used for making security decisions.
Such a restriction is achieved by verifying the signature of map values at the time those values are added to the map with the bpf() system call (more specifically, when the command passed to bpf() is BPF_MAP_UPDATE_ELEM or BPF_MAP_UPDATE_BATCH). Mmappable maps are not allowed in this case.
Signature verification is initially done with keys in the primary and secondary kernel keyrings, similarly to kernel modules. This allows system owners to enforce a system-wide policy based on the keys they trust. Support for additional keyrings could be added later, based on use case needs.
Signature verification is done only for those maps for which the new map flag BPF_F_VERIFY_ELEM is set. When the flag is set, the kernel expects map values to be in the following format:
+-------------------------------+---------------+-----+-----------------+
| verified data+sig size (be32) | verified data | sig | unverified data |
+-------------------------------+---------------+-----+-----------------+
where sig is a module-style appended signature as generated by the sign-file tool. The verified data+sig size (in big endian) must be explicitly provided (it is not generated by sign-file), as it cannot be determined in other ways (currently, the map value size is fixed). It can be obtained from the size of the file created by sign-file.
Introduce the new map flag BPF_F_VERIFY_ELEM, and call the new function bpf_map_verify_value_sig() from bpf_map_update_value() if the flag is set. bpf_map_verify_value_sig(), declared global so that a new helper can use it, is basically equivalent to mod_verify_sig(). It additionally performs the marker check, which for kernel modules is done in module_sig_check(), and parses the verified data+sig size.
Currently, enable the usage of the flag only for the array map. Support for more map types can be added later.
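As an illustration (a sketch using libbpf's bpf_map_create(); the map name and sizes are arbitrary, and BPF_F_VERIFY_ELEM comes from the patched UAPI header), creating such an array map could look like this, while adding BPF_F_MMAPABLE to the flags makes the call fail with -EINVAL:

#include <bpf/bpf.h>

static int create_verified_array(void)
{
	DECLARE_LIBBPF_OPTS(bpf_map_create_opts, opts,
			    .map_flags = BPF_F_VERIFY_ELEM);

	/* 4-byte key, 1024-byte value, 5 elements; combining the flag with
	 * BPF_F_MMAPABLE is rejected at map creation time
	 */
	return bpf_map_create(BPF_MAP_TYPE_ARRAY, "signed_vals", 4, 1024, 5,
			      &opts);
}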
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
---
 include/linux/bpf.h            |  7 ++++
 include/uapi/linux/bpf.h       |  3 ++
 kernel/bpf/arraymap.c          |  2 +-
 kernel/bpf/syscall.c           | 70 ++++++++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h |  3 ++
 5 files changed, 84 insertions(+), 1 deletion(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index a7080c86fa76..8f5c042e70a7 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1825,6 +1825,8 @@ static inline bool unprivileged_ebpf_enabled(void)
 	return !sysctl_unprivileged_bpf_disabled;
 }
 
+int bpf_map_verify_value_sig(const void *mod, size_t modlen, bool verify);
+
 #else /* !CONFIG_BPF_SYSCALL */
 static inline struct bpf_prog *bpf_prog_get(u32 ufd)
 {
@@ -2034,6 +2036,11 @@ static inline bool unprivileged_ebpf_enabled(void)
 	return false;
 }
 
+static inline int bpf_map_verify_value_sig(const void *mod, size_t modlen,
+					   bool verify)
+{
+	return -EOPNOTSUPP;
+}
 #endif /* CONFIG_BPF_SYSCALL */
 
 void __bpf_free_used_btfs(struct bpf_prog_aux *aux,
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index f4009dbdf62d..a8e7803d2593 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1226,6 +1226,9 @@ enum {
 
 /* Create a map that is suitable to be an inner map with dynamic max entries */
 	BPF_F_INNER_MAP		= (1U << 12),
+
+/* Verify map value (fmt: ver data+sig size(be32), ver data, sig, unver data) */
+	BPF_F_VERIFY_ELEM	= (1U << 13)
 };
 
 /* Flags for BPF_PROG_QUERY. */
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index fe40d3b9458f..b430fdd0e892 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -17,7 +17,7 @@
 
 #define ARRAY_CREATE_FLAG_MASK \
 	(BPF_F_NUMA_NODE | BPF_F_MMAPABLE | BPF_F_ACCESS_MASK | \
-	 BPF_F_PRESERVE_ELEMS | BPF_F_INNER_MAP)
+	 BPF_F_PRESERVE_ELEMS | BPF_F_INNER_MAP | BPF_F_VERIFY_ELEM)
 
 static void bpf_array_free_percpu(struct bpf_array *array)
 {
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 2b69306d3c6e..ca9e4a284120 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -35,6 +35,8 @@
 #include <linux/rcupdate_trace.h>
 #include <linux/memcontrol.h>
 #include <linux/trace_events.h>
+#include <linux/verification.h>
+#include <linux/module_signature.h>
 
 #define IS_FD_ARRAY(map) ((map)->map_type == BPF_MAP_TYPE_PERF_EVENT_ARRAY || \
 			  (map)->map_type == BPF_MAP_TYPE_CGROUP_ARRAY || \
@@ -180,6 +182,13 @@ static int bpf_map_update_value(struct bpf_map *map, struct fd f, void *key,
 {
 	int err;
 
+	if (map->map_flags & BPF_F_VERIFY_ELEM) {
+		err = bpf_map_verify_value_sig(value, bpf_map_value_size(map),
+					       true);
+		if (err < 0)
+			return err;
+	}
+
 	/* Need to create a kthread, thus must support schedule */
 	if (bpf_map_is_dev_bound(map)) {
 		return bpf_map_offload_update_elem(map, key, value, flags);
@@ -1057,6 +1066,11 @@ static int map_create(union bpf_attr *attr)
 	if (err)
 		return -EINVAL;
 
+	/* Allow signed data to go through update/push methods only. */
+	if ((attr->map_flags & BPF_F_VERIFY_ELEM) &&
+	    (attr->map_flags & BPF_F_MMAPABLE))
+		return -EINVAL;
+
 	if (attr->btf_vmlinux_value_type_id) {
 		if (attr->map_type != BPF_MAP_TYPE_STRUCT_OPS ||
 		    attr->btf_key_type_id || attr->btf_value_type_id)
@@ -1353,6 +1367,62 @@ static int map_lookup_elem(union bpf_attr *attr)
 	return err;
 }
 
+int bpf_map_verify_value_sig(const void *mod, size_t modlen, bool verify)
+{
+	const size_t marker_len = strlen(MODULE_SIG_STRING);
+	struct module_signature ms;
+	size_t sig_len;
+	u32 _modlen;
+	int ret;
+
+	/*
+	 * Format of mod:
+	 *
+	 * verified data+sig size (be32), verified data, sig, unverified data
+	 */
+	if (modlen <= sizeof(u32))
+		return -ENOENT;
+
+	_modlen = be32_to_cpu(*(u32 *)(mod));
+
+	if (_modlen > modlen - sizeof(u32))
+		return -EINVAL;
+
+	modlen = _modlen;
+	mod += sizeof(u32);
+
+	if (modlen <= marker_len)
+		return -ENOENT;
+
+	if (memcmp(mod + modlen - marker_len, MODULE_SIG_STRING, marker_len))
+		return -ENOENT;
+
+	modlen -= marker_len;
+
+	if (modlen <= sizeof(ms))
+		return -EBADMSG;
+
+	memcpy(&ms, mod + (modlen - sizeof(ms)), sizeof(ms));
+
+	ret = mod_check_sig(&ms, modlen, "bpf_map_value");
+	if (ret)
+		return ret;
+
+	sig_len = be32_to_cpu(ms.sig_len);
+	modlen -= sig_len + sizeof(ms);
+
+	if (verify) {
+		ret = verify_pkcs7_signature(mod, modlen, mod + modlen, sig_len,
+					     VERIFY_USE_SECONDARY_KEYRING,
+					     VERIFYING_UNSPECIFIED_SIGNATURE,
+					     NULL, NULL);
+		if (ret < 0)
+			return ret;
+	}
+
+	return modlen;
+}
+EXPORT_SYMBOL_GPL(bpf_map_verify_value_sig);
 
 #define BPF_MAP_UPDATE_ELEM_LAST_FIELD flags
 
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index f4009dbdf62d..a8e7803d2593 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1226,6 +1226,9 @@ enum {
 
 /* Create a map that is suitable to be an inner map with dynamic max entries */
 	BPF_F_INNER_MAP		= (1U << 12),
+
+/* Verify map value (fmt: ver data+sig size(be32), ver data, sig, unver data) */
+	BPF_F_VERIFY_ELEM	= (1U << 13)
 };
 
 /* Flags for BPF_PROG_QUERY. */
Hi Roberto,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on bpf-next/master]
[also build test ERROR on bpf/master horms-ipvs/master net/master net-next/master v5.18 next-20220525]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Roberto-Sassu/bpf-Add-support...
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
config: x86_64-kexec (https://download.01.org/0day-ci/archive/20220526/202205260057.t4Lmg5Gb-lkp@i...)
compiler: gcc-11 (Debian 11.3.0-1) 11.3.0
reproduce (this is a W=1 build):
        # https://github.com/intel-lab-lkp/linux/commit/196e68e5ddfa50f40efaf20c8df37f...
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Roberto-Sassu/bpf-Add-support-for-maps-with-authenticated-values/20220525-212552
        git checkout 196e68e5ddfa50f40efaf20c8df37f3420e38b72
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>
All errors (new ones prefixed by >>):
ld: kernel/bpf/syscall.o: in function `bpf_map_verify_value_sig':
syscall.c:(.text+0x4ff): undefined reference to `mod_check_sig'
Hi Roberto,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on bpf-next/master]
[also build test ERROR on bpf/master horms-ipvs/master net/master net-next/master v5.18 next-20220525]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Roberto-Sassu/bpf-Add-support...
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
config: m68k-defconfig (https://download.01.org/0day-ci/archive/20220526/202205260201.H6HGWRhl-lkp@i...)
compiler: m68k-linux-gcc (GCC) 11.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/196e68e5ddfa50f40efaf20c8df37f...
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Roberto-Sassu/bpf-Add-support-for-maps-with-authenticated-values/20220525-212552
        git checkout 196e68e5ddfa50f40efaf20c8df37f3420e38b72
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.3.0 make.cross W=1 O=build_dir ARCH=m68k SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>
All errors (new ones prefixed by >>):
kernel/bpf/syscall.c: In function 'bpf_map_verify_value_sig':
kernel/bpf/syscall.c:1415:23: error: implicit declaration of function 'verify_pkcs7_signature' [-Werror=implicit-function-declaration]
    1415 |                 ret = verify_pkcs7_signature(mod, modlen, mod + modlen, sig_len,
         |                       ^~~~~~~~~~~~~~~~~~~~~~
   kernel/bpf/syscall.c: At top level:
   kernel/bpf/syscall.c:5271:13: warning: no previous prototype for 'unpriv_ebpf_notify' [-Wmissing-prototypes]
    5271 | void __weak unpriv_ebpf_notify(int new_state)
         |             ^~~~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors
vim +/verify_pkcs7_signature +1415 kernel/bpf/syscall.c
1369 1370 int bpf_map_verify_value_sig(const void *mod, size_t modlen, bool verify) 1371 { 1372 const size_t marker_len = strlen(MODULE_SIG_STRING); 1373 struct module_signature ms; 1374 size_t sig_len; 1375 u32 _modlen; 1376 int ret; 1377 1378 /* 1379 * Format of mod: 1380 * 1381 * verified data+sig size (be32), verified data, sig, unverified data 1382 */ 1383 if (modlen <= sizeof(u32)) 1384 return -ENOENT; 1385 1386 _modlen = be32_to_cpu(*(u32 *)(mod)); 1387 1388 if (_modlen > modlen - sizeof(u32)) 1389 return -EINVAL; 1390 1391 modlen = _modlen; 1392 mod += sizeof(u32); 1393 1394 if (modlen <= marker_len) 1395 return -ENOENT; 1396 1397 if (memcmp(mod + modlen - marker_len, MODULE_SIG_STRING, marker_len)) 1398 return -ENOENT; 1399 1400 modlen -= marker_len; 1401 1402 if (modlen <= sizeof(ms)) 1403 return -EBADMSG; 1404 1405 memcpy(&ms, mod + (modlen - sizeof(ms)), sizeof(ms)); 1406 1407 ret = mod_check_sig(&ms, modlen, "bpf_map_value"); 1408 if (ret) 1409 return ret; 1410 1411 sig_len = be32_to_cpu(ms.sig_len); 1412 modlen -= sig_len + sizeof(ms); 1413 1414 if (verify) {
1415 ret = verify_pkcs7_signature(mod, modlen, mod + modlen, sig_len,
1416 VERIFY_USE_SECONDARY_KEYRING, 1417 VERIFYING_UNSPECIFIED_SIGNATURE, 1418 NULL, NULL); 1419 if (ret < 0) 1420 return ret; 1421 } 1422 1423 return modlen; 1424 } 1425 EXPORT_SYMBOL_GPL(bpf_map_verify_value_sig); 1426
Hi Roberto,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on bpf-next/master]
[also build test WARNING on bpf/master horms-ipvs/master net/master net-next/master v5.18 next-20220525]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Roberto-Sassu/bpf-Add-support...
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
config: x86_64-rhel-8.3-kselftests (https://download.01.org/0day-ci/archive/20220526/202205260606.VXzztn2R-lkp@i...)
compiler: gcc-11 (Debian 11.3.0-1) 11.3.0
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.4-14-g5a0004b5-dirty
        # https://github.com/intel-lab-lkp/linux/commit/196e68e5ddfa50f40efaf20c8df37f...
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Roberto-Sassu/bpf-Add-support-for-maps-with-authenticated-values/20220525-212552
        git checkout 196e68e5ddfa50f40efaf20c8df37f3420e38b72
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=x86_64 SHELL=/bin/bash kernel/bpf/

If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>

sparse warnings: (new ones prefixed by >>)
   kernel/bpf/syscall.c:590:25: sparse: sparse: Using plain integer as NULL pointer
kernel/bpf/syscall.c:1386:19: sparse: sparse: cast to restricted __be32
kernel/bpf/syscall.c: note: in included file (through include/linux/bpf.h): include/linux/bpfptr.h:52:47: sparse: sparse: cast to non-scalar include/linux/bpfptr.h:52:47: sparse: sparse: cast from non-scalar include/linux/bpfptr.h:52:47: sparse: sparse: cast to non-scalar include/linux/bpfptr.h:52:47: sparse: sparse: cast from non-scalar include/linux/bpfptr.h:81:43: sparse: sparse: cast to non-scalar include/linux/bpfptr.h:81:43: sparse: sparse: cast from non-scalar include/linux/bpfptr.h:52:47: sparse: sparse: cast to non-scalar include/linux/bpfptr.h:52:47: sparse: sparse: cast from non-scalar include/linux/bpfptr.h:52:47: sparse: sparse: cast to non-scalar include/linux/bpfptr.h:52:47: sparse: sparse: cast from non-scalar
vim +1386 kernel/bpf/syscall.c
1369 1370 int bpf_map_verify_value_sig(const void *mod, size_t modlen, bool verify) 1371 { 1372 const size_t marker_len = strlen(MODULE_SIG_STRING); 1373 struct module_signature ms; 1374 size_t sig_len; 1375 u32 _modlen; 1376 int ret; 1377 1378 /* 1379 * Format of mod: 1380 * 1381 * verified data+sig size (be32), verified data, sig, unverified data 1382 */ 1383 if (modlen <= sizeof(u32)) 1384 return -ENOENT; 1385
1386 _modlen = be32_to_cpu(*(u32 *)(mod));
1387 1388 if (_modlen > modlen - sizeof(u32)) 1389 return -EINVAL; 1390 1391 modlen = _modlen; 1392 mod += sizeof(u32); 1393 1394 if (modlen <= marker_len) 1395 return -ENOENT; 1396 1397 if (memcmp(mod + modlen - marker_len, MODULE_SIG_STRING, marker_len)) 1398 return -ENOENT; 1399 1400 modlen -= marker_len; 1401 1402 if (modlen <= sizeof(ms)) 1403 return -EBADMSG; 1404 1405 memcpy(&ms, mod + (modlen - sizeof(ms)), sizeof(ms)); 1406 1407 ret = mod_check_sig(&ms, modlen, "bpf_map_value"); 1408 if (ret) 1409 return ret; 1410 1411 sig_len = be32_to_cpu(ms.sig_len); 1412 modlen -= sig_len + sizeof(ms); 1413 1414 if (verify) { 1415 ret = verify_pkcs7_signature(mod, modlen, mod + modlen, sig_len, 1416 VERIFY_USE_SECONDARY_KEYRING, 1417 VERIFYING_UNSPECIFIED_SIGNATURE, 1418 NULL, NULL); 1419 if (ret < 0) 1420 return ret; 1421 } 1422 1423 return modlen; 1424 } 1425 EXPORT_SYMBOL_GPL(bpf_map_verify_value_sig); 1426
On Wed, May 25, 2022 at 3:21 PM Roberto Sassu roberto.sassu@huawei.com wrote:
In some cases, it is desirable to ensure that a map contains data from authenticated sources, for example if map data are used for making security decisions.
I am guessing this comes from the discussion we had about digilim. I remember we discussed a BPF helper that could verify signatures. Why would that approach not work?
[...]
From: KP Singh [mailto:kpsingh@kernel.org]
Sent: Friday, June 3, 2022 2:08 PM

On Wed, May 25, 2022 at 3:21 PM Roberto Sassu <roberto.sassu@huawei.com> wrote:
In some cases, it is desirable to ensure that a map contains data from authenticated sources, for example if map data are used for making security decisions.
I am guessing this comes from the discussion we had about digilim. I remember we discussed a BPF helper that could verify signatures. Why would that approach not work?
The main reason is that signature verification can be done also for non-sleepable hooks. For example, one is fexit/array_map_update_elem.
Currently the helper in patch 2 just returns the size of verified data. With an additional parameter, it could also be used as a helper for signature verification by any eBPF programs.
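For instance, a hypothetical extended prototype (not implemented in this series) might look like:

/* Hypothetical variant discussed here: a flags argument selecting whether
 * the helper only parses the layout or also verifies the signature.
 */
long bpf_map_verified_data_size(const void *value, u32 value_size, u64 flags);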
To be honest, I like the idea of a map flag more, as it makes it clearer that signature verification is being done. Otherwise, we would need to infer it from the eBPF program code.
Thanks
Roberto
HUAWEI TECHNOLOGIES Duesseldorf GmbH, HRB 56063 Managing Director: Li Peng, Yang Xi, Li He
[...]
On Fri, Jun 3, 2022 at 3:11 PM Roberto Sassu roberto.sassu@huawei.com wrote:
From: KP Singh [mailto:kpsingh@kernel.org] Sent: Friday, June 3, 2022 2:08 PM On Wed, May 25, 2022 at 3:21 PM Roberto Sassu roberto.sassu@huawei.com wrote:
In some cases, it is desirable to ensure that a map contains data from authenticated sources, for example if map data are used for making security decisions.
I am guessing this comes from the discussion we had about digilim. I remember we discussed a BPF helper that could verify signatures. Why would that approach not work?
The main reason is that signature verification can be done also for non-sleepable hooks. For example, one is fexit/array_map_update_elem.
For your use-case, why is it not possible to hook the LSM hook "bpf", i.e. security_bpf, and then check if there is a MAP_UPDATE_ELEM operation?
Currently the helper in patch 2 just returns the size of verified data. With an additional parameter, it could also be used as a helper for signature verification by any eBPF programs.
Your bpf_map_verify_value_sig hard codes the type of signature (verify_pkcs7_signature) in its implementation. This is not extensible.
What we discussed was an extensible helper that can be used for different signature types.
[...]
From: KP Singh [mailto:kpsingh@kernel.org]
Sent: Friday, June 3, 2022 5:18 PM

On Fri, Jun 3, 2022 at 3:11 PM Roberto Sassu <roberto.sassu@huawei.com> wrote:

From: KP Singh [mailto:kpsingh@kernel.org]
Sent: Friday, June 3, 2022 2:08 PM

On Wed, May 25, 2022 at 3:21 PM Roberto Sassu wrote:

In some cases, it is desirable to ensure that a map contains data from authenticated sources, for example if map data are used for making security decisions.
I am guessing this comes from the discussion we had about digilim. I remember we discussed a BPF helper that could verify signatures. Why would that approach not work?
The main reason is that signature verification can be done also for non-sleepable hooks. For example, one is fexit/array_map_update_elem.
For your use-case, why is it not possible to hook the LSM hook "bpf" i.e security_bpf and then check if there is a MAP_UPDATE_ELEM operation?
It would require the following: a new helper to compare the user space fd with the address of the map in the eBPF program; copying data from user space to kernel space (verify_pkcs7_signature() expects kernel memory). That copy would happen twice.
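For reference, a rough sketch of that alternative (a sleepable LSM program on the "bpf" hook; program and buffer names are made up), showing where the extra copy would be needed:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

SEC("lsm.s/bpf")
int BPF_PROG(restrict_map_update, int cmd, union bpf_attr *attr,
	     unsigned int size)
{
	__u8 hdr[4];

	if (cmd != BPF_MAP_UPDATE_ELEM)
		return 0;

	/* attr->map_fd identifies the target map only by fd (not by map
	 * pointer), and attr->value is a user-space pointer: the value would
	 * have to be copied into kernel memory with bpf_copy_from_user()
	 * (sleepable programs only) before any signature check could run.
	 */
	if (bpf_copy_from_user(hdr, sizeof(hdr),
			       (void *)(unsigned long)attr->value) < 0)
		return 0;	/* here only the 4-byte size field is copied */

	return 0;
}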
Currently the helper in patch 2 just returns the size of verified data. With an additional parameter, it could also be used as a helper for signature verification by any eBPF programs.
Your bpf_map_verify_value_sig hard codes the type of signature (verify_pkcs7_signature) in its implementation. This is not extensible.
It is hardcoded now, but it wouldn't be if there were more verification functions. For example, if 'id_type' of module_signature is set to PKEY_ID_PGP, bpf_map_verify_value_sig() would call verify_pgp_signature() (assuming that support for PGP keys and signatures is added to the kernel).
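A sketch of how that dispatch might look inside bpf_map_verify_value_sig() (verify_pgp_signature() is hypothetical and does not exist in the kernel today; mod_check_sig() would also have to accept the additional id_type):

	/* after mod_check_sig() has filled "ms" and modlen/sig_len are set */
	switch (ms.id_type) {
	case PKEY_ID_PKCS7:
		ret = verify_pkcs7_signature(mod, modlen, mod + modlen, sig_len,
					     VERIFY_USE_SECONDARY_KEYRING,
					     VERIFYING_UNSPECIFIED_SIGNATURE,
					     NULL, NULL);
		break;
	case PKEY_ID_PGP:
		/* hypothetical: needs in-kernel PGP key/signature support */
		ret = verify_pgp_signature(mod, modlen, mod + modlen, sig_len);
		break;
	default:
		ret = -EOPNOTSUPP;
	}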
Roberto
HUAWEI TECHNOLOGIES Duesseldorf GmbH, HRB 56063 Managing Director: Li Peng, Yang Xi, Li He
[...]
On Fri, Jun 3, 2022 at 5:44 PM Roberto Sassu roberto.sassu@huawei.com wrote:
Your bpf_map_verify_value_sig hard codes the type of signature (bpf_map_verify_value_sig as verify_pkcs7_signature) its implementation. This is not extensible.
It is hardcoded now, but it wouldn't if there are more verification functions. For example, if 'id_type' of module_signature is set to PKEY_ID_PGP, bpf_map_verify_value_sig() would call verify_pgp_signature() (assuming that support for PGP keys and signatures is added to the kernel).
I agree with KP. All hard coded things are hurting extensibility. We just need a helper that calls verify_pkcs7_signature where the prog will specify len, keyring, etc.
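For illustration only, such a helper's signature might look like this (no helper of this kind exists in this series; the name and arguments are assumptions):

/* Hypothetical generic helper: the program passes the data, the signature
 * and the serial of the keyring to verify against.
 */
long bpf_verify_pkcs7_signature(const void *data, u32 data_sz,
				const void *sig, u32 sig_sz,
				u32 keyring_serial);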
Introduce the bpf_map_verified_data_size() helper to get the verified data size from a signed map value, as parsed by the kernel with bpf_map_verify_value_sig().
The same information might be provided by user space tools as well without any helper, for example by adding a second unsigned integer after the verified data+sig size field.
However, this alternative seems to increase the code complexity: the kernel has to parse two unsigned integers and check their consistency; user space tools have to parse the module-style appended signature to get the verified data size.
Alternatively, each eBPF program could parse the module-style signature by itself, but this would duplicate code.
Adding a new helper seems the best choice: it only needs to call the existing function bpf_map_verify_value_sig() and pass the result to the caller.
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
---
 include/uapi/linux/bpf.h       |  8 ++++++++
 kernel/bpf/helpers.c           | 15 +++++++++++++++
 tools/include/uapi/linux/bpf.h |  8 ++++++++
 3 files changed, 31 insertions(+)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index a8e7803d2593..4a05caa49419 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -5252,6 +5252,13 @@ union bpf_attr {
  *		Pointer to the underlying dynptr data, NULL if the dynptr is
  *		read-only, if the dynptr is invalid, or if the offset and length
  *		is out of bounds.
+ *
+ * long bpf_map_verified_data_size(const void *value, u32 value_size)
+ *	Description
+ *		Parse signed map value in *value* with size *value_size*.
+ *	Return
+ *		The size of verified data on success, or a negative error in
+ *		case of failure.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5458,6 +5465,7 @@ union bpf_attr {
 	FN(dynptr_read),		\
 	FN(dynptr_write),		\
 	FN(dynptr_data),		\
+	FN(bpf_map_verified_data_size),	\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 225806a02efb..78c29c4e5d3f 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1575,6 +1575,19 @@ const struct bpf_func_proto bpf_dynptr_data_proto = {
 	.arg3_type	= ARG_CONST_ALLOC_SIZE_OR_ZERO,
 };
 
+BPF_CALL_2(bpf_map_verified_data_size, const void *, value, u32, value_size)
+{
+	return bpf_map_verify_value_sig(value, value_size, false);
+}
+
+const struct bpf_func_proto bpf_map_verified_data_size_proto = {
+	.func		= bpf_map_verified_data_size,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_MEM,
+	.arg2_type	= ARG_CONST_SIZE_OR_ZERO,
+};
+
 const struct bpf_func_proto bpf_get_current_task_proto __weak;
 const struct bpf_func_proto bpf_get_current_task_btf_proto __weak;
 const struct bpf_func_proto bpf_probe_read_user_proto __weak;
@@ -1643,6 +1656,8 @@ bpf_base_func_proto(enum bpf_func_id func_id)
 		return &bpf_dynptr_write_proto;
 	case BPF_FUNC_dynptr_data:
 		return &bpf_dynptr_data_proto;
+	case BPF_FUNC_bpf_map_verified_data_size:
+		return &bpf_map_verified_data_size_proto;
 	default:
 		break;
 	}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index a8e7803d2593..4a05caa49419 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -5252,6 +5252,13 @@ union bpf_attr {
  *		Pointer to the underlying dynptr data, NULL if the dynptr is
  *		read-only, if the dynptr is invalid, or if the offset and length
  *		is out of bounds.
+ *
+ * long bpf_map_verified_data_size(const void *value, u32 value_size)
+ *	Description
+ *		Parse signed map value in *value* with size *value_size*.
+ *	Return
+ *		The size of verified data on success, or a negative error in
+ *		case of failure.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5458,6 +5465,7 @@ union bpf_attr {
 	FN(dynptr_read),		\
 	FN(dynptr_write),		\
 	FN(dynptr_data),		\
+	FN(bpf_map_verified_data_size),	\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
Check the ability of eBPF to restrict map update operations based on signature verification of provided map values, if the map flag BPF_F_VERIFY_ELEM is set.
Ensure that the map update operation is rejected if the signature is invalid, or the data format is not correct. Also check that bpf_map_verified_data_size() returns the correct data size for an added signed map value.
Execute the test for a single element and in batch mode.
The test requires access to the kernel module signing key and the execution of the sign-file tool with the signing key path passed as an argument.
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
---
 .../bpf/prog_tests/test_map_value_sig.c       | 212 ++++++++++++++++++
 .../selftests/bpf/progs/map_value_sig.c       |  50 +++++
 2 files changed, 262 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_map_value_sig.c
 create mode 100644 tools/testing/selftests/bpf/progs/map_value_sig.c
diff --git a/tools/testing/selftests/bpf/prog_tests/test_map_value_sig.c b/tools/testing/selftests/bpf/prog_tests/test_map_value_sig.c new file mode 100644 index 000000000000..0c74627f54c6 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/test_map_value_sig.c @@ -0,0 +1,212 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Copyright (C) 2022 Huawei Technologies Duesseldorf GmbH + * + * Author: Roberto Sassu roberto.sassu@huawei.com + */ + +#include <stdio.h> +#include <errno.h> +#include <stdlib.h> +#include <unistd.h> +#include <endian.h> +#include <sys/stat.h> +#include <sys/wait.h> +#include <test_progs.h> + +#include "map_value_sig.skel.h" + +#define MAX_DATA_SIZE 1024 +#define ARRAY_ELEMS 5 + +struct data { + u8 payload[MAX_DATA_SIZE]; +}; + +struct data_info { + char str[10]; + int str_len; +}; + +int populate_data_item(struct data *data_item, struct data_info *data_info_item) +{ + struct stat st; + char signed_file_template[] = "/tmp/signed_fileXXXXXX"; + int ret, fd, child_status, child_pid; + + fd = mkstemp(signed_file_template); + if (fd == -1) { + ret = -errno; + goto out; + } + + ret = write(fd, data_info_item->str, data_info_item->str_len); + + close(fd); + + if (ret != data_info_item->str_len) { + ret = -EIO; + goto out; + } + + child_pid = fork(); + + if (child_pid == -1) { + ret = -errno; + goto out; + } + + if (child_pid == 0) + return execlp("../../../../scripts/sign-file", + "../../../../scripts/sign-file", "sha256", + "../../../../certs/signing_key.pem", + "../../../../certs/signing_key.pem", + signed_file_template, NULL); + + waitpid(child_pid, &child_status, 0); + + ret = WEXITSTATUS(child_status); + if (ret) + goto out; + + ret = stat(signed_file_template, &st); + if (ret == -1) { + ret = -errno; + goto out; + } + + if (st.st_size > sizeof(data_item->payload) - sizeof(u32)) { + ret = -EINVAL; + goto out; + } + + *(u32 *)data_item->payload = __cpu_to_be32(st.st_size); + + fd = open(signed_file_template, O_RDONLY); + if (fd == -1) { + ret = -errno; + goto out; + } + + ret = read(fd, data_item->payload + sizeof(u32), st.st_size); + + close(fd); + + if (ret != st.st_size) { + ret = -EIO; + goto out; + } + + ret = 0; +out: + unlink(signed_file_template); + return ret; +} + +void test_test_map_value_sig(void) +{ + struct map_value_sig *skel = NULL; + struct bpf_map *map; + struct data *data_array = NULL; + struct data_info *data_info_array = NULL; + int keys[ARRAY_ELEMS]; + u32 max_entries = ARRAY_ELEMS; + int ret, zero = 0, i, map_fd, duration = 0; + + DECLARE_LIBBPF_OPTS(bpf_map_create_opts, create_opts, + .map_flags = BPF_F_MMAPABLE | BPF_F_VERIFY_ELEM); + + DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts, + .elem_flags = 0, + .flags = 0, + ); + + data_array = malloc(sizeof(*data_array) * ARRAY_ELEMS); + if (CHECK(!data_array, "data array", "not enough memory\n")) + goto close_prog; + + data_info_array = malloc(sizeof(*data_info_array) * ARRAY_ELEMS); + if (CHECK(!data_info_array, "data info array", "not enough memory\n")) + goto close_prog; + + skel = map_value_sig__open_and_load(); + if (CHECK(!skel, "skel", "open_and_load failed\n")) + goto close_prog; + + ret = map_value_sig__attach(skel); + if (CHECK(ret < 0, "skel", "attach failed\n")) + goto close_prog; + + map_fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, NULL, 4, + sizeof(struct data), ARRAY_ELEMS, &create_opts); + if (CHECK(map_fd != -EINVAL, "bpf_map_create", + "should fail (mmapable & verify_elem flags set\n")) + goto close_prog; + + map = bpf_object__find_map_by_name(skel->obj, "data_input"); + if 
(CHECK(!map, "bpf_object__find_map_by_name", "not found\n")) + goto close_prog; + + for (i = 0; i < ARRAY_ELEMS; i++) { + keys[i] = i; + + data_info_array[i].str_len = snprintf(data_info_array[i].str, + sizeof(data_info_array[i].str), + "test%d", i); + + ret = populate_data_item(&data_array[i], &data_info_array[i]); + if (CHECK(ret, "populate_data_item", "error: %d\n", ret)) + goto close_prog; + + ret = bpf_map_update_elem(bpf_map__fd(map), &zero, + &data_array[i], BPF_ANY); + if (CHECK(ret < 0, "bpf_map_update_elem", "error: %d\n", ret)) + goto close_prog; + + if (CHECK(skel->bss->verified_data_size != + data_info_array[i].str_len, "data size", + "mismatch\n")) + goto close_prog; + } + + ret = bpf_map_update_batch(bpf_map__fd(map), keys, (void *)data_array, + &max_entries, &opts); + if (CHECK(ret, "bpf_map_update_batch", "error: %d\n", ret)) + goto close_prog; + + *(u32 *)data_array[0].payload = + __cpu_to_be32(sizeof(data_array[0].payload)); + + ret = bpf_map_update_elem(bpf_map__fd(map), &zero, &data_array[0], + BPF_ANY); + if (CHECK(!ret, "bpf_map_update_elem", "should fail (invalid size)\n")) + goto close_prog; + + ret = bpf_map_update_batch(bpf_map__fd(map), keys, (void *)data_array, + &max_entries, &opts); + if (CHECK(!ret, "bpf_map_update_batch", "should fail (invalid size)\n")) + goto close_prog; + + *(u32 *)data_array[0].payload = + __cpu_to_be32(data_info_array[0].str_len); + + data_array[0].payload[sizeof(u32) + data_info_array[0].str_len - 1] = + '\0'; + ret = bpf_map_update_elem(bpf_map__fd(map), &zero, &data_array[0], 0); + if (CHECK(!ret, "bpf_map_update_elem", + "should fail (invalid signature)\n")) + goto close_prog; + + max_entries = ARRAY_ELEMS; + + ret = bpf_map_update_batch(bpf_map__fd(map), keys, (void *)data_array, + &max_entries, &opts); + if (CHECK(!ret, "bpf_map_update_batch", + "should fail (invalid signature)\n")) + goto close_prog; +close_prog: + map_value_sig__destroy(skel); + free(data_array); + free(data_info_array); +} diff --git a/tools/testing/selftests/bpf/progs/map_value_sig.c b/tools/testing/selftests/bpf/progs/map_value_sig.c new file mode 100644 index 000000000000..a1d0fc021127 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/map_value_sig.c @@ -0,0 +1,50 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Copyright (C) 2022 Huawei Technologies Duesseldorf GmbH + * + * Author: Roberto Sassu roberto.sassu@huawei.com + */ + +#include "vmlinux.h" +#include <errno.h> +#include <bpf/bpf_helpers.h> +#include <bpf/bpf_tracing.h> + +#define MAX_DATA_SIZE 1024 +#define ARRAY_ELEMS 5 + +u32 verified_data_size; + +struct data { + u8 payload[MAX_DATA_SIZE]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(map_flags, BPF_F_VERIFY_ELEM); + __uint(max_entries, ARRAY_ELEMS); + __type(key, __u32); + __type(value, struct data); +} data_input SEC(".maps"); + +char _license[] SEC("license") = "GPL"; + +SEC("fexit/array_map_update_elem") +int BPF_PROG(array_map_update_elem, struct bpf_map *map, void *key, void *value, + u64 map_flags) +{ + struct data *data_ptr; + int zero = 0; + + if (map != (struct bpf_map *)&data_input) + return 0; + + data_ptr = bpf_map_lookup_elem(&data_input, &zero); + if (!data_ptr) + return 0; + + verified_data_size = bpf_map_verified_data_size((void *)data_ptr, + sizeof(struct data)); + return 0; +}