During tracee-ebpf regression tests, it was discovered that a CO-RE capable eBPF program relying on a kconfig BTF extern could not be loaded, failing with the following error:
libbpf: prog 'tracepoint__raw_syscalls__sys_enter': failed to attach to raw tracepoint 'sys_enter': Invalid argument
That happened because the CONFIG_ARCH_HAS_SYSCALL_WRAPPER extern had the wrong value even though the kconfig map existed, misleading the eBPF program execution: the program then ended up with different pointers, which were not accepted by the verifier during load time.
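For reference, a minimal sketch of the kind of program involved (this is not tracee's actual code; the section name and the per-branch logic are only illustrative). libbpf resolves __kconfig externs from the running kernel's configuration and stores them in a read-only, frozen ".kconfig" array map, so the branch below can only be pruned if the verifier is able to treat that map's contents as known constants:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* resolved by libbpf at load time from the running kernel's kconfig */
extern bool CONFIG_ARCH_HAS_SYSCALL_WRAPPER __kconfig;

SEC("raw_tracepoint/sys_enter")
int tracepoint__raw_syscalls__sys_enter(struct bpf_raw_tracepoint_args *ctx)
{
	if (CONFIG_ARCH_HAS_SYSCALL_WRAPPER) {
		/* kernel built with syscall wrappers: read syscall args one way */
	} else {
		/* no syscall wrappers: read them the other way (details omitted) */
	}
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

Without the backported change, the verifier cannot see the constant value behind CONFIG_ARCH_HAS_SYSCALL_WRAPPER, so the wrong branch is not eliminated and the program misbehaves as described above.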
I found the patch proposed here by bisecting the upstream tree with the testcase just described. I kindly ask you to include this patch in the LTS v5.4.x series so CO-RE (Compile Once - Run Everywhere) eBPF programs relying on kconfig settings can be loaded correctly on the v5.4 kernel series.
Link: https://github.com/aquasecurity/tracee/issues/851#issuecomment-903074596
I have tested the latest 5.4 stable tree with this patch and it fixes the issue.
-rafaeldtinoco
commit a23740ec43ba022dbfd139d0fe3eff193216272b upstream.
Maps that are read-only both from BPF program side and user space side have their contents constant, so verifier can track referenced values precisely and use that knowledge for dead code elimination, branch pruning, etc. This patch teaches BPF verifier how to do this.
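For context, the maps this applies to are array maps created with BPF_F_RDONLY_PROG and then frozen via BPF_MAP_FREEZE, which is essentially how libbpf sets up its internal .rodata and .kconfig maps. A rough userspace sketch of that setup (illustrative only; the helper name make_constant_map is made up, and it uses libbpf's legacy bpf_create_map() wrapper):

#include <linux/bpf.h>	/* BPF_MAP_TYPE_ARRAY, BPF_F_RDONLY_PROG, BPF_ANY */
#include <bpf/bpf.h>	/* bpf_create_map(), bpf_map_update_elem(), bpf_map_freeze() */
#include <stdint.h>

int make_constant_map(void)
{
	uint32_t key = 0;
	uint64_t value = 42;
	int fd;

	/* single-entry array that programs may only read, like .rodata/.kconfig */
	fd = bpf_create_map(BPF_MAP_TYPE_ARRAY, sizeof(key), sizeof(value),
			    1, BPF_F_RDONLY_PROG);
	if (fd < 0)
		return fd;

	/* user space can still populate it ... */
	bpf_map_update_elem(fd, &key, &value, BPF_ANY);

	/* ... until it is frozen; afterwards the contents are constant, and the
	 * patched verifier can read them directly during verification */
	bpf_map_freeze(fd);

	return fd;
}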
[Backport] This backport already includes the follow-up build fix made in commit 2dedd7d21655 ("bpf: Fix cast to pointer from integer of different size warning").
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191009201458.2679171-2-andriin@fb.com
Signed-off-by: Rafael David Tinoco <rafaeldtinoco@gmail.com>
Cc: <stable@vger.kernel.org> # 5.4.x
Link: https://github.com/aquasecurity/tracee/issues/851#issuecomment-903074596
---
 kernel/bpf/verifier.c | 57 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 55 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 52c2b11a0b47..ffb33bde92b8 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2778,6 +2778,41 @@ static void coerce_reg_to_size(struct bpf_reg_state *reg, int size)
 	reg->smax_value = reg->umax_value;
 }
 
+static bool bpf_map_is_rdonly(const struct bpf_map *map)
+{
+	return (map->map_flags & BPF_F_RDONLY_PROG) && map->frozen;
+}
+
+static int bpf_map_direct_read(struct bpf_map *map, int off, int size, u64 *val)
+{
+	void *ptr;
+	u64 addr;
+	int err;
+
+	err = map->ops->map_direct_value_addr(map, &addr, off);
+	if (err)
+		return err;
+	ptr = (void *)(long)addr + off;
+
+	switch (size) {
+	case sizeof(u8):
+		*val = (u64)*(u8 *)ptr;
+		break;
+	case sizeof(u16):
+		*val = (u64)*(u16 *)ptr;
+		break;
+	case sizeof(u32):
+		*val = (u64)*(u32 *)ptr;
+		break;
+	case sizeof(u64):
+		*val = *(u64 *)ptr;
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
 /* check whether memory at (regno + off) is accessible for t = (read | write)
  * if t==write, value_regno is a register which value is stored into memory
  * if t==read, value_regno is a register which will receive the value from memory
@@ -2815,9 +2850,27 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 		if (err)
 			return err;
 		err = check_map_access(env, regno, off, size, false);
-		if (!err && t == BPF_READ && value_regno >= 0)
-			mark_reg_unknown(env, regs, value_regno);
+		if (!err && t == BPF_READ && value_regno >= 0) {
+			struct bpf_map *map = reg->map_ptr;
+
+			/* if map is read-only, track its contents as scalars */
+			if (tnum_is_const(reg->var_off) &&
+			    bpf_map_is_rdonly(map) &&
+			    map->ops->map_direct_value_addr) {
+				int map_off = off + reg->var_off.value;
+				u64 val = 0;
+				err = bpf_map_direct_read(map, map_off, size,
+							  &val);
+				if (err)
+					return err;
+
+				regs[value_regno].type = SCALAR_VALUE;
+				__mark_reg_known(&regs[value_regno], val);
+			} else {
+				mark_reg_unknown(env, regs, value_regno);
+			}
+		}
 	} else if (reg->type == PTR_TO_CTX) {
 		enum bpf_reg_type reg_type = SCALAR_VALUE;
On Sat, Aug 21, 2021 at 05:31:08PM -0300, Rafael David Tinoco wrote:
> commit a23740ec43ba022dbfd139d0fe3eff193216272b upstream.
>
> Maps that are read-only both from BPF program side and user space side have their contents constant, so verifier can track referenced values precisely and use that knowledge for dead code elimination, branch pruning, etc. This patch teaches BPF verifier how to do this.
>
> [Backport] This backport already includes the follow-up build fix made in commit 2dedd7d21655 ("bpf: Fix cast to pointer from integer of different size warning").
Do not do that, let us queue up the original commits instead please.
thanks,
greg k-h