New version of the sleepable bpf_timer code, without the HID changes, as they can now go through the HID tree independently.
For reference, the use cases I have in mind:
---
Basically, I need to be able to defer a HID-BPF program for the following reasons (from the aforementioned patch):
1. defer an event: Sometimes we receive an out of proximity event, but the device can not be trusted enough, and we need to ensure that we won't receive another one in the following n milliseconds. So we need to wait those n milliseconds, and eventually re-inject that event in the stack.
2. inject new events in reaction to one given event: We might want to transform one given event into several. This is the case for macro keys where a single key press is supposed to send a sequence of key presses. But this could also be used to patch a faulty behavior, if a device forgets to send a release event.
3. communicate with the device in reaction to one event: We might want to communicate back to the device after a given event. For example a device might send us an event saying that it came back from sleeping state and needs to be re-initialized.
Currently we can achieve that by keeping a userspace program around, raising a bpf event, and letting that userspace program inject the events and commands. However, we are then keeping that program alive as a daemon just to schedule commands. There is no logic in it, so it doesn't really justify an actual userspace wakeup, and a kernel workqueue is simpler to handle.
bpf_timers currently run in a soft IRQ context; this patch series implements a sleepable context for them.
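As an illustration, arming a sleepable timer from a BPF program ends up looking roughly like the fragment below (this mirrors the selftests at the end of the series; timer_map and timer_cb_sleepable are illustrative names):

	/* sketch: the callback will run from a workqueue and may call
	 * sleepable kfuncs; ordering and timing are not guaranteed
	 */
	timer = bpf_map_lookup_elem(&timer_map, &key);
	if (!timer)
		return 0;
	bpf_timer_init(timer, &timer_map, CLOCK_MONOTONIC);
	bpf_timer_set_sleepable_cb(timer, timer_cb_sleepable);
	/* nsecs must be 0: the work is offloaded immediately */
	bpf_timer_start(timer, 0, BPF_F_TIMER_SLEEPABLE);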
Cheers, Benjamin
To: Alexei Starovoitov <ast@kernel.org>
To: Daniel Borkmann <daniel@iogearbox.net>
To: Andrii Nakryiko <andrii@kernel.org>
To: Martin KaFai Lau <martin.lau@linux.dev>
To: Eduard Zingerman <eddyz87@gmail.com>
To: Song Liu <song@kernel.org>
To: Yonghong Song <yonghong.song@linux.dev>
To: John Fastabend <john.fastabend@gmail.com>
To: KP Singh <kpsingh@kernel.org>
To: Stanislav Fomichev <sdf@google.com>
To: Hao Luo <haoluo@google.com>
To: Jiri Olsa <jolsa@kernel.org>
To: Mykola Lysenko <mykolal@fb.com>
To: Shuah Khan <shuah@kernel.org>
Cc: Benjamin Tissoires <bentiss@kernel.org>
Cc: bpf@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-kselftest@vger.kernel.org
---
Changes in v4:
- dropped the HID changes, they can go independently from bpf-core
- addressed Alexei's and Eduard's remarks
- added selftests
- Link to v3: https://lore.kernel.org/r/20240221-hid-bpf-sleepable-v3-0-1fb378ca6301@kerne...
Changes in v3:
- fixed the crash from v2
- changed the API to have only BPF_F_TIMER_SLEEPABLE for bpf_timer_start()
- split the new kfuncs/verifier patch into several sub-patches, for easier reviews
- Link to v2: https://lore.kernel.org/r/20240214-hid-bpf-sleepable-v2-0-5756b054724d@kerne...
Changes in v2:
- make use of bpf_timer (and dropped the custom HID handling)
- implemented bpf_timer_set_sleepable_cb as a kfunc
- still not implemented global subprogs
- no sleepable bpf_timer selftests yet
- Link to v1: https://lore.kernel.org/r/20240209-hid-bpf-sleepable-v1-0-4cc895b5adbd@kerne...
---
Benjamin Tissoires (6):
      bpf/helpers: introduce sleepable bpf_timers
      bpf/verifier: add bpf_timer as a kfunc capable type
      bpf/helpers: introduce bpf_timer_set_sleepable_cb() kfunc
      bpf/helpers: mark the callback of bpf_timer_set_sleepable_cb() as sleepable
      tools: sync include/uapi/linux/bpf.h
      selftests/bpf: add sleepable timer tests

 include/linux/bpf_verifier.h                       |   1 +
 include/uapi/linux/bpf.h                           |   4 +
 kernel/bpf/helpers.c                               | 132 ++++++++++++++++++++-
 kernel/bpf/verifier.c                              |  92 +++++++++++++-
 tools/include/uapi/linux/bpf.h                     |   4 +
 tools/testing/selftests/bpf/bpf_experimental.h     |   4 +
 .../selftests/bpf/bpf_testmod/bpf_testmod.c        |   5 +
 .../selftests/bpf/bpf_testmod/bpf_testmod_kfunc.h  |   1 +
 tools/testing/selftests/bpf/prog_tests/timer.c     |   1 +
 tools/testing/selftests/bpf/progs/timer.c          |  40 ++++++-
 tools/testing/selftests/bpf/progs/timer_failure.c  | 114 +++++++++++++++++-
 11 files changed, 387 insertions(+), 11 deletions(-)
---
base-commit: 9187210eee7d87eea37b45ea93454a88681894a4
change-id: 20240205-hid-bpf-sleepable-c01260fd91c4
Best regards,
Benjamin Tissoires <bentiss@kernel.org>
Sleepable bpf_timers are implemented as a workqueue, which means that there are no guarantees of timing or ordering.
Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
---
changes in v4:
- dropped __bpf_timer_compute_key()
- use a spin_lock instead of a semaphore
- ensure bpf_timer_cancel_and_free is not complaining about non sleepable
  context and use cancel_work() instead of cancel_work_sync()
- return -EINVAL if a delay is given to bpf_timer_start() with
  BPF_F_TIMER_SLEEPABLE
changes in v3:
- extracted the implementation in bpf_timer only, without
  bpf_timer_set_sleepable_cb()
- rely on schedule_work() only, from bpf_timer_start()
- add semaphore to ensure bpf_timer_work_cb() is accessing consistent data
changes in v2 (compared to the one attached to v1 0/9):
- make use of a kfunc
- add a (non-used) BPF_F_TIMER_SLEEPABLE
- the callback is *not* called, it makes the kernel crash
---
 include/uapi/linux/bpf.h |  4 +++
 kernel/bpf/helpers.c     | 86 ++++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 88 insertions(+), 2 deletions(-)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 3c42b9f1bada..b90def29d796 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -7461,10 +7461,14 @@ struct bpf_core_relo {
  *	- BPF_F_TIMER_ABS: Timeout passed is absolute time, by default it is
  *	  relative to current time.
  *	- BPF_F_TIMER_CPU_PIN: Timer will be pinned to the CPU of the caller.
+ *	- BPF_F_TIMER_SLEEPABLE: Timer will run in a sleepable context, with
+ *	  no guarantees of ordering nor timing (consider this as being just
+ *	  offloaded immediately).
  */
 enum {
 	BPF_F_TIMER_ABS = (1ULL << 0),
 	BPF_F_TIMER_CPU_PIN = (1ULL << 1),
+	BPF_F_TIMER_SLEEPABLE = (1ULL << 2),
 };
 
 /* BPF numbers iterator state */
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index a89587859571..38de73a9df83 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1094,14 +1094,20 @@ const struct bpf_func_proto bpf_snprintf_proto = {
  * bpf_timer_cancel() cancels the timer and decrements prog's refcnt.
  * Inner maps can contain bpf timers as well. ops->map_release_uref is
  * freeing the timers when inner map is replaced or deleted by user space.
+ *
+ * sleepable_lock protects only the setup of the workqueue, not the callback
+ * itself. This is done to ensure we don't run concurrently a free of the
+ * callback or the associated program.
  */
 struct bpf_hrtimer {
 	struct hrtimer timer;
+	struct work_struct work;
 	struct bpf_map *map;
 	struct bpf_prog *prog;
 	void __rcu *callback_fn;
 	void *value;
 	struct rcu_head rcu;
+	spinlock_t sleepable_lock;
 };
 
 /* the actual struct hidden inside uapi struct bpf_timer */
@@ -1114,6 +1120,49 @@ struct bpf_timer_kern {
 	struct bpf_spin_lock lock;
 } __attribute__((aligned(8)));
 
+static void bpf_timer_work_cb(struct work_struct *work)
+{
+	struct bpf_hrtimer *t = container_of(work, struct bpf_hrtimer, work);
+	struct bpf_map *map = t->map;
+	bpf_callback_t callback_fn;
+	void *value = t->value;
+	unsigned long flags;
+	void *key;
+	u32 idx;
+
+	BTF_TYPE_EMIT(struct bpf_timer);
+
+	spin_lock_irqsave(&t->sleepable_lock, flags);
+
+	callback_fn = READ_ONCE(t->callback_fn);
+	if (!callback_fn) {
+		spin_unlock_irqrestore(&t->sleepable_lock, flags);
+		return;
+	}
+
+	if (map->map_type == BPF_MAP_TYPE_ARRAY) {
+		struct bpf_array *array = container_of(map, struct bpf_array, map);
+
+		/* compute the key */
+		idx = ((char *)value - array->value) / array->elem_size;
+		key = &idx;
+	} else { /* hash or lru */
+		key = value - round_up(map->key_size, 8);
+	}
+
+	/* prevent the callback to be freed by bpf_timer_cancel() while running
+	 * so we can release the sleepable lock
+	 */
+	bpf_prog_inc(t->prog);
+
+	spin_unlock_irqrestore(&t->sleepable_lock, flags);
+
+	callback_fn((u64)(long)map, (u64)(long)key, (u64)(long)value, 0, 0);
+	/* The verifier checked that return value is zero. */
+
+	bpf_prog_put(t->prog);
+}
+
 static DEFINE_PER_CPU(struct bpf_hrtimer *, hrtimer_running);
 
 static enum hrtimer_restart bpf_timer_cb(struct hrtimer *hrtimer)
@@ -1192,6 +1241,8 @@ BPF_CALL_3(bpf_timer_init, struct bpf_timer_kern *, timer, struct bpf_map *, map
 	t->prog = NULL;
 	rcu_assign_pointer(t->callback_fn, NULL);
 	hrtimer_init(&t->timer, clockid, HRTIMER_MODE_REL_SOFT);
+	INIT_WORK(&t->work, bpf_timer_work_cb);
+	spin_lock_init(&t->sleepable_lock);
 	t->timer.function = bpf_timer_cb;
 	WRITE_ONCE(timer->timer, t);
 	/* Guarantee the order between timer->timer and map->usercnt. So
@@ -1237,6 +1288,7 @@ BPF_CALL_3(bpf_timer_set_callback, struct bpf_timer_kern *, timer, void *, callb
 		ret = -EINVAL;
 		goto out;
 	}
+	spin_lock(&t->sleepable_lock);
 	if (!atomic64_read(&t->map->usercnt)) {
 		/* maps with timers must be either held by user space
 		 * or pinned in bpffs. Otherwise timer might still be
@@ -1263,6 +1315,8 @@ BPF_CALL_3(bpf_timer_set_callback, struct bpf_timer_kern *, timer, void *, callb
 	}
 	rcu_assign_pointer(t->callback_fn, callback_fn);
 out:
+	if (t)
+		spin_unlock(&t->sleepable_lock);
 	__bpf_spin_unlock_irqrestore(&timer->lock);
 	return ret;
 }
@@ -1283,8 +1337,12 @@ BPF_CALL_3(bpf_timer_start, struct bpf_timer_kern *, timer, u64, nsecs, u64, fla
 
 	if (in_nmi())
 		return -EOPNOTSUPP;
-	if (flags & ~(BPF_F_TIMER_ABS | BPF_F_TIMER_CPU_PIN))
+	if (flags & ~(BPF_F_TIMER_ABS | BPF_F_TIMER_CPU_PIN | BPF_F_TIMER_SLEEPABLE))
 		return -EINVAL;
+
+	if ((flags & BPF_F_TIMER_SLEEPABLE) && nsecs)
+		return -EINVAL;
+
 	__bpf_spin_lock_irqsave(&timer->lock);
 	t = timer->timer;
 	if (!t || !t->prog) {
@@ -1300,7 +1358,10 @@ BPF_CALL_3(bpf_timer_start, struct bpf_timer_kern *, timer, u64, nsecs, u64, fla
 	if (flags & BPF_F_TIMER_CPU_PIN)
 		mode |= HRTIMER_MODE_PINNED;
 
-	hrtimer_start(&t->timer, ns_to_ktime(nsecs), mode);
+	if (flags & BPF_F_TIMER_SLEEPABLE)
+		schedule_work(&t->work);
+	else
+		hrtimer_start(&t->timer, ns_to_ktime(nsecs), mode);
 out:
 	__bpf_spin_unlock_irqrestore(&timer->lock);
 	return ret;
@@ -1348,13 +1409,22 @@ BPF_CALL_1(bpf_timer_cancel, struct bpf_timer_kern *, timer)
 		ret = -EDEADLK;
 		goto out;
 	}
+	spin_lock(&t->sleepable_lock);
 	drop_prog_refcnt(t);
+	spin_unlock(&t->sleepable_lock);
 out:
 	__bpf_spin_unlock_irqrestore(&timer->lock);
 	/* Cancel the timer and wait for associated callback to finish
 	 * if it was running.
 	 */
 	ret = ret ?: hrtimer_cancel(&t->timer);
+
+	/* also cancel the sleepable work, but *do not* wait for
+	 * it to finish if it was running as we might not be in a
+	 * sleepable context
+	 */
+	ret = ret ?: cancel_work(&t->work);
+
 	rcu_read_unlock();
 	return ret;
 }
@@ -1383,11 +1453,13 @@ void bpf_timer_cancel_and_free(void *val)
 	t = timer->timer;
 	if (!t)
 		goto out;
+	spin_lock(&t->sleepable_lock);
 	drop_prog_refcnt(t);
 	/* The subsequent bpf_timer_start/cancel() helpers won't be able to use
 	 * this timer, since it won't be initialized.
 	 */
 	WRITE_ONCE(timer->timer, NULL);
+	spin_unlock(&t->sleepable_lock);
 out:
 	__bpf_spin_unlock_irqrestore(&timer->lock);
 	if (!t)
@@ -1410,6 +1482,16 @@ void bpf_timer_cancel_and_free(void *val)
 	 */
 	if (this_cpu_read(hrtimer_running) != t)
 		hrtimer_cancel(&t->timer);
+
+	/* also cancel the sleepable work, but *do not* wait for
+	 * it to finish if it was running as we might not be in a
+	 * sleepable context. Same reason as above, it's fine to
+	 * free 't': the subprog callback will never access it anymore
+	 * and can not reschedule itself since timer->timer = NULL was
+	 * already done.
+	 */
+	cancel_work(&t->work);
+
 	kfree_rcu(t, rcu);
 }
We need to extend the bpf_timer API, but the way forward relies on kfuncs. So make bpf_timer known to kfuncs, from the verifier's point of view.
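Concretely, this lets a kfunc declare a timer argument that the verifier matches by BTF type; a sketch (bpf_timer_some_op() is a made-up name, the real user being bpf_timer_set_sleepable_cb_impl() later in the series):

	/* sketch: with bpf_timer_kern listed in kf_arg_btf_ids, this argument is
	 * classified as KF_ARG_PTR_TO_TIMER and the verifier enforces that the
	 * register passed in is a PTR_TO_MAP_VALUE, i.e. a timer embedded in a
	 * map value
	 */
	__bpf_kfunc int bpf_timer_some_op(struct bpf_timer_kern *timer)
	{
		return 0;
	}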
Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
---
changes in v4:
- enforce KF_ARG_PTR_TO_TIMER to be of type PTR_TO_MAP_VALUE
new in v3 (split from v2 02/10)
---
 kernel/bpf/verifier.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 63749ad5ac6b..1483ebc0ee73 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -10826,6 +10826,7 @@ enum {
 	KF_ARG_LIST_NODE_ID,
 	KF_ARG_RB_ROOT_ID,
 	KF_ARG_RB_NODE_ID,
+	KF_ARG_TIMER_ID,
 };
 
 BTF_ID_LIST(kf_arg_btf_ids)
@@ -10834,6 +10835,7 @@ BTF_ID(struct, bpf_list_head)
 BTF_ID(struct, bpf_list_node)
 BTF_ID(struct, bpf_rb_root)
 BTF_ID(struct, bpf_rb_node)
+BTF_ID(struct, bpf_timer_kern)
 
 static bool __is_kfunc_ptr_arg_type(const struct btf *btf,
 				    const struct btf_param *arg, int type)
@@ -10877,6 +10879,12 @@ static bool is_kfunc_arg_rbtree_node(const struct btf *btf, const struct btf_par
 	return __is_kfunc_ptr_arg_type(btf, arg, KF_ARG_RB_NODE_ID);
 }
 
+static bool is_kfunc_arg_timer(const struct btf *btf, const struct btf_param *arg)
+{
+	bool ret = __is_kfunc_ptr_arg_type(btf, arg, KF_ARG_TIMER_ID);
+	return ret;
+}
+
 static bool is_kfunc_arg_callback(struct bpf_verifier_env *env, const struct btf *btf,
 				  const struct btf_param *arg)
 {
@@ -10946,6 +10954,7 @@ enum kfunc_ptr_arg_type {
 	KF_ARG_PTR_TO_NULL,
 	KF_ARG_PTR_TO_CONST_STR,
 	KF_ARG_PTR_TO_MAP,
+	KF_ARG_PTR_TO_TIMER,
 };
 
 enum special_kfunc_type {
@@ -11102,6 +11111,9 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
 	if (is_kfunc_arg_map(meta->btf, &args[argno]))
 		return KF_ARG_PTR_TO_MAP;
 
+	if (is_kfunc_arg_timer(meta->btf, &args[argno]))
+		return KF_ARG_PTR_TO_TIMER;
+
 	if ((base_type(reg->type) == PTR_TO_BTF_ID || reg2btf_ids[base_type(reg->type)])) {
 		if (!btf_type_is_struct(ref_t)) {
 			verbose(env, "kernel function %s args#%d pointer type %s %s is not supported\n",
@@ -11735,6 +11747,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 		case KF_ARG_PTR_TO_CALLBACK:
 		case KF_ARG_PTR_TO_REFCOUNTED_KPTR:
 		case KF_ARG_PTR_TO_CONST_STR:
+		case KF_ARG_PTR_TO_TIMER:
 			/* Trusted by default */
 			break;
 		default:
@@ -12021,6 +12034,12 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			if (ret)
 				return ret;
 			break;
+		case KF_ARG_PTR_TO_TIMER:
+			if (reg->type != PTR_TO_MAP_VALUE) {
+				verbose(env, "arg#%d doesn't point to a map value\n", i);
+				return -EINVAL;
+			}
+			break;
 		}
 	}
On Fri, 2024-03-15 at 15:29 +0100, Benjamin Tissoires wrote: [...]
@@ -12021,6 +12034,12 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			if (ret)
 				return ret;
 			break;
+		case KF_ARG_PTR_TO_TIMER:
+			if (reg->type != PTR_TO_MAP_VALUE) {
+				verbose(env, "arg#%d doesn't point to a map value\n", i);
+				return -EINVAL;
+			}
+			break;
I think that pointer offset has to be checked as well, otherwise the following program verifies w/o error:
--- 8< ----------------------------
#include <linux/bpf.h>
#include <time.h>
#include <errno.h>
#include <bpf/bpf_helpers.h>
#include "bpf_tcp_helpers.h"

extern int bpf_timer_set_sleepable_cb_impl(struct bpf_timer *timer,
		int (callback_fn)(void *map, int *key, struct bpf_timer *timer),
		void *aux__ign) __ksym;

#define bpf_timer_set_sleepable_cb(timer, cb) \
	bpf_timer_set_sleepable_cb_impl(timer, cb, NULL)

struct elem {
	struct bpf_timer t;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 2);
	__type(key, int);
	__type(value, struct elem);
} array SEC(".maps");

static int cb_sleepable(void *map, int *key, struct bpf_timer *timer)
{
	return 0;
}

SEC("fentry/bpf_fentry_test5")
int BPF_PROG2(test_sleepable, int, a)
{
	struct bpf_timer *arr_timer;
	int array_key = 1;

	arr_timer = bpf_map_lookup_elem(&array, &array_key);
	if (!arr_timer)
		return 0;
	bpf_timer_init(arr_timer, &array, CLOCK_MONOTONIC);
	bpf_timer_set_sleepable_cb((void *)arr_timer + 1, // <-- note incorrect offset
				   cb_sleepable);
	bpf_timer_start(arr_timer, 0, 0);
	return 0;
}

char _license[] SEC("license") = "GPL";
---------------------------- >8 ---
On Mon, Mar 18, 2024 at 10:59 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
On Fri, 2024-03-15 at 15:29 +0100, Benjamin Tissoires wrote: [...]
@@ -12021,6 +12034,12 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			if (ret)
 				return ret;
 			break;
+		case KF_ARG_PTR_TO_TIMER:
+			if (reg->type != PTR_TO_MAP_VALUE) {
+				verbose(env, "arg#%d doesn't point to a map value\n", i);
+				return -EINVAL;
+			}
+			break;
I think that pointer offset has to be checked as well, otherwise the following program verifies w/o error:
I initially thought it would be harder than it actually was.
Fixed (with the new test below) in v5.
Cheers, Benjamin
--- 8< ----------------------------
#include <linux/bpf.h>
#include <time.h>
#include <errno.h>
#include <bpf/bpf_helpers.h>
#include "bpf_tcp_helpers.h"

extern int bpf_timer_set_sleepable_cb_impl(struct bpf_timer *timer,
		int (callback_fn)(void *map, int *key, struct bpf_timer *timer),
		void *aux__ign) __ksym;

#define bpf_timer_set_sleepable_cb(timer, cb) \
	bpf_timer_set_sleepable_cb_impl(timer, cb, NULL)

struct elem {
	struct bpf_timer t;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 2);
	__type(key, int);
	__type(value, struct elem);
} array SEC(".maps");

static int cb_sleepable(void *map, int *key, struct bpf_timer *timer)
{
	return 0;
}

SEC("fentry/bpf_fentry_test5")
int BPF_PROG2(test_sleepable, int, a)
{
	struct bpf_timer *arr_timer;
	int array_key = 1;

	arr_timer = bpf_map_lookup_elem(&array, &array_key);
	if (!arr_timer)
		return 0;
	bpf_timer_init(arr_timer, &array, CLOCK_MONOTONIC);
	bpf_timer_set_sleepable_cb((void *)arr_timer + 1, // <-- note incorrect offset
				   cb_sleepable);
	bpf_timer_start(arr_timer, 0, 0);
	return 0;
}

char _license[] SEC("license") = "GPL";
---------------------------- >8 ---
In this patch, bpf_timer_set_sleepable_cb() is functionally equivalent to bpf_timer_set_callback(), with the exception that it enforces that the timer is started with BPF_F_TIMER_SLEEPABLE.
But given that bpf_timer_set_callback() is a helper while bpf_timer_set_sleepable_cb() is a kfunc, we need to teach the verifier about its attached callback. Marking that callback as sleepable will be done in a separate patch.
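To illustrate the resulting contract (mirroring the selftests added at the end of the series), once a callback has been attached through the sleepable kfunc, starting the timer without the flag fails at run time:

	/* sketch, based on the selftests later in the series */
	bpf_timer_init(timer, &timer_map, CLOCK_MONOTONIC);
	bpf_timer_set_sleepable_cb(timer, timer_cb_sleepable); /* the new kfunc */
	bpf_timer_start(timer, 0, 0);                     /* -EINVAL: flag missing */
	bpf_timer_start(timer, 0, BPF_F_TIMER_SLEEPABLE); /* OK */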
Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
---
changes in v4:
- added a new (ignored) argument to the kfunc so that we do not need to
  walk the stack
new in v3 (split from v2 02/10)
---
 kernel/bpf/helpers.c  | 46 +++++++++++++++++++++++++++++++++++++--
 kernel/bpf/verifier.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 102 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 38de73a9df83..65c07c0df263 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1108,6 +1108,7 @@ struct bpf_hrtimer {
 	void *value;
 	struct rcu_head rcu;
 	spinlock_t sleepable_lock;
+	bool is_sleepable;
 };
 
 /* the actual struct hidden inside uapi struct bpf_timer */
@@ -1273,8 +1274,8 @@ static const struct bpf_func_proto bpf_timer_init_proto = {
 	.arg3_type	= ARG_ANYTHING,
 };
 
-BPF_CALL_3(bpf_timer_set_callback, struct bpf_timer_kern *, timer, void *, callback_fn,
-	   struct bpf_prog_aux *, aux)
+static int __bpf_timer_set_callback(struct bpf_timer_kern *timer, void *callback_fn,
+				    struct bpf_prog_aux *aux, bool is_sleepable)
 {
 	struct bpf_prog *prev, *prog = aux->prog;
 	struct bpf_hrtimer *t;
@@ -1314,6 +1315,7 @@ BPF_CALL_3(bpf_timer_set_callback, struct bpf_timer_kern *, timer, void *, callb
 		t->prog = prog;
 	}
 	rcu_assign_pointer(t->callback_fn, callback_fn);
+	t->is_sleepable = is_sleepable;
 out:
 	if (t)
 		spin_unlock(&t->sleepable_lock);
@@ -1321,6 +1323,12 @@ BPF_CALL_3(bpf_timer_set_callback, struct bpf_timer_kern *, timer, void *, callb
 	return ret;
 }
 
+BPF_CALL_3(bpf_timer_set_callback, struct bpf_timer_kern *, timer, void *, callback_fn,
+	   struct bpf_prog_aux *, aux)
+{
+	return __bpf_timer_set_callback(timer, callback_fn, aux, false);
+}
+
 static const struct bpf_func_proto bpf_timer_set_callback_proto = {
 	.func		= bpf_timer_set_callback,
 	.gpl_only	= true,
@@ -1350,6 +1358,11 @@ BPF_CALL_3(bpf_timer_start, struct bpf_timer_kern *, timer, u64, nsecs, u64, fla
 		goto out;
 	}
 
+	if (t->is_sleepable && !(flags & BPF_F_TIMER_SLEEPABLE)) {
+		ret = -EINVAL;
+		goto out;
+	}
+
 	if (flags & BPF_F_TIMER_ABS)
 		mode = HRTIMER_MODE_ABS_SOFT;
 	else
@@ -2627,6 +2640,34 @@ __bpf_kfunc void bpf_throw(u64 cookie)
 	WARN(1, "A call to BPF exception callback should never return\n");
 }
 
+/**
+ * bpf_timer_set_sleepable_cb_impl() - Configure the timer to call %callback_fn
+ * static function in a sleepable context.
+ * @timer: The bpf_timer that needs to be configured
+ * @callback_fn: a static bpf function
+ *
+ * @returns %0 on success. %-EINVAL if %timer was not initialized with
+ * bpf_timer_init() earlier. %-EPERM if %timer is in a map that doesn't
+ * have any user references.
+ * The user space should either hold a file descriptor to a map with timers
+ * or pin such map in bpffs. When map is unpinned or file descriptor is
+ * closed all timers in the map will be cancelled and freed.
+ *
+ * This kfunc is equivalent to %bpf_timer_set_callback except that it tells
+ * the verifier that the target callback is run in a sleepable context.
+ */
+__bpf_kfunc int bpf_timer_set_sleepable_cb_impl(struct bpf_timer_kern *timer,
+		int (callback_fn)(void *map, int *key, struct bpf_timer *timer),
+		void *aux__ign)
+{
+	struct bpf_prog_aux *aux = (struct bpf_prog_aux *)aux__ign;
+
+	if (!aux)
+		return -EINVAL;
+
+	return __bpf_timer_set_callback(timer, (void *)callback_fn, aux, true);
+}
+
 __bpf_kfunc_end_defs();
 
 BTF_KFUNCS_START(generic_btf_ids)
@@ -2703,6 +2744,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_is_null)
 BTF_ID_FLAGS(func, bpf_dynptr_is_rdonly)
 BTF_ID_FLAGS(func, bpf_dynptr_size)
 BTF_ID_FLAGS(func, bpf_dynptr_clone)
+BTF_ID_FLAGS(func, bpf_timer_set_sleepable_cb_impl)
 BTF_KFUNCS_END(common_btf_ids)
 
 static const struct btf_kfunc_id_set common_kfunc_set = {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1483ebc0ee73..53f85e114a33 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -501,8 +501,12 @@ static bool is_dynptr_ref_function(enum bpf_func_id func_id)
 }
 
 static bool is_sync_callback_calling_kfunc(u32 btf_id);
+static bool is_async_callback_calling_kfunc(u32 btf_id);
+static bool is_callback_calling_kfunc(u32 btf_id);
 static bool is_bpf_throw_kfunc(struct bpf_insn *insn);
 
+static bool is_bpf_timer_set_sleepable_cb_impl_kfunc(u32 btf_id);
+
 static bool is_sync_callback_calling_function(enum bpf_func_id func_id)
 {
 	return func_id == BPF_FUNC_for_each_map_elem ||
@@ -530,7 +534,8 @@ static bool is_sync_callback_calling_insn(struct bpf_insn *insn)
 
 static bool is_async_callback_calling_insn(struct bpf_insn *insn)
 {
-	return bpf_helper_call(insn) && is_async_callback_calling_function(insn->imm);
+	return (bpf_helper_call(insn) && is_async_callback_calling_function(insn->imm)) ||
+	       (bpf_pseudo_kfunc_call(insn) && is_async_callback_calling_kfunc(insn->imm));
 }
 
 static bool is_may_goto_insn(struct bpf_insn *insn)
@@ -9471,7 +9476,7 @@ static int push_callback_call(struct bpf_verifier_env *env, struct bpf_insn *ins
 	 */
 	env->subprog_info[subprog].is_cb = true;
 	if (bpf_pseudo_kfunc_call(insn) &&
-	    !is_sync_callback_calling_kfunc(insn->imm)) {
+	    !is_callback_calling_kfunc(insn->imm)) {
 		verbose(env, "verifier bug: kfunc %s#%d not marked as callback-calling\n",
 			func_id_name(insn->imm), insn->imm);
 		return -EFAULT;
@@ -10981,6 +10986,7 @@ enum special_kfunc_type {
 	KF_bpf_percpu_obj_drop_impl,
 	KF_bpf_throw,
 	KF_bpf_iter_css_task_new,
+	KF_bpf_timer_set_sleepable_cb_impl,
 };
 
 BTF_SET_START(special_kfunc_set)
@@ -11007,6 +11013,7 @@ BTF_ID(func, bpf_throw)
 #ifdef CONFIG_CGROUPS
 BTF_ID(func, bpf_iter_css_task_new)
 #endif
+BTF_ID(func, bpf_timer_set_sleepable_cb_impl)
 BTF_SET_END(special_kfunc_set)
 
 BTF_ID_LIST(special_kfunc_list)
@@ -11037,6 +11044,7 @@ BTF_ID(func, bpf_iter_css_task_new)
 #else
 BTF_ID_UNUSED
 #endif
+BTF_ID(func, bpf_timer_set_sleepable_cb_impl)
 
 static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta)
 {
@@ -11365,12 +11373,28 @@ static bool is_sync_callback_calling_kfunc(u32 btf_id)
 	return btf_id == special_kfunc_list[KF_bpf_rbtree_add_impl];
 }
 
+static bool is_async_callback_calling_kfunc(u32 btf_id)
+{
+	return btf_id == special_kfunc_list[KF_bpf_timer_set_sleepable_cb_impl];
+}
+
 static bool is_bpf_throw_kfunc(struct bpf_insn *insn)
 {
 	return bpf_pseudo_kfunc_call(insn) && insn->off == 0 &&
 	       insn->imm == special_kfunc_list[KF_bpf_throw];
 }
 
+static bool is_bpf_timer_set_sleepable_cb_impl_kfunc(u32 btf_id)
+{
+	return btf_id == special_kfunc_list[KF_bpf_timer_set_sleepable_cb_impl];
+}
+
+static bool is_callback_calling_kfunc(u32 btf_id)
+{
+	return is_sync_callback_calling_kfunc(btf_id) ||
+	       is_async_callback_calling_kfunc(btf_id);
+}
+
 static bool is_rbtree_lock_required_kfunc(u32 btf_id)
 {
 	return is_bpf_rbtree_api_kfunc(btf_id);
@@ -12151,6 +12175,16 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		}
 	}
 
+	if (is_async_callback_calling_kfunc(meta.func_id)) {
+		err = push_callback_call(env, insn, insn_idx, meta.subprogno,
+					 set_timer_callback_state);
+		if (err) {
+			verbose(env, "kfunc %s#%d failed callback verification\n",
+				func_name, meta.func_id);
+			return err;
+		}
+	}
+
 	rcu_lock = is_kfunc_bpf_rcu_read_lock(&meta);
 	rcu_unlock = is_kfunc_bpf_rcu_read_unlock(&meta);
 
@@ -19544,6 +19578,28 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		   desc->func_id == special_kfunc_list[KF_bpf_rdonly_cast]) {
 		insn_buf[0] = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
 		*cnt = 1;
+	} else if (is_bpf_timer_set_sleepable_cb_impl_kfunc(desc->func_id)) {
+		/* The verifier will process callback_fn as many times as necessary
+		 * with different maps and the register states prepared by
+		 * set_timer_callback_state will be accurate.
+		 *
+		 * The following use case is valid:
+		 *  map1 is shared by prog1, prog2, prog3.
+		 *  prog1 calls bpf_timer_init for some map1 elements
+		 *  prog2 calls bpf_timer_set_callback for some map1 elements.
+		 *  Those that were not bpf_timer_init-ed will return -EINVAL.
+		 *  prog3 calls bpf_timer_start for some map1 elements.
+		 *  Those that were not both bpf_timer_init-ed and
+		 *  bpf_timer_set_callback-ed will return -EINVAL.
+		 */
+		struct bpf_insn ld_addrs[2] = {
+			BPF_LD_IMM64(BPF_REG_3, (long)env->prog->aux),
+		};
+
+		insn_buf[0] = ld_addrs[0];
+		insn_buf[1] = ld_addrs[1];
+		insn_buf[2] = *insn;
+		*cnt = 3;
 	}
 	return 0;
 }
On Fri, 2024-03-15 at 15:29 +0100, Benjamin Tissoires wrote:
This patch looks good to me, please see two nitpicks below.

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
[...]
@@ -1350,6 +1358,11 @@ BPF_CALL_3(bpf_timer_start, struct bpf_timer_kern *, timer, u64, nsecs, u64, fla
 		goto out;
 	}
 
+	if (t->is_sleepable && !(flags & BPF_F_TIMER_SLEEPABLE)) {
+		ret = -EINVAL;
+		goto out;
+	}
Nit: the BPF_F_TIMER_ABS and BPF_F_TIMER_CPU_PIN don't affect sleepable timers, should this check be changed to: '(t->is_sleepable && flags != BPF_F_TIMER_SLEEPABLE)' ?
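In code, the suggested stricter check would be something like (a sketch of the review suggestion, not what the patch currently does):

	/* sketch: a sleepable timer takes no other flag, so reject
	 * anything but exactly BPF_F_TIMER_SLEEPABLE
	 */
	if (t->is_sleepable && flags != BPF_F_TIMER_SLEEPABLE) {
		ret = -EINVAL;
		goto out;
	}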
[...]
@@ -12151,6 +12175,16 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		}
 	}
 
+	if (is_async_callback_calling_kfunc(meta.func_id)) {
+		err = push_callback_call(env, insn, insn_idx, meta.subprogno,
+					 set_timer_callback_state);
Nit: still think that this fragment would be better as:
	if (is_bpf_timer_set_sleepable_cb_impl_kfunc(meta.func_id)) {
		err = push_callback_call(env, insn, insn_idx, meta.subprogno,
					 set_timer_callback_state);
Because of the 'set_timer_callback_state' passed to push_callback_call().
+		if (err) {
+			verbose(env, "kfunc %s#%d failed callback verification\n",
+				func_name, meta.func_id);
+			return err;
+		}
+	}
+
 	rcu_lock = is_kfunc_bpf_rcu_read_lock(&meta);
 	rcu_unlock = is_kfunc_bpf_rcu_read_unlock(&meta);
On Mon, Mar 18, 2024 at 11:52 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
On Fri, 2024-03-15 at 15:29 +0100, Benjamin Tissoires wrote:
This patch looks good to me, please see two nitpicks below.

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Thanks!
[...]
@@ -1350,6 +1358,11 @@ BPF_CALL_3(bpf_timer_start, struct bpf_timer_kern *, timer, u64, nsecs, u64, fla
 		goto out;
 	}
 
+	if (t->is_sleepable && !(flags & BPF_F_TIMER_SLEEPABLE)) {
+		ret = -EINVAL;
+		goto out;
+	}
Nit: the BPF_F_TIMER_ABS and BPF_F_TIMER_CPU_PIN don't affect sleepable timers, should this check be changed to: '(t->is_sleepable && flags != BPF_F_TIMER_SLEEPABLE)' ?
Sounds fair enough. Scheduled this for v5.
[...]
@@ -12151,6 +12175,16 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		}
 	}
 
+	if (is_async_callback_calling_kfunc(meta.func_id)) {
+		err = push_callback_call(env, insn, insn_idx, meta.subprogno,
+					 set_timer_callback_state);
Nit: still think that this fragment would be better as:
	if (is_bpf_timer_set_sleepable_cb_impl_kfunc(meta.func_id)) {
		err = push_callback_call(env, insn, insn_idx, meta.subprogno,
					 set_timer_callback_state);
Because of the 'set_timer_callback_state' passed to push_callback_call().
Yeah, sorry I missed that part from the previous reviews.
Fixed in v5
Cheers, Benjamin
Now that we have bpf_timer_set_sleepable_cb() available and working, we can tag the attached callback as sleepable, and let the verifier check the calls and kfuncs in the correct context.
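For instance (mirroring the failure selftest added later in the series), the same callback is accepted or rejected depending on how it was attached:

	static int timer_cb_sleepable(void *map, int *key, struct bpf_timer *timer)
	{
		/* allowed when attached with bpf_timer_set_sleepable_cb();
		 * the verifier rejects this call when the callback is
		 * attached with plain bpf_timer_set_callback()
		 */
		bpf_kfunc_call_test_sleepable();
		return 0;
	}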
Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
---
changes in v4:
- use a function parameter to forward the sleepable information
new in v3 (split from v2 02/10)
---
 include/linux/bpf_verifier.h |  1 +
 kernel/bpf/verifier.c        | 13 ++++++++++---
 2 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 7cb1b75eee38..14e4ee67b694 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -426,6 +426,7 @@ struct bpf_verifier_state {
 	 * while they are still in use.
 	 */
 	bool used_as_loop_entry;
+	bool in_sleepable;
 
 	/* first and last insn idx of this verifier state */
 	u32 first_insn_idx;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 53f85e114a33..0be07da38f8a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1434,6 +1434,7 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
 	}
 	dst_state->speculative = src->speculative;
 	dst_state->active_rcu_lock = src->active_rcu_lock;
+	dst_state->in_sleepable = src->in_sleepable;
 	dst_state->curframe = src->curframe;
 	dst_state->active_lock.ptr = src->active_lock.ptr;
 	dst_state->active_lock.id = src->active_lock.id;
@@ -2407,7 +2408,7 @@ static void init_func_state(struct bpf_verifier_env *env,
 /* Similar to push_stack(), but for async callbacks */
 static struct bpf_verifier_state *push_async_cb(struct bpf_verifier_env *env,
 						int insn_idx, int prev_insn_idx,
-						int subprog)
+						int subprog, bool is_sleepable)
 {
 	struct bpf_verifier_stack_elem *elem;
 	struct bpf_func_state *frame;
@@ -2434,6 +2435,7 @@ static struct bpf_verifier_state *push_async_cb(struct bpf_verifier_env *env,
 	 * Initialize it similar to do_check_common().
 	 */
 	elem->st.branches = 1;
+	elem->st.in_sleepable = is_sleepable;
 	frame = kzalloc(sizeof(*frame), GFP_KERNEL);
 	if (!frame)
 		goto err;
@@ -5279,7 +5281,8 @@ static int map_kptr_match_type(struct bpf_verifier_env *env,
 
 static bool in_sleepable(struct bpf_verifier_env *env)
 {
-	return env->prog->sleepable;
+	return env->prog->sleepable ||
+	       (env->cur_state && env->cur_state->in_sleepable);
 }
 
 /* The non-sleepable programs and sleepable programs with explicit bpf_rcu_read_lock()
@@ -9493,7 +9496,8 @@ static int push_callback_call(struct bpf_verifier_env *env, struct bpf_insn *ins
 		/* there is no real recursion here. timer callbacks are async */
 		env->subprog_info[subprog].is_async_cb = true;
 		async_cb = push_async_cb(env, env->subprog_info[subprog].start,
-					 insn_idx, subprog);
+					 insn_idx, subprog,
+					 is_bpf_timer_set_sleepable_cb_impl_kfunc(insn->imm));
 		if (!async_cb)
 			return -EFAULT;
 		callee = async_cb->frame[0];
@@ -16937,6 +16941,9 @@ static bool states_equal(struct bpf_verifier_env *env,
 	if (old->active_rcu_lock != cur->active_rcu_lock)
 		return false;
 
+	if (old->in_sleepable != cur->in_sleepable)
+		return false;
+
 	/* for states to be equal callsites have to be the same
 	 * and all frame states need to be equivalent
 	 */
On Fri, 2024-03-15 at 15:29 +0100, Benjamin Tissoires wrote: [...]
@@ -5279,7 +5281,8 @@ static int map_kptr_match_type(struct bpf_verifier_env *env,
 
 static bool in_sleepable(struct bpf_verifier_env *env)
 {
-	return env->prog->sleepable;
+	return env->prog->sleepable ||
+	       (env->cur_state && env->cur_state->in_sleepable);
 }
I was curious why 'env->cur_state &&' check was needed and found that removing it caused an error in the following fragment:
static int do_misc_fixups(struct bpf_verifier_env *env)
{
	...
	if (is_storage_get_function(insn->imm)) {
		if (!in_sleepable(env) ||
		    env->insn_aux_data[i + delta].storage_get_func_atomic)
			insn_buf[0] = BPF_MOV64_IMM(BPF_REG_5, (__force __s32)GFP_ATOMIC);
		else
			insn_buf[0] = BPF_MOV64_IMM(BPF_REG_5, (__force __s32)GFP_KERNEL);
		...
	}
	...
}
When do_misc_fixups() is done env->cur_state is NULL. Current implementation would use GFP_ATOMIC allocation even for sleepable callbacks, where GFP_KERNEL is sufficient. Is this something we want to address?
On Tue, Mar 19, 2024 at 12:54 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
On Fri, 2024-03-15 at 15:29 +0100, Benjamin Tissoires wrote: [...]
@@ -5279,7 +5281,8 @@ static int map_kptr_match_type(struct bpf_verifier_env *env,
 
 static bool in_sleepable(struct bpf_verifier_env *env)
 {
-	return env->prog->sleepable;
+	return env->prog->sleepable ||
+	       (env->cur_state && env->cur_state->in_sleepable);
 }
I was curious why 'env->cur_state &&' check was needed and found that removing it caused an error in the following fragment:
static int do_misc_fixups(struct bpf_verifier_env *env)
{
	...
	if (is_storage_get_function(insn->imm)) {
		if (!in_sleepable(env) ||
		    env->insn_aux_data[i + delta].storage_get_func_atomic)
			insn_buf[0] = BPF_MOV64_IMM(BPF_REG_5, (__force __s32)GFP_ATOMIC);
		else
			insn_buf[0] = BPF_MOV64_IMM(BPF_REG_5, (__force __s32)GFP_KERNEL);
		...
	}
	...
}
When do_misc_fixups() is done env->cur_state is NULL. Current implementation would use GFP_ATOMIC allocation even for sleepable callbacks, where GFP_KERNEL is sufficient. Is this something we want to address?
I honestly have no idea of the impact there.
AFAICT, if env->cur_state is not set, we don't even know if the callback will be sleepable or not, so if there is a small penalty, then it's the safest option, no?
Cheers, Benjamin
cp include/uapi/linux/bpf.h tools/include/uapi/linux/bpf.h
Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
---
new in v4
---
 tools/include/uapi/linux/bpf.h | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 3c42b9f1bada..b90def29d796 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -7461,10 +7461,14 @@ struct bpf_core_relo {
  *	- BPF_F_TIMER_ABS: Timeout passed is absolute time, by default it is
  *	  relative to current time.
  *	- BPF_F_TIMER_CPU_PIN: Timer will be pinned to the CPU of the caller.
+ *	- BPF_F_TIMER_SLEEPABLE: Timer will run in a sleepable context, with
+ *	  no guarantees of ordering nor timing (consider this as being just
+ *	  offloaded immediately).
  */
 enum {
 	BPF_F_TIMER_ABS = (1ULL << 0),
 	BPF_F_TIMER_CPU_PIN = (1ULL << 1),
+	BPF_F_TIMER_SLEEPABLE = (1ULL << 2),
 };
 
 /* BPF numbers iterator state */
bpf_experimental.h and ../bpf_testmod/bpf_testmod_kfunc.h both include vmlinux.h, which is not compatible with including time.h or bpf_tcp_helpers.h.
So prevent vmlinux.h from being included, and override the few missing types.
Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
---
new in v4
---
 tools/testing/selftests/bpf/bpf_experimental.h     |   4 +
 .../selftests/bpf/bpf_testmod/bpf_testmod.c        |   5 +
 .../selftests/bpf/bpf_testmod/bpf_testmod_kfunc.h  |   1 +
 tools/testing/selftests/bpf/prog_tests/timer.c     |   1 +
 tools/testing/selftests/bpf/progs/timer.c          |  40 +++++++-
 tools/testing/selftests/bpf/progs/timer_failure.c  | 114 ++++++++++++++++++++-
 6 files changed, 163 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
index a5b9df38c162..79da06ca4136 100644
--- a/tools/testing/selftests/bpf/bpf_experimental.h
+++ b/tools/testing/selftests/bpf/bpf_experimental.h
@@ -459,4 +459,8 @@ extern int bpf_iter_css_new(struct bpf_iter_css *it,
 extern struct cgroup_subsys_state *bpf_iter_css_next(struct bpf_iter_css *it) __weak __ksym;
 extern void bpf_iter_css_destroy(struct bpf_iter_css *it) __weak __ksym;
 
+extern int bpf_timer_set_sleepable_cb_impl(struct bpf_timer *timer,
+		int (callback_fn)(void *map, int *key, struct bpf_timer *timer), void *aux__ign) __ksym;
+#define bpf_timer_set_sleepable_cb(timer, cb) \
+	bpf_timer_set_sleepable_cb_impl(timer, cb, NULL)
 #endif
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
index 39ad96a18123..eb2b78552ca2 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
@@ -494,6 +494,10 @@ __bpf_kfunc static u32 bpf_kfunc_call_test_static_unused_arg(u32 arg, u32 unused
 	return arg;
 }
 
+__bpf_kfunc void bpf_kfunc_call_test_sleepable(void)
+{
+}
+
 BTF_KFUNCS_START(bpf_testmod_check_kfunc_ids)
 BTF_ID_FLAGS(func, bpf_testmod_test_mod_kfunc)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test1)
@@ -520,6 +524,7 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test_ref, KF_TRUSTED_ARGS | KF_RCU)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_destructive, KF_DESTRUCTIVE)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_static_unused_arg)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_offset)
+BTF_ID_FLAGS(func, bpf_kfunc_call_test_sleepable, KF_SLEEPABLE)
 BTF_KFUNCS_END(bpf_testmod_check_kfunc_ids)
 
 static int bpf_testmod_ops_init(struct btf *btf)
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod_kfunc.h
index 7c664dd61059..ce5cd763561c 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod_kfunc.h
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod_kfunc.h
@@ -96,6 +96,7 @@ void bpf_kfunc_call_test_pass2(struct prog_test_pass2 *p) __ksym;
 void bpf_kfunc_call_test_mem_len_fail2(__u64 *mem, int len) __ksym;
 
 void bpf_kfunc_call_test_destructive(void) __ksym;
+void bpf_kfunc_call_test_sleepable(void) __ksym;
 
 void bpf_kfunc_call_test_offset(struct prog_test_ref_kfunc *p);
 struct prog_test_member *bpf_kfunc_call_memb_acquire(void);
diff --git a/tools/testing/selftests/bpf/prog_tests/timer.c b/tools/testing/selftests/bpf/prog_tests/timer.c
index d66687f1ee6a..48973c2e28c7 100644
--- a/tools/testing/selftests/bpf/prog_tests/timer.c
+++ b/tools/testing/selftests/bpf/prog_tests/timer.c
@@ -61,6 +61,7 @@ static int timer(struct timer *timer_skel)
 
 	/* check that code paths completed */
 	ASSERT_EQ(timer_skel->bss->ok, 1 | 2 | 4, "ok");
+	ASSERT_EQ(timer_skel->bss->ok_sleepable, 1, "ok_sleepable");
 
 	prog_fd = bpf_program__fd(timer_skel->progs.race);
 	for (i = 0; i < NUM_THR; i++) {
diff --git a/tools/testing/selftests/bpf/progs/timer.c b/tools/testing/selftests/bpf/progs/timer.c
index f615da97df26..6b19254c5b75 100644
--- a/tools/testing/selftests/bpf/progs/timer.c
+++ b/tools/testing/selftests/bpf/progs/timer.c
@@ -6,6 +6,14 @@
 #include <bpf/bpf_helpers.h>
 #include "bpf_tcp_helpers.h"
 
+#define __VMLINUX_H__
+#define u32 __u32
+#define u64 __u64
+#include "bpf_experimental.h"
+struct prog_test_member1;
+#include "../bpf_testmod/bpf_testmod_kfunc.h"
+#undef __VMLINUX_H__
+
 char _license[] SEC("license") = "GPL";
 struct hmap_elem {
 	int counter;
@@ -34,7 +42,7 @@ struct elem {
 
 struct {
 	__uint(type, BPF_MAP_TYPE_ARRAY);
-	__uint(max_entries, 2);
+	__uint(max_entries, 3);
 	__type(key, int);
 	__type(value, struct elem);
 } array SEC(".maps");
@@ -62,6 +70,7 @@ __u64 callback_check = 52;
 __u64 callback2_check = 52;
 __u64 pinned_callback_check;
 __s32 pinned_cpu;
+__u32 ok_sleepable;
 
 #define ARRAY 1
 #define HTAB 2
@@ -422,3 +431,32 @@ int race(void *ctx)
 
 	return 0;
 }
+
+/* callback for sleepable timer */
+static int timer_cb_sleepable(void *map, int *key, struct bpf_timer *timer)
+{
+	bpf_kfunc_call_test_sleepable();
+	ok_sleepable |= 1;
+	return 0;
+}
+
+SEC("fentry/bpf_fentry_test6")
+int BPF_PROG2(test6, int, a)
+{
+	int key = 2;
+	struct bpf_timer *timer;
+
+	bpf_printk("test6");
+
+	timer = bpf_map_lookup_elem(&array, &key);
+	if (timer) {
+		if (bpf_timer_init(timer, &array, CLOCK_MONOTONIC) != 0)
+			err |= 32768;
+		bpf_timer_set_sleepable_cb(timer, timer_cb_sleepable);
+		bpf_timer_start(timer, 0, BPF_F_TIMER_SLEEPABLE);
+	} else {
+		err |= 65536;
+	}
+
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/timer_failure.c b/tools/testing/selftests/bpf/progs/timer_failure.c
index 0996c2486f05..72942a90189b 100644
--- a/tools/testing/selftests/bpf/progs/timer_failure.c
+++ b/tools/testing/selftests/bpf/progs/timer_failure.c
@@ -8,6 +8,14 @@
 #include "bpf_misc.h"
 #include "bpf_tcp_helpers.h"
 
+#define __VMLINUX_H__
+#define u32 __u32
+#define u64 __u64
+#include "bpf_experimental.h"
+struct prog_test_member1;
+#include "../bpf_testmod/bpf_testmod_kfunc.h"
+#undef __VMLINUX_H__
+
 char _license[] SEC("license") = "GPL";
 
 struct elem {
@@ -16,7 +24,7 @@ struct elem {
 
 struct {
 	__uint(type, BPF_MAP_TYPE_ARRAY);
-	__uint(max_entries, 1),
+	__uint(max_entries, 2);
 	__type(key, int);
 	__type(value, struct elem);
 } timer_map SEC(".maps");
@@ -66,3 +74,107 @@ long BPF_PROG2(test_bad_ret, int, a)
 
 	return 0;
 }
+
+/* callback for sleepable timer */
+static int timer_cb_sleepable(void *map, int *key, struct bpf_timer *timer)
+{
+	bpf_kfunc_call_test_sleepable();
+	return 0;
+}
+
+SEC("fentry/bpf_fentry_test1")
+__log_level(2)
+__failure
+/* check that bpf_timer_set_callback() can not be called with a
+ * sleepable callback
+ */
+__msg("mark_precise: frame0: regs=r0 stack= before")
+__msg(": (85) call bpf_kfunc_call_test_sleepable#") /* anchor message */
+__msg("program must be sleepable to call sleepable kfunc bpf_kfunc_call_test_sleepable")
+int BPF_PROG2(test_non_sleepable_sleepable_callback, int, a)
+{
+	int key = 0;
+	struct bpf_timer *timer;
+
+	timer = bpf_map_lookup_elem(&timer_map, &key);
+	if (timer) {
+		bpf_timer_init(timer, &timer_map, CLOCK_MONOTONIC);
+		bpf_timer_set_callback(timer, timer_cb_sleepable);
+		bpf_timer_start(timer, 0, BPF_F_TIMER_SLEEPABLE);
+	}
+
+	return 0;
+}
+
+SEC("tc")
+/* check that calling bpf_timer_start() without BPF_F_TIMER_SLEEPABLE on a sleepable
+ * callback is returning -EINVAL
+ */
+__retval(-22)
+long test_call_sleepable_missing_flag(void *ctx)
+{
+	int key = 0;
+	struct bpf_timer *timer;
+
+	timer = bpf_map_lookup_elem(&timer_map, &key);
+	if (!timer)
+		return 1;
+
+	if (bpf_timer_init(timer, &timer_map, CLOCK_MONOTONIC))
+		return 2;
+
+	if (bpf_timer_set_sleepable_cb(timer, timer_cb_sleepable))
+		return 3;
+
+	return bpf_timer_start(timer, 0, 0);
+}
+
+SEC("tc")
+/* check that calling bpf_timer_start() with a delay on a sleepable
+ * callback is returning -EINVAL
+ */
+__retval(-22)
+long test_call_sleepable_delay(void *ctx)
+{
+	int key = 1;
+	struct bpf_timer *timer;
+
+	timer = bpf_map_lookup_elem(&timer_map, &key);
+	if (!timer)
+		return 1;
+
+	if (bpf_timer_init(timer, &timer_map, CLOCK_MONOTONIC))
+		return 2;
+
+	if (bpf_timer_set_sleepable_cb(timer, timer_cb_sleepable))
+		return 3;
+
+	return bpf_timer_start(timer, 1, BPF_F_TIMER_SLEEPABLE);
+}
+
+SEC("tc")
+__log_level(2)
+__failure
+/* check that the first argument of bpf_timer_set_sleepable_cb()
+ * is a correct bpf_timer pointer.
+ */
+__msg("mark_precise: frame0: regs=r1 stack= before")
+__msg(": (85) call bpf_timer_set_sleepable_cb_impl#") /* anchor message */
+__msg("arg#0 doesn't point to a map value")
+long test_wrong_pointer(void *ctx)
+{
+	int key = 0;
+	struct bpf_timer *timer;
+
+	timer = bpf_map_lookup_elem(&timer_map, &key);
+	if (!timer)
+		return 1;
+
+	if (bpf_timer_init(timer, &timer_map, CLOCK_MONOTONIC))
+		return 2;
+
+	if (bpf_timer_set_sleepable_cb((void *)&timer, timer_cb_sleepable))
+		return 3;
+
+	return -22;
+}
On Fri, 2024-03-15 at 15:29 +0100, Benjamin Tissoires wrote:
bpf_experimental.h and ../bpf_testmod/bpf_testmod_kfunc.h both include vmlinux.h, which is not compatible with including time.h or bpf_tcp_helpers.h.
So prevent vmlinux.h from being included, and override the few missing types.
Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
[...]
@@ -6,6 +6,14 @@
 #include <bpf/bpf_helpers.h>
 #include "bpf_tcp_helpers.h"
 
+#define __VMLINUX_H__
+#define u32 __u32
+#define u64 __u64
+#include "bpf_experimental.h"
+struct prog_test_member1;
+#include "../bpf_testmod/bpf_testmod_kfunc.h"
+#undef __VMLINUX_H__
Tbh, this looks very ugly. Would it be possible to create a new test file sleepable_timer.c and include bpf_experimental.h there, skipping time.h? It appears that for the new tests the only necessary definition from time.h is CLOCK_MONOTONIC.
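For illustration, a rough sketch of what such a standalone file could look like (the file and program names here are made up, and it assumes CLOCK_MONOTONIC is indeed the only definition needed from time.h):

/* sketch: progs/sleepable_timer.c, dropping time.h and bpf_tcp_helpers.h
 * so that bpf_experimental.h (and thus vmlinux.h) can be included directly
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "bpf_experimental.h"

#define CLOCK_MONOTONIC 1 /* the only definition we needed from time.h */

char _license[] SEC("license") = "GPL";

struct elem {
	struct bpf_timer t;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct elem);
} timer_map SEC(".maps");

static int timer_cb_sleepable(void *map, int *key, struct bpf_timer *timer)
{
	return 0;
}

SEC("fentry/bpf_fentry_test1")
int BPF_PROG(test_sleepable_timer)
{
	int key = 0;
	struct bpf_timer *timer;

	timer = bpf_map_lookup_elem(&timer_map, &key);
	if (!timer)
		return 0;
	bpf_timer_init(timer, &timer_map, CLOCK_MONOTONIC);
	bpf_timer_set_sleepable_cb(timer, timer_cb_sleepable);
	bpf_timer_start(timer, 0, BPF_F_TIMER_SLEEPABLE);
	return 0;
}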
On Tue, Mar 19, 2024 at 1:20 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
On Fri, 2024-03-15 at 15:29 +0100, Benjamin Tissoires wrote:
bpf_experimental.h and ../bpf_testmod/bpf_testmod_kfunc.h both include vmlinux.h, which is not compatible with including time.h or bpf_tcp_helpers.h.
So prevent vmlinux.h from being included, and override the few missing types.
Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
[...]
@@ -6,6 +6,14 @@
 #include <bpf/bpf_helpers.h>
 #include "bpf_tcp_helpers.h"
 
+#define __VMLINUX_H__
+#define u32 __u32
+#define u64 __u64
+#include "bpf_experimental.h"
+struct prog_test_member1;
+#include "../bpf_testmod/bpf_testmod_kfunc.h"
+#undef __VMLINUX_H__
Tbh, this looks very ugly.
heh :)
Would it be possible to create a new test file sleepable_timer.c and include bpf_experimental.h there, skipping time.h?
Sounds like an option, yeah :)
It appears that for the new tests the only necessary definition from time.h is CLOCK_MONOTONIC.
Yeah, that #define should be the only one missing.
I'll try to come up with better tests in v5 :)
Cheers, Benjamin