On Fri, Feb 16, 2024 at 10:50:10AM +0100, Benjamin Tissoires wrote:
>  static bool is_rbtree_lock_required_kfunc(u32 btf_id)
>  {
>  	return is_bpf_rbtree_api_kfunc(btf_id);
> @@ -12140,6 +12143,16 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>  		}
>  	}
> +	if (is_bpf_timer_set_sleepable_cb_kfunc(meta.func_id)) {
> +		err = push_callback_call(env, insn, insn_idx, meta.subprogno,
> +					 set_timer_callback_state);
> +		if (err) {
> +			verbose(env, "kfunc %s#%d failed callback verification\n",
> +				func_name, meta.func_id);
> +			return err;
> +		}
> +	}
All makes sense so far.

Please squash all the fixes and repost. It's hard to do a proper review with the patch in this shape.

As far as the rcu_read_lock/unlock that is done in the callback... it feels buggy and unnecessary. The bpf prog and the timer won't disappear while work is queued. Array and hash maps will call bpf_obj_free_timer() before going away.
And things like:

+	rcu_read_lock();
+	callback_fn = rcu_dereference(t->sleepable_cb_fn);
+	rcu_read_unlock();
+	if (!callback_fn)
+		return;
is 99% broken. The if (!callback_fn) line is a UAF: the RCU read-side critical section ends at rcu_read_unlock(), so nothing protects the pointer that rcu_dereference() fetched. By the time it is checked, let alone called, the object it points into may already have been freed. Any use of the pointer has to stay inside the read-side critical section.