Benjamin Tissoires <bentiss@kernel.org> writes:
On Feb 16 2024, Toke Høiland-Jørgensen wrote:
Benjamin Tissoires <bentiss@kernel.org> writes:
On Feb 15 2024, Martin KaFai Lau wrote:
On 2/14/24 9:18 AM, Benjamin Tissoires wrote:
+static void bpf_timer_work_cb(struct work_struct *work)
+{
+	struct bpf_hrtimer *t = container_of(work, struct bpf_hrtimer, work);
+	struct bpf_map *map = t->map;
+	void *value = t->value;
+	bpf_callback_t callback_fn;
+	void *key;
+	u32 idx;
+
+	BTF_TYPE_EMIT(struct bpf_timer);
+
+	rcu_read_lock();
+	callback_fn = rcu_dereference(t->sleepable_cb_fn);
+	rcu_read_unlock();
I took a very brief look at patch 2. One thing that may be worth asking here: the rcu_read_unlock() seems to be done too early. It is protecting t->sleepable_cb_fn (?), so shouldn't it be done after we have finished using callback_fn?
Probably :)
TBH, every time I work with RCU I spend countless hours trying to re-understand everything, and in this case I'm currently in the "let's make it work" phase rather than fixing concurrency issues. I still gave it a shot in case it solves my issue, but no, I still get the crash.
But given that callback_fn might sleep, isn't it an issue to keep the RCU read-side lock held that long? (We don't seem to call synchronize_rcu(), so it might be fine, but I'd like confirmation from someone else.)
You're right, it isn't OK. From the RCU/checklist.rst doc:
- Unlike most flavors of RCU, it *is* permissible to block in an SRCU read-side critical section (demarked by srcu_read_lock() and srcu_read_unlock()), hence the "SRCU": "sleepable RCU". Please note that if you don't need to sleep in read-side critical sections, you should be using RCU rather than SRCU, because RCU is almost always faster and easier to use than is SRCU.
So we can't use regular RCU protection for the callback in this usage. We'll need to either convert it to SRCU, or add another protection mechanism to make sure the callback function is not freed from under us (like a refcnt). I suspect the latter may be simpler (from reading the rest of that documentation around SRCU).
Currently I'm thinking of also taking a reference on the ->prog held in the bpf_hrtimer, which should prevent the callback from being freed, if I'm not wrong. Then I should be able to just drop the rcu_read_lock before calling the actual callback, and put the ref on ->prog once done.
But to be able to do that I might need to protect ->prog with an RCU too.
Hmm, bpf_timer_set_callback() already takes a reference on the prog; so it's a matter of ensuring that bpf_timer_cancel() and bpf_timer_cancel_and_free() wait for the callback to complete even in the workqueue case. The current 'hrtimer_running' percpu global var is not going to cut it for that, so I guess some other kind of locking will be needed? Not really sure what would be appropriate here; a refcnt, or maybe a full mutex?
I am not actually sure the RCU protection of the callback field itself is that important, given all the other protections that make sure the callback has exited before cancelling. As long as we add another such protection, I think it can just be a READ_ONCE() for getting the cb pointer?
A high-level design question: the intention of the new bpf_timer_set_sleepable_cb() kfunc is actually to delay work to a workqueue. It is useful to delay work from the bpf_timer_cb, and it may also be useful to delay work from other bpf running contexts (e.g. the networking hooks like "tc"). bpf_timer_set_sleepable_cb() seems to unnecessarily force delayed work to go through a bpf_timer_cb.
Basically I'm just a monkey here. I've been told that I should use bpf_timer[0], but my implementation is not finished, as Alexei mentioned that we should bypass the hrtimer, if I'm not wrong [1].
I don't think getting rid of the hrtimer in favour of schedule_delayed_work() makes any sense. schedule_delayed_work() does exactly the same as you're doing in this version of the patch: it schedules a timer callback, and calls queue_work() from inside that timer callback. It just uses "regular" timers instead of hrtimers. So I don't think there's any performance benefit from using that facility; on the contrary, it would require extra logic to handle cancellation etc; might as well just re-use the existing hrtimer-based callback logic we already have, and do a schedule_work() from the hrtimer callback like you're doing now.
I agree that we can nicely emulate delayed work with the current patch series. However, if I understand Alexei's idea (and Martin's), there are cases where we just want schedule_work(), without any timer involved. That makes for a weird timer (with a delay always equal to 0), but it would address those latency concerns.
So (and this also answers your second email today) I'm thinking at:
- have multiple flags to control the timer (with dedicated timer_cb kernel functions):
- BPF_F_TIMER_HRTIMER (default)
- BPF_F_TIMER_WORKER (no timer, just workqueue)
- BPF_F_TIMER_DELAYED_WORKER (hrtimer + workqueue, or an actual delayed_work, but that's re-implementing stuff)
I don't think the "delayed" bit needs to be a property of the timer; the context in which the callback is executed (softirq vs workqueue) does, because that has consequences for how the callback is verified (it would be neat if we could know the flag at verification time, but since we can't, we need the pairing with _set_sleepable_cb()).
But the same timer could be used both as an immediate and a delayed callback during its lifetime; so I think this should rather be governed by a flag to bpf_timer_start(). In fact, the patch I linked earlier[0] does just that, adding a BPF_TIMER_IMMEDIATE flag to bpf_timer_start(). I.e., keep the hrtimer allocated at all times, but skip going through it if that flag is set.
An alternative could also be to just special-case a zero timeout in bpf_timer_start(); I don't actually recall why I went with the flag instead when I wrote that patch...
-Toke
[0] https://git.kernel.org/pub/scm/linux/kernel/git/toke/linux.git/commit/?h=xdp...