Excerpts from Davidlohr Bueso's message of June 8, 2021 12:33 pm:
On Mon, 07 Jun 2021, André Almeida wrote:
At 22:09 on 04/06/21, Nicholas Piggin wrote:
Actually one other scalability thing while I remember it:
futex_wait currently requires that the lock word be tested under the queue spin lock (to avoid consuming a wakeup). The problem with this is that the lock word can be a very hot cache line if you have a lot of concurrency, so accessing it under the queue lock can increase queue lock hold time.
I would prefer if the new API was relaxed to avoid this restriction (e.g., any wait call may consume a wakeup so it's up to userspace to avoid that if it is a problem).
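For example, userspace that doesn't care can just use the usual retry loop -- rough sketch below, written against today's SYS_futex call with made-up names, purely for illustration:

    #include <linux/futex.h>
    #include <stdatomic.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Wait until *word no longer equals expected.  A wakeup that was
     * "consumed" by us (or a spurious return) just costs one extra
     * trip around the loop; correctness doesn't depend on it. */
    static void wait_on(atomic_uint *word, unsigned int expected)
    {
            while (atomic_load(word) == expected)
                    syscall(SYS_futex, word, FUTEX_WAIT, expected,
                            NULL, NULL, 0);
    }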
Maybe I'm wrong, but AFAIK the goal of checking the lock word inside the spin lock is to avoid sleeping forever (in other words, wrongly assuming that the lock is taken and missing a wakeup call), not to avoid consuming wakeups. Or at least this is my interpretation of this long comment in futex.c:
https://elixir.bootlin.com/linux/v5.12.9/source/kernel/futex.c#L51
I think what Nick is referring to is that futex_wait() could return 0 instead of EAGAIN upon a uval != val condition if the check is done without the hb lock. The value could have changed between when userspace did the condition check and called into futex(2) to block in the slowpath.
I just mean the check could be done after queueing ourselves on the wait queue (and unlocking the waitqueue lock, not checking while holding the lock). That is the standard pattern used everywhere else by the kernel:
prepare_to_wait();   /* -> lock; add_wait_queue; unlock; */
check_again();
schedule();
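Fleshed out a little (a generic sketch of the idiom, not the actual futex code; wq, uaddr and val are just placeholders):

    #include <linux/sched.h>
    #include <linux/wait.h>

    static void wait_until_changed(wait_queue_head_t *wq, u32 *uaddr, u32 val)
    {
            DEFINE_WAIT(wait);

            for (;;) {
                    /* lock; add_wait_queue; unlock */
                    prepare_to_wait(wq, &wait, TASK_UNINTERRUPTIBLE);
                    /* Re-check after queueing, outside the waitqueue
                     * lock: a concurrent wakeup finds us already queued,
                     * so it cannot be lost. */
                    if (READ_ONCE(*uaddr) != val)
                            break;
                    schedule();
            }
            finish_wait(wq, &wait);
    }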
It can still return EAGAIN if there is a reasonable use for it, but I'd be wary of user code that cares about this -- it's racy: you could arrive right before the value changes or right after it changes, so I would be suspicious of any user code that checks for it (though I'm willing to see a use case that really cares).
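To spell out the race (illustrative interleaving, not from any particular code):

    waiter                                another thread
    ------                                --------------
    v = load(word);    /* v == BUSY */
                                          store(word, FREE);
    futex_wait(word, v);
      kernel re-reads word, sees FREE,
      returns immediately

Had the store landed one step later, the same program would have blocked and then been woken normally, so nothing useful can hinge on which of the two outcomes it observes.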
But such spurious scenarios should be pretty rare, and while I agree that the cacheline can be hot, I'm not sure how much of a performance issue this really is(?),
It's not a spurious anything. The problem is that contention on the lock word cacheline means it can take a relatively long time just to perform that one load instruction. Mandating that it must be done while holding the lock translates to increased lock hold times.
This matters particularly in situations that have lock stealing, optimistic spinning, reader-writer locks or more exotic kinds of things that allow some common types of critical section to go through while others are blocking. And particularly when such things hash-collide with other futexes that share the same hash lock.
compared to other issues, certainly not to govern futex2 design. Changing such semantics would be a _huge_ difference between futex1 and futex2.
futex1 behaviour should not govern futex2 design. That's the only nice thing you get with an API change, so we should take full advantage of it. I'm not saying make changes for no reason, but I gave a reason, so that should be countered with a better reason to not change.
Thanks, Nick
At least compared, for example, to the hb collisions serializing independent futexes, affecting both performance and determinism. And I agree that a new interface should address this problem - although most of the workloads I have seen in production use only a handful of futexes and larger thread counts. One thing that crossed my mind (but I have not actually sat down to look at it) would be to use rhashtables for the dynamic resizing, but of course that would probably add a decent amount of overhead compared to the simple hashing we currently have.
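Very roughly, I'd imagine something along these lines (purely illustrative, not a worked design; the names and fields are made up):

    #include <linux/rhashtable.h>

    struct futex_bucket {
            union futex_key key;      /* lookup key */
            struct rhash_head node;
            /* per-futex waiter list, lock, refcount, ... */
    };

    static const struct rhashtable_params futex_ht_params = {
            .key_len             = sizeof(union futex_key),
            .key_offset          = offsetof(struct futex_bucket, key),
            .head_offset         = offsetof(struct futex_bucket, node),
            .automatic_shrinking = true,
    };

    /* rhashtable_init(&futex_ht, &futex_ht_params) once at init, then
     * rhashtable_lookup_fast() / rhashtable_insert_fast() in the
     * wait/wake paths, letting the table resize with the futex count. */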