On Mon, Sep 14, 2020 at 01:59:15PM -0700, Linus Torvalds wrote:
> On Mon, Sep 14, 2020 at 1:45 PM Thomas Gleixner <tglx@linutronix.de> wrote:
> > Recently merged code does:
> >
> >     gfp = preemptible() ? GFP_KERNEL : GFP_ATOMIC;
> >
> > Looks obviously correct, except for the fact that preemptible() is
> > unconditionally false for CONFIG_PREEMPT_COUNT=n, i.e. all allocations
> > in that code use GFP_ATOMIC on such kernels.
> I don't think this is a good reason to entirely get rid of the
> no-preempt thing.
>
> The above is just garbage. It's bogus. You can't do it.
>
> Blaming the no-preempt code for this bug is extremely unfair, imho.
>
> And the no-preempt code does help make for much better code generation
> for simple spinlocks.
>
> Where is that horribly buggy recent code? It's not in that exact
> format, certainly, since 'grep' doesn't find it.
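For reference, the behavior Thomas describes falls directly out of how
preemptible() is defined. A simplified excerpt of the relevant
definitions from include/linux/preempt.h:

/* Simplified excerpt from include/linux/preempt.h. */
#ifdef CONFIG_PREEMPT_COUNT
#define preemptible()	(preempt_count() == 0 && !irqs_disabled())
#else
/*
 * Without CONFIG_PREEMPT_COUNT there is no preempt counter to consult,
 * so preemptible() degenerates to a build-time constant: it reports
 * "not preemptible" even in contexts where sleeping would be fine,
 * which is why the GFP_KERNEL arm of the quoted code can never be taken.
 */
#define preemptible()	0
#endif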
It would be convenient for that "gfp =" code to work, as this would
allow better cache locality while invoking RCU callbacks, and would
further provide better robustness to callback floods. The full story is
quite long, but here are the alternatives that have not yet been proven
to be abject failures:
1. Use workqueues to do the allocations in a clean context (see the
sketch after this list). While waiting for the allocations, the
callbacks are queued in the old cache-busting manner. This functions
correctly, but in the meantime (which on busy systems can be some time)
the cache locality and robustness are lost.
2. Provide the ability to allocate memory in raw atomic context. This is extremely effective, especially when used in combination with #1 above, but as you might suspect, the MM guys don't like it much.
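To make alternative #1 concrete, here is a minimal sketch of the
workqueue approach. The names (fill_page_cache(),
stash_page_for_kvfree_rcu(), request_page_cache_fill()) are
illustrative, not the actual kernel symbols:

#include <linux/gfp.h>
#include <linux/workqueue.h>

static void fill_page_cache(struct work_struct *work);
static DECLARE_WORK(page_cache_work, fill_page_cache);

/* Runs later in clean process context, where GFP_KERNEL is always legal. */
static void fill_page_cache(struct work_struct *work)
{
	unsigned long page = __get_free_page(GFP_KERNEL);

	if (page)
		stash_page_for_kvfree_rcu((void *)page); /* hypothetical helper */
}

/*
 * Callable from any context: the allocation itself is deferred, and
 * until it completes, callbacks are queued the old cache-busting way.
 */
static void request_page_cache_fill(void)
{
	schedule_work(&page_cache_work);
}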
In contrast, with Thomas's patch series, call_rcu() and kvfree_rcu() could just look at preemptible() to see whether or not it was safe to allocate memory, even in !PREEMPT kernels -- and in the common case, it almost always would be safe. It is quite possible that this approach would work in isolation, or failing that, that adding #1 above would do the trick.
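A sketch of what that could look like, assuming preempt_count() is
unconditionally available as in Thomas's series (alloc_for_rcu() is a
hypothetical name, not an existing kernel function):

#include <linux/preempt.h>
#include <linux/slab.h>

/* Hypothetical helper: pick GFP flags from the actual calling context. */
static void *alloc_for_rcu(size_t size)
{
	/*
	 * With the preempt counter always maintained, this test is
	 * accurate even in !PREEMPT kernels, so the common case gets
	 * GFP_KERNEL instead of pessimistically assuming GFP_ATOMIC.
	 */
	gfp_t gfp = preemptible() ? GFP_KERNEL : GFP_ATOMIC;

	return kmalloc(size, gfp);
}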
I understand that this is all very hand-wavy, and I do apologize for that. If you really want the full sad story with performance numbers and the works, let me know!
Thanx, Paul