On Mon, Sep 14 2020 at 15:24, Linus Torvalds wrote:
> On Mon, Sep 14, 2020 at 2:55 PM Thomas Gleixner <tglx@linutronix.de> wrote:
Yes, it does generate better code, but I tried hard to spot a difference in the various metrics exposed by perf. It's all in the noise, and I can only spot a difference when the actual preemption check after the decrement is replaced by a NOP.
> I'm somewhat more worried about the small-device case.
I just checked on one of my old UP ARM toys which I run at home. The .text increase is about 2% (75k) and none of the tests I ran showed any significant difference. Couldn't verify with perf though as the PMU on that piece of art is unusable.
> That said, the diffstat certainly has its very clear charm, and I do agree that it makes things simpler.
>
> I'm just not convinced people should ever EVER do things like that "if (preemptible())" garbage. It sounds like somebody is doing seriously bad things.
OTOH, having a working 'preemptible()' (or maybe better named 'can_schedule()') check makes tons of sense for making decisions about allocation modes or other things.
We're currently looking through all of the in_atomic(), in_interrupt() etc. usage sites, and quite a few of them are historic and have the clear intent of checking whether the code is called from task context or hard/softirq context. Lots of them are completely broken or just work by chance.
There is clearly historic precedent that context checks are useful, but they can only be useful if we have a consistent mechanism which works everywhere.
Of course we could mandate that every interface which might be called from one context or the other takes a context argument or provides two variants of the same thing. But I'm not really convinced that's a win over having a consistent and reliable set of checks.
Thanks,
tglx