Hi Peter,
On Fri, Jul 09, 2021 at 02:57:10PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 08, 2021 at 10:54:58AM -0400, Phil Auld wrote:
> > Sorry... I don't have a nice diagram. I'm still looking at what all
> > those macros actually mean on the various architectures.
> Don't worry about other architectures, let's focus on Power, because
> that's the case where you can reproduce funnies. Now Power only has 2
> barrier ops (not quite true, but close enough for all this):
>
>   SYNC   is the full barrier
>   LWSYNC is a TSO-like barrier
>
> Pretty much everything (LOAD-ACQUIRE, STORE-RELEASE, WMB, RMB) uses
> LWSYNC. Only MB results in SYNC.
>
> Power is 'funny' because their spinlocks are weaker than everybody
> else's, but AFAICT that doesn't seem relevant here.
Thanks.
Using what you have above I get the same thing. It looks like it
should be ordered but in practice it's not, and ordering it "more",
as I did in the patch, fixes it.
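For my own notes while tracing this, here's how I'm reading the Power
mappings. This is a simplified sketch of arch/powerpc/include/asm/barrier.h,
not the literal source (the real macros have more layers):

        #define smp_mb()        asm volatile("sync"   ::: "memory")
        #define smp_rmb()       asm volatile("lwsync" ::: "memory")
        #define smp_wmb()       asm volatile("lwsync" ::: "memory")

        /* the release/acquire pair in play here, roughly: */
        #define smp_store_release(p, v) do {                    \
                asm volatile("lwsync" ::: "memory");            \
                WRITE_ONCE(*(p), (v));                          \
        } while (0)

        #define smp_load_acquire(p) ({                          \
                typeof(*(p)) ___v = READ_ONCE(*(p));            \
                asm volatile("lwsync" ::: "memory");            \
                ___v;                                           \
        })

If I'm reading that right then, as you say, everything in these paths
is LWSYNC except the explicit smp_mb()s.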
> And you're running Linus' tree, not some franken-kernel from RHT,
> right? As asked in that other email, can you try with just the WMB
> added? I really don't believe that RMB you added can make a
> difference.
So, no. Right now the reproducer is on the franken-kernel :(
As far as I can tell the relevant code paths (schedule, barriers,
wakeup, etc.) are all current and the same. I traced through your
diagram and it all matches exactly.
I have a suspicion that Linus's tree may hide it. I believe this is
tickled by NFS IO, which I _think_ is affected by the unbound
workqueue changes that may make it less likely to do the wakeup on a
different CPU. But that's just speculation.
The issue is that the systems under test here are in a partner's lab to which I have no direct access.
I will try to get an upstream build on there, if possible, as soon as I can.
> Also, can you try with TTWU_QUEUE disabled (without any additional
> barriers added)? That simplifies the wakeup path a lot.
Will do.
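(Assuming the runtime knob is sufficient for this, I'll flip it via
debugfs on the test box rather than doing a rebuild:

        # assumes CONFIG_SCHED_DEBUG and debugfs mounted as usual
        echo NO_TTWU_QUEUE > /sys/kernel/debug/sched_features
        cat /sys/kernel/debug/sched_features    # verify it took

and put it back afterwards by echoing TTWU_QUEUE to the same file.)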
> > Is it possible that the bit field is causing some of the
> > assumptions about ordering in those various macros to be off?
> *should* not matter...
> 	prev->sched_contributes_to_load = X;
>
> 	smp_store_release(&prev->on_cpu, 0);
> 	  asm("LWSYNC" : : : "memory");
> 	  WRITE_ONCE(prev->on_cpu, 0);
>
> Due to that memory clobber, the compiler must emit whatever stores
> are required for the bitfield prior to the LWSYNC.
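Right, the clobber covers it. To spell out what I was worried about
with the bit field: the compiler implements that store as a
read-modify-write of the containing word, so (with invented names for
the word and mask) it emits something like:

        unsigned int w = prev->flags_word;      /* load whole word; names invented */
        w &= ~CONTRIB_MASK;                     /* clear the old bit */
        w |= (unsigned int)X << CONTRIB_SHIFT;  /* set the new value */
        prev->flags_word = w;                   /* plain store, but... */
        asm("LWSYNC" : : : "memory");           /* ...kept after it by the clobber */
        WRITE_ONCE(prev->on_cpu, 0);

As long as the clobber keeps that plain store above the LWSYNC, I
agree the bit field itself shouldn't matter.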
> > I notice in all the comments about smp_mb__after_spinlock etc,
> > it's always WRITE_ONCE/READ_ONCE on the variables in question but
> > we can't do that with the bit field.
> Yeah, but both ->on_rq and ->sched_contributes_to_load are 'normal'
> stores. That said, given that ttwu() does a READ_ONCE() on ->on_rq,
> we should match that with WRITE_ONCE()...
>
> So I think we should do the below, but I don't believe it'll make a
> difference. Let me stare more.
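For context, I believe the read side that the WRITE_ONCE()s are
matching is the early check in try_to_wake_up(); quoting from memory,
so the exact lines may differ:

        /* try_to_wake_up(), roughly */
        smp_rmb();
        if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
                goto unlock;

so the patch below just makes the store side of ->on_rq match that
READ_ONCE().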
I'm out of the office for the next week+ so don't stare too hard.
I'll try to get the tests you asked for as soon as I get back in the
(home) office.
I'm not sure the below will make a difference either, but will try it too.
Thanks again for the help. And sorry for the timing.
Cheers,
Phil
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index ca9a523c9a6c..da93551b298d 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -1973,12 +1973,12 @@ void activate_task(struct rq *rq, struct task_struct *p, int flags)
>  {
>  	enqueue_task(rq, p, flags);
> 
> -	p->on_rq = TASK_ON_RQ_QUEUED;
> +	WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
>  }
> 
>  void deactivate_task(struct rq *rq, struct task_struct *p, int flags)
>  {
> -	p->on_rq = (flags & DEQUEUE_SLEEP) ? 0 : TASK_ON_RQ_MIGRATING;
> +	WRITE_ONCE(p->on_rq, (flags & DEQUEUE_SLEEP) ? 0 : TASK_ON_RQ_MIGRATING);
> 
>  	dequeue_task(rq, p, flags);
>  }
> 
> @@ -5662,11 +5662,11 @@ static bool try_steal_cookie(int this, int that)
>  		if (p->core_occupation > dst->idle->core_occupation)
>  			goto next;
> 
> -		p->on_rq = TASK_ON_RQ_MIGRATING;
> +		WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
>  		deactivate_task(src, p, 0);
>  		set_task_cpu(p, this);
>  		activate_task(dst, p, 0);
> -		p->on_rq = TASK_ON_RQ_QUEUED;
> +		WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
> 
>  		resched_curr(dst);