On Thu, Jul 08, 2021 at 10:54:58AM -0400, Phil Auld wrote:
Sorry... I don't have a nice diagram. I'm still looking at what all those macros actually mean on the various architectures.
Don't worry about other architectures, let's focus on Power, because that's the case where you can reproduce funnies. Now Power only has 2 barrier ops (not quite true, but close enough for all this):
- SYNC is the full barrier
- LWSYNC is a TSO like barrier
Pretty much everything (LOAD-ACQUIRE, STORE-RELEASE, WMB, RMB) uses LWSYNC. Only MB results in SYNC.
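For reference, that mapping looks roughly like this (a simplified sketch, not the exact arch/powerpc definitions; the my_* names are made up for illustration):

	/* MB: the full barrier */
	#define my_smp_mb()	asm volatile("sync" : : : "memory")

	/* WMB / RMB: the TSO-like barrier */
	#define my_smp_wmb()	asm volatile("lwsync" : : : "memory")
	#define my_smp_rmb()	asm volatile("lwsync" : : : "memory")

	/* STORE-RELEASE: LWSYNC followed by the plain store */
	#define my_store_release(p, v)					\
	do {								\
		asm volatile("lwsync" : : : "memory");			\
		WRITE_ONCE(*(p), (v));					\
	} while (0)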
Power is 'funny' because their spinlocks are weaker than everybody else's, but AFAICT that doesn't seem relevant here.
Using what you have above I get the same thing. It looks like it should be ordered, but in practice it's not, and ordering it "more", as I did in the patch, fixes it.
And you're running Linus' tree, not some franken-kernel from RHT, right? As asked in that other email, can you try with just the WMB added? I really don't believe that RMB you added can make a difference.
Also, can you try with TTWU_QUEUE disabled (without any additional barriers added), that simplifies the wakeup path a lot.
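(Assuming a SCHED_DEBUG build, that's just the runtime toggle through the sched_features debugfs file, something like:

	echo NO_TTWU_QUEUE > /sys/kernel/debug/sched/features

or /sys/kernel/debug/sched_features on older trees.)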
Is it possible that the bit field is causing some of the assumptions about ordering in those various macros to be off?
*should* not matter...
	prev->sched_contributes_to_load = X;

	smp_store_release(&prev->on_cpu, 0);
	  /* which, on Power, expands to: */
	  asm("LWSYNC" : : : "memory");
	  WRITE_ONCE(prev->on_cpu, 0);
due to that memory clobber, the compiler must emit whatever stores are required for the bitfield prior to the LWSYNC.
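IOW this is the usual message-passing shape; roughly (a simplified sketch, the waker side really spins in smp_cond_load_acquire() inside ttwu(), and use() here is just a stand-in):

	/* __schedule()/finish_task() on prev's CPU */
	prev->sched_contributes_to_load = X;
	smp_store_release(&prev->on_cpu, 0);

	/* try_to_wake_up() on the waking CPU */
	if (!smp_load_acquire(&p->on_cpu))
		use(p->sched_contributes_to_load);	/* should observe X */

The release should order the bitfield store before the ->on_cpu store, and the acquire should order the ->on_cpu load before the later bitfield load.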
I notice in all the comments about smp_mb__after_spinlock etc., it's always WRITE_ONCE/READ_ONCE on the variables in question, but we can't do that with the bit field.
Yeah, but both ->on_rq and ->sched_contributes_to_load are 'normal' stores. That said, given that ttwu() does a READ_ONCE() on ->on_rq, we should match that with WRITE_ONCE()...
So I think we should do the below, but I don't believe it'll make a difference. Let me stare more.
---
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ca9a523c9a6c..da93551b298d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1973,12 +1973,12 @@ void activate_task(struct rq *rq, struct task_struct *p, int flags)
 {
 	enqueue_task(rq, p, flags);
 
-	p->on_rq = TASK_ON_RQ_QUEUED;
+	WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
 }
 
 void deactivate_task(struct rq *rq, struct task_struct *p, int flags)
 {
-	p->on_rq = (flags & DEQUEUE_SLEEP) ? 0 : TASK_ON_RQ_MIGRATING;
+	WRITE_ONCE(p->on_rq, (flags & DEQUEUE_SLEEP) ? 0 : TASK_ON_RQ_MIGRATING);
 
 	dequeue_task(rq, p, flags);
 }
 
@@ -5662,11 +5662,11 @@ static bool try_steal_cookie(int this, int that)
 		if (p->core_occupation > dst->idle->core_occupation)
 			goto next;
 
-		p->on_rq = TASK_ON_RQ_MIGRATING;
+		WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
 		deactivate_task(src, p, 0);
 		set_task_cpu(p, this);
 		activate_task(dst, p, 0);
-		p->on_rq = TASK_ON_RQ_QUEUED;
+		WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
 
 		resched_curr(dst);