On Tue, Jul 10, 2018 at 12:26:34PM +0800, Huacai Chen wrote:
> Hi, Paul and Peter,
>
> I think we have found the real root cause. READ_ONCE() doesn't need
> any barriers; the problematic code is queued_spin_lock_slowpath() in
> kernel/locking/qspinlock.c:
>
> 	if (old & _Q_TAIL_MASK) {
> 		prev = decode_tail(old);
>
> 		/* Link @node into the waitqueue. */
> 		WRITE_ONCE(prev->next, node);
>
> 		pv_wait_node(node, prev);
> 		arch_mcs_spin_lock_contended(&node->locked);
>
> 		/*
> 		 * While waiting for the MCS lock, the next pointer may have
> 		 * been set by another lock waiter. We optimistically load
> 		 * the next pointer & prefetch the cacheline for writing
> 		 * to reduce latency in the upcoming MCS unlock operation.
> 		 */
> 		next = READ_ONCE(node->next);
> 		if (next)
> 			prefetchw(next);
> 	}
>
> After the WRITE_ONCE(prev->next, node), arch_mcs_spin_lock_contended()
> enters a READ_ONCE() loop, so the effect of the WRITE_ONCE() stays
> invisible to other cores because the store sits in the write buffer.
And _that_ is a hardware bug. Also please explain how that is different from the ARM bug mentioned elsewhere.
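For context, arch_mcs_spin_lock_contended(&node->locked) is
smp_cond_load_acquire() on the node's locked flag, which boils down to
a pure-load polling loop. A minimal sketch of that expansion
(simplified from include/asm-generic/barrier.h, quoted from memory):

	/*
	 * Roughly what arch_mcs_spin_lock_contended(&node->locked)
	 * expands to.  The waiter that has just published prev->next
	 * spins here issuing only loads; the claim above is that on
	 * Loongson-3 this keeps the earlier WRITE_ONCE() parked in
	 * the store fill buffer, invisible to the predecessor.
	 */
	for (;;) {
		if (READ_ONCE(node->locked))
			break;
		cpu_relax();
	}
	smp_acquire__after_ctrl_dep();	/* upgrade the loop exit to ACQUIRE */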
> As a result, arch_mcs_spin_lock_contended() will wait forever, because
> whoever is spinning on prev->next never sees the new value and also
> waits forever. I think the right way to fix this is to flush the SFB
> (store fill buffer) after this WRITE_ONCE(), but I don't have a good
> solution:
>
> 1. MIPS has wbflush(), which can be used to flush the SFB, but other
>    archs don't have it;
Sane archs don't need this.
> 2. Every arch has mb(), and adding mb() after the WRITE_ONCE() does
>    solve Loongson's problem, but semantically mb() is a memory barrier,
>    not a wbflush();
Still wrong, because any non-broken arch doesn't need that flush to begin with.
> 3. Maybe we can define a Loongson-specific WRITE_ONCE(), but not every
>    WRITE_ONCE() needs wbflush(); we only need wbflush() between a
>    WRITE_ONCE() and a subsequent READ_ONCE() loop.
No no no no ...
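To make options 2 and 3 concrete, a hypothetical sketch of what each
would mean at the store site (illustration only; WRITE_ONCE_FLUSH is a
made-up name, and per the replies above neither variant belongs in
generic code):

	/* Option 2: follow the store with a full barrier.  mb() is
	 * SYNC on MIPS and happens to drain Loongson's store fill
	 * buffer, but it is a memory barrier, not a flush primitive. */
	WRITE_ONCE(prev->next, node);
	mb();

	/* Option 3: a hypothetical Loongson-only flushing store, so
	 * that only stores followed by a READ_ONCE() loop pay for
	 * the flush. */
	#define WRITE_ONCE_FLUSH(x, val)			\
	do {							\
		WRITE_ONCE(x, val);				\
		wbflush();	/* MIPS write-buffer flush */	\
	} while (0)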
So now explain why the cpu_relax() hack that arm did doesn't work for you?
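For reference, the ARM hack in question: ARMv6 cores and Cortex-A9
parts affected by erratum 754327 ("no automatic Store Buffer drain")
can keep a store parked in the store buffer while the CPU sits in a
tight polling loop, so arch/arm promotes cpu_relax() to a full barrier
there (approximately, from arch/arm/include/asm/processor.h):

	#if __LINUX_ARM_ARCH__ == 6 || defined(CONFIG_ARM_ERRATA_754327)
	#define cpu_relax()			smp_mb()
	#else
	#define cpu_relax()			barrier()
	#endif

With that definition, every pass through a READ_ONCE()/cpu_relax()
polling loop (like the one sketched earlier) drains the spinning CPU's
own write buffer, which would presumably flush the stuck prev->next
store on Loongson as well.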