Correcting Waiman's mail address
On 01/10/2018 09:16, Juergen Gross wrote:
In the following situation a vcpu waiting for a lock might not be woken up from xen_poll_irq():
CPU 1:                  CPU 2:                  CPU 3:
takes a spinlock
                        tries to get lock
                        -> xen_qlock_wait()
                          -> xen_clear_irq_pending()
frees the lock
-> xen_qlock_kick(cpu2)

takes lock again
                                                tries to get lock
                                                -> *lock = _Q_SLOW_VAL
                          -> *lock == _Q_SLOW_VAL ?
                          -> xen_poll_irq()
frees the lock
-> xen_qlock_kick(cpu3)
And cpu 2 will sleep forever.
This can be avoided easily by modifying xen_qlock_wait() to call xen_poll_irq() only if the related irq was not pending and to call xen_clear_irq_pending() only if it was pending.
Cc: stable@vger.kernel.org
Cc: longman@redhat.com
Cc: peterz@infradead.org
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/xen/spinlock.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 973f10e05211..cd210a4ba7b1 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -45,17 +45,12 @@ static void xen_qlock_wait(u8 *byte, u8 val)
 	if (irq == -1)
 		return;
 
-	/* clear pending */
-	xen_clear_irq_pending(irq);
-	barrier();
-
-	/*
-	 * We check the byte value after clearing pending IRQ to make sure
-	 * that we won't miss a wakeup event because of the clearing.
-	 *
-	 * The sync_clear_bit() call in xen_clear_irq_pending() is atomic.
-	 * So it is effectively a memory barrier for x86.
-	 */
-	if (READ_ONCE(*byte) != val)
-		return;
+	/* If irq pending already clear it and return. */
+	if (xen_test_irq_pending(irq)) {
+		xen_clear_irq_pending(irq);
+		return;
+	}