The following situation leads to deadlock:
[task 1]                           [task 2]                            [task 3]
kill_fasync()                      mm_update_next_owner()              copy_process()
 spin_lock_irqsave(&fa->fa_lock)    read_lock(&tasklist_lock)           write_lock_irq(&tasklist_lock)
  send_sigio()                       <IRQ>                              ...
   read_lock(&fown->lock)             kill_fasync()                     ...
    read_lock(&tasklist_lock)          spin_lock_irqsave(&fa->fa_lock)  ...
Task 1 can't acquire tasklist_lock for read, since task 3 has already expressed its wish to take the lock exclusively, and qrwlock makes new readers wait behind a queued writer. Task 2 holds tasklist_lock for read, but it can't take the spin lock fa->fa_lock, which task 1 holds.
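The cycle closes because qrwlock is fair to writers: a reader that hits the slowpath waits behind a queued writer even when the lock is currently only read-held. Roughly, the reader fastpath of that era looks like the sketch below (abridged from include/asm-generic/qrwlock.h; comments are mine):

static inline void queued_read_lock(struct qrwlock *lock)
{
	u32 cnts;

	cnts = atomic_add_return_acquire(_QR_BIAS, &lock->cnts);
	if (likely(!(cnts & _QW_WMASK)))
		return;

	/*
	 * _QW_WMASK covers both a waiting and a holding writer, so a
	 * queued writer (task 3) pushes new readers (task 1) into the
	 * slowpath, where they normally line up behind it.
	 */
	queued_read_lock_slowpath(lock);
}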
The patch makes queued_read_lock_slowpath() give task 1 the same priority as if it were an interrupt handler, so it takes the lock despite task 3 waiting for it, and this prevents the deadlock. There seems to be no better way to detect such situations; in general it is also not good to make a reader wait that long with interrupts disabled, since the read_lock may nest with other locks and delay the whole system.
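For context, with the change applied the slowpath reads roughly as follows (abridged sketch of kernel/locking/qrwlock.c, not the complete function; comments are mine):

void queued_read_lock_slowpath(struct qrwlock *lock)
{
	/*
	 * Readers come here when they cannot get the lock without waiting
	 */
	if (unlikely(irqs_disabled())) {
		/*
		 * The writer may be only waiting (not yet holding the
		 * lock), so spin until no writer holds the lock, then
		 * take it for read without joining the queue. This is
		 * what lets task 1 above make progress.
		 */
		atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
		return;
	}

	/*
	 * Otherwise undo the fastpath reader increment and line up
	 * fairly behind any waiting writer.
	 */
	atomic_sub(_QR_BIAS, &lock->cnts);
	...
}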
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 kernel/locking/qrwlock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
index c7471c3fb798..d15df85de8f5 100644
--- a/kernel/locking/qrwlock.c
+++ b/kernel/locking/qrwlock.c
@@ -32,7 +32,7 @@ void queued_read_lock_slowpath(struct qrwlock *lock)
 	/*
 	 * Readers come here when they cannot get the lock without waiting
 	 */
-	if (unlikely(in_interrupt())) {
+	if (unlikely(irqs_disabled())) {
 		/*
 		 * Readers in interrupt context will get the lock immediately
 		 * if the writer is just waiting (not holding the lock yet),