On 04.04.2018 18:51, Kirill Tkhai wrote:
On 04.04.2018 18:35, Peter Zijlstra wrote:
On Wed, Apr 04, 2018 at 06:24:39PM +0300, Kirill Tkhai wrote:
The following situation leads to a deadlock:
[task 1]                           [task 2]                           [task 3]
kill_fasync()                      mm_update_next_owner()             copy_process()
 spin_lock_irqsave(&fa->fa_lock)    read_lock(&tasklist_lock)          write_lock_irq(&tasklist_lock)
  send_sigio()                     <IRQ>                              ...
   read_lock(&fown->lock)           kill_fasync()                     ...
    read_lock(&tasklist_lock)        spin_lock_irqsave(&fa->fa_lock)  ...
Task 1 can't take tasklist_lock for reading, because task 3 has already expressed its wish to take the lock exclusively, so new readers must queue behind it. Task 2 already holds tasklist_lock for reading, but the kill_fasync() in its interrupt handler can't take the spinlock fa->fa_lock, which is held by task 1.
The patch makes queued_read_lock_slowpath() give task 1 the same priority as if it were an interrupt handler, letting it take the lock for reading even though a writer is already waiting.
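For reference, here is a simplified sketch of the reader slowpath in kernel/locking/qrwlock.c (details vary between kernel versions). The in_interrupt() branch is the special treatment the patch would extend to readers running with interrupts disabled, such as task 1 under spin_lock_irqsave():

void queued_read_lock_slowpath(struct qrwlock *lock)
{
	if (unlikely(in_interrupt())) {
		/*
		 * A reader in interrupt context only waits for an active
		 * writer to release the lock; it does not queue behind a
		 * writer that is merely waiting.
		 */
		atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
		return;
	}

	/* Ordinary readers queue up behind any pending writer. */
	atomic_sub(_QR_BIAS, &lock->cnts);

	arch_spin_lock(&lock->wait_lock);
	atomic_add(_QR_BIAS, &lock->cnts);
	atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
	arch_spin_unlock(&lock->wait_lock);
}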
That re-introduces starvation scenarios. And the above looks like a proper deadlock that should be sorted by fixing the locking order.
We can move the tasklist_lock acquisition out of send_sigio(), but I'm not sure the same is possible for read_lock(&fown->lock).
Is there another solution? Is there a reliable way to iterate do_each_pid_task() under rcu_read_lock()?
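For illustration only, a purely hypothetical RCU-based variant of the loop in send_sigio() (variable names as in fs/fcntl.c) might look like the sketch below; whether signal delivery may rely on RCU alone here is exactly the open question:

	/*
	 * Hypothetical sketch: walk the target tasks under RCU instead of
	 * read_lock(&tasklist_lock).  do_each_pid_task() iterates an
	 * RCU-protected hlist, but it is unclear whether that is enough
	 * for send_sigio_to_task().
	 */
	rcu_read_lock();
	do_each_pid_task(pid, type, p) {
		send_sigio_to_task(p, fown, fd, band, group);
	} while_each_pid_task(pid, type, p);
	rcu_read_unlock();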
In the case of &fown->lock we could always disable IRQs in all the places where it is taken for reading, i.e. use read_lock_irqsave(&fown->lock, flags). That seems to fix the problem for this lock.
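Mechanically, the conversion would look roughly like this on the read side (a sketch only, shown for send_sigio() in fs/fcntl.c; every other read-side user of fown->lock would need the same change):

	unsigned long flags;

	/* was: read_lock(&fown->lock); */
	read_lock_irqsave(&fown->lock, flags);

	/* ... existing logic that picks the target pid and sends the signal ... */

	/* was: read_unlock(&fown->lock); */
	read_unlock_irqrestore(&fown->lock, flags);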