The patch below does not apply to the 5.13-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id to stable@vger.kernel.org.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 110aa25c3ce417a44e35990cf8ed22383277933a Mon Sep 17 00:00:00 2001
From: Jens Axboe <axboe@kernel.dk>
Date: Mon, 26 Jul 2021 10:42:56 -0600
Subject: [PATCH] io_uring: fix race in unified task_work running
We use a bit to track whether we need to add the shared task_work, but a list + lock for the pending work. Before aborting a current run of the task_work we check if the list is empty, but we do so without grabbing the lock that protects it. This can lead to races where we think we have nothing left to run, when in practice we could be racing with a task adding new work to the list. If we do hit that race condition, we could be left with work items that need processing while the shared task_work is not active.
Ensure that we grab the lock before checking if the list is empty, so we know if it's safe to exit the run or not.
Link: https://lore.kernel.org/io-uring/c6bd5987-e9ae-cd02-49d0-1b3ac1ef65b1@tnonli...
Cc: stable@vger.kernel.org # 5.11+
Reported-by: Forza <forza@tnonline.net>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
diff --git a/fs/io_uring.c b/fs/io_uring.c
index c4d2b320cdd4..a4331deb0427 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1959,9 +1959,13 @@ static void tctx_task_work(struct callback_head *cb)
 			node = next;
 		}
 		if (wq_list_empty(&tctx->task_list)) {
+			spin_lock_irq(&tctx->task_lock);
 			clear_bit(0, &tctx->task_state);
-			if (wq_list_empty(&tctx->task_list))
+			if (wq_list_empty(&tctx->task_list)) {
+				spin_unlock_irq(&tctx->task_lock);
 				break;
+			}
+			spin_unlock_irq(&tctx->task_lock);
 			/* another tctx_task_work() is enqueued, yield */
 			if (test_and_set_bit(0, &tctx->task_state))
 				break;
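For readers following along outside the kernel tree, here is a minimal userspace sketch of the pattern the fix enforces, in plain C with pthreads. All names (work_item, queue_work, run_work, runner_active) are hypothetical stand-ins: the kernel pairs an atomic bit (tctx->task_state) with a spinlock, while this model folds both under a single mutex. That is enough to show the rule at stake: the runner may only declare the list empty, and give up its active role, inside the critical section that protects the list.

/*
 * Hypothetical userspace model of the fixed pattern, not the kernel code.
 * One mutex stands in for both tctx->task_lock and bit 0 of task_state.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct work_item {
	struct work_item *next;
	int payload;
};

static struct work_item *head;		/* pending list, guarded by lock */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool runner_active;		/* models bit 0 of tctx->task_state */

/* Producer: enqueue (LIFO for brevity; the kernel list is FIFO).
 * Returns true if the caller must start a runner. */
static bool queue_work(struct work_item *item)
{
	bool start_runner;

	pthread_mutex_lock(&lock);
	item->next = head;
	head = item;
	start_runner = !runner_active;
	if (start_runner)
		runner_active = true;	/* claim the runner role */
	pthread_mutex_unlock(&lock);
	return start_runner;
}

/* Runner: drain batches until the list is provably empty. */
static void *run_work(void *unused)
{
	(void)unused;
	for (;;) {
		struct work_item *batch, *next;

		pthread_mutex_lock(&lock);
		batch = head;		/* splice out a batch */
		head = NULL;
		if (!batch) {
			/*
			 * The buggy ordering cleared runner_active and
			 * re-checked the list without the lock; a producer
			 * in that window sees runner_active still set,
			 * enqueues, starts no runner, and the work is
			 * stranded. Clearing and checking under the lock
			 * closes that window.
			 */
			runner_active = false;
			pthread_mutex_unlock(&lock);
			return NULL;
		}
		pthread_mutex_unlock(&lock);

		for (; batch; batch = next) {
			next = batch->next;
			printf("processed item %d\n", batch->payload);
		}
	}
}

int main(void)
{
	struct work_item a = { .payload = 1 }, b = { .payload = 2 };
	pthread_t t;

	if (queue_work(&a))		/* list was idle: always true here */
		pthread_create(&t, NULL, run_work, NULL);
	if (queue_work(&b))		/* runner may already have exited */
		run_work(NULL);		/* models scheduling a fresh run */
	pthread_join(t, NULL);
	return 0;
}

Build with gcc -pthread; moving the runner_active = false and the emptiness check outside the locked region reproduces exactly the window the patch describes.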
On 7/31/21 12:43 AM, gregkh@linuxfoundation.org wrote:
> The patch below does not apply to the 5.13-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id to stable@vger.kernel.org.
Here's a tested 5.13-stable port.
From: Jens Axboe <axboe@kernel.dk>
Subject: io_uring: fix race in unified task_work running
commit 110aa25c3ce417a44e35990cf8ed22383277933a upstream.
We use a bit to track whether we need to add the shared task_work, but a list + lock for the pending work. Before aborting a current run of the task_work we check if the list is empty, but we do so without grabbing the lock that protects it. This can lead to races where we think we have nothing left to run, when in practice we could be racing with a task adding new work to the list. If we do hit that race condition, we could be left with work items that need processing while the shared task_work is not active.
Ensure that we grab the lock before checking if the list is empty, so we know if it's safe to exit the run or not.
Link: https://lore.kernel.org/io-uring/c6bd5987-e9ae-cd02-49d0-1b3ac1ef65b1@tnonli...
Cc: stable@vger.kernel.org # 5.11+
Reported-by: Forza <forza@tnonline.net>
Tested-by: Forza <forza@tnonline.net>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
diff --git a/fs/io_uring.c b/fs/io_uring.c
index df4288776815..3be33819ee42 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1890,7 +1890,7 @@ static void tctx_task_work(struct callback_head *cb)
 
 	clear_bit(0, &tctx->task_state);
 
-	while (!wq_list_empty(&tctx->task_list)) {
+	while (true) {
 		struct io_ring_ctx *ctx = NULL;
 		struct io_wq_work_list list;
 		struct io_wq_work_node *node;
@@ -1900,6 +1900,9 @@ static void tctx_task_work(struct callback_head *cb)
 		INIT_WQ_LIST(&tctx->task_list);
 		spin_unlock_irq(&tctx->task_lock);
 
+		if (wq_list_empty(&list))
+			break;
+
 		node = list.first;
 		while (node) {
 			struct io_wq_work_node *next = node->next;
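Note that this backport closes the window by a different route than the upstream patch: instead of re-checking tctx->task_list under the lock after clearing the bit, it loops unconditionally, splices the list into a local copy while holding the lock, and breaks when that snapshot is empty, so the emptiness decision itself is made inside the critical section. A sketch of that shape, reusing the hypothetical userspace model from above (simplified: the real 5.13 code clears the task_state bit once up front and relies on task_work_add() to schedule a fresh run for later arrivals):

/* Reuses struct work_item, head, lock and runner_active from the
 * earlier sketch; hypothetical names, not the 5.13 code itself. */
static void *run_work_snapshot(void *unused)
{
	(void)unused;
	for (;;) {				/* was: while (!list empty) */
		struct work_item *batch, *next;

		pthread_mutex_lock(&lock);
		batch = head;			/* splice into a local list  */
		head = NULL;
		if (!batch)			/* decide from the snapshot, */
			runner_active = false;	/* still inside the lock     */
		pthread_mutex_unlock(&lock);

		if (!batch)
			return NULL;		/* provably nothing pending  */

		for (; batch; batch = next) {
			next = batch->next;
			printf("processed item %d\n", batch->payload);
		}
	}
}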
On Sat, Jul 31, 2021 at 08:44:33AM -0600, Jens Axboe wrote:
> On 7/31/21 12:43 AM, gregkh@linuxfoundation.org wrote:
>> The patch below does not apply to the 5.13-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id to stable@vger.kernel.org.
>
> Here's a tested 5.13-stable port.
>
> From: Jens Axboe <axboe@kernel.dk>
> Subject: io_uring: fix race in unified task_work running
>
> commit 110aa25c3ce417a44e35990cf8ed22383277933a upstream.
>
> We use a bit to track whether we need to add the shared task_work, but a list + lock for the pending work. Before aborting a current run of the task_work we check if the list is empty, but we do so without grabbing the lock that protects it. This can lead to races where we think we have nothing left to run, when in practice we could be racing with a task adding new work to the list. If we do hit that race condition, we could be left with work items that need processing while the shared task_work is not active.
>
> Ensure that we grab the lock before checking if the list is empty, so we know if it's safe to exit the run or not.
>
> Link: https://lore.kernel.org/io-uring/c6bd5987-e9ae-cd02-49d0-1b3ac1ef65b1@tnonli...
> Cc: stable@vger.kernel.org # 5.11+
> Reported-by: Forza <forza@tnonline.net>
> Tested-by: Forza <forza@tnonline.net>
Now queued up, thanks!
greg k-h