From: Ionut Nechita <ionut.nechita@windriver.com>
This series addresses two critical issues in the block layer multiqueue (blk-mq) subsystem when running on PREEMPT_RT kernels.
The first patch fixes a severe performance regression where queue_lock contention in the I/O hot path causes IRQ threads to sleep on RT kernels. Testing on a MegaRAID 12GSAS controller showed a 76% performance drop (640 MB/s -> 153 MB/s). The fix replaces the spinlock with full memory barriers to maintain ordering without sleeping.
The second patch fixes a WARN_ON that triggers during SCSI device scanning when blk_freeze_queue_start() calls blk_mq_run_hw_queues() synchronously from interrupt context. The warning "WARN_ON_ONCE(!async && in_interrupt())" is resolved by switching to asynchronous execution.
Changes in v2:
- Removed the blk_mq_cpuhp_lock patch (needs more investigation)
- Added fix for WARN_ON in interrupt context during queue freezing
- Updated commit messages for clarity
Ionut Nechita (2):
  block/blk-mq: fix RT kernel regression with queue_lock in hot path
  block: Fix WARN_ON in blk_mq_run_hw_queue when called from interrupt context
 block/blk-mq.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)
From: Ionut Nechita <ionut.nechita@windriver.com>
Fix warning "WARN_ON_ONCE(!async && in_interrupt())" that occurs during SCSI device scanning when blk_freeze_queue_start() calls blk_mq_run_hw_queues() synchronously from interrupt context.
The issue happens during device removal/scanning when:
1. blk_mq_destroy_queue() -> blk_queue_start_drain()
2. blk_freeze_queue_start() calls blk_mq_run_hw_queues(q, false)
3. This triggers the warning in blk_mq_run_hw_queue() when in interrupt context
Change the synchronous call to asynchronous to avoid running in interrupt context.
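For context, a minimal sketch of the check that fires, reconstructed from the warning text quoted above and the function signature that appears in the diffs of this series (not a verbatim copy of blk-mq.c; the rest of the function is elided):

void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
{
        /*
         * Running the hw queue inline from interrupt context is not
         * allowed; async == false combined with in_interrupt() trips
         * the warning quoted in this commit message.
         */
        WARN_ON_ONCE(!async && in_interrupt());

        /* ... rest of the function elided ... */
}

With blk_mq_run_hw_queues(q, true), the !async condition can no longer be true on this call chain, and the actual queue runs are deferred instead of being executed inline.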
Fixes: Warning in blk_mq_run_hw_queue+0x1fa/0x260
Signed-off-by: Ionut Nechita <ionut.nechita@windriver.com>
---
 block/blk-mq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5fb8da4958d0..ae152f7a6933 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -128,7 +128,7 @@ void blk_freeze_queue_start(struct request_queue *q)
 		percpu_ref_kill(&q->q_usage_counter);
 		mutex_unlock(&q->mq_freeze_lock);
 		if (queue_is_mq(q))
-			blk_mq_run_hw_queues(q, false);
+			blk_mq_run_hw_queues(q, true);
 	} else {
 		mutex_unlock(&q->mq_freeze_lock);
 	}
From: Ionut Nechita <ionut.nechita@windriver.com>
Commit 679b1874eba7 ("block: fix ordering between checking QUEUE_FLAG_QUIESCED request adding") introduced queue_lock acquisition in blk_mq_run_hw_queue() to synchronize QUEUE_FLAG_QUIESCED checks.
On RT kernels (CONFIG_PREEMPT_RT), regular spinlocks are converted to rt_mutex (sleeping locks). When multiple MSI-X IRQ threads process I/O completions concurrently, they contend on queue_lock in the hot path, causing all IRQ threads to enter D (uninterruptible sleep) state. This serializes interrupt processing completely.
Test case (MegaRAID 12GSAS with 8 MSI-X vectors on RT kernel):
- Good (v6.6.52-rt): 640 MB/s sequential read
- Bad (v6.6.64-rt): 153 MB/s sequential read (-76% regression)
- 6-8 out of 8 MSI-X IRQ threads stuck in D-state waiting on queue_lock
The original commit message mentioned memory barriers as an alternative approach. Use full memory barriers (smp_mb()) instead of queue_lock to provide the same ordering guarantees without sleeping on RT kernels.
Memory barriers ensure proper synchronization:
- CPU0 either sees QUEUE_FLAG_QUIESCED cleared, OR
- CPU1 sees dispatch list/sw queue bitmap updates
This maintains correctness while avoiding lock contention that causes RT kernel IRQ threads to sleep in the I/O completion path.
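To illustrate the ordering argument above outside the kernel, here is a hypothetical userspace C11 sketch (not kernel code): atomic_thread_fence(memory_order_seq_cst) stands in for smp_mb(), and the two flags stand in for "work queued on the hw queue" and "QUEUE_FLAG_QUIESCED cleared". With a full barrier between each side's store and its subsequent load, the outcome where neither side sees the other's store is forbidden. Note the guarantee only holds because both sides issue a barrier; a lone smp_mb() on one side has nothing to pair with.

#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int work_queued;      /* stands in for "request added to dispatch list/sw queue" */
static atomic_int quiesce_cleared;  /* stands in for "QUEUE_FLAG_QUIESCED cleared" */
static int saw_cleared, saw_queued; /* what each side observed after its barrier */

static void *run_hw_queue_side(void *arg)
{
        /* Publish the work, full barrier, then re-check the quiesce state. */
        atomic_store_explicit(&work_queued, 1, memory_order_relaxed);
        atomic_thread_fence(memory_order_seq_cst);  /* plays the role of smp_mb() */
        saw_cleared = atomic_load_explicit(&quiesce_cleared, memory_order_relaxed);
        return NULL;
}

static void *unquiesce_side(void *arg)
{
        /* Clear the quiesce flag, full barrier, then look for pending work. */
        atomic_store_explicit(&quiesce_cleared, 1, memory_order_relaxed);
        atomic_thread_fence(memory_order_seq_cst);  /* plays the role of smp_mb() */
        saw_queued = atomic_load_explicit(&work_queued, memory_order_relaxed);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, run_hw_queue_side, NULL);
        pthread_create(&b, NULL, unquiesce_side, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);

        /* At least one side must have seen the other's store. */
        assert(saw_cleared || saw_queued);
        printf("saw_cleared=%d saw_queued=%d\n", saw_cleared, saw_queued);
        return 0;
}

Build with "cc -pthread" and run it in a loop: the 0/0 outcome never appears, which is the property the commit message relies on.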
Fixes: 679b1874eba7 ("block: fix ordering between checking QUEUE_FLAG_QUIESCED request adding")
Cc: stable@vger.kernel.org
Signed-off-by: Ionut Nechita <ionut.nechita@windriver.com>
---
 block/blk-mq.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5da948b07058..5fb8da4958d0 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2292,22 +2292,19 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
 
 	might_sleep_if(!async && hctx->flags & BLK_MQ_F_BLOCKING);
 
+	/*
+	 * First lockless check to avoid unnecessary overhead.
+	 * Memory barrier below synchronizes with blk_mq_unquiesce_queue().
+	 */
 	need_run = blk_mq_hw_queue_need_run(hctx);
 	if (!need_run) {
-		unsigned long flags;
-
-		/*
-		 * Synchronize with blk_mq_unquiesce_queue(), because we check
-		 * if hw queue is quiesced locklessly above, we need the use
-		 * ->queue_lock to make sure we see the up-to-date status to
-		 * not miss rerunning the hw queue.
-		 */
-		spin_lock_irqsave(&hctx->queue->queue_lock, flags);
+		/* Synchronize with blk_mq_unquiesce_queue() */
+		smp_mb();
 		need_run = blk_mq_hw_queue_need_run(hctx);
-		spin_unlock_irqrestore(&hctx->queue->queue_lock, flags);
-
 		if (!need_run)
 			return;
+		/* Ensure dispatch list/sw queue updates visible before execution */
+		smp_mb();
 	}
 
 	if (async || !cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask)) {
Hi,
Thanks for your patch.
FYI: kernel test robot notices the stable kernel rule is not satisfied.
The check is based on https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html#opti...
Rule: add the tag "Cc: stable@vger.kernel.org" in the sign-off area to have the patch automatically included in the stable tree.
Subject: [PATCH v2 2/2] block: Fix WARN_ON in blk_mq_run_hw_queue when called from interrupt context
Link: https://lore.kernel.org/stable/20251222201541.11961-3-ionut.nechita%40windri...
On Mon, Dec 22, 2025 at 10:15:41PM +0200, Ionut Nechita (WindRiver) wrote:
From: Ionut Nechita <ionut.nechita@windriver.com>
Fix warning "WARN_ON_ONCE(!async && in_interrupt())" that occurs during SCSI device scanning when blk_freeze_queue_start() calls blk_mq_run_hw_queues() synchronously from interrupt context.
Can you show the whole stack trace in the warning? The code itself doesn't indicate that freeze queue can be called from scsi's interrupt context.
Thanks, Ming
On 2025/12/23 04:15, Ionut Nechita (WindRiver) wrote:
From: Ionut Nechita <ionut.nechita@windriver.com>
Commit 679b1874eba7 ("block: fix ordering between checking QUEUE_FLAG_QUIESCED request adding") introduced queue_lock acquisition in blk_mq_run_hw_queue() to synchronize QUEUE_FLAG_QUIESCED checks.
On RT kernels (CONFIG_PREEMPT_RT), regular spinlocks are converted to rt_mutex (sleeping locks). When multiple MSI-X IRQ threads process I/O completions concurrently, they contend on queue_lock in the hot path, causing all IRQ threads to enter D (uninterruptible sleep) state. This serializes interrupt processing completely.
Test case (MegaRAID 12GSAS with 8 MSI-X vectors on RT kernel):
- Good (v6.6.52-rt): 640 MB/s sequential read
- Bad (v6.6.64-rt): 153 MB/s sequential read (-76% regression)
- 6-8 out of 8 MSI-X IRQ threads stuck in D-state waiting on queue_lock
The original commit message mentioned memory barriers as an alternative approach. Use full memory barriers (smp_mb()) instead of queue_lock to provide the same ordering guarantees without sleeping on RT kernels.
Memory barriers ensure proper synchronization:
- CPU0 either sees QUEUE_FLAG_QUIESCED cleared, OR
- CPU1 sees dispatch list/sw queue bitmap updates
This maintains correctness while avoiding lock contention that causes RT kernel IRQ threads to sleep in the I/O completion path.
Fixes: 679b1874eba7 ("block: fix ordering between checking QUEUE_FLAG_QUIESCED request adding")
Cc: stable@vger.kernel.org
Signed-off-by: Ionut Nechita <ionut.nechita@windriver.com>
 block/blk-mq.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5da948b07058..5fb8da4958d0 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2292,22 +2292,19 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
 
 	might_sleep_if(!async && hctx->flags & BLK_MQ_F_BLOCKING);
 
+	/*
+	 * First lockless check to avoid unnecessary overhead.
+	 * Memory barrier below synchronizes with blk_mq_unquiesce_queue().
+	 */
 	need_run = blk_mq_hw_queue_need_run(hctx);
 	if (!need_run) {
-		unsigned long flags;
-
-		/*
-		 * Synchronize with blk_mq_unquiesce_queue(), because we check
-		 * if hw queue is quiesced locklessly above, we need the use
-		 * ->queue_lock to make sure we see the up-to-date status to
-		 * not miss rerunning the hw queue.
-		 */
-		spin_lock_irqsave(&hctx->queue->queue_lock, flags);
+		/* Synchronize with blk_mq_unquiesce_queue() */
+		smp_mb();
Memory barriers must be used in pairs. So how to synchronize?
 		need_run = blk_mq_hw_queue_need_run(hctx);
-		spin_unlock_irqrestore(&hctx->queue->queue_lock, flags);
-
 		if (!need_run)
 			return;
+		/* Ensure dispatch list/sw queue updates visible before execution */
+		smp_mb();
Why we need another barrier? What order does this barrier guarantee?
Thanks.
 	}
 
 	if (async || !cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask)) {
On Dec 23, 2025, at 04:15, Ionut Nechita (WindRiver) <djiony2011@gmail.com> wrote:
From: Ionut Nechita <ionut.nechita@windriver.com>
Fix warning "WARN_ON_ONCE(!async && in_interrupt())" that occurs during SCSI device scanning when blk_freeze_queue_start() calls blk_mq_run_hw_queues() synchronously from interrupt context.
The issue happens during device removal/scanning when:
- blk_mq_destroy_queue() -> blk_queue_start_drain()
- blk_freeze_queue_start() calls blk_mq_run_hw_queues(q, false)
- This triggers the warning in blk_mq_run_hw_queue() when in interrupt context
Change the synchronous call to asynchronous to avoid running in interrupt context.
Fixes: Warning in blk_mq_run_hw_queue+0x1fa/0x260
You've added a Fixes tag with a wrong format.
Thanks.
Signed-off-by: Ionut Nechita <ionut.nechita@windriver.com>