On Sep 11, 2024, at 11:54, Ming Lei <ming.lei@redhat.com> wrote:
On Tue, Sep 10, 2024 at 07:22:16AM -0600, Jens Axboe wrote:
On 9/3/24 2:16 AM, Muchun Song wrote:
Suppose the following scenario:
CPU0                                              CPU1

blk_mq_insert_request()  1) store                 blk_mq_unquiesce_queue()
blk_mq_run_hw_queue()                             blk_queue_flag_clear(QUEUE_FLAG_QUIESCED)  3) store
  if (blk_queue_quiesced())  2) load                blk_mq_run_hw_queues()
    return                                            blk_mq_run_hw_queue()
blk_mq_sched_dispatch_requests()                        if (!blk_mq_hctx_has_pending())  4) load
                                                          return
A full memory barrier should be inserted between 1) and 2), as well as between 3) and 4), to make sure that either CPU0 sees that QUEUE_FLAG_QUIESCED is cleared, or CPU1 sees the dispatch list or the setting of the software queue bitmap. Otherwise, either CPU will not re-run the hardware queue, causing starvation.
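Roughly, solution 1) below would place the barriers like this (a simplified sketch, not the actual patch; q, hctx and rq stand for the obvious objects, and the insert step is reduced to a comment):

        /* CPU0: request insert path */
        /* 1) store: blk_mq_insert_request() queues rq on a dispatch list */
        smp_mb();                                       /* pairs with CPU1's barrier */
        if (!blk_queue_quiesced(q))                     /* 2) load */
                blk_mq_run_hw_queue(hctx, true);

        /* CPU1: unquiesce path */
        blk_queue_flag_clear(QUEUE_FLAG_QUIESCED, q);   /* 3) store */
        smp_mb__after_atomic();                         /* pairs with CPU0's smp_mb() */
        blk_mq_run_hw_queues(q, true);                  /* 4) load: checks pending work */

With a barrier on each side, at least one CPU is guaranteed to observe the other's store and rerun the hardware queue.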
So there are two ways to fix it: 1) add a pair of memory barriers, or 2) use hctx->queue->queue_lock to synchronize with QUEUE_FLAG_QUIESCED. Here we chose 2), since memory barriers are not easy to maintain.
Same comment here, 72-74 chars wide please.
diff --git a/block/blk-mq.c b/block/blk-mq.c
index b2d0f22de0c7f..ac39f2a346a52 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2202,6 +2202,24 @@ void blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
 }
 EXPORT_SYMBOL(blk_mq_delay_run_hw_queue);
 
+static inline bool blk_mq_hw_queue_need_run(struct blk_mq_hw_ctx *hctx)
+{
+        bool need_run;
+
+        /*
+         * When queue is quiesced, we may be switching io scheduler, or
+         * updating nr_hw_queues, or other things, and we can't run queue
+         * any more, even blk_mq_hctx_has_pending() can't be called safely.
+         *
+         * And queue will be rerun in blk_mq_unquiesce_queue() if it is
+         * quiesced.
+         */
+        __blk_mq_run_dispatch_ops(hctx->queue, false,
+                                  need_run = !blk_queue_quiesced(hctx->queue) &&
+                                             blk_mq_hctx_has_pending(hctx));
+        return need_run;
+}
+
This __blk_mq_run_dispatch_ops() is also way too wide, why didn't you just break it like where you copied it from?
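Presumably that means wrapping it the way the existing call site (the one removed in the next hunk) does, with a single extra indent level instead of aligning under the open parenthesis:

        __blk_mq_run_dispatch_ops(hctx->queue, false,
                need_run = !blk_queue_quiesced(hctx->queue) &&
                blk_mq_hctx_has_pending(hctx));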
 /**
  * blk_mq_run_hw_queue - Start to run a hardware queue.
  * @hctx: Pointer to the hardware queue to run.
@@ -2222,20 +2240,23 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
 
         might_sleep_if(!async && hctx->flags & BLK_MQ_F_BLOCKING);
 
-        /*
-         * When queue is quiesced, we may be switching io scheduler, or
-         * updating nr_hw_queues, or other things, and we can't run queue
-         * any more, even __blk_mq_hctx_has_pending() can't be called safely.
-         *
-         * And queue will be rerun in blk_mq_unquiesce_queue() if it is
-         * quiesced.
-         */
-        __blk_mq_run_dispatch_ops(hctx->queue, false,
-                need_run = !blk_queue_quiesced(hctx->queue) &&
-                blk_mq_hctx_has_pending(hctx));
+        need_run = blk_mq_hw_queue_need_run(hctx);
+        if (!need_run) {
+                unsigned long flags;
 
-        if (!need_run)
-                return;
+                /*
+                 * Synchronize with blk_mq_unquiesce_queue(): because we check
+                 * if the hw queue is quiesced locklessly above, we need to use
+                 * ->queue_lock to make sure we see the up-to-date status and
+                 * do not miss rerunning the hw queue.
+                 */
+                spin_lock_irqsave(&hctx->queue->queue_lock, flags);
+                need_run = blk_mq_hw_queue_need_run(hctx);
+                spin_unlock_irqrestore(&hctx->queue->queue_lock, flags);
+
+                if (!need_run)
+                        return;
+        }
Is this not solvable on the unquiesce side instead? It's rather a shame to add overhead to the fast path to avoid a race with something that's super unlikely, like quiesce.
Yeah, it can be solved by adding synchronize_rcu()/srcu() on the unquiesce side, but SCSI may call it from a non-sleepable context via scsi_internal_device_unblock_nowait().
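For completeness, that rejected unquiesce-side variant would look roughly like the sketch below (illustrative only, assuming the tag_set-based blk_mq_wait_quiesce_done() helper; it ends up in synchronize_rcu()/synchronize_srcu(), which may sleep):

        /*
         * Rough sketch of the unquiesce-side fix: after clearing the flag,
         * wait out any dispatch code that sampled the stale value, then
         * rerun the queues.  Not viable: callers such as
         * scsi_internal_device_unblock_nowait() may be in atomic context.
         */
        blk_queue_flag_clear(QUEUE_FLAG_QUIESCED, q);
        blk_mq_wait_quiesce_done(q->tag_set);   /* may sleep */
        blk_mq_run_hw_queues(q, true);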
Another approach would be like the fix for BLK_MQ_S_STOPPED (in patch 3): we could add a pair of memory barriers to blk_queue_quiesced() and blk_mq_unquiesce_queue(). In that case, the fix will not affect any fast path; only the slow path pays the barrier overhead.
diff --git a/block/blk-mq.c b/block/blk-mq.c
index b2d0f22de0c7f..45588ddb08d6b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -264,6 +264,12 @@ void blk_mq_unquiesce_queue(struct request_queue *q)
                 ;
         } else if (!--q->quiesce_depth) {
                 blk_queue_flag_clear(QUEUE_FLAG_QUIESCED, q);
+                /*
+                 * Pairs with the smp_mb() in blk_queue_quiesced() to order the
+                 * clearing of QUEUE_FLAG_QUIESCED above and the checking of
+                 * dispatch list in the subsequent routine.
+                 */
+                smp_mb__after_atomic();
                 run_queue = true;
         }
         spin_unlock_irqrestore(&q->queue_lock, flags);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b8196e219ac22..7a71462892b66 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -628,7 +628,25 @@ void blk_queue_flag_clear(unsigned int flag, struct request_queue *q);
 #define blk_noretry_request(rq) \
         ((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \
                              REQ_FAILFAST_DRIVER))
-#define blk_queue_quiesced(q)   test_bit(QUEUE_FLAG_QUIESCED, &(q)->queue_flags)
+
+static inline bool blk_queue_quiesced(struct request_queue *q)
+{
+        /* Fast path: hardware queue is unquiesced most of the time. */
+        if (likely(!test_bit(QUEUE_FLAG_QUIESCED, &q->queue_flags)))
+                return false;
+
+        /*
+         * This barrier is used to order adding of dispatch list before and
+         * the test of QUEUE_FLAG_QUIESCED below. Pairs with the memory barrier
+         * in blk_mq_unquiesce_queue() so that dispatch code could either see
+         * QUEUE_FLAG_QUIESCED is cleared or dispatch list is not empty to
+         * avoid missing dispatching requests.
+         */
+        smp_mb();
+
+        return test_bit(QUEUE_FLAG_QUIESCED, &q->queue_flags);
+}
+
 #define blk_queue_pm_only(q)    atomic_read(&(q)->pm_only)
 #define blk_queue_registered(q) test_bit(QUEUE_FLAG_REGISTERED, &(q)->queue_flags)
 #define blk_queue_sq_sched(q)   test_bit(QUEUE_FLAG_SQ_SCHED, &(q)->queue_flags)
Muchun,
Thanks.
Thanks,
Ming