6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christoph Hellwig <hch@lst.de>
commit 1b0cab327e060ccf397ae634a34c84dd1d4d2bb2 upstream.
req_get_ioprio looks at req->bio to find the I/O priority, which is not set when completing bios that the driver fully iterated through.
Stash away the dd_per_prio in the elevator private data instead of looking it up again to optimize the code a bit while fixing the regression from removing the per-request ioprio value.
Fixes: 6975c1a486a4 ("block: remove the ioprio field from struct request")
Reported-by: Chris Bainbridge <chris.bainbridge@gmail.com>
Reported-by: Sam Protsenko <semen.protsenko@linaro.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Chris Bainbridge <chris.bainbridge@gmail.com>
Tested-by: Sam Protsenko <semen.protsenko@linaro.org>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Link: https://lore.kernel.org/r/20241126102136.619067-1-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 block/mq-deadline.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -685,10 +685,9 @@ static void dd_insert_request(struct blk
 
 	prio = ioprio_class_to_prio[ioprio_class];
 	per_prio = &dd->per_prio[prio];
-	if (!rq->elv.priv[0]) {
+	if (!rq->elv.priv[0])
 		per_prio->stats.inserted++;
-		rq->elv.priv[0] = (void *)(uintptr_t)1;
-	}
+	rq->elv.priv[0] = per_prio;
 
 	if (blk_mq_sched_try_insert_merge(q, rq, free))
 		return;
@@ -753,18 +752,14 @@ static void dd_prepare_request(struct re
  */
 static void dd_finish_request(struct request *rq)
 {
-	struct request_queue *q = rq->q;
-	struct deadline_data *dd = q->elevator->elevator_data;
-	const u8 ioprio_class = dd_rq_ioclass(rq);
-	const enum dd_prio prio = ioprio_class_to_prio[ioprio_class];
-	struct dd_per_prio *per_prio = &dd->per_prio[prio];
+	struct dd_per_prio *per_prio = rq->elv.priv[0];
 
 	/*
 	 * The block layer core may call dd_finish_request() without having
 	 * called dd_insert_requests(). Skip requests that bypassed I/O
 	 * scheduling. See also blk_mq_request_bypass_insert().
 	 */
-	if (rq->elv.priv[0])
+	if (per_prio)
 		atomic_inc(&per_prio->stats.completed);
 }