When processing a batch of requests, nvme_queue_rq() can fail to ring the nvme queue doorbell if the last request in the batch fails because the controller is not ready. As a result, previously queued requests time out because the device never had a chance to learn of the commands' existence. This failure can cause an nvme controller reset to time out if another app was using the adminq while the reset was taking place.
Consider this case:

- App is hammering adminq with NVME_ADMIN_IDENTIFY commands
- Controller reset is triggered by "echo 1 > /sys/.../nvme0/reset_controller"
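For reference, a minimal userspace sketch of such an identify-hammering app is shown below. It uses the NVME_IOCTL_ADMIN_CMD passthrough ioctl; the device path and the endless loop are assumptions for illustration, not taken from the original report:

    /* Hypothetical reproducer: hammer the admin queue with Identify commands. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/nvme_ioctl.h>

    int main(void)
    {
    	char buf[4096];
    	int fd = open("/dev/nvme0", O_RDWR);

    	if (fd < 0) {
    		perror("open");
    		return 1;
    	}
    	for (;;) {
    		struct nvme_admin_cmd cmd = {
    			.opcode   = 0x06,                    /* Identify */
    			.addr     = (uint64_t)(uintptr_t)buf,
    			.data_len = sizeof(buf),
    			.cdw10    = 1,                       /* CNS 1: Identify Controller */
    		};
    		/* Each call issues one NVME_ADMIN_IDENTIFY command on adminq. */
    		if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0)
    			perror("NVME_IOCTL_ADMIN_CMD");
    	}
    	return 0;
    }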
nvme_reset_ctrl() changes the controller state to NVME_CTRL_RESETTING. From that point on, all requests from the app are forced to fail because the controller is no longer ready. More importantly, these requests never make it to the adminq; they are short-circuited in nvme_queue_rq(). Unlike app requests, requests issued by the reset code path are allowed to go through the adminq in order to carry out the reset. The problem happens when blk-mq decides to mix requests from the reset code path and the app in one batch, in particular when the last request in such a batch happens to come from the app.
In this case the last request has bd->last set to true, telling the driver to ring the doorbell after queuing it. However, since the controller is not ready, this app request is completed without ever reaching the adminq, and nvme_queue_rq() misses the opportunity to ring the adminq doorbell, leaving the earlier queued requests unknown to the device.
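The following toy model, written only for this explanation (sq_tail, doorbell, queue_cmd and ring_doorbell are invented names, not driver code), shows how a failing bd->last request strands earlier commands, and how ringing the doorbell on the failure path avoids that:

    /* Toy model of the bug: the doorbell rings only on the bd->last request,
     * so a short-circuited last request strands earlier queued commands. */
    #include <stdbool.h>
    #include <stdio.h>

    static int sq_tail;   /* commands copied into the submission queue */
    static int doorbell;  /* tail value last written to the doorbell   */

    static void ring_doorbell(void)
    {
    	doorbell = sq_tail;
    	printf("doorbell = %d (device now sees %d command(s))\n", doorbell, doorbell);
    }

    /* Mimics nvme_queue_rq(): queue one command, ring on the last of a batch. */
    static void queue_cmd(bool ready, bool last, bool fixed)
    {
    	if (!ready) {
    		/* Request fails without touching the queue... */
    		if (fixed && last)
    			ring_doorbell(); /* ...but the fix still rings for earlier commands */
    		return;
    	}
    	sq_tail++;
    	if (last)
    		ring_doorbell();
    }

    int main(void)
    {
    	/* Batch: two ready reset-path commands, then a failing app command. */
    	queue_cmd(true, false, false);
    	queue_cmd(true, false, false);
    	queue_cmd(false, true, false); /* bug: doorbell never written */
    	printf("without fix: doorbell=%d, queued=%d\n", doorbell, sq_tail);

    	sq_tail = doorbell = 0;
    	queue_cmd(true, false, true);
    	queue_cmd(true, false, true);
    	queue_cmd(false, true, true);  /* fix: doorbell written on the failure path */
    	printf("with fix: doorbell=%d, queued=%d\n", doorbell, sq_tail);
    	return 0;
    }

In the real driver the same doorbell write is also exposed through the .commit_rqs hook so blk-mq itself can flush a partially dispatched batch; the second hunk below wires that hook up for the admin queue.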
Fixes: d4060d2be1132 ("nvme-pci: fix controller reset hang when racing with nvme_timeout")
Cc: stable@vger.kernel.org
Reported-by: Eric Badger <ebadger@purestorage.com>
Signed-off-by: Mohamed Khalfella <mkhalfella@purestorage.com>
Reviewed-by: Eric Badger <ebadger@purestorage.com>
---
 drivers/nvme/host/pci.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 98864b853eef..f6b1ae593e8e 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -946,8 +946,12 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
 		return BLK_STS_IOERR;
 
-	if (unlikely(!nvme_check_ready(&dev->ctrl, req, true)))
-		return nvme_fail_nonready_command(&dev->ctrl, req);
+	if (unlikely(!nvme_check_ready(&dev->ctrl, req, true))) {
+		ret = nvme_fail_nonready_command(&dev->ctrl, req);
+		if (ret == BLK_STS_OK && bd->last)
+			nvme_commit_rqs(hctx);
+		return ret;
+	}
 
 	ret = nvme_prep_rq(dev, req);
 	if (unlikely(ret))
@@ -1724,6 +1728,7 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid, bool polled)
 static const struct blk_mq_ops nvme_mq_admin_ops = {
 	.queue_rq	= nvme_queue_rq,
 	.complete	= nvme_pci_complete_rq,
+	.commit_rqs	= nvme_commit_rqs,
 	.init_hctx	= nvme_admin_init_hctx,
 	.init_request	= nvme_pci_init_request,
 	.timeout	= nvme_timeout,