From: James Smart <jsmart2021@gmail.com>
[ Upstream commit a7d139145a6640172516b193abf6d2398620aa14 ]
The __nvmf_check_ready() routine used to bounce all filesystem io if the controller state isn't LIVE. However, a later patch changed the logic so that the rejection ends up being based on the Q live check.

The FC transport has a slightly different sequence from rdma and tcp for shutting down queues/marking them non-live. FC marks its queue non-live only after aborting all ios and waiting for their termination, leaving a rather large window for filesystem io to continue to hit the transport. Unfortunately this resulted in filesystem I/O or applications seeing I/O errors.
Change the FC transport to mark the queues non-live at the first sign of teardown for the association (when I/O is initially terminated).
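For illustration only, here is a minimal, self-contained sketch of the gating decision described above. It is not the kernel's actual __nvmf_check_ready() implementation; the ctrl_state_t enum, the io_permitted() helper, and the scenarios in main() are names invented purely for this example.

/*
 * Simplified userspace model of the fabrics queue-gating decision.
 * Illustrative only; not the kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum { CTRL_LIVE, CTRL_CONNECTING, CTRL_DELETING } ctrl_state_t;

/*
 * With rejection keyed on the per-queue "live" bit rather than on the
 * controller state alone, normal filesystem I/O is dispatched whenever
 * the queue is still marked live, even while the controller is not LIVE.
 */
static bool io_permitted(ctrl_state_t state, bool queue_live, bool is_connect_cmd)
{
	if (state == CTRL_LIVE)
		return true;
	if (is_connect_cmd)		/* connect must pass to bring the queue up */
		return true;
	return queue_live;		/* everything else rides on the Q live bit */
}

int main(void)
{
	/* FC before the fix: teardown has begun but the queue is still "live" */
	printf("fs io, teardown started, queue still live: %s\n",
	       io_permitted(CTRL_CONNECTING, true, false) ?
	       "dispatched (hits dead transport)" : "requeued");

	/* FC after the fix: the live bit is cleared as soon as ios are aborted */
	printf("fs io, teardown started, queue marked non-live: %s\n",
	       io_permitted(CTRL_CONNECTING, false, false) ?
	       "dispatched (hits dead transport)" : "requeued");
	return 0;
}

Clearing the per-queue live bit at the start of association teardown, as the hunk below does, flips FC from the first case to the second, so the I/O is requeued instead of being pushed into a transport that can only fail it.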
Fixes: 73a5379937ec ("nvme-fabrics: allow to queue requests for live queues")
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/nvme/host/fc.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 41257daf7464..a0bcec33b020 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2460,6 +2460,18 @@ nvme_fc_terminate_exchange(struct request *req, void *data, bool reserved)
 static void
 __nvme_fc_abort_outstanding_ios(struct nvme_fc_ctrl *ctrl, bool start_queues)
 {
+	int q;
+
+	/*
+	 * if aborting io, the queues are no longer good, mark them
+	 * all as not live.
+	 */
+	if (ctrl->ctrl.queue_count > 1) {
+		for (q = 1; q < ctrl->ctrl.queue_count; q++)
+			clear_bit(NVME_FC_Q_LIVE, &ctrl->queues[q].flags);
+	}
+	clear_bit(NVME_FC_Q_LIVE, &ctrl->queues[0].flags);
+
 	/*
 	 * If io queues are present, stop them and terminate all outstanding
 	 * ios on them. As FC allocates FC exchange for each io, the