This is a note to let you know that I've just added the patch titled
iw_cxgb4: atomically flush the qp
to the 4.14-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git%3Ba=su...

The filename of the patch is:
    iw_cxgb4-atomically-flush-the-qp.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree, please let stable@vger.kernel.org know about it.
From bc52e9ca74b9a395897bb640c6671b2cbf716032 Mon Sep 17 00:00:00 2001
From: Steve Wise <swise@opengridcomputing.com>
Date: Thu, 9 Nov 2017 07:21:26 -0800
Subject: iw_cxgb4: atomically flush the qp

From: Steve Wise <swise@opengridcomputing.com>
commit bc52e9ca74b9a395897bb640c6671b2cbf716032 upstream.
__flush_qp() has a race condition where during the flush operation, the qp lock is released allowing another thread to possibly post a WR, which corrupts the queue state, possibly causing crashes. The lock was released to preserve the cq/qp locking hierarchy of cq first, then qp. However releasing the qp lock is not necessary; both RQ and SQ CQ locks can be acquired first, followed by the qp lock, and then the RQ and SQ flushing can be done w/o unlocking.
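[Editorial note: for readers unfamiliar with the driver, below is a minimal user-space sketch of the locking order the patch adopts; it uses plain C with pthreads rather than the actual kernel spinlock API, and the struct names, flush_rq()/flush_sq() helpers, and flush_qp_atomically() are illustrative only. It shows both CQ locks being taken first (the send-CQ lock skipped when send and receive share one CQ), then the QP lock held across both flushes.]

#include <pthread.h>
#include <stdio.h>

struct cq { pthread_mutex_t lock; };
struct qp { pthread_mutex_t lock; int flushed; };

/* Stand-ins for the real RQ/SQ flush work. */
static void flush_rq(struct qp *qp) { (void)qp; printf("flush RQ\n"); }
static void flush_sq(struct qp *qp) { (void)qp; printf("flush SQ\n"); }

/*
 * Acquire the recv CQ lock, then the send CQ lock (only if it is a
 * distinct CQ), then the QP lock.  Both queues are flushed without
 * ever dropping the QP lock in between, so no other thread can post
 * a WR mid-flush.  Unlock in the reverse order.
 */
static void flush_qp_atomically(struct qp *qp, struct cq *rcq, struct cq *scq)
{
	pthread_mutex_lock(&rcq->lock);
	if (scq != rcq)
		pthread_mutex_lock(&scq->lock);
	pthread_mutex_lock(&qp->lock);

	if (!qp->flushed) {
		qp->flushed = 1;
		flush_rq(qp);
		flush_sq(qp);	/* no unlock/relock window here */
	}

	pthread_mutex_unlock(&qp->lock);
	if (scq != rcq)
		pthread_mutex_unlock(&scq->lock);
	pthread_mutex_unlock(&rcq->lock);
}

int main(void)
{
	struct cq rcq, scq;
	struct qp qp;

	pthread_mutex_init(&rcq.lock, NULL);
	pthread_mutex_init(&scq.lock, NULL);
	pthread_mutex_init(&qp.lock, NULL);
	qp.flushed = 0;

	flush_qp_atomically(&qp, &rcq, &scq);	/* separate send and recv CQs */
	qp.flushed = 0;
	flush_qp_atomically(&qp, &rcq, &rcq);	/* shared CQ case */
	return 0;
}

Because every path takes the locks in the same rcq -> scq -> qp order and never releases the QP lock while the queues are being flushed, the race described above (a WR posted into a half-flushed queue) cannot occur.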
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/infiniband/hw/cxgb4/qp.c |   19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)
--- a/drivers/infiniband/hw/cxgb4/qp.c
+++ b/drivers/infiniband/hw/cxgb4/qp.c
@@ -1271,31 +1271,34 @@ static void __flush_qp(struct c4iw_qp *q
 
 	pr_debug("%s qhp %p rchp %p schp %p\n", __func__, qhp, rchp, schp);
 
-	/* locking hierarchy: cq lock first, then qp lock. */
+	/* locking hierarchy: cqs lock first, then qp lock. */
 	spin_lock_irqsave(&rchp->lock, flag);
+	if (schp != rchp)
+		spin_lock(&schp->lock);
 	spin_lock(&qhp->lock);
 
 	if (qhp->wq.flushed) {
 		spin_unlock(&qhp->lock);
+		if (schp != rchp)
+			spin_unlock(&schp->lock);
 		spin_unlock_irqrestore(&rchp->lock, flag);
 		return;
 	}
 	qhp->wq.flushed = 1;
+	t4_set_wq_in_error(&qhp->wq);
 
 	c4iw_flush_hw_cq(rchp);
 	c4iw_count_rcqes(&rchp->cq, &qhp->wq, &count);
 	rq_flushed = c4iw_flush_rq(&qhp->wq, &rchp->cq, count);
-	spin_unlock(&qhp->lock);
-	spin_unlock_irqrestore(&rchp->lock, flag);
 
-	/* locking hierarchy: cq lock first, then qp lock. */
-	spin_lock_irqsave(&schp->lock, flag);
-	spin_lock(&qhp->lock);
 	if (schp != rchp)
 		c4iw_flush_hw_cq(schp);
 	sq_flushed = c4iw_flush_sq(qhp);
+
 	spin_unlock(&qhp->lock);
-	spin_unlock_irqrestore(&schp->lock, flag);
+	if (schp != rchp)
+		spin_unlock(&schp->lock);
+	spin_unlock_irqrestore(&rchp->lock, flag);
 
 	if (schp == rchp) {
 		if (t4_clear_cq_armed(&rchp->cq) &&
@@ -1329,8 +1332,8 @@ static void flush_qp(struct c4iw_qp *qhp
 	rchp = to_c4iw_cq(qhp->ibqp.recv_cq);
 	schp = to_c4iw_cq(qhp->ibqp.send_cq);
 
-	t4_set_wq_in_error(&qhp->wq);
 	if (qhp->ibqp.uobject) {
+		t4_set_wq_in_error(&qhp->wq);
 		t4_set_cq_in_error(&rchp->cq);
 		spin_lock_irqsave(&rchp->comp_handler_lock, flag);
 		(*rchp->ibcq.comp_handler)(&rchp->ibcq, rchp->ibcq.cq_context);
Patches currently in stable-queue which might be from swise@opengridcomputing.com are
queue-4.14/iw_cxgb4-only-clear-the-armed-bit-if-a-notification-is-needed.patch
queue-4.14/iw_cxgb4-atomically-flush-the-qp.patch
queue-4.14/iw_cxgb4-when-flushing-complete-all-wrs-in-a-chain.patch
queue-4.14/iw_cxgb4-reflect-the-original-wr-opcode-in-drain-cqes.patch
queue-4.14/iw_cxgb4-only-call-the-cq-comp_handler-when-the-cq-is-armed.patch