From: Parav Pandit <parav@nvidia.com>
[ Upstream commit 60f0779862e4ab943810187752c462e85f5fa371 ]
Currently, the vq->broken field is read by virtqueue_is_broken() in a busy loop in one context, by virtnet_send_command().

vq->broken is set to true in another process context by virtio_break_device(). Reader and writer access it without any synchronization. This may allow a compiler optimization that reads vq->broken only once.

Hence, force reading vq->broken on each invocation of virtqueue_is_broken(), and also force writing it, so that the update is visible to the readers.

This is a theoretical fix for an issue that hasn't yet been encountered in the field.
Signed-off-by: Parav Pandit <parav@nvidia.com>
Link: https://lore.kernel.org/r/20210721142648.1525924-2-parav@nvidia.com
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/virtio/virtio_ring.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 5cad9f41c238..cf7eccfe3469 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -1150,7 +1150,7 @@ bool virtqueue_is_broken(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
-	return vq->broken;
+	return READ_ONCE(vq->broken);
 }
 EXPORT_SYMBOL_GPL(virtqueue_is_broken);
 
@@ -1164,7 +1164,9 @@ void virtio_break_device(struct virtio_device *dev)
 
 	list_for_each_entry(_vq, &dev->vqs, list) {
 		struct vring_virtqueue *vq = to_vvq(_vq);
-		vq->broken = true;
+
+		/* Pairs with READ_ONCE() in virtqueue_is_broken(). */
+		WRITE_ONCE(vq->broken, true);
 	}
 }
 EXPORT_SYMBOL_GPL(virtio_break_device);
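[Editor's note: the compiler hazard the patch addresses can be illustrated outside the kernel. The following is a minimal userspace sketch, not the virtio code; the READ_ONCE()/WRITE_ONCE() macros are simplified stand-ins for the kernel's, and the flag and loop are hypothetical.]

#include <stdbool.h>

/* Simplified stand-ins for the kernel's READ_ONCE()/WRITE_ONCE():
 * the volatile cast forces a fresh load or store on every access
 * instead of letting the compiler cache the value in a register. */
#define READ_ONCE(x)     (*(const volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile typeof(x) *)&(x) = (v))

static bool broken; /* hypothetical stand-in for vq->broken */

/* Reader side: with a plain read of 'broken', the compiler could
 * legally hoist the load out of the loop and spin forever on a
 * stale value; READ_ONCE() forces a load per iteration. */
static void busy_wait_until_broken(void)
{
	while (!READ_ONCE(broken))
		; /* spin, e.g. polling for a command completion */
}

/* Writer side, running in another context.
 * Pairs with READ_ONCE() in busy_wait_until_broken(). */
static void mark_broken(void)
{
	WRITE_ONCE(broken, true);
}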
From: Neeraj Upadhyay <neeraju@codeaurora.org>
[ Upstream commit e74cfa91f42c50f7f649b0eca46aa049754ccdbd ]
As __vringh_iov() traverses a descriptor chain, it populates each descriptor entry into either the read or the write vring iov and increments that iov's ->used member. So, as we iterate over a descriptor chain, at any point the (riov/wriov)->used value gives the number of descriptor entries available to be read or written by the device. As all read iovs must precede the write iovs, wiov->used should be zero while we are traversing a read descriptor.

The current code checks wiov->i to figure out whether any previous entry in the current descriptor chain was a write descriptor. However, iov->i is only incremented when these vring iovs are consumed, at a later point, and remains 0 in __vringh_iov(). So, correct the check for read and write descriptor order to use wiov->used.
Acked-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Link: https://lore.kernel.org/r/1624591502-4827-1-git-send-email-neeraju@codeauror...
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/vhost/vringh.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
index d56736655dec..da47542496cc 100644
--- a/drivers/vhost/vringh.c
+++ b/drivers/vhost/vringh.c
@@ -329,7 +329,7 @@ __vringh_iov(struct vringh *vrh, u16 i,
 			iov = wiov;
 		else {
 			iov = riov;
-			if (unlikely(wiov && wiov->i)) {
+			if (unlikely(wiov && wiov->used)) {
 				vringh_bad("Readable desc %p after writable",
 					   &descs[i]);
 				err = -EINVAL;
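[Editor's note: to make the ->used vs. ->i invariant concrete, here is a minimal sketch. It is not the vringh code; the struct and the ordering check are illustrative stand-ins mirroring the corrected logic.]

#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for struct vringh_kiov: ->used counts entries
 * populated while parsing the chain; ->i is only advanced later, when
 * the entries are consumed, so it stays 0 throughout parsing. */
struct kiov {
	size_t i;    /* next entry to consume (still 0 while parsing) */
	size_t used; /* entries populated so far */
};

/* Returns false if a readable descriptor follows a writable one,
 * mirroring the corrected check in __vringh_iov(). */
static bool check_desc_order(bool desc_is_write, struct kiov *riov,
			     struct kiov *wiov)
{
	if (desc_is_write) {
		wiov->used++;
		return true;
	}
	/* A readable descriptor: all reads must precede all writes,
	 * so no write entry may have been populated yet. Testing
	 * wiov->i here would never fire, since ->i is still 0. */
	if (wiov && wiov->used)
		return false;
	riov->used++;
	return true;
}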
From: Gerd Rausch <gerd.rausch@oracle.com>
[ Upstream commit fb4b1373dcab086d0619c29310f0466a0b2ceb8a ]
Function "dma_map_sg" is entitled to merge adjacent entries and return a value smaller than what was passed as "nents".
Subsequently "ib_map_mr_sg" needs to work with this value ("sg_dma_len") rather than the original "nents" parameter ("sg_len").
This old RDS bug was exposed, and reliably causes kernel panics (when exercising RDMA operations with "rds-stress -D") on x86_64, starting with: commit c588072bba6b ("iommu/vt-d: Convert intel iommu driver to the iommu ops")
Simply put: Linux 5.11 and later.
Signed-off-by: Gerd Rausch <gerd.rausch@oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Link: https://lore.kernel.org/r/60efc69f-1f35-529d-a7ef-da0549cad143@oracle.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 net/rds/ib_frmr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/rds/ib_frmr.c b/net/rds/ib_frmr.c
index 3d9c4c6397c3..20d045faf07c 100644
--- a/net/rds/ib_frmr.c
+++ b/net/rds/ib_frmr.c
@@ -112,9 +112,9 @@ static int rds_ib_post_reg_frmr(struct rds_ib_mr *ibmr)
 		cpu_relax();
 	}
 
-	ret = ib_map_mr_sg_zbva(frmr->mr, ibmr->sg, ibmr->sg_len,
+	ret = ib_map_mr_sg_zbva(frmr->mr, ibmr->sg, ibmr->sg_dma_len,
 				&off, PAGE_SIZE);
-	if (unlikely(ret != ibmr->sg_len))
+	if (unlikely(ret != ibmr->sg_dma_len))
 		return ret < 0 ? ret : -EINVAL;
 
 	/* Perform a WR for the fast_reg_mr. Each individual page
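[Editor's note: the general contract can be sketched as follows. dma_map_sg(), ib_map_mr_sg_zbva(), and PAGE_SIZE are real kernel APIs; the surrounding function and variable names are hypothetical, not the RDS code. Once dma_map_sg() returns, only its return value describes the mapped list.]

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <rdma/ib_verbs.h>

static int map_mr_sketch(struct ib_mr *mr, struct device *dev,
			 struct scatterlist *sg, int sg_len)
{
	unsigned int off = 0;
	int sg_dma_len, ret;

	/* dma_map_sg() may coalesce adjacent entries (e.g. behind an
	 * IOMMU) and return fewer entries than were passed in. */
	sg_dma_len = dma_map_sg(dev, sg, sg_len, DMA_BIDIRECTIONAL);
	if (!sg_dma_len)
		return -ENOMEM;

	/* From here on, only the mapped count is valid: passing the
	 * original sg_len would walk past the coalesced list. */
	ret = ib_map_mr_sg_zbva(mr, sg, sg_dma_len, &off, PAGE_SIZE);
	if (ret != sg_dma_len)
		return ret < 0 ? ret : -EINVAL;

	return 0;
}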