From: Liming Sun <limings@nvidia.com>
[ Upstream commit 1c0e5701c5e792c090aef0e5b9b8923c334d9324 ]
The virtio framework uses wmb() when updating avail->idx. It
guarantees the write order, but not necessarily the load order
for the code accessing the memory. This commit adds a load barrier
after reading the avail->idx to make sure all the data in the
descriptor is visible. It also adds a barrier when returning the
packet to the virtio framework to make sure read/writes are
visible to the virtio code.
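For background, the ordering requirement can be sketched in plain C11
(illustration only, not part of the patch): a release fence on the
producer side stands in for the framework's wmb() before the avail->idx
update, and an acquire fence on the consumer side stands in for the
virtio_rmb() added below. The ring layout and the demo_* names are
simplified for this example and are not the driver's data structures.

/*
 * Illustration only: a simplified single-producer/single-consumer ring.
 * C11 fences stand in for the kernel barriers; names are made up.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 16

struct demo_ring {
	uint32_t data[RING_SIZE];	/* stands in for the descriptor payload */
	_Atomic uint16_t avail_idx;	/* producer-advanced index */
};

/* Producer: publish the payload, then advance the index. */
static void demo_produce(struct demo_ring *r, uint16_t next, uint32_t val)
{
	r->data[next % RING_SIZE] = val;
	/*
	 * Like the framework's wmb(): the payload store must be visible
	 * before the index store.
	 */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&r->avail_idx, (uint16_t)(next + 1),
			      memory_order_relaxed);
}

/* Consumer: only touch payload that the index says is published. */
static int demo_consume(struct demo_ring *r, uint16_t next, uint32_t *val)
{
	if (next == atomic_load_explicit(&r->avail_idx, memory_order_relaxed))
		return 0;	/* nothing new */
	/*
	 * Like the added virtio_rmb(): keep the payload load below from
	 * being ordered before the index load above.
	 */
	atomic_thread_fence(memory_order_acquire);
	*val = r->data[next % RING_SIZE];
	return 1;
}

int main(void)
{
	struct demo_ring r = { .avail_idx = 0 };
	uint32_t v;

	demo_produce(&r, 0, 42);
	if (demo_consume(&r, 0, &v))
		printf("consumed %u\n", (unsigned int)v);
	return 0;
}

The patch below applies the same idea with the kernel's
virtio_rmb()/virtio_mb() helpers.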
Fixes: 1357dfd7261f ("platform/mellanox: Add TmFifo driver for Mellanox BlueField Soc")
Signed-off-by: Liming Sun <limings@nvidia.com>
Reviewed-by: Vadim Pasternak <vadimp@nvidia.com>
Link: https://lore.kernel.org/r/1620433812-17911-1-git-send-email-limings@nvidia.c...
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/platform/mellanox/mlxbf-tmfifo.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c
index bbc4e71a16ff..38800e86ed8a 100644
--- a/drivers/platform/mellanox/mlxbf-tmfifo.c
+++ b/drivers/platform/mellanox/mlxbf-tmfifo.c
@@ -294,6 +294,9 @@ mlxbf_tmfifo_get_next_desc(struct mlxbf_tmfifo_vring *vring)
 	if (vring->next_avail == virtio16_to_cpu(vdev, vr->avail->idx))
 		return NULL;
 
+	/* Make sure 'avail->idx' is visible already. */
+	virtio_rmb(false);
+
 	idx = vring->next_avail % vr->num;
 	head = virtio16_to_cpu(vdev, vr->avail->ring[idx]);
 	if (WARN_ON(head >= vr->num))
@@ -322,7 +325,7 @@ static void mlxbf_tmfifo_release_desc(struct mlxbf_tmfifo_vring *vring,
 	 * done or not. Add a memory barrier here to make sure the update above
 	 * completes before updating the idx.
 	 */
-	mb();
+	virtio_mb(false);
 	vr->used->idx = cpu_to_virtio16(vdev, vr_idx + 1);
 }
 
@@ -733,6 +736,12 @@ static bool mlxbf_tmfifo_rxtx_one_desc(struct mlxbf_tmfifo_vring *vring,
 		desc = NULL;
 		fifo->vring[is_rx] = NULL;
 
+		/*
+		 * Make sure the load/store are in order before
+		 * returning back to virtio.
+		 */
+		virtio_mb(false);
+
 		/* Notify upper layer that packet is done. */
 		spin_lock_irqsave(&fifo->spin_lock[is_rx], flags);
 		vring_interrupt(0, vring->vq);