Use dma_resv_wait() instead of extracting the exclusive fence and waiting on it manually.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: Maor Gottlieb <maorg@nvidia.com>
Cc: Gal Pressman <galpress@amazon.com>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
---
 drivers/infiniband/core/umem_dmabuf.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c
index f0760741f281..d32cd7538835 100644
--- a/drivers/infiniband/core/umem_dmabuf.c
+++ b/drivers/infiniband/core/umem_dmabuf.c
@@ -16,7 +16,6 @@ int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf)
 {
 	struct sg_table *sgt;
 	struct scatterlist *sg;
-	struct dma_fence *fence;
 	unsigned long start, end, cur = 0;
 	unsigned int nmap = 0;
 	int i;
@@ -68,11 +67,8 @@ int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf)
 	 * may be not up-to-date. Wait for the exporter to finish
 	 * the migration.
 	 */
-	fence = dma_resv_excl_fence(umem_dmabuf->attach->dmabuf->resv);
-	if (fence)
-		return dma_fence_wait(fence, false);
-
-	return 0;
+	return dma_resv_wait_timeout(umem_dmabuf->attach->dmabuf->resv, false,
+				     false, MAX_SCHEDULE_TIMEOUT);
 }
 EXPORT_SYMBOL(ib_umem_dmabuf_map_pages);
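[For readers outside the kernel, the shape of this refactor can be modeled in plain user-space C. The sketch below is a toy, not kernel code: `toy_fence`, `toy_resv`, and both helpers are hypothetical stand-ins for `dma_fence`, `dma_resv`, and the real wait functions.]

```c
#include <stddef.h>

/* Toy user-space model of the refactor above -- every name here is a
 * hypothetical stand-in, not the real dma_fence/dma_resv API. */

struct toy_fence { int signaled; };
struct toy_resv  { struct toy_fence *excl; };

/* Old pattern: the caller reaches into the container, extracts the
 * exclusive fence, and waits on it manually (NULL means nothing to
 * wait for). Every caller has to know how fences are stored. */
static long wait_old_style(struct toy_resv *resv)
{
	struct toy_fence *fence = resv->excl;

	if (fence) {
		fence->signaled = 1;	/* stand-in for dma_fence_wait() */
		return 0;
	}
	return 0;
}

/* New pattern: one helper owns the "which fences, and how do I wait
 * on them" logic, so callers never touch the internal storage.
 * Mirroring dma_resv_wait_timeout(), it returns the remaining
 * timeout on success. */
static long toy_resv_wait_timeout(struct toy_resv *resv, int wait_all,
				  int intr, long timeout)
{
	(void)wait_all;
	(void)intr;
	if (resv->excl)
		resv->excl->signaled = 1;	/* stand-in for waiting */
	return timeout;
}
```

[The interesting difference is who knows about the fence layout: in the old pattern every caller does; in the new one only the helper does, which is what lets the layout change later.]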
On Mon, Mar 21, 2022 at 02:58:37PM +0100, Christian König wrote:
> Use dma_resv_wait() instead of extracting the exclusive fence and
> waiting on it manually.
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Jason Gunthorpe <jgg@ziepe.ca>
Jason, can you ack this for merging through drm trees please?
Thanks, Daniel
> Cc: Leon Romanovsky <leon@kernel.org>
> Cc: Maor Gottlieb <maorg@nvidia.com>
> Cc: Gal Pressman <galpress@amazon.com>
> Cc: linux-media@vger.kernel.org
> Cc: linaro-mm-sig@lists.linaro.org
> ---
>  drivers/infiniband/core/umem_dmabuf.c | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c
> index f0760741f281..d32cd7538835 100644
> --- a/drivers/infiniband/core/umem_dmabuf.c
> +++ b/drivers/infiniband/core/umem_dmabuf.c
> @@ -16,7 +16,6 @@ int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf)
>  {
>  	struct sg_table *sgt;
>  	struct scatterlist *sg;
> -	struct dma_fence *fence;
>  	unsigned long start, end, cur = 0;
>  	unsigned int nmap = 0;
>  	int i;
> @@ -68,11 +67,8 @@ int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf)
>  	 * may be not up-to-date. Wait for the exporter to finish
>  	 * the migration.
>  	 */
> -	fence = dma_resv_excl_fence(umem_dmabuf->attach->dmabuf->resv);
> -	if (fence)
> -		return dma_fence_wait(fence, false);
> -
> -	return 0;
> +	return dma_resv_wait_timeout(umem_dmabuf->attach->dmabuf->resv, false,
> +				     false, MAX_SCHEDULE_TIMEOUT);
>  }
>  EXPORT_SYMBOL(ib_umem_dmabuf_map_pages);
> --
> 2.25.1
On Wed, Mar 23, 2022 at 02:22:01PM +0100, Daniel Vetter wrote:
> On Mon, Mar 21, 2022 at 02:58:37PM +0100, Christian König wrote:
> > Use dma_resv_wait() instead of extracting the exclusive fence and
> > waiting on it manually.
> >
> > Signed-off-by: Christian König <christian.koenig@amd.com>
> > Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > Cc: Jason Gunthorpe <jgg@ziepe.ca>
>
> Jason, can you ack this for merging through drm trees please?
Sure, it looks trivial, but I didn't see the whole series:
Acked-by: Jason Gunthorpe <jgg@nvidia.com>
Jason
On Wed, 23 Mar 2022 at 17:32, Jason Gunthorpe <jgg@ziepe.ca> wrote:
> On Wed, Mar 23, 2022 at 02:22:01PM +0100, Daniel Vetter wrote:
> > On Mon, Mar 21, 2022 at 02:58:37PM +0100, Christian König wrote:
> > > Use dma_resv_wait() instead of extracting the exclusive fence and
> > > waiting on it manually.
> > >
> > > Signed-off-by: Christian König <christian.koenig@amd.com>
> > > Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > > Cc: Jason Gunthorpe <jgg@ziepe.ca>
> >
> > Jason, can you ack this for merging through drm trees please?
>
> Sure, it looks trivial, but I didn't see the whole series:
>
> Acked-by: Jason Gunthorpe <jgg@nvidia.com>
The entire series reworks how dma_resv stores fences (and what exactly they mean), which is why we need to get users away from some of these low-level accessors and towards functions at a slightly higher level.
-Daniel
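[Daniel's point about higher-level functions can be illustrated with a hedged sketch, using hypothetical names rather than the real dma_resv layout: because callers only go through a wait helper, the container can switch from a single exclusive-fence pointer to a list of fences without any caller changing.]

```c
#include <stddef.h>

/* Hypothetical sketch of why a higher-level wait helper matters:
 * this container stores several fences instead of one exclusive
 * pointer, yet callers of resv_wait_timeout() are unaffected
 * because they never touched the storage directly. Names are
 * invented for illustration, not the real dma_resv API. */

#define MAX_FENCES 4

struct fence { int signaled; };

struct resv {
	struct fence *fences[MAX_FENCES];
	int num_fences;
};

static long resv_wait_timeout(struct resv *resv, long timeout)
{
	int i;

	/* Stand-in for blocking until every stored fence signals. */
	for (i = 0; i < resv->num_fences; i++)
		resv->fences[i]->signaled = 1;
	return timeout;	/* remaining time; > 0 means success */
}
```

[A caller that had open-coded "extract the exclusive fence and wait" would have broken the moment the storage changed; a caller of the helper does not.]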