On Wed, Jun 23, 2021 at 06:47:37PM +0200, Boris Brezillon wrote:
On Tue, 22 Jun 2021 18:55:02 +0200 Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
Currently this has no practical relevance, I think, because there aren't many people who can pull off a setup with panfrost and another gpu in the same system. But the rules are that if you're setting an exclusive fence, indicating a gpu write access in the implicit fencing system, then you need to wait for all fences, not just the previous exclusive fence.
panfrost against itself has no problem, because it always sets the exclusive fence (but that's probably something that will need to be fixed for vulkan and/or multi-engine gpus, or you'll suffer badly). There's also no problem with that against display.
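For reference, the reason panfrost is always on the write side is that job submission attaches the job's done fence as the exclusive fence on every BO. From memory the attach side looks roughly like the sketch below (pre-existing code in panfrost_job.c, not touched by this patch, details possibly off):

static void panfrost_attach_object_fences(struct drm_gem_object **bos,
					  int bo_count,
					  struct dma_fence *fence)
{
	int i;

	/*
	 * The job's fence becomes the exclusive (write) fence on every
	 * BO; the current uapi has no way to mark a BO as read-only.
	 */
	for (i = 0; i < bo_count; i++)
		dma_resv_add_excl_fence(bos[i]->resv, fence);
}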
With the prep work done to switch over to the dependency helpers, this is now a one-liner.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org

Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>

Pushed the 3 panfrost patches to drm-misc-next, thanks for reviewing
them.
-Daniel
 drivers/gpu/drm/panfrost/panfrost_job.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index 71cd43fa1b36..ef004d587dc4 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -203,9 +203,8 @@ static int panfrost_acquire_object_fences(struct drm_gem_object **bos,
 	int i, ret;
 
 	for (i = 0; i < bo_count; i++) {
-		struct dma_fence *fence = dma_resv_get_excl_unlocked(bos[i]->resv);
-
-		ret = drm_gem_fence_array_add(deps, fence);
+		/* panfrost always uses write mode in its current uapi */
+		ret = drm_gem_fence_array_add_implicit(deps, bos[i], true);
 		if (ret)
 			return ret;
 	}
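The write=true argument is what makes this single call sufficient: drm_gem_fence_array_add_implicit() implements the read-vs-write rule from the commit message. The sketch below is from memory rather than a copy of the helper source (the dma_resv lookup functions were in the middle of being renamed around this time), but the gist is:

int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
				     struct drm_gem_object *obj,
				     bool write)
{
	struct dma_fence **fences;
	unsigned int i, fence_count;
	int ret;

	if (!write) {
		/* Read access: only the current exclusive (write) fence. */
		struct dma_fence *fence =
			dma_resv_get_excl_unlocked(obj->resv);

		return drm_gem_fence_array_add(fence_array, fence);
	}

	/*
	 * Write access: all fences. Passing NULL for the exclusive-fence
	 * slot folds the exclusive fence into the returned array.
	 */
	ret = dma_resv_get_fences(obj->resv, NULL, &fence_count, &fences);
	if (ret || !fence_count)
		return ret;

	for (i = 0; i < fence_count; i++) {
		if (!ret)
			/* The array add takes over the fence reference. */
			ret = drm_gem_fence_array_add(fence_array, fences[i]);
		else
			dma_fence_put(fences[i]);
	}

	kfree(fences);
	return ret;
}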