Hi guys,
as discussed before, this set of patches completely reworks the dma_resv semantics and spreads the new handling over all existing drivers and users.
First of all, this drops the DAG approach because it requires that every single driver implements those relatively complicated rules correctly, and any violation immediately leads to either use-after-free memory corruption or even more severe security problems.
Instead we just keep all fences around until they are signaled. Only fences with the same context are assumed to signal in the correct order, since this is exercised elsewhere as well. Replacing fences is now only supported for hardware mechanisms like VM page table updates, where the hardware can guarantee that the resource can't be accessed any more.
Then the concept of a single exclusive fence and multiple shared fences is dropped as well.
Instead the dma_resv object is now just a container for dma_fence objects, where each fence has associated usage flags. Those usage flags describe how the operation represented by the dma_fence object uses the resource protected by the dma_resv object. This allows us to add multiple fences for each usage type.
In addition to the existing WRITE/READ usages this patch set also adds the new KERNEL and OTHER usages. The KERNEL usage is used in cases where the kernel needs to do some operation on the resource protected by the dma_resv object, like copies or clears. Those fences are mandatory to wait for when dynamic memory management is used.
The OTHER usage is for cases where we don't want the operation represented by the dma_fence object to participate in any implicit sync, but it still needs to be respected by the kernel memory management. Examples are VM page table updates and preemption fences.
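To make that concrete, a rough sketch of what those per-fence usage flags could look like (names purely illustrative, not the final interface):

enum dma_resv_usage {
        DMA_RESV_USAGE_KERNEL,  /* kernel copies/clears, mandatory for dynamic memory management */
        DMA_RESV_USAGE_WRITE,   /* implicitly synchronized writes */
        DMA_RESV_USAGE_READ,    /* implicitly synchronized reads */
        DMA_RESV_USAGE_OTHER,   /* no implicit sync, e.g. VM page table updates, preemption fences */
};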
While doing this the new implementation cleans up existing workarounds all over the place, especially in amdgpu and TTM. Surprisingly I also found two use cases for the KERNEL/OTHER usage in i915 and Nouveau; those might need more thought.
In general the existing functionality should be preserved; the only downside is that we now always need to reserve a slot before adding a fence. The newly added calls to the reservation function can probably use some more cleanup.
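For reference, the reserve-then-add pattern now required everywhere looks roughly like the following sketch (the function name is made up, and the OOM fallback mirrors what the amdgpu and vmwgfx patches below do):

#include <linux/dma-fence.h>
#include <linux/dma-resv.h>

/* Sketch: adding a fence now always needs a prior, possibly failing,
 * slot reservation. The caller is assumed to hold the dma_resv lock.
 */
static void example_attach_fence(struct dma_resv *resv,
                                 struct dma_fence *fence, bool shared)
{
        if (dma_resv_reserve_shared(resv, 1)) {
                /* Last resort on OOM: block for the fence instead of adding it. */
                dma_fence_wait(fence, false);
                return;
        }

        if (shared)
                dma_resv_add_shared_fence(resv, fence);
        else
                dma_resv_add_excl_fence(resv, fence);
}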
TODOs: Testing, testing, testing, and double-checking the newly added kerneldoc for typos.
Please review and/or comment, Christian.
Partially revert commit 5f319c5c21b5909abb43d8aadc92a8aa549ee443.
First of all, calling dma_fence_enable_sw_signaling() under just the RCU read lock like this is illegal, since we don't hold a reference to the fence in question and can crash badly.
Then the code doesn't seem to have the intended effect, since only the exclusive fence is handled while the KFD fences are always added as shared fences.
Only keep the handling to throw away the content of SVM BOs.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 9 --------- 1 file changed, 9 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c index eab4380f28e5..c15687ce67c4 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c @@ -116,17 +116,8 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
abo = ttm_to_amdgpu_bo(bo); if (abo->flags & AMDGPU_AMDKFD_CREATE_SVM_BO) { - struct dma_fence *fence; - struct dma_resv *resv = &bo->base._resv; - - rcu_read_lock(); - fence = rcu_dereference(resv->fence_excl); - if (fence && !fence->ops->signaled) - dma_fence_enable_sw_signaling(fence); - placement->num_placement = 0; placement->num_busy_placement = 0; - rcu_read_unlock(); return; }
The i915 driver implements a prune function which is called when it is very likely that the fences inside the dma_resv object can be removed because they are all signaled. TTM does something similar after waiting for the dma_resv to be idle.
Move those functions into the dma-resv.c code since the behavior of pruning fences is something internal to the object.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/dma-buf/dma-resv.c | 32 ++++++++++++++++++++ drivers/gpu/drm/i915/Makefile | 1 - drivers/gpu/drm/i915/dma_resv_utils.c | 17 ----------- drivers/gpu/drm/i915/dma_resv_utils.h | 13 -------- drivers/gpu/drm/i915/gem/i915_gem_shrinker.c | 3 +- drivers/gpu/drm/i915/gem/i915_gem_wait.c | 3 +- drivers/gpu/drm/ttm/ttm_bo.c | 2 +- include/linux/dma-resv.h | 2 ++ 8 files changed, 37 insertions(+), 36 deletions(-) delete mode 100644 drivers/gpu/drm/i915/dma_resv_utils.c delete mode 100644 drivers/gpu/drm/i915/dma_resv_utils.h
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index ff3c0558b3b8..f6499e87963c 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -324,6 +324,38 @@ void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence) } EXPORT_SYMBOL(dma_resv_add_excl_fence);
+/** + * dma_resv_prune - remove signaled fences + * @obj: The dma_resv object to prune + * + * Remove all the signaled fences from the dma_resv object. + */ +void dma_resv_prune(struct dma_resv *obj) +{ + dma_resv_assert_held(obj); + + if (dma_resv_test_signaled(obj, true)) + dma_resv_add_excl_fence(obj, NULL); +} +EXPORT_SYMBOL(dma_resv_prune_unlocked); + +/** + * dma_resv_prune_unlocked - try to remove signaled fences + * @obj: The dma_resv object to prune + * + * Try to lock the object, test if it is signaled and if yes then remove all the + * signaled fences. + */ +void dma_resv_prune_unlocked(struct dma_resv *obj) +{ + if (!dma_resv_trylock(obj)) + return; + + dma_resv_prune(obj); + dma_resv_unlock(obj); +} +EXPORT_SYMBOL(dma_resv_prune); + /** * dma_resv_iter_restart_unlocked - restart the unlocked iterator * @cursor: The dma_resv_iter object to restart diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile index 660bb03de6fc..5c1af130cb6d 100644 --- a/drivers/gpu/drm/i915/Makefile +++ b/drivers/gpu/drm/i915/Makefile @@ -60,7 +60,6 @@ i915-y += i915_drv.o \
# core library code i915-y += \ - dma_resv_utils.o \ i915_memcpy.o \ i915_mm.o \ i915_sw_fence.o \ diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c deleted file mode 100644 index 7df91b7e4ca8..000000000000 --- a/drivers/gpu/drm/i915/dma_resv_utils.c +++ /dev/null @@ -1,17 +0,0 @@ -// SPDX-License-Identifier: MIT -/* - * Copyright © 2020 Intel Corporation - */ - -#include <linux/dma-resv.h> - -#include "dma_resv_utils.h" - -void dma_resv_prune(struct dma_resv *resv) -{ - if (dma_resv_trylock(resv)) { - if (dma_resv_test_signaled(resv, true)) - dma_resv_add_excl_fence(resv, NULL); - dma_resv_unlock(resv); - } -} diff --git a/drivers/gpu/drm/i915/dma_resv_utils.h b/drivers/gpu/drm/i915/dma_resv_utils.h deleted file mode 100644 index b9d8fb5f8367..000000000000 --- a/drivers/gpu/drm/i915/dma_resv_utils.h +++ /dev/null @@ -1,13 +0,0 @@ -/* SPDX-License-Identifier: MIT */ -/* - * Copyright © 2020 Intel Corporation - */ - -#ifndef DMA_RESV_UTILS_H -#define DMA_RESV_UTILS_H - -struct dma_resv; - -void dma_resv_prune(struct dma_resv *resv); - -#endif /* DMA_RESV_UTILS_H */ diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c index 5ab136ffdeb2..48029bbda682 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c @@ -15,7 +15,6 @@
#include "gt/intel_gt_requests.h"
-#include "dma_resv_utils.h" #include "i915_trace.h"
static bool swap_available(void) @@ -229,7 +228,7 @@ i915_gem_shrink(struct i915_gem_ww_ctx *ww, i915_gem_object_unlock(obj); }
- dma_resv_prune(obj->base.resv); + dma_resv_prune_unlocked(obj->base.resv);
scanned += obj->base.size >> PAGE_SHIFT; skip: diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c index f11325484110..75b58aa8d4a7 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c @@ -10,7 +10,6 @@
#include "gt/intel_engine.h"
-#include "dma_resv_utils.h" #include "i915_gem_ioctls.h" #include "i915_gem_object.h"
@@ -57,7 +56,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv, * signaled. */ if (timeout > 0) - dma_resv_prune(resv); + dma_resv_prune_unlocked(resv);
return ret; } diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index e4a20a3a5d16..e43f551594a8 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -1086,7 +1086,7 @@ int ttm_bo_wait(struct ttm_buffer_object *bo, if (timeout == 0) return -EBUSY;
- dma_resv_add_excl_fence(bo->base.resv, NULL); + dma_resv_prune(bo->base.resv); return 0; } EXPORT_SYMBOL(ttm_bo_wait); diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h index eebf04325b34..2594fef75f51 100644 --- a/include/linux/dma-resv.h +++ b/include/linux/dma-resv.h @@ -458,6 +458,8 @@ void dma_resv_fini(struct dma_resv *obj); int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences); void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence); void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence); +void dma_resv_prune(struct dma_resv *obj); +void dma_resv_prune_unlocked(struct dma_resv *obj); int dma_resv_get_fences(struct dma_resv *obj, struct dma_fence **pfence_excl, unsigned *pshared_count, struct dma_fence ***pshared); int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
Calling dma_resv_add_excl_fence() with a NULL fence and expecting that this frees up the fences is simply abuse of the internals of the dma_resv object.
Rework how pruning fences works and make the fence parameter mandatory.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/dma-buf/dma-resv.c | 39 ++++++++++++++++++++++++++++++++++---- 1 file changed, 35 insertions(+), 4 deletions(-)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index f6499e87963c..e627a4274ff6 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -96,6 +96,34 @@ static void dma_resv_list_free(struct dma_resv_list *list) kfree_rcu(list, rcu); }
+/** + * dma_resv_list_prune - drop all signaled fences + * @list: list to check for signaled fences + * @obj: dma_resv object for lockdep + * + * Replace all the signaled fences with the stub fence to free them up. + */ +static void dma_resv_list_prune(struct dma_resv_list *list, + struct dma_resv *obj) +{ + unsigned int i; + + if (!list) + return; + + for (i = 0; i < list->shared_count; ++i) { + struct dma_fence *fence; + + fence = rcu_dereference_protected(list->shared[i], + dma_resv_held(obj)); + if (!dma_fence_is_signaled(fence)) + continue; + + RCU_INIT_POINTER(list->shared[i], dma_fence_get_stub()); + dma_fence_put(fence); + } +} + /** * dma_resv_init - initialize a reservation object * @obj: the reservation object @@ -305,8 +333,7 @@ void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence) if (old) i = old->shared_count;
- if (fence) - dma_fence_get(fence); + dma_fence_get(fence);
write_seqcount_begin(&obj->seq); /* write_seqcount_begin provides the necessary memory barrier */ @@ -334,8 +361,12 @@ void dma_resv_prune(struct dma_resv *obj) { dma_resv_assert_held(obj);
- if (dma_resv_test_signaled(obj, true)) - dma_resv_add_excl_fence(obj, NULL); + write_seqcount_begin(&obj->seq); + if (obj->fence_excl && dma_fence_is_signaled(obj->fence_excl)) + dma_fence_put(rcu_replace_pointer(obj->fence_excl, NULL, + dma_resv_held(obj))); + dma_resv_list_prune(dma_resv_shared_list(obj), obj); + write_seqcount_end(&obj->seq); } EXPORT_SYMBOL(dma_resv_prune_unlocked);
I'm not sure why it is useful to know the number of fences in the reservation object, but we try to avoid exposing the dma_resv_shared_list() function.
So use the iterator instead. If more information is desired we could use dma_resv_describe() as well.
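If more than a bare count is ever wanted, the dma_resv_describe() alternative would look roughly like this in qxl_debugfs_buffers_info() (sketch only, taking the reservation lock around the dump):

        /* Sketch: dump all fences of the BO instead of only counting them. */
        dma_resv_lock(bo->tbo.base.resv, NULL);
        dma_resv_describe(bo->tbo.base.resv, m);
        dma_resv_unlock(bo->tbo.base.resv);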
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/qxl/qxl_debugfs.c | 17 ++++++++++------- 1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/qxl/qxl_debugfs.c b/drivers/gpu/drm/qxl/qxl_debugfs.c index 1f9a59601bb1..6a36b0fd845c 100644 --- a/drivers/gpu/drm/qxl/qxl_debugfs.c +++ b/drivers/gpu/drm/qxl/qxl_debugfs.c @@ -57,13 +57,16 @@ qxl_debugfs_buffers_info(struct seq_file *m, void *data) struct qxl_bo *bo;
list_for_each_entry(bo, &qdev->gem.objects, list) { - struct dma_resv_list *fobj; - int rel; - - rcu_read_lock(); - fobj = dma_resv_shared_list(bo->tbo.base.resv); - rel = fobj ? fobj->shared_count : 0; - rcu_read_unlock(); + struct dma_resv_iter cursor; + struct dma_fence *fence; + int rel = 0; + + dma_resv_iter_begin(&cursor, bo->tbo.base.resv, true); + dma_resv_for_each_fence_unlocked(&cursor, fence) { + if (dma_resv_iter_is_restarted(&cursor)) + rel = 0; + ++rel; + }
seq_printf(m, "size %ld, pc %d, num releases %d\n", (unsigned long)bo->tbo.base.size,
This function allows replacing fences in the shared fence list when we can guarantee that either the operation represented by the original fence has finished, or that it no longer accesses the resources protected by the dma_resv object by the time the new fence finishes.
Then use this function in the amdkfd code when BOs are unmapped from the process.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/dma-buf/dma-resv.c | 43 ++++++++++++++++ .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 49 +++---------------- include/linux/dma-resv.h | 2 + 3 files changed, 52 insertions(+), 42 deletions(-)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index e627a4274ff6..0daed67cab0e 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -312,6 +312,49 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence) } EXPORT_SYMBOL(dma_resv_add_shared_fence);
+/** + * dma_resv_replace_fences - replace fences in the dma_resv obj + * @obj: the reservation object + * @context: the context of the fences to replace + * @replacement: the new fence to use instead + * + * Replace fences with a specified context with a new fence. Only valid if the + * operation represented by the original fences is completed or has no longer + * access to the resources protected by the dma_resv object when the new fence + * completes. + */ +void dma_resv_replace_fences(struct dma_resv *obj, uint64_t context, + struct dma_fence *replacement) +{ + struct dma_resv_list *list; + struct dma_fence *old; + unsigned int i; + + dma_resv_assert_held(obj); + + write_seqcount_begin(&obj->seq); + + old = dma_resv_excl_fence(obj); + if (old->context == context) { + RCU_INIT_POINTER(obj->fence_excl, dma_fence_get(replacement)); + dma_fence_put(old); + } + + list = dma_resv_shared_list(obj); + for (i = 0; list && i < list->shared_count; ++i) { + old = rcu_dereference_protected(list->shared[i], + dma_resv_held(obj)); + if (old->context != context) + continue; + + rcu_assign_pointer(list->shared[i], dma_fence_get(replacement)); + dma_fence_put(old); + } + + write_seqcount_end(&obj->seq); +} +EXPORT_SYMBOL(dma_resv_replace_fences); + /** * dma_resv_add_excl_fence - Add an exclusive fence. * @obj: the reservation object diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c index 71acd577803e..b558ef0f8c4a 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c @@ -236,53 +236,18 @@ void amdgpu_amdkfd_release_notify(struct amdgpu_bo *bo) static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo, struct amdgpu_amdkfd_fence *ef) { - struct dma_resv *resv = bo->tbo.base.resv; - struct dma_resv_list *old, *new; - unsigned int i, j, k; + struct dma_fence *replacement;
if (!ef) return -EINVAL;
- old = dma_resv_shared_list(resv); - if (!old) - return 0; - - new = kmalloc(struct_size(new, shared, old->shared_max), GFP_KERNEL); - if (!new) - return -ENOMEM; - - /* Go through all the shared fences in the resevation object and sort - * the interesting ones to the end of the list. + /* TODO: Instead of block before we should use the fence of the page + * table update and TLB flush here directly. */ - for (i = 0, j = old->shared_count, k = 0; i < old->shared_count; ++i) { - struct dma_fence *f; - - f = rcu_dereference_protected(old->shared[i], - dma_resv_held(resv)); - - if (f->context == ef->base.context) - RCU_INIT_POINTER(new->shared[--j], f); - else - RCU_INIT_POINTER(new->shared[k++], f); - } - new->shared_max = old->shared_max; - new->shared_count = k; - - /* Install the new fence list, seqcount provides the barriers */ - write_seqcount_begin(&resv->seq); - RCU_INIT_POINTER(resv->fence, new); - write_seqcount_end(&resv->seq); - - /* Drop the references to the removed fences or move them to ef_list */ - for (i = j; i < old->shared_count; ++i) { - struct dma_fence *f; - - f = rcu_dereference_protected(new->shared[i], - dma_resv_held(resv)); - dma_fence_put(f); - } - kfree_rcu(old, rcu); - + replacement = dma_fence_get_stub(); + dma_resv_replace_fences(bo->tbo.base.resv, ef->base.context, + replacement); + dma_fence_put(replacement); return 0; }
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h index 2594fef75f51..0eb0c08c51c9 100644 --- a/include/linux/dma-resv.h +++ b/include/linux/dma-resv.h @@ -457,6 +457,8 @@ void dma_resv_init(struct dma_resv *obj); void dma_resv_fini(struct dma_resv *obj); int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences); void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence); +void dma_resv_replace_fences(struct dma_resv *obj, uint64_t context, + struct dma_fence *fence); void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence); void dma_resv_prune(struct dma_resv *obj); void dma_resv_prune_unlocked(struct dma_resv *obj);
Drivers should never touch this directly.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/dma-buf/dma-resv.c | 26 ++++++++++++++++++++++++++ include/linux/dma-resv.h | 26 +------------------------- 2 files changed, 27 insertions(+), 25 deletions(-)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index 0daed67cab0e..611bba5528ad 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -56,6 +56,19 @@ DEFINE_WD_CLASS(reservation_ww_class); EXPORT_SYMBOL(reservation_ww_class);
+/** + * struct dma_resv_list - a list of shared fences + * @rcu: for internal use + * @shared_count: table of shared fences + * @shared_max: for growing shared fence table + * @shared: shared fence table + */ +struct dma_resv_list { + struct rcu_head rcu; + u32 shared_count, shared_max; + struct dma_fence __rcu *shared[]; +}; + /** * dma_resv_list_alloc - allocate fence list * @shared_max: number of fences we need space for @@ -161,6 +174,19 @@ void dma_resv_fini(struct dma_resv *obj) } EXPORT_SYMBOL(dma_resv_fini);
+/** + * dma_resv_shared_list - get the reservation object's shared fence list + * @obj: the reservation object + * + * Returns the shared fence list. Caller must either hold the objects + * through dma_resv_lock() or the RCU read side lock through rcu_read_lock(), + * or one of the variants of each + */ +static inline struct dma_resv_list *dma_resv_shared_list(struct dma_resv *obj) +{ + return rcu_dereference_check(obj->fence, dma_resv_held(obj)); +} + /** * dma_resv_reserve_shared - Reserve space to add shared fences to * a dma_resv. diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h index 0eb0c08c51c9..e0cec3a57c08 100644 --- a/include/linux/dma-resv.h +++ b/include/linux/dma-resv.h @@ -47,18 +47,7 @@
extern struct ww_class reservation_ww_class;
-/** - * struct dma_resv_list - a list of shared fences - * @rcu: for internal use - * @shared_count: table of shared fences - * @shared_max: for growing shared fence table - * @shared: shared fence table - */ -struct dma_resv_list { - struct rcu_head rcu; - u32 shared_count, shared_max; - struct dma_fence __rcu *shared[]; -}; +struct dma_resv_list;
/** * struct dma_resv - a reservation object manages fences for a buffer @@ -440,19 +429,6 @@ dma_resv_excl_fence(struct dma_resv *obj) return rcu_dereference_check(obj->fence_excl, dma_resv_held(obj)); }
-/** - * dma_resv_shared_list - get the reservation object's shared fence list - * @obj: the reservation object - * - * Returns the shared fence list. Caller must either hold the objects - * through dma_resv_lock() or the RCU read side lock through rcu_read_lock(), - * or one of the variants of each - */ -static inline struct dma_resv_list *dma_resv_shared_list(struct dma_resv *obj) -{ - return rcu_dereference_check(obj->fence, dma_resv_held(obj)); -} - void dma_resv_init(struct dma_resv *obj); void dma_resv_fini(struct dma_resv *obj); int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences);
Returning the exclusive fence separately is no longer used.
Instead add a write parameter to indicate the use case.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/dma-buf/dma-resv.c | 48 ++++++++------------ drivers/dma-buf/st-dma-resv.c | 26 ++--------- drivers/gpu/drm/amd/amdgpu/amdgpu_display.c | 6 ++- drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 2 +- drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c | 3 +- include/linux/dma-resv.h | 4 +- 6 files changed, 31 insertions(+), 58 deletions(-)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index 611bba5528ad..0a69f4b7e6b5 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -675,57 +675,45 @@ EXPORT_SYMBOL(dma_resv_copy_fences); * dma_resv_get_fences - Get an object's shared and exclusive * fences without update side lock held * @obj: the reservation object - * @fence_excl: the returned exclusive fence (or NULL) - * @shared_count: the number of shared fences returned - * @shared: the array of shared fence ptrs returned (array is krealloc'd to - * the required size, and must be freed by caller) - * - * Retrieve all fences from the reservation object. If the pointer for the - * exclusive fence is not specified the fence is put into the array of the - * shared fences as well. Returns either zero or -ENOMEM. + * @write: true if we should return all fences + * @num_fences: the number of fences returned + * @fences: the array of fence ptrs returned (array is krealloc'd to the + * required size, and must be freed by caller) + * + * Retrieve all fences from the reservation object. + * Returns either zero or -ENOMEM. */ -int dma_resv_get_fences(struct dma_resv *obj, struct dma_fence **fence_excl, - unsigned int *shared_count, struct dma_fence ***shared) +int dma_resv_get_fences(struct dma_resv *obj, bool write, + unsigned int *num_fences, struct dma_fence ***fences) { struct dma_resv_iter cursor; struct dma_fence *fence;
- *shared_count = 0; - *shared = NULL; - - if (fence_excl) - *fence_excl = NULL; + *num_fences = 0; + *fences = NULL;
- dma_resv_iter_begin(&cursor, obj, true); + dma_resv_iter_begin(&cursor, obj, write); dma_resv_for_each_fence_unlocked(&cursor, fence) {
if (dma_resv_iter_is_restarted(&cursor)) { unsigned int count;
- while (*shared_count) - dma_fence_put((*shared)[--(*shared_count)]); + while (*num_fences) + dma_fence_put((*fences)[--(*num_fences)]);
- if (fence_excl) - dma_fence_put(*fence_excl); - - count = cursor.shared_count; - count += fence_excl ? 0 : 1; + count = cursor.shared_count + 1;
/* Eventually re-allocate the array */ - *shared = krealloc_array(*shared, count, + *fences = krealloc_array(*fences, count, sizeof(void *), GFP_KERNEL); - if (count && !*shared) { + if (count && !*fences) { dma_resv_iter_end(&cursor); return -ENOMEM; } }
- dma_fence_get(fence); - if (dma_resv_iter_is_exclusive(&cursor) && fence_excl) - *fence_excl = fence; - else - (*shared)[(*shared_count)++] = fence; + (*fences)[(*num_fences)++] = dma_fence_get(fence); } dma_resv_iter_end(&cursor);
diff --git a/drivers/dma-buf/st-dma-resv.c b/drivers/dma-buf/st-dma-resv.c index bc32b3eedcb6..cbe999c6e7a6 100644 --- a/drivers/dma-buf/st-dma-resv.c +++ b/drivers/dma-buf/st-dma-resv.c @@ -275,7 +275,7 @@ static int test_shared_for_each_unlocked(void *arg)
static int test_get_fences(void *arg, bool shared) { - struct dma_fence *f, *excl = NULL, **fences = NULL; + struct dma_fence *f, **fences = NULL; struct dma_resv resv; int r, i;
@@ -304,35 +304,19 @@ static int test_get_fences(void *arg, bool shared) } dma_resv_unlock(&resv);
- r = dma_resv_get_fences(&resv, &excl, &i, &fences); + r = dma_resv_get_fences(&resv, shared, &i, &fences); if (r) { pr_err("get_fences failed\n"); goto err_free; }
- if (shared) { - if (excl != NULL) { - pr_err("get_fences returned unexpected excl fence\n"); - goto err_free; - } - if (i != 1 || fences[0] != f) { - pr_err("get_fences returned unexpected shared fence\n"); - goto err_free; - } - } else { - if (excl != f) { - pr_err("get_fences returned unexpected excl fence\n"); - goto err_free; - } - if (i != 0) { - pr_err("get_fences returned unexpected shared fence\n"); - goto err_free; - } + if (i != 1 || fences[0] != f) { + pr_err("get_fences returned unexpected fence\n"); + goto err_free; }
dma_fence_signal(f); err_free: - dma_fence_put(excl); while (i--) dma_fence_put(fences[i]); kfree(fences); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c index 68108f151dad..d17e1c911689 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c @@ -200,8 +200,10 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc, goto unpin; }
- r = dma_resv_get_fences(new_abo->tbo.base.resv, NULL, - &work->shared_count, &work->shared); + /* TODO: Unify this with other drivers */ + r = dma_resv_get_fences(new_abo->tbo.base.resv, true, + &work->shared_count, + &work->shared); if (unlikely(r != 0)) { DRM_ERROR("failed to get fences for buffer\n"); goto unpin; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c index b7fb72bff2c1..be48487e2ca7 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c @@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv, unsigned count; int r;
- r = dma_resv_get_fences(resv, NULL, &count, &fences); + r = dma_resv_get_fences(resv, true, &count, &fences); if (r) goto fallback;
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c index b5e8ce86dbe7..64c90ff348f2 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c @@ -189,8 +189,7 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit) continue;
if (bo->flags & ETNA_SUBMIT_BO_WRITE) { - ret = dma_resv_get_fences(robj, NULL, - &bo->nr_shared, + ret = dma_resv_get_fences(robj, true, &bo->nr_shared, &bo->shared); if (ret) return ret; diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h index e0cec3a57c08..09b676b87c35 100644 --- a/include/linux/dma-resv.h +++ b/include/linux/dma-resv.h @@ -438,8 +438,8 @@ void dma_resv_replace_fences(struct dma_resv *obj, uint64_t context, void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence); void dma_resv_prune(struct dma_resv *obj); void dma_resv_prune_unlocked(struct dma_resv *obj); -int dma_resv_get_fences(struct dma_resv *obj, struct dma_fence **pfence_excl, - unsigned *pshared_count, struct dma_fence ***pshared); +int dma_resv_get_fences(struct dma_resv *obj, bool write, + unsigned int *num_fences, struct dma_fence ***fences); int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src); long dma_resv_wait_timeout(struct dma_resv *obj, bool wait_all, bool intr, unsigned long timeout);
Add a function to simplify getting a single fence for all the fences in the dma_resv object.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/dma-buf/dma-resv.c | 50 ++++++++++++++++++++++++++++++++++++++ include/linux/dma-resv.h | 2 ++ 2 files changed, 52 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index 0a69f4b7e6b5..f91ca023b550 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -34,6 +34,7 @@ */
#include <linux/dma-resv.h> +#include <linux/dma-fence-array.h> #include <linux/export.h> #include <linux/mm.h> #include <linux/sched/mm.h> @@ -721,6 +722,55 @@ int dma_resv_get_fences(struct dma_resv *obj, bool write, } EXPORT_SYMBOL_GPL(dma_resv_get_fences);
+/** + * dma_resv_get_singleton - Get a single fence for all the fences + * @obj: the reservation object + * @write: true if we should return all fences + * @fence: the resulting fence + * + * Get a single fence representing all the fences inside the resv object. + * Returns either 0 for success or -ENOMEM. + * + * Warning: This can't be used like this when adding the fence back to the resv + * object since that can lead to stack corruption when finalizing the + * dma_fence_array. + */ +int dma_resv_get_singleton(struct dma_resv *obj, bool write, + struct dma_fence **fence) +{ + struct dma_fence_array *array; + struct dma_fence **fences; + unsigned count; + int r; + + r = dma_resv_get_fences(obj, write, &count, &fences); + if (r) + return r; + + if (count == 0) { + *fence = NULL; + return 0; + } + + if (count == 1) { + *fence = fences[0]; + kfree(fences); + return 0; + } + + array = dma_fence_array_create(count, fences, + dma_fence_context_alloc(1), + 1, false); + if (!array) { + kfree(fences); + return -ENOMEM; + } + + *fence = &array->base; + return 0; +} +EXPORT_SYMBOL_GPL(dma_resv_get_singleton); + /** * dma_resv_wait_timeout - Wait on reservation's objects * shared and/or exclusive fences. diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h index 09b676b87c35..082f77b7bc63 100644 --- a/include/linux/dma-resv.h +++ b/include/linux/dma-resv.h @@ -440,6 +440,8 @@ void dma_resv_prune(struct dma_resv *obj); void dma_resv_prune_unlocked(struct dma_resv *obj); int dma_resv_get_fences(struct dma_resv *obj, bool write, unsigned int *num_fences, struct dma_fence ***fences); +int dma_resv_get_singleton(struct dma_resv *obj, bool write, + struct dma_fence **fence); int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src); long dma_resv_wait_timeout(struct dma_resv *obj, bool wait_all, bool intr, unsigned long timeout);
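As a quick illustration of the intended calling convention (a sketch, not part of the patch): the caller gets back a single fence reference, possibly a dma_fence_array wrapping all fences, and is responsible for dropping it.

#include <linux/dma-fence.h>
#include <linux/dma-resv.h>

/* Sketch: collapse all fences of a dma_resv object into one and wait for it. */
static int example_wait_for_all_fences(struct dma_resv *resv)
{
        struct dma_fence *fence;
        int r;

        r = dma_resv_get_singleton(resv, true, &fence);
        if (r)
                return r;

        if (fence) {
                dma_fence_wait(fence, false);
                dma_fence_put(fence);
        }
        return 0;
}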
Use dma_resv_wait_timeout() instead of extracting the exclusive fence and waiting on it manually.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/infiniband/core/umem_dmabuf.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c index f0760741f281..d32cd7538835 100644 --- a/drivers/infiniband/core/umem_dmabuf.c +++ b/drivers/infiniband/core/umem_dmabuf.c @@ -16,7 +16,6 @@ int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf) { struct sg_table *sgt; struct scatterlist *sg; - struct dma_fence *fence; unsigned long start, end, cur = 0; unsigned int nmap = 0; int i; @@ -68,11 +67,8 @@ int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf) * may be not up-to-date. Wait for the exporter to finish * the migration. */ - fence = dma_resv_excl_fence(umem_dmabuf->attach->dmabuf->resv); - if (fence) - return dma_fence_wait(fence, false); - - return 0; + return dma_resv_wait_timeout(umem_dmabuf->attach->dmabuf->resv, false, + false, MAX_SCHEDULE_TIMEOUT); } EXPORT_SYMBOL(ib_umem_dmabuf_map_pages);
We can get the excl fence together with the shared ones as well.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/etnaviv/etnaviv_gem.h | 1 - drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c | 14 +++++--------- drivers/gpu/drm/etnaviv/etnaviv_sched.c | 10 ---------- 3 files changed, 5 insertions(+), 20 deletions(-)
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.h b/drivers/gpu/drm/etnaviv/etnaviv_gem.h index 98e60df882b6..f596d743baa3 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.h @@ -80,7 +80,6 @@ struct etnaviv_gem_submit_bo { u64 va; struct etnaviv_gem_object *obj; struct etnaviv_vram_mapping *mapping; - struct dma_fence *excl; unsigned int nr_shared; struct dma_fence **shared; }; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c index 64c90ff348f2..4286dc93fdaa 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c @@ -188,15 +188,11 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit) if (submit->flags & ETNA_SUBMIT_NO_IMPLICIT) continue;
- if (bo->flags & ETNA_SUBMIT_BO_WRITE) { - ret = dma_resv_get_fences(robj, true, &bo->nr_shared, - &bo->shared); - if (ret) - return ret; - } else { - bo->excl = dma_fence_get(dma_resv_excl_fence(robj)); - } - + ret = dma_resv_get_fences(robj, + !!(bo->flags & ETNA_SUBMIT_BO_WRITE), + &bo->nr_shared, &bo->shared); + if (ret) + return ret; }
return ret; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c index 180bb633d5c5..8c038a363d15 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c @@ -39,16 +39,6 @@ etnaviv_sched_dependency(struct drm_sched_job *sched_job, struct etnaviv_gem_submit_bo *bo = &submit->bos[i]; int j;
- if (bo->excl) { - fence = bo->excl; - bo->excl = NULL; - - if (!dma_fence_is_signaled(fence)) - return fence; - - dma_fence_put(fence); - } - for (j = 0; j < bo->nr_shared; j++) { if (!bo->shared[j]) continue;
Instead use the new dma_resv_get_singleton function.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/nouveau/nouveau_bo.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index fa73fe57f97b..74f8652d2bd3 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -959,7 +959,14 @@ nouveau_bo_vm_cleanup(struct ttm_buffer_object *bo, { struct nouveau_drm *drm = nouveau_bdev(bo->bdev); struct drm_device *dev = drm->dev; - struct dma_fence *fence = dma_resv_excl_fence(bo->base.resv); + struct dma_fence *fence; + int ret; + + /* TODO: This is actually a memory management dependency */ + ret = dma_resv_get_singleton(bo->base.resv, false, &fence); + if (ret) + dma_resv_wait_timeout(bo->base.resv, false, false, + MAX_SCHEDULE_TIMEOUT);
nv10_bo_put_tile_region(dev, *old_tile, fence); *old_tile = new_tile;
Instead use the new dma_resv_get_singleton function.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c index 8d1e869cc196..23c3fc2cbf10 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c @@ -1168,8 +1168,10 @@ int vmw_resources_clean(struct vmw_buffer_object *vbo, pgoff_t start, vmw_bo_fence_single(bo, NULL); if (bo->moving) dma_fence_put(bo->moving); - bo->moving = dma_fence_get - (dma_resv_excl_fence(bo->base.resv)); + + /* TODO: This is actually a memory management dependency */ + return dma_resv_get_singleton(bo->base.resv, false, + &bo->moving); }
return 0;
Instead use the new dma_resv_get_singleton function.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/radeon/radeon_display.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c index 573154268d43..a6f875118f01 100644 --- a/drivers/gpu/drm/radeon/radeon_display.c +++ b/drivers/gpu/drm/radeon/radeon_display.c @@ -533,7 +533,12 @@ static int radeon_crtc_page_flip_target(struct drm_crtc *crtc, DRM_ERROR("failed to pin new rbo buffer before flip\n"); goto cleanup; } - work->fence = dma_fence_get(dma_resv_excl_fence(new_rbo->tbo.base.resv)); + r = dma_resv_get_singleton(new_rbo->tbo.base.resv, false, &work->fence); + if (r) { + radeon_bo_unreserve(new_rbo); + DRM_ERROR("failed to get new rbo buffer fences\n"); + goto cleanup; + } radeon_bo_get_tiling_flags(new_rbo, &tiling_flags, NULL); radeon_bo_unreserve(new_rbo);
This was added because of the now dropped shared-on-exclusive fence dependency.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 5 +---- drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 6 ------ 2 files changed, 1 insertion(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index 0311d799a010..53e407ea4c89 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -1275,14 +1275,11 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p, /* * Work around dma_resv shortcommings by wrapping up the * submission in a dma_fence_chain and add it as exclusive - * fence, but first add the submission as shared fence to make - * sure that shared fences never signal before the exclusive - * one. + * fence. */ dma_fence_chain_init(chain, dma_resv_excl_fence(resv), dma_fence_get(p->fence), 1);
- dma_resv_add_shared_fence(resv, p->fence); rcu_assign_pointer(resv->fence_excl, &chain->base); e->chain = NULL; } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c index a1e63ba4c54a..85d31d85c384 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c @@ -226,12 +226,6 @@ static void amdgpu_gem_object_close(struct drm_gem_object *obj, if (!amdgpu_vm_ready(vm)) goto out_unlock;
- fence = dma_resv_excl_fence(bo->tbo.base.resv); - if (fence) { - amdgpu_bo_fence(bo, fence, true); - fence = NULL; - } - r = amdgpu_vm_clear_freed(adev, vm, &fence); if (r || !fence) goto out_unlock;
Get the write fence using dma_resv_for_each_fence instead of accessing it manually.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index 53e407ea4c89..7facd614e50a 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -1268,6 +1268,8 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p, amdgpu_bo_list_for_each_entry(e, p->bo_list) { struct dma_resv *resv = e->tv.bo->base.resv; struct dma_fence_chain *chain = e->chain; + struct dma_resv_iter cursor; + struct dma_fence *fence;
if (!chain) continue; @@ -1277,9 +1279,10 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p, * submission in a dma_fence_chain and add it as exclusive * fence. */ - dma_fence_chain_init(chain, dma_resv_excl_fence(resv), - dma_fence_get(p->fence), 1); - + dma_resv_for_each_fence(&cursor, resv, false, fence) { + break; + } + dma_fence_chain_init(chain, fence, dma_fence_get(p->fence), 1); rcu_assign_pointer(resv->fence_excl, &chain->base); e->chain = NULL; }
Drivers should never touch this directly.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/dma-buf/dma-resv.c | 17 +++++++++++++++++ include/linux/dma-resv.h | 17 ----------------- 2 files changed, 17 insertions(+), 17 deletions(-)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index f91ca023b550..539b9b1df640 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -175,6 +175,23 @@ void dma_resv_fini(struct dma_resv *obj) } EXPORT_SYMBOL(dma_resv_fini);
+/** + * dma_resv_excl_fence - return the object's exclusive fence + * @obj: the reservation object + * + * Returns the exclusive fence (if any). Caller must either hold the objects + * through dma_resv_lock() or the RCU read side lock through rcu_read_lock(), + * or one of the variants of each + * + * RETURNS + * The exclusive fence or NULL + */ +static inline struct dma_fence * +dma_resv_excl_fence(struct dma_resv *obj) +{ + return rcu_dereference_check(obj->fence_excl, dma_resv_held(obj)); +} + /** * dma_resv_shared_list - get the reservation object's shared fence list * @obj: the reservation object diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h index 082f77b7bc63..062571c04bca 100644 --- a/include/linux/dma-resv.h +++ b/include/linux/dma-resv.h @@ -412,23 +412,6 @@ static inline void dma_resv_unlock(struct dma_resv *obj) ww_mutex_unlock(&obj->lock); }
-/** - * dma_resv_excl_fence - return the object's exclusive fence - * @obj: the reservation object - * - * Returns the exclusive fence (if any). Caller must either hold the objects - * through dma_resv_lock() or the RCU read side lock through rcu_read_lock(), - * or one of the variants of each - * - * RETURNS - * The exclusive fence or NULL - */ -static inline struct dma_fence * -dma_resv_excl_fence(struct dma_resv *obj) -{ - return rcu_dereference_check(obj->fence_excl, dma_resv_held(obj)); -} - void dma_resv_init(struct dma_resv *obj); void dma_resv_fini(struct dma_resv *obj); int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences);
So far we had the approach of using a directed acyclic graph (DAG) with the dma_resv object.
This turned out to have many downsides; in particular it means that every single driver and user of this interface needs to be aware of this restriction when adding fences. If the rules for the DAG are not followed, we end up with potentially hard to debug memory corruption, information leaks or even gigantic security holes because we allow userspace to access freed up memory.
Since we already took a step back from that by always looking at all fences we now go a step further and stop dropping the shared fences when a new exclusive one is added.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/dma-buf/dma-resv.c | 14 +------------- 1 file changed, 1 insertion(+), 13 deletions(-)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index 539b9b1df640..3b0001c5ff3a 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -411,29 +411,17 @@ EXPORT_SYMBOL(dma_resv_replace_fences); void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence) { struct dma_fence *old_fence = dma_resv_excl_fence(obj); - struct dma_resv_list *old; - u32 i = 0;
dma_resv_assert_held(obj);
- old = dma_resv_shared_list(obj); - if (old) - i = old->shared_count; - dma_fence_get(fence);
write_seqcount_begin(&obj->seq); /* write_seqcount_begin provides the necessary memory barrier */ RCU_INIT_POINTER(obj->fence_excl, fence); - if (old) - old->shared_count = 0; + dma_resv_list_prune(dma_resv_shared_list(obj), obj); write_seqcount_end(&obj->seq);
- /* inplace update, no shared fences */ - while (i--) - dma_fence_put(rcu_dereference_protected(old->shared[i], - dma_resv_held(obj))); - dma_fence_put(old_fence); } EXPORT_SYMBOL(dma_resv_add_excl_fence);
Audit all the users of dma_resv_add_excl_fence() and make sure they reserve a shared slot even when only trying to add an exclusive fence.
This is the next step towards handling the exclusive fence like a shared one.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/dma-buf/st-dma-resv.c | 64 +++++++++---------- drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 8 +++ drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c | 8 +-- drivers/gpu/drm/i915/gem/i915_gem_clflush.c | 3 +- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 8 +-- .../drm/i915/gem/selftests/i915_gem_migrate.c | 5 +- drivers/gpu/drm/i915/i915_vma.c | 6 ++ .../drm/i915/selftests/intel_memory_region.c | 7 ++ drivers/gpu/drm/lima/lima_gem.c | 10 ++- drivers/gpu/drm/msm/msm_gem_submit.c | 18 +++--- drivers/gpu/drm/nouveau/nouveau_fence.c | 9 +-- drivers/gpu/drm/panfrost/panfrost_job.c | 4 ++ drivers/gpu/drm/ttm/ttm_bo_util.c | 12 +++- drivers/gpu/drm/ttm/ttm_execbuf_util.c | 11 ++-- drivers/gpu/drm/v3d/v3d_gem.c | 15 +++-- drivers/gpu/drm/vgem/vgem_fence.c | 12 ++-- drivers/gpu/drm/virtio/virtgpu_gem.c | 9 +++ drivers/gpu/drm/vmwgfx/vmwgfx_bo.c | 16 +++-- 18 files changed, 133 insertions(+), 92 deletions(-)
diff --git a/drivers/dma-buf/st-dma-resv.c b/drivers/dma-buf/st-dma-resv.c index cbe999c6e7a6..f33bafc78693 100644 --- a/drivers/dma-buf/st-dma-resv.c +++ b/drivers/dma-buf/st-dma-resv.c @@ -75,17 +75,16 @@ static int test_signaling(void *arg, bool shared) goto err_free; }
- if (shared) { - r = dma_resv_reserve_shared(&resv, 1); - if (r) { - pr_err("Resv shared slot allocation failed\n"); - goto err_unlock; - } + r = dma_resv_reserve_shared(&resv, 1); + if (r) { + pr_err("Resv shared slot allocation failed\n"); + goto err_unlock; + }
+ if (shared) dma_resv_add_shared_fence(&resv, f); - } else { + else dma_resv_add_excl_fence(&resv, f); - }
if (dma_resv_test_signaled(&resv, shared)) { pr_err("Resv unexpectedly signaled\n"); @@ -134,17 +133,16 @@ static int test_for_each(void *arg, bool shared) goto err_free; }
- if (shared) { - r = dma_resv_reserve_shared(&resv, 1); - if (r) { - pr_err("Resv shared slot allocation failed\n"); - goto err_unlock; - } + r = dma_resv_reserve_shared(&resv, 1); + if (r) { + pr_err("Resv shared slot allocation failed\n"); + goto err_unlock; + }
+ if (shared) dma_resv_add_shared_fence(&resv, f); - } else { + else dma_resv_add_excl_fence(&resv, f); - }
r = -ENOENT; dma_resv_for_each_fence(&cursor, &resv, shared, fence) { @@ -206,18 +204,17 @@ static int test_for_each_unlocked(void *arg, bool shared) goto err_free; }
- if (shared) { - r = dma_resv_reserve_shared(&resv, 1); - if (r) { - pr_err("Resv shared slot allocation failed\n"); - dma_resv_unlock(&resv); - goto err_free; - } + r = dma_resv_reserve_shared(&resv, 1); + if (r) { + pr_err("Resv shared slot allocation failed\n"); + dma_resv_unlock(&resv); + goto err_free; + }
+ if (shared) dma_resv_add_shared_fence(&resv, f); - } else { + else dma_resv_add_excl_fence(&resv, f); - } dma_resv_unlock(&resv);
r = -ENOENT; @@ -290,18 +287,17 @@ static int test_get_fences(void *arg, bool shared) goto err_resv; }
- if (shared) { - r = dma_resv_reserve_shared(&resv, 1); - if (r) { - pr_err("Resv shared slot allocation failed\n"); - dma_resv_unlock(&resv); - goto err_resv; - } + r = dma_resv_reserve_shared(&resv, 1); + if (r) { + pr_err("Resv shared slot allocation failed\n"); + dma_resv_unlock(&resv); + goto err_resv; + }
+ if (shared) dma_resv_add_shared_fence(&resv, f); - } else { + else dma_resv_add_excl_fence(&resv, f); - } dma_resv_unlock(&resv);
r = dma_resv_get_fences(&resv, shared, &i, &fences); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c index 4fcfc2313b8c..1becd4e7e463 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c @@ -1367,6 +1367,14 @@ void amdgpu_bo_fence(struct amdgpu_bo *bo, struct dma_fence *fence, bool shared) { struct dma_resv *resv = bo->tbo.base.resv; + int r; + + r = dma_resv_reserve_fences(resv, 1); + if (r) { + /* As last resort on OOM we block for the fence */ + dma_fence_wait(fence, false); + return; + }
if (shared) dma_resv_add_shared_fence(resv, fence); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c index 4286dc93fdaa..d4a7073190ec 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c @@ -179,11 +179,9 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit) struct etnaviv_gem_submit_bo *bo = &submit->bos[i]; struct dma_resv *robj = bo->obj->base.resv;
- if (!(bo->flags & ETNA_SUBMIT_BO_WRITE)) { - ret = dma_resv_reserve_shared(robj, 1); - if (ret) - return ret; - } + ret = dma_resv_reserve_shared(robj, 1); + if (ret) + return ret;
if (submit->flags & ETNA_SUBMIT_NO_IMPLICIT) continue; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c index f0435c6feb68..fc57ab914b60 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c @@ -100,7 +100,8 @@ bool i915_gem_clflush_object(struct drm_i915_gem_object *obj, trace_i915_gem_object_clflush(obj);
clflush = NULL; - if (!(flags & I915_CLFLUSH_SYNC)) + if (!(flags & I915_CLFLUSH_SYNC) && + dma_resv_reserve_shared(obj->base.resv, 1) == 0) clflush = clflush_work_create(obj); if (clflush) { i915_sw_fence_await_reservation(&clflush->base.chain, diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 4d7da07442f2..fc0e1625847c 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -989,11 +989,9 @@ static int eb_validate_vmas(struct i915_execbuffer *eb) } }
- if (!(ev->flags & EXEC_OBJECT_WRITE)) { - err = dma_resv_reserve_shared(vma->resv, 1); - if (err) - return err; - } + err = dma_resv_reserve_shared(vma->resv, 1); + if (err) + return err;
GEM_BUG_ON(drm_mm_node_allocated(&vma->node) && eb_vma_misplaced(&eb->exec[i], vma, ev->flags)); diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c index 28a700f08b49..2bf491fd5cdf 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c @@ -179,7 +179,10 @@ static int igt_lmem_pages_migrate(void *arg) i915_gem_object_is_lmem(obj), 0xdeadbeaf, &rq); if (rq) { - dma_resv_add_excl_fence(obj->base.resv, &rq->fence); + err = dma_resv_reserve_shared(obj->base.resv, 1); + if (!err) + dma_resv_add_excl_fence(obj->base.resv, + &rq->fence); i915_request_put(rq); } if (err) diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index bef795e265a6..5ec87de63963 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -1255,6 +1255,12 @@ int _i915_vma_move_to_active(struct i915_vma *vma, intel_frontbuffer_put(front); }
+ if (!(flags & __EXEC_OBJECT_NO_RESERVE)) { + err = dma_resv_reserve_shared(vma->resv, 1); + if (unlikely(err)) + return err; + } + if (fence) { dma_resv_add_excl_fence(vma->resv, fence); obj->write_domain = I915_GEM_DOMAIN_RENDER; diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c index 418caae84759..b85af1672a7e 100644 --- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c +++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c @@ -894,6 +894,13 @@ static int igt_lmem_write_cpu(void *arg) }
i915_gem_object_lock(obj, NULL); + + err = dma_resv_reserve_shared(obj->base.resv, 1); + if (err) { + i915_gem_object_unlock(obj); + goto out_put; + } + /* Put the pages into a known state -- from the gpu for added fun */ intel_engine_pm_get(engine); err = intel_context_migrate_clear(engine->gt->migrate.context, NULL, diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c index 2723d333c608..487581e2f716 100644 --- a/drivers/gpu/drm/lima/lima_gem.c +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -255,13 +255,11 @@ int lima_gem_get_info(struct drm_file *file, u32 handle, u32 *va, u64 *offset) static int lima_gem_sync_bo(struct lima_sched_task *task, struct lima_bo *bo, bool write, bool explicit) { - int err = 0; + int err;
- if (!write) { - err = dma_resv_reserve_shared(lima_bo_resv(bo), 1); - if (err) - return err; - } + err = dma_resv_reserve_shared(lima_bo_resv(bo), 1); + if (err) + return err;
/* explicit sync use user passed dep fence */ if (explicit) diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 3cb029f10925..e874d09b74ef 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -320,16 +320,14 @@ static int submit_fence_sync(struct msm_gem_submit *submit, bool no_implicit) struct drm_gem_object *obj = &submit->bos[i].obj->base; bool write = submit->bos[i].flags & MSM_SUBMIT_BO_WRITE;
- if (!write) { - /* NOTE: _reserve_shared() must happen before - * _add_shared_fence(), which makes this a slightly - * strange place to call it. OTOH this is a - * convenient can-fail point to hook it in. - */ - ret = dma_resv_reserve_shared(obj->resv, 1); - if (ret) - return ret; - } + /* NOTE: _reserve_shared() must happen before + * _add_shared_fence(), which makes this a slightly + * strange place to call it. OTOH this is a + * convenient can-fail point to hook it in. + */ + ret = dma_resv_reserve_shared(obj->resv, 1); + if (ret) + return ret;
/* exclusive fences must be ordered */ if (no_implicit && !write) diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c index 26f9299df881..cd6715bd6d6b 100644 --- a/drivers/gpu/drm/nouveau/nouveau_fence.c +++ b/drivers/gpu/drm/nouveau/nouveau_fence.c @@ -349,12 +349,9 @@ nouveau_fence_sync(struct nouveau_bo *nvbo, struct nouveau_channel *chan, struct nouveau_fence *f; int ret;
- if (!exclusive) { - ret = dma_resv_reserve_shared(resv, 1); - - if (ret) - return ret; - } + ret = dma_resv_reserve_shared(resv, 1); + if (ret) + return ret;
dma_resv_for_each_fence(&cursor, resv, exclusive, fence) { struct nouveau_channel *prev = NULL; diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c index 908d79520853..89c3fe389476 100644 --- a/drivers/gpu/drm/panfrost/panfrost_job.c +++ b/drivers/gpu/drm/panfrost/panfrost_job.c @@ -247,6 +247,10 @@ static int panfrost_acquire_object_fences(struct drm_gem_object **bos, int i, ret;
for (i = 0; i < bo_count; i++) { + ret = dma_resv_reserve_shared(bos[i]->resv, 1); + if (ret) + return ret; + /* panfrost always uses write mode in its current uapi */ ret = drm_sched_job_add_implicit_dependencies(job, bos[i], true); diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index 72a94301bc95..ea9eabcc0a0c 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -221,9 +221,6 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
fbo->base = *bo;
- ttm_bo_get(bo); - fbo->bo = bo; - /** * Fix up members that we shouldn't copy directly: * TODO: Explicit member copy would probably be better here. @@ -246,6 +243,15 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo, ret = dma_resv_trylock(&fbo->base.base._resv); WARN_ON(!ret);
+ ret = dma_resv_reserve_shared(&fbo->base.base._resv, 1); + if (ret) { + kfree(fbo); + return ret; + } + + ttm_bo_get(bo); + fbo->bo = bo; + ttm_bo_move_to_lru_tail_unlocked(&fbo->base);
*new_obj = &fbo->base; diff --git a/drivers/gpu/drm/ttm/ttm_execbuf_util.c b/drivers/gpu/drm/ttm/ttm_execbuf_util.c index 071c48d672c6..5da922639d54 100644 --- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c +++ b/drivers/gpu/drm/ttm/ttm_execbuf_util.c @@ -90,6 +90,7 @@ int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket,
list_for_each_entry(entry, list, head) { struct ttm_buffer_object *bo = entry->bo; + unsigned int num_fences;
ret = ttm_bo_reserve(bo, intr, (ticket == NULL), ticket); if (ret == -EALREADY && dups) { @@ -100,12 +101,10 @@ int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket, continue; }
+ num_fences = min(entry->num_shared, 1u); if (!ret) { - if (!entry->num_shared) - continue; - ret = dma_resv_reserve_shared(bo->base.resv, - entry->num_shared); + num_fences); if (!ret) continue; } @@ -120,9 +119,9 @@ int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket, ret = ttm_bo_reserve_slowpath(bo, intr, ticket); }
- if (!ret && entry->num_shared) + if (!ret) ret = dma_resv_reserve_shared(bo->base.resv, - entry->num_shared); + num_fences);
if (unlikely(ret != 0)) { if (ticket) { diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c index c7ed2e1cbab6..1bea90e40ce1 100644 --- a/drivers/gpu/drm/v3d/v3d_gem.c +++ b/drivers/gpu/drm/v3d/v3d_gem.c @@ -259,16 +259,21 @@ v3d_lock_bo_reservations(struct v3d_job *job, return ret;
for (i = 0; i < job->bo_count; i++) { + ret = dma_resv_reserve_shared(job->bo[i]->resv, 1); + if (ret) + goto fail; + ret = drm_sched_job_add_implicit_dependencies(&job->base, job->bo[i], true); - if (ret) { - drm_gem_unlock_reservations(job->bo, job->bo_count, - acquire_ctx); - return ret; - } + if (ret) + goto fail; }
return 0; + +fail: + drm_gem_unlock_reservations(job->bo, job->bo_count, acquire_ctx); + return ret; }
/** diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c index bd6f75285fd9..a4cb296d4fcd 100644 --- a/drivers/gpu/drm/vgem/vgem_fence.c +++ b/drivers/gpu/drm/vgem/vgem_fence.c @@ -157,12 +157,14 @@ int vgem_fence_attach_ioctl(struct drm_device *dev, }
/* Expose the fence via the dma-buf */ - ret = 0; dma_resv_lock(resv, NULL); - if (arg->flags & VGEM_FENCE_WRITE) - dma_resv_add_excl_fence(resv, fence); - else if ((ret = dma_resv_reserve_shared(resv, 1)) == 0) - dma_resv_add_shared_fence(resv, fence); + ret = dma_resv_reserve_shared(resv, 1); + if (!ret) { + if (arg->flags & VGEM_FENCE_WRITE) + dma_resv_add_excl_fence(resv, fence); + else + dma_resv_add_shared_fence(resv, fence); + } dma_resv_unlock(resv);
/* Record the fence in our idr for later signaling */ diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c index 2de61b63ef91..aec105cdd64c 100644 --- a/drivers/gpu/drm/virtio/virtgpu_gem.c +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c @@ -214,6 +214,7 @@ void virtio_gpu_array_add_obj(struct virtio_gpu_object_array *objs,
int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs) { + unsigned int i; int ret;
if (objs->nents == 1) { @@ -222,6 +223,14 @@ int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs) ret = drm_gem_lock_reservations(objs->objs, objs->nents, &objs->ticket); } + if (ret) + return ret; + + for (i = 0; i < objs->nents; ++i) { + ret = dma_resv_reserve_shared(objs->objs[i]->resv, 1); + if (ret) + return ret; + } return ret; }
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c index fd007f1c1776..f81767f0a5cc 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c @@ -1053,16 +1053,22 @@ void vmw_bo_fence_single(struct ttm_buffer_object *bo, struct vmw_fence_obj *fence) { struct ttm_device *bdev = bo->bdev; - struct vmw_private *dev_priv = container_of(bdev, struct vmw_private, bdev); + int ret;
- if (fence == NULL) { + if (fence == NULL) vmw_execbuf_fence_commands(NULL, dev_priv, &fence, NULL); + else + dma_fence_get(&fence->base); + + ret = dma_resv_reserve_shared(bo->base.resv, 1); + if (!ret) dma_resv_add_excl_fence(bo->base.resv, &fence->base); - dma_fence_put(&fence->base); - } else - dma_resv_add_excl_fence(bo->base.resv, &fence->base); + else + /* Last resort fallback when we are OOM */ + dma_fence_wait(&fence->base, false); + dma_fence_put(&fence->base); }
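The vgem and vmwgfx hunks above show the pattern every fence publisher needs after this change: reserve a slot while the reservation object is locked, then add the fence, and fall back to a synchronous wait when the reservation fails. A minimal sketch of that pattern for the shared/read case, purely illustrative (the function name is made up, not part of the series):

#include <linux/dma-resv.h>
#include <linux/dma-fence.h>

/* Illustrative only: reserve a fence slot first, then add the fence. */
static void example_add_read_fence(struct dma_resv *resv,
                                   struct dma_fence *fence)
{
        int ret;

        dma_resv_assert_held(resv);

        ret = dma_resv_reserve_shared(resv, 1);
        if (!ret)
                dma_resv_add_shared_fence(resv, fence);
        else
                /* Last resort fallback when we are OOM. */
                dma_fence_wait(fence, false);
}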
Use dma_resv_get_singleton() here to potentially get more than one write fence combined into a single fence.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/drm_gem_atomic_helper.c | 18 +++++++----------- 1 file changed, 7 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c index c3189afe10cb..9338ddb7edff 100644 --- a/drivers/gpu/drm/drm_gem_atomic_helper.c +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c @@ -143,25 +143,21 @@ */ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state) { - struct dma_resv_iter cursor; struct drm_gem_object *obj; struct dma_fence *fence; + int ret;
if (!state->fb) return 0;
obj = drm_gem_fb_get_obj(state->fb, 0); - dma_resv_iter_begin(&cursor, obj->resv, false); - dma_resv_for_each_fence_unlocked(&cursor, fence) { - /* TODO: Currently there should be only one write fence, so this - * here works fine. But drm_atomic_set_fence_for_plane() should - * be changed to be able to handle more fences in general for - * multiple BOs per fb anyway. */ - dma_fence_get(fence); - break; - } - dma_resv_iter_end(&cursor); + ret = dma_resv_get_singleton(obj->resv, false, &fence); + if (ret) + return ret;
+ /* TODO: drm_atomic_set_fence_for_plane() should be changed to be able + * to handle more fences in general for multiple BOs per fb. + */ drm_atomic_set_fence_for_plane(state, fence); return 0; }
On Tue, Nov 23, 2021 at 03:21:04PM +0100, Christian König wrote:
Use dma_resv_get_singleton() here to potentially get more than one write fence combined into a single fence.
Yeah this is nice, atomic commit helpers not supporting multiple write fences was really my main worry in this entire endeavour. Otherwise looks all rather reasonable.
I'll try to find some review bandwidth, but it would be really great if you could volunteer others too (especially making sure ttm drivers set the KERNEL fences correctly in all cases). -Daniel
On 25.11.21 at 16:47, Daniel Vetter wrote:
On Tue, Nov 23, 2021 at 03:21:04PM +0100, Christian König wrote:
Use dma_resv_get_singleton() here to potentially get more than one write fence combined into a single fence.
Yeah this is nice, atomic commit helpers not supporting multiple write fences was really my main worry in this entire endeavour. Otherwise looks all rather reasonable.
I'll try to find some review bandwidth, but it would be really great if you could volunteer others too (especially making sure ttm drivers set the KERNEL fences correctly in all cases).
Maybe I should split that up: first switch over to adding the enum, and then switch to KERNEL/BOOKKEEP (previously OTHER) for some use cases.
It would be good if I could get an rb on the trivial driver cleanups first. I can send those out individually if that helps.
Thanks, Christian.
-Daniel
On Fri, Nov 26, 2021 at 11:30:19AM +0100, Christian König wrote:
On 25.11.21 at 16:47, Daniel Vetter wrote:
On Tue, Nov 23, 2021 at 03:21:04PM +0100, Christian König wrote:
Use dma_resv_get_singleton() here to potentially get more than one write fence combined into a single fence.
Yeah this is nice, atomic commit helpers not supporting multiple write fences was really my main worry in this entire endeavour. Otherwise looks all rather reasonable.
I'll try to find some review bandwidth, but it would be really great if you could volunteer others too (especially making sure ttm drivers set the KERNEL fences correctly in all cases).
Maybe I should split that up into switching over to adding the enum and then switching to kernel/bookkeep(previously other) for some use cases.
It would be good if I could get an rb on the trivial driver cleanups first. I can send those out individually if that helps.
Yeah, some of the conversion patches might make sense to be split a bit more, especially when there are functional changes hiding, but I thought you'd split those out? I didn't read them in detail though.
Either way, for stuff like this I think it's always best to split the mechanical stuff from the concept intro/docs and functional changes, except when it's really very obvious what's going on and it's just a small patch. -Daniel
Thanks, Christian.
-Daniel
Use dma_resv_get_singleton() here to potentially get more than one write fence combined into a single fence.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/nouveau/dispnv50/wndw.c | 14 +++++--------- 1 file changed, 5 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c index 133c8736426a..b55a8a723581 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c @@ -536,8 +536,6 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state) struct nouveau_bo *nvbo; struct nv50_head_atom *asyh; struct nv50_wndw_ctxdma *ctxdma; - struct dma_resv_iter cursor; - struct dma_fence *fence; int ret;
NV_ATOMIC(drm, "%s prepare: %p\n", plane->name, fb); @@ -560,13 +558,11 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state) asyw->image.handle[0] = ctxdma->object.handle; }
- dma_resv_iter_begin(&cursor, nvbo->bo.base.resv, false); - dma_resv_for_each_fence_unlocked(&cursor, fence) { - /* TODO: We only use the first writer here */ - asyw->state.fence = dma_fence_get(fence); - break; - } - dma_resv_iter_end(&cursor); + ret = dma_resv_get_singleton(nvbo->bo.base.resv, false, + &asyw->state.fence); + if (ret) + return ret; + asyw->image.offset[0] = nvbo->offset;
if (wndw->func->prepare) {
Makes the code a bit simpler.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 23 +++-------------------- 1 file changed, 3 insertions(+), 20 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c index be48487e2ca7..888d97143177 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c @@ -107,36 +107,19 @@ static void amdgpu_pasid_free_cb(struct dma_fence *fence, void amdgpu_pasid_free_delayed(struct dma_resv *resv, u32 pasid) { - struct dma_fence *fence, **fences; struct amdgpu_pasid_cb *cb; - unsigned count; + struct dma_fence *fence; int r;
- r = dma_resv_get_fences(resv, true, &count, &fences); + r = dma_resv_get_singleton(resv, true, &fence); if (r) goto fallback;
- if (count == 0) { + if (!fence) { amdgpu_pasid_free(pasid); return; }
- if (count == 1) { - fence = fences[0]; - kfree(fences); - } else { - uint64_t context = dma_fence_context_alloc(1); - struct dma_fence_array *array; - - array = dma_fence_array_create(count, fences, context, - 1, false); - if (!array) { - kfree(fences); - goto fallback; - } - fence = &array->base; - } - cb = kmalloc(sizeof(*cb), GFP_KERNEL); if (!cb) { /* Last resort when we are OOM */
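For readers not familiar with the helper, this is roughly the logic that dma_resv_get_singleton() takes over from the branches removed above; a sketch reconstructed from the removed code, not the actual implementation in dma-resv.c:

#include <linux/dma-resv.h>
#include <linux/dma-fence-array.h>
#include <linux/slab.h>

/* Sketch only: collect all fences of the given kind and merge them. */
static int example_get_singleton(struct dma_resv *resv, bool write,
                                 struct dma_fence **result)
{
        struct dma_fence **fences;
        struct dma_fence_array *array;
        unsigned int count, i;
        int r;

        r = dma_resv_get_fences(resv, write, &count, &fences);
        if (r)
                return r;

        if (count == 0) {
                *result = NULL;
                return 0;
        }

        if (count == 1) {
                *result = fences[0];
                kfree(fences);
                return 0;
        }

        /* More than one fence: wrap them into a dma_fence_array. */
        array = dma_fence_array_create(count, fences,
                                       dma_fence_context_alloc(1),
                                       1, false);
        if (!array) {
                for (i = 0; i < count; ++i)
                        dma_fence_put(fences[i]);
                kfree(fences);
                return -ENOMEM;
        }

        *result = &array->base;
        return 0;
}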
We have previously done that in the individual drivers but it is more defensive to move that into the common code.
Dynamic attachments should wait for map operations to complete by themselves.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/dma-buf/dma-buf.c | 18 +++++++++++++++--- drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 14 +------------- drivers/gpu/drm/nouveau/nouveau_prime.c | 17 +---------------- drivers/gpu/drm/radeon/radeon_prime.c | 16 +++------------- 4 files changed, 20 insertions(+), 45 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 528983d3ba64..d3dd602c4753 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -660,12 +660,24 @@ static struct sg_table * __map_dma_buf(struct dma_buf_attachment *attach, enum dma_data_direction direction) { struct sg_table *sg_table; + signed long ret;
sg_table = attach->dmabuf->ops->map_dma_buf(attach, direction); + if (IS_ERR_OR_NULL(sg_table)) + return sg_table; + + if (!dma_buf_attachment_is_dynamic(attach)) { + ret = dma_resv_wait_timeout(attach->dmabuf->resv, + DMA_RESV_USAGE_KERNEL, true, + MAX_SCHEDULE_TIMEOUT); + if (ret < 0) { + attach->dmabuf->ops->unmap_dma_buf(attach, sg_table, + direction); + return ERR_PTR(ret); + } + }
- if (!IS_ERR_OR_NULL(sg_table)) - mangle_sg_table(sg_table); - + mangle_sg_table(sg_table); return sg_table; }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c index ae6ab93c868b..57a7a603f987 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c @@ -105,21 +105,9 @@ static int amdgpu_dma_buf_pin(struct dma_buf_attachment *attach) { struct drm_gem_object *obj = attach->dmabuf->priv; struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); - int r;
/* pin buffer into GTT */ - r = amdgpu_bo_pin(bo, AMDGPU_GEM_DOMAIN_GTT); - if (r) - return r; - - if (bo->tbo.moving) { - r = dma_fence_wait(bo->tbo.moving, true); - if (r) { - amdgpu_bo_unpin(bo); - return r; - } - } - return 0; + return amdgpu_bo_pin(bo, AMDGPU_GEM_DOMAIN_GTT); }
/** diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c index 60019d0532fc..347488685f74 100644 --- a/drivers/gpu/drm/nouveau/nouveau_prime.c +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c @@ -93,22 +93,7 @@ int nouveau_gem_prime_pin(struct drm_gem_object *obj) if (ret) return -EINVAL;
- ret = ttm_bo_reserve(&nvbo->bo, false, false, NULL); - if (ret) - goto error; - - if (nvbo->bo.moving) - ret = dma_fence_wait(nvbo->bo.moving, true); - - ttm_bo_unreserve(&nvbo->bo); - if (ret) - goto error; - - return ret; - -error: - nouveau_bo_unpin(nvbo); - return ret; + return 0; }
void nouveau_gem_prime_unpin(struct drm_gem_object *obj) diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c index 4a90807351e7..42a87948e28c 100644 --- a/drivers/gpu/drm/radeon/radeon_prime.c +++ b/drivers/gpu/drm/radeon/radeon_prime.c @@ -77,19 +77,9 @@ int radeon_gem_prime_pin(struct drm_gem_object *obj)
/* pin buffer into GTT */ ret = radeon_bo_pin(bo, RADEON_GEM_DOMAIN_GTT, NULL); - if (unlikely(ret)) - goto error; - - if (bo->tbo.moving) { - ret = dma_fence_wait(bo->tbo.moving, false); - if (unlikely(ret)) { - radeon_bo_unpin(bo); - goto error; - } - } - - bo->prime_shared_count++; -error: + if (likely(ret == 0)) + bo->prime_shared_count++; + radeon_bo_unreserve(bo); return ret; }
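On the importer side this means a non-dynamic attachment gets the wait for free; a sketch of the resulting usage, with placeholder names and assuming a non-dynamic importer:

#include <linux/dma-buf.h>

/* Illustrative: after this patch the core waits for kernel fences
 * (moves/clears) in __map_dma_buf(), so a non-dynamic importer can use
 * the returned mapping without an explicit wait of its own. */
static struct sg_table *example_importer_map(struct dma_buf_attachment *attach,
                                             enum dma_data_direction dir)
{
        struct sg_table *sgt;

        sgt = dma_buf_map_attachment(attach, dir);
        if (IS_ERR(sgt))
                return sgt;

        /* No dma_resv_wait_timeout() needed here any more. */
        return sgt;
}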
Not needed any more now that this is handled inside the framework.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h | 1 - drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 52 +++------------------ 2 files changed, 6 insertions(+), 47 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h index 044b41f0bfd9..529d52a204cf 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h @@ -34,7 +34,6 @@ struct amdgpu_fpriv; struct amdgpu_bo_list_entry { struct ttm_validate_buffer tv; struct amdgpu_bo_va *bo_va; - struct dma_fence_chain *chain; uint32_t priority; struct page **user_pages; bool user_invalidated; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index 92091e800022..413606d10080 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -576,14 +576,6 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
e->bo_va = amdgpu_vm_bo_find(vm, bo); - - if (bo->tbo.base.dma_buf && !amdgpu_bo_explicit_sync(bo)) { - e->chain = dma_fence_chain_alloc(); - if (!e->chain) { - r = -ENOMEM; - goto error_validate; - } - } }
amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold, @@ -634,13 +626,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, }
error_validate: - if (r) { - amdgpu_bo_list_for_each_entry(e, p->bo_list) { - dma_fence_chain_free(e->chain); - e->chain = NULL; - } + if (r) ttm_eu_backoff_reservation(&p->ticket, &p->validated); - } out: return r; } @@ -680,17 +667,9 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error, { unsigned i;
- if (error && backoff) { - struct amdgpu_bo_list_entry *e; - - amdgpu_bo_list_for_each_entry(e, parser->bo_list) { - dma_fence_chain_free(e->chain); - e->chain = NULL; - } - + if (error && backoff) ttm_eu_backoff_reservation(&parser->ticket, &parser->validated); - }
for (i = 0; i < parser->num_post_deps; i++) { drm_syncobj_put(parser->post_deps[i].syncobj); @@ -1265,29 +1244,10 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
- amdgpu_bo_list_for_each_entry(e, p->bo_list) { - struct dma_resv *resv = e->tv.bo->base.resv; - struct dma_fence_chain *chain = e->chain; - struct dma_resv_iter cursor; - struct dma_fence *fence; - - if (!chain) - continue; - - /* - * Work around dma_resv shortcommings by wrapping up the - * submission in a dma_fence_chain and add it as exclusive - * fence. - */ - dma_resv_for_each_fence(&cursor, resv, - DMA_RESV_USAGE_WRITE, - fence) { - break; - } - dma_fence_chain_init(chain, fence, dma_fence_get(p->fence), 1); - dma_resv_add_fence(resv, &chain->base, DMA_RESV_USAGE_WRITE); - e->chain = NULL; - } + /* For now manually add the resulting fence as writer as well */ + amdgpu_bo_list_for_each_entry(e, p->bo_list) + dma_resv_add_fence(e->tv.bo->base.resv, p->fence, + DMA_RESV_USAGE_WRITE);
ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence); mutex_unlock(&p->adev->notifier_lock);
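With the submission fence published as a WRITE fence like this, a driver honouring implicit sync just walks the write fences of the reservation object; a small sketch using the iterator and usage flags from this series (the function itself is illustrative, not part of the patch):

#include <linux/dma-resv.h>
#include "amdgpu_sync.h"

/* Illustrative: add all current writers of "resv" as job dependencies. */
static int example_sync_to_writers(struct amdgpu_sync *sync,
                                   struct dma_resv *resv)
{
        struct dma_resv_iter cursor;
        struct dma_fence *fence;
        int r;

        dma_resv_for_each_fence(&cursor, resv, DMA_RESV_USAGE_WRITE, fence) {
                r = amdgpu_sync_fence(sync, fence);
                if (r)
                        return r;
        }

        return 0;
}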
This is now handled by the DMA-buf framework in the dma_resv obj.
Signed-off-by: Christian König christian.koenig@amd.com --- .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 13 ++++--- drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 7 ++-- drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c | 11 +++--- drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c | 11 ++++-- drivers/gpu/drm/ttm/ttm_bo.c | 10 ++---- drivers/gpu/drm/ttm/ttm_bo_util.c | 7 ---- drivers/gpu/drm/ttm/ttm_bo_vm.c | 34 +++++++------------ drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 6 ---- include/drm/ttm/ttm_bo_api.h | 2 -- 9 files changed, 40 insertions(+), 61 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c index 0201a44ff630..a32035297995 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c @@ -2330,6 +2330,8 @@ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence **ef) struct amdgpu_bo *bo = mem->bo; uint32_t domain = mem->domain; struct kfd_mem_attachment *attachment; + struct dma_resv_iter cursor; + struct dma_fence *fence;
total_size += amdgpu_bo_size(bo);
@@ -2344,10 +2346,13 @@ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence **ef) goto validate_map_fail; } } - ret = amdgpu_sync_fence(&sync_obj, bo->tbo.moving); - if (ret) { - pr_debug("Memory eviction: Sync BO fence failed. Try again\n"); - goto validate_map_fail; + dma_resv_for_each_fence(&cursor, bo->tbo.base.resv, + DMA_RESV_USAGE_KERNEL, fence) { + ret = amdgpu_sync_fence(&sync_obj, fence); + if (ret) { + pr_debug("Memory eviction: Sync BO fence failed. Try again\n"); + goto validate_map_fail; + } } list_for_each_entry(attachment, &mem->attachments, list) { if (!attachment->is_mapped) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c index a40ede9bccd0..3881a503a7bf 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c @@ -608,9 +608,8 @@ int amdgpu_bo_create(struct amdgpu_device *adev, if (unlikely(r)) goto fail_unreserve;
- amdgpu_bo_fence(bo, fence, false); - dma_fence_put(bo->tbo.moving); - bo->tbo.moving = dma_fence_get(fence); + dma_resv_add_fence(bo->tbo.base.resv, fence, + DMA_RESV_USAGE_KERNEL); dma_fence_put(fence); } if (!bp->resv) @@ -1290,7 +1289,7 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
r = amdgpu_fill_buffer(abo, AMDGPU_POISON, bo->base.resv, &fence); if (!WARN_ON(r)) { - amdgpu_bo_fence(abo, fence, false); + dma_resv_add_fence(bo->base.resv, fence, DMA_RESV_USAGE_KERNEL); dma_fence_put(fence); }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c index e3fbf0f10add..31913ae86de6 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c @@ -74,13 +74,12 @@ static int amdgpu_vm_cpu_update(struct amdgpu_vm_update_params *p, { unsigned int i; uint64_t value; - int r; + long r;
- if (vmbo->bo.tbo.moving) { - r = dma_fence_wait(vmbo->bo.tbo.moving, true); - if (r) - return r; - } + r = dma_resv_wait_timeout(vmbo->bo.tbo.base.resv, DMA_RESV_USAGE_KERNEL, + true, MAX_SCHEDULE_TIMEOUT); + if (r < 0) + return r;
pe += (unsigned long)amdgpu_bo_kptr(&vmbo->bo);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c index dbb551762805..bdb44cee19d3 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c @@ -204,14 +204,19 @@ static int amdgpu_vm_sdma_update(struct amdgpu_vm_update_params *p, struct amdgpu_bo *bo = &vmbo->bo; enum amdgpu_ib_pool_type pool = p->immediate ? AMDGPU_IB_POOL_IMMEDIATE : AMDGPU_IB_POOL_DELAYED; + struct dma_resv_iter cursor; unsigned int i, ndw, nptes; + struct dma_fence *fence; uint64_t *pte; int r;
/* Wait for PD/PT moves to be completed */ - r = amdgpu_sync_fence(&p->job->sync, bo->tbo.moving); - if (r) - return r; + dma_resv_for_each_fence(&cursor, bo->tbo.base.resv, + DMA_RESV_USAGE_KERNEL, fence) { + r = amdgpu_sync_fence(&p->job->sync, fence); + if (r) + return r; + }
do { ndw = p->num_dw_left; diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 95e6633775f2..5db5c9ba166c 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -468,7 +468,6 @@ static void ttm_bo_release(struct kref *kref) dma_resv_unlock(bo->base.resv);
atomic_dec(&ttm_glob.bo_count); - dma_fence_put(bo->moving); bo->destroy(bo); }
@@ -737,9 +736,8 @@ int ttm_mem_evict_first(struct ttm_device *bdev, }
/* - * Add the last move fence to the BO and reserve a new shared slot. We only use - * a shared slot to avoid unecessary sync and rely on the subsequent bo move to - * either stall or use an exclusive fence respectively set bo->moving. + * Add the last move fence to the BO as kernel dependency and reserve a new + * fence slot. */ static int ttm_bo_add_move_fence(struct ttm_buffer_object *bo, struct ttm_resource_manager *man, @@ -769,9 +767,6 @@ static int ttm_bo_add_move_fence(struct ttm_buffer_object *bo, dma_fence_put(fence); return ret; } - - dma_fence_put(bo->moving); - bo->moving = fence; return 0; }
@@ -978,7 +973,6 @@ int ttm_bo_init_reserved(struct ttm_device *bdev, bo->bdev = bdev; bo->type = type; bo->page_alignment = page_alignment; - bo->moving = NULL; bo->pin_count = 0; bo->sg = sg; if (resv) { diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index b9cfb62c4b6e..95de2691ee7c 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -229,7 +229,6 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo, atomic_inc(&ttm_glob.bo_count); INIT_LIST_HEAD(&fbo->base.ddestroy); INIT_LIST_HEAD(&fbo->base.lru); - fbo->base.moving = NULL; drm_vma_node_reset(&fbo->base.base.vma_node);
kref_init(&fbo->base.kref); @@ -496,9 +495,6 @@ static int ttm_bo_move_to_ghost(struct ttm_buffer_object *bo, * operation has completed. */
- dma_fence_put(bo->moving); - bo->moving = dma_fence_get(fence); - ret = ttm_buffer_object_transfer(bo, &ghost_obj); if (ret) return ret; @@ -543,9 +539,6 @@ static void ttm_bo_move_pipeline_evict(struct ttm_buffer_object *bo, spin_unlock(&from->move_lock);
ttm_resource_free(bo, &bo->resource); - - dma_fence_put(bo->moving); - bo->moving = dma_fence_get(fence); }
int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo, diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c index 08ba083a80d2..5b324f245265 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c @@ -46,17 +46,13 @@ static vm_fault_t ttm_bo_vm_fault_idle(struct ttm_buffer_object *bo, struct vm_fault *vmf) { - vm_fault_t ret = 0; - int err = 0; - - if (likely(!bo->moving)) - goto out_unlock; + long err = 0;
/* * Quick non-stalling check for idle. */ - if (dma_fence_is_signaled(bo->moving)) - goto out_clear; + if (dma_resv_test_signaled(bo->base.resv, DMA_RESV_USAGE_KERNEL)) + return 0;
/* * If possible, avoid waiting for GPU with mmap_lock @@ -64,34 +60,30 @@ static vm_fault_t ttm_bo_vm_fault_idle(struct ttm_buffer_object *bo, * is the first attempt. */ if (fault_flag_allow_retry_first(vmf->flags)) { - ret = VM_FAULT_RETRY; if (vmf->flags & FAULT_FLAG_RETRY_NOWAIT) - goto out_unlock; + return VM_FAULT_RETRY;
ttm_bo_get(bo); mmap_read_unlock(vmf->vma->vm_mm); - (void) dma_fence_wait(bo->moving, true); + (void)dma_resv_wait_timeout(bo->base.resv, + DMA_RESV_USAGE_KERNEL, true, + MAX_SCHEDULE_TIMEOUT); dma_resv_unlock(bo->base.resv); ttm_bo_put(bo); - goto out_unlock; + return VM_FAULT_RETRY; }
/* * Ordinary wait. */ - err = dma_fence_wait(bo->moving, true); - if (unlikely(err != 0)) { - ret = (err != -ERESTARTSYS) ? VM_FAULT_SIGBUS : + err = dma_resv_wait_timeout(bo->base.resv, DMA_RESV_USAGE_KERNEL, true, + MAX_SCHEDULE_TIMEOUT); + if (unlikely(err < 0)) { + return (err != -ERESTARTSYS) ? VM_FAULT_SIGBUS : VM_FAULT_NOPAGE; - goto out_unlock; }
-out_clear: - dma_fence_put(bo->moving); - bo->moving = NULL; - -out_unlock: - return ret; + return 0; }
static unsigned long ttm_bo_io_mem_pfn(struct ttm_buffer_object *bo, diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c index 9e3dcbb573e7..40cc2c13e963 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c @@ -1166,12 +1166,6 @@ int vmw_resources_clean(struct vmw_buffer_object *vbo, pgoff_t start, *num_prefault = __KERNEL_DIV_ROUND_UP(last_cleaned - res_start, PAGE_SIZE); vmw_bo_fence_single(bo, NULL); - if (bo->moving) - dma_fence_put(bo->moving); - - return dma_resv_get_singleton(bo->base.resv, - DMA_RESV_USAGE_KERNEL, - &bo->moving); }
return 0; diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h index cd785cfa3123..9798eb097c13 100644 --- a/include/drm/ttm/ttm_bo_api.h +++ b/include/drm/ttm/ttm_bo_api.h @@ -98,7 +98,6 @@ struct ttm_tt; * @lru: List head for the lru list. * @ddestroy: List head for the delayed destroy list. * @swap: List head for swap LRU list. - * @moving: Fence set when BO is moving * @offset: The current GPU offset, which can have different meanings * depending on the memory type. For SYSTEM type memory, it should be 0. * @cur_placement: Hint of current placement. @@ -151,7 +150,6 @@ struct ttm_buffer_object { * Members protected by a bo reservation. */
- struct dma_fence *moving; unsigned priority; unsigned pin_count;
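The common replacement for the old bo->moving checks is then simply a wait for the KERNEL usage on the reservation object; a condensed sketch of the pattern the hunks above switch to (illustrative only):

#include <linux/dma-resv.h>
#include <linux/sched.h>

/* Illustrative: wait for all pending kernel operations (moves, clears)
 * on the buffer instead of the removed bo->moving fence. */
static int example_wait_for_kernel_ops(struct dma_resv *resv,
                                       bool interruptible)
{
        long ret;

        ret = dma_resv_wait_timeout(resv, DMA_RESV_USAGE_KERNEL,
                                    interruptible, MAX_SCHEDULE_TIMEOUT);
        return ret < 0 ? ret : 0;
}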
On Tue, 23 Nov 2021 15:20:45 +0100 "Christian König" ckoenig.leichtzumerken@gmail.com wrote:
Hi guys,
as discussed before this set of patches completely rework the dma_resv semantic and spreads the new handling over all the existing drivers and users.
First of all this drops the DAG approach because it requires that every single driver implements those relatively complicated rules correctly and any violation of that immediately leads to either corruption of freed memory or even more severe security problems.
Instead we just keep all fences around all the time until they are signaled. Only fences with the same context are assumed to be signaled in the correct order since this is exercised elsewhere as well. Replacing fences is now only supported for hardware mechanism like VM page table updates where the hardware can guarantee that the resource can't be accessed any more.
Then the concept of a single exclusive fence and multiple shared fences is dropped as well.
Instead the dma_resv object is now just a container for dma_fence objects where each fence has associated usage flags. Those use flags describe how the operation represented by the dma_fence object is using the resource protected by the dma_resv object. This allows us to add multiple fences for each usage type.
Additionally to the existing WRITE/READ usages this patch set also adds the new KERNEL and OTHER usages. The KERNEL usages is used in cases where the kernel needs to do some operation with the resource protected by the dma_resv object, like copies or clears. Those are mandatory to wait for when dynamic memory management is used.
The OTHER usage is for cases where we don't want that the operation represented by the dma_fence object participate in any implicit sync but needs to be respected by the kernel memory management. Examples for those are VM page table updates and preemption fences.
Hi,
reading just the cover letter, I wonder if KERNEL and OTHER could have better names based on how you describe their use. WRITE and READ immediately give an idea of the semantics, KERNEL and OTHER not so much.
Some suggestions coming to my mind:
KERNEL -> PREPARE or INITIALIZE or SANITIZE
OTHER -> BOOKKEEP
Thanks, pq
While doing this the new implementation cleans up existing workarounds all over the place, but especially amdgpu and TTM. Surprisingly I also found two use cases for the KERNEL/OTHER usage in i915 and Nouveau, those might need more thoughts.
In general the existing functionality should been preserved, the only downside is that we now always need to reserve a slot before adding a fence. The newly added call to the reservation function can probably use some more cleanup.
TODOs: Testing, testing, testing, doublechecking the newly added kerneldoc for any typos.
Please review and/or comment, Christian.
On 24.11.21 at 09:31, Pekka Paalanen wrote:
On Tue, 23 Nov 2021 15:20:45 +0100 "Christian König" ckoenig.leichtzumerken@gmail.com wrote:
Hi guys,
as discussed before this set of patches completely rework the dma_resv semantic and spreads the new handling over all the existing drivers and users.
First of all this drops the DAG approach because it requires that every single driver implements those relatively complicated rules correctly and any violation of that immediately leads to either corruption of freed memory or even more severe security problems.
Instead we just keep all fences around all the time until they are signaled. Only fences with the same context are assumed to be signaled in the correct order since this is exercised elsewhere as well. Replacing fences is now only supported for hardware mechanism like VM page table updates where the hardware can guarantee that the resource can't be accessed any more.
Then the concept of a single exclusive fence and multiple shared fences is dropped as well.
Instead the dma_resv object is now just a container for dma_fence objects where each fence has associated usage flags. Those use flags describe how the operation represented by the dma_fence object is using the resource protected by the dma_resv object. This allows us to add multiple fences for each usage type.
Additionally to the existing WRITE/READ usages this patch set also adds the new KERNEL and OTHER usages. The KERNEL usages is used in cases where the kernel needs to do some operation with the resource protected by the dma_resv object, like copies or clears. Those are mandatory to wait for when dynamic memory management is used.
The OTHER usage is for cases where we don't want that the operation represented by the dma_fence object participate in any implicit sync but needs to be respected by the kernel memory management. Examples for those are VM page table updates and preemption fences.
Hi,
reading just the cover letter, I wonder if KERNEL and OTHER could have better names based on how you describe their use. WRITE and READ immediately give an idea of the semantics, KERNEL and OTHER not so much.
Some suggestions coming to my mind:
KERNEL -> PREPARE or INITIALIZE or SANITIZE
OTHER -> BOOKKEEP
Yes, I was entertaining similar thoughts for quite a while as well.
I think KERNEL fits better than the suggestions because that is really something only the kernel should use and we should not encourage anybody to use that for userspace submissions.
But using BOOKKEEP instead of OTHER sounds like a really good idea to me. I'm going to keep that in mind and, if nobody has a better idea, make the change for the next revision.
Thanks, Christian.
Thanks, pq
While doing this the new implementation cleans up existing workarounds all over the place, but especially amdgpu and TTM. Surprisingly I also found two use cases for the KERNEL/OTHER usage in i915 and Nouveau, those might need more thoughts.
In general the existing functionality should been preserved, the only downside is that we now always need to reserve a slot before adding a fence. The newly added call to the reservation function can probably use some more cleanup.
TODOs: Testing, testing, testing, doublechecking the newly added kerneldoc for any typos.
Please review and/or comment, Christian.
On Tue, Nov 23, 2021 at 03:20:45PM +0100, Christian König wrote:
Hi guys,
as discussed before this set of patches completely rework the dma_resv semantic and spreads the new handling over all the existing drivers and users.
First of all this drops the DAG approach because it requires that every single driver implements those relatively complicated rules correctly and any violation of that immediately leads to either corruption of freed memory or even more severe security problems.
Instead we just keep all fences around all the time until they are signaled. Only fences with the same context are assumed to be signaled in the correct order since this is exercised elsewhere as well. Replacing fences is now only supported for hardware mechanism like VM page table updates where the hardware can guarantee that the resource can't be accessed any more.
Then the concept of a single exclusive fence and multiple shared fences is dropped as well.
Instead the dma_resv object is now just a container for dma_fence objects where each fence has associated usage flags. Those use flags describe how the operation represented by the dma_fence object is using the resource protected by the dma_resv object. This allows us to add multiple fences for each usage type.
Additionally to the existing WRITE/READ usages this patch set also adds the new KERNEL and OTHER usages. The KERNEL usages is used in cases where the kernel needs to do some operation with the resource protected by the dma_resv object, like copies or clears. Those are mandatory to wait for when dynamic memory management is used.
The OTHER usage is for cases where we don't want that the operation represented by the dma_fence object participate in any implicit sync but needs to be respected by the kernel memory management. Examples for those are VM page table updates and preemption fences.
While doing this the new implementation cleans up existing workarounds all over the place, but especially amdgpu and TTM. Surprisingly I also found two use cases for the KERNEL/OTHER usage in i915 and Nouveau, those might need more thoughts.
In general the existing functionality should been preserved, the only downside is that we now always need to reserve a slot before adding a fence. The newly added call to the reservation function can probably use some more cleanup.
TODOs: Testing, testing, testing, doublechecking the newly added kerneldoc for any typos.
Please review and/or comment,
I like.
Unfortunately also massively buried, but I really like it. I think the past few months (years?) of discussions and bikeshedding have been worth it, this looks tidy and clear in semantics and in how drivers use it all.
Ofc this will take some time to review/test in detail and land, but I think next steps would be to resurrect Jason's explicit dma-buf fence import/export series (should also clean up nicely I think), and then roll out the new fence semantics to a few vk/compute stacks? I think especially for vk what we want is that normal CS only ever uses OTHER, and any implicit sync that needs to happen for winsys buffers is done through the import/export ioctls. GL might need something slightly different, but normally there's not many shared buffers, so doing a pile of ioctl calls for implicit synced buffers seems fine. But perhaps GL does want a new CS ioctl flag.
Cheers, Daniel