Quoting Christian König (2019-08-06 16:01:34)
The only remaining use for this is to protect against setting a new exclusive fence while we grab both exclusive and shared. That can also be achieved by checking whether the exclusive fence has changed after completing the operation.
Signed-off-by: Christian König <christian.koenig@amd.com>
 drivers/dma-buf/reservation.c | 20 +++-----------------
 include/linux/reservation.h  |  9 ++-------
 2 files changed, 5 insertions(+), 24 deletions(-)
diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
index 839d72af7ad8..43549a4d6658 100644
--- a/drivers/dma-buf/reservation.c
+++ b/drivers/dma-buf/reservation.c
@@ -49,12 +49,6 @@
 DEFINE_WD_CLASS(reservation_ww_class);
 EXPORT_SYMBOL(reservation_ww_class);
 
-struct lock_class_key reservation_seqcount_class;
-EXPORT_SYMBOL(reservation_seqcount_class);
-
-const char reservation_seqcount_string[] = "reservation_seqcount";
-EXPORT_SYMBOL(reservation_seqcount_string);
-
 /**
  * reservation_object_list_alloc - allocate fence list
  * @shared_max: number of fences we need space for
@@ -103,9 +97,6 @@ static void reservation_object_list_free(struct reservation_object_list *list)
 void reservation_object_init(struct reservation_object *obj)
 {
        ww_mutex_init(&obj->lock, &reservation_ww_class);
-
-       __seqcount_init(&obj->seq, reservation_seqcount_string,
-                       &reservation_seqcount_class);
        RCU_INIT_POINTER(obj->fence, NULL);
        RCU_INIT_POINTER(obj->fence_excl, NULL);
 }
@@ -282,12 +273,10 @@ void reservation_object_add_excl_fence(struct reservation_object *obj,
        dma_fence_get(fence);
 
-       preempt_disable();
-       write_seqcount_begin(&obj->seq);
-       /* write_seqcount_begin provides the necessary memory barrier */
        RCU_INIT_POINTER(obj->fence_excl, fence);
I think this now has to be rcu_assign_pointer().
 * Initialize an RCU-protected pointer in special cases where readers
 * do not need ordering constraints on the CPU or the compiler.  These
 * special cases are:
 *
 * 1.   This use of RCU_INIT_POINTER() is NULLing out the pointer *or*
 * 2.   The caller has taken whatever steps are required to prevent
 *      RCU readers from concurrently accessing this pointer *or*
 * 3.   The referenced data structure has already been exposed to
 *      readers either at compile time or via rcu_assign_pointer() *and*
 *
 *      a.      You have not made *any* reader-visible changes to
 *              this structure since then *or*
 *      b.      It is OK for readers accessing this structure from its
 *              new location to see the old state of the structure.  (For
 *              example, the changes were to statistical counters or to
 *              other state where exact synchronization is not required.)
We used to get case 2 from the seqcount, and case 3 does not apply here.
+       /* pointer update must be visible before we modify the shared_count */
        if (old)
-               old->shared_count = 0;
-       write_seqcount_end(&obj->seq);
+               smp_store_mb(old->shared_count, 0);
Hmm. Ok, I think.
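
Just to spell out the ordering I think we end up with on the write side once the rcu_assign_pointer() change above is folded in (only a sketch, not compile-tested):

        dma_fence_get(fence);

        /*
         * Publish the new exclusive fence with release semantics so that a
         * reader who sees the new pointer also sees its initialised contents.
         */
        rcu_assign_pointer(obj->fence_excl, fence);

        /*
         * The pointer update must be visible before we clear the count;
         * smp_store_mb() is the store plus a full barrier and pairs with the
         * smp_rmb() in the read loop.
         */
        if (old)
                smp_store_mb(old->shared_count, 0);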
 {
-       unsigned int seq;
-
        do {
-               seq = read_seqcount_begin(&obj->seq);
                *excl = rcu_dereference(obj->fence_excl);
                *list = rcu_dereference(obj->fence);
-       } while (read_seqcount_retry(&obj->seq, seq));
+               smp_rmb(); /* See reservation_object_add_excl_fence */
+       } while (rcu_access_pointer(obj->fence_excl) != *excl);
 }
So, if we race with add_excl_fence cancelling a shared list, we guarantee before returning that we see the shared count as zero whenever the exclusive fence we read is the new one.
If we add to the shared list during the read, ... Hmm, actually we should return num_list, i.e.

        do {
                *list = rcu_dereference(obj->fence);
                num_list = *list ? (*list)->shared_count : 0;
                smp_rmb();
        } while (...);

        return num_list;

since the relationship between the count and the fence entries is likewise only guaranteed by the mb in add_shared_fence.
Oops, that was an oversight in the previous review.
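
Putting the pieces together, the read side I have in mind is something like this (again only a sketch; I'm assuming the reservation_object_fences() helper is where this loop lives, and that callers hold rcu_read_lock()):

static inline unsigned int
reservation_object_fences(struct reservation_object *obj,
                          struct dma_fence **excl,
                          struct reservation_object_list **list)
{
        unsigned int num_list;

        do {
                *excl = rcu_dereference(obj->fence_excl);
                *list = rcu_dereference(obj->fence);
                num_list = *list ? (*list)->shared_count : 0;

                /*
                 * Pairs with the barriers in add_excl_fence and
                 * add_shared_fence; num_list is only valid for the list
                 * and exclusive fence sampled above.
                 */
                smp_rmb();
        } while (rcu_access_pointer(obj->fence_excl) != *excl);

        return num_list;
}

The caller then only inspects the first num_list entries of *list.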
-       preempt_enable();
 
        /* inplace update, no shared fences */
@@ -370,11 +359,8 @@ int reservation_object_copy_fences(struct reservation_object *dst,
        old = reservation_object_get_excl(dst);
 
-       preempt_disable();
-       write_seqcount_begin(&dst->seq);
-       /* write_seqcount_begin provides the necessary memory barrier */
        RCU_INIT_POINTER(dst->fence_excl, new);
rcu_assign_pointer() again, I believe.
-Chris