With the seqlock now extended to cover the lookup of the fence and its testing, we can perform that testing solely under the seqlock guard and avoid the effective locking and serialisation of acquiring a reference to the request. As the fence is RCU protected, we know it cannot disappear as we test it; this is the same guarantee that previously made it safe to acquire the reference. The seqlock tells us whether the fence was replaced while we were testing it, i.e. whether or not we can trust the result (if not, we just repeat the test until it is stable).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
---
 drivers/dma-buf/reservation.c | 32 ++++----------------------------
 1 file changed, 4 insertions(+), 28 deletions(-)
diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
index e74493e7332b..1ddffa5adb5a 100644
--- a/drivers/dma-buf/reservation.c
+++ b/drivers/dma-buf/reservation.c
@@ -442,24 +442,6 @@ unlock_retry:
 }
 EXPORT_SYMBOL_GPL(reservation_object_wait_timeout_rcu);
-
-static inline int
-reservation_object_test_signaled_single(struct fence *passed_fence)
-{
-	struct fence *fence, *lfence = passed_fence;
-	int ret = 1;
-
-	if (!test_bit(FENCE_FLAG_SIGNALED_BIT, &lfence->flags)) {
-		fence = fence_get_rcu(lfence);
-		if (!fence)
-			return -1;
-
-		ret = !!fence_is_signaled(fence);
-		fence_put(fence);
-	}
-	return ret;
-}
-
 /**
  * reservation_object_test_signaled_rcu - Test if a reservation object's
  * fences have been signaled.
@@ -474,7 +456,7 @@ bool reservation_object_test_signaled_rcu(struct reservation_object *obj,
 					  bool test_all)
 {
 	unsigned seq, shared_count;
-	int ret;
+	bool ret;
 
 	rcu_read_lock();
 retry:
@@ -494,10 +476,8 @@ retry:
 		for (i = 0; i < shared_count; ++i) {
 			struct fence *fence = rcu_dereference(fobj->shared[i]);
 
-			ret = reservation_object_test_signaled_single(fence);
-			if (ret < 0)
-				goto retry;
-			else if (!ret)
+			ret = fence_is_signaled(fence);
+			if (!ret)
 				break;
 		}
 
@@ -509,11 +489,7 @@ retry:
 		struct fence *fence_excl = rcu_dereference(obj->fence_excl);
 
 		if (fence_excl) {
-			ret = reservation_object_test_signaled_single(
-								fence_excl);
-			if (ret < 0)
-				goto retry;
-
+			ret = fence_is_signaled(fence_excl);
 			if (read_seqcount_retry(&obj->seq, seq))
 				goto retry;
 		}
On Mon, Aug 29, 2016 at 08:08:33AM +0100, Chris Wilson wrote:
With the seqlock now extended to cover the lookup of the fence and its testing, we can perform that testing solely under the seqlock guard and avoid the effective locking and serialisation of acquiring a reference to the request. As the fence is RCU protected, we know it cannot disappear as we test it; this is the same guarantee that previously made it safe to acquire the reference. The seqlock tells us whether the fence was replaced while we were testing it, i.e. whether or not we can trust the result (if not, we just repeat the test until it is stable).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
Not entirely sure this is safe for non-i915 drivers. We might now call ->signalled on a zombie fence (i.e. refcount == 0, but not yet kfreed). i915 can do that, but other drivers might go boom.
I think in generic code we can't do these kind of tricks unfortunately. -Daniel
 drivers/dma-buf/reservation.c | 32 ++++----------------------------
 1 file changed, 4 insertions(+), 28 deletions(-)

diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
index e74493e7332b..1ddffa5adb5a 100644
--- a/drivers/dma-buf/reservation.c
+++ b/drivers/dma-buf/reservation.c
@@ -442,24 +442,6 @@ unlock_retry:
 }
 EXPORT_SYMBOL_GPL(reservation_object_wait_timeout_rcu);
-
-static inline int
-reservation_object_test_signaled_single(struct fence *passed_fence)
-{
-	struct fence *fence, *lfence = passed_fence;
-	int ret = 1;
-
-	if (!test_bit(FENCE_FLAG_SIGNALED_BIT, &lfence->flags)) {
-		fence = fence_get_rcu(lfence);
-		if (!fence)
-			return -1;
-
-		ret = !!fence_is_signaled(fence);
-		fence_put(fence);
-	}
-	return ret;
-}
-
 /**
  * reservation_object_test_signaled_rcu - Test if a reservation object's
  * fences have been signaled.
@@ -474,7 +456,7 @@ bool reservation_object_test_signaled_rcu(struct reservation_object *obj,
 					  bool test_all)
 {
 	unsigned seq, shared_count;
-	int ret;
+	bool ret;
 
 	rcu_read_lock();
 retry:
@@ -494,10 +476,8 @@ retry:
 		for (i = 0; i < shared_count; ++i) {
 			struct fence *fence = rcu_dereference(fobj->shared[i]);
 
-			ret = reservation_object_test_signaled_single(fence);
-			if (ret < 0)
-				goto retry;
-			else if (!ret)
+			ret = fence_is_signaled(fence);
+			if (!ret)
 				break;
 		}
 
@@ -509,11 +489,7 @@ retry:
 		struct fence *fence_excl = rcu_dereference(obj->fence_excl);
 
 		if (fence_excl) {
-			ret = reservation_object_test_signaled_single(
-								fence_excl);
-			if (ret < 0)
-				goto retry;
-
+			ret = fence_is_signaled(fence_excl);
 			if (read_seqcount_retry(&obj->seq, seq))
 				goto retry;
 		}
-- 2.9.3
Linaro-mm-sig mailing list Linaro-mm-sig@lists.linaro.org https://lists.linaro.org/mailman/listinfo/linaro-mm-sig
On Fri, Sep 23, 2016 at 03:49:26PM +0200, Daniel Vetter wrote:
On Mon, Aug 29, 2016 at 08:08:33AM +0100, Chris Wilson wrote:
With the seqlock now extended to cover the lookup of the fence and its testing, we can perform that testing solely under the seqlock guard and avoid the effective locking and serialisation of acquiring a reference to the request. As the fence is RCU protected, we know it cannot disappear as we test it; this is the same guarantee that previously made it safe to acquire the reference. The seqlock tells us whether the fence was replaced while we were testing it, i.e. whether or not we can trust the result (if not, we just repeat the test until it is stable).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
Not entirely sure this is safe for non-i915 drivers. We might now call ->signalled on a zombie fence (i.e. refcount == 0, but not yet kfreed). i915 can do that, but other drivers might go boom.
All fences must be under RCU guard, or is that the sticking point? Given that, the problem is fence reallocation within the RCU grace period. If we can mandate that fence reallocation must be safe for concurrent fence->ops->*(), we can use this technique to avoid the serialisation barrier otherwise required. In the simple stress test, the difference is an order of magnitude, and test_signaled_rcu is often on a path where every memory barrier quickly adds up (at least for us).
So is it just that you worry that others using SLAB_DESTROY_BY_RCU won't ensure their fence is safe during the reallocation? -Chris
On Fri, Sep 23, 2016 at 03:02:32PM +0100, Chris Wilson wrote:
On Fri, Sep 23, 2016 at 03:49:26PM +0200, Daniel Vetter wrote:
On Mon, Aug 29, 2016 at 08:08:33AM +0100, Chris Wilson wrote:
With the seqlock now extended to cover the lookup of the fence and its testing, we can perform that testing solely under the seqlock guard and avoid the effective locking and serialisation of acquiring a reference to the request. As the fence is RCU protected, we know it cannot disappear as we test it; this is the same guarantee that previously made it safe to acquire the reference. The seqlock tells us whether the fence was replaced while we were testing it, i.e. whether or not we can trust the result (if not, we just repeat the test until it is stable).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
Not entirely sure this is safe for non-i915 drivers. We might now call ->signalled on a zombie fence (i.e. refcount == 0, but not yet kfreed). i915 can do that, but other drivers might go boom.
All fences must be under RCU guard, or is that the sticking point? Given that, the problem is fence reallocation within the RCU grace period. If we can mandate that fence reallocation must be safe for concurrent fence->ops->*(), we can use this technique to avoid the serialisation barrier otherwise required. In the simple stress test, the difference is an order of magnitude, and test_signaled_rcu is often on a path where every memory barrier quickly adds up (at least for us).
So is it just that you worry that others using SLAB_DESTROY_BY_RCU won't ensure their fence is safe during the reallocation?
Before your patch the RCU-protected part was just the kref_get_unless_zero. This was done before calling down into any fence->ops->* functions, which means the code of these functions was guaranteed to run on a real fence object, and not on a zombie fence in the process of getting destructed.
Afaiui with your patch we might call into fence->ops->* on these zombie fences. That would be a behaviour change with rather big implications (since we'd need to audit all existing implementations, and also make sure all future ones will be ok too). Or I missed something again.
I think we could still do this trick, at least partially, by making sure we only inspect generic fence state and never call into fence->ops before we're guaranteed to have a real fence.
But atm (at least per Christian König) a fence won't eventually get signalled without calling into ->ops functions, so there's a catch 22. -Daniel