On Sun, Sep 12, 2021 at 07:53:07PM +0300, Oded Gabbay wrote:
> Hi,
> Re-sending this patch-set following the release of our user-space TPC
> compiler and runtime library.
>
> I would appreciate a review on this.
I think the big open question we have is the entire revoke discussion.
Having the option to let dma-bufs hang around which map to random local
memory ranges, without a clear ownership link and a way to kill them,
sounds bad to me.
I think there's a few options:
- We require revoke support. But I've heard rdma really doesn't like that,
I guess because taking out an MR while holding the dma_resv_lock would
be an inversion, so it can't be done. Jason, can you recap what exactly
the hold-up was again that makes this a no-go?
- The other option I discussed is a bit more like the exclusive device
ownership model we've had for gpus in drm of the really old kind. Roughly
it would work like this, in terms of drm_device (rough sketch after the
list):
- Only the current owner (drm_master in current drm code, but should
probably rename that to drm_owner) is allowed to use the accel driver.
So all ioctls would fail if you're not drm_master.
- On dropmaster/file close we'd revoke as much as possible, e.g.
in-flight commands, mmaps, anything really that can be revoked.
- For non-revocable things like these dma-bufs we'd keep a drm_master
reference around. This would prevent the next open from acquiring
ownership rights, which at least prevents all the nasty potential
problems.
- admin (or well, the container orchestrator) then has the responsibility
to shoot down all processes until the problem goes away (i.e. until you hit
the one with the rdma MR which keeps the dma-buf alive)
- Not sure there's another reasonable way to do this without inviting some
problems once we get outside of the "single kernel instance per tenant"
use-case.
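A hand-wavy sketch of the ownership gating in made-up C, just to illustrate
the idea (none of these accel_* structs or helpers exist, think of them as
stand-ins for drm_file/drm_master style objects):

struct accel_owner;                     /* refcounted ownership token, like drm_master */

struct accel_file {
        struct accel_owner *owner;      /* only set for the current owner */
};

static long accel_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
        struct accel_file *afile = filp->private_data;

        /* everything fails unless you currently own the device */
        if (!afile->owner)
                return -EACCES;

        return accel_do_ioctl(afile, cmd, arg);
}

/* on export of a non-revocable dma-buf, pin the ownership */
static void accel_pin_owner_for_dmabuf(struct accel_file *afile,
                                       struct accel_bo *bo)
{
        /*
         * The next open cannot become owner until this reference is
         * dropped again in the dma-buf release callback, i.e. until the
         * last importer (e.g. the rdma MR) has let go of the buffer.
         */
        bo->owner = accel_owner_get(afile->owner);
}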
Wrt implementation there's the trouble of this reinventing a bunch of drm
stuff and concepts, but that's maybe for after we've figured out
semantics.
Also would be great if you have a pull request for the userspace runtime
that shows a bit how this all gets used and tied together. Or maybe some
pointers, since I guess retconning a PR in github is maybe a bit much.
Cheers, Daniel
>
> Thanks,
> Oded
>
> Oded Gabbay (1):
> habanalabs: define uAPI to export FD for DMA-BUF
>
> Tomer Tayar (1):
> habanalabs: add support for dma-buf exporter
>
> drivers/misc/habanalabs/Kconfig | 1 +
> drivers/misc/habanalabs/common/habanalabs.h | 22 +
> drivers/misc/habanalabs/common/memory.c | 522 +++++++++++++++++++-
> drivers/misc/habanalabs/gaudi/gaudi.c | 1 +
> drivers/misc/habanalabs/goya/goya.c | 1 +
> include/uapi/misc/habanalabs.h | 28 +-
> 6 files changed, 570 insertions(+), 5 deletions(-)
>
> --
> 2.17.1
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Thu, Sep 9, 2021 at 9:42 AM Simon Ser <contact(a)emersion.fr> wrote:
>
> On Thursday, September 9th, 2021 at 18:31, Rob Clark <robdclark(a)gmail.com> wrote:
>
> > Yes, I think it would.. and "dma-buf/sync_file: Add SET_DEADLINE
> > ioctl" adds such an ioctl.. just for the benefit of igt tests at this
> > point, but the thought was it would be also used by compositors that
> > are doing such frame scheduling. Ofc danvet is a bit grumpy that
> > there isn't a more real (than igt) userspace for the ioctl yet ;-)
>
> Ah, very nice, I somehow missed it.
>
> I guess one issue is that explicit sync isn't quite plumbed through
> compositors yet, so without Jason's DMA-BUF to sync_file IOCTL it'd be
> a bit difficult to use.
>
> Can anybody set the deadline? I wonder if clients should be allowed to.
In its current form, anyone who has the fd can.. I'm not sure how (or
even if) we could limit it beyond that. I suppose hypothetically you
could use this for completely non-compositor related things, like
compute jobs where you want the result by some deadline. (OTOH, it
could be the driver using this internally when the app is stalling on
a result)
> What happens if the deadline is exceeded? I'd assume nothing in
> particular, the deadline being just a hint?
Nothing in particular, it is just a hint. The main intention is to
provide a feedback hint to the drivers in scenarios like vblank
deadlines, where being 1ms late is just as bad as being
$frame_duration-1ms late. From my experiments and profiling it is
useful in a couple of scenarios (rough usage sketch below the list):
1) input latency, ie. go from idle to fullscreen animation, where GPU
has been idle for a while and not busy enough *yet* for devfreq to
kick in and start ramping up the freq before we miss the first vblank
2) double buffering, for ex. if you are 1ms late you end up stalling
15ms before the gpu can start rendering the next frame.. in the
absence of feedback, devfreq would ramp down in this scenario instead
of up
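To make the compositor side a bit more concrete, a rough usage sketch (the
struct and ioctl number here are placeholders I made up, not the actual
uapi from the SET_DEADLINE patch):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

/* placeholder definitions, NOT the real uapi from the patch */
struct fake_set_deadline {
        uint64_t deadline_ns;   /* absolute CLOCK_MONOTONIC timestamp */
};
#define FAKE_SYNC_IOC_SET_DEADLINE _IOW('>', 0x42, struct fake_set_deadline)

/*
 * Compositor: we intend to flip this buffer at the next vblank, so tell
 * the fence's driver when the rendering result is actually needed.
 */
static int hint_deadline(int sync_file_fd, uint64_t next_vblank_ns)
{
        struct fake_set_deadline args = {
                /* leave a bit of headroom for the compositor's own work */
                .deadline_ns = next_vblank_ns - 1000000,
        };

        return ioctl(sync_file_fd, FAKE_SYNC_IOC_SET_DEADLINE, &args);
}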
BR,
-R
On Thu, Sep 9, 2021 at 9:16 AM Simon Ser <contact(a)emersion.fr> wrote:
>
> Out of curiosity, would it be reasonable to allow user-space (more
> precisely, the compositor) to set the deadline via an IOCTL without
> actually performing an atomic commit with the FB?
>
> Some compositors might want to wait themselves for FB fence completions
> to ensure a client doesn't block the whole desktop (by submitting a
> very costly rendering job). In this case it would make sense for the
> compositor to indicate that it intends to display the buffer on next
> vblank if it's ready by that point, without queueing a page-flip yet.
Yes, I think it would.. and "dma-buf/sync_file: Add SET_DEADLINE
ioctl" adds such an ioctl.. just for the benefit of igt tests at this
point, but the thought was it would be also used by compositors that
are doing such frame scheduling. Ofc danvet is a bit grumpy that
there isn't a more real (than igt) userspace for the ioctl yet ;-)
BR,
-R
Hi guys,
while it is in most cases technically possible to not hold a reference to the dma_fence when adding a callback, it is usually a good idea to make sure to always have a reference anyway.
Otherwise we can indeed see cases where this doesn't really work as intended, like for example in the now-fixed EPOLL code.
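Something like the following is what I mean, as a minimal sketch (the my_*
names are made up, only the dma_fence_* calls are the real API):

#include <linux/dma-fence.h>
#include <linux/slab.h>

struct my_cb {
        struct dma_fence_cb cb;
        struct dma_fence *fence;
};

static void my_cb_func(struct dma_fence *fence, struct dma_fence_cb *cb)
{
        struct my_cb *mycb = container_of(cb, struct my_cb, cb);

        /* ... react to the fence signalling ... */

        /* drop the reference we took before installing the callback */
        dma_fence_put(mycb->fence);
        kfree(mycb);
}

static int my_add_cb(struct dma_fence *fence)
{
        struct my_cb *mycb = kzalloc(sizeof(*mycb), GFP_KERNEL);

        if (!mycb)
                return -ENOMEM;

        /* hold a reference for as long as the callback is pending */
        mycb->fence = dma_fence_get(fence);

        if (dma_fence_add_callback(fence, &mycb->cb, my_cb_func)) {
                /* already signalled, the callback was not installed */
                dma_fence_put(mycb->fence);
                kfree(mycb);
        }

        return 0;
}

That way the fence can't go away under us between adding the callback and
the callback firing.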
Regards,
Christian.
Am 31.08.21 um 05:44 schrieb guangming.cao(a)mediatek.com:
> From: Guangming Cao <Guangming.Cao(a)mediatek.com>
>
>> Am 30.08.21 um 12:01 schrieb guangming.cao(a)mediatek.com:
>>> From: Guangming Cao <Guangming.Cao(a)mediatek.com>
>>>
>>> In the current flow, one dmabuf may trigger cache sync many times if
>>> it has been mapped more than once.
>> Well I'm not an expert on DMA heaps, but this will most likely not work
>> correctly.
>>
> All attachments of one dmabuf are added into a list, so I think it means a
> dmabuf supports being mapped more than once. Could you tell me more about it?
Yes, that's correct, and all of those need to be synced as far as I know.
That's why dma_sync_sgtable_for_cpu() is intentionally called for each SG
table given out.
>>> Is there any case where the attachments of one dmabuf point to
>>> different memory? If not, it seems that syncing only once would be better.
>> I think that this can happen, yes.
>>
>> Christian.
>>
> It seems to be a very special case on Android; if you don't mind, could you
> tell me more about it?
That might be the case; nevertheless, this change here is illegal from
the DMA API point of view as far as I can see.
Regards,
Christian.
>
>>> Signed-off-by: Guangming Cao <Guangming.Cao(a)mediatek.com>
>>> ---
>>> drivers/dma-buf/heaps/system_heap.c | 14 ++++++++------
>>> 1 file changed, 8 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
>>> index 23a7e74ef966..909ef652a8c8 100644
>>> --- a/drivers/dma-buf/heaps/system_heap.c
>>> +++ b/drivers/dma-buf/heaps/system_heap.c
>>> @@ -162,9 +162,10 @@ static int system_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>>> invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
>>>
>>> list_for_each_entry(a, &buffer->attachments, list) {
>>> - if (!a->mapped)
>>> - continue;
>>> - dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
>>> + if (a->mapped) {
>>> + dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
>>> + break;
>>> + }
>>> }
>>> mutex_unlock(&buffer->lock);
>>>
>>> @@ -183,9 +184,10 @@ static int system_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>>> flush_kernel_vmap_range(buffer->vaddr, buffer->len);
>>>
>>> list_for_each_entry(a, &buffer->attachments, list) {
>>> - if (!a->mapped)
>>> - continue;
>>> - dma_sync_sgtable_for_device(a->dev, a->table, direction);
>>> + if (a->mapped) {
>>> + dma_sync_sgtable_for_device(a->dev, a->table, direction);
>>> + break;
>>> + }
>>> }
>>> mutex_unlock(&buffer->lock);
>>>
Specifically document the new/clarified rules around how the shared
fences do not have any ordering requirements against the exclusive
fence.
But also document all the things a bit better: given how central struct
dma_resv is to dynamic buffer management, the docs have been very
inadequate.
- Lots more links to other pieces of the puzzle. Unfortunately
ttm_buffer_object has no docs, so no links :-(
- Explain/complain a bit about dma_resv_locking_ctx(). I still don't
like that one, but fixing the ttm call chains is going to be
horrible. Plus we want to plug in real slowpath locking when we do
that anyway.
- Main part of the patch is some actual docs for struct dma_resv.
Overall I think we still have a lot of bad naming in this area (e.g.
dma_resv.fence is singular, but contains the multiple shared fences),
but I think that's more indicative of how the semantics and rules are
just not great.
Another thing that's really awkward is how chaining exclusive fences
right now means direct dma_resv.fence_excl pointer access with an
rcu_assign_pointer. Not so great either.
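For reference, the update pattern these docs try to nail down looks roughly
like this in a driver (simplified sketch I made up, single object,
-EDEADLK backoff omitted):

#include <linux/dma-resv.h>

static int sketch_attach_fence(struct dma_resv *resv,
                               struct dma_fence *fence, bool write)
{
        struct ww_acquire_ctx ctx;
        int ret;

        ww_acquire_init(&ctx, &reservation_ww_class);

        ret = dma_resv_lock(resv, &ctx);        /* -EDEADLK backoff omitted */
        if (ret)
                goto out;

        if (write) {
                /*
                 * Replaces all fences; the caller must guarantee that
                 * @fence only signals after all previous fences.
                 */
                dma_resv_add_excl_fence(resv, fence);
        } else {
                /* reserve a slot while we can still fail ... */
                ret = dma_resv_reserve_shared(resv, 1);
                if (!ret)
                        /* ... then, past the point of no return, add it */
                        dma_resv_add_shared_fence(resv, fence);
        }

        dma_resv_unlock(resv);
out:
        ww_acquire_fini(&ctx);
        return ret;
}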
v2:
- Fix a pile of typos (Matt, Jason)
- Hammer it in that breaking the rules leads to use-after-free issues
around dma-buf sharing (Christian)
Reviewed-by: Christian König <christian.koenig(a)amd.com>
Cc: Jason Ekstrand <jason(a)jlekstrand.net>
Cc: Matthew Auld <matthew.auld(a)intel.com>
Reviewed-by: Matthew Auld <matthew.auld(a)intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
---
drivers/dma-buf/dma-resv.c | 24 ++++++---
include/linux/dma-buf.h | 7 +++
include/linux/dma-resv.h | 104 +++++++++++++++++++++++++++++++++++--
3 files changed, 124 insertions(+), 11 deletions(-)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index e744fd87c63c..84fbe60629e3 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -48,6 +48,8 @@
* write operations) or N shared fences (read operations). The RCU
* mechanism is used to protect read access to fences from locked
* write-side updates.
+ *
+ * See struct dma_resv for more details.
*/
DEFINE_WD_CLASS(reservation_ww_class);
@@ -137,7 +139,11 @@ EXPORT_SYMBOL(dma_resv_fini);
* @num_fences: number of fences we want to add
*
* Should be called before dma_resv_add_shared_fence(). Must
- * be called with obj->lock held.
+ * be called with @obj locked through dma_resv_lock().
+ *
+ * Note that the preallocated slots need to be re-reserved if @obj is unlocked
+ * at any time before calling dma_resv_add_shared_fence(). This is validated
+ * when CONFIG_DEBUG_MUTEXES is enabled.
*
* RETURNS
* Zero for success, or -errno
@@ -234,8 +240,10 @@ EXPORT_SYMBOL(dma_resv_reset_shared_max);
* @obj: the reservation object
* @fence: the shared fence to add
*
- * Add a fence to a shared slot, obj->lock must be held, and
+ * Add a fence to a shared slot, @obj must be locked with dma_resv_lock(), and
* dma_resv_reserve_shared() has been called.
+ *
+ * See also &dma_resv.fence for a discussion of the semantics.
*/
void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)
{
@@ -278,9 +286,11 @@ EXPORT_SYMBOL(dma_resv_add_shared_fence);
/**
* dma_resv_add_excl_fence - Add an exclusive fence.
* @obj: the reservation object
- * @fence: the shared fence to add
+ * @fence: the exclusive fence to add
*
- * Add a fence to the exclusive slot. The obj->lock must be held.
+ * Add a fence to the exclusive slot. @obj must be locked with dma_resv_lock().
+ * Note that this function replaces all fences attached to @obj, see also
+ * &dma_resv.fence_excl for a discussion of the semantics.
*/
void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)
{
@@ -609,9 +619,11 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
* fence
*
* Callers are not required to hold specific locks, but maybe hold
- * dma_resv_lock() already
+ * dma_resv_lock() already.
+ *
* RETURNS
- * true if all fences signaled, else false
+ *
+ * True if all fences signaled, else false.
*/
bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all)
{
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 678b2006be78..fc62b5f9980c 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -420,6 +420,13 @@ struct dma_buf {
* - Dynamic importers should set fences for any access that they can't
* disable immediately from their &dma_buf_attach_ops.move_notify
* callback.
+ *
+ * IMPORTANT:
+ *
+ * All drivers must obey the struct dma_resv rules, specifically the
+ * rules for updating fences, see &dma_resv.fence_excl and
+ * &dma_resv.fence. If these dependency rules are broken access tracking
+ * can be lost resulting in use after free issues.
*/
struct dma_resv *resv;
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index e1ca2080a1ff..9100dd3dc21f 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -62,16 +62,90 @@ struct dma_resv_list {
/**
* struct dma_resv - a reservation object manages fences for a buffer
- * @lock: update side lock
- * @seq: sequence count for managing RCU read-side synchronization
- * @fence_excl: the exclusive fence, if there is one currently
- * @fence: list of current shared fences
+ *
+ * There are multiple uses for this, with sometimes slightly different rules in
+ * how the fence slots are used.
+ *
+ * One use is to synchronize cross-driver access to a struct dma_buf, either for
+ * dynamic buffer management or just to handle implicit synchronization between
+ * different users of the buffer in userspace. See &dma_buf.resv for a more
+ * in-depth discussion.
+ *
+ * The other major use is to manage access and locking within a driver in a
+ * buffer based memory manager. struct ttm_buffer_object is the canonical
+ * example here, since this is where reservation objects originated from. But
+ * use in drivers is spreading and some drivers also manage struct
+ * drm_gem_object with the same scheme.
*/
struct dma_resv {
+ /**
+ * @lock:
+ *
+ * Update side lock. Don't use directly, instead use the wrapper
+ * functions like dma_resv_lock() and dma_resv_unlock().
+ *
+ * Drivers which use the reservation object to manage memory dynamically
+ * also use this lock to protect buffer object state like placement,
+ * allocation policies or throughout command submission.
+ */
struct ww_mutex lock;
+
+ /**
+ * @seq:
+ *
+ * Sequence count for managing RCU read-side synchronization, allows
+ * read-only access to @fence_excl and @fence while ensuring we take a
+ * consistent snapshot.
+ */
seqcount_ww_mutex_t seq;
+ /**
+ * @fence_excl:
+ *
+ * The exclusive fence, if there is one currently.
+ *
+ * There are two ways to update this fence:
+ *
+ * - First by calling dma_resv_add_excl_fence(), which replaces all
+ * fences attached to the reservation object. To guarantee that no
+ * fences are lost, this new fence must signal only after all previous
+ * fences, both shared and exclusive, have signalled. In some cases it
+ * is convenient to achieve that by attaching a struct dma_fence_array
+ * with all the new and old fences.
+ *
+ * - Alternatively the fence can be set directly, which leaves the
+ * shared fences unchanged. To guarantee that no fences are lost, this
+ * new fence must signal only after the previous exclusive fence has
+ * signalled. Since the shared fences are staying intact, it is not
+ * necessary to maintain any ordering against those. If semantically
+ * only a new access is added without actually treating the previous
+ * one as a dependency the exclusive fences can be strung together
+ * using struct dma_fence_chain.
+ *
+ * Note that the actual semantics of what an exclusive or shared fence means
+ * is defined by the user; for reservation objects shared across drivers
+ * see &dma_buf.resv.
+ */
struct dma_fence __rcu *fence_excl;
+
+ /**
+ * @fence:
+ *
+ * List of current shared fences.
+ *
+ * There are no ordering constraints of shared fences against the
+ * exclusive fence slot. If a waiter needs to wait for all access, it
+ * has to wait for both sets of fences to signal.
+ *
+ * A new fence is added by calling dma_resv_add_shared_fence(). Since
+ * this often needs to be done past the point of no return in command
+ * submission it cannot fail, and therefore sufficient slots need to be
+ * reserved by calling dma_resv_reserve_shared().
+ *
+ * Note that the actual semantics of what an exclusive or shared fence means
+ * is defined by the user; for reservation objects shared across drivers
+ * see &dma_buf.resv.
+ */
struct dma_resv_list __rcu *fence;
};
@@ -98,6 +172,13 @@ static inline void dma_resv_reset_shared_max(struct dma_resv *obj) {}
* undefined order, a #ww_acquire_ctx is passed to unwind if a cycle
* is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation
* object may be locked by itself by passing NULL as @ctx.
+ *
+ * When a die situation is indicated by returning -EDEADLK all locks held by
+ * @ctx must be unlocked and then dma_resv_lock_slow() called on @obj.
+ *
+ * Unlocked by calling dma_resv_unlock().
+ *
+ * See also dma_resv_lock_interruptible() for the interruptible variant.
*/
static inline int dma_resv_lock(struct dma_resv *obj,
struct ww_acquire_ctx *ctx)
@@ -119,6 +200,12 @@ static inline int dma_resv_lock(struct dma_resv *obj,
* undefined order, a #ww_acquire_ctx is passed to unwind if a cycle
* is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation
* object may be locked by itself by passing NULL as @ctx.
+ *
+ * When a die situation is indicated by returning -EDEADLK all locks held by
+ * @ctx must be unlocked and then dma_resv_lock_slow_interruptible() called on
+ * @obj.
+ *
+ * Unlocked by calling dma_resv_unlock().
*/
static inline int dma_resv_lock_interruptible(struct dma_resv *obj,
struct ww_acquire_ctx *ctx)
@@ -134,6 +221,8 @@ static inline int dma_resv_lock_interruptible(struct dma_resv *obj,
* Acquires the reservation object after a die case. This function
* will sleep until the lock becomes available. See dma_resv_lock() as
* well.
+ *
+ * See also dma_resv_lock_slow_interruptible() for the interruptible variant.
*/
static inline void dma_resv_lock_slow(struct dma_resv *obj,
struct ww_acquire_ctx *ctx)
@@ -167,7 +256,7 @@ static inline int dma_resv_lock_slow_interruptible(struct dma_resv *obj,
* if they overlap with a writer.
*
* Also note that since no context is provided, no deadlock protection is
- * possible.
+ * possible, which is also not needed for a trylock.
*
* Returns true if the lock was acquired, false otherwise.
*/
@@ -193,6 +282,11 @@ static inline bool dma_resv_is_locked(struct dma_resv *obj)
*
* Returns the context used to lock a reservation object or NULL if no context
* was used or the object is not locked at all.
+ *
+ * WARNING: This interface is pretty horrible, but TTM needs it because it
+ * doesn't pass the struct ww_acquire_ctx around in some very long callchains.
+ * Everyone else just uses it to check whether they're holding a reservation or
+ * not.
*/
static inline struct ww_acquire_ctx *dma_resv_locking_ctx(struct dma_resv *obj)
{
--
2.32.0
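As an illustration of the -EDEADLK dance the dma_resv_lock() kerneldoc above
describes, a rough two-buffer sketch (made up for this mail; real code
usually loops over an array of objects instead):

#include <linux/dma-resv.h>

/* caller has already done ww_acquire_init(ctx, &reservation_ww_class) */
static int sketch_lock_two(struct dma_resv *a, struct dma_resv *b,
                           struct ww_acquire_ctx *ctx)
{
        struct dma_resv *contended = NULL;
        int ret;

retry:
        if (a != contended) {
                ret = dma_resv_lock(a, ctx);
                if (ret) {
                        if (contended)
                                dma_resv_unlock(contended);
                        contended = a;
                        goto err;
                }
        }

        if (b != contended) {
                ret = dma_resv_lock(b, ctx);
                if (ret) {
                        dma_resv_unlock(a);
                        contended = b;
                        goto err;
                }
        }

        ww_acquire_done(ctx);
        return 0;

err:
        if (ret == -EDEADLK) {
                /* everything is unlocked again, sleep-wait on the contended lock */
                dma_resv_lock_slow(contended, ctx);
                goto retry;
        }
        return ret;
}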