On Thu, Apr 15, 2021 at 4:02 PM Thomas Hellström (Intel)
<thomas_os(a)shipmail.org> wrote:
>
>
> On 4/15/21 3:37 PM, Daniel Vetter wrote:
> > On Tue, Apr 13, 2021 at 09:57:06AM +0200, Christian König wrote:
> >>
> >> On 13.04.21 at 09:50, Thomas Hellström wrote:
> >>> Hi!
> >>>
> >>> During the dma_resv conversion of the i915 driver, we've been using ww
> >>> transaction lock lists to keep track of ww_mutexes that are locked
> >>> during the transaction so that they can be batch unlocked at suitable
> >>> locations. Including also the LMEM/VRAM eviction we've ended up with
> >>> two static lists per transaction context; one typically unlocked at the
> >>> end of transaction and one initialized before and unlocked after each
> >>> buffer object validate. This enables us to do sleeping locking at
> >>> eviction and keep objects locked on the eviction list until we
> >>> eventually succeed allocating memory (modulo minor flaws of course).
> >>>
> >>> For the i915 TTM conversion it would be beneficial to introduce
> >>> similar functionality that works in TTM but also cross-driver, for
> >>> example in move_notify. It would also be beneficial
> >>> to be able to put any dma_resv ww mutex on the lists, and not require
> >>> it to be embedded in a particular object type.
> >>>
> >>> I started sketching some utilities for this. For TTM, for example,
> >>> the idea would be to pass a list head for the ww transaction lock list
> >>> in the ttm_operation_ctx. A function taking a ww_mutex could then
> >>> either attach a grabbed lock to the list for batch unlocking, or be
> >>> responsible for unlocking when the lock's scope is exited. It's also
> >>> possible to create sublists if so desired. I believe the below would be
> >>> sufficient to cover the i915 functionality.
> >>>
> >>> Any comments and suggestions appreciated!
> >> Ah yes, Daniel and I have been discussing something like this for years.
> >>
> >> I also came up with a rough implementation, but we always ran into lifetime
> >> issues.
> >>
> >> In other words the ww_mutexes which are on the list would need to be kept
> >> alive until unlocked.
> >>
> >> Because of this we kind of backed up and said we would need this on the GEM
> >> level instead of working with dma_resv objects.
> > Yeah there's a few funny concerns here that make this awkward:
> > - For simplicity doing these helpers at the gem level should make things a
> > bit easier, because then we have a standard way to drop the reference.
> > It does mean that the only thing you can lock like this are gem objects,
> > but I think that's fine. At least for a first cut.
> >
> > - This is a bit awkward for vmwgfx, but a) Zack has mentioned he's looking
> > into adopting gem bo internally to be able to drop a pile of code and
> > stop making vmwgfx the only special-case we have b) drivers which don't
> > need this won't need this, so should be fine.
> >
> > The other awkward thing I guess is that ttm would need to use the
> > embedded kref from the gem bo, but that should be transparent I think.
> >
> > - Next up is dma-buf: For i915 we'd like to do the same eviction trick
> > also through p2p dma-buf callbacks, so that this works the same as
> > eviction/reservation within a gpu. But for these internal bo you might
> > not have a dma-buf, so we can't just lift the trick to the dma-buf
> > level. But I think if we pass e.g. a struct list_head and a callback to
> > unreference/unlock all the buffers in there to the exporter, plus
> > similar for the slowpath lock, then that should be doable without
> > glorious layering inversions between dma-buf and gem.
> >
> > I think for dma-buf it should even be ok if this requires that we
> > allocate an entire structure with kmalloc or something, allocating
> > memory while holding dma_resv is ok.
>
> Yes, the thing here with the suggested helpers is that you would just
> embed a trans_lockitem struct in the gem object (and define the gem
> object ops). Otherwise, and for passing to dma-buf, this is pretty much
> exactly what you are suggesting, but the huge benefit of encapsulating
> the needed members like this is that when we need to change something we
> change it in just one place.
>
> For anything that doesn't have a gem object (dma-buf, vmwgfx or
> whatever) you have the choice of either allocating a struct
> trans_lockitem or embedding it wherever you prefer. In particular, this is
> beneficial where you have a single dma-resv class ww-mutex sitting
> somewhere in the way and you don't want to needlessly have a gem object
> that embeds it.
The thing is, everyone who actually uses dma_resv_lock has a
gem_buffer_object underneath. So it feels a bit like flexibility for
no real need, and I think it would make it slightly more awkward for
gem drivers to neatly integrate into their cs path. The lockitem
struct works, but it is a bit cumbersome.
Also if we add some wrappers to e.g. add a gem_bo to the ctx, then if
we decide to slip the lockitem in there, we still only need to touch
the helper code, and not all drivers.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
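To make the dma-buf eviction interface Daniel sketches above a bit more
concrete, the argument passed along to the exporter could look roughly like
this (purely hypothetical; none of these names exist in dma-buf today):

struct dma_buf_eviction_ctx {
	/* buffers locked during eviction, whatever object type they are */
	struct list_head locked;
	/* unlock and unreference everything on @locked */
	void (*unlock_put_all)(struct dma_buf_eviction_ctx *ctx);
	/* slow-path lock of the contending buffer after backing off */
	int (*slow_lock)(struct dma_buf_eviction_ctx *ctx, bool intr);
};

As noted above, allocating such a context with kmalloc while holding
dma_resv would be fine.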
On 13.04.21 at 09:50, Thomas Hellström wrote:
> Hi!
>
> During the dma_resv conversion of the i915 driver, we've been using ww
> transaction lock lists to keep track of ww_mutexes that are locked
> during the transaction so that they can be batch unlocked at suitable
> locations. Including also the LMEM/VRAM eviction we've ended up with
> two static lists per transaction context; one typically unlocked at the
> end of transaction and one initialized before and unlocked after each
> buffer object validate. This enables us to do sleeping locking at
> eviction and keep objects locked on the eviction list until we
> eventually succeed allocating memory (modulo minor flaws of course).
>
> For the i915 TTM conversion it would be beneficial to introduce
> similar functionality that works in TTM but also cross-driver, for
> example in move_notify. It would also be beneficial
> to be able to put any dma_resv ww mutex on the lists, and not require
> it to be embedded in a particular object type.
>
> I started sketching some utilities for this. For TTM, for example,
> the idea would be to pass a list head for the ww transaction lock list
> in the ttm_operation_ctx. A function taking a ww_mutex could then
> either attach a grabbed lock to the list for batch unlocking, or be
> responsible for unlocking when the lock's scope is exited. It's also
> possible to create sublists if so desired. I believe the below would be
> sufficient to cover the i915 functionality.
>
> Any comments and suggestions appreciated!
Ah yes, Daniel and I have been discussing something like this for years.
I also came up with a rough implementation, but we always ran into
lifetime issues.
In other words the ww_mutexes which are on the list would need to be
kept alive until unlocked.
Because of this we kind of backed up and said we would need this on the
GEM level instead of working with dma_resv objects.
Regards,
Christian.
>
> 8<------------------------------------------------------
>
> #ifndef _TRANSACTION_LOCKLIST_H_
> #define _TRANSACTION_LOCKLIST_H_
>
> struct trans_lockitem;
>
> /**
>  * struct trans_locklist_ops - Ops structure for the ww locklist functionality.
>  *
>  * Typically a const struct trans_locklist_ops is defined for each type that
>  * embeds a struct trans_lockitem, or has a struct trans_lockitem pointing
>  * at it using the private pointer. It can be a buffer object, reservation
>  * object, a single ww_mutex or even a sublist of trans_lockitems.
>  */
> struct trans_locklist_ops {
> 	/**
> 	 * slow_lock: Slow lock to relax the transaction. Only used by
> 	 * a contending lock item.
> 	 * @item: The struct trans_lockitem to lock
> 	 * @intr: Whether to lock interruptible
> 	 *
> 	 * Return: -ERESTARTSYS: Hit a signal when relaxing,
> 	 * -EAGAIN: relaxing successful, but the contending lock
> 	 * remains unlocked.
> 	 */
> 	int (*slow_lock) (struct trans_lockitem *item, bool intr);
>
> 	/**
> 	 * unlock: Unlock.
> 	 * @item: The struct trans_lockitem to unlock.
> 	 */
> 	void (*unlock) (struct trans_lockitem *item);
>
> 	/**
> 	 * put: Put the lock item reference.
> 	 * @item: The struct trans_lockitem to put. This function may
> 	 * be NULL.
> 	 */
> 	void (*put) (struct trans_lockitem *item);
> };
>
> /**
> * struct trans_lockitem
> * @ops: Pointer to an Ops structure for this lockitem.
> * @link: List link for the transaction locklist.
> * @private: Private info.
> * @relax_unlock: Unlock contending lock after relaxation since it is
> * likely not needed after a transaction restart.
> *
> * A struct trans_lockitem typically represents a single lock, but is
> * generic enough to represent a sublist of locks. It can either be
> * embedded, or allocated on demand. A kmem_cache might be beneficial.
> */
> struct trans_lockitem {
> 	const struct trans_locklist_ops *ops;
> 	struct list_head link;
> 	u32 relax_unlock:1;
> 	void *private;
> };
>
> /* Unlock example */
> static inline void trans_unlock_put_all(struct list_head *list)
> {
> 	struct trans_lockitem *lock, *next;
>
> 	list_for_each_entry_safe(lock, next, list, link) {
> 		lock->ops->unlock(lock);
> 		list_del_init(&lock->link);
> 		if (lock->ops->put)
> 			lock->ops->put(lock);
> 	}
> }
>
> /* Backoff example */
> static inline int __must_check trans_backoff(struct list_head *list,
> 					     bool intr,
> 					     struct trans_lockitem *contending)
> {
> 	int ret = 0;
>
> 	trans_unlock_put_all(list);
> 	if (contending) {
> 		ret = contending->ops->slow_lock(contending, intr);
> 		if (!ret && contending->relax_unlock)
> 			contending->ops->unlock(contending);
> 		if (ret == -EAGAIN)
> 			ret = 0;
> 		if (contending->ops->put)
> 			contending->ops->put(contending);
> 	}
>
> 	return ret;
> }
>
>
> #endif /* _TRANSACTION_LOCKLIST_H_ */
>
>
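For illustration, a rough sketch of how a caller might drive these helpers in
a ww transaction retry loop; obj_lock_and_validate() is a made-up placeholder
for whatever takes the ww_mutexes and attaches trans_lockitems to the list:

static int validate_all(struct ww_acquire_ctx *ww_ctx, struct list_head *locks)
{
	struct trans_lockitem *contending = NULL;
	int ret;

retry:
	ret = obj_lock_and_validate(ww_ctx, locks, &contending);
	if (ret == -EDEADLK) {
		/*
		 * trans_backoff() unlocks and puts everything on @locks,
		 * then slow-locks the contending item before we restart.
		 */
		ret = trans_backoff(locks, true, contending);
		if (!ret)
			goto retry;
	}
	return ret;
}

On success the locks stay on the list and are dropped in one go with
trans_unlock_put_all() once the transaction is finished.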
tldr; DMA buffers aren't normal memory; expecting that you can use
them like normal memory (e.g. that get_user_pages works, or that they're
accounted like any other normal memory) cannot be guaranteed.
Since some userspace only runs on integrated devices, where all
buffers are actually resident system memory, there's a huge
temptation to assume that a struct page is always present and usable
as for any other pagecache-backed mmap. This has the potential to
result in a uapi nightmare.
To close this gap, require that DMA buffer mmaps are VM_PFNMAP, which
blocks get_user_pages and all the other struct page based
infrastructure for everyone. In spirit this is the uapi counterpart to
the kernel-internal CONFIG_DMABUF_DEBUG.
Motivated by a recent patch which wanted to switch the system dma-buf
heap to vm_insert_page instead of vm_insert_pfn.
v2:
Jason brought up that we also want to guarantee that all ptes have the
pte_special flag set, to catch fast get_user_pages (on architectures
that support this). Allowing VM_MIXEDMAP (like VM_SPECIAL does) would
still allow vm_insert_page, but limiting to VM_PFNMAP will catch that.
From auditing the various functions that insert pfn pte entries
(vm_insert_pfn_prot, remap_pfn_range and all its callers like
dma_mmap_wc) it looks like VM_PFNMAP is already required anyway, so
this should be the correct flag to check for.
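For illustration, an exporter whose mmap is built on remap_pfn_range already
satisfies the check this patch adds, since remap_pfn_range sets VM_PFNMAP on
the vma (the buffer struct below is made up and assumes a physically
contiguous allocation):

static int my_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
{
	struct my_heap_buffer *buf = dmabuf->priv;

	/*
	 * remap_pfn_range() sets VM_IO | VM_PFNMAP | VM_DONTEXPAND |
	 * VM_DONTDUMP, so the WARN_ON(!(vma->vm_flags & VM_PFNMAP))
	 * added by this patch never fires for this exporter.
	 */
	return remap_pfn_range(vma, vma->vm_start,
			       buf->paddr >> PAGE_SHIFT,
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
}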
References: https://lore.kernel.org/lkml/CAKMK7uHi+mG0z0HUmNt13QCCvutuRVjpcR0NjRL12k-Wb…
Acked-by: Christian König <christian.koenig(a)amd.com>
Cc: Jason Gunthorpe <jgg(a)ziepe.ca>
Cc: Suren Baghdasaryan <surenb(a)google.com>
Cc: Matthew Wilcox <willy(a)infradead.org>
Cc: John Stultz <john.stultz(a)linaro.org>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
---
drivers/dma-buf/dma-buf.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index f264b70c383e..06cb1d2e9fdc 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -127,6 +127,7 @@ static struct file_system_type dma_buf_fs_type = {
static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
{
struct dma_buf *dmabuf;
+ int ret;
if (!is_dma_buf_file(file))
return -EINVAL;
@@ -142,7 +143,11 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
dmabuf->size >> PAGE_SHIFT)
return -EINVAL;
- return dmabuf->ops->mmap(dmabuf, vma);
+ ret = dmabuf->ops->mmap(dmabuf, vma);
+
+ WARN_ON(!(vma->vm_flags & VM_PFNMAP));
+
+ return ret;
}
static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
@@ -1244,6 +1249,8 @@ EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
unsigned long pgoff)
{
+ int ret;
+
if (WARN_ON(!dmabuf || !vma))
return -EINVAL;
@@ -1264,7 +1271,11 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
vma_set_file(vma, dmabuf->file);
vma->vm_pgoff = pgoff;
- return dmabuf->ops->mmap(dmabuf, vma);
+ ret = dmabuf->ops->mmap(dmabuf, vma);
+
+ WARN_ON(!(vma->vm_flags & VM_PFNMAP));
+
+ return ret;
}
EXPORT_SYMBOL_GPL(dma_buf_mmap);
--
2.31.0
On Fri, Apr 02, 2021 at 02:39:31PM +0530, Nava kishore Manne wrote:
> Add "fpga-config-from-dmabuf" property to allow the bitstream
> configuration from pre-allocated dma-buffer.
>
> Signed-off-by: Nava kishore Manne <nava.manne(a)xilinx.com>
> ---
> Documentation/devicetree/bindings/fpga/fpga-region.txt | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/Documentation/devicetree/bindings/fpga/fpga-region.txt b/Documentation/devicetree/bindings/fpga/fpga-region.txt
> index 969ca53bb65e..c573cf258d60 100644
> --- a/Documentation/devicetree/bindings/fpga/fpga-region.txt
> +++ b/Documentation/devicetree/bindings/fpga/fpga-region.txt
> @@ -177,6 +177,8 @@ Optional properties:
> it indicates that the FPGA has already been programmed with this image.
> If this property is in an overlay targeting a FPGA region, it is a
> request to program the FPGA with that image.
> +- fpga-config-from-dmabuf : boolean, set if the FPGA configuration is done from
> +  a pre-allocated dma-buffer.
Sounds like an implementation detail of the OS. Doesn't belong in DT.
Rob
> - fpga-bridges : should contain a list of phandles to FPGA Bridges that must be
> controlled during FPGA programming along with the parent FPGA bridge.
> This property is optional if the FPGA Manager handles the bridges.
> --
> 2.18.0
>
From: David Stevens <stevensd(a)chromium.org>
Allocate a new private stub fence in drm_syncobj_assign_null_handle,
instead of using a static stub fence.
When userspace creates a fence with DRM_SYNCOBJ_CREATE_SIGNALED or when
userspace signals a fence via DRM_IOCTL_SYNCOBJ_SIGNAL, the timestamp
obtained when the fence is exported and queried with SYNC_IOC_FILE_INFO
should match when the fence's status was changed from the perspective of
userspace, which is during the respective ioctl.
When a static stub fence started being used by these ioctls, this
behavior changed. Instead, the timestamp returned by SYNC_IOC_FILE_INFO
became the first time anything used the static stub fence, which has no
meaning to userspace.
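The userspace-visible behaviour can be sketched with raw ioctls as below
(illustrative only; error handling omitted, include paths depend on how the
libdrm/kernel uapi headers are installed, and drm_fd is an already-open DRM
device fd):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/sync_file.h>
#include <drm/drm.h>

static __u64 signaled_syncobj_timestamp(int drm_fd)
{
	struct drm_syncobj_create create = {
		.flags = DRM_SYNCOBJ_CREATE_SIGNALED,
	};
	struct drm_syncobj_handle export = {
		.flags = DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE,
	};
	struct sync_fence_info fence_info = {};
	struct sync_file_info info = {
		.num_fences = 1,
		.sync_fence_info = (uintptr_t)&fence_info,
	};

	/* Create an already-signaled syncobj and export it as a sync_file. */
	ioctl(drm_fd, DRM_IOCTL_SYNCOBJ_CREATE, &create);
	export.handle = create.handle;
	ioctl(drm_fd, DRM_IOCTL_SYNCOBJ_HANDLE_TO_FD, &export);
	ioctl(export.fd, SYNC_IOC_FILE_INFO, &info);

	/*
	 * With the static stub fence this reported the first-ever stub use;
	 * with this patch it is the time of DRM_IOCTL_SYNCOBJ_CREATE above.
	 */
	return fence_info.timestamp_ns;
}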
Signed-off-by: David Stevens <stevensd(a)chromium.org>
---
drivers/dma-buf/dma-fence.c | 33 ++++++++++++++++++++++++++++++++-
drivers/gpu/drm/drm_syncobj.c | 28 ++++++++++++++++++++--------
include/linux/dma-fence.h | 1 +
3 files changed, 53 insertions(+), 9 deletions(-)
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index d64fc03929be..6081eb962490 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -26,6 +26,11 @@ EXPORT_TRACEPOINT_SYMBOL(dma_fence_signaled);
static DEFINE_SPINLOCK(dma_fence_stub_lock);
static struct dma_fence dma_fence_stub;
+struct drm_fence_private_stub {
+ struct dma_fence base;
+ spinlock_t lock;
+};
+
/*
* fence context counter: each execution context should have its own
* fence context, this allows checking if fences belong to the same
@@ -123,7 +128,9 @@ static const struct dma_fence_ops dma_fence_stub_ops = {
/**
* dma_fence_get_stub - return a signaled fence
*
- * Return a stub fence which is already signaled.
+ * Return a stub fence which is already signaled. The fence's
+ * timestamp corresponds to the first time after boot this
+ * function is called.
*/
struct dma_fence *dma_fence_get_stub(void)
{
@@ -141,6 +148,30 @@ struct dma_fence *dma_fence_get_stub(void)
}
EXPORT_SYMBOL(dma_fence_get_stub);
+/**
+ * dma_fence_allocate_private_stub - return a private, signaled fence
+ *
+ * Return a newly allocated and signaled stub fence.
+ */
+struct dma_fence *dma_fence_allocate_private_stub(void)
+{
+ struct drm_fence_private_stub *fence;
+
+ fence = kzalloc(sizeof(*fence), GFP_KERNEL);
+ if (fence == NULL)
+ return ERR_PTR(-ENOMEM);
+
+ spin_lock_init(&fence->lock);
+ dma_fence_init(&fence->base,
+ &dma_fence_stub_ops,
+ &fence->lock,
+ 0, 0);
+ dma_fence_signal(&fence->base);
+
+ return &fence->base;
+}
+EXPORT_SYMBOL(dma_fence_allocate_private_stub);
+
/**
* dma_fence_context_alloc - allocate an array of fence contexts
* @num: amount of contexts to allocate
diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index 349146049849..c6125e57ae37 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -350,12 +350,15 @@ EXPORT_SYMBOL(drm_syncobj_replace_fence);
*
* Assign a already signaled stub fence to the sync object.
*/
-static void drm_syncobj_assign_null_handle(struct drm_syncobj *syncobj)
+static int drm_syncobj_assign_null_handle(struct drm_syncobj *syncobj)
{
- struct dma_fence *fence = dma_fence_get_stub();
+ struct dma_fence *fence = dma_fence_allocate_private_stub();
+ if (IS_ERR(fence))
+ return PTR_ERR(fence);
- drm_syncobj_replace_fence(syncobj, fence);
- dma_fence_put(fence);
+ drm_syncobj_replace_fence(syncobj, fence);
+ dma_fence_put(fence);
+ return 0;
}
/* 5s default for wait submission */
@@ -469,6 +472,7 @@ EXPORT_SYMBOL(drm_syncobj_free);
int drm_syncobj_create(struct drm_syncobj **out_syncobj, uint32_t flags,
struct dma_fence *fence)
{
+ int ret;
struct drm_syncobj *syncobj;
syncobj = kzalloc(sizeof(struct drm_syncobj), GFP_KERNEL);
@@ -479,8 +483,13 @@ int drm_syncobj_create(struct drm_syncobj **out_syncobj, uint32_t flags,
INIT_LIST_HEAD(&syncobj->cb_list);
spin_lock_init(&syncobj->lock);
- if (flags & DRM_SYNCOBJ_CREATE_SIGNALED)
- drm_syncobj_assign_null_handle(syncobj);
+ if (flags & DRM_SYNCOBJ_CREATE_SIGNALED) {
+ ret = drm_syncobj_assign_null_handle(syncobj);
+ if (ret < 0) {
+ drm_syncobj_put(syncobj);
+ return ret;
+ }
+ }
if (fence)
drm_syncobj_replace_fence(syncobj, fence);
@@ -1322,8 +1331,11 @@ drm_syncobj_signal_ioctl(struct drm_device *dev, void *data,
if (ret < 0)
return ret;
- for (i = 0; i < args->count_handles; i++)
- drm_syncobj_assign_null_handle(syncobjs[i]);
+ for (i = 0; i < args->count_handles; i++) {
+ ret = drm_syncobj_assign_null_handle(syncobjs[i]);
+ if (ret < 0)
+ break;
+ }
drm_syncobj_array_free(syncobjs, args->count_handles);
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index 9f12efaaa93a..6ffb4b2c6371 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -587,6 +587,7 @@ static inline signed long dma_fence_wait(struct dma_fence *fence, bool intr)
}
struct dma_fence *dma_fence_get_stub(void);
+struct dma_fence *dma_fence_allocate_private_stub(void);
u64 dma_fence_context_alloc(unsigned num);
#define DMA_FENCE_TRACE(f, fmt, args...) \
--
2.31.0.208.g409f899ff0-goog
From: David Stevens <stevensd(a)chromium.org>
Allocate a new private stub fence in drm_syncobj_assign_null_handle,
instead of using a static stub fence.
When userspace creates a fence with DRM_SYNCOBJ_CREATE_SIGNALED or when
userspace signals a fence via DRM_IOCTL_SYNCOBJ_SIGNAL, the timestamp
obtained when the fence is exported and queried with SYNC_IOC_FILE_INFO
should match when the fence's status was changed from the perspective of
userspace, which is during the respective ioctl.
When a static stub fence started being used by these ioctls, this
behavior changed. Instead, the timestamp returned by SYNC_IOC_FILE_INFO
became the first time anything used the static stub fence, which has no
meaning to userspace.
Signed-off-by: David Stevens <stevensd(a)chromium.org>
---
v2 -> v3:
- reuse the static stub spinlock
v1 -> v2:
- checkpatch style fixes
drivers/dma-buf/dma-fence.c | 27 ++++++++++++++++++++++++++-
drivers/gpu/drm/drm_syncobj.c | 25 +++++++++++++++++++------
include/linux/dma-fence.h | 1 +
3 files changed, 46 insertions(+), 7 deletions(-)
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index d64fc03929be..ce0f5eff575d 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -123,7 +123,9 @@ static const struct dma_fence_ops dma_fence_stub_ops = {
/**
* dma_fence_get_stub - return a signaled fence
*
- * Return a stub fence which is already signaled.
+ * Return a stub fence which is already signaled. The fence's
+ * timestamp corresponds to the first time after boot this
+ * function is called.
*/
struct dma_fence *dma_fence_get_stub(void)
{
@@ -141,6 +143,29 @@ struct dma_fence *dma_fence_get_stub(void)
}
EXPORT_SYMBOL(dma_fence_get_stub);
+/**
+ * dma_fence_allocate_private_stub - return a private, signaled fence
+ *
+ * Return a newly allocated and signaled stub fence.
+ */
+struct dma_fence *dma_fence_allocate_private_stub(void)
+{
+ struct dma_fence *fence;
+
+ fence = kzalloc(sizeof(*fence), GFP_KERNEL);
+ if (fence == NULL)
+ return ERR_PTR(-ENOMEM);
+
+ dma_fence_init(fence,
+ &dma_fence_stub_ops,
+ &dma_fence_stub_lock,
+ 0, 0);
+ dma_fence_signal(fence);
+
+ return fence;
+}
+EXPORT_SYMBOL(dma_fence_allocate_private_stub);
+
/**
* dma_fence_context_alloc - allocate an array of fence contexts
* @num: amount of contexts to allocate
diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index 349146049849..a54aa850d143 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -350,12 +350,16 @@ EXPORT_SYMBOL(drm_syncobj_replace_fence);
*
* Assign a already signaled stub fence to the sync object.
*/
-static void drm_syncobj_assign_null_handle(struct drm_syncobj *syncobj)
+static int drm_syncobj_assign_null_handle(struct drm_syncobj *syncobj)
{
- struct dma_fence *fence = dma_fence_get_stub();
+ struct dma_fence *fence = dma_fence_allocate_private_stub();
+
+ if (IS_ERR(fence))
+ return PTR_ERR(fence);
drm_syncobj_replace_fence(syncobj, fence);
dma_fence_put(fence);
+ return 0;
}
/* 5s default for wait submission */
@@ -469,6 +473,7 @@ EXPORT_SYMBOL(drm_syncobj_free);
int drm_syncobj_create(struct drm_syncobj **out_syncobj, uint32_t flags,
struct dma_fence *fence)
{
+ int ret;
struct drm_syncobj *syncobj;
syncobj = kzalloc(sizeof(struct drm_syncobj), GFP_KERNEL);
@@ -479,8 +484,13 @@ int drm_syncobj_create(struct drm_syncobj **out_syncobj, uint32_t flags,
INIT_LIST_HEAD(&syncobj->cb_list);
spin_lock_init(&syncobj->lock);
- if (flags & DRM_SYNCOBJ_CREATE_SIGNALED)
- drm_syncobj_assign_null_handle(syncobj);
+ if (flags & DRM_SYNCOBJ_CREATE_SIGNALED) {
+ ret = drm_syncobj_assign_null_handle(syncobj);
+ if (ret < 0) {
+ drm_syncobj_put(syncobj);
+ return ret;
+ }
+ }
if (fence)
drm_syncobj_replace_fence(syncobj, fence);
@@ -1322,8 +1332,11 @@ drm_syncobj_signal_ioctl(struct drm_device *dev, void *data,
if (ret < 0)
return ret;
- for (i = 0; i < args->count_handles; i++)
- drm_syncobj_assign_null_handle(syncobjs[i]);
+ for (i = 0; i < args->count_handles; i++) {
+ ret = drm_syncobj_assign_null_handle(syncobjs[i]);
+ if (ret < 0)
+ break;
+ }
drm_syncobj_array_free(syncobjs, args->count_handles);
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index 9f12efaaa93a..6ffb4b2c6371 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -587,6 +587,7 @@ static inline signed long dma_fence_wait(struct dma_fence *fence, bool intr)
}
struct dma_fence *dma_fence_get_stub(void);
+struct dma_fence *dma_fence_allocate_private_stub(void);
u64 dma_fence_context_alloc(unsigned num);
#define DMA_FENCE_TRACE(f, fmt, args...) \
--
2.31.0.208.g409f899ff0-goog
From: David Stevens <stevensd(a)chromium.org>
Allocate a new private stub fence in drm_syncobj_assign_null_handle,
instead of using a static stub fence.
When userspace creates a fence with DRM_SYNCOBJ_CREATE_SIGNALED or when
userspace signals a fence via DRM_IOCTL_SYNCOBJ_SIGNAL, the timestamp
obtained when the fence is exported and queried with SYNC_IOC_FILE_INFO
should match when the fence's status was changed from the perspective of
userspace, which is during the respective ioctl.
When a static stub fence started being used by these ioctls, this
behavior changed. Instead, the timestamp returned by SYNC_IOC_FILE_INFO
became the first time anything used the static stub fence, which has no
meaning to userspace.
Signed-off-by: David Stevens <stevensd(a)chromium.org>
---
v1 -> v2:
- checkpatch style fixes
drivers/dma-buf/dma-fence.c | 33 ++++++++++++++++++++++++++++++++-
drivers/gpu/drm/drm_syncobj.c | 25 +++++++++++++++++++------
include/linux/dma-fence.h | 1 +
3 files changed, 52 insertions(+), 7 deletions(-)
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index d64fc03929be..6081eb962490 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -26,6 +26,11 @@ EXPORT_TRACEPOINT_SYMBOL(dma_fence_signaled);
static DEFINE_SPINLOCK(dma_fence_stub_lock);
static struct dma_fence dma_fence_stub;
+struct drm_fence_private_stub {
+ struct dma_fence base;
+ spinlock_t lock;
+};
+
/*
* fence context counter: each execution context should have its own
* fence context, this allows checking if fences belong to the same
@@ -123,7 +128,9 @@ static const struct dma_fence_ops dma_fence_stub_ops = {
/**
* dma_fence_get_stub - return a signaled fence
*
- * Return a stub fence which is already signaled.
+ * Return a stub fence which is already signaled. The fence's
+ * timestamp corresponds to the first time after boot this
+ * function is called.
*/
struct dma_fence *dma_fence_get_stub(void)
{
@@ -141,6 +148,30 @@ struct dma_fence *dma_fence_get_stub(void)
}
EXPORT_SYMBOL(dma_fence_get_stub);
+/**
+ * dma_fence_allocate_private_stub - return a private, signaled fence
+ *
+ * Return a newly allocated and signaled stub fence.
+ */
+struct dma_fence *dma_fence_allocate_private_stub(void)
+{
+ struct drm_fence_private_stub *fence;
+
+ fence = kzalloc(sizeof(*fence), GFP_KERNEL);
+ if (fence == NULL)
+ return ERR_PTR(-ENOMEM);
+
+ spin_lock_init(&fence->lock);
+ dma_fence_init(&fence->base,
+ &dma_fence_stub_ops,
+ &fence->lock,
+ 0, 0);
+ dma_fence_signal(&fence->base);
+
+ return &fence->base;
+}
+EXPORT_SYMBOL(dma_fence_allocate_private_stub);
+
/**
* dma_fence_context_alloc - allocate an array of fence contexts
* @num: amount of contexts to allocate
diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index 349146049849..a54aa850d143 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -350,12 +350,16 @@ EXPORT_SYMBOL(drm_syncobj_replace_fence);
*
* Assign a already signaled stub fence to the sync object.
*/
-static void drm_syncobj_assign_null_handle(struct drm_syncobj *syncobj)
+static int drm_syncobj_assign_null_handle(struct drm_syncobj *syncobj)
{
- struct dma_fence *fence = dma_fence_get_stub();
+ struct dma_fence *fence = dma_fence_allocate_private_stub();
+
+ if (IS_ERR(fence))
+ return PTR_ERR(fence);
drm_syncobj_replace_fence(syncobj, fence);
dma_fence_put(fence);
+ return 0;
}
/* 5s default for wait submission */
@@ -469,6 +473,7 @@ EXPORT_SYMBOL(drm_syncobj_free);
int drm_syncobj_create(struct drm_syncobj **out_syncobj, uint32_t flags,
struct dma_fence *fence)
{
+ int ret;
struct drm_syncobj *syncobj;
syncobj = kzalloc(sizeof(struct drm_syncobj), GFP_KERNEL);
@@ -479,8 +484,13 @@ int drm_syncobj_create(struct drm_syncobj **out_syncobj, uint32_t flags,
INIT_LIST_HEAD(&syncobj->cb_list);
spin_lock_init(&syncobj->lock);
- if (flags & DRM_SYNCOBJ_CREATE_SIGNALED)
- drm_syncobj_assign_null_handle(syncobj);
+ if (flags & DRM_SYNCOBJ_CREATE_SIGNALED) {
+ ret = drm_syncobj_assign_null_handle(syncobj);
+ if (ret < 0) {
+ drm_syncobj_put(syncobj);
+ return ret;
+ }
+ }
if (fence)
drm_syncobj_replace_fence(syncobj, fence);
@@ -1322,8 +1332,11 @@ drm_syncobj_signal_ioctl(struct drm_device *dev, void *data,
if (ret < 0)
return ret;
- for (i = 0; i < args->count_handles; i++)
- drm_syncobj_assign_null_handle(syncobjs[i]);
+ for (i = 0; i < args->count_handles; i++) {
+ ret = drm_syncobj_assign_null_handle(syncobjs[i]);
+ if (ret < 0)
+ break;
+ }
drm_syncobj_array_free(syncobjs, args->count_handles);
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index 9f12efaaa93a..6ffb4b2c6371 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -587,6 +587,7 @@ static inline signed long dma_fence_wait(struct dma_fence *fence, bool intr)
}
struct dma_fence *dma_fence_get_stub(void);
+struct dma_fence *dma_fence_allocate_private_stub(void);
u64 dma_fence_context_alloc(unsigned num);
#define DMA_FENCE_TRACE(f, fmt, args...) \
--
2.31.0.208.g409f899ff0-goog
Hi Qu,
On 02.04.21 at 05:18, Qu Huang wrote:
> Before dma_resv_lock(bo->base.resv, NULL) in amdgpu_bo_release_notify(),
> the bo->base.resv lock may be held by ttm_mem_evict_first(),
That can't happen since when bo_release_notify is called the BO has no
more references and is therefore deleted.
And we never evict a deleted BO, we just wait for it to become idle.
Regards,
Christian.
> and the VRAM memory will be evicted, with the memory region replaced
> by a GTT memory region. amdgpu_bo_release_notify() will then
> take the bo->base.resv lock, and SDMA will get an invalid
> address in amdgpu_fill_buffer(), resulting in a VM fault
> or memory corruption.
>
> To avoid it, we have to hold bo->base.resv lock first, and
> check whether the mem.mem_type is TTM_PL_VRAM.
>
> Signed-off-by: Qu Huang <jinsdb(a)126.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 8 ++++++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> index 4b29b82..8018574 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> @@ -1300,12 +1300,16 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
> if (bo->base.resv == &bo->base._resv)
> amdgpu_amdkfd_remove_fence_on_pt_pd_bos(abo);
>
> - if (bo->mem.mem_type != TTM_PL_VRAM || !bo->mem.mm_node ||
> - !(abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE))
> + if (!(abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE))
> return;
>
> dma_resv_lock(bo->base.resv, NULL);
>
> + if (bo->mem.mem_type != TTM_PL_VRAM || !bo->mem.mm_node) {
> + dma_resv_unlock(bo->base.resv);
> + return;
> + }
> +
> r = amdgpu_fill_buffer(abo, AMDGPU_POISON, bo->base.resv, &fence);
> if (!WARN_ON(r)) {
> amdgpu_bo_fence(abo, fence, false);
> --
> 1.8.3.1
>