The issue:
Currently in ION, if you allocate uncached memory it is possible that there
are still dirty lines in the cache, and often these dirty lines are the
zeros which were meant to clear out any sensitive kernel data.
This means that if you allocate uncached memory from ION and then
write to that buffer (using the uncached mapping ION provides), the
data you have written could be corrupted at some point in the future if a
dirty line is evicted from the cache.
It also means there is a potential security issue: if an unprivileged
userspace user allocates uncached memory (for example from the system
heap) and then reads from that buffer (through the uncached mapping ION
provides), and some of the zeros which were written to that memory are
still in the cache, that user could read potentially sensitive kernel
data.
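For illustration, here is a minimal userspace sketch of that read-back
path (my own example, not part of this patch; it assumes the post-4.12
staging ION UAPI where ION_IOC_ALLOC returns a dma-buf fd, and the heap
id and header location are device-specific assumptions):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include "ion.h" /* local copy of drivers/staging/android/uapi/ion.h */

int main(void)
{
	struct ion_allocation_data alloc = {
		.len = 4096,
		.heap_id_mask = 1 << 0, /* assumption: system heap is id 0 */
		.flags = 0,             /* no ION_FLAG_CACHED -> uncached */
	};
	int ion_fd = open("/dev/ion", O_RDWR);
	unsigned char *p;

	if (ion_fd < 0 || ioctl(ion_fd, ION_IOC_ALLOC, &alloc) < 0)
		return 1;

	/*
	 * The uncached mapping reads straight from memory, so if the
	 * zeroing writes are still sitting in dirty cache lines this
	 * can observe stale (potentially sensitive) data.
	 */
	p = mmap(NULL, alloc.len, PROT_READ, MAP_SHARED, alloc.fd, 0);
	if (p != MAP_FAILED && p[0] != 0)
		fprintf(stderr, "read non-zero (possibly stale) data\n");

	if (p != MAP_FAILED)
		munmap(p, alloc.len);
	close(alloc.fd);
	close(ion_fd);
	return 0;
}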
An unacceptable fix:
I have included some sample code which should resolve this issue for the
system heap and the cma heap on some architectures; however, this code
would not be acceptable for upstreaming since it uses hacks to clean
the cache.
Similar changes would need to be made to carveout heap and chunk heap.
I would appreciate some feedback on the proper way for ION to clean the
caches for memory it has allocated that is intended for uncached access.
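For discussion, a hedged sketch of one possible direction (my
assumption, not something this patch implements): give each heap access
to a real struct device and let the streaming DMA API do the
maintenance, roughly:

static int ion_buffer_clean_for_uncached(struct device *dev,
					 struct sg_table *table)
{
	/*
	 * Sketch only: staging ION has no per-heap device today.
	 * Mapping and immediately unmapping lets the architecture's
	 * DMA ops perform the clean/invalidate, instead of the
	 * dummy-device hack in the sample code below.
	 */
	if (!dma_map_sg(dev, table->sgl, table->nents, DMA_BIDIRECTIONAL))
		return -ENOMEM;
	dma_unmap_sg(dev, table->sgl, table->nents, DMA_BIDIRECTIONAL);
	return 0;
}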
I realize that it may be tempting, as a solution to this problem, to
simply strip uncached support from ION. I hope that we can instead find a
solution which preserves uncached memory support, as ION uncached memory
is often used (though perhaps not in upstreamed code) in multimedia
use cases where no CPU access is required, in secure heap allocations,
and in cases where CPU access is minimal and uncached memory therefore
performs better.
Signed-off-by: Liam Mark <lmark@codeaurora.org>
---
drivers/staging/android/ion/ion.c | 16 ++++++++++++++++
drivers/staging/android/ion/ion.h | 5 ++++-
drivers/staging/android/ion/ion_cma_heap.c | 3 +++
drivers/staging/android/ion/ion_page_pool.c | 8 ++++++--
drivers/staging/android/ion/ion_system_heap.c | 7 ++++++-
5 files changed, 35 insertions(+), 4 deletions(-)
diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index 57e0d8035b2e..10e967b0a0f4 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -38,6 +38,22 @@ bool ion_buffer_cached(struct ion_buffer *buffer)
return !!(buffer->flags & ION_FLAG_CACHED);
}
+void ion_pages_sync_for_device(struct page *page, size_t size,
+ enum dma_data_direction dir)
+{
+ struct scatterlist sg;
+ struct device dev = {0};
+
+ /* hack, use dummy device */
+ arch_setup_dma_ops(&dev, 0, 0, NULL, false);
+
+ sg_init_table(&sg, 1);
+ sg_set_page(&sg, page, size, 0);
+ /* hack, use phys address for dma address */
+ sg_dma_address(&sg) = page_to_phys(page);
+ dma_sync_sg_for_device(&dev, &sg, 1, dir);
+}
+
/* this function should only be called while dev->lock is held */
static void ion_buffer_add(struct ion_device *dev,
struct ion_buffer *buffer)
diff --git a/drivers/staging/android/ion/ion.h b/drivers/staging/android/ion/ion.h
index a238f23c9116..227b9928d185 100644
--- a/drivers/staging/android/ion/ion.h
+++ b/drivers/staging/android/ion/ion.h
@@ -192,6 +192,9 @@ struct ion_heap {
*/
bool ion_buffer_cached(struct ion_buffer *buffer);
+void ion_pages_sync_for_device(struct page *page, size_t size,
+ enum dma_data_direction dir);
+
/**
* ion_buffer_fault_user_mappings - fault in user mappings of this buffer
* @buffer: buffer
@@ -333,7 +336,7 @@ struct ion_page_pool {
struct ion_page_pool *ion_page_pool_create(gfp_t gfp_mask, unsigned int order,
bool cached);
void ion_page_pool_destroy(struct ion_page_pool *pool);
-struct page *ion_page_pool_alloc(struct ion_page_pool *pool);
+struct page *ion_page_pool_alloc(struct ion_page_pool *pool, bool *from_pool);
void ion_page_pool_free(struct ion_page_pool *pool, struct page *page);
/** ion_page_pool_shrink - shrinks the size of the memory cached in the pool
diff --git a/drivers/staging/android/ion/ion_cma_heap.c b/drivers/staging/android/ion/ion_cma_heap.c
index 49718c96bf9e..82e80621d114 100644
--- a/drivers/staging/android/ion/ion_cma_heap.c
+++ b/drivers/staging/android/ion/ion_cma_heap.c
@@ -59,6 +59,9 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
memset(page_address(pages), 0, size);
}
+ if (!ion_buffer_cached(buffer))
+ ion_pages_sync_for_device(pages, size, DMA_BIDIRECTIONAL);
+
table = kmalloc(sizeof(*table), GFP_KERNEL);
if (!table)
goto err;
diff --git a/drivers/staging/android/ion/ion_page_pool.c b/drivers/staging/android/ion/ion_page_pool.c
index b3017f12835f..169a321778ed 100644
--- a/drivers/staging/android/ion/ion_page_pool.c
+++ b/drivers/staging/android/ion/ion_page_pool.c
@@ -63,7 +63,7 @@ static struct page *ion_page_pool_remove(struct ion_page_pool *pool, bool high)
return page;
}
-struct page *ion_page_pool_alloc(struct ion_page_pool *pool)
+struct page *ion_page_pool_alloc(struct ion_page_pool *pool, bool *from_pool)
{
struct page *page = NULL;
@@ -76,8 +76,12 @@ struct page *ion_page_pool_alloc(struct ion_page_pool *pool)
page = ion_page_pool_remove(pool, false);
mutex_unlock(&pool->mutex);
- if (!page)
+ if (!page) {
page = ion_page_pool_alloc_pages(pool);
+ *from_pool = false;
+ } else {
+ *from_pool = true;
+ }
return page;
}
diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
index bc19cdd30637..3bb4604e032b 100644
--- a/drivers/staging/android/ion/ion_system_heap.c
+++ b/drivers/staging/android/ion/ion_system_heap.c
@@ -57,13 +57,18 @@ static struct page *alloc_buffer_page(struct ion_system_heap *heap,
bool cached = ion_buffer_cached(buffer);
struct ion_page_pool *pool;
struct page *page;
+ bool from_pool;
if (!cached)
pool = heap->uncached_pools[order_to_index(order)];
else
pool = heap->cached_pools[order_to_index(order)];
- page = ion_page_pool_alloc(pool);
+ page = ion_page_pool_alloc(pool, &from_pool);
+
+ if (!from_pool && !ion_buffer_cached(buffer))
+ ion_pages_sync_for_device(page, PAGE_SIZE << order,
+ DMA_BIDIRECTIONAL);
return page;
}
--
1.8.5.2
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
Use this function to set an sg entry to point to device resources mapped
using dma_map_resource(). The page pointer is set to NULL and only the DMA
address, length and offset values are valid.
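A hypothetical caller, to show the intended pairing with
dma_map_resource() (the function and parameter names here are
illustrative, not part of this patch):

static int example_sg_from_resource(struct device *dev,
				    struct scatterlist *sg,
				    phys_addr_t phys, unsigned int len)
{
	dma_addr_t addr = dma_map_resource(dev, phys, len,
					   DMA_BIDIRECTIONAL, 0);

	if (dma_mapping_error(dev, addr))
		return -ENOMEM;

	/* No struct page backs this mapping: only the DMA address,
	 * length and offset of the sg entry are meaningful. */
	sg_set_dma_addr(sg, addr, len, 0);
	return 0;
}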
Signed-off-by: Christian König <christian.koenig@amd.com>
---
include/linux/scatterlist.h | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 22b2131bcdcd..f944ee4e482c 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -149,6 +149,29 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
sg_set_page(sg, virt_to_page(buf), buflen, offset_in_page(buf));
}
+/**
+ * sg_set_dma_addr - Set sg entry to point at specified dma address
+ * @sg: SG entry
+ * @address: DMA address to set
+ * @len: Length of data
+ * @offset: Offset into page
+ *
+ * Description:
+ * Use this function to set an sg entry to point to device resources mapped
+ * using dma_map_resource(). The page pointer is set to NULL and only the DMA
+ * address, length and offset values are valid.
+ *
+ **/
+static inline void sg_set_dma_addr(struct scatterlist *sg, dma_addr_t address,
+ unsigned int len, unsigned int offset)
+{
+ sg_set_page(sg, NULL, len, offset);
+ sg->dma_address = address;
+#ifdef CONFIG_NEED_SG_DMA_LENGTH
+ sg->dma_length = len;
+#endif
+}
+
/*
* Loop over each sg element, following the pointer to a new list if necessary
*/
--
2.14.1
When this was introduced in
commit a519435a96597d8cd96123246fea4ae5a6c90b02
Author: Christian König <christian.koenig@amd.com>
Date: Tue Oct 20 16:34:16 2015 +0200
dma-buf/fence: add fence_wait_any_timeout function v2
there was a restriction added that this only works if the dma-fence
uses the dma_fence_default_wait hook. That works for amdgpu, which is
the only caller. Well, until you share some buffers with e.g. i915;
then you get an -EINVAL.
But there's really no reason for this, because all drivers must
support callbacks. The special ->wait hook is only an optimization:
if the driver needs to create a worker thread for an active callback,
it can avoid doing that when it knows a process context is already
available. So ->wait is just an optimization; just using the logic in
dma_fence_default_wait() should work for all drivers.
Let's remove this restriction.
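As a hypothetical illustration of what this enables (the two fence
pointers are placeholders for fences obtained from different drivers):

static void example_wait_mixed(struct dma_fence *amdgpu_fence,
			       struct dma_fence *i915_fence)
{
	struct dma_fence *fences[] = { amdgpu_fence, i915_fence };
	uint32_t first;
	signed long ret;

	/* Previously this returned -EINVAL as soon as one fence had a
	 * custom ->wait hook; now it works across drivers. */
	ret = dma_fence_wait_any_timeout(fences, 2, true,
					 msecs_to_jiffies(100), &first);
	if (ret > 0)
		pr_info("fence %u signaled first\n", first);
}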
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Gustavo Padovan <gustavo@padovan.org>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Cc: Christian König <christian.koenig@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
---
drivers/dma-buf/dma-fence.c | 5 -----
1 file changed, 5 deletions(-)
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 7b5b40d6b70e..59049375bd19 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -503,11 +503,6 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
for (i = 0; i < count; ++i) {
struct dma_fence *fence = fences[i];
- if (fence->ops->wait != dma_fence_default_wait) {
- ret = -EINVAL;
- goto fence_rm_cb;
- }
-
cb[i].task = current;
if (dma_fence_add_callback(fence, &cb[i].base,
dma_fence_default_wait_cb)) {
--
2.17.0
- Intro section that links to how this is exposed to userspace.
- Lots more hyperlinks.
- Minor clarifications and style polish.
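To make the new overview concrete, a minimal (hypothetical) fence
lifecycle sketch; everything here is illustrative and not part of this
documentation patch:

static const char *example_get_driver_name(struct dma_fence *f)
{
	return "example";
}

static const char *example_get_timeline_name(struct dma_fence *f)
{
	return "example-timeline";
}

static bool example_enable_signaling(struct dma_fence *f)
{
	return true; /* a real driver would enable an interrupt here */
}

static const struct dma_fence_ops example_fence_ops = {
	.get_driver_name = example_get_driver_name,
	.get_timeline_name = example_get_timeline_name,
	.enable_signaling = example_enable_signaling,
	.wait = dma_fence_default_wait,
};

static DEFINE_SPINLOCK(example_fence_lock);

/* context comes from dma_fence_context_alloc(); seqno increases
 * monotonically on that context, giving the ordering the overview
 * describes. */
static struct dma_fence *example_fence_create(u64 context, unsigned int seqno)
{
	struct dma_fence *fence = kzalloc(sizeof(*fence), GFP_KERNEL);

	if (fence)
		dma_fence_init(fence, &example_fence_ops,
			       &example_fence_lock, context, seqno);
	return fence;
}

/* when the operation completes:
 * dma_fence_signal(fence); dma_fence_put(fence); */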
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Gustavo Padovan <gustavo@padovan.org>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
---
Documentation/driver-api/dma-buf.rst | 6 ++
drivers/dma-buf/dma-fence.c | 140 ++++++++++++++++++---------
2 files changed, 102 insertions(+), 44 deletions(-)
diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
index dc384f2f7f34..b541e97c7ab1 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -130,6 +130,12 @@ Reservation Objects
DMA Fences
----------
+.. kernel-doc:: drivers/dma-buf/dma-fence.c
+ :doc: DMA fences overview
+
+DMA Fences Functions Reference
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
.. kernel-doc:: drivers/dma-buf/dma-fence.c
:export:
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 30fcbe415ff4..4e931e1de198 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -38,12 +38,43 @@ EXPORT_TRACEPOINT_SYMBOL(dma_fence_enable_signal);
*/
static atomic64_t dma_fence_context_counter = ATOMIC64_INIT(0);
+/**
+ * DOC: DMA fences overview
+ *
+ * DMA fences, represented by &struct dma_fence, are the kernel internal
+ * synchronization primitive for DMA operations like GPU rendering, video
+ * encoding/decoding, or displaying buffers on a screen.
+ *
+ * A fence is initialized using dma_fence_init() and completed using
+ * dma_fence_signal(). Fences are associated with a context, allocated through
+ * dma_fence_context_alloc(), and all fences on the same context are
+ * fully ordered.
+ *
+ * Since the purpose of fences is to facilitate cross-device and
+ * cross-application synchronization, there are multiple ways to use one:
+ *
+ * - Individual fences can be exposed as a &sync_file, accessed as a file
+ * descriptor from userspace, created by calling sync_file_create(). This is
+ * called explicit fencing, since userspace passes around explicit
+ * synchronization points.
+ *
+ * - Some subsystems also have their own explicit fencing primitives, like
+ * &drm_syncobj. Compared to &sync_file, a &drm_syncobj allows the underlying
+ * fence to be updated.
+ *
+ * - Then there's also implicit fencing, where the synchronization points are
+ * implicitly passed around as part of shared &dma_buf instances. Such
+ * implicit fences are stored in &struct reservation_object through the
+ * &dma_buf.resv pointer.
+ */
+
/**
* dma_fence_context_alloc - allocate an array of fence contexts
- * @num: [in] amount of contexts to allocate
+ * @num: amount of contexts to allocate
*
- * This function will return the first index of the number of fences allocated.
- * The fence context is used for setting fence->context to a unique number.
+ * This function will return the first index of the number of fence contexts
+ * allocated. The fence context is used for setting &dma_fence.context to a
+ * unique number by passing the context to dma_fence_init().
*/
u64 dma_fence_context_alloc(unsigned num)
{
@@ -59,10 +90,14 @@ EXPORT_SYMBOL(dma_fence_context_alloc);
* Signal completion for software callbacks on a fence, this will unblock
* dma_fence_wait() calls and run all the callbacks added with
* dma_fence_add_callback(). Can be called multiple times, but since a fence
- * can only go from unsignaled to signaled state, it will only be effective
- * the first time.
+ * can only go from the unsignaled to the signaled state and not back, it will
+ * only be effective the first time.
*
- * Unlike dma_fence_signal, this function must be called with fence->lock held.
+ * Unlike dma_fence_signal(), this function must be called with &dma_fence.lock
+ * held.
+ *
+ * Returns 0 on success and a negative error value when @fence has been
+ * signalled already.
*/
int dma_fence_signal_locked(struct dma_fence *fence)
{
@@ -102,8 +137,11 @@ EXPORT_SYMBOL(dma_fence_signal_locked);
* Signal completion for software callbacks on a fence, this will unblock
* dma_fence_wait() calls and run all the callbacks added with
* dma_fence_add_callback(). Can be called multiple times, but since a fence
- * can only go from unsignaled to signaled state, it will only be effective
- * the first time.
+ * can only go from the unsignaled to the signaled state and not back, it will
+ * only be effective the first time.
+ *
+ * Returns 0 on success and a negative error value when @fence has been
+ * signalled already.
*/
int dma_fence_signal(struct dma_fence *fence)
{
@@ -136,9 +174,9 @@ EXPORT_SYMBOL(dma_fence_signal);
/**
* dma_fence_wait_timeout - sleep until the fence gets signaled
* or until timeout elapses
- * @fence: [in] the fence to wait on
- * @intr: [in] if true, do an interruptible wait
- * @timeout: [in] timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
+ * @fence: the fence to wait on
+ * @intr: if true, do an interruptible wait
+ * @timeout: timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
*
* Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
* remaining timeout in jiffies on success. Other error values may be
@@ -148,6 +186,8 @@ EXPORT_SYMBOL(dma_fence_signal);
* directly or indirectly (buf-mgr between reservation and committing)
* holds a reference to the fence, otherwise the fence might be
* freed before return, resulting in undefined behavior.
+ *
+ * See also dma_fence_wait() and dma_fence_wait_any_timeout().
*/
signed long
dma_fence_wait_timeout(struct dma_fence *fence, bool intr, signed long timeout)
@@ -167,6 +207,13 @@ dma_fence_wait_timeout(struct dma_fence *fence, bool intr, signed long timeout)
}
EXPORT_SYMBOL(dma_fence_wait_timeout);
+/**
+ * dma_fence_release - default release function for fences
+ * @kref: &dma_fence.refcount
+ *
+ * This is the default release function for &dma_fence. Drivers shouldn't call
+ * this directly, but instead call dma_fence_put().
+ */
void dma_fence_release(struct kref *kref)
{
struct dma_fence *fence =
@@ -199,10 +246,11 @@ EXPORT_SYMBOL(dma_fence_free);
/**
* dma_fence_enable_sw_signaling - enable signaling on fence
- * @fence: [in] the fence to enable
+ * @fence: the fence to enable
*
- * this will request for sw signaling to be enabled, to make the fence
- * complete as soon as possible
+ * This will request for sw signaling to be enabled, to make the fence
+ * complete as soon as possible. This calls &dma_fence_ops.enable_signaling
+ * internally.
*/
void dma_fence_enable_sw_signaling(struct dma_fence *fence)
{
@@ -226,24 +274,24 @@ EXPORT_SYMBOL(dma_fence_enable_sw_signaling);
/**
* dma_fence_add_callback - add a callback to be called when the fence
* is signaled
- * @fence: [in] the fence to wait on
- * @cb: [in] the callback to register
- * @func: [in] the function to call
+ * @fence: the fence to wait on
+ * @cb: the callback to register
+ * @func: the function to call
*
- * cb will be initialized by dma_fence_add_callback, no initialization
+ * @cb will be initialized by dma_fence_add_callback(), no initialization
* by the caller is required. Any number of callbacks can be registered
* to a fence, but a callback can only be registered to one fence at a time.
*
* Note that the callback can be called from an atomic context. If
* fence is already signaled, this function will return -ENOENT (and
- * *not* call the callback)
+ * *not* call the callback).
*
* Add a software callback to the fence. Same restrictions apply to
- * refcount as it does to dma_fence_wait, however the caller doesn't need to
- * keep a refcount to fence afterwards: when software access is enabled,
- * the creator of the fence is required to keep the fence alive until
- * after it signals with dma_fence_signal. The callback itself can be called
- * from irq context.
+ * refcount as it does to dma_fence_wait(), however the caller doesn't need to
+ * keep a refcount to the fence after dma_fence_add_callback() has returned:
+ * when software access is enabled, the creator of the fence is required to keep
+ * the fence alive until after it signals with dma_fence_signal(). The callback
+ * itself can be called from irq context.
*
* Returns 0 in case of success, -ENOENT if the fence is already signaled
* and -EINVAL in case of error.
@@ -292,7 +340,7 @@ EXPORT_SYMBOL(dma_fence_add_callback);
/**
* dma_fence_get_status - returns the status upon completion
- * @fence: [in] the dma_fence to query
+ * @fence: the dma_fence to query
*
* This wraps dma_fence_get_status_locked() to return the error status
* condition on a signaled fence. See dma_fence_get_status_locked() for more
@@ -317,8 +365,8 @@ EXPORT_SYMBOL(dma_fence_get_status);
/**
* dma_fence_remove_callback - remove a callback from the signaling list
- * @fence: [in] the fence to wait on
- * @cb: [in] the callback to remove
+ * @fence: the fence to wait on
+ * @cb: the callback to remove
*
* Remove a previously queued callback from the fence. This function returns
* true if the callback is successfully removed, or false if the fence has
@@ -329,6 +377,9 @@ EXPORT_SYMBOL(dma_fence_get_status);
* doing, since deadlocks and race conditions could occur all too easily. For
* this reason, it should only ever be done on hardware lockup recovery,
* with a reference held to the fence.
+ *
+ * Behaviour is undefined if @cb has not been added to @fence using
+ * dma_fence_add_callback() beforehand.
*/
bool
dma_fence_remove_callback(struct dma_fence *fence, struct dma_fence_cb *cb)
@@ -365,9 +416,9 @@ dma_fence_default_wait_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
/**
* dma_fence_default_wait - default sleep until the fence gets signaled
* or until timeout elapses
- * @fence: [in] the fence to wait on
- * @intr: [in] if true, do an interruptible wait
- * @timeout: [in] timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
+ * @fence: the fence to wait on
+ * @intr: if true, do an interruptible wait
+ * @timeout: timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
*
* Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
* remaining timeout in jiffies on success. If timeout is zero the value one is
@@ -460,12 +511,12 @@ dma_fence_test_signaled_any(struct dma_fence **fences, uint32_t count,
/**
* dma_fence_wait_any_timeout - sleep until any fence gets signaled
* or until timeout elapses
- * @fences: [in] array of fences to wait on
- * @count: [in] number of fences to wait on
- * @intr: [in] if true, do an interruptible wait
- * @timeout: [in] timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
- * @idx: [out] the first signaled fence index, meaningful only on
- * positive return
+ * @fences: array of fences to wait on
+ * @count: number of fences to wait on
+ * @intr: if true, do an interruptible wait
+ * @timeout: timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
+ * @idx: used to store the first signaled fence index, meaningful only on
+ * positive return
*
* Returns -EINVAL on custom fence wait implementation, -ERESTARTSYS if
* interrupted, 0 if the wait timed out, or the remaining timeout in jiffies
@@ -474,6 +525,8 @@ dma_fence_test_signaled_any(struct dma_fence **fences, uint32_t count,
* Synchronous waits for the first fence in the array to be signaled. The
* caller needs to hold a reference to all fences in the array, otherwise a
* fence might be freed before return, resulting in undefined behavior.
+ *
+ * See also dma_fence_wait() and dma_fence_wait_timeout().
*/
signed long
dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
@@ -546,19 +599,18 @@ EXPORT_SYMBOL(dma_fence_wait_any_timeout);
/**
* dma_fence_init - Initialize a custom fence.
- * @fence: [in] the fence to initialize
- * @ops: [in] the dma_fence_ops for operations on this fence
- * @lock: [in] the irqsafe spinlock to use for locking this fence
- * @context: [in] the execution context this fence is run on
- * @seqno: [in] a linear increasing sequence number for this context
+ * @fence: the fence to initialize
+ * @ops: the dma_fence_ops for operations on this fence
+ * @lock: the irqsafe spinlock to use for locking this fence
+ * @context: the execution context this fence is run on
+ * @seqno: a linear increasing sequence number for this context
*
* Initializes an allocated fence, the caller doesn't have to keep its
* refcount after committing with this fence, but it will need to hold a
- * refcount again if dma_fence_ops.enable_signaling gets called. This can
- * be used for other implementing other types of fence.
+ * refcount again if &dma_fence_ops.enable_signaling gets called.
*
* context and seqno are used for easy comparison between fences, allowing
- * to check which fence is later by simply using dma_fence_later.
+ * to check which fence is later by simply using dma_fence_later().
*/
void
dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
--
2.17.0
Almost everyone uses dma_fence_default_wait.
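With this patch an ops table can simply omit the member; a hedged sketch
(the callback helpers are hypothetical):

static const struct dma_fence_ops simple_fence_ops = {
	.get_driver_name = example_get_driver_name,     /* hypothetical */
	.get_timeline_name = example_get_timeline_name, /* hypothetical */
	.enable_signaling = example_enable_signaling,   /* hypothetical */
	/* no .wait member: dma_fence_wait_timeout() now falls back to
	 * dma_fence_default_wait() automatically */
};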
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Gustavo Padovan <gustavo@padovan.org>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
---
drivers/dma-buf/dma-fence-array.c | 1 -
drivers/dma-buf/dma-fence.c | 5 ++++-
drivers/dma-buf/sw_sync.c | 1 -
include/linux/dma-fence.h | 13 ++++++++-----
4 files changed, 12 insertions(+), 8 deletions(-)
diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
index dd1edfb27b61..a8c254497251 100644
--- a/drivers/dma-buf/dma-fence-array.c
+++ b/drivers/dma-buf/dma-fence-array.c
@@ -104,7 +104,6 @@ const struct dma_fence_ops dma_fence_array_ops = {
.get_timeline_name = dma_fence_array_get_timeline_name,
.enable_signaling = dma_fence_array_enable_signaling,
.signaled = dma_fence_array_signaled,
- .wait = dma_fence_default_wait,
.release = dma_fence_array_release,
};
EXPORT_SYMBOL(dma_fence_array_ops);
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 59049375bd19..30fcbe415ff4 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -158,7 +158,10 @@ dma_fence_wait_timeout(struct dma_fence *fence, bool intr, signed long timeout)
return -EINVAL;
trace_dma_fence_wait_start(fence);
- ret = fence->ops->wait(fence, intr, timeout);
+ if (fence->ops->wait)
+ ret = fence->ops->wait(fence, intr, timeout);
+ else
+ ret = dma_fence_default_wait(fence, intr, timeout);
trace_dma_fence_wait_end(fence);
return ret;
}
diff --git a/drivers/dma-buf/sw_sync.c b/drivers/dma-buf/sw_sync.c
index 3d78ca89a605..53c1d6d36a64 100644
--- a/drivers/dma-buf/sw_sync.c
+++ b/drivers/dma-buf/sw_sync.c
@@ -188,7 +188,6 @@ static const struct dma_fence_ops timeline_fence_ops = {
.get_timeline_name = timeline_fence_get_timeline_name,
.enable_signaling = timeline_fence_enable_signaling,
.signaled = timeline_fence_signaled,
- .wait = dma_fence_default_wait,
.release = timeline_fence_release,
.fence_value_str = timeline_fence_value_str,
.timeline_value_str = timeline_fence_timeline_value_str,
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index c730f569621a..d05496ff0d10 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -191,11 +191,14 @@ struct dma_fence_ops {
/**
* @wait:
*
- * Custom wait implementation, or dma_fence_default_wait.
+ * Custom wait implementation, defaults to dma_fence_default_wait() if
+ * not set.
*
- * Must not be NULL, set to dma_fence_default_wait for default implementation.
- * the dma_fence_default_wait implementation should work for any fence, as long
- * as enable_signaling works correctly.
+ * The dma_fence_default_wait implementation should work for any fence, as long
+ * as @enable_signaling works correctly. This hook allows drivers to
+ * have an optimized version for the case where a process context is
+ * already available, e.g. if @enable_signaling for the general case
+ * needs to set up a worker thread.
*
* Must return -ERESTARTSYS if the wait is intr = true and the wait was
* interrupted, and remaining jiffies if fence has signaled, or 0 if wait
@@ -203,7 +206,7 @@ struct dma_fence_ops {
* which should be treated as if the fence is signaled. For example a hardware
* lockup could be reported like that.
*
- * This callback is mandatory.
+ * This callback is optional.
*/
signed long (*wait)(struct dma_fence *fence,
bool intr, signed long timeout);
--
2.17.0
- Switch to inline member docs for dma_fence_ops.
- Mild polish all around.
- hyperlink all the things!
v2: - Remove the various [in] annotations, they seem really uncommon
in kerneldoc and look funny.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
---
include/linux/dma-fence.h | 235 +++++++++++++++++++++++++-------------
1 file changed, 154 insertions(+), 81 deletions(-)
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index 4c008170fe65..9d6f39bf2111 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -94,11 +94,11 @@ typedef void (*dma_fence_func_t)(struct dma_fence *fence,
struct dma_fence_cb *cb);
/**
- * struct dma_fence_cb - callback for dma_fence_add_callback
- * @node: used by dma_fence_add_callback to append this struct to fence::cb_list
+ * struct dma_fence_cb - callback for dma_fence_add_callback()
+ * @node: used by dma_fence_add_callback() to append this struct to fence::cb_list
* @func: dma_fence_func_t to call
*
- * This struct will be initialized by dma_fence_add_callback, additional
+ * This struct will be initialized by dma_fence_add_callback(), additional
* data can be passed along by embedding dma_fence_cb in another struct.
*/
struct dma_fence_cb {
@@ -108,75 +108,142 @@ struct dma_fence_cb {
/**
* struct dma_fence_ops - operations implemented for fence
- * @get_driver_name: returns the driver name.
- * @get_timeline_name: return the name of the context this fence belongs to.
- * @enable_signaling: enable software signaling of fence.
- * @signaled: [optional] peek whether the fence is signaled, can be null.
- * @wait: custom wait implementation, or dma_fence_default_wait.
- * @release: [optional] called on destruction of fence, can be null
- * @fill_driver_data: [optional] callback to fill in free-form debug info
- * Returns amount of bytes filled, or -errno.
- * @fence_value_str: [optional] fills in the value of the fence as a string
- * @timeline_value_str: [optional] fills in the current value of the timeline
- * as a string
*
- * Notes on enable_signaling:
- * For fence implementations that have the capability for hw->hw
- * signaling, they can implement this op to enable the necessary
- * irqs, or insert commands into cmdstream, etc. This is called
- * in the first wait() or add_callback() path to let the fence
- * implementation know that there is another driver waiting on
- * the signal (ie. hw->sw case).
- *
- * This function can be called from atomic context, but not
- * from irq context, so normal spinlocks can be used.
- *
- * A return value of false indicates the fence already passed,
- * or some failure occurred that made it impossible to enable
- * signaling. True indicates successful enabling.
- *
- * fence->error may be set in enable_signaling, but only when false is
- * returned.
- *
- * Calling dma_fence_signal before enable_signaling is called allows
- * for a tiny race window in which enable_signaling is called during,
- * before, or after dma_fence_signal. To fight this, it is recommended
- * that before enable_signaling returns true an extra reference is
- * taken on the fence, to be released when the fence is signaled.
- * This will mean dma_fence_signal will still be called twice, but
- * the second time will be a noop since it was already signaled.
- *
- * Notes on signaled:
- * May set fence->error if returning true.
- *
- * Notes on wait:
- * Must not be NULL, set to dma_fence_default_wait for default implementation.
- * the dma_fence_default_wait implementation should work for any fence, as long
- * as enable_signaling works correctly.
- *
- * Must return -ERESTARTSYS if the wait is intr = true and the wait was
- * interrupted, and remaining jiffies if fence has signaled, or 0 if wait
- * timed out. Can also return other error values on custom implementations,
- * which should be treated as if the fence is signaled. For example a hardware
- * lockup could be reported like that.
- *
- * Notes on release:
- * Can be NULL, this function allows additional commands to run on
- * destruction of the fence. Can be called from irq context.
- * If pointer is set to NULL, kfree will get called instead.
*/
-
struct dma_fence_ops {
+ /**
+ * @get_driver_name:
+ *
+ * Returns the driver name. This is a callback to allow drivers to
+ * compute the name at runtime, without having to store it permanently
+ * for each fence, or build a cache of some sort.
+ *
+ * This callback is mandatory.
+ */
const char * (*get_driver_name)(struct dma_fence *fence);
+
+ /**
+ * @get_timeline_name:
+ *
+ * Return the name of the context this fence belongs to. This is a
+ * callback to allow drivers to compute the name at runtime, without
+ * having to store it permanently for each fence, or build a cache of
+ * some sort.
+ *
+ * This callback is mandatory.
+ */
const char * (*get_timeline_name)(struct dma_fence *fence);
+
+ /**
+ * @enable_signaling:
+ *
+ * Enable software signaling of fence.
+ *
+ * For fence implementations that have the capability for hw->hw
+ * signaling, they can implement this op to enable the necessary
+ * interrupts, or insert commands into cmdstream, etc, to avoid these
+ * costly operations for the common case where only hw->hw
+ * synchronization is required. This is called in the first
+ * dma_fence_wait() or dma_fence_add_callback() path to let the fence
+ * implementation know that there is another driver waiting on the
+ * signal (ie. hw->sw case).
+ *
+ * This function can be called from atomic context, but not
+ * from irq context, so normal spinlocks can be used.
+ *
+ * A return value of false indicates the fence already passed,
+ * or some failure occurred that made it impossible to enable
+ * signaling. True indicates successful enabling.
+ *
+ * &dma_fence.error may be set in enable_signaling, but only when false
+ * is returned.
+ *
+ * Since many implementations can call dma_fence_signal() even before
+ * @enable_signaling has been called, there's a race window where the
+ * dma_fence_signal() might result in the final fence reference being
+ * released and its memory freed. To avoid this, implementations of this
+ * callback should grab their own reference using dma_fence_get(), to be
+ * released when the fence is signalled (through e.g. the interrupt
+ * handler).
+ *
+ * This callback is mandatory.
+ */
bool (*enable_signaling)(struct dma_fence *fence);
+
+ /**
+ * @signaled:
+ *
+ * Peek whether the fence is signaled, as a fastpath optimization for
+ * e.g. dma_fence_wait() or dma_fence_add_callback(). Note that this
+ * callback does not need to make any guarantees beyond that a fence
+ * once indicated as signalled must always return true from this
+ * callback. This callback may return false even if the fence has
+ * completed already, in this case the information hasn't propagated through
+ * the system yet. See also dma_fence_is_signaled().
+ *
+ * May set &dma_fence.error if returning true.
+ *
+ * This callback is optional.
+ */
bool (*signaled)(struct dma_fence *fence);
+
+ /**
+ * @wait:
+ *
+ * Custom wait implementation, or dma_fence_default_wait.
+ *
+ * Must not be NULL, set to dma_fence_default_wait for default implementation.
+ * the dma_fence_default_wait implementation should work for any fence, as long
+ * as enable_signaling works correctly.
+ *
+ * Must return -ERESTARTSYS if the wait is intr = true and the wait was
+ * interrupted, and remaining jiffies if fence has signaled, or 0 if wait
+ * timed out. Can also return other error values on custom implementations,
+ * which should be treated as if the fence is signaled. For example a hardware
+ * lockup could be reported like that.
+ *
+ * This callback is mandatory.
+ */
signed long (*wait)(struct dma_fence *fence,
bool intr, signed long timeout);
+
+ /**
+ * @release:
+ *
+ * Called on destruction of fence to release additional resources.
+ * Can be called from irq context. This callback is optional. If it is
+ * NULL, then dma_fence_free() is instead called as the default
+ * implementation.
+ */
void (*release)(struct dma_fence *fence);
+ /**
+ * @fill_driver_data:
+ *
+ * Callback to fill in free-form debug info. Returns the number of
+ * bytes filled, or a negative error on failure.
+ *
+ * This callback is optional.
+ */
int (*fill_driver_data)(struct dma_fence *fence, void *data, int size);
+
+ /**
+ * @fence_value_str:
+ *
+ * Callback to fill in free-form debug info specific to this fence, like
+ * the sequence number.
+ *
+ * This callback is optional.
+ */
void (*fence_value_str)(struct dma_fence *fence, char *str, int size);
+
+ /**
+ * @timeline_value_str:
+ *
+ * Fills in the current value of the timeline as a string, like the
+ * sequence number. This should match what @fill_driver_data prints for
+ * the most recently signalled fence (assuming no delayed signalling).
+ */
void (*timeline_value_str)(struct dma_fence *fence,
char *str, int size);
};
@@ -189,7 +256,7 @@ void dma_fence_free(struct dma_fence *fence);
/**
* dma_fence_put - decreases refcount of the fence
- * @fence: [in] fence to reduce refcount of
+ * @fence: fence to reduce refcount of
*/
static inline void dma_fence_put(struct dma_fence *fence)
{
@@ -199,7 +266,7 @@ static inline void dma_fence_put(struct dma_fence *fence)
/**
* dma_fence_get - increases refcount of the fence
- * @fence: [in] fence to increase refcount of
+ * @fence: fence to increase refcount of
*
* Returns the same fence, with refcount increased by 1.
*/
@@ -213,7 +280,7 @@ static inline struct dma_fence *dma_fence_get(struct dma_fence *fence)
/**
* dma_fence_get_rcu - get a fence from a reservation_object_list with
* rcu read lock
- * @fence: [in] fence to increase refcount of
+ * @fence: fence to increase refcount of
*
* Function returns NULL if no refcount could be obtained, or the fence.
*/
@@ -227,7 +294,7 @@ static inline struct dma_fence *dma_fence_get_rcu(struct dma_fence *fence)
/**
* dma_fence_get_rcu_safe - acquire a reference to an RCU tracked fence
- * @fencep: [in] pointer to fence to increase refcount of
+ * @fencep: pointer to fence to increase refcount of
*
* Function returns NULL if no refcount could be obtained, or the fence.
* This function handles acquiring a reference to a fence that may be
@@ -289,14 +356,16 @@ void dma_fence_enable_sw_signaling(struct dma_fence *fence);
/**
* dma_fence_is_signaled_locked - Return an indication if the fence
* is signaled yet.
- * @fence: [in] the fence to check
+ * @fence: the fence to check
*
* Returns true if the fence was already signaled, false if not. Since this
* function doesn't enable signaling, it is not guaranteed to ever return
- * true if dma_fence_add_callback, dma_fence_wait or
- * dma_fence_enable_sw_signaling haven't been called before.
+ * true if dma_fence_add_callback(), dma_fence_wait() or
+ * dma_fence_enable_sw_signaling() haven't been called before.
*
- * This function requires fence->lock to be held.
+ * This function requires &dma_fence.lock to be held.
+ *
+ * See also dma_fence_is_signaled().
*/
static inline bool
dma_fence_is_signaled_locked(struct dma_fence *fence)
@@ -314,17 +383,19 @@ dma_fence_is_signaled_locked(struct dma_fence *fence)
/**
* dma_fence_is_signaled - Return an indication if the fence is signaled yet.
- * @fence: [in] the fence to check
+ * @fence: the fence to check
*
* Returns true if the fence was already signaled, false if not. Since this
* function doesn't enable signaling, it is not guaranteed to ever return
- * true if dma_fence_add_callback, dma_fence_wait or
- * dma_fence_enable_sw_signaling haven't been called before.
+ * true if dma_fence_add_callback(), dma_fence_wait() or
+ * dma_fence_enable_sw_signaling() haven't been called before.
*
* It's recommended for seqno fences to call dma_fence_signal when the
* operation is complete, it makes it possible to prevent issues from
* wraparound between time of issue and time of use by checking the return
* value of this function before calling hardware-specific wait instructions.
+ *
+ * See also dma_fence_is_signaled_locked().
*/
static inline bool
dma_fence_is_signaled(struct dma_fence *fence)
@@ -342,8 +413,8 @@ dma_fence_is_signaled(struct dma_fence *fence)
/**
* __dma_fence_is_later - return if f1 is chronologically later than f2
- * @f1: [in] the first fence's seqno
- * @f2: [in] the second fence's seqno from the same context
+ * @f1: the first fence's seqno
+ * @f2: the second fence's seqno from the same context
*
* Returns true if f1 is chronologically later than f2. Both fences must be
* from the same context, since a seqno is not common across contexts.
@@ -355,8 +426,8 @@ static inline bool __dma_fence_is_later(u32 f1, u32 f2)
/**
* dma_fence_is_later - return if f1 is chronologically later than f2
- * @f1: [in] the first fence from the same context
- * @f2: [in] the second fence from the same context
+ * @f1: the first fence from the same context
+ * @f2: the second fence from the same context
*
* Returns true if f1 is chronologically later than f2. Both fences must be
* from the same context, since a seqno is not re-used across contexts.
@@ -372,8 +443,8 @@ static inline bool dma_fence_is_later(struct dma_fence *f1,
/**
* dma_fence_later - return the chronologically later fence
- * @f1: [in] the first fence from the same context
- * @f2: [in] the second fence from the same context
+ * @f1: the first fence from the same context
+ * @f2: the second fence from the same context
*
* Returns NULL if both fences are signaled, otherwise the fence that would be
* signaled last. Both fences must be from the same context, since a seqno is
@@ -398,7 +469,7 @@ static inline struct dma_fence *dma_fence_later(struct dma_fence *f1,
/**
* dma_fence_get_status_locked - returns the status upon completion
- * @fence: [in] the dma_fence to query
+ * @fence: the dma_fence to query
*
* Drivers can supply an optional error status condition before they signal
* the fence (to indicate whether the fence was completed due to an error
@@ -422,8 +493,8 @@ int dma_fence_get_status(struct dma_fence *fence);
/**
* dma_fence_set_error - flag an error condition on the fence
- * @fence: [in] the dma_fence
- * @error: [in] the error to store
+ * @fence: the dma_fence
+ * @error: the error to store
*
* Drivers can supply an optional error status condition before they signal
* the fence, to indicate that the fence was completed due to an error
@@ -449,8 +520,8 @@ signed long dma_fence_wait_any_timeout(struct dma_fence **fences,
/**
* dma_fence_wait - sleep until the fence gets signaled
- * @fence: [in] the fence to wait on
- * @intr: [in] if true, do an interruptible wait
+ * @fence: the fence to wait on
+ * @intr: if true, do an interruptible wait
*
* This function will return -ERESTARTSYS if interrupted by a signal,
* or 0 if the fence was signaled. Other error values may be
@@ -459,6 +530,8 @@ signed long dma_fence_wait_any_timeout(struct dma_fence **fences,
* Performs a synchronous wait on this fence. It is assumed the caller
* directly or indirectly holds a reference to the fence, otherwise the
* fence might be freed before return, resulting in undefined behavior.
+ *
+ * See also dma_fence_wait_timeout() and dma_fence_wait_any_timeout().
*/
static inline signed long dma_fence_wait(struct dma_fence *fence, bool intr)
{
--
2.17.0
(changed subject and decoupled from the udmabuf thread)
On Wed, Apr 11, 2018 at 08:59:32AM +0300, Oleksandr Andrushchenko wrote:
> On 04/10/2018 08:26 PM, Dongwon Kim wrote:
> >On Tue, Apr 10, 2018 at 09:37:53AM +0300, Oleksandr Andrushchenko wrote:
> >>On 04/06/2018 09:57 PM, Dongwon Kim wrote:
> >>>On Fri, Apr 06, 2018 at 03:36:03PM +0300, Oleksandr Andrushchenko wrote:
> >>>>On 04/06/2018 02:57 PM, Gerd Hoffmann wrote:
> >>>>> Hi,
> >>>>>
> >>>>>>>I fail to see any common ground for xen-zcopy and udmabuf ...
> >>>>>>Does the above mean you can assume that xen-zcopy and udmabuf
> >>>>>>can co-exist as two different solutions?
> >>>>>Well, udmabuf route isn't fully clear yet, but yes.
> >>>>>
> >>>>>See also gvt (intel vgpu), where the hypervisor interface is abstracted
> >>>>>away into a separate kernel modules even though most of the actual vgpu
> >>>>>emulation code is common.
> >>>>Thank you for your input, I'm just trying to figure out
> >>>>which of the three z-copy solutions intersect and how much
> >>>>>>And what about hyper-dmabuf?
> >>>xen z-copy solution is pretty similar fundamentally to hyper_dmabuf
> >>>in terms of these core sharing features:
> >>>
> >>>1. the sharing process - import prime/dmabuf from the producer -> extract
> >>>underlying pages and get those shared -> return references for shared pages
> >Another thing is danvet was kind of against the idea of importing an existing
> >dmabuf/prime buffer and forwarding it to the other domain due to synchronization
> >issues. He proposed to make hyper_dmabuf work only as an exporter so that it
> >can have full control over the buffer. I think we need to talk about this
> >further as well.
> Yes, I saw this. But this limits the use-cases so much.
I agree. Our current approach is a lot more flexible. You can find very
similar feedback in my reply to those review messages. However, I also
understand Daniel's concern. I believe we need more discussion
regarding this matter.
> For instance, running Android as a Guest (which uses ION to allocate
> buffers) means that eventually the HW composer will import a dma-buf into
> the DRM driver. Then, in case of xen-front for example, it needs to be
> shared with the backend (Host side). Of course, we can change user-space
> to make xen-front allocate the buffers (make it exporter), but what we try
> to avoid is changing user-space, which in the normal world would have remained
> unchanged otherwise.
> So, I do think we have to support this use-case and just have to understand
> the complexity.
>
> >
> >danvet, can you comment on this topic?
> >
> >>>2. the page sharing mechanism - it uses Xen-grant-table.
> >>>
> >>>And to give you a quick summary of differences as far as I understand
> >>>between two implementations (please correct me if I am wrong, Oleksandr.)
> >>>
> >>>1. xen-zcopy is DRM specific - can import only DRM prime buffer
> >>>while hyper_dmabuf can export any dmabuf regardless of originator
> >>Well, this is true. And at the same time this is just a matter
> >>of extending the API: xen-zcopy is a helper driver designed for
> >>xen-front/back use-case, so this is why it only has DRM PRIME API
> >>>2. xen-zcopy doesn't seem to have dma-buf synchronization between two VMs
> >>>while (as danvet called it as remote dmabuf api sharing) hyper_dmabuf sends
> >>>out synchronization message to the exporting VM for synchronization.
> >>This is true. Again, this is because of the use-cases it covers.
> >>But having synchronization for a generic solution seems to be a good idea.
> >Yeah, understood xen-zcopy works ok with your use case. But I am just curious
> >if it is ok not to have any inter-domain synchronization in this sharing model.
> The synchronization is done with displif protocol [1]
> >The buffer being shared is technically a dma-buf and the originator needs to be able
> >to keep track of it.
> As I am working in DRM terms the tracking is done by the DRM core
> for me for free. (This might be one of the reasons Daniel sees DRM
> based implementation fit very good from code-reuse POV).
yeah, but once you have a DRM object (whether it's a dmabuf or not) on a remote
domain, it is a totally new object, out of sync (correct me if I am wrong)
with the original DRM prime buffer, isn't it? How could these two different
objects, based on the same pages, be synchronized?
> >
> >>>3. 1-level references - when using grant-table for sharing pages, there will
> >>>be the same # of refs (each 8 bytes)
> >>To be precise, grant ref is 4 bytes
> >You are right. Thanks for correction.;)
> >
> >>>as # of shared pages, which is passed to
> >>>the userspace to be shared with importing VM in case of xen-zcopy.
> >>The reason for that is that xen-zcopy is a helper driver, e.g.
> >>the grant references come from the display backend [1], which implements
> >>Xen display protocol [2]. So, effectively the backend extracts references
> >>from frontend's requests and passes those to xen-zcopy as an array
> >>of refs.
> >>> Compared
> >>>to this, hyper_dmabuf does multi-level addressing to generate only one
> >>>reference id that represents all shared pages.
> >>In the protocol [2] only one reference to the gref directory is passed
> >>between VMs
> >>(and the gref directory is a singly-linked list of shared pages containing
> >>all
> >>of the grefs of the buffer).
> >ok, good to know. I will look into its implementation in more detail, but is
> >this gref directory (chained grefs) something that can be used for any general
> >memory sharing use case, or is it just for xen-display (in the current code base)?
> Not to mislead you: one grant ref is passed via displif protocol,
> but the page it's referencing contains the rest of the grant refs.
>
I checked displif.h. I like the concept of chaining 2nd level grefs.
As you should have already realized, our multi-level addressing is almost
identical to the gref directory, except that we defined another level on
top to address multiple 2nd-level grefs instead of creating a linked list.
And I see there would be a memory-saving advantage in your method.
Now I wonder why this should remain just a displif feature; I think
we could expand it to any type of large-buffer sharing use case in Xen
(possibly as an extension to the grant-table driver?)
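For reference, the directory layout being discussed looks roughly like
this (paraphrased from displif.h; treat it as a sketch rather than a
verbatim copy):

struct xendispl_page_directory {
	grant_ref_t gref_dir_next_page; /* gref of the next directory page;
					 * the chain ends when this is 0 */
	grant_ref_t gref[1];            /* variable-length array holding the
					 * grefs of the shared buffer pages */
};

So a single gref handed over via the protocol is enough for the other
end to walk the chain and recover every shared page.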
> As to whether this can be used for any memory: yes. It is the same for
> sndif and displif Xen protocols, but defined twice as strictly speaking
> sndif and displif are two separate protocols.
>
> While reviewing your RFC v2, one of the comments I had [2] was whether we
> could start by defining such a generic protocol for hyper-dmabuf.
> It can be a header file, which not only has the description part
> (which then becomes part of a Documentation/...rst file), but also defines
> all the required constants for requests and responses, message formats,
> state diagrams etc., all in one place. Of course this protocol must not be
> Xen specific, but be OS/hypervisor agnostic.
> Having that will trigger a new round of discussion, so we have it all
> designed
> and discussed before we start implementing.
>
> Besides the protocol, we have to design the UAPI part as well and make sure
> hyper-dmabuf is not only accessible from user-space, but that there will be
> a number of kernel-space users as well.
> >
> >>>4. inter VM messaging (hyper_dmabuf only) - hyper_dmabuf has inter-vm msg
> >>>communication defined for dmabuf synchronization and private data (meta
> >>>info that Matt Roper mentioned) exchange.
> >>This is true, xen-zcopy has no means for inter VM sync and meta-data,
> >>simply because it doesn't have any code for inter VM exchange in it,
> >>e.g. the inter VM protocol is handled by the backend [1].
> >>>5. driver-to-driver notification (hyper_dmabuf only) - importing VM gets
> >>>notified when a new dmabuf is exported from another VM - a uevent can optionally be
> >>>generated when this happens.
> >>>
> >>>6. structure - hyper_dmabuf is targeting a generic solution for
> >>>inter-domain dmabuf sharing for most hypervisors, which is why it has two
> >>>layers as mattrope mentioned, front-end that contains standard API and backend
> >>>that is specific to hypervisor.
> >>Again, xen-zcopy is decoupled from inter VM communication
> >>>>>No idea, didn't look at it in detail.
> >>>>>
> >>>>>Looks pretty complex from a distant view. Maybe because it tries to
> >>>>>build a communication framework using dma-bufs instead of a simple
> >>>>>dma-buf passing mechanism.
> >>>we started with simple dma-buf sharing but realized there are many
> >>>things we need to consider in real use-cases, so we added communication,
> >>>notification and dma-buf synchronization, then re-structured it into
> >>>front-end and back-end (this made things more complicated) since Xen
> >>>was not our only target. Also, we thought passing the reference for the
> >>>buffer (hyper_dmabuf_id) is not secure, so we added the uevent mechanism later.
> >>>
> >>>>Yes, I am looking at it now, trying to figure out the full story
> >>>>and its implementation. BTW, Intel guys were about to share some
> >>>>test application for hyper-dmabuf, maybe I have missed one.
> >>>>It could probably better explain the use-cases and the complexity
> >>>>they have in hyper-dmabuf.
> >>>One example is actually in github. If you want take a look at it, please
> >>>visit:
> >>>
> >>>https://github.com/downor/linux_hyper_dmabuf_test/tree/xen/simple_export
> >>Thank you, I'll have a look
> >>>>>Like xen-zcopy it seems to depend on the idea that, because the hypervisor
> >>>>>manages all memory, it is easy for guests to share pages with the help of
> >>>>>the hypervisor.
> >>>>So, for xen-zcopy we were not trying to make it generic,
> >>>>it just solves display (dumb) zero-copying use-cases for Xen.
> >>>>We implemented it as a DRM helper driver because we can't see any
> >>>>other use-cases as of now.
> >>>>For example, we also have Xen para-virtualized sound driver, but
> >>>>its buffer memory usage is not comparable to what display wants
> >>>>and it works somewhat differently (e.g. there is no "frame done"
> >>>>event, so one can't tell when the sound buffer can be "flipped").
> >>>>At the same time, we do not use virtio-gpu, so this could probably
> >>>>be one more candidate for shared dma-bufs some day.
> >>>>> Which simply isn't the case on kvm.
> >>>>>
> >>>>>hyper-dmabuf and xen-zcopy could maybe share code, or hyper-dmabuf build
> >>>>>on top of xen-zcopy.
> >>>>Hm, I can imagine that: xen-zcopy could be a library code for hyper-dmabuf
> >>>>in terms of implementing all that page sharing fun in multiple directions,
> >>>>e.g. Host->Guest, Guest->Host, Guest<->Guest.
> >>>>But I'll let Matt and Dongwon to comment on that.
> >>>I think we can definitely collaborate. Especially, maybe we are using some
> >>>outdated sharing mechanism/grant-table mechanism in our Xen backend (thanks
> >>>for bringing that up, Oleksandr). However, the question is, once we collaborate
> >>>somehow, can xen-zcopy's use-case use the standard API that hyper_dmabuf
> >>>provides? I don't think we need different IOCTLs that do the same in the final
> >>>solution.
> >>>
> >>If you think of xen-zcopy as a library (which implements Xen
> >>grant references mangling) and DRM PRIME wrapper on top of that
> >>library, we can probably define proper API for that library,
> >>so both xen-zcopy and hyper-dmabuf can use it. What is more, I am
> >>about to start upstreaming Xen para-virtualized sound device driver soon,
> >>which also uses similar code and gref passing mechanism [3].
> >>(Actually, I was about to upstream drm/xen-front, drm/xen-zcopy and
> >>snd/xen-front and then propose a Xen helper library for sharing big buffers,
> >>so common code of the above drivers can use the same code w/o code
> >>duplication)
> >I think it is possible to use your functions for memory sharing part in
> >hyper_dmabuf's backend (this 'backend' means the layer that does page sharing
> >and inter-vm communication in a Xen-specific way), so why don't we work on
> >"Xen helper library for sharing big buffers" first while we continue our
> >discussion on the common API layer that can cover any dmabuf sharing cases.
> >
> Well, I would love for us to reuse the code that I have, but I also
> understand that it was limited by my use-cases. So, I do not
> insist we have to ;)
> If we start designing and discussing hyper-dmabuf protocol we of course
> can work on this helper library in parallel.
> >>Thank you,
> >>Oleksandr
> >>
> >>P.S. All, is it a good idea to move this out of udmabuf thread into a
> >>dedicated one?
> >Either way is fine with me.
> So, if you can start designing the protocol we may have a dedicated mail
> thread for that. I will try to help with the protocol as much as I can
Sure, thanks. We can talk about it. Just FYI, I have prepared an application
note that contains the definition of the hyper_dmabuf messages included in the RFC v2
patch. That would be a great starting point. It will be great if you can
review it.
>
> >>>>>cheers,
> >>>>> Gerd
> >>>>>
> >>>>Thank you,
> >>>>Oleksandr
> >>>>
> >>>>P.S. Sorry for making your original mail thread to discuss things much
> >>>>broader than your RFC...
> >>>>
> >>[1] https://github.com/xen-troops/displ_be
> >>[2] https://elixir.bootlin.com/linux/v4.16-rc7/source/include/xen/interface/io/…
> >>[3] https://elixir.bootlin.com/linux/v4.16-rc7/source/include/xen/interface/io/…
> >>
> [1] https://elixir.bootlin.com/linux/v4.16-rc7/source/include/xen/interface/io/…
> [2]
> https://lists.xenproject.org/archives/html/xen-devel/2018-04/msg00685.html