Documentation says that code requiring dma-buf should add it to select, so inline fallbacks are not going to be used. A link error will make it obvious what went wrong, instead of silently doing nothing at runtime.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 include/linux/dma-buf.h | 99 -----------------------------------------------
 1 file changed, 99 deletions(-)
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index eb48f38..bd2e52c 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -156,7 +156,6 @@ static inline void get_dma_buf(struct dma_buf *dmabuf) get_file(dmabuf->file); }
-#ifdef CONFIG_DMA_SHARED_BUFFER struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf, struct device *dev); void dma_buf_detach(struct dma_buf *dmabuf, @@ -184,103 +183,5 @@ int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *, unsigned long); void *dma_buf_vmap(struct dma_buf *); void dma_buf_vunmap(struct dma_buf *, void *vaddr); -#else - -static inline struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf, - struct device *dev) -{ - return ERR_PTR(-ENODEV); -} - -static inline void dma_buf_detach(struct dma_buf *dmabuf, - struct dma_buf_attachment *dmabuf_attach) -{ - return; -} - -static inline struct dma_buf *dma_buf_export(void *priv, - const struct dma_buf_ops *ops, - size_t size, int flags) -{ - return ERR_PTR(-ENODEV); -} - -static inline int dma_buf_fd(struct dma_buf *dmabuf, int flags) -{ - return -ENODEV; -} - -static inline struct dma_buf *dma_buf_get(int fd) -{ - return ERR_PTR(-ENODEV); -} - -static inline void dma_buf_put(struct dma_buf *dmabuf) -{ - return; -} - -static inline struct sg_table *dma_buf_map_attachment( - struct dma_buf_attachment *attach, enum dma_data_direction write) -{ - return ERR_PTR(-ENODEV); -} - -static inline void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, - struct sg_table *sg, enum dma_data_direction dir) -{ - return; -} - -static inline int dma_buf_begin_cpu_access(struct dma_buf *dmabuf, - size_t start, size_t len, - enum dma_data_direction dir) -{ - return -ENODEV; -} - -static inline void dma_buf_end_cpu_access(struct dma_buf *dmabuf, - size_t start, size_t len, - enum dma_data_direction dir) -{ -} - -static inline void *dma_buf_kmap_atomic(struct dma_buf *dmabuf, - unsigned long pnum) -{ - return NULL; -} - -static inline void dma_buf_kunmap_atomic(struct dma_buf *dmabuf, - unsigned long pnum, void *vaddr) -{ -} - -static inline void *dma_buf_kmap(struct dma_buf *dmabuf, unsigned long pnum) -{ - return NULL; -} - -static inline void dma_buf_kunmap(struct dma_buf *dmabuf, - 
unsigned long pnum, void *vaddr) -{ -} - -static inline int dma_buf_mmap(struct dma_buf *dmabuf, - struct vm_area_struct *vma, - unsigned long pgoff) -{ - return -ENODEV; -} - -static inline void *dma_buf_vmap(struct dma_buf *dmabuf) -{ - return NULL; -} - -static inline void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr) -{ -} -#endif /* CONFIG_DMA_SHARED_BUFFER */
#endif /* __DMA_BUF_H__ */
A dma-fence can be attached to a buffer which is being filled or consumed by hw, to allow userspace to pass the buffer to another device without waiting. For example, userspace can call the page_flip ioctl to display the next frame of graphics after kicking the GPU, while the GPU is still rendering. The display device sharing the buffer with the GPU would attach a callback to get notified when the GPU's rendering-complete IRQ fires, to update the scan-out address of the display, without having to wake up userspace.
A dma-fence is a transient, one-shot deal. It is allocated and attached to one or more dma-bufs. When the one that attached it is done with the pending operation, it can signal the fence:
+ dma_fence_signal()
The dma-buf-mgr handles tracking, and waiting on, the fences associated with a dma-buf.
TODO maybe need some helper fxn for simple devices, like a display-only drm/kms device which simply wants to wait for the exclusive fence to be signaled, and then attach a non-exclusive fence while scanout is in progress.
The one pending on the fence can add an async callback:
+ dma_fence_add_callback()
The callback can optionally be cancelled with remove_wait_queue()
Or wait synchronously (optionally with timeout or interruptible):
+ dma_fence_wait()
A default software-only implementation is provided, which can be used by drivers attaching a fence to a buffer when they have no other means for hw sync. But a memory backed fence is also envisioned, because it is common that GPU's can write to, or poll on some memory location for synchronization. For example:
fence = dma_buf_get_fence(dmabuf);
if (fence->ops == &bikeshed_fence_ops) {
	struct dma_buf *fence_buf;
	dma_bikeshed_fence_get_buf(fence, &fence_buf, &offset);
	... tell the hw the memory location to wait on ...
} else {
	/* fall back to sw sync */
	dma_fence_add_callback(fence, my_cb);
}
On SoC platforms, if some other hw mechanism is provided for synchronizing between IP blocks, it could be supported as an alternate implementation with its own fence ops in a similar way.
To facilitate other non-sw implementations, the enable_signaling callback can be used to keep track of whether a device not supporting hw sync is waiting on the fence; in that case the implementation should arrange to call dma_fence_signal() at some point after the condition has changed, to notify other devices waiting on the fence. If there are no sw waiters, this can be skipped to avoid waking the CPU unnecessarily. The handler of the enable_signaling op should take a refcount until the fence is signaled, then release its ref.
The intention is to provide a userspace interface (presumably via eventfd) later, to be used in conjunction with dma-buf's mmap support for sw access to buffers (or for userspace apps that would prefer to do their own synchronization).
v1: Original v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided that dma-fence didn't need to care about the sw->hw signaling path (it can be handled same as sw->sw case), and therefore the fence->ops can be simplified and more handled in the core. So remove the signal, add_callback, cancel_callback, and wait ops, and replace with a simple enable_signaling() op which can be used to inform a fence supporting hw->hw signaling that one or more devices which do not support hw signaling are waiting (and therefore it should enable an irq or do whatever is necessary in order that the CPU is notified when the fence is passed). v3: Fix locking fail in attach_fence() and get_fence() v4: Remove tie-in w/ dma-buf.. after discussion w/ danvet and mlankorst we decided that we need to be able to attach one fence to N dma-buf's, so using the list_head in dma-fence struct would be problematic. v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager. v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some comments about checking if fence fired or not. This is broken by design. waitqueue_active during destruction is now fatal, since the signaller should be holding a reference in enable_signalling until it signalled the fence. Pass the original dma_fence_cb along, and call __remove_wait in the dma_fence_callback handler, so that no cleanup needs to be performed. v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if fence wasn't signaled yet, for example for hardware fences that may choose to signal blindly. v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to header and fixed include mess. dma-fence.h now includes dma-buf.h All members are now initialized, so kmalloc can be used for allocating a dma-fence. More documentation added.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com --- Documentation/DocBook/device-drivers.tmpl | 2 drivers/base/Makefile | 2 drivers/base/dma-fence.c | 268 +++++++++++++++++++++++++++++ include/linux/dma-fence.h | 124 +++++++++++++ 4 files changed, 395 insertions(+), 1 deletion(-) create mode 100644 drivers/base/dma-fence.c create mode 100644 include/linux/dma-fence.h
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl index 7514dbf..36252ac 100644 --- a/Documentation/DocBook/device-drivers.tmpl +++ b/Documentation/DocBook/device-drivers.tmpl @@ -126,6 +126,8 @@ X!Edrivers/base/interface.c </sect1> <sect1><title>Device Drivers DMA Management</title> !Edrivers/base/dma-buf.c +!Edrivers/base/dma-fence.c +!Iinclude/linux/dma-fence.h !Edrivers/base/dma-coherent.c !Edrivers/base/dma-mapping.c </sect1> diff --git a/drivers/base/Makefile b/drivers/base/Makefile index 5aa2d70..6e9f217 100644 --- a/drivers/base/Makefile +++ b/drivers/base/Makefile @@ -10,7 +10,7 @@ obj-$(CONFIG_CMA) += dma-contiguous.o obj-y += power/ obj-$(CONFIG_HAS_DMA) += dma-mapping.o obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o -obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o +obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o dma-fence.o obj-$(CONFIG_ISA) += isa.o obj-$(CONFIG_FW_LOADER) += firmware_class.o obj-$(CONFIG_NUMA) += node.o diff --git a/drivers/base/dma-fence.c b/drivers/base/dma-fence.c new file mode 100644 index 0000000..93448e4 --- /dev/null +++ b/drivers/base/dma-fence.c @@ -0,0 +1,268 @@ +/* + * Fence mechanism for dma-buf to allow for asynchronous dma access + * + * Copyright (C) 2012 Texas Instruments + * Author: Rob Clark rob.clark@linaro.org + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. 
+ */ + +#include <linux/slab.h> +#include <linux/sched.h> +#include <linux/export.h> +#include <linux/dma-fence.h> + +/** + * dma_fence_signal - signal completion of a fence + * @fence: the fence to signal + * + * All registered callbacks will be called directly (synchronously) and + * all blocked waiters will be awoken. This should always be called on + * software only fences, or alternatively be called after + * dma_fence_ops::enable_signaling is called. + */ +int dma_fence_signal(struct dma_fence *fence) +{ + unsigned long flags; + int ret = -EINVAL; + + if (WARN_ON(!fence)) + return -EINVAL; + + spin_lock_irqsave(&fence->event_queue.lock, flags); + if (!fence->signaled) { + fence->signaled = true; + __wake_up_locked_key(&fence->event_queue, TASK_NORMAL, + &fence->event_queue); + ret = 0; + } else + WARN(1, "Already signaled"); + spin_unlock_irqrestore(&fence->event_queue.lock, flags); + + return ret; +} +EXPORT_SYMBOL_GPL(dma_fence_signal); + +static void release_fence(struct kref *kref) +{ + struct dma_fence *fence = + container_of(kref, struct dma_fence, refcount); + + BUG_ON(waitqueue_active(&fence->event_queue)); + + if (fence->ops->release) + fence->ops->release(fence); + + kfree(fence); +} + +/** + * dma_fence_put - decreases refcount of the fence + * @fence: [in] fence to reduce refcount of + */ +void dma_fence_put(struct dma_fence *fence) +{ + if (WARN_ON(!fence)) + return; + kref_put(&fence->refcount, release_fence); +} +EXPORT_SYMBOL_GPL(dma_fence_put); + +/** + * dma_fence_get - increases refcount of the fence + * @fence: [in] fence to increase refcount of + */ +void dma_fence_get(struct dma_fence *fence) +{ + if (WARN_ON(!fence)) + return; + kref_get(&fence->refcount); +} +EXPORT_SYMBOL_GPL(dma_fence_get); + +static int check_signaling(struct dma_fence *fence) +{ + bool enable_signaling = false, signaled; + unsigned long flags; + + spin_lock_irqsave(&fence->event_queue.lock, flags); + signaled = fence->signaled; + if (!signaled &&
!fence->needs_sw_signal) + enable_signaling = fence->needs_sw_signal = true; + spin_unlock_irqrestore(&fence->event_queue.lock, flags); + + if (enable_signaling) { + int ret; + + /* At this point, if enable_signaling returns any error + * a wakeup has to be performed regardless. + * -ENOENT signals fence was already signaled. Any other error + * indicates a catastrophic hardware error. + * + * If any hardware error occurs, nothing can be done against + * it, so it's treated like the fence was already signaled. + * No synchronization can be performed, so we have to assume + * the fence was already signaled. + */ + ret = fence->ops->enable_signaling(fence); + if (ret) { + signaled = true; + dma_fence_signal(fence); + } + } + + if (!signaled) + return 0; + else + return -ENOENT; +} + +static int +__dma_fence_wake_func(wait_queue_t *wait, unsigned mode, int flags, void *key) +{ + struct dma_fence_cb *cb = + container_of(wait, struct dma_fence_cb, base); + + __remove_wait_queue(key, wait); + return cb->func(cb, wait->private); +} + +/** + * dma_fence_add_callback - add a callback to be called when the fence + * is signaled + * + * @fence: [in] the fence to wait on + * @cb: [in] the callback to register + * @func: [in] the function to call + * @priv: [in] the argument to pass to function + * + * cb will be initialized by dma_fence_add_callback, no initialization + * by the caller is required. Any number of callbacks can be registered + * to a fence, but a callback can only be registered to one fence at a time. + * + * Note that the callback can be called from an atomic context.
If + * fence is already signaled, this function will return -ENOENT (and + * *not* call the callback) + */ +int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb, + dma_fence_func_t func, void *priv) +{ + unsigned long flags; + int ret; + + if (WARN_ON(!fence || !func)) + return -EINVAL; + + ret = check_signaling(fence); + + spin_lock_irqsave(&fence->event_queue.lock, flags); + if (!ret && fence->signaled) + ret = -ENOENT; + + if (!ret) { + cb->base.flags = 0; + cb->base.func = __dma_fence_wake_func; + cb->base.private = priv; + cb->fence = fence; + cb->func = func; + __add_wait_queue(&fence->event_queue, &cb->base); + } + spin_unlock_irqrestore(&fence->event_queue.lock, flags); + + return ret; +} +EXPORT_SYMBOL_GPL(dma_fence_add_callback); + +/** + * dma_fence_wait - wait for a fence to be signaled + * + * @fence: [in] The fence to wait on + * @intr: [in] if true, do an interruptible wait + * @timeout: [in] absolute time for timeout, in jiffies. + * + * Returns 0 on success, -EBUSY if a timeout occurred, + * -ERESTARTSYS if the wait was interrupted by a signal.
+ */ +int dma_fence_wait(struct dma_fence *fence, bool intr, unsigned long timeout) +{ + unsigned long cur; + int ret; + + if (WARN_ON(!fence)) + return -EINVAL; + + cur = jiffies; + if (time_after_eq(cur, timeout)) + return -EBUSY; + + timeout -= cur; + + ret = check_signaling(fence); + if (ret == -ENOENT) + return 0; + else if (ret) + return ret; + + if (intr) + ret = wait_event_interruptible_timeout(fence->event_queue, + fence->signaled, + timeout); + else + ret = wait_event_timeout(fence->event_queue, + fence->signaled, timeout); + + if (ret > 0) + return 0; + else if (!ret) + return -EBUSY; + else + return ret; +} +EXPORT_SYMBOL_GPL(dma_fence_wait); + +static int sw_enable_signaling(struct dma_fence *fence) +{ + /* dma_fence_create sets needs_sw_signal, + * so this should never be called + */ + WARN_ON_ONCE(1); + return 0; +} + +static const struct dma_fence_ops sw_fence_ops = { + .enable_signaling = sw_enable_signaling, +}; + +/** + * dma_fence_create - create a simple sw-only fence + * @priv: [in] the value to use for the priv member + * + * This fence only supports signaling from/to CPU. Other implementations + * of dma-fence can be used to support hardware to hardware signaling, if + * supported by the hardware, and use the dma_fence_helper_* functions for + * compatibility with other devices that only support sw signaling. 
+ */ +struct dma_fence *dma_fence_create(void *priv) +{ + struct dma_fence *fence; + + fence = kmalloc(sizeof(struct dma_fence), GFP_KERNEL); + if (!fence) + return NULL; + + __dma_fence_init(fence, &sw_fence_ops, priv); + fence->needs_sw_signal = true; + + return fence; +} +EXPORT_SYMBOL_GPL(dma_fence_create); diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h new file mode 100644 index 0000000..e0ceddd --- /dev/null +++ b/include/linux/dma-fence.h @@ -0,0 +1,124 @@ +/* + * Fence mechanism for dma-buf to allow for asynchronous dma access + * + * Copyright (C) 2012 Texas Instruments + * Author: Rob Clark rob.clark@linaro.org + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. + */ + +#ifndef __DMA_FENCE_H__ +#define __DMA_FENCE_H__ + +#include <linux/err.h> +#include <linux/list.h> +#include <linux/wait.h> +#include <linux/list.h> +#include <linux/dma-buf.h> + +struct dma_fence; +struct dma_fence_ops; +struct dma_fence_cb; + +/** + * struct dma_fence - software synchronization primitive + * @refcount: refcount for this fence + * @ops: dma_fence_ops associated with this fence + * @priv: fence specific private data + * @event_queue: event queue used for signaling fence + * @signaled: whether this fence has been completed yet + * @needs_sw_signal: whether dma_fence_ops::enable_signaling + * has been called yet + * + * Read Documentation/dma-buf-synchronization.txt for usage. 
+ */ +struct dma_fence { + struct kref refcount; + const struct dma_fence_ops *ops; + wait_queue_head_t event_queue; + void *priv; + bool signaled:1; + bool needs_sw_signal:1; +}; + +typedef int (*dma_fence_func_t)(struct dma_fence_cb *cb, void *priv); + +/** + * struct dma_fence_cb - callback for dma_fence_add_callback + * @base: wait_queue_t added to event_queue + * @func: dma_fence_func_t to call + * @fence: fence this dma_fence_cb was used on + * + * This struct will be initialized by dma_fence_add_callback, additional + * data can be passed along by embedding dma_fence_cb in another struct. + */ +struct dma_fence_cb { + wait_queue_t base; + dma_fence_func_t func; + struct dma_fence *fence; +}; + +/** + * struct dma_fence_ops - operations implemented for dma-fence + * @enable_signaling: enable software signaling of fence + * @release: [optional] called on destruction of fence + * + * Notes on enable_signaling: + * For fence implementations that have the capability for hw->hw + * signaling, they can implement this op to enable the necessary + * irqs, or insert commands into cmdstream, etc. This is called + * in the first wait() or add_callback() path to let the fence + * implementation know that there is another driver waiting on + * the signal (ie. hw->sw case). + * + * A return value of -ENOENT will indicate that the fence has + * already passed. Any other errors will be treated as -ENOENT, + * and can happen because of hardware failure. + */ + +struct dma_fence_ops { + int (*enable_signaling)(struct dma_fence *fence); + void (*release)(struct dma_fence *fence); +}; + +struct dma_fence *dma_fence_create(void *priv); + +/** + * __dma_fence_init - Initialize a custom dma_fence. + * @fence: [in] The fence to initialize + * @ops: [in] The dma_fence_ops for operations on this fence. + * @priv: [in] The value to use for the priv member. 
+ */ +static inline void +__dma_fence_init(struct dma_fence *fence, + const struct dma_fence_ops *ops, void *priv) +{ + WARN_ON(!ops || !ops->enable_signaling); + + kref_init(&fence->refcount); + fence->ops = ops; + fence->priv = priv; + fence->needs_sw_signal = false; + fence->signaled = false; + init_waitqueue_head(&fence->event_queue); +} + +void dma_fence_get(struct dma_fence *fence); +void dma_fence_put(struct dma_fence *fence); + +int dma_fence_signal(struct dma_fence *fence); +int dma_fence_wait(struct dma_fence *fence, bool intr, unsigned long timeout); +int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb, + dma_fence_func_t func, void *priv); + +#endif /* __DMA_FENCE_H__ */
On Fri, Aug 10, 2012 at 04:57:52PM +0200, Maarten Lankhorst wrote:
A dma-fence can be attached to a buffer which is being filled or consumed by hw, to allow userspace to pass the buffer to another device without waiting. For example, userspace can call the page_flip ioctl to display the next frame of graphics after kicking the GPU, while the GPU is still rendering. The display device sharing the buffer with the GPU would attach a callback to get notified when the GPU's rendering-complete IRQ fires, to update the scan-out address of the display, without having to wake up userspace.
A dma-fence is a transient, one-shot deal. It is allocated and attached to one or more dma-bufs. When the one that attached it is done with the pending operation, it can signal the fence:
- dma_fence_signal()
The dma-buf-mgr handles tracking, and waiting on, the fences associated with a dma-buf.
TODO maybe need some helper fxn for simple devices, like a display-only drm/kms device which simply wants to wait for the exclusive fence to be signaled, and then attach a non-exclusive fence while scanout is in progress.
The one pending on the fence can add an async callback:
- dma_fence_add_callback()
The callback can optionally be cancelled with remove_wait_queue()
Or wait synchronously (optionally with timeout or interruptible):
- dma_fence_wait()
A default software-only implementation is provided, which can be used by drivers attaching a fence to a buffer when they have no other means for hw sync. But a memory backed fence is also envisioned, because it is common that GPU's can write to, or poll on some memory location for synchronization. For example:
fence = dma_buf_get_fence(dmabuf);
if (fence->ops == &bikeshed_fence_ops) {
	struct dma_buf *fence_buf;
	dma_bikeshed_fence_get_buf(fence, &fence_buf, &offset);
	... tell the hw the memory location to wait on ...
} else {
	/* fall back to sw sync */
	dma_fence_add_callback(fence, my_cb);
}
On SoC platforms, if some other hw mechanism is provided for synchronizing between IP blocks, it could be supported as an alternate implementation with its own fence ops in a similar way.
To facilitate other non-sw implementations, the enable_signaling callback can be used to keep track of whether a device not supporting hw sync is waiting on the fence; in that case the implementation should arrange to call dma_fence_signal() at some point after the condition has changed, to notify other devices waiting on the fence. If there are no sw waiters, this can be skipped to avoid waking the CPU unnecessarily. The handler of the enable_signaling op should take a refcount until the fence is signaled, then release its ref.
The intention is to provide a userspace interface (presumably via eventfd) later, to be used in conjunction with dma-buf's mmap support for sw access to buffers (or for userspace apps that would prefer to do their own synchronization).
I think the commit message should be cleaned up: Kill the TODO, rip out the bikeshed_fence and otherwise update it to the latest code.
v1: Original v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided that dma-fence didn't need to care about the sw->hw signaling path (it can be handled same as sw->sw case), and therefore the fence->ops can be simplified and more handled in the core. So remove the signal, add_callback, cancel_callback, and wait ops, and replace with a simple enable_signaling() op which can be used to inform a fence supporting hw->hw signaling that one or more devices which do not support hw signaling are waiting (and therefore it should enable an irq or do whatever is necessary in order that the CPU is notified when the fence is passed). v3: Fix locking fail in attach_fence() and get_fence() v4: Remove tie-in w/ dma-buf.. after discussion w/ danvet and mlankorst we decided that we need to be able to attach one fence to N dma-buf's, so using the list_head in dma-fence struct would be problematic. v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager. v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some comments about checking if fence fired or not. This is broken by design. waitqueue_active during destruction is now fatal, since the signaller should be holding a reference in enable_signalling until it signalled the fence. Pass the original dma_fence_cb along, and call __remove_wait in the dma_fence_callback handler, so that no cleanup needs to be performed. v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if fence wasn't signaled yet, for example for hardware fences that may choose to signal blindly. v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to header and fixed include mess. dma-fence.h now includes dma-buf.h All members are now initialized, so kmalloc can be used for allocating a dma-fence. More documentation added.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
I like the design of this, and especially that it's rather simple ;-)
A few comments to polish the interface, implementation and documentation a bit below.
Documentation/DocBook/device-drivers.tmpl | 2 drivers/base/Makefile | 2 drivers/base/dma-fence.c | 268 +++++++++++++++++++++++++++++ include/linux/dma-fence.h | 124 +++++++++++++ 4 files changed, 395 insertions(+), 1 deletion(-) create mode 100644 drivers/base/dma-fence.c create mode 100644 include/linux/dma-fence.h
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl index 7514dbf..36252ac 100644 --- a/Documentation/DocBook/device-drivers.tmpl +++ b/Documentation/DocBook/device-drivers.tmpl @@ -126,6 +126,8 @@ X!Edrivers/base/interface.c </sect1> <sect1><title>Device Drivers DMA Management</title> !Edrivers/base/dma-buf.c +!Edrivers/base/dma-fence.c +!Iinclude/linux/dma-fence.h !Edrivers/base/dma-coherent.c !Edrivers/base/dma-mapping.c </sect1> diff --git a/drivers/base/Makefile b/drivers/base/Makefile index 5aa2d70..6e9f217 100644 --- a/drivers/base/Makefile +++ b/drivers/base/Makefile @@ -10,7 +10,7 @@ obj-$(CONFIG_CMA) += dma-contiguous.o obj-y += power/ obj-$(CONFIG_HAS_DMA) += dma-mapping.o obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o -obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o +obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o dma-fence.o obj-$(CONFIG_ISA) += isa.o obj-$(CONFIG_FW_LOADER) += firmware_class.o obj-$(CONFIG_NUMA) += node.o diff --git a/drivers/base/dma-fence.c b/drivers/base/dma-fence.c new file mode 100644 index 0000000..93448e4 --- /dev/null +++ b/drivers/base/dma-fence.c @@ -0,0 +1,268 @@ +/*
+ * Fence mechanism for dma-buf to allow for asynchronous dma access
+ *
+ * Copyright (C) 2012 Texas Instruments
+ * Author: Rob Clark <rob.clark@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see http://www.gnu.org/licenses/.
+ */
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include <linux/export.h>
+#include <linux/dma-fence.h>
+/**
+ * dma_fence_signal - signal completion of a fence
+ * @fence: the fence to signal
+ *
+ * All registered callbacks will be called directly (synchronously) and
+ * all blocked waiters will be awoken. This should always be called on
+ * software only fences, or alternatively be called after
+ * dma_fence_ops::enable_signaling is called.
I think we need to be clearer here about when dma_fence_signal can be called:
- for a sw-only fence (i.e. created with dma_fence_create) dma_fence_signal _must_ be called under all circumstances.
- for any other fence, dma_fence_signal may be called, but it _must_ be called once the ->enable_signaling func has been called and returned 0 (i.e. success).
- it may be called only _once_.
+ */
+int dma_fence_signal(struct dma_fence *fence)
+{
+	unsigned long flags;
+	int ret = -EINVAL;
+
+	if (WARN_ON(!fence))
+		return -EINVAL;
+
+	spin_lock_irqsave(&fence->event_queue.lock, flags);
+	if (!fence->signaled) {
+		fence->signaled = true;
+		__wake_up_locked_key(&fence->event_queue, TASK_NORMAL,
+				     &fence->event_queue);
+		ret = 0;
+	} else
+		WARN(1, "Already signaled");
+	spin_unlock_irqrestore(&fence->event_queue.lock, flags);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dma_fence_signal);
+static void release_fence(struct kref *kref)
+{
+	struct dma_fence *fence =
+		container_of(kref, struct dma_fence, refcount);
+
+	BUG_ON(waitqueue_active(&fence->event_queue));
+
+	if (fence->ops->release)
+		fence->ops->release(fence);
+
+	kfree(fence);
+}
+/**
+ * dma_fence_put - decreases refcount of the fence
+ * @fence: [in] fence to reduce refcount of
+ */
+void dma_fence_put(struct dma_fence *fence)
+{
+	if (WARN_ON(!fence))
+		return;
+	kref_put(&fence->refcount, release_fence);
+}
+EXPORT_SYMBOL_GPL(dma_fence_put);
+/**
+ * dma_fence_get - increases refcount of the fence
+ * @fence: [in] fence to increase refcount of
+ */
+void dma_fence_get(struct dma_fence *fence)
+{
+	if (WARN_ON(!fence))
+		return;
+	kref_get(&fence->refcount);
+}
+EXPORT_SYMBOL_GPL(dma_fence_get);
+static int check_signaling(struct dma_fence *fence)
+{
+	bool enable_signaling = false, signaled;
+	unsigned long flags;
+
+	spin_lock_irqsave(&fence->event_queue.lock, flags);
+	signaled = fence->signaled;
+	if (!signaled && !fence->needs_sw_signal)
+		enable_signaling = fence->needs_sw_signal = true;
+	spin_unlock_irqrestore(&fence->event_queue.lock, flags);
+
+	if (enable_signaling) {
+		int ret;
+
+		/* At this point, if enable_signaling returns any error
+		 * a wakeup has to be performed regardless.
+		 * -ENOENT signals fence was already signaled. Any other error
+		 * indicates a catastrophic hardware error.
+		 *
+		 * If any hardware error occurs, nothing can be done against
+		 * it, so it's treated like the fence was already signaled.
+		 * No synchronization can be performed, so we have to assume
+		 * the fence was already signaled.
+		 */
+		ret = fence->ops->enable_signaling(fence);
+		if (ret) {
+			signaled = true;
+			dma_fence_signal(fence);
I think we should call dma_fence_signal only for -ENOENT and pass all other errors back as-is. E.g. on -ENOMEM or so we might want to retry ...
+		}
+	}
+
+	if (!signaled)
+		return 0;
+	else
+		return -ENOENT;
+}
+static int
+__dma_fence_wake_func(wait_queue_t *wait, unsigned mode, int flags, void *key)
+{
+	struct dma_fence_cb *cb =
+		container_of(wait, struct dma_fence_cb, base);
+
+	__remove_wait_queue(key, wait);
+	return cb->func(cb, wait->private);
+}
+/**
+ * dma_fence_add_callback - add a callback to be called when the fence
+ * is signaled
+ *
+ * @fence: [in] the fence to wait on
+ * @cb: [in] the callback to register
+ * @func: [in] the function to call
+ * @priv: [in] the argument to pass to function
+ *
+ * cb will be initialized by dma_fence_add_callback, no initialization
+ * by the caller is required. Any number of callbacks can be registered
+ * to a fence, but a callback can only be registered to one fence at a time.
+ *
+ * Note that the callback can be called from an atomic context. If
+ * fence is already signaled, this function will return -ENOENT (and
+ * *not* call the callback)
+ */
+int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
+			   dma_fence_func_t func, void *priv)
+{
+	unsigned long flags;
+	int ret;
+
+	if (WARN_ON(!fence || !func))
+		return -EINVAL;
+
+	ret = check_signaling(fence);
+
+	spin_lock_irqsave(&fence->event_queue.lock, flags);
+	if (!ret && fence->signaled)
+		ret = -ENOENT;
The locking here is a bit suboptimal: We grab the fence spinlock once in check_signalling and then again here. We should combine this into one critical section.
- if (!ret) {
cb->base.flags = 0;
cb->base.func = __dma_fence_wake_func;
cb->base.private = priv;
cb->fence = fence;
cb->func = func;
__add_wait_queue(&fence->event_queue, &cb->base);
- }
- spin_unlock_irqrestore(&fence->event_queue.lock, flags);
- return ret;
+} +EXPORT_SYMBOL_GPL(dma_fence_add_callback);
I think for api completenes we should also have a dma_fence_remove_callback function.
+/**
- dma_fence_wait - wait for a fence to be signaled
- @fence: [in] The fence to wait on
- @intr: [in] if true, do an interruptible wait
- @timeout: [in] absolute time for timeout, in jiffies.
I don't quite like this, I think we should keep the styl of all other wait_*_timeout functions and pass the arg as timeout in jiffies (and also the same return semantics). Otherwise well have funny code that needs to handle return values differently depending upon whether it waits upon a dma_fence or a native object (where it would us the wait_*_timeout functions directly).
Also, I think we should add the non-_timeout variants, too, just for completeness.
- Returns 0 on success, -EBUSY if a timeout occured,
- -ERESTARTSYS if the wait was interrupted by a signal.
- */
+int dma_fence_wait(struct dma_fence *fence, bool intr, unsigned long timeout) +{
- unsigned long cur;
- int ret;
- if (WARN_ON(!fence))
return -EINVAL;
- cur = jiffies;
- if (time_after_eq(cur, timeout))
return -EBUSY;
- timeout -= cur;
- ret = check_signaling(fence);
- if (ret == -ENOENT)
return 0;
- else if (ret)
return ret;
- if (intr)
ret = wait_event_interruptible_timeout(fence->event_queue,
fence->signaled,
timeout);
We have a race here, since fence->signaled is proctected by fenc->event_queu.lock. There's a special variant of the wait_event macros that automatically drops a spinlock at the right time, which would fit here. Again, like for the callback function I think you then need to open-code check_signalling to avoid taking the spinlock twice.
- else
ret = wait_event_timeout(fence->event_queue,
fence->signaled, timeout);
- if (ret > 0)
return 0;
- else if (!ret)
return -EBUSY;
- else
return ret;
+} +EXPORT_SYMBOL_GPL(dma_fence_wait);
+static int sw_enable_signaling(struct dma_fence *fence) +{
- /* dma_fence_create sets needs_sw_signal,
* so this should never be called
*/
- WARN_ON_ONCE(1);
- return 0;
+}
+static const struct dma_fence_ops sw_fence_ops = {
- .enable_signaling = sw_enable_signaling,
+};
+/**
- dma_fence_create - create a simple sw-only fence
- @priv: [in] the value to use for the priv member
- This fence only supports signaling from/to CPU. Other implementations
- of dma-fence can be used to support hardware to hardware signaling, if
- supported by the hardware, and use the dma_fence_helper_* functions for
- compatibility with other devices that only support sw signaling.
- */
+struct dma_fence *dma_fence_create(void *priv) +{
- struct dma_fence *fence;
- fence = kmalloc(sizeof(struct dma_fence), GFP_KERNEL);
- if (!fence)
return NULL;
- __dma_fence_init(fence, &sw_fence_ops, priv);
- fence->needs_sw_signal = true;
- return fence;
+} +EXPORT_SYMBOL_GPL(dma_fence_create); diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h new file mode 100644 index 0000000..e0ceddd --- /dev/null +++ b/include/linux/dma-fence.h @@ -0,0 +1,124 @@ +/*
- Fence mechanism for dma-buf to allow for asynchronous dma access
- Copyright (C) 2012 Texas Instruments
- Author: Rob Clark rob.clark@linaro.org
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License version 2 as published by
- the Free Software Foundation.
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
- You should have received a copy of the GNU General Public License along with
- this program. If not, see http://www.gnu.org/licenses/.
- */
+#ifndef __DMA_FENCE_H__ +#define __DMA_FENCE_H__
+#include <linux/err.h> +#include <linux/list.h> +#include <linux/wait.h> +#include <linux/list.h> +#include <linux/dma-buf.h>
+struct dma_fence; +struct dma_fence_ops; +struct dma_fence_cb;
+/**
- struct dma_fence - software synchronization primitive
- @refcount: refcount for this fence
- @ops: dma_fence_ops associated with this fence
- @priv: fence specific private data
- @event_queue: event queue used for signaling fence
- @signaled: whether this fence has been completed yet
- @needs_sw_signal: whether dma_fence_ops::enable_signaling
has been called yet
- Read Documentation/dma-buf-synchronization.txt for usage.
- */
+struct dma_fence {
- struct kref refcount;
- const struct dma_fence_ops *ops;
- wait_queue_head_t event_queue;
- void *priv;
- bool signaled:1;
- bool needs_sw_signal:1;
I guess a comment here is in order that signaled and needs_sw_signal is protected by event_queue.lock. Also, since the compiler is rather free to do crazy stuff with bitfields, I think it's preferred style to use an unsigned long and explicit bit #defines (ton ensure the compiler doesn't generate loads/stores that leak to other members of the struct).
+};
+typedef int (*dma_fence_func_t)(struct dma_fence_cb *cb, void *priv);
+/**
- struct dma_fence_cb - callback for dma_fence_add_callback
- @base: wait_queue_t added to event_queue
- @func: dma_fence_func_t to call
- @fence: fence this dma_fence_cb was used on
- This struct will be initialized by dma_fence_add_callback, additional
- data can be passed along by embedding dma_fence_cb in another struct.
- */
+struct dma_fence_cb {
- wait_queue_t base;
- dma_fence_func_t func;
- struct dma_fence *fence;
+};
+/**
- struct dma_fence_ops - operations implemented for dma-fence
- @enable_signaling: enable software signaling of fence
- @release: [optional] called on destruction of fence
- Notes on enable_signaling:
- For fence implementations that have the capability for hw->hw
- signaling, they can implement this op to enable the necessary
- irqs, or insert commands into cmdstream, etc. This is called
- in the first wait() or add_callback() path to let the fence
- implementation know that there is another driver waiting on
- the signal (ie. hw->sw case).
- A return value of -ENOENT will indicate that the fence has
- already passed. Any other errors will be treated as -ENOENT,
- and can happen because of hardware failure.
- */
I think we need to specify the calling contexts of these two.
+struct dma_fence_ops {
- int (*enable_signaling)(struct dma_fence *fence);
I think we should mandate that enable_signalling can be called from atomic context, but not irq context (since I don't see a use-case for calling this from irq context).
- void (*release)(struct dma_fence *fence);
Since a waiter might call ->release as a reaction to a signal, I think the release callback must be able to handle any calling context, and especially anything that calls dma_fence_signal.
+};
+struct dma_fence *dma_fence_create(void *priv);
+/**
- __dma_fence_init - Initialize a custom dma_fence.
- @fence: [in] The fence to initialize
- @ops: [in] The dma_fence_ops for operations on this fence.
- @priv: [in] The value to use for the priv member.
- */
+static inline void +__dma_fence_init(struct dma_fence *fence,
const struct dma_fence_ops *ops, void *priv)
+{
- WARN_ON(!ops || !ops->enable_signaling);
- kref_init(&fence->refcount);
- fence->ops = ops;
- fence->priv = priv;
- fence->needs_sw_signal = false;
- fence->signaled = false;
- init_waitqueue_head(&fence->event_queue);
+}
+void dma_fence_get(struct dma_fence *fence); +void dma_fence_put(struct dma_fence *fence);
+int dma_fence_signal(struct dma_fence *fence); +int dma_fence_wait(struct dma_fence *fence, bool intr, unsigned long timeout); +int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
dma_fence_func_t func, void *priv);
+#endif /* __DMA_FENCE_H__ */
Linaro-mm-sig mailing list Linaro-mm-sig@lists.linaro.org http://lists.linaro.org/mailman/listinfo/linaro-mm-sig
On Fri, Aug 10, 2012 at 3:29 PM, Daniel Vetter daniel@ffwll.ch wrote:
On Fri, Aug 10, 2012 at 04:57:52PM +0200, Maarten Lankhorst wrote:
A dma-fence can be attached to a buffer which is being filled or consumed by hw, to allow userspace to pass the buffer without waiting to another device. For example, userspace can call page_flip ioctl to display the next frame of graphics after kicking the GPU but while the GPU is still rendering. The display device sharing the buffer with the GPU would attach a callback to get notified when the GPU's rendering-complete IRQ fires, to update the scan-out address of the display, without having to wake up userspace.
A dma-fence is a transient, one-shot deal. It is allocated and attached to one or more dma-bufs. When the one that attached it is done with the pending operation, it can signal the fence.
- dma_fence_signal()
The dma-buf-mgr handles tracking, and waiting on, the fences associated with a dma-buf.
TODO maybe need some helper fxn for simple devices, like a display-only drm/kms device which simply wants to wait for the exclusive fence to be signaled, and then attach a non-exclusive fence while scanout is in progress.
The one pending on the fence can add an async callback:
- dma_fence_add_callback()
The callback can optionally be cancelled with remove_wait_queue()
Or wait synchronously (optionally with timeout or interruptible):
- dma_fence_wait()
A default software-only implementation is provided, which can be used by drivers attaching a fence to a buffer when they have no other means for hw sync. But a memory-backed fence is also envisioned, because it is common that GPUs can write to, or poll on, some memory location for synchronization. For example:
    fence = dma_buf_get_fence(dmabuf);
    if (fence->ops == &bikeshed_fence_ops) {
        dma_buf *fence_buf;
        dma_bikeshed_fence_get_buf(fence, &fence_buf, &offset);
        ... tell the hw the memory location to wait on ...
    } else {
        /* fall-back to sw sync */
        dma_fence_add_callback(fence, my_cb);
    }
On SoC platforms, if some other hw mechanism is provided for synchronizing between IP blocks, it could be supported as an alternate implementation with its own fence ops in a similar way.
To facilitate other non-sw implementations, the enable_signaling callback can be used to keep track if a device not supporting hw sync is waiting on the fence, and in this case should arrange to call dma_fence_signal() at some point after the condition has changed, to notify other devices waiting on the fence. If there are no sw waiters, this can be skipped to avoid waking the CPU unnecessarily. The handler of the enable_signaling op should take a refcount until the fence is signaled, then release its ref.
The intention is to provide a userspace interface (presumably via eventfd) later, to be used in conjunction with dma-buf's mmap support for sw access to buffers (or for userspace apps that would prefer to do their own synchronization).
I think the commit message should be cleaned up: Kill the TODO, rip out the bikeshed_fence and otherwise update it to the latest code.
v1: Original.
v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided that dma-fence didn't need to care about the sw->hw signaling path (it can be handled the same as the sw->sw case), and therefore the fence->ops can be simplified and more handled in the core. So remove the signal, add_callback, cancel_callback, and wait ops, and replace them with a simple enable_signaling() op which can be used to inform a fence supporting hw->hw signaling that one or more devices which do not support hw signaling are waiting (and therefore it should enable an irq or do whatever is necessary so that the CPU is notified when the fence is passed).
v3: Fix locking fail in attach_fence() and get_fence().
v4: Remove tie-in w/ dma-buf. After discussion w/ danvet and mlankhorst we decided that we need to be able to attach one fence to N dma-bufs, so using the list_head in the dma-fence struct would be problematic.
v5: [Maarten Lankhorst] Updated for dma-bikeshed-fence and dma-buf-manager.
v6: [Maarten Lankhorst] Removed dma_fence_cancel_callback and some comments about checking whether the fence fired or not; that was broken by design. waitqueue_active during destruction is now fatal, since the signaller should be holding a reference in enable_signaling until it has signalled the fence. Pass the original dma_fence_cb along, and call __remove_wait in the dma_fence_callback handler, so that no cleanup needs to be performed.
v7: [Maarten Lankhorst] Set cb->func and only enable sw signaling if the fence wasn't signaled yet, e.g. for hardware fences that may choose to signal blindly.
v8: [Maarten Lankhorst] Tons of tiny fixes: moved __dma_fence_init to the header and fixed the include mess. dma-fence.h now includes dma-buf.h. All members are now initialized, so kmalloc can be used for allocating a dma-fence. More documentation added.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com
I like the design of this, and especially that it's rather simple ;-)
A few comments to polish the interface, implementation and documentation a bit below.
 Documentation/DocBook/device-drivers.tmpl |    2
 drivers/base/Makefile                     |    2
 drivers/base/dma-fence.c                  |  268 +++++++++++++++++++++++++++++
 include/linux/dma-fence.h                 |  124 +++++++++++++
 4 files changed, 395 insertions(+), 1 deletion(-)
 create mode 100644 drivers/base/dma-fence.c
 create mode 100644 include/linux/dma-fence.h
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index 7514dbf..36252ac 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -126,6 +126,8 @@ X!Edrivers/base/interface.c
 </sect1>
 <sect1><title>Device Drivers DMA Management</title>
 !Edrivers/base/dma-buf.c
+!Edrivers/base/dma-fence.c
+!Iinclude/linux/dma-fence.h
 !Edrivers/base/dma-coherent.c
 !Edrivers/base/dma-mapping.c
 </sect1>
diff --git a/drivers/base/Makefile b/drivers/base/Makefile
index 5aa2d70..6e9f217 100644
--- a/drivers/base/Makefile
+++ b/drivers/base/Makefile
@@ -10,7 +10,7 @@ obj-$(CONFIG_CMA) += dma-contiguous.o
 obj-y += power/
 obj-$(CONFIG_HAS_DMA) += dma-mapping.o
 obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
-obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o
+obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o dma-fence.o
 obj-$(CONFIG_ISA) += isa.o
 obj-$(CONFIG_FW_LOADER) += firmware_class.o
 obj-$(CONFIG_NUMA) += node.o
diff --git a/drivers/base/dma-fence.c b/drivers/base/dma-fence.c
new file mode 100644
index 0000000..93448e4
--- /dev/null
+++ b/drivers/base/dma-fence.c
@@ -0,0 +1,268 @@
+/*
- Fence mechanism for dma-buf to allow for asynchronous dma access
- Copyright (C) 2012 Texas Instruments
- Author: Rob Clark rob.clark@linaro.org
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License version 2 as published by
- the Free Software Foundation.
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
- You should have received a copy of the GNU General Public License along with
- this program. If not, see http://www.gnu.org/licenses/.
- */
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include <linux/export.h>
+#include <linux/dma-fence.h>
+/**
- dma_fence_signal - signal completion of a fence
- @fence: the fence to signal
- All registered callbacks will be called directly (synchronously) and
- all blocked waiters will be awoken. This should always be called on
- software-only fences, or alternatively be called after
- dma_fence_ops::enable_signaling is called.
I think we need to be clearer here about when dma_fence_signal can be called:
- for a sw-only fence (i.e. created with dma_fence_create) dma_fence_signal _must_ be called under all circumstances.
- for any other fences, dma_fence_signal may be called, but it _must_ be called once the ->enable_signaling func has been called and returned 0 (i.e. success).
- it may be called only _once_.
- */
+int dma_fence_signal(struct dma_fence *fence)
+{
unsigned long flags;
int ret = -EINVAL;
if (WARN_ON(!fence))
return -EINVAL;
spin_lock_irqsave(&fence->event_queue.lock, flags);
if (!fence->signaled) {
fence->signaled = true;
__wake_up_locked_key(&fence->event_queue, TASK_NORMAL,
&fence->event_queue);
ret = 0;
} else
WARN(1, "Already signaled");
spin_unlock_irqrestore(&fence->event_queue.lock, flags);
return ret;
+}
+EXPORT_SYMBOL_GPL(dma_fence_signal);
+static void release_fence(struct kref *kref)
+{
struct dma_fence *fence =
container_of(kref, struct dma_fence, refcount);
BUG_ON(waitqueue_active(&fence->event_queue));
if (fence->ops->release)
fence->ops->release(fence);
kfree(fence);
+}
+/**
- dma_fence_put - decreases refcount of the fence
- @fence: [in] fence to reduce refcount of
- */
+void dma_fence_put(struct dma_fence *fence)
+{
if (WARN_ON(!fence))
return;
kref_put(&fence->refcount, release_fence);
+}
+EXPORT_SYMBOL_GPL(dma_fence_put);
+/**
- dma_fence_get - increases refcount of the fence
- @fence: [in] fence to increase refcount of
- */
+void dma_fence_get(struct dma_fence *fence)
+{
if (WARN_ON(!fence))
return;
kref_get(&fence->refcount);
+}
+EXPORT_SYMBOL_GPL(dma_fence_get);
+static int check_signaling(struct dma_fence *fence)
+{
bool enable_signaling = false, signaled;
unsigned long flags;
spin_lock_irqsave(&fence->event_queue.lock, flags);
signaled = fence->signaled;
if (!signaled && !fence->needs_sw_signal)
enable_signaling = fence->needs_sw_signal = true;
spin_unlock_irqrestore(&fence->event_queue.lock, flags);
if (enable_signaling) {
int ret;
/* At this point, if enable_signaling returns any error
* a wakeup has to be performed regardless.
* -ENOENT signals that the fence was already signaled. Any other error
* indicates a catastrophic hardware error.
*
* If any hardware error occurs, nothing can be done against
* it, so it's treated like the fence was already signaled.
* No synchronization can be performed, so we have to assume
* the fence was already signaled.
*/
ret = fence->ops->enable_signaling(fence);
if (ret) {
signaled = true;
dma_fence_signal(fence);
I think we should call dma_fence_signal only for -ENOENT and pass all other errors back as-is. E.g. on -ENOMEM or so we might want to retry ...
}
}
if (!signaled)
return 0;
else
return -ENOENT;
+}
+static int
+__dma_fence_wake_func(wait_queue_t *wait, unsigned mode, int flags, void *key)
+{
struct dma_fence_cb *cb =
container_of(wait, struct dma_fence_cb, base);
__remove_wait_queue(key, wait);
return cb->func(cb, wait->private);
+}
+/**
- dma_fence_add_callback - add a callback to be called when the fence
- is signaled
- @fence: [in] the fence to wait on
- @cb: [in] the callback to register
- @func: [in] the function to call
- @priv: [in] the argument to pass to function
- cb will be initialized by dma_fence_add_callback, no initialization
- by the caller is required. Any number of callbacks can be registered
- to a fence, but a callback can only be registered to one fence at a time.
- Note that the callback can be called from an atomic context. If
- fence is already signaled, this function will return -ENOENT (and
- *not* call the callback)
- */
+int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
dma_fence_func_t func, void *priv)
+{
unsigned long flags;
int ret;
if (WARN_ON(!fence || !func))
return -EINVAL;
ret = check_signaling(fence);
spin_lock_irqsave(&fence->event_queue.lock, flags);
if (!ret && fence->signaled)
ret = -ENOENT;
The locking here is a bit suboptimal: we grab the fence spinlock once in check_signaling and then again here. We should combine this into one critical section.
Fwiw, Maarten had the same thought. I had suggested keeping it clean/simple for now and getting it working, and then going back and optimizing after, so you can blame this one on me :-P
I guess we could just inline the check_signaling() code, but I didn't want to do that yet. Or we could call check_signaling() with the lock already held, and just drop and re-acquire it around the relatively infrequent enable_signaling() callback.
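Rob's second option might look roughly like this untested sketch (the function name and the lock-dance are illustrative, not from the patch; it assumes the caller already holds event_queue.lock and runs in process context):

```c
/* Caller holds fence->event_queue.lock. Returns -ENOENT if the fence
 * is already signaled (or must be treated as such), 0 if the caller
 * should go on to wait / register its callback. */
static int __check_signaling_locked(struct dma_fence *fence)
{
	int ret;

	if (fence->signaled)
		return -ENOENT;
	if (fence->needs_sw_signal)
		return 0;

	fence->needs_sw_signal = true;

	/* drop the lock only around the potentially expensive callback */
	spin_unlock_irq(&fence->event_queue.lock);
	ret = fence->ops->enable_signaling(fence);
	spin_lock_irq(&fence->event_queue.lock);

	/* the fence may have signaled while the lock was dropped */
	if (ret || fence->signaled)
		return -ENOENT;
	return 0;
}
```

This keeps one logical critical section from the caller's point of view, at the cost of the caller having to re-check fence->signaled after the call.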
if (!ret) {
cb->base.flags = 0;
cb->base.func = __dma_fence_wake_func;
cb->base.private = priv;
cb->fence = fence;
cb->func = func;
__add_wait_queue(&fence->event_queue, &cb->base);
}
spin_unlock_irqrestore(&fence->event_queue.lock, flags);
return ret;
+}
+EXPORT_SYMBOL_GPL(dma_fence_add_callback);
I think for API completeness we should also have a dma_fence_remove_callback function.
We did originally, but Maarten found it was difficult to deal with properly when the GPU hangs. I think his alternative was just to require the hung driver to signal the fence. I had kicked around the idea of a dma_fence_cancel() alternative to signal that could pass an error through to the waiting driver.. although I'm not sure the other driver could really do anything differently at that point.
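For reference, a minimal remove_callback, if it were reintroduced, might look like this hypothetical sketch (not part of the patch; it relies on the invariant that once the fence signals, __dma_fence_wake_func has already removed the wait entry, so removal is only legal while unsignaled):

```c
int dma_fence_remove_callback(struct dma_fence *fence, struct dma_fence_cb *cb)
{
	unsigned long flags;
	int ret = -ENOENT;

	spin_lock_irqsave(&fence->event_queue.lock, flags);
	/* Only safe while the fence is unsignaled: after signaling, the
	 * wait entry was already removed by __dma_fence_wake_func and
	 * the callback has run. */
	if (!fence->signaled) {
		__remove_wait_queue(&fence->event_queue, &cb->base);
		ret = 0;
	}
	spin_unlock_irqrestore(&fence->event_queue.lock, flags);

	return ret;
}
```

The hard part Rob alludes to is not this function itself but the signaller side: a hung GPU whose enable_signaling already took a reference still has to signal (or "cancel") the fence eventually.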
+/**
- dma_fence_wait - wait for a fence to be signaled
- @fence: [in] The fence to wait on
- @intr: [in] if true, do an interruptible wait
- @timeout: [in] absolute time for timeout, in jiffies.
I don't quite like this. I think we should keep the style of all other wait_*_timeout functions and pass the arg as a timeout in jiffies (and also use the same return semantics). Otherwise we'll have funny code that needs to handle return values differently depending upon whether it waits upon a dma_fence or a native object (where it would use the wait_*_timeout functions directly).
We did start out this way, but there was an ugly jiffies roll-over problem that was difficult to deal with properly. Using an absolute time avoided the problem.
Also, I think we should add the non-_timeout variants, too, just for completeness.
- Returns 0 on success, -EBUSY if a timeout occurred,
- -ERESTARTSYS if the wait was interrupted by a signal.
- */
+int dma_fence_wait(struct dma_fence *fence, bool intr, unsigned long timeout)
+{
unsigned long cur;
int ret;
if (WARN_ON(!fence))
return -EINVAL;
cur = jiffies;
if (time_after_eq(cur, timeout))
return -EBUSY;
timeout -= cur;
ret = check_signaling(fence);
if (ret == -ENOENT)
return 0;
else if (ret)
return ret;
if (intr)
ret = wait_event_interruptible_timeout(fence->event_queue,
fence->signaled,
timeout);
We have a race here, since fence->signaled is protected by fence->event_queue.lock. There's a special variant of the wait_event macros that automatically drops a spinlock at the right time, which would fit here. Again, as for the callback function, I think you then need to open-code check_signaling to avoid taking the spinlock twice.
yeah, this would work for the call-check_signaling()-with-lock-already-held approach to get rid of the double lock..
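The variant Daniel presumably means is the wait_event_*_locked* family in <linux/wait.h>, which drops the waitqueue's own wq.lock across the schedule — a fit here because the patch already uses event_queue.lock to protect fence->signaled. A rough, untested sketch of just the interruptible path (timeout handling omitted, since the _locked variants have no timeout form; this only shows the shape of the fix):

```c
spin_lock_irq(&fence->event_queue.lock);
/* ... open-coded check_signaling step goes here, under the lock ... */
if (!fence->signaled)
	/* releases and re-takes fence->event_queue.lock while sleeping,
	 * so the signaled test and the sleep are one atomic step */
	ret = wait_event_interruptible_locked_irq(fence->event_queue,
						  fence->signaled);
spin_unlock_irq(&fence->event_queue.lock);
```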
else
ret = wait_event_timeout(fence->event_queue,
fence->signaled, timeout);
if (ret > 0)
return 0;
else if (!ret)
return -EBUSY;
else
return ret;
+}
+EXPORT_SYMBOL_GPL(dma_fence_wait);
+static int sw_enable_signaling(struct dma_fence *fence)
+{
/* dma_fence_create sets needs_sw_signal,
* so this should never be called
*/
WARN_ON_ONCE(1);
return 0;
+}
+static const struct dma_fence_ops sw_fence_ops = {
.enable_signaling = sw_enable_signaling,
+};
+/**
- dma_fence_create - create a simple sw-only fence
- @priv: [in] the value to use for the priv member
- This fence only supports signaling from/to CPU. Other implementations
- of dma-fence can be used to support hardware to hardware signaling, if
- supported by the hardware, and use the dma_fence_helper_* functions for
- compatibility with other devices that only support sw signaling.
- */
+struct dma_fence *dma_fence_create(void *priv)
+{
struct dma_fence *fence;
fence = kmalloc(sizeof(struct dma_fence), GFP_KERNEL);
if (!fence)
return NULL;
__dma_fence_init(fence, &sw_fence_ops, priv);
fence->needs_sw_signal = true;
return fence;
+}
+EXPORT_SYMBOL_GPL(dma_fence_create);
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
new file mode 100644
index 0000000..e0ceddd
--- /dev/null
+++ b/include/linux/dma-fence.h
@@ -0,0 +1,124 @@
+/*
- Fence mechanism for dma-buf to allow for asynchronous dma access
- Copyright (C) 2012 Texas Instruments
- Author: Rob Clark rob.clark@linaro.org
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License version 2 as published by
- the Free Software Foundation.
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
- You should have received a copy of the GNU General Public License along with
- this program. If not, see http://www.gnu.org/licenses/.
- */
+#ifndef __DMA_FENCE_H__
+#define __DMA_FENCE_H__
+#include <linux/err.h>
+#include <linux/list.h>
+#include <linux/wait.h>
+#include <linux/list.h>
+#include <linux/dma-buf.h>
+struct dma_fence;
+struct dma_fence_ops;
+struct dma_fence_cb;
+/**
- struct dma_fence - software synchronization primitive
- @refcount: refcount for this fence
- @ops: dma_fence_ops associated with this fence
- @priv: fence specific private data
- @event_queue: event queue used for signaling fence
- @signaled: whether this fence has been completed yet
- @needs_sw_signal: whether dma_fence_ops::enable_signaling
has been called yet
- Read Documentation/dma-buf-synchronization.txt for usage.
- */
+struct dma_fence {
struct kref refcount;
const struct dma_fence_ops *ops;
wait_queue_head_t event_queue;
void *priv;
bool signaled:1;
bool needs_sw_signal:1;
I guess a comment here is in order that signaled and needs_sw_signal are protected by event_queue.lock. Also, since the compiler is rather free to do crazy stuff with bitfields, I think it's preferred style to use an unsigned long and explicit bit #defines (to ensure the compiler doesn't generate loads/stores that leak to other members of the struct).
yeah, good point.. I guess we should just change that to be a 'unsigned long' bitmask.
BR, -R
+};
+typedef int (*dma_fence_func_t)(struct dma_fence_cb *cb, void *priv);
+/**
- struct dma_fence_cb - callback for dma_fence_add_callback
- @base: wait_queue_t added to event_queue
- @func: dma_fence_func_t to call
- @fence: fence this dma_fence_cb was used on
- This struct will be initialized by dma_fence_add_callback, additional
- data can be passed along by embedding dma_fence_cb in another struct.
- */
+struct dma_fence_cb {
wait_queue_t base;
dma_fence_func_t func;
struct dma_fence *fence;
+};
+/**
- struct dma_fence_ops - operations implemented for dma-fence
- @enable_signaling: enable software signaling of fence
- @release: [optional] called on destruction of fence
- Notes on enable_signaling:
- For fence implementations that have the capability for hw->hw
- signaling, they can implement this op to enable the necessary
- irqs, or insert commands into cmdstream, etc. This is called
- in the first wait() or add_callback() path to let the fence
- implementation know that there is another driver waiting on
- the signal (ie. hw->sw case).
- A return value of -ENOENT will indicate that the fence has
- already passed. Any other errors will be treated as -ENOENT,
- and can happen because of hardware failure.
- */
I think we need to specify the calling contexts of these two.
+struct dma_fence_ops {
int (*enable_signaling)(struct dma_fence *fence);
I think we should mandate that enable_signalling can be called from atomic context, but not irq context (since I don't see a use-case for calling this from irq context).
void (*release)(struct dma_fence *fence);
Since a waiter might call ->release as a reaction to a signal, I think the release callback must be able to handle any calling context, and especially anything that calls dma_fence_signal.
+};
+struct dma_fence *dma_fence_create(void *priv);
+/**
- __dma_fence_init - Initialize a custom dma_fence.
- @fence: [in] The fence to initialize
- @ops: [in] The dma_fence_ops for operations on this fence.
- @priv: [in] The value to use for the priv member.
- */
+static inline void
+__dma_fence_init(struct dma_fence *fence,
const struct dma_fence_ops *ops, void *priv)
+{
WARN_ON(!ops || !ops->enable_signaling);
kref_init(&fence->refcount);
fence->ops = ops;
fence->priv = priv;
fence->needs_sw_signal = false;
fence->signaled = false;
init_waitqueue_head(&fence->event_queue);
+}
+void dma_fence_get(struct dma_fence *fence);
+void dma_fence_put(struct dma_fence *fence);
+int dma_fence_signal(struct dma_fence *fence);
+int dma_fence_wait(struct dma_fence *fence, bool intr, unsigned long timeout);
+int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
dma_fence_func_t func, void *priv);
+#endif /* __DMA_FENCE_H__ */
Linaro-mm-sig mailing list Linaro-mm-sig@lists.linaro.org http://lists.linaro.org/mailman/listinfo/linaro-mm-sig
--
Daniel Vetter
Mail: daniel@ffwll.ch
Mobile: +41 (0)79 365 57 48
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel
Hey,
On 11-08-12 17:14, Rob Clark wrote:
On Fri, Aug 10, 2012 at 3:29 PM, Daniel Vetter daniel@ffwll.ch wrote:
On Fri, Aug 10, 2012 at 04:57:52PM +0200, Maarten Lankhorst wrote:
A dma-fence can be attached to a buffer which is being filled or consumed by hw, to allow userspace to pass the buffer without waiting to another device. For example, userspace can call page_flip ioctl to display the next frame of graphics after kicking the GPU but while the GPU is still rendering. The display device sharing the buffer with the GPU would attach a callback to get notified when the GPU's rendering-complete IRQ fires, to update the scan-out address of the display, without having to wake up userspace.
A dma-fence is a transient, one-shot deal. It is allocated and attached to one or more dma-bufs. When the one that attached it is done with the pending operation, it can signal the fence.
- dma_fence_signal()
The dma-buf-mgr handles tracking, and waiting on, the fences associated with a dma-buf.
TODO maybe need some helper fxn for simple devices, like a display-only drm/kms device which simply wants to wait for the exclusive fence to be signaled, and then attach a non-exclusive fence while scanout is in progress.
The one pending on the fence can add an async callback:
- dma_fence_add_callback()
The callback can optionally be cancelled with remove_wait_queue()
Or wait synchronously (optionally with timeout or interruptible):
- dma_fence_wait()
A default software-only implementation is provided, which can be used by drivers attaching a fence to a buffer when they have no other means for hw sync. But a memory-backed fence is also envisioned, because it is common that GPUs can write to, or poll on, some memory location for synchronization. For example:
    fence = dma_buf_get_fence(dmabuf);
    if (fence->ops == &bikeshed_fence_ops) {
        dma_buf *fence_buf;
        dma_bikeshed_fence_get_buf(fence, &fence_buf, &offset);
        ... tell the hw the memory location to wait on ...
    } else {
        /* fall-back to sw sync */
        dma_fence_add_callback(fence, my_cb);
    }
On SoC platforms, if some other hw mechanism is provided for synchronizing between IP blocks, it could be supported as an alternate implementation with its own fence ops in a similar way.
To facilitate other non-sw implementations, the enable_signaling callback can be used to keep track if a device not supporting hw sync is waiting on the fence, and in this case should arrange to call dma_fence_signal() at some point after the condition has changed, to notify other devices waiting on the fence. If there are no sw waiters, this can be skipped to avoid waking the CPU unnecessarily. The handler of the enable_signaling op should take a refcount until the fence is signaled, then release its ref.
The intention is to provide a userspace interface (presumably via eventfd) later, to be used in conjunction with dma-buf's mmap support for sw access to buffers (or for userspace apps that would prefer to do their own synchronization).
I think the commit message should be cleaned up: Kill the TODO, rip out the bikeshed_fence and otherwise update it to the latest code.
Agreed.
v1: Original.
v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided that dma-fence didn't need to care about the sw->hw signaling path (it can be handled the same as the sw->sw case), and therefore the fence->ops can be simplified and more handled in the core. So remove the signal, add_callback, cancel_callback, and wait ops, and replace them with a simple enable_signaling() op which can be used to inform a fence supporting hw->hw signaling that one or more devices which do not support hw signaling are waiting (and that it should therefore enable an irq or do whatever is necessary so that the CPU is notified when the fence is passed).
v3: Fix locking fail in attach_fence() and get_fence().
v4: Remove tie-in w/ dma-buf. After discussion w/ danvet and mlankhorst we decided that we need to be able to attach one fence to N dma-bufs, so using the list_head in the dma-fence struct would be problematic.
v5: [Maarten Lankhorst] Updated for dma-bikeshed-fence and dma-buf-manager.
v6: [Maarten Lankhorst] I removed dma_fence_cancel_callback and some comments about checking if the fence fired or not. This is broken by design. waitqueue_active during destruction is now fatal, since the signaller should be holding a reference in enable_signaling until it has signalled the fence. Pass the original dma_fence_cb along, and call __remove_wait in the dma_fence_callback handler, so that no cleanup needs to be performed.
v7: [Maarten Lankhorst] Set cb->func and only enable sw signaling if the fence wasn't signaled yet, for example for hardware fences that may choose to signal blindly.
v8: [Maarten Lankhorst] Tons of tiny fixes, moved __dma_fence_init to the header and fixed the include mess. dma-fence.h now includes dma-buf.h. All members are now initialized, so kmalloc can be used for allocating a dma-fence. More documentation added.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com
I like the design of this, and especially that it's rather simple ;-)
A few comments to polish the interface, implementation and documentation a bit below.
 Documentation/DocBook/device-drivers.tmpl |   2
 drivers/base/Makefile                     |   2
 drivers/base/dma-fence.c                  | 268 +++++++++++++++++++++++++++++
 include/linux/dma-fence.h                 | 124 +++++++++++++
 4 files changed, 395 insertions(+), 1 deletion(-)
 create mode 100644 drivers/base/dma-fence.c
 create mode 100644 include/linux/dma-fence.h
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index 7514dbf..36252ac 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -126,6 +126,8 @@ X!Edrivers/base/interface.c
     </sect1>
     <sect1><title>Device Drivers DMA Management</title>
 !Edrivers/base/dma-buf.c
+!Edrivers/base/dma-fence.c
+!Iinclude/linux/dma-fence.h
 !Edrivers/base/dma-coherent.c
 !Edrivers/base/dma-mapping.c
     </sect1>
diff --git a/drivers/base/Makefile b/drivers/base/Makefile
index 5aa2d70..6e9f217 100644
--- a/drivers/base/Makefile
+++ b/drivers/base/Makefile
@@ -10,7 +10,7 @@ obj-$(CONFIG_CMA) += dma-contiguous.o
 obj-y += power/
 obj-$(CONFIG_HAS_DMA) += dma-mapping.o
 obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
-obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o
+obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o dma-fence.o
 obj-$(CONFIG_ISA) += isa.o
 obj-$(CONFIG_FW_LOADER) += firmware_class.o
 obj-$(CONFIG_NUMA) += node.o
diff --git a/drivers/base/dma-fence.c b/drivers/base/dma-fence.c
new file mode 100644
index 0000000..93448e4
--- /dev/null
+++ b/drivers/base/dma-fence.c
@@ -0,0 +1,268 @@
/*
- Fence mechanism for dma-buf to allow for asynchronous dma access
- Copyright (C) 2012 Texas Instruments
- Author: Rob Clark rob.clark@linaro.org
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License version 2 as published by
- the Free Software Foundation.
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
- You should have received a copy of the GNU General Public License along with
- this program. If not, see http://www.gnu.org/licenses/.
- */
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/export.h>
#include <linux/dma-fence.h>
+/**
- dma_fence_signal - signal completion of a fence
- @fence: the fence to signal
- All registered callbacks will be called directly (synchronously) and
- all blocked waiters will be awoken. This should always be called on
- software-only fences, or alternatively be called after
- dma_fence_ops::enable_signaling is called.
I think we need to be clearer here about when dma_fence_signal can be called:
- for a sw-only fence (i.e. created with dma_fence_create) dma_fence_signal _must_ be called under all circumstances.
- for any other fences, dma_fence_signal may be called, but it _must_ be called once the ->enable_signaling func has been called and returned 0 (i.e. success).
- it may be called only _once_.
As we discussed on irc it might be beneficial to be able to have it called twice, the second time would be a noop, however.
- */
int dma_fence_signal(struct dma_fence *fence)
{
unsigned long flags;
int ret = -EINVAL;
if (WARN_ON(!fence))
return -EINVAL;
spin_lock_irqsave(&fence->event_queue.lock, flags);
if (!fence->signaled) {
fence->signaled = true;
__wake_up_locked_key(&fence->event_queue, TASK_NORMAL,
&fence->event_queue);
ret = 0;
} else
WARN(1, "Already signaled");
spin_unlock_irqrestore(&fence->event_queue.lock, flags);
return ret;
}
EXPORT_SYMBOL_GPL(dma_fence_signal);
static void release_fence(struct kref *kref)
{
struct dma_fence *fence =
container_of(kref, struct dma_fence, refcount);
BUG_ON(waitqueue_active(&fence->event_queue));
if (fence->ops->release)
fence->ops->release(fence);
kfree(fence);
}
+/**
- dma_fence_put - decreases refcount of the fence
- @fence: [in] fence to reduce refcount of
- */
void dma_fence_put(struct dma_fence *fence)
{
if (WARN_ON(!fence))
return;
kref_put(&fence->refcount, release_fence);
}
EXPORT_SYMBOL_GPL(dma_fence_put);
+/**
- dma_fence_get - increases refcount of the fence
- @fence: [in] fence to increase refcount of
- */
void dma_fence_get(struct dma_fence *fence)
{
if (WARN_ON(!fence))
return;
kref_get(&fence->refcount);
}
EXPORT_SYMBOL_GPL(dma_fence_get);
static int check_signaling(struct dma_fence *fence)
{
bool enable_signaling = false, signaled;
unsigned long flags;
spin_lock_irqsave(&fence->event_queue.lock, flags);
signaled = fence->signaled;
if (!signaled && !fence->needs_sw_signal)
enable_signaling = fence->needs_sw_signal = true;
spin_unlock_irqrestore(&fence->event_queue.lock, flags);
if (enable_signaling) {
int ret;
/* At this point, if enable_signaling returns any error
* a wakeup has to be performed regardless.
* -ENOENT signals fence was already signaled. Any other error
* indicates a catastrophic hardware error.
*
* If any hardware error occurs, nothing can be done against
* it, so it's treated like the fence was already signaled.
* No synchronization can be performed, so we have to assume
* the fence was already signaled.
*/
ret = fence->ops->enable_signaling(fence);
if (ret) {
signaled = true;
dma_fence_signal(fence);
I think we should call dma_fence_signal only for -ENOENT and pass all other errors back as-is. E.g. on -ENOMEM or so we might want to retry ...
Also discussed on irc, boolean might be a better solution until we start dealing with hardware on fire. :) This would however likely be dealt in the same way as signaling, however.
}
}
if (!signaled)
return 0;
else
return -ENOENT;
}
static int
__dma_fence_wake_func(wait_queue_t *wait, unsigned mode, int flags, void *key)
{
struct dma_fence_cb *cb =
container_of(wait, struct dma_fence_cb, base);
__remove_wait_queue(key, wait);
return cb->func(cb, wait->private);
}
+/**
- dma_fence_add_callback - add a callback to be called when the fence
- is signaled
- @fence: [in] the fence to wait on
- @cb: [in] the callback to register
- @func: [in] the function to call
- @priv: [in] the argument to pass to function
- cb will be initialized by dma_fence_add_callback, no initialization
- by the caller is required. Any number of callbacks can be registered
- to a fence, but a callback can only be registered to one fence at a time.
- Note that the callback can be called from an atomic context. If
- fence is already signaled, this function will return -ENOENT (and
- *not* call the callback)
- */
int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
			   dma_fence_func_t func, void *priv)
{
unsigned long flags;
int ret;
if (WARN_ON(!fence || !func))
return -EINVAL;
ret = check_signaling(fence);
spin_lock_irqsave(&fence->event_queue.lock, flags);
if (!ret && fence->signaled)
ret = -ENOENT;
The locking here is a bit suboptimal: we grab the fence spinlock once in check_signaling and then again here. We should combine this into one critical section.
Fwiw, Maarten had the same thought. I had suggested keeping it clean/simple for now and getting it working, and then going back and optimizing afterwards, so you can blame this one on me :-P
I guess we could just inline the check_signaling() code, but I didn't want to do that yet. Or we could call check_signaling() with the lock already held, and just drop and re-acquire it around the relatively infrequent enable_signaling() callback.
There's nothing that would prevent us from doing it in 1 go and do enable_signaling after adding the callback. As danvet pointed out on irc, dma_fence_wait has to be reworked to remove a race condition anyway.
if (!ret) {
cb->base.flags = 0;
cb->base.func = __dma_fence_wake_func;
cb->base.private = priv;
cb->fence = fence;
cb->func = func;
__add_wait_queue(&fence->event_queue, &cb->base);
}
spin_unlock_irqrestore(&fence->event_queue.lock, flags);
return ret;
}
EXPORT_SYMBOL_GPL(dma_fence_add_callback);
I think for api completeness we should also have a dma_fence_remove_callback function.
We did originally, but Maarten found it was difficult to deal with properly when GPUs hang. I think his alternative was just to require the hung driver to signal the fence. I had kicked around the idea of a dma_fence_cancel() alternative to signal that could pass an error through to the waiting driver.. although not sure if the other driver could really do anything differently at that point.
No, there is a very real reason I removed dma_fence_remove_callback. It is absolutely non-trivial to cancel it once added, since you have to deal with all kinds of race conditions.. See i915_gem_reset_requests in my git tree: http://cgit.freedesktop.org/~mlankhorst/linux/commit/?id=673c4b2550bc63ec134...
This is the only way to do it completely deadlock/memory corruption free since you essentially have a locking inversion to avoid. I had it wrong the first 2 times too, even when I knew about a lot of the locking complications. If you want to do it, in most cases it will likely be easier to just eat the signal and ignore it instead of canceling.
+/**
- dma_fence_wait - wait for a fence to be signaled
- @fence: [in] The fence to wait on
- @intr: [in] if true, do an interruptible wait
- @timeout: [in] absolute time for timeout, in jiffies.
I don't quite like this, I think we should keep the style of all the other wait_*_timeout functions and pass the arg as a timeout in jiffies (and also use the same return semantics). Otherwise we'll have funny code that needs to handle return values differently depending upon whether it waits upon a dma_fence or a native object (where it would use the wait_*_timeout functions directly).
We did start out this way, but there was an ugly jiffies roll-over problem that was difficult to deal with properly. Using an absolute time avoided the problem.
Yeah, this makes it easier to wait on multiple fences, instead of resetting the timeout over and over and over again, or manually recalculating.
Also, I think we should add the non-_timeout variants, too, just for completeness.
Would it be ok if timeout == 0 is special, then?
- Returns 0 on success, -EBUSY if a timeout occurred,
- -ERESTARTSYS if the wait was interrupted by a signal.
- */
int dma_fence_wait(struct dma_fence *fence, bool intr, unsigned long timeout)
{
unsigned long cur;
int ret;
if (WARN_ON(!fence))
return -EINVAL;
cur = jiffies;
if (time_after_eq(cur, timeout))
return -EBUSY;
timeout -= cur;
ret = check_signaling(fence);
if (ret == -ENOENT)
return 0;
else if (ret)
return ret;
if (intr)
ret = wait_event_interruptible_timeout(fence->event_queue,
fence->signaled,
timeout);
We have a race here, since fence->signaled is protected by fence->event_queue.lock. There's a special variant of the wait_event macros that automatically drops a spinlock at the right time, which would fit here. Again, as for the callback function, I think you then need to open-code check_signaling to avoid taking the spinlock twice.
yeah, this would work for the call-check_signaling()-with-lock-already-held approach to get rid of the double lock..
Ok.
else
ret = wait_event_timeout(fence->event_queue,
fence->signaled, timeout);
if (ret > 0)
return 0;
else if (!ret)
return -EBUSY;
else
return ret;
}
EXPORT_SYMBOL_GPL(dma_fence_wait);
static int sw_enable_signaling(struct dma_fence *fence)
{
/* dma_fence_create sets needs_sw_signal,
* so this should never be called
*/
WARN_ON_ONCE(1);
return 0;
}
static const struct dma_fence_ops sw_fence_ops = {
.enable_signaling = sw_enable_signaling,
};
+/**
- dma_fence_create - create a simple sw-only fence
- @priv: [in] the value to use for the priv member
- This fence only supports signaling from/to CPU. Other implementations
- of dma-fence can be used to support hardware to hardware signaling, if
- supported by the hardware, and use the dma_fence_helper_* functions for
- compatibility with other devices that only support sw signaling.
- */
struct dma_fence *dma_fence_create(void *priv)
{
struct dma_fence *fence;
fence = kmalloc(sizeof(struct dma_fence), GFP_KERNEL);
if (!fence)
return NULL;
__dma_fence_init(fence, &sw_fence_ops, priv);
fence->needs_sw_signal = true;
return fence;
}
EXPORT_SYMBOL_GPL(dma_fence_create);
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
new file mode 100644
index 0000000..e0ceddd
--- /dev/null
+++ b/include/linux/dma-fence.h
@@ -0,0 +1,124 @@
/*
- Fence mechanism for dma-buf to allow for asynchronous dma access
- Copyright (C) 2012 Texas Instruments
- Author: Rob Clark rob.clark@linaro.org
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License version 2 as published by
- the Free Software Foundation.
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
- You should have received a copy of the GNU General Public License along with
- this program. If not, see http://www.gnu.org/licenses/.
- */
#ifndef __DMA_FENCE_H__
#define __DMA_FENCE_H__
#include <linux/err.h>
#include <linux/list.h>
#include <linux/wait.h>
#include <linux/dma-buf.h>
struct dma_fence;
struct dma_fence_ops;
struct dma_fence_cb;
+/**
- struct dma_fence - software synchronization primitive
- @refcount: refcount for this fence
- @ops: dma_fence_ops associated with this fence
- @priv: fence specific private data
- @event_queue: event queue used for signaling fence
- @signaled: whether this fence has been completed yet
- @needs_sw_signal: whether dma_fence_ops::enable_signaling
has been called yet
- Read Documentation/dma-buf-synchronization.txt for usage.
- */
struct dma_fence {
struct kref refcount;
const struct dma_fence_ops *ops;
wait_queue_head_t event_queue;
void *priv;
bool signaled:1;
bool needs_sw_signal:1;
I guess a comment here is in order that signaled and needs_sw_signal are protected by event_queue.lock. Also, since the compiler is rather free to do crazy stuff with bitfields, I think it's preferred style to use an unsigned long and explicit bit #defines (to ensure the compiler doesn't generate loads/stores that leak to other members of the struct).
yeah, good point.. I guess we should just change that to be an 'unsigned long' bitmask.
BR, -R
+1
};
typedef int (*dma_fence_func_t)(struct dma_fence_cb *cb, void *priv);
+/**
- struct dma_fence_cb - callback for dma_fence_add_callback
- @base: wait_queue_t added to event_queue
- @func: dma_fence_func_t to call
- @fence: fence this dma_fence_cb was used on
- This struct will be initialized by dma_fence_add_callback, additional
- data can be passed along by embedding dma_fence_cb in another struct.
- */
struct dma_fence_cb {
wait_queue_t base;
dma_fence_func_t func;
struct dma_fence *fence;
};
+/**
- struct dma_fence_ops - operations implemented for dma-fence
- @enable_signaling: enable software signaling of fence
- @release: [optional] called on destruction of fence
- Notes on enable_signaling:
- For fence implementations that have the capability for hw->hw
- signaling, they can implement this op to enable the necessary
- irqs, or insert commands into cmdstream, etc. This is called
- in the first wait() or add_callback() path to let the fence
- implementation know that there is another driver waiting on
- the signal (ie. hw->sw case).
- A return value of -ENOENT will indicate that the fence has
- already passed. Any other errors will be treated as -ENOENT,
- and can happen because of hardware failure.
- */
I think we need to specify the calling contexts of these two.
struct dma_fence_ops {
int (*enable_signaling)(struct dma_fence *fence);
I think we should mandate that enable_signaling can be called from atomic context, but not irq context (since I don't see a use-case for calling this from irq context).
What would not having this called from irq context get you? I do agree that you would be crazy to do so, but not sure why we should restrict it.
void (*release)(struct dma_fence *fence);
Since a waiter might call ->release as a reaction to a signal, I think the release callback must be able to handle any calling context, and especially anything that calls dma_fence_signal.
Agreed. It is also the most likely case it will be called from irq context.
};
struct dma_fence *dma_fence_create(void *priv);
+/**
- __dma_fence_init - Initialize a custom dma_fence.
- @fence: [in] The fence to initialize
- @ops: [in] The dma_fence_ops for operations on this fence.
- @priv: [in] The value to use for the priv member.
- */
static inline void
__dma_fence_init(struct dma_fence *fence,
		 const struct dma_fence_ops *ops, void *priv)
{
WARN_ON(!ops || !ops->enable_signaling);
kref_init(&fence->refcount);
fence->ops = ops;
fence->priv = priv;
fence->needs_sw_signal = false;
fence->signaled = false;
init_waitqueue_head(&fence->event_queue);
}
void dma_fence_get(struct dma_fence *fence);
void dma_fence_put(struct dma_fence *fence);
int dma_fence_signal(struct dma_fence *fence);
int dma_fence_wait(struct dma_fence *fence, bool intr, unsigned long timeout);
int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
dma_fence_func_t func, void *priv);
#endif /* __DMA_FENCE_H__ */
Linaro-mm-sig mailing list Linaro-mm-sig@lists.linaro.org http://lists.linaro.org/mailman/listinfo/linaro-mm-sig
--
Daniel Vetter
Mail: daniel@ffwll.ch
Mobile: +41 (0)79 365 57 48
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel
~Maarten
On Sat, Aug 11, 2012 at 06:00:46PM +0200, Maarten Lankhorst wrote:
Op 11-08-12 17:14, Rob Clark schreef:
On Fri, Aug 10, 2012 at 3:29 PM, Daniel Vetter daniel@ffwll.ch wrote:
+/**
- dma_fence_signal - signal completion of a fence
- @fence: the fence to signal
- All registered callbacks will be called directly (synchronously) and
- all blocked waiters will be awoken. This should always be called on
- software-only fences, or alternatively be called after
- dma_fence_ops::enable_signaling is called.
I think we need to be clearer here about when dma_fence_signal can be called:
- for a sw-only fence (i.e. created with dma_fence_create) dma_fence_signal _must_ be called under all circumstances.
- for any other fences, dma_fence_signal may be called, but it _must_ be called once the ->enable_signaling func has been called and returned 0 (i.e. success).
- it may be called only _once_.
As we discussed on irc it might be beneficial to be able to have it called twice, the second time would be a noop, however.
Agreed.
[snip]
/* At this point, if enable_signaling returns any error
* a wakeup has to be performed regardless.
* -ENOENT signals fence was already signaled. Any other error
* indicates a catastrophic hardware error.
*
* If any hardware error occurs, nothing can be done against
* it, so it's treated like the fence was already signaled.
* No synchronization can be performed, so we have to assume
* the fence was already signaled.
*/
ret = fence->ops->enable_signaling(fence);
if (ret) {
signaled = true;
dma_fence_signal(fence);
I think we should call dma_fence_signal only for -ENOENT and pass all other errors back as-is. E.g. on -ENOMEM or so we might want to retry ...
Also discussed on irc, boolean might be a better solution until we start dealing with hardware on fire. :) This would however likely be dealt in the same way as signaling, however.
Agreed.
[snip]
if (!ret) {
cb->base.flags = 0;
cb->base.func = __dma_fence_wake_func;
cb->base.private = priv;
cb->fence = fence;
cb->func = func;
__add_wait_queue(&fence->event_queue, &cb->base);
}
spin_unlock_irqrestore(&fence->event_queue.lock, flags);
return ret;
+} +EXPORT_SYMBOL_GPL(dma_fence_add_callback);
I think for api completeness we should also have a dma_fence_remove_callback function.
We did originally, but Maarten found it was difficult to deal with properly when GPUs hang. I think his alternative was just to require the hung driver to signal the fence. I had kicked around the idea of a dma_fence_cancel() alternative to signal that could pass an error through to the waiting driver.. although not sure if the other driver could really do anything differently at that point.
No, there is a very real reason I removed dma_fence_remove_callback. It is absolutely non-trivial to cancel it once added, since you have to deal with all kinds of race conditions.. See i915_gem_reset_requests in my git tree: http://cgit.freedesktop.org/~mlankhorst/linux/commit/?id=673c4b2550bc63ec134...
I don't see the point in that code ... Why can't we drop the kref _outside_ of the critical section protected by event_queue_lock? Then you pretty much have an open-coded version of dma_fence_callback_cancel in there.
This is the only way to do it completely deadlock/memory corruption free since you essentially have a locking inversion to avoid. I had it wrong the first 2 times too, even when I knew about a lot of the locking complications. If you want to do it, in most cases it will likely be easier to just eat the signal and ignore it instead of canceling.
+/**
- dma_fence_wait - wait for a fence to be signaled
- @fence: [in] The fence to wait on
- @intr: [in] if true, do an interruptible wait
- @timeout: [in] absolute time for timeout, in jiffies.
I don't quite like this, I think we should keep the style of all the other wait_*_timeout functions and pass the arg as a timeout in jiffies (and also use the same return semantics). Otherwise we'll have funny code that needs to handle return values differently depending upon whether it waits upon a dma_fence or a native object (where it would use the wait_*_timeout functions directly).
We did start out this way, but there was an ugly jiffies roll-over problem that was difficult to deal with properly. Using an absolute time avoided the problem.
Yeah, this makes it easier to wait on multiple fences, instead of resetting the timeout over and over and over again, or manually recalculating.
I don't see how updating the jiffies_left timeout is that onerous, and in any case we can easily wrap that up into a little helper function, passing in an array of dma_fence pointers.
Creating interfaces that differ from established kernel api patterns otoh isn't good imo. I.e. I want dma_fence_wait_bla to be a drop-in replacement for the corresponding wait_event_bla function/macro, with the same semantics for the timeout and return values.
Differing in such things only leads to confusion when reading patches imo.
Also, I think we should add the non-_timeout variants, too, just for completeness.
Would it be ok if timeout == 0 is special, then?
See above ;-)
[snip]
+struct dma_fence_ops {
int (*enable_signaling)(struct dma_fence *fence);
I think we should mandate that enable_signalling can be called from atomic context, but not irq context (since I don't see a use-case for calling this from irq context).
What would not having this called from irq context get you? I do agree that you would be crazy to do so, but not sure why we should restrict it.
If we allow ->enable_signaling to be called from irq context, all spinlocks the driver grabs need to be irq safe. If we disallow this I guess some drivers could easily get by with plain spinlocks.
And since we both agree that it would be crazy to call ->enable_signaling from irq context, I think we should bake this constraint into the interface.
Cheers, Daniel
Hey,
Op 11-08-12 21:39, Daniel Vetter schreef:
if (!ret) {
cb->base.flags = 0;
cb->base.func = __dma_fence_wake_func;
cb->base.private = priv;
cb->fence = fence;
cb->func = func;
__add_wait_queue(&fence->event_queue, &cb->base);
}
spin_unlock_irqrestore(&fence->event_queue.lock, flags);
return ret;
+} +EXPORT_SYMBOL_GPL(dma_fence_add_callback);
I think for api completeness we should also have a dma_fence_remove_callback function.
We did originally, but Maarten found it was difficult to deal with properly when GPUs hang. I think his alternative was just to require the hung driver to signal the fence. I had kicked around the idea of a dma_fence_cancel() alternative to signal that could pass an error through to the waiting driver.. although not sure if the other driver could really do anything differently at that point.
No, there is a very real reason I removed dma_fence_remove_callback. It is absolutely non-trivial to cancel it once added, since you have to deal with all kinds of race conditions.. See i915_gem_reset_requests in my git tree: http://cgit.freedesktop.org/~mlankhorst/linux/commit/?id=673c4b2550bc63ec134...
I don't see the point in that code ... Why can't we drop the kref _outside_ of the critical section protected by event_queue_lock? Then you pretty much have an open-coded version of dma_fence_callback_cancel in there.
The event_queue_lock protects 2 things:

1. The refcount to the dma_fence won't drop to 0 while val->fences[i] != NULL. The creator is supposed to keep a refcount until after dma_fence_signal returns. Adding a refcount you release in the callback won't help you here much.
2. The integrity of request->prime_list. The list_del's are otherwise not serialized, leaving a corrupted list if 2 fences signal at the same time. kref_put in the non-freeing case is simply an atomic decrement, so there's no measurable penalty for keeping it inside the lock.
So no, you could remove it from the kref_put, but val->fences[i] = NULL assignment would still need it, so there's no real penalty left for putting kref_put in the spinlock to also protect the second case without dropping/retaking lock.
I'll add a dma_fence_remove_callback that returns a bool of whether the callback was removed or not. In the latter case the fence already fired. However, if you call dma_fence_remove_callback twice, on the wrong fence, or without ever calling dma_fence_add_callback, you'd get undefined behavior, and there's no guarantee I could detect such a situation, but with those constraints I think it could be useful to have.
It sucks but prime_rm_lock is the inner lock so the only way not to deadlock is doing what I'm doing there, or not getting the hardware locked in the first place.
This is the only way to do it completely deadlock/memory corruption free since you essentially have a locking inversion to avoid. I had it wrong the first 2 times too, even when I knew about a lot of the locking complications. If you want to do it, in most cases it will likely be easier to just eat the signal and ignore it instead of canceling.
+/**
- dma_fence_wait - wait for a fence to be signaled
- @fence: [in] The fence to wait on
- @intr: [in] if true, do an interruptible wait
- @timeout: [in] absolute time for timeout, in jiffies.
I don't quite like this, I think we should keep the style of all the other wait_*_timeout functions and pass the arg as a timeout in jiffies (and also use the same return semantics). Otherwise we'll have funny code that needs to handle return values differently depending upon whether it waits upon a dma_fence or a native object (where it would use the wait_*_timeout functions directly).
We did start out this way, but there was an ugly jiffies roll-over problem that was difficult to deal with properly. Using an absolute time avoided the problem.
Yeah, this makes it easier to wait on multiple fences, instead of resetting the timeout over and over and over again, or manually recalculating.
I don't see how updating the jiffies_left timeout is that onerous, and in any case we can easily wrap that up into a little helper function, passing in an array of dma_fence pointers.
Creating interfaces that differ from established kernel api patterns otoh isn't good imo. I.e. I want dma_fence_wait_bla to be a drop-in replacement for the corresponding wait_event_bla function/macro, with the same semantics for the timeout and return values.
Differing in such things only leads to confusion when reading patches imo.
Ok, I'll see if I can make a set of functions that follow the normal rules for these types of functions.
~Maarten
On Sat, Aug 11, 2012 at 10:14:40AM -0500, Rob Clark wrote:
On Fri, Aug 10, 2012 at 3:29 PM, Daniel Vetter daniel@ffwll.ch wrote:
On Fri, Aug 10, 2012 at 04:57:52PM +0200, Maarten Lankhorst wrote:
if (!ret) {
cb->base.flags = 0;
cb->base.func = __dma_fence_wake_func;
cb->base.private = priv;
cb->fence = fence;
cb->func = func;
__add_wait_queue(&fence->event_queue, &cb->base);
}
spin_unlock_irqrestore(&fence->event_queue.lock, flags);
return ret;
+} +EXPORT_SYMBOL_GPL(dma_fence_add_callback);
I think for api completeness we should also have a dma_fence_remove_callback function.
We did originally, but Maarten found it was difficult to deal with properly when GPUs hang. I think his alternative was just to require the hung driver to signal the fence. I had kicked around the idea of a dma_fence_cancel() alternative to signal that could pass an error through to the waiting driver.. although not sure if the other driver could really do anything differently at that point.
Well, the idea is not to cancel all callbacks, but just a single one, in case a driver wants to somehow abort the wait. E.g. when your own gpu dies I guess we should clear all these fence callbacks so that they can't clobber the hw state after the reset.
+/**
- dma_fence_wait - wait for a fence to be signaled
- @fence: [in] The fence to wait on
- @intr: [in] if true, do an interruptible wait
- @timeout: [in] absolute time for timeout, in jiffies.
I don't quite like this, I think we should keep the style of all the other wait_*_timeout functions and pass the arg as a timeout in jiffies (and also use the same return semantics). Otherwise we'll have funny code that needs to handle return values differently depending upon whether it waits upon a dma_fence or a native object (where it would use the wait_*_timeout functions directly).
We did start out this way, but there was an ugly jiffies roll-over problem that was difficult to deal with properly. Using an absolute time avoided the problem.
Well, as-is the api works differently than all the other _timeout apis I've seen in the kernel, which makes it confusing. Also, I don't quite see what jiffies wraparound issue there is?
Also, I think we should add the non-_timeout variants, too, just for completeness.
This request here has the same reasons, essentially: If we offer a dma_fence wait api that matches the usual wait apis closely, it's harder to get their usage wrong. I know that i915 has some major cludge for a wait_seqno interface internally, but that's no reason to copy that approach ;-)
Cheers, Daniel
On Sat, Aug 11, 2012 at 2:22 PM, Daniel Vetter daniel@ffwll.ch wrote:
+/**
- dma_fence_wait - wait for a fence to be signaled
- @fence: [in] The fence to wait on
- @intr: [in] if true, do an interruptible wait
- @timeout: [in] absolute time for timeout, in jiffies.
I don't quite like this, I think we should keep the style of all other wait_*_timeout functions and pass the arg as timeout in jiffies (and also the same return semantics). Otherwise we'll have funny code that needs to handle return values differently depending upon whether it waits upon a dma_fence or a native object (where it would use the wait_*_timeout functions directly).
We did start out this way, but there was an ugly jiffies roll-over problem that was difficult to deal with properly. Using an absolute time avoided the problem.
Well, as-is the api works differently than all the other _timeout apis I've seen in the kernel, which makes it confusing. Also, I don't quite see what jiffies wraparound issue there is?
iirc, the problem was in dmabufmgr, in dmabufmgr_wait_completed_cpu().. with an absolute timeout, it could loop over all the fences without having to adjust the timeout for the elapsed time. Otherwise it had to adjust the timeout and keep track of when the timeout elapsed without confusing itself via rollover.
BR, -R
On Fri, Aug 10, 2012 at 04:57:52PM +0200, Maarten Lankhorst wrote:
A dma-fence can be attached to a buffer which is being filled or consumed by hw, to allow userspace to pass the buffer without waiting to another device. For example, userspace can call page_flip ioctl to display the next frame of graphics after kicking the GPU but while the GPU is still rendering. The display device sharing the buffer with the GPU would attach a callback to get notified when the GPU's rendering-complete IRQ fires, to update the scan-out address of the display, without having to wake up userspace.
A dma-fence is a transient, one-shot deal. It is allocated and attached to one or more dma-buf's. When the one that attached it is done with the pending operation, it can signal the fence:
- dma_fence_signal()
The dma-buf-mgr handles tracking, and waiting on, the fences associated with a dma-buf.
TODO maybe need some helper fxn for simple devices, like a display- only drm/kms device which simply wants to wait for exclusive fence to be signaled, and then attach a non-exclusive fence while scanout is in progress.
The one pending on the fence can add an async callback:
- dma_fence_add_callback()
The callback can optionally be cancelled with remove_wait_queue()
Or wait synchronously (optionally with timeout or interruptible):
- dma_fence_wait()
A default software-only implementation is provided, which can be used by drivers attaching a fence to a buffer when they have no other means for hw sync. But a memory-backed fence is also envisioned, because it is common that GPUs can write to, or poll on, some memory location for synchronization. For example:
fence = dma_buf_get_fence(dmabuf);
if (fence->ops == &bikeshed_fence_ops) {
	dma_buf *fence_buf;
	dma_bikeshed_fence_get_buf(fence, &fence_buf, &offset);
	... tell the hw the memory location to wait on ...
} else {
	/* fall-back to sw sync */
	dma_fence_add_callback(fence, my_cb);
}
On SoC platforms, if some other hw mechanism is provided for synchronizing between IP blocks, it could be supported as an alternate implementation with its own fence ops in a similar way.
To facilitate other non-sw implementations, the enable_signaling callback can be used to keep track of whether a device not supporting hw sync is waiting on the fence, and in this case should arrange to call dma_fence_signal() at some point after the condition has changed, to notify other devices waiting on the fence. If there are no sw waiters, this can be skipped to avoid waking the CPU unnecessarily. The handler of the enable_signaling op should take a refcount until the fence is signaled, then release its ref.
The intention is to provide a userspace interface (presumably via eventfd) later, to be used in conjunction with dma-buf's mmap support for sw access to buffers (or for userspace apps that would prefer to do their own synchronization).
v1: Original v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided that dma-fence didn't need to care about the sw->hw signaling path (it can be handled same as sw->sw case), and therefore the fence->ops can be simplified and more handled in the core. So remove the signal, add_callback, cancel_callback, and wait ops, and replace with a simple enable_signaling() op which can be used to inform a fence supporting hw->hw signaling that one or more devices which do not support hw signaling are waiting (and therefore it should enable an irq or do whatever is necessary in order that the CPU is notified when the fence is passed). v3: Fix locking fail in attach_fence() and get_fence() v4: Remove tie-in w/ dma-buf.. after discussion w/ danvet and mlankhorst we decided that we need to be able to attach one fence to N dma-buf's, so using the list_head in dma-fence struct would be problematic. v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager. v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some comments about checking if fence fired or not. This is broken by design. waitqueue_active during destruction is now fatal, since the signaller should be holding a reference in enable_signalling until it signalled the fence. Pass the original dma_fence_cb along, and call __remove_wait in the dma_fence_callback handler, so that no cleanup needs to be performed. v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if fence wasn't signaled yet, for example for hardware fences that may choose to signal blindly. v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to header and fixed include mess. dma-fence.h now includes dma-buf.h. All members are now initialized, so kmalloc can be used for allocating a dma-fence. More documentation added.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com
Another thing I've noticed that is missing from the fence api:
bool dma_fence_is_signaled(fence)
{
	rmb();
	return fence->signaled;
}
Since we only require a fence to monotonically switch from "not signalled" to "signalled", the rmb should be good enough to enforce that and we don't need to grab the spinlock (since especially the irq disabling is a bit expensive). -Daniel
Documentation/DocBook/device-drivers.tmpl | 2 drivers/base/Makefile | 2 drivers/base/dma-fence.c | 268 +++++++++++++++++++++++++++++ include/linux/dma-fence.h | 124 +++++++++++++ 4 files changed, 395 insertions(+), 1 deletion(-) create mode 100644 drivers/base/dma-fence.c create mode 100644 include/linux/dma-fence.h
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl index 7514dbf..36252ac 100644 --- a/Documentation/DocBook/device-drivers.tmpl +++ b/Documentation/DocBook/device-drivers.tmpl @@ -126,6 +126,8 @@ X!Edrivers/base/interface.c </sect1> <sect1><title>Device Drivers DMA Management</title> !Edrivers/base/dma-buf.c +!Edrivers/base/dma-fence.c +!Iinclude/linux/dma-fence.h !Edrivers/base/dma-coherent.c !Edrivers/base/dma-mapping.c </sect1> diff --git a/drivers/base/Makefile b/drivers/base/Makefile index 5aa2d70..6e9f217 100644 --- a/drivers/base/Makefile +++ b/drivers/base/Makefile @@ -10,7 +10,7 @@ obj-$(CONFIG_CMA) += dma-contiguous.o obj-y += power/ obj-$(CONFIG_HAS_DMA) += dma-mapping.o obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o -obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o +obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o dma-fence.o obj-$(CONFIG_ISA) += isa.o obj-$(CONFIG_FW_LOADER) += firmware_class.o obj-$(CONFIG_NUMA) += node.o diff --git a/drivers/base/dma-fence.c b/drivers/base/dma-fence.c new file mode 100644 index 0000000..93448e4 --- /dev/null +++ b/drivers/base/dma-fence.c @@ -0,0 +1,268 @@ +/*
- Fence mechanism for dma-buf to allow for asynchronous dma access
- Copyright (C) 2012 Texas Instruments
- Author: Rob Clark rob.clark@linaro.org
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License version 2 as published by
- the Free Software Foundation.
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
- You should have received a copy of the GNU General Public License along with
- this program. If not, see http://www.gnu.org/licenses/.
- */
+#include <linux/slab.h> +#include <linux/sched.h> +#include <linux/export.h> +#include <linux/dma-fence.h>
+/**
- dma_fence_signal - signal completion of a fence
- @fence: the fence to signal
- All registered callbacks will be called directly (synchronously) and
- all blocked waiters will be awoken. This should always be called on
- software only fences, or alternatively be called after
- dma_fence_ops::enable_signaling is called.
- */
+int dma_fence_signal(struct dma_fence *fence) +{
- unsigned long flags;
- int ret = -EINVAL;
- if (WARN_ON(!fence))
return -EINVAL;
- spin_lock_irqsave(&fence->event_queue.lock, flags);
- if (!fence->signaled) {
fence->signaled = true;
__wake_up_locked_key(&fence->event_queue, TASK_NORMAL,
&fence->event_queue);
ret = 0;
- } else
WARN(1, "Already signaled");
- spin_unlock_irqrestore(&fence->event_queue.lock, flags);
- return ret;
+} +EXPORT_SYMBOL_GPL(dma_fence_signal);
+static void release_fence(struct kref *kref) +{
- struct dma_fence *fence =
container_of(kref, struct dma_fence, refcount);
- BUG_ON(waitqueue_active(&fence->event_queue));
- if (fence->ops->release)
fence->ops->release(fence);
- kfree(fence);
+}
+/**
- dma_fence_put - decreases refcount of the fence
- @fence: [in] fence to reduce refcount of
- */
+void dma_fence_put(struct dma_fence *fence) +{
- if (WARN_ON(!fence))
return;
- kref_put(&fence->refcount, release_fence);
+} +EXPORT_SYMBOL_GPL(dma_fence_put);
+/**
- dma_fence_get - increases refcount of the fence
- @fence: [in] fence to increase refcount of
- */
+void dma_fence_get(struct dma_fence *fence) +{
- if (WARN_ON(!fence))
return;
- kref_get(&fence->refcount);
+} +EXPORT_SYMBOL_GPL(dma_fence_get);
+static int check_signaling(struct dma_fence *fence) +{
- bool enable_signaling = false, signaled;
- unsigned long flags;
- spin_lock_irqsave(&fence->event_queue.lock, flags);
- signaled = fence->signaled;
- if (!signaled && !fence->needs_sw_signal)
enable_signaling = fence->needs_sw_signal = true;
- spin_unlock_irqrestore(&fence->event_queue.lock, flags);
- if (enable_signaling) {
int ret;
/* At this point, if enable_signaling returns any error
* a wakeup has to be performed regardless.
* -ENOENT signals fence was already signaled. Any other error
* indicates a catastrophic hardware error.
*
* If any hardware error occurs, nothing can be done against
* it, so it's treated like the fence was already signaled.
* No synchronization can be performed, so we have to assume
* the fence was already signaled.
*/
ret = fence->ops->enable_signaling(fence);
if (ret) {
signaled = true;
dma_fence_signal(fence);
}
- }
- if (!signaled)
return 0;
- else
return -ENOENT;
+}
+static int +__dma_fence_wake_func(wait_queue_t *wait, unsigned mode, int flags, void *key) +{
- struct dma_fence_cb *cb =
container_of(wait, struct dma_fence_cb, base);
- __remove_wait_queue(key, wait);
- return cb->func(cb, wait->private);
+}
+/**
- dma_fence_add_callback - add a callback to be called when the fence
- is signaled
- @fence: [in] the fence to wait on
- @cb: [in] the callback to register
- @func: [in] the function to call
- @priv: [in] the argument to pass to function
- cb will be initialized by dma_fence_add_callback, no initialization
- by the caller is required. Any number of callbacks can be registered
- to a fence, but a callback can only be registered to one fence at a time.
- Note that the callback can be called from an atomic context. If
- fence is already signaled, this function will return -ENOENT (and
- *not* call the callback)
- */
+int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
dma_fence_func_t func, void *priv)
+{
- unsigned long flags;
- int ret;
- if (WARN_ON(!fence || !func))
return -EINVAL;
- ret = check_signaling(fence);
- spin_lock_irqsave(&fence->event_queue.lock, flags);
- if (!ret && fence->signaled)
ret = -ENOENT;
- if (!ret) {
cb->base.flags = 0;
cb->base.func = __dma_fence_wake_func;
cb->base.private = priv;
cb->fence = fence;
cb->func = func;
__add_wait_queue(&fence->event_queue, &cb->base);
- }
- spin_unlock_irqrestore(&fence->event_queue.lock, flags);
- return ret;
+} +EXPORT_SYMBOL_GPL(dma_fence_add_callback);
+/**
- dma_fence_wait - wait for a fence to be signaled
- @fence: [in] The fence to wait on
- @intr: [in] if true, do an interruptible wait
- @timeout: [in] absolute time for timeout, in jiffies.
- Returns 0 on success, -EBUSY if a timeout occurred,
- -ERESTARTSYS if the wait was interrupted by a signal.
- */
+int dma_fence_wait(struct dma_fence *fence, bool intr, unsigned long timeout) +{
- unsigned long cur;
- int ret;
- if (WARN_ON(!fence))
return -EINVAL;
- cur = jiffies;
- if (time_after_eq(cur, timeout))
return -EBUSY;
- timeout -= cur;
- ret = check_signaling(fence);
- if (ret == -ENOENT)
return 0;
- else if (ret)
return ret;
- if (intr)
ret = wait_event_interruptible_timeout(fence->event_queue,
fence->signaled,
timeout);
- else
ret = wait_event_timeout(fence->event_queue,
fence->signaled, timeout);
- if (ret > 0)
return 0;
- else if (!ret)
return -EBUSY;
- else
return ret;
+} +EXPORT_SYMBOL_GPL(dma_fence_wait);
+static int sw_enable_signaling(struct dma_fence *fence) +{
- /* dma_fence_create sets needs_sw_signal,
* so this should never be called
*/
- WARN_ON_ONCE(1);
- return 0;
+}
+static const struct dma_fence_ops sw_fence_ops = {
- .enable_signaling = sw_enable_signaling,
+};
+/**
- dma_fence_create - create a simple sw-only fence
- @priv: [in] the value to use for the priv member
- This fence only supports signaling from/to CPU. Other implementations
- of dma-fence can be used to support hardware to hardware signaling, if
- supported by the hardware, and use the dma_fence_helper_* functions for
- compatibility with other devices that only support sw signaling.
- */
+struct dma_fence *dma_fence_create(void *priv) +{
- struct dma_fence *fence;
- fence = kmalloc(sizeof(struct dma_fence), GFP_KERNEL);
- if (!fence)
return NULL;
- __dma_fence_init(fence, &sw_fence_ops, priv);
- fence->needs_sw_signal = true;
- return fence;
+} +EXPORT_SYMBOL_GPL(dma_fence_create); diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h new file mode 100644 index 0000000..e0ceddd --- /dev/null +++ b/include/linux/dma-fence.h @@ -0,0 +1,124 @@ +/*
- Fence mechanism for dma-buf to allow for asynchronous dma access
- Copyright (C) 2012 Texas Instruments
- Author: Rob Clark rob.clark@linaro.org
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License version 2 as published by
- the Free Software Foundation.
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
- You should have received a copy of the GNU General Public License along with
- this program. If not, see http://www.gnu.org/licenses/.
- */
+#ifndef __DMA_FENCE_H__ +#define __DMA_FENCE_H__
+#include <linux/err.h> +#include <linux/list.h> +#include <linux/wait.h> +#include <linux/list.h> +#include <linux/dma-buf.h>
+struct dma_fence; +struct dma_fence_ops; +struct dma_fence_cb;
+/**
- struct dma_fence - software synchronization primitive
- @refcount: refcount for this fence
- @ops: dma_fence_ops associated with this fence
- @priv: fence specific private data
- @event_queue: event queue used for signaling fence
- @signaled: whether this fence has been completed yet
- @needs_sw_signal: whether dma_fence_ops::enable_signaling
has been called yet
- Read Documentation/dma-buf-synchronization.txt for usage.
- */
+struct dma_fence {
- struct kref refcount;
- const struct dma_fence_ops *ops;
- wait_queue_head_t event_queue;
- void *priv;
- bool signaled:1;
- bool needs_sw_signal:1;
+};
+typedef int (*dma_fence_func_t)(struct dma_fence_cb *cb, void *priv);
+/**
- struct dma_fence_cb - callback for dma_fence_add_callback
- @base: wait_queue_t added to event_queue
- @func: dma_fence_func_t to call
- @fence: fence this dma_fence_cb was used on
- This struct will be initialized by dma_fence_add_callback, additional
- data can be passed along by embedding dma_fence_cb in another struct.
- */
+struct dma_fence_cb {
- wait_queue_t base;
- dma_fence_func_t func;
- struct dma_fence *fence;
+};
+/**
- struct dma_fence_ops - operations implemented for dma-fence
- @enable_signaling: enable software signaling of fence
- @release: [optional] called on destruction of fence
- Notes on enable_signaling:
- For fence implementations that have the capability for hw->hw
- signaling, they can implement this op to enable the necessary
- irqs, or insert commands into cmdstream, etc. This is called
- in the first wait() or add_callback() path to let the fence
- implementation know that there is another driver waiting on
- the signal (ie. hw->sw case).
- A return value of -ENOENT will indicate that the fence has
- already passed. Any other errors will be treated as -ENOENT,
- and can happen because of hardware failure.
- */
+struct dma_fence_ops {
- int (*enable_signaling)(struct dma_fence *fence);
- void (*release)(struct dma_fence *fence);
+};
+struct dma_fence *dma_fence_create(void *priv);
+/**
- __dma_fence_init - Initialize a custom dma_fence.
- @fence: [in] The fence to initialize
- @ops: [in] The dma_fence_ops for operations on this fence.
- @priv: [in] The value to use for the priv member.
- */
+static inline void +__dma_fence_init(struct dma_fence *fence,
const struct dma_fence_ops *ops, void *priv)
+{
- WARN_ON(!ops || !ops->enable_signaling);
- kref_init(&fence->refcount);
- fence->ops = ops;
- fence->priv = priv;
- fence->needs_sw_signal = false;
- fence->signaled = false;
- init_waitqueue_head(&fence->event_queue);
+}
+void dma_fence_get(struct dma_fence *fence); +void dma_fence_put(struct dma_fence *fence);
+int dma_fence_signal(struct dma_fence *fence); +int dma_fence_wait(struct dma_fence *fence, bool intr, unsigned long timeout); +int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
dma_fence_func_t func, void *priv);
+#endif /* __DMA_FENCE_H__ */
Linaro-mm-sig mailing list Linaro-mm-sig@lists.linaro.org http://lists.linaro.org/mailman/listinfo/linaro-mm-sig
Hi,
On Fri, Aug 10, 2012 at 4:57 PM, Maarten Lankhorst maarten.lankhorst@canonical.com wrote:
[Snip]
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h new file mode 100644 index 0000000..e0ceddd --- /dev/null +++ b/include/linux/dma-fence.h @@ -0,0 +1,124 @@ +/*
- Fence mechanism for dma-buf to allow for asynchronous dma access
- Copyright (C) 2012 Texas Instruments
- Author: Rob Clark rob.clark@linaro.org
- This program is free software; you can redistribute it and/or modify
it
- under the terms of the GNU General Public License version 2 as
published by
- the Free Software Foundation.
- This program is distributed in the hope that it will be useful, but
WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
for
- more details.
- You should have received a copy of the GNU General Public License
along with
- this program. If not, see http://www.gnu.org/licenses/.
- */
+#ifndef __DMA_FENCE_H__ +#define __DMA_FENCE_H__
+#include <linux/err.h> +#include <linux/list.h> +#include <linux/wait.h> +#include <linux/list.h>
Duplicated include.
Regards, Francesco
This type of fence can be used with hardware synchronization for simple hardware that can block execution until the condition (dma_buf[offset] - value) >= 0 has been met.
A software fallback still has to be provided in case the fence is used with a device that doesn't support this mechanism. It is useful to expose this for graphics cards that have an op to support this.
Some cards like i915 can export those, but don't have an option to wait, so they need the software fallback.
I extended the original patch by Rob Clark.
v1: Original v2: Renamed from bikeshed to seqno, moved into dma-fence.c since not much was left of the file. Lots of documentation added.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com --- drivers/base/dma-fence.c | 21 +++++++++++++++ include/linux/dma-fence.h | 61 +++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 82 insertions(+)
diff --git a/drivers/base/dma-fence.c b/drivers/base/dma-fence.c index 93448e4..4092a58 100644 --- a/drivers/base/dma-fence.c +++ b/drivers/base/dma-fence.c @@ -266,3 +266,24 @@ struct dma_fence *dma_fence_create(void *priv) return fence; } EXPORT_SYMBOL_GPL(dma_fence_create); + +static int seqno_enable_signaling(struct dma_fence *fence) +{ + struct dma_seqno_fence *seqno_fence = to_seqno_fence(fence); + return seqno_fence->enable_signaling(seqno_fence); +} + +static void seqno_release(struct dma_fence *fence) +{ + struct dma_seqno_fence *f = to_seqno_fence(fence); + + if (f->release) + f->release(f); + dma_buf_put(f->sync_buf); +} + +const struct dma_fence_ops dma_seqno_fence_ops = { + .enable_signaling = seqno_enable_signaling, + .release = seqno_release +}; +EXPORT_SYMBOL_GPL(dma_seqno_fence_ops); diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h index e0ceddd..3ef0da0 100644 --- a/include/linux/dma-fence.h +++ b/include/linux/dma-fence.h @@ -91,6 +91,19 @@ struct dma_fence_ops { void (*release)(struct dma_fence *fence); };
+struct dma_seqno_fence { + struct dma_fence base; + + struct dma_buf *sync_buf; + uint32_t seqno_ofs; + uint32_t seqno; + + int (*enable_signaling)(struct dma_seqno_fence *fence); + void (*release)(struct dma_seqno_fence *fence); +}; + +extern const struct dma_fence_ops dma_seqno_fence_ops; + struct dma_fence *dma_fence_create(void *priv);
/** @@ -121,4 +134,52 @@ int dma_fence_wait(struct dma_fence *fence, bool intr, unsigned long timeout); int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb, dma_fence_func_t func, void *priv);
+/** + * to_seqno_fence - cast a dma_fence to a dma_seqno_fence + * @fence: dma_fence to cast to a dma_seqno_fence + * + * Returns NULL if the dma_fence is not a dma_seqno_fence, + * or the dma_seqno_fence otherwise. + */ +static inline struct dma_seqno_fence * +to_seqno_fence(struct dma_fence *fence) +{ + if (fence->ops != &dma_seqno_fence_ops) + return NULL; + return container_of(fence, struct dma_seqno_fence, base); +} + +/** + * dma_seqno_fence_init - initialize a seqno fence + * @fence: dma_seqno_fence to initialize + * @sync_buf: buffer containing the memory location to signal on + * @seqno_ofs: the offset within @sync_buf + * @seqno: the sequence # to signal on + * @priv: value of priv member + * @enable_signaling: callback which is called when some other device is + * waiting for sw notification of fence + * @release: callback called during destruction before object is freed. + * + * This function initializes a struct dma_seqno_fence with passed parameters, + * and takes a reference on sync_buf which is released on fence destruction. + */ +static inline void +dma_seqno_fence_init(struct dma_seqno_fence *fence, + struct dma_buf *sync_buf, + uint32_t seqno_ofs, uint32_t seqno, void *priv, + int (*enable_signaling)(struct dma_seqno_fence *), + void (*release)(struct dma_seqno_fence *)) +{ + BUG_ON(!fence || !sync_buf || !enable_signaling); + + __dma_fence_init(&fence->base, &dma_seqno_fence_ops, priv); + + get_dma_buf(sync_buf); + fence->sync_buf = sync_buf; + fence->seqno_ofs = seqno_ofs; + fence->seqno = seqno; + fence->enable_signaling = enable_signaling; + fence->release = release; +} + #endif /* __DMA_FENCE_H__ */
On Fri, Aug 10, 2012 at 04:57:58PM +0200, Maarten Lankhorst wrote:
This type of fence can be used with hardware synchronization for simple hardware that can block execution until the condition (dma_buf[offset] - value) >= 0 has been met.
A software fallback still has to be provided in case the fence is used with a device that doesn't support this mechanism. It is useful to expose this for graphics cards that have an op to support this.
Some cards like i915 can export those, but don't have an option to wait, so they need the software fallback.
I extended the original patch by Rob Clark.
v1: Original v2: Renamed from bikeshed to seqno, moved into dma-fence.c since not much was left of the file. Lots of documentation added.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com
Patch looks good, two bikesheds inline. Either way Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
drivers/base/dma-fence.c | 21 +++++++++++++++ include/linux/dma-fence.h | 61 +++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 82 insertions(+)
diff --git a/drivers/base/dma-fence.c b/drivers/base/dma-fence.c index 93448e4..4092a58 100644 --- a/drivers/base/dma-fence.c +++ b/drivers/base/dma-fence.c @@ -266,3 +266,24 @@ struct dma_fence *dma_fence_create(void *priv) return fence; } EXPORT_SYMBOL_GPL(dma_fence_create);
+static int seqno_enable_signaling(struct dma_fence *fence) +{
- struct dma_seqno_fence *seqno_fence = to_seqno_fence(fence);
- return seqno_fence->enable_signaling(seqno_fence);
+}
+static void seqno_release(struct dma_fence *fence) +{
- struct dma_seqno_fence *f = to_seqno_fence(fence);
- if (f->release)
f->release(f);
- dma_buf_put(f->sync_buf);
+}
+const struct dma_fence_ops dma_seqno_fence_ops = {
- .enable_signaling = seqno_enable_signaling,
- .release = seqno_release
+}; +EXPORT_SYMBOL_GPL(dma_seqno_fence_ops); diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h index e0ceddd..3ef0da0 100644 --- a/include/linux/dma-fence.h +++ b/include/linux/dma-fence.h @@ -91,6 +91,19 @@ struct dma_fence_ops { void (*release)(struct dma_fence *fence); }; +struct dma_seqno_fence {
- struct dma_fence base;
- struct dma_buf *sync_buf;
- uint32_t seqno_ofs;
- uint32_t seqno;
- int (*enable_signaling)(struct dma_seqno_fence *fence);
- void (*release)(struct dma_seqno_fence *fence);
I think using dma_fence_ops here is the better color. We lose type-safety at compile-time, but still keep type-safety at runtime (thanks to to_dma_seqno_fence). In addition people seem to like to constify function pointers, and we'd save a pointer if we extend the sw dma_fence interface.
+};
+extern const struct dma_fence_ops dma_seqno_fence_ops;
struct dma_fence *dma_fence_create(void *priv); /** @@ -121,4 +134,52 @@ int dma_fence_wait(struct dma_fence *fence, bool intr, unsigned long timeout); int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb, dma_fence_func_t func, void *priv); +/**
- to_seqno_fence - cast a dma_fence to a dma_seqno_fence
- @fence: dma_fence to cast to a dma_seqno_fence
- Returns NULL if the dma_fence is not a dma_seqno_fence,
- or the dma_seqno_fence otherwise.
- */
+static inline struct dma_seqno_fence * +to_seqno_fence(struct dma_fence *fence) +{
- if (fence->ops != &dma_seqno_fence_ops)
return NULL;
- return container_of(fence, struct dma_seqno_fence, base);
+}
I think adding an is_dma_seqno_fence would be nice ...
+/**
- dma_seqno_fence_init - initialize a seqno fence
- @fence: dma_seqno_fence to initialize
- @sync_buf: buffer containing the memory location to signal on
- @seqno_ofs: the offset within @sync_buf
- @seqno: the sequence # to signal on
- @priv: value of priv member
- @enable_signaling: callback which is called when some other device is
- waiting for sw notification of fence
- @release: callback called during destruction before object is freed.
- This function initializes a struct dma_seqno_fence with passed parameters,
- and takes a reference on sync_buf which is released on fence destruction.
- */
+static inline void +dma_seqno_fence_init(struct dma_seqno_fence *fence,
struct dma_buf *sync_buf,
uint32_t seqno_ofs, uint32_t seqno, void *priv,
int (*enable_signaling)(struct dma_seqno_fence *),
void (*release)(struct dma_seqno_fence *))
+{
- BUG_ON(!fence || !sync_buf || !enable_signaling);
- __dma_fence_init(&fence->base, &dma_seqno_fence_ops, priv);
- get_dma_buf(sync_buf);
- fence->sync_buf = sync_buf;
- fence->seqno_ofs = seqno_ofs;
- fence->seqno = seqno;
- fence->enable_signaling = enable_signaling;
- fence->release = release;
+}
#endif /* __DMA_FENCE_H__ */
Op 10-08-12 21:57, Daniel Vetter schreef:
On Fri, Aug 10, 2012 at 04:57:58PM +0200, Maarten Lankhorst wrote:
This type of fence can be used with hardware synchronization for simple hardware that can block execution until the condition (dma_buf[offset] - value) >= 0 has been met.
A software fallback still has to be provided in case the fence is used with a device that doesn't support this mechanism. It is useful to expose this for graphics cards that have an op to support this.
Some cards like i915 can export those, but don't have an option to wait, so they need the software fallback.
I extended the original patch by Rob Clark.
v1: Original v2: Renamed from bikeshed to seqno, moved into dma-fence.c since not much was left of the file. Lots of documentation added.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com
Patch looks good, two bikesheds inline. Either way Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
drivers/base/dma-fence.c | 21 +++++++++++++++ include/linux/dma-fence.h | 61 +++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 82 insertions(+)
diff --git a/drivers/base/dma-fence.c b/drivers/base/dma-fence.c index 93448e4..4092a58 100644 --- a/drivers/base/dma-fence.c +++ b/drivers/base/dma-fence.c @@ -266,3 +266,24 @@ struct dma_fence *dma_fence_create(void *priv) return fence; } EXPORT_SYMBOL_GPL(dma_fence_create);
+static int seqno_enable_signaling(struct dma_fence *fence) +{
- struct dma_seqno_fence *seqno_fence = to_seqno_fence(fence);
- return seqno_fence->enable_signaling(seqno_fence);
+}
+static void seqno_release(struct dma_fence *fence) +{
- struct dma_seqno_fence *f = to_seqno_fence(fence);
- if (f->release)
f->release(f);
- dma_buf_put(f->sync_buf);
+}
+const struct dma_fence_ops dma_seqno_fence_ops = {
- .enable_signaling = seqno_enable_signaling,
- .release = seqno_release
+}; +EXPORT_SYMBOL_GPL(dma_seqno_fence_ops); diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h index e0ceddd..3ef0da0 100644 --- a/include/linux/dma-fence.h +++ b/include/linux/dma-fence.h @@ -91,6 +91,19 @@ struct dma_fence_ops { void (*release)(struct dma_fence *fence); }; +struct dma_seqno_fence {
+	struct dma_fence base;
+	struct dma_buf *sync_buf;
+	uint32_t seqno_ofs;
+	uint32_t seqno;
+
+	int (*enable_signaling)(struct dma_seqno_fence *fence);
+	void (*release)(struct dma_seqno_fence *fence);
I think using dma_fence_ops here is the better color. We lose type-safety at compile-time, but still keep type-safety at runtime (thanks to to_dma_seqno_fence). In addition, people seem to like to constify function pointers; we'd save a pointer, and it would help if we extend the sw dma_fence interface.
Ok, will change.
+};
+extern const struct dma_fence_ops dma_seqno_fence_ops;
 struct dma_fence *dma_fence_create(void *priv);
 
 /**
@@ -121,4 +134,52 @@ int dma_fence_wait(struct dma_fence *fence, bool intr, unsigned long timeout);
 int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
 			   dma_fence_func_t func, void *priv);
+
+/**
+ * to_seqno_fence - cast a dma_fence to a dma_seqno_fence
+ * @fence: dma_fence to cast to a dma_seqno_fence
+ *
+ * Returns NULL if the dma_fence is not a dma_seqno_fence,
+ * or the dma_seqno_fence otherwise.
+ */
+static inline struct dma_seqno_fence *
+to_seqno_fence(struct dma_fence *fence)
+{
+	if (fence->ops != &dma_seqno_fence_ops)
+		return NULL;
+	return container_of(fence, struct dma_seqno_fence, base);
+}
I think adding an is_dma_seqno_fence would be nice ...
#define is_dma_seqno_fence !!to_dma_seqno_fence
The first thing you would do after finding out it's a seqno fence is call to_dma_seqno_fence; otherwise, why would you care? As such, the check was pointless and has been deleted.
My bikeshed, go build your own!
+/**
+ * dma_seqno_fence_init - initialize a seqno fence
+ * @fence: dma_seqno_fence to initialize
+ * @sync_buf: buffer containing the memory location to signal on
+ * @seqno_ofs: the offset within @sync_buf
+ * @seqno: the sequence # to signal on
+ * @priv: value of priv member
+ * @enable_signaling: callback which is called when some other device is
+ *    waiting for sw notification of fence
+ * @release: callback called during destruction before object is freed.
+ *
+ * This function initializes a struct dma_seqno_fence with passed parameters,
+ * and takes a reference on sync_buf which is released on fence destruction.
+ */
+static inline void
+dma_seqno_fence_init(struct dma_seqno_fence *fence,
+		     struct dma_buf *sync_buf,
+		     uint32_t seqno_ofs, uint32_t seqno, void *priv,
+		     int (*enable_signaling)(struct dma_seqno_fence *),
+		     void (*release)(struct dma_seqno_fence *))
+{
+	BUG_ON(!fence || !sync_buf || !enable_signaling);
+
+	__dma_fence_init(&fence->base, &dma_seqno_fence_ops, priv);
+	get_dma_buf(sync_buf);
+	fence->sync_buf = sync_buf;
+	fence->seqno_ofs = seqno_ofs;
+	fence->seqno = seqno;
+	fence->enable_signaling = enable_signaling;
+	fence->release = release;
+}
#endif /* __DMA_FENCE_H__ */
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com
dma-buf-mgr handles the case of reserving single or multiple dma-bufs while trying to prevent deadlocks when buffers are reserved simultaneously. To support this, extra functions have been introduced:
+ dma_buf_reserve()
+ dma_buf_unreserve()
+ dma_buf_wait_unreserved()

These reserve a single buffer, optionally with a sequence number to indicate that this is part of a multi-dmabuf reservation. dma_buf_reserve() returns -EDEADLK immediately if reserving would cause a deadlock. When only a single buffer is being reserved, no sequence is needed; otherwise, please use the dmabufmgr calls.
If you want to attach an exclusive dma-fence, you have to wait until all shared fences have signalled completion. If there are none, or if a shared fence has to be attached, wait until the last exclusive fence has signalled completion.
The new fence has to be attached before unreserving the buffer; in exclusive mode, all previous fences will have to be removed from the buffer and unreffed when done with them.
dmabufmgr methods:
+ dmabufmgr_validate_init() This function inits a dmabufmgr_validate structure and appends it to the tail of the list, with refcount set to 1. + dmabufmgr_validate_put() Convenience function to unref and free a dmabufmgr_validate structure. However if it's used for custom callback signalling, a custom function should be implemented.
+ dmabufmgr_reserve_buffers() This function takes a linked list of dmabufmgr_validate's, each one requires the following members to be set by the caller: - validate->head, list head - validate->bo, must be set to the dma-buf to reserve. - validate->shared, set to true if opened in shared mode. - validate->priv, can be used by the caller to identify this buffer.
This function will then set the following members on successful completion:
- validate->num_fences, the number of valid fences to wait on before this buffer can be accessed; this can be 0.
- validate->fences[0...num_fences-1], the fences to wait on
+ dmabufmgr_backoff_reservation() This can be used when the caller encounters an error between reservation and usage. No new fence will be attached and all reservations will be undone without side effects.
+ dmabufmgr_fence_buffer_objects Upon successful completion a new fence will have to be attached. This function releases old fences and attaches the new one.
+ dmabufmgr_wait_completed_cpu A simple cpu waiter convenience function. Waits until all fences have signalled completion before returning.
The rationale for refcounting dmabufmgr_validate lies in the dma_fence_cb wait member. Before calling dma_fence_add_callback you should increase the refcount on dmabufmgr_validate with dmabufmgr_validate_get, and on signal completion the callback should call kref_put(&val->refcount, custom_free_signal). After all callbacks have been added, you drop your own refcount by 1 as well; when the refcount drops to 0, all callbacks have been signalled, and the dmabufmgr_validate has been waited on and can be freed. Since unlinking the entry from the list and signalling completion require atomic spinlocks, a deadlock could occur if you tried to call add_callback otherwise, so the refcount is used to prevent this: your custom free function takes a device-specific lock, removes the entry from the list, and frees the data. The nice/evil part about this is that it also guarantees no memory leaks can occur behind your back. It additionally allows delaying completion by making the dmabufmgr_validate list part of the committed reservation.
v1: Original version
v2: Use dma-fence
v3: Added refcounting to dmabufmgr-validate
v4: Fixed dmabufmgr_wait_completed_cpu prototype, added more documentation
    and added Documentation/dma-buf-synchronization.txt
---
 Documentation/DocBook/device-drivers.tmpl |    2
 Documentation/dma-buf-synchronization.txt |  197 +++++++++++++++++++++
 drivers/base/Makefile                     |    2
 drivers/base/dma-buf-mgr.c                |  277 +++++++++++++++++++++++++++++
 drivers/base/dma-buf.c                    |  114 ++++++++++++
 include/linux/dma-buf-mgr.h               |  121 +++++++++++++
 include/linux/dma-buf.h                   |   24 +++
 7 files changed, 736 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/dma-buf-synchronization.txt
 create mode 100644 drivers/base/dma-buf-mgr.c
 create mode 100644 include/linux/dma-buf-mgr.h
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index 36252ac..2fc050c 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -128,6 +128,8 @@ X!Edrivers/base/interface.c
 !Edrivers/base/dma-buf.c
 !Edrivers/base/dma-fence.c
 !Iinclude/linux/dma-fence.h
+!Edrivers/base/dma-buf-mgr.c
+!Iinclude/linux/dma-buf-mgr.h
 !Edrivers/base/dma-coherent.c
 !Edrivers/base/dma-mapping.c
     </sect1>
diff --git a/Documentation/dma-buf-synchronization.txt b/Documentation/dma-buf-synchronization.txt
new file mode 100644
index 0000000..dd4685e
--- /dev/null
+++ b/Documentation/dma-buf-synchronization.txt
@@ -0,0 +1,197 @@
+                 DMA Buffer Synchronization API Guide
+                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+                        Maarten Lankhorst
+                  maarten.lankhorst@canonical.com
+                      m.b.lankhorst@gmail.com
+
+This is a followup to dma-buf-sharing.txt, which should be read first.
+Unless you're dealing with the simplest of cases, you're going to need
+synchronization. This is done with the help of dma-fence and dma-buf-mgr.
+
+
+dma-fence
+---------
+
+dma-fence is simply a synchronization primitive used mostly by dma-buf-mgr.
+In general, driver writers would not need to implement their own kind of
+dma-fence, but re-use the existing types. The possibility is left open for
+platforms which support alternate means of hardware synchronization between
+IP blocks to provide their own implementation shared by the drivers on that
+platform.
+
+The base dma-fence is sufficient for software-based signaling, i.e. when the
+signaling driver gets an irq, it calls dma_fence_signal(), which wakes other
+driver(s) that are waiting for the fence to be signaled.
+
+But to support cases where no CPU involvement is required in the buffer
+handoff between two devices, different fence implementations can be used. By
+comparing the ops pointer with known ops, it is possible to see if the fence
+you are waiting on works in a special way known to your driver, and act
+differently based upon that. For example, dma_seqno_fence allows hardware
+waiting until the condition is met:
+
+    (s32)((sync_buf)[seqno_ofs] - seqno) >= 0
+
+But all dma-fences should have a software fallback, for the driver creating
+the fence does not know if the driver waiting on the fence supports hardware
+signaling. The enable_signaling() callback is there to notify the fence
+implementation (or possibly the creator of the fence) that some other driver
+is waiting for software notification, and dma_fence_signal() must be called
+once the fence is passed. This could be used to enable some irq that would
+not normally be enabled, etc., so that the CPU is woken once the fence
+condition has arrived.
+
+
+dma-buf-mgr overview
+--------------------
+
+dma-buf-mgr is a reservation manager; it is used to handle the case where
+multiple devices want to access multiple dma-bufs in an arbitrary order, and
+it uses dma-fences for synchronization. There are 3 steps that are important
+here:
+
+1. Reservation of all dma-buf buffers with dma-buf-mgr
+   - Create a struct dmabufmgr_validate for each one with a call to
+     dmabufmgr_validate_init()
+   - Reserve the list with dmabufmgr_reserve_buffers()
+2. Queueing waits and allocating a new dma-fence
+   - dmabufmgr_wait_completed_cpu or a custom implementation.
+     * A custom implementation can use dma_fence_wait,
+       dma_fence_add_callback or a custom method that would depend on the
+       fence type.
+     * An implementation that uses dma_fence_add_callback can use the
+       refcounting of dmabufmgr_validate to do signal completion: when
+       the original list head is empty, all fences would have been signaled,
+       and the command sequence can start running. This requires a custom
+       put.
+   - dma_fence_create, dma_seqno_fence_init or a custom implementation
+     that calls __dma_fence_init.
+3. Committing with the new dma-fence.
+   - dmabufmgr_fence_buffer_objects
+   - reduce the refcount of the list by 1 with dmabufmgr_validate_put or a
+     custom put.
+
+The waits queued in step 2 don't have to be completed before commit; this
+allows users of dma-buf-mgr to prevent stalls for as long as possible.
+
+
+dma-fence operations
+--------------------
+
+dma_fence_get() increments the refcount on a dma-fence by 1.
+dma_fence_put() decrements the refcount by 1.
+  Each dma-buf the dma-fence is attached to will also hold a reference to
+  the dma-fence, but this will be removed by dma-buf-mgr upon committing a
+  reservation.
+
+dma_fence_ops.enable_signaling()
+  Indicates dma_fence_signal will have to be called; any error code returned
+  will cause the fence to be signaled. On success, if the dma_fence creator
+  didn't already hold a refcount, it should increase the refcount, and
+  decrease it after calling dma_fence_signal.
+
+dma_fence_ops.release()
+  Can be NULL; this function allows additional commands to run on
+  destruction of the dma_fence.
+
+dma_fence_signal()
+  Signal completion for software callbacks on a dma-fence; this will unblock
+  dma_fence_wait() calls and run all the callbacks added with
+  dma_fence_add_callback().
+
+dma_fence_wait()
+  Do a synchronous wait on this dma-fence. It is assumed the caller directly
+  or indirectly (dma-buf-mgr between reservation and committing) holds a
+  reference to the dma-fence; otherwise the dma-fence might be freed
+  before return, resulting in undefined behavior.
+
+dma_fence_add_callback()
+  Add a software callback to the dma-fence. The same restrictions apply to
+  the refcount as for dma_fence_wait; however, the caller doesn't need to
+  keep a refcount to the dma-fence afterwards: when software access is
+  enabled, the creator of the dma-fence is required to keep the fence alive
+  until after it signals with dma_fence_signal. The callback itself can be
+  called from irq context.
+
+  This function returns -EINVAL if an input parameter is NULL, or -ENOENT
+  if the fence was already signaled.
+
+  *WARNING*:
+  Cancelling a callback should only be done if you really know what you're
+  doing, since deadlocks and race conditions could occur all too easily. For
+  this reason, it should only ever be done on hardware lockup recovery.
+
+dma_fence_create()
+  Create a software-only fence; the creator must keep its reference until
+  after it calls dma_fence_signal.
+
+__dma_fence_init()
+  Initializes an allocated fence. The caller doesn't have to keep its
+  refcount after committing with this fence, but it will need to hold a
+  refcount again if dma_fence_ops.enable_signaling gets called. This can
+  be used for implementing other types of dma_fence.
+
+dma_seqno_fence_init()
+  Initializes a dma_seqno_fence. The caller will need to be able to
+  enable software completion, but the fence also completes when
+  (s32)((sync_buf)[seqno_ofs] - seqno) >= 0 is true.
+
+  The dma_seqno_fence will take a refcount on sync_buf until it's destroyed.
+
+  Certain hardware has instructions to insert this type of wait condition
+  in the command stream, so no intervention from software would be needed.
+  This type of fence can be destroyed before it has completed, since a
+  reference on the sync_buf dma-buf is taken. It is encouraged to re-use
+  the same dma-buf, since mapping or unmapping the sync_buf to the device's
+  vm can be expensive.
+
+
+dma-buf-mgr operations
+----------------------
+
+dmabufmgr_validate_init()
+  Initialize a struct dmabufmgr_validate for use with dmabufmgr methods,
+  and append it to the list.
+
+dmabufmgr_validate_get()
+dmabufmgr_validate_put()
+  Increase or decrease a reference to a dmabufmgr_validate. These are
+  convenience functions and don't have to be used. The dmabufmgr commands
+  below will never touch the refcount.
+
+dmabufmgr_reserve_buffers()
+  Attempts to reserve a list of dmabufmgr_validate. This function does not
+  decrease or increase the refcount on dmabufmgr_validate.
+
+  When this command returns 0 (success), the following
+  dmabufmgr_validate members become valid:
+  num_fences, fences[0...num_fences)
+
+  The caller will have to queue waits on those fences before calling
+  dmabufmgr_fence_buffer_objects; dma_fence_add_callback will keep
+  the fence alive until it is signaled.
+
+  As such, by incrementing the refcount on dmabufmgr_validate before
+  calling dma_fence_add_callback, and making the callback decrement the
+  refcount on dmabufmgr_validate (or releasing the refcount if
+  dma_fence_add_callback failed), the dmabufmgr_validate would be freed
+  when all the fences have been signaled, and only after the last ref is
+  released, which should be after dmabufmgr_fence_buffer_objects. With
+  proper locking, when the list_head holding the list of
+  dmabufmgr_validate's becomes empty it indicates all fences for all
+  dma-bufs have been signaled.
+
+dmabufmgr_backoff_reservation()
+  Unreserves a list of dmabufmgr_validate's, after
+  dmabufmgr_reserve_buffers was called. This function does not decrease or
+  increase the refcount on dmabufmgr_validate.
+
+dmabufmgr_fence_buffer_objects()
+  Commits the list of dmabufmgr_validate's with the dma-fence specified.
+  This should be done after dmabufmgr_reserve_buffers was called
+  successfully. dmabufmgr_backoff_reservation doesn't need to be called
+  after this. This function does not decrease or increase the refcount on
+  dmabufmgr_validate.
+
+dmabufmgr_wait_completed_cpu()
+  Will block until all dmabufmgr_validate's have been completed, a signal
+  has been received, or the wait timed out. This is a convenience function
+  to speed up initial implementations; however, since this blocks
+  synchronously, it is not the best way to wait.
+  Can be called after dmabufmgr_reserve_buffers returned, but before
+  dmabufmgr_backoff_reservation or dmabufmgr_fence_buffer_objects.
diff --git a/drivers/base/Makefile b/drivers/base/Makefile index 6e9f217..f11d40f 100644 --- a/drivers/base/Makefile +++ b/drivers/base/Makefile @@ -10,7 +10,7 @@ obj-$(CONFIG_CMA) += dma-contiguous.o obj-y += power/ obj-$(CONFIG_HAS_DMA) += dma-mapping.o obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o -obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o dma-fence.o +obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o dma-fence.o dma-buf-mgr.o obj-$(CONFIG_ISA) += isa.o obj-$(CONFIG_FW_LOADER) += firmware_class.o obj-$(CONFIG_NUMA) += node.o diff --git a/drivers/base/dma-buf-mgr.c b/drivers/base/dma-buf-mgr.c new file mode 100644 index 0000000..899a99b --- /dev/null +++ b/drivers/base/dma-buf-mgr.c @@ -0,0 +1,277 @@ +/* + * Copyright (C) 2012 Canonical Ltd + * + * Based on ttm_bo.c which bears the following copyright notice, + * but is dual licensed: + * + * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the + * "Software"), to deal in the Software without restriction, including + * without limitation the rights to use, copy, modify, merge, publish, + * distribute, sub license, and/or sell copies of the Software, and to + * permit persons to whom the Software is furnished to do so, subject to + * the following conditions: + * + * The above copyright notice and this permission notice (including the + * next paragraph) shall be included in all copies or substantial portions + * of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE + * USE OR OTHER DEALINGS IN THE SOFTWARE. + * + **************************************************************************/ +/* + * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com> + */ + + +#include <linux/dma-buf-mgr.h> +#include <linux/export.h> +#include <linux/sched.h> +#include <linux/slab.h> + +static void dmabufmgr_backoff_reservation_locked(struct list_head *list) +{ + struct dmabufmgr_validate *entry; + + list_for_each_entry(entry, list, head) { + struct dma_buf *bo = entry->bo; + if (!entry->reserved) + continue; + entry->reserved = false; + + entry->num_fences = 0; + + atomic_set(&bo->reserved, 0); + wake_up_all(&bo->event_queue); + } +} + +static int +dmabufmgr_wait_unreserved_locked(struct list_head *list, + struct dma_buf *bo) +{ + int ret; + + spin_unlock(&dma_buf_reserve_lock); + ret = dma_buf_wait_unreserved(bo, true); + spin_lock(&dma_buf_reserve_lock); + if (unlikely(ret != 0)) + dmabufmgr_backoff_reservation_locked(list); + return ret; +} + +/** + * dmabufmgr_backoff_reservation - cancel a reservation + * @list: [in] a linked list of struct dmabufmgr_validate + * + * This function cancels a previous reservation done by + * dmabufmgr_reserve_buffers. This is useful in case something + * goes wrong between reservation and committing. 
+ * + * Please read Documentation/dma-buf-synchronization.txt + */ +void +dmabufmgr_backoff_reservation(struct list_head *list) +{ + if (list_empty(list)) + return; + + spin_lock(&dma_buf_reserve_lock); + dmabufmgr_backoff_reservation_locked(list); + spin_unlock(&dma_buf_reserve_lock); +} +EXPORT_SYMBOL_GPL(dmabufmgr_backoff_reservation); + +/** + * dmabufmgr_reserve_buffers - reserve a list of dmabufmgr_validate + * @list: [in] a linked list of struct dmabufmgr_validate + * + * Please read Documentation/dma-buf-synchronization.txt + */ +int +dmabufmgr_reserve_buffers(struct list_head *list) +{ + struct dmabufmgr_validate *entry; + int ret; + u32 val_seq; + + if (list_empty(list)) + return 0; + + list_for_each_entry(entry, list, head) { + entry->reserved = false; + entry->num_fences = 0; + } + +retry: + spin_lock(&dma_buf_reserve_lock); + val_seq = atomic_inc_return(&dma_buf_reserve_counter); + + list_for_each_entry(entry, list, head) { + struct dma_buf *bo = entry->bo; + +retry_this_bo: + ret = dma_buf_reserve_locked(bo, true, true, true, val_seq); + switch (ret) { + case 0: + break; + case -EBUSY: + ret = dmabufmgr_wait_unreserved_locked(list, bo); + if (unlikely(ret != 0)) { + spin_unlock(&dma_buf_reserve_lock); + return ret; + } + goto retry_this_bo; + case -EAGAIN: + dmabufmgr_backoff_reservation_locked(list); + spin_unlock(&dma_buf_reserve_lock); + ret = dma_buf_wait_unreserved(bo, true); + if (unlikely(ret != 0)) + return ret; + goto retry; + default: + dmabufmgr_backoff_reservation_locked(list); + spin_unlock(&dma_buf_reserve_lock); + return ret; + } + + entry->reserved = true; + + if (entry->shared && + bo->fence_shared_count == DMA_BUF_MAX_SHARED_FENCE) { + WARN_ON_ONCE(1); + dmabufmgr_backoff_reservation_locked(list); + spin_unlock(&dma_buf_reserve_lock); + return -EINVAL; + } + + if (!entry->shared && bo->fence_shared_count) { + entry->num_fences = bo->fence_shared_count; + + BUILD_BUG_ON(sizeof(entry->fences) != + sizeof(bo->fence_shared)); + + 
memcpy(entry->fences, bo->fence_shared, + sizeof(bo->fence_shared)); + } else if (bo->fence_excl) { + entry->num_fences = 1; + entry->fences[0] = bo->fence_excl; + } else + entry->num_fences = 0; + } + spin_unlock(&dma_buf_reserve_lock); + + return 0; +} +EXPORT_SYMBOL_GPL(dmabufmgr_reserve_buffers); + +/** + * dmabufmgr_wait_completed_cpu - wait synchronously for completion on cpu + * @list: [in] a linked list of struct dmabufmgr_validate + * @intr: [in] perform an interruptible wait + * @timeout: [in] absolute timeout in jiffies + * + * Since this function waits synchronously it is meant mostly for cases where + * stalling is unimportant, or to speed up initial implementations. + */ +int +dmabufmgr_wait_completed_cpu(struct list_head *list, bool intr, + unsigned long timeout) +{ + struct dmabufmgr_validate *entry; + int i, ret = 0; + + list_for_each_entry(entry, list, head) { + for (i = 0; i < entry->num_fences && !ret; i++) + ret = dma_fence_wait(entry->fences[i], intr, timeout); + + if (ret && ret != -ERESTARTSYS) + pr_err("waiting returns %i\n", ret); + if (ret) + return ret; + } + return 0; +} +EXPORT_SYMBOL_GPL(dmabufmgr_wait_completed_cpu); + +/** + * dmabufmgr_fence_buffer_objects - commit a reservation with a new fence + * @fence: [in] the fence that indicates completion + * @list: [in] a linked list of struct dmabufmgr_validate + * + * This function should be called after a hardware command submission is + * completed succesfully. The fence is used to indicate completion of + * those commands. 
+ * + * Please read Documentation/dma-buf-synchronization.txt + */ +void +dmabufmgr_fence_buffer_objects(struct dma_fence *fence, struct list_head *list) +{ + struct dmabufmgr_validate *entry; + struct dma_buf *bo; + + if (list_empty(list) || WARN_ON(!fence)) + return; + + /* Until deferred fput hits mainline, release old things here */ + list_for_each_entry(entry, list, head) { + bo = entry->bo; + + if (!entry->shared) { + int i; + for (i = 0; i < bo->fence_shared_count; ++i) { + dma_fence_put(bo->fence_shared[i]); + bo->fence_shared[i] = NULL; + } + bo->fence_shared_count = 0; + if (bo->fence_excl) { + dma_fence_put(bo->fence_excl); + bo->fence_excl = NULL; + } + } + + entry->reserved = false; + } + + spin_lock(&dma_buf_reserve_lock); + + list_for_each_entry(entry, list, head) { + bo = entry->bo; + + dma_fence_get(fence); + if (entry->shared) + bo->fence_shared[bo->fence_shared_count++] = fence; + else + bo->fence_excl = fence; + + dma_buf_unreserve_locked(bo); + } + + spin_unlock(&dma_buf_reserve_lock); +} +EXPORT_SYMBOL_GPL(dmabufmgr_fence_buffer_objects); + +/** + * dmabufmgr_validate_free - simple free function for dmabufmgr_validate + * @ref: [in] pointer to dmabufmgr_validate::refcount to free + * + * Can be called when refcount drops to 0, but isn't required to be used. + */ +void dmabufmgr_validate_free(struct kref *ref) +{ + struct dmabufmgr_validate *val; + val = container_of(ref, struct dmabufmgr_validate, refcount); + list_del(&val->head); + kfree(val); +} +EXPORT_SYMBOL_GPL(dmabufmgr_validate_free); diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c index 24e88fe..a19a518 100644 --- a/drivers/base/dma-buf.c +++ b/drivers/base/dma-buf.c @@ -25,14 +25,20 @@ #include <linux/fs.h> #include <linux/slab.h> #include <linux/dma-buf.h> +#include <linux/dma-fence.h> #include <linux/anon_inodes.h> #include <linux/export.h> +#include <linux/sched.h> + +atomic_t dma_buf_reserve_counter = ATOMIC_INIT(1); +DEFINE_SPINLOCK(dma_buf_reserve_lock);
static inline int is_dma_buf_file(struct file *);
static int dma_buf_release(struct inode *inode, struct file *file) { struct dma_buf *dmabuf; + int i;
if (!is_dma_buf_file(file)) return -EINVAL; @@ -40,6 +46,15 @@ static int dma_buf_release(struct inode *inode, struct file *file) dmabuf = file->private_data;
dmabuf->ops->release(dmabuf); + + BUG_ON(waitqueue_active(&dmabuf->event_queue)); + BUG_ON(atomic_read(&dmabuf->reserved)); + + if (dmabuf->fence_excl) + dma_fence_put(dmabuf->fence_excl); + for (i = 0; i < dmabuf->fence_shared_count; ++i) + dma_fence_put(dmabuf->fence_shared[i]); + kfree(dmabuf); return 0; } @@ -119,6 +134,7 @@ struct dma_buf *dma_buf_export(void *priv, const struct dma_buf_ops *ops,
mutex_init(&dmabuf->lock); INIT_LIST_HEAD(&dmabuf->attachments); + init_waitqueue_head(&dmabuf->event_queue);
return dmabuf; } @@ -503,3 +519,101 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr) dmabuf->ops->vunmap(dmabuf, vaddr); } EXPORT_SYMBOL_GPL(dma_buf_vunmap); + +int +dma_buf_reserve_locked(struct dma_buf *dmabuf, bool interruptible, + bool no_wait, bool use_sequence, u32 sequence) +{ + int ret; + + while (unlikely(atomic_cmpxchg(&dmabuf->reserved, 0, 1) != 0)) { + /** + * Deadlock avoidance for multi-dmabuf reserving. + */ + if (use_sequence && dmabuf->seq_valid) { + /** + * We've already reserved this one. + */ + if (unlikely(sequence == dmabuf->val_seq)) + return -EDEADLK; + /** + * Already reserved by a thread that will not back + * off for us. We need to back off. + */ + if (unlikely(sequence - dmabuf->val_seq < (1 << 31))) + return -EAGAIN; + } + + if (no_wait) + return -EBUSY; + + spin_unlock(&dma_buf_reserve_lock); + ret = dma_buf_wait_unreserved(dmabuf, interruptible); + spin_lock(&dma_buf_reserve_lock); + + if (unlikely(ret)) + return ret; + } + + if (use_sequence) { + /** + * Wake up waiters that may need to recheck for deadlock, + * if we decreased the sequence number. 
+ */ + if (unlikely((dmabuf->val_seq - sequence < (1 << 31)) + || !dmabuf->seq_valid)) + wake_up_all(&dmabuf->event_queue); + + dmabuf->val_seq = sequence; + dmabuf->seq_valid = true; + } else { + dmabuf->seq_valid = false; + } + + return 0; +} +EXPORT_SYMBOL_GPL(dma_buf_reserve_locked); + +int +dma_buf_reserve(struct dma_buf *dmabuf, bool interruptible, bool no_wait, + bool use_sequence, u32 sequence) +{ + int ret; + + spin_lock(&dma_buf_reserve_lock); + ret = dma_buf_reserve_locked(dmabuf, interruptible, no_wait, + use_sequence, sequence); + spin_unlock(&dma_buf_reserve_lock); + + return ret; +} +EXPORT_SYMBOL_GPL(dma_buf_reserve); + +int +dma_buf_wait_unreserved(struct dma_buf *dmabuf, bool interruptible) +{ + if (interruptible) { + return wait_event_interruptible(dmabuf->event_queue, + atomic_read(&dmabuf->reserved) == 0); + } else { + wait_event(dmabuf->event_queue, + atomic_read(&dmabuf->reserved) == 0); + return 0; + } +} +EXPORT_SYMBOL_GPL(dma_buf_wait_unreserved); + +void dma_buf_unreserve_locked(struct dma_buf *dmabuf) +{ + atomic_set(&dmabuf->reserved, 0); + wake_up_all(&dmabuf->event_queue); +} +EXPORT_SYMBOL_GPL(dma_buf_unreserve_locked); + +void dma_buf_unreserve(struct dma_buf *dmabuf) +{ + spin_lock(&dma_buf_reserve_lock); + dma_buf_unreserve_locked(dmabuf); + spin_unlock(&dma_buf_reserve_lock); +} +EXPORT_SYMBOL_GPL(dma_buf_unreserve); diff --git a/include/linux/dma-buf-mgr.h b/include/linux/dma-buf-mgr.h new file mode 100644 index 0000000..df30ee4 --- /dev/null +++ b/include/linux/dma-buf-mgr.h @@ -0,0 +1,121 @@ +/* + * Header file for dma buffer sharing framework. + * + * Copyright(C) 2011 Linaro Limited. All rights reserved. + * Author: Sumit Semwal sumit.semwal@ti.com + * + * Many thanks to linaro-mm-sig list, and specially + * Arnd Bergmann arnd@arndb.de, Rob Clark rob@ti.com and + * Daniel Vetter daniel@ffwll.ch for their support in creation and + * refining of this idea. 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see http://www.gnu.org/licenses/.
+ */
+#ifndef __DMA_BUF_MGR_H__
+#define __DMA_BUF_MGR_H__
+
+#include <linux/dma-buf.h>
+#include <linux/dma-fence.h>
+#include <linux/list.h>
+
+/**
+ * struct dmabufmgr_validate - reservation structure for a dma-buf
+ * @head:	list entry
+ * @refcount:	refcount
+ * @reserved:	internal use: signals if reservation is successful
+ * @shared:	whether shared or exclusive access was requested
+ * @bo:		pointer to a dma-buf to reserve
+ * @priv:	pointer to user-specific data
+ * @num_fences:	number of fences to wait on
+ * @num_waits:	number of waits queued
+ * @fences:	fences to wait on
+ * @wait:	dma_fence_cb that can be passed to dma_fence_add_callback
+ *
+ * Based on struct ttm_validate_buffer, but unrecognisably modified.
+ * num_fences and fences are only valid after dmabufmgr_reserve_buffers
+ * is called.
+ */
+struct dmabufmgr_validate {
+	struct list_head head;
+	struct kref refcount;
+
+	bool reserved;
+	bool shared;
+	struct dma_buf *bo;
+	void *priv;
+
+	unsigned num_fences, num_waits;
+	struct dma_fence *fences[DMA_BUF_MAX_SHARED_FENCE];
+	struct dma_fence_cb wait[DMA_BUF_MAX_SHARED_FENCE];
+};
+
+/**
+ * dmabufmgr_validate_init - initialize a dmabufmgr_validate struct
+ * @val:	[in]	pointer to dmabufmgr_validate
+ * @list:	[in]	pointer to list to append val to
+ * @bo:		[in]	pointer to dma-buf
+ * @priv:	[in]	pointer to user-specific data
+ * @shared:	[in]	request shared or exclusive access
+ */
+static inline void
+dmabufmgr_validate_init(struct dmabufmgr_validate *val,
+			struct list_head *list, struct dma_buf *bo,
+			void *priv, bool shared)
+{
+	kref_init(&val->refcount);
+	list_add_tail(&val->head, list);
+	val->bo = bo;
+	val->priv = priv;
+	val->shared = shared;
+}
+
+extern void dmabufmgr_validate_free(struct kref *ref);
+
+/**
+ * dmabufmgr_validate_get - increase refcount on a dmabufmgr_validate
+ * @val:	[in]	pointer to dmabufmgr_validate
+ */
+static inline struct dmabufmgr_validate *
+dmabufmgr_validate_get(struct dmabufmgr_validate *val)
+{
+	kref_get(&val->refcount);
+	return val;
+}
+
+/**
+ * dmabufmgr_validate_put - decrease refcount on a dmabufmgr_validate
+ * @val:	[in]	pointer to dmabufmgr_validate
+ *
+ * Returns true if the caller removed last refcount on val,
+ * false otherwise.
+ */
+static inline bool
+dmabufmgr_validate_put(struct dmabufmgr_validate *val)
+{
+	return kref_put(&val->refcount, dmabufmgr_validate_free);
+}
+
+extern int
+dmabufmgr_reserve_buffers(struct list_head *list);
+
+extern void
+dmabufmgr_backoff_reservation(struct list_head *list);
+
+extern void
+dmabufmgr_fence_buffer_objects(struct dma_fence *fence, struct list_head *list);
+
+extern int
+dmabufmgr_wait_completed_cpu(struct list_head *list, bool intr,
+			     unsigned long timeout);
+
+#endif /* __DMA_BUF_MGR_H__ */
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index bd2e52c..8b14103 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -35,6 +35,11 @@ struct device;
 struct dma_buf;
 struct dma_buf_attachment;
 
+extern atomic_t dma_buf_reserve_counter;
+extern spinlock_t dma_buf_reserve_lock;
+
+#define DMA_BUF_MAX_SHARED_FENCE 8
+
 /**
  * struct dma_buf_ops - operations possible on struct dma_buf
  * @attach: [optional] allows different devices to 'attach' themselves to the
@@ -122,6 +127,18 @@ struct dma_buf {
 	/* mutex to serialize list manipulation and attach/detach */
 	struct mutex lock;
 	void *priv;
+
+	/** event queue for waking up when this dmabuf becomes unreserved */
+	wait_queue_head_t event_queue;
+
+	atomic_t reserved;
+
+	/** These require dma_buf_reserve to be called before modification */
+	bool seq_valid;
+	u32 val_seq;
+	struct dma_fence *fence_excl;
+	struct dma_fence *fence_shared[DMA_BUF_MAX_SHARED_FENCE];
+	u32 fence_shared_count;
 };
 
 /**
@@ -183,5 +200,12 @@ int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *, unsigned long);
 void *dma_buf_vmap(struct dma_buf *);
 void dma_buf_vunmap(struct dma_buf *, void *vaddr);
 
+int dma_buf_reserve_locked(struct dma_buf *, bool intr, bool no_wait,
+			   bool use_seq, u32 seq);
+int dma_buf_reserve(struct dma_buf *, bool intr, bool no_wait,
+		    bool use_seq, u32 seq);
+int dma_buf_wait_unreserved(struct dma_buf *, bool interruptible);
+void dma_buf_unreserve_locked(struct dma_buf *);
+void dma_buf_unreserve(struct dma_buf *);
#endif /* __DMA_BUF_H__ */
Hi Maarten,
Ok, here comes the promised review (finally!), but it's rather a high-level thingy. I've mostly thought about how we could create a neat api with the following points. For a bit of clarity, I've grouped the different considerations a bit.
Easy Integration ================
Where I mean integration of simple dma_buf importers that don't want to deal with all the hassle of dma_fence (like v4l framegrabbers). Or drivers where everything interesting needs cpu access anyway (like the udl driver). The case with explicitly handling dma_fences and going through the reservation dance should be the explicitly requested special cases.
I'm thinking of adding a new dma_buf_attach_special function which takes an additional flags parameter (since we might need some other funky extension later on ...). A new flag ASYNC_ATTACHMENT would indicate that the driver will use this attachment with the dma_bufmgr and will use dma_fences to sync with other drivers (which is also stored in a new flag in the attachment struct).
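Concretely, I'm imagining something roughly like this (pure strawman: neither the function, the flag, nor the wrapper exist anywhere yet):

```c
/* Hypothetical sketch only - names are made up for illustration. */
#define DMA_BUF_ATTACH_ASYNC	(1 << 0)	/* importer syncs via dma_fences */

struct dma_buf_attachment *dma_buf_attach_special(struct dma_buf *dmabuf,
						  struct device *dev,
						  unsigned long flags);

/* plain dma_buf_attach() would then be the flags == 0 case,
 * i.e. a synchronous attachment: */
static inline struct dma_buf_attachment *
dma_buf_attach(struct dma_buf *dmabuf, struct device *dev)
{
	return dma_buf_attach_special(dmabuf, dev, 0);
}
```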
To ensure we can have mixed attachments we need to ensure that all other access (and also cpu access) syncs up with any dma_fences left behind by drivers with async attachments (think e.g. nouveau render, but both intel and udl displaying a buffer). Note that we don't need any exclusion, but only barriers, i.e. if anyone sneaks in other rendering while a dma access from a synchronous client is underway, we don't need to care.
The same needs to happen for cpu access obviously.
Since both the cpu access functions (begin/end_cpu_access) and the device access functions (which atm are on map/unmap_attachment) have a direction attribute, we can even differentiate between read (i.e. shared) access and write (i.e. exclusive) access. Note that the dma_fence syncing needs to happen before we call the exporter's callbacks, otherwise any cache flushing/invalidation the exporter does might not yet see all rendering.
Imo it would be good to split this up into a separate patch with:
- adding the dma_fence members to struct dma_buf (but with no way yet to set
  them).
- adding a quick helper to wait for fences attached to a dma_buf (either
  shared or exclusive access).
- adding the synchronous bool to the attachment struct, setting it by default
  and wiring it up, again with no way yet to use async attachments.
- we also need to add the dma_bufmgr_spinlock, since this is what protects
  the fences attached to a dma_buf for read access (at least that's my
  understanding).
Aside: This doesn't make too much sense since most drivers cheat and just hang onto the attachment mapping (instead of doing map/unmap for each access as the spec says they should ...). So there's no way actually for drivers to /simply/ sync up. But the lack of a streaming api (i.e. setting the coherency domain without map/unmap) is a known lack in the dma_buf api, so I think I'll follow up with a patch to finally add this. I'm thinking of something like
int dma_buf_sync_attachment(attachment, enum {BEGIN_DMA, END_DMA}, direction)
The BEGIN_DMA/END_DMA is just to avoid coming up with two nice names - opposed to the normal dma api we need to differentiate explicitly between begin/end (like for cpu access), since a given importer knows only about its own usage (and hence we can't implicitly flush the old coherency domain when we switch to a new coherency domain). Synchronous attachments would then simply sync up with any dma_fences attached.
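A possible signature for this, again purely as a strawman (nothing here exists in the current dma-buf api):

```c
/* Hypothetical sketch of the proposed streaming interface. */
enum dma_buf_sync_op {
	BEGIN_DMA,	/* importer is about to start device access */
	END_DMA,	/* importer has finished device access */
};

int dma_buf_sync_attachment(struct dma_buf_attachment *attach,
			    enum dma_buf_sync_op op,
			    enum dma_data_direction direction);
```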
Aside 2: I think for async attachments we need to demand that all the devices are coherent wrt each another - otherwise we need to allow that the exporter can do some magic cache flushing in between when one device signals a fence and everyone else receiving the update that the fence signaled. If there is any hw out there that would require cache-flushing at the gart level (i.e. not some caches which are known to the driver), we should know about it ... (I seriously hope nothing is that brain-dead).
Allowing extensions ===================
Like I've said in irc discussion, I think we should aim for the eventual goal that dma_buf objects are fully evictable. Having a deadlock-free reservation system is a big step towards that. Afaict two things would be missing on top of your current bufmgr:
- drivers would also need to be able to reserve their own, not-exported buffers (and any other resource objects) with the dma_bufmgr. But they don't necessarily want to use dma_fences to sync their private objects (for efficiency reasons). So I think it should be possible to embed the reservation fields (and only those) into any driver-private struct.
- We'd again need a special attachment mode (hence the flags array, not just a bool) to signal that the driver can cope with the exporter evicting a dma_buf. Exporters would be free to evict any object (or just gart mappings) if all the affected attachments are of the evictable type and all the fences attached to a dma_buf have signalled (a bit inefficient since we could wait for a different gart mapping, but eviction shouldn't happen often).
On the driver-side we only need to check for errors in the map_attachment/sync_attachment calls indicating whether the buffer has been evicted and that we need to back off (i.e. unmap all reserved&mapped buffers thus far) to avoid a deadlock.
Since synchronization between eviction and dma access would happen through dma_fence, an evictable always needs to be async, too.
This is mostly useful on SoC where a (sometimes tiny) gart is shared by a few IC blocks (e.g. video codec, gpu, display block).
I don't think we need to do anything special to allow this, but I think we should ensure that it is possible. The simplest way is to put the bufmgr into its own patch and extract any reservation fields from dma_buf into a separate dma_bufmgr_object. That patch wouldn't mention dma_fences or add dma_bufmgr_object to dma_buf, I think all that integration should happen in the final patch to put things together.
Putting the bufmgr reservation logic into its own patch should also make review easier ;-)
Better encapsulation ====================
I think we should avoid to expose clients as much as possible to bufmgr internals - they should be able to treat it as a magic blackbox that tells them when to back off and when they're good to go imo. Specifically I'm thinking of
struct dma_bufmgr_reservation_ticket {
	seqno
	list_head *reservations
}
I also have a few gripes with the ttm-inspired "validation" moniker. Hence my bikeshed proposal for - "reservation ticket" the abstract thingy you get from bufmgr that denotes your place (ticket) in the global reservation queue - and "reservation" for the book-keeping struct to reserve an object.
I think the ttm wording comes from validating whether all buffers are in the right ttm (and moving them if needed), which (at least for dma_fence deadlock prevention only) doesn't quite fit for the dma_bufmgr.
Another advantage of encapsulating the reservation_ticket is that we can easily track these, e.g. a global list of all still outstanding reservations can be used to prevent seqno wraparounds (we block for the old reservation to unreserve the seqno). Or we could dump all reservations into debugfs, which probably helps to diagnose deadlocks (if we add a bit of debug infrastructure to associate reservations with driver state).
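To make the proposal concrete, a strawman layout for the two structs (all names are bikeshed material, none of this exists yet):

```c
/* Hypothetical sketch of the proposed encapsulation. */
struct dma_bufmgr_reservation_ticket {
	u32 seqno;			/* place in the global reservation queue */
	struct list_head reservations;	/* list of struct dma_bufmgr_reservation */
	struct list_head list;		/* entry in a global ticket list (debugfs) */
	struct kref refcount;
};

struct dma_bufmgr_reservation {
	struct list_head head;		/* entry in ticket->reservations */
	struct dma_buf *bo;		/* object to reserve */
};
```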
To make all this work neatly for your main use case, i.e. to reserve dma_bufs to put dma_fences onto them, we'd need to add some helpers that init a reservation from a dma_buf and add the fences to all dma_bufs in a reservation. But I think that should be in the last patch that ties up the bufmgr with dma_buf.
Now once we have a reservation_ticket struct we can put it to some good use:
Fat-trimming ============
Imo your validation struct contains too much stuff, and I'd like it to be as small as possible so that drivers can quickly allocate lots of these (if they also use them for private objects):
+struct dmabufmgr_validate {
+	struct list_head head;
+	struct kref refcount;
I see the need for refcounting the validation, so that we can wait on it without it disappearing under us. But imo it makes more sense if we reference count the proposed reservation_ticket instead and move a few related fields to that: - I think we should move the waitqueue from the dma_buf object (or the bufmgr_object) to the reservation_ticket, too. Conceptually we only need to wait on the reservation to unreserve all buffers, not on individual buffers. The only case where waiting on the reservation_ticket is different from waiting on the buffer is when there's contention and a driver needs to back off. But in that case it's fairer for the reservation to complete instead of trying to snatch away the buffer.
+	bool reserved;
Afaict that's only used in the reservation backoff code, I think we can ditch this by being a bit more clever there (e.g. splice off the list of already reserved buffers).
+	bool shared;
We could shove this bit into bit0 of the below pointer. With some helpers to set up the reservation, no user of the bufmgr api would notice this.
+	struct dma_buf *bo;
+	void *priv;
priv is imo unnecessary, users can embed this struct into anything if they need more context for reservations.
+	unsigned num_fences, num_waits;
+	struct dma_fence *fences[DMA_BUF_MAX_SHARED_FENCE];
+	struct dma_fence_cb wait[DMA_BUF_MAX_SHARED_FENCE];
Imo this shouldn't be part of the reservation itself: - for fences we only need a list of the (preferably unique) dma_fences we need to wait on before the batch can use all the reserved dma_bufs. And we only need this list to attach our own callback to it, so this list is (I think) only required within fence_buffers
- the callback structs are a bit more obnoxious, since we need one for every fence we need to wait on. And these need to be attached to a reference-counted struct (or we need to allocate them individually, which is even more wasteful).
With my proposal we have two refcounted structs: reservation_ticket and the dma_fence we newly attach to all dma_bufs, which will signal the completion of the batchbuffer we are processing. Adding the callbacks to the reservation_ticket doesn't make any sense (that should get freed after the unreserve), but the dma_fence needs to be around until all old fences have signaled anyway, since the new batch can't start before that.
So what about we adjust the flow of fence_buffers a bit:
- it walks all the buffers on the reservation_ticket, assembling a list of
  fences we need to wait on. While doing so it also puts the new fence into
  place (either shared or exclusive).
- then it allocates the cb array, storing a pointer to it into the new
  dma_fence
- for every fence it adds the callback, incrementing the reference count of
  the new dma_fence
- the dma_fence code needs to free the attached cb array (if there's any) on
  the final kput.
+};
Aside: I think it'd be good to document which members are protected by which locks. I could deduce the following locking rules:
- the fence lists in dma_buf are read-protected by dma_bufmgr_lock. Writing
  is only allowed if you also hold a valid reservation on that buffer.
- seqno in the reservation_ticket is immutable, the global list of
  reservation tickets (my new proposal, handy for debugging) is protected by
  dma_bufmgr_lock
- any other field in reservation_ticket and all fields in reservation are
  presumed to be only manipulated by one thread (the one doing the
  reservation/batchbuffer submission), additional locking is the caller's
  problem (if required, would indicate some big issue though imo).
Proposed patch sequence =======================
I.e. this is the summary of all the above blablabla:
- First a patch that integrates the dma_fence book-keeping into dma_buf, adds the required wait helpers and wires them up at all the right places.
- A patch introducing the dma reservation framework, with the 3 struct dma_reservation_ticket, dma_reservation and dma_reservation_item or whatever you're gonna call them. I think having this separate is good to triple-check the reservation logic in review.
- A final patch to wire things up, essentially adding a dma_reservation_object embedded into dma_buf and adding the fence_buffers function to exchange the fences, plus any other neat helpers (waiting on all fences of a reservation for synchronous fallback, stitching together the reservations list, reserve dma_bufs instead of reservation_objects).
btw, if you agree somewhat with my ideas, I think it'd be good to just discuss the structs and function interfaces a bit first - otherwise I fear you'll need to change the code way too often.
Now let's hear your flames ;-)
Cheers, Daniel
PS: I was somehow under the impression that the reservation code has some minimal fairness guarantee: If you try to reserve a buffer, but need to back off due to a potential deadlock, no later reservation_ticket can snatch that buffer while the retry is happening. Somehow I thought ttm would implement this, but I'm too lazy to check ;-)
In any case this is not required for the basic version - we only need a neat interface and no deadlocks for that.
PPS: I think bikeshedding the documentation isn't useful yet, before we've settled on an interface for the dma_bufmgr.
On Fri, Aug 10, 2012 at 04:58:06PM +0200, Maarten Lankhorst wrote:
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com
dma-buf-mgr handles the case of reserving single or multiple dma-bufs while trying to prevent deadlocks from buffers being reserved simultaneously. For this to happen extra functions have been introduced:
- dma_buf_reserve()
- dma_buf_unreserve()
- dma_buf_wait_unreserved()
Reserve a single buffer, optionally with a sequence to indicate this is part of a multi-dmabuf reservation. This function returns -EDEADLK immediately if reserving would cause a deadlock. In case a single buffer is being reserved, no sequence is needed; otherwise please use the dmabufmgr calls.
If you want to attach an exclusive dma-fence, you have to wait until all shared fences have signalled completion. If there are none, or if a shared fence has to be attached, wait until the last exclusive fence has signalled completion.
The new fence has to be attached before unreserving the buffer, and in exclusive mode all previous fences will have to be removed from the buffer, and unreffed when done with it.
dmabufmgr methods:
- dmabufmgr_validate_init()
This function inits a dmabufmgr_validate structure and appends it to the tail of the list, with refcount set to 1.
- dmabufmgr_validate_put()
Convenience function to unref and free a dmabufmgr_validate structure. However if it's used for custom callback signalling, a custom function should be implemented.
- dmabufmgr_reserve_buffers()
This function takes a linked list of dmabufmgr_validate's, each one requires the following members to be set by the caller:
- validate->head, list head
- validate->bo, must be set to the dma-buf to reserve.
- validate->shared, set to true if opened in shared mode.
- validate->priv, can be used by the caller to identify this buffer.
This function will then set the following members on successful completion:
validate->num_fences, the number of valid fences to wait on before this buffer can be accessed. This can be 0.
validate->fences[0...num_fences-1] fences to wait on
- dmabufmgr_backoff_reservation()
This can be used when the caller encounters an error between reservation and usage. No new fence will be attached and all reservations will be undone without side effects.
- dmabufmgr_fence_buffer_objects
Upon successful completion a new fence will have to be attached. This function releases old fences and attaches the new one.
- dmabufmgr_wait_completed_cpu
A simple cpu waiter convenience function. Waits until all fences have signalled completion before returning.
The rationale for refcounting dmabufmgr_validate lies in the dma_fence_cb wait member. Before calling dma_fence_add_callback you should increase the refcount on dmabufmgr_validate with dmabufmgr_validate_get, and on signal completion you should call kref_put(&val->refcount, custom_free_signal); after all callbacks have been added you drop the refcount by 1 as well. When the refcount drops to 0, all callbacks have been signalled and the dmabufmgr_validate has been waited on, so it can be freed. Since this requires atomic spinlocks to unlink the list and signal completion, a deadlock could occur if you tried to call add_callback otherwise; the refcount prevents this by having your custom free function take a device-specific lock, remove the entry from the list and free the data. The nice/evil part about this is that it also guarantees no memory leaks can occur behind your back. This allows delaying completion by making the dmabufmgr_validate list part of the committed reservation.
v1: Original version v2: Use dma-fence v3: Added refcounting to dmabufmgr-validate v4: Fixed dmabufmgr_wait_completed_cpu prototype, added more documentation and added Documentation/dma-buf-synchronization.txt
 Documentation/DocBook/device-drivers.tmpl |    2 
 Documentation/dma-buf-synchronization.txt |  197 +++++++++++++++++++++
 drivers/base/Makefile                     |    2 
 drivers/base/dma-buf-mgr.c                |  277 +++++++++++++++++++++++++++++
 drivers/base/dma-buf.c                    |  114 ++++++++++++
 include/linux/dma-buf-mgr.h               |  121 +++++++++++++
 include/linux/dma-buf.h                   |   24 +++
 7 files changed, 736 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/dma-buf-synchronization.txt
 create mode 100644 drivers/base/dma-buf-mgr.c
 create mode 100644 include/linux/dma-buf-mgr.h
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index 36252ac..2fc050c 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -128,6 +128,8 @@ X!Edrivers/base/interface.c
 !Edrivers/base/dma-buf.c
 !Edrivers/base/dma-fence.c
 !Iinclude/linux/dma-fence.h
+!Edrivers/base/dma-buf-mgr.c
+!Iinclude/linux/dma-buf-mgr.h
 !Edrivers/base/dma-coherent.c
 !Edrivers/base/dma-mapping.c
 </sect1>
diff --git a/Documentation/dma-buf-synchronization.txt b/Documentation/dma-buf-synchronization.txt
new file mode 100644
index 0000000..dd4685e
--- /dev/null
+++ b/Documentation/dma-buf-synchronization.txt
@@ -0,0 +1,197 @@
DMA Buffer Synchronization API Guide
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Maarten Lankhorst
<maarten.lankhorst@canonical.com>
<m.b.lankhorst@gmail.com>
+This is a followup to dma-buf-sharing.txt, which should be read first. +Unless you're dealing with the simplest of cases, you're going to need +synchronization. This is done with the help of dma-fence and dma-buf-mgr.
+dma-fence +---------
+dma-fence is simply a synchronization primitive used mostly by dma-buf-mgr. +In general, driver writers would not need to implement their own kind of +dma-fence, but re-use the existing types. The possibility is left open for +platforms which support alternate means of hardware synchronization between +IP blocks to provide their own implementation shared by the drivers on that +platform.
+The base dma-fence is sufficient for software based signaling. Ie. when the +signaling driver gets an irq, calls dma_fence_signal() which wakes other +driver(s) that are waiting for the fence to be signaled.
+But to support cases where no CPU involvement is required in the buffer +handoff between two devices, different fence implementations can be used. By +comparing the ops pointer with known ops, it is possible to see if the fence +you are waiting on works in a special way known to your driver, and act +differently based upon that. For example dma_seqno_fence allows hardware +waiting until the condition is met:
- (s32)((sync_buf)[seqno_ofs] - seqno) >= 0
+But all dma-fences should have a software fallback, for the driver creating +the fence does not know if the driver waiting on the fence supports hardware +signaling. The enable_signaling() callback is to notify the fence +implementation (or possibly the creator of the fence) that some other driver +is waiting for software notification and dma_fence_signal() must be called +once the fence is passed. This could be used to enable some irq that would +not normally be enabled, etc, so that the CPU is woken once the fence condition +has arrived.
+dma-buf-mgr overview +--------------------
+dma-buf-mgr is a reservation manager, and it is used to handle the case where +multiple devices want to access multiple dma-bufs in an arbitrary order; it +uses dma-fences for synchronization. There are 3 steps that are important here:
+1. Reservation of all dma-buf buffers with dma-buf-mgr
- Create a struct dmabufmgr_validate for each one with a call to
- dmabufmgr_validate_init()
- Reserve the list with dmabufmgr_reserve_buffers()
+2. Queueing waits and allocating a new dma-fence
- dmabufmgr_wait_completed_cpu or custom implementation.
- Custom implementation can use dma_fence_wait, dma_fence_add_callback
or a custom method that would depend on the fence type.
- An implementation that uses dma_fence_add_callback can use the
refcounting of dmabufmgr_validate to do signal completion, when
the original list head is empty, all fences would have been signaled,
and the command sequence can start running. This requires a custom put.
- dma_fence_create, dma_seqno_fence_init or custom implementation
- that calls __dma_fence_init.
+3. Committing with the new dma-fence.
- dmabufmgr_fence_buffer_objects
- reduce refcount of list by 1 with dmabufmgr_validate_put or custom put.
+The waits queued in step 2 don't have to be completed before commit, this +allows users of dma-buf-mgr to prevent stalls for as long as possible.
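Strung together, the three steps might look roughly like this in an importer using the simple synchronous wait fallback (sketch only, not compile-tested; it assumes dmabufmgr_validate_free unlinks and kfrees the entry, and the error handling is trimmed down):

```c
/* Sketch against the dmabufmgr api from this patch. */
static int submit_batch(struct dma_buf **bufs, bool *shared, int count,
			struct dma_fence *new_fence)
{
	struct dmabufmgr_validate *val, *tmp;
	LIST_HEAD(list);
	int i, ret;

	/* 1. build the list and reserve every buffer (deadlock-free) */
	for (i = 0; i < count; i++) {
		val = kzalloc(sizeof(*val), GFP_KERNEL);
		if (!val) {
			ret = -ENOMEM;
			goto out_put;
		}
		dmabufmgr_validate_init(val, &list, bufs[i], NULL, shared[i]);
	}
	ret = dmabufmgr_reserve_buffers(&list);
	if (ret)
		goto out_put;

	/* 2. simplest variant: block on the old fences on the cpu */
	ret = dmabufmgr_wait_completed_cpu(&list, true, MAX_SCHEDULE_TIMEOUT);
	if (ret) {
		dmabufmgr_backoff_reservation(&list);
		goto out_put;
	}

	/* 3. commit: swap in our fence and unreserve */
	dmabufmgr_fence_buffer_objects(new_fence, &list);
	ret = 0;

out_put:
	/* drop our reference on every entry */
	list_for_each_entry_safe(val, tmp, &list, head)
		dmabufmgr_validate_put(val);
	return ret;
}
```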
+dma-fence operations +--------------------
+dma_fence_get() increments the refcount on a dma-fence by 1. +dma_fence_put() decrements the refcount by 1.
- Each dma-buf the dma-fence is attached to will also hold a reference to the
- dma-fence, but this will be removed by dma-buf-mgr upon committing a
- reservation.
+dma_fence_ops.enable_signaling()
- Indicates dma_fence_signal will have to be called, any error code returned
- will cause the fence to be signaled. On success, if the dma_fence creator
- didn't already hold a refcount, it should increase the refcount, and
- decrease it after calling dma_fence_signal.
+dma_fence_ops.release()
- Can be NULL, this function allows additional commands to run on destruction
- of the dma_fence.
+dma_fence_signal()
- Signal completion for software callbacks on a dma-fence, this will unblock
- dma_fence_wait() calls and run all the callbacks added with
- dma_fence_add_callback().
+dma_fence_wait()
- Do a synchronous wait on this dma-fence. It is assumed the caller directly
- or indirectly (dma-buf-mgr between reservation and committing) holds a
- reference to the dma-fence, otherwise the dma-fence might be freed
- before return, resulting in undefined behavior.
+dma_fence_add_callback()
- Add a software callback to the dma-fence. Same restrictions apply to
- refcount as it does to dma_fence_wait, however the caller doesn't need to
- keep a refcount to dma-fence afterwards: when software access is enabled,
- the creator of the dma-fence is required to keep the fence alive until
- after it signals with dma_fence_signal. The callback itself can be called
- from irq context.
- This function returns -EINVAL if an input parameter is NULL, or -ENOENT
- if the fence was already signaled.
- *WARNING*:
- Cancelling a callback should only be done if you really know what you're
- doing, since deadlocks and race conditions could occur all too easily. For
- this reason, it should only ever be done on hardware lockup recovery.
+dma_fence_create()
- Create a software only fence, the creator must keep its reference until
- after it calls dma_fence_signal.
+__dma_fence_init()
- Initializes an allocated fence, the caller doesn't have to keep its
- refcount after committing with this fence, but it will need to hold a
- refcount again if dma_fence_ops.enable_signaling gets called. This can
- be used for implementing other types of dma_fence.
+dma_seqno_fence_init()
- Initializes a dma_seqno_fence, the caller will need to be able to
- enable software completion, but it also completes when
- (s32)((sync_buf)[seqno_ofs] - seqno) >= 0 is true.
- The dma_seqno_fence will take a refcount on sync_buf until it's destroyed.
- Certain hardware have instructions to insert this type of wait condition
- in the command stream, so no intervention from software would be needed.
- This type of fence can be destroyed before completed, however a reference
- on the sync_buf dma-buf can be taken. It is encouraged to re-use the same
- dma-buf, since mapping or unmapping the sync_buf to the device's vm can be
- expensive.
+dma-buf-mgr operations +----------------------
+dmabufmgr_validate_init()
- Initialize a struct dmabufmgr_validate for use with dmabufmgr methods, and
- appends it to the list.
+dmabufmgr_validate_get() +dmabufmgr_validate_put()
- Increase or decrease a reference to a dmabufmgr_validate; these are
- convenience functions and don't have to be used. The dmabufmgr commands
- below will never touch the refcount.
+dmabufmgr_reserve_buffers()
- Attempts to reserve a list of dmabufmgr_validate. This function does not
- decrease or increase refcount on dmabufmgr_validate.
- When this command returns 0 (success), the following
- dmabufmgr_validate members become valid:
- num_fences, fences[0...num_fences)
- The caller will have to queue waits on those fences before calling
- dmabufmgr_fence_buffer_objects, dma_fence_add_callback will keep
- the fence alive until it is signaled.
- As such, by incrementing refcount on dmabufmgr_validate before calling
- dma_fence_add_callback, and making the callback decrement refcount on
- dmabufmgr_validate, or releasing refcount if dma_fence_add_callback
- failed, the dmabufmgr_validate would be freed when all the fences
- have been signaled, and only after the last ref is released, which should
- be after dmabufmgr_fence_buffer_objects. With proper locking, when the
- list_head holding the list of dmabufmgr_validate's becomes empty it
- indicates all fences for all dma-bufs have been signaled.
+dmabufmgr_backoff_reservation()
- Unreserves a list of dmabufmgr_validate's, after dmabufmgr_reserve_buffers
- was called. This function does not decrease or increase refcount on
- dmabufmgr_validate.
+dmabufmgr_fence_buffer_objects()
- Commits the list of dmabufmgr_validate's with the dma-fence specified.
- This should be done after dmabufmgr_reserve_buffers was called successfully.
- dmabufmgr_backoff_reservation doesn't need to be called after this.
- This function does not decrease or increase refcount on dmabufmgr_validate.
+dmabufmgr_wait_completed_cpu()
- Will block until all dmabufmgr_validate's have been completed, a signal
- has been received, or the wait timed out. This is a convenience function
- to speed up initial implementations, however since this blocks
- synchronously this is not the best way to wait.
- Can be called after dmabufmgr_reserve_buffers returned, but before
- dmabufmgr_backoff_reservation or dmabufmgr_fence_buffer_objects.
diff --git a/drivers/base/Makefile b/drivers/base/Makefile
index 6e9f217..f11d40f 100644
--- a/drivers/base/Makefile
+++ b/drivers/base/Makefile
@@ -10,7 +10,7 @@ obj-$(CONFIG_CMA) += dma-contiguous.o
 obj-y			+= power/
 obj-$(CONFIG_HAS_DMA)	+= dma-mapping.o
 obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
-obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o dma-fence.o
+obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o dma-fence.o dma-buf-mgr.o
 obj-$(CONFIG_ISA)	+= isa.o
 obj-$(CONFIG_FW_LOADER)	+= firmware_class.o
 obj-$(CONFIG_NUMA)	+= node.o
diff --git a/drivers/base/dma-buf-mgr.c b/drivers/base/dma-buf-mgr.c
new file mode 100644
index 0000000..899a99b
--- /dev/null
+++ b/drivers/base/dma-buf-mgr.c
@@ -0,0 +1,277 @@
+/*
+ * Copyright (C) 2012 Canonical Ltd
+ *
+ * Based on ttm_bo.c which bears the following copyright notice,
+ * but is dual licensed:
+ *
+ * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ **************************************************************************/
+/*
+ * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
+ */
+
+#include <linux/dma-buf-mgr.h>
+#include <linux/export.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+static void dmabufmgr_backoff_reservation_locked(struct list_head *list)
+{
+	struct dmabufmgr_validate *entry;
+
+	list_for_each_entry(entry, list, head) {
+		struct dma_buf *bo = entry->bo;
+		if (!entry->reserved)
+			continue;
+
+		entry->reserved = false;
+		entry->num_fences = 0;
+
+		atomic_set(&bo->reserved, 0);
+		wake_up_all(&bo->event_queue);
+	}
+}
+static int
+dmabufmgr_wait_unreserved_locked(struct list_head *list,
+				 struct dma_buf *bo)
+{
+	int ret;
+
+	spin_unlock(&dma_buf_reserve_lock);
+	ret = dma_buf_wait_unreserved(bo, true);
+	spin_lock(&dma_buf_reserve_lock);
+
+	if (unlikely(ret != 0))
+		dmabufmgr_backoff_reservation_locked(list);
+
+	return ret;
+}
+
+/**
+ * dmabufmgr_backoff_reservation - cancel a reservation
+ * @list: [in] a linked list of struct dmabufmgr_validate
+ *
+ * This function cancels a previous reservation done by
+ * dmabufmgr_reserve_buffers. This is useful in case something
+ * goes wrong between reservation and committing.
+ *
+ * Please read Documentation/dma-buf-synchronization.txt
+ */
+void
+dmabufmgr_backoff_reservation(struct list_head *list)
+{
+	if (list_empty(list))
+		return;
+
+	spin_lock(&dma_buf_reserve_lock);
+	dmabufmgr_backoff_reservation_locked(list);
+	spin_unlock(&dma_buf_reserve_lock);
+}
+EXPORT_SYMBOL_GPL(dmabufmgr_backoff_reservation);
+
+/**
+ * dmabufmgr_reserve_buffers - reserve a list of dmabufmgr_validate
+ * @list: [in] a linked list of struct dmabufmgr_validate
+ *
+ * Please read Documentation/dma-buf-synchronization.txt
+ */
+int
+dmabufmgr_reserve_buffers(struct list_head *list)
+{
+	struct dmabufmgr_validate *entry;
+	int ret;
+	u32 val_seq;
+
+	if (list_empty(list))
+		return 0;
+
+	list_for_each_entry(entry, list, head) {
+		entry->reserved = false;
+		entry->num_fences = 0;
+	}
+
+retry:
+	spin_lock(&dma_buf_reserve_lock);
+	val_seq = atomic_inc_return(&dma_buf_reserve_counter);
+
+	list_for_each_entry(entry, list, head) {
+		struct dma_buf *bo = entry->bo;
+
+retry_this_bo:
+		ret = dma_buf_reserve_locked(bo, true, true, true, val_seq);
+		switch (ret) {
+		case 0:
+			break;
+		case -EBUSY:
+			ret = dmabufmgr_wait_unreserved_locked(list, bo);
+			if (unlikely(ret != 0)) {
+				spin_unlock(&dma_buf_reserve_lock);
+				return ret;
+			}
+			goto retry_this_bo;
+		case -EAGAIN:
+			dmabufmgr_backoff_reservation_locked(list);
+			spin_unlock(&dma_buf_reserve_lock);
+			ret = dma_buf_wait_unreserved(bo, true);
+			if (unlikely(ret != 0))
+				return ret;
+			goto retry;
+		default:
+			dmabufmgr_backoff_reservation_locked(list);
+			spin_unlock(&dma_buf_reserve_lock);
+			return ret;
+		}
+
+		entry->reserved = true;
+
+		if (entry->shared &&
+		    bo->fence_shared_count == DMA_BUF_MAX_SHARED_FENCE) {
+			WARN_ON_ONCE(1);
+			dmabufmgr_backoff_reservation_locked(list);
+			spin_unlock(&dma_buf_reserve_lock);
+			return -EINVAL;
+		}
+
+		if (!entry->shared && bo->fence_shared_count) {
+			entry->num_fences = bo->fence_shared_count;
+
+			BUILD_BUG_ON(sizeof(entry->fences) !=
+				     sizeof(bo->fence_shared));
+
+			memcpy(entry->fences, bo->fence_shared,
+			       sizeof(bo->fence_shared));
+		} else if (bo->fence_excl) {
+			entry->num_fences = 1;
+			entry->fences[0] = bo->fence_excl;
+		} else
+			entry->num_fences = 0;
+	}
+	spin_unlock(&dma_buf_reserve_lock);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dmabufmgr_reserve_buffers);
+
+/**
+ * dmabufmgr_wait_completed_cpu - wait synchronously for completion on cpu
+ * @list: [in] a linked list of struct dmabufmgr_validate
+ * @intr: [in] perform an interruptible wait
+ * @timeout: [in] absolute timeout in jiffies
+ *
+ * Since this function waits synchronously it is meant mostly for cases where
+ * stalling is unimportant, or to speed up initial implementations.
+ */
+int
+dmabufmgr_wait_completed_cpu(struct list_head *list, bool intr,
+			     unsigned long timeout)
+{
+	struct dmabufmgr_validate *entry;
+	int i, ret = 0;
+
+	list_for_each_entry(entry, list, head) {
+		for (i = 0; i < entry->num_fences && !ret; i++)
+			ret = dma_fence_wait(entry->fences[i], intr, timeout);
+
+		if (ret && ret != -ERESTARTSYS)
+			pr_err("waiting returns %i\n", ret);
+		if (ret)
+			return ret;
+	}
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dmabufmgr_wait_completed_cpu);
+
+/**
+ * dmabufmgr_fence_buffer_objects - commit a reservation with a new fence
+ * @fence: [in] the fence that indicates completion
+ * @list: [in] a linked list of struct dmabufmgr_validate
+ *
+ * This function should be called after a hardware command submission is
+ * completed successfully. The fence is used to indicate completion of
+ * those commands.
+ *
+ * Please read Documentation/dma-buf-synchronization.txt
+ */
+void
+dmabufmgr_fence_buffer_objects(struct dma_fence *fence, struct list_head *list)
+{
+	struct dmabufmgr_validate *entry;
+	struct dma_buf *bo;
+
+	if (list_empty(list) || WARN_ON(!fence))
+		return;
+
+	/* Until deferred fput hits mainline, release old things here */
+	list_for_each_entry(entry, list, head) {
+		bo = entry->bo;
+
+		if (!entry->shared) {
+			int i;
+			for (i = 0; i < bo->fence_shared_count; ++i) {
+				dma_fence_put(bo->fence_shared[i]);
+				bo->fence_shared[i] = NULL;
+			}
+			bo->fence_shared_count = 0;
+			if (bo->fence_excl) {
+				dma_fence_put(bo->fence_excl);
+				bo->fence_excl = NULL;
+			}
+		}
+
+		entry->reserved = false;
+	}
+
+	spin_lock(&dma_buf_reserve_lock);
+
+	list_for_each_entry(entry, list, head) {
+		bo = entry->bo;
+
+		dma_fence_get(fence);
+		if (entry->shared)
+			bo->fence_shared[bo->fence_shared_count++] = fence;
+		else
+			bo->fence_excl = fence;
+
+		dma_buf_unreserve_locked(bo);
+	}
+	spin_unlock(&dma_buf_reserve_lock);
+}
+EXPORT_SYMBOL_GPL(dmabufmgr_fence_buffer_objects);
+
+/**
+ * dmabufmgr_validate_free - simple free function for dmabufmgr_validate
+ * @ref: [in] pointer to dmabufmgr_validate::refcount to free
+ *
+ * Can be called when refcount drops to 0, but isn't required to be used.
+ */
+void dmabufmgr_validate_free(struct kref *ref)
+{
+	struct dmabufmgr_validate *val;
+
+	val = container_of(ref, struct dmabufmgr_validate, refcount);
+	list_del(&val->head);
+	kfree(val);
+}
+EXPORT_SYMBOL_GPL(dmabufmgr_validate_free);
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index 24e88fe..a19a518 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -25,14 +25,20 @@
 #include <linux/fs.h>
 #include <linux/slab.h>
 #include <linux/dma-buf.h>
+#include <linux/dma-fence.h>
 #include <linux/anon_inodes.h>
 #include <linux/export.h>
+#include <linux/sched.h>
+
+atomic_t dma_buf_reserve_counter = ATOMIC_INIT(1);
+DEFINE_SPINLOCK(dma_buf_reserve_lock);
 
 static inline int is_dma_buf_file(struct file *);
 
 static int dma_buf_release(struct inode *inode, struct file *file)
 {
 	struct dma_buf *dmabuf;
+	int i;
 
 	if (!is_dma_buf_file(file))
 		return -EINVAL;
@@ -40,6 +46,15 @@ static int dma_buf_release(struct inode *inode, struct file *file)
 	dmabuf = file->private_data;
 
 	dmabuf->ops->release(dmabuf);
+
+	BUG_ON(waitqueue_active(&dmabuf->event_queue));
+	BUG_ON(atomic_read(&dmabuf->reserved));
+
+	if (dmabuf->fence_excl)
+		dma_fence_put(dmabuf->fence_excl);
+	for (i = 0; i < dmabuf->fence_shared_count; ++i)
+		dma_fence_put(dmabuf->fence_shared[i]);
+
 	kfree(dmabuf);
 	return 0;
 }
@@ -119,6 +134,7 @@ struct dma_buf *dma_buf_export(void *priv, const struct dma_buf_ops *ops,
 	mutex_init(&dmabuf->lock);
 	INIT_LIST_HEAD(&dmabuf->attachments);
+	init_waitqueue_head(&dmabuf->event_queue);
 
 	return dmabuf;
 }
@@ -503,3 +519,101 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
 	dmabuf->ops->vunmap(dmabuf, vaddr);
 }
 EXPORT_SYMBOL_GPL(dma_buf_vunmap);
+
+int
+dma_buf_reserve_locked(struct dma_buf *dmabuf, bool interruptible,
+		       bool no_wait, bool use_sequence, u32 sequence)
+{
+	int ret;
+
+	while (unlikely(atomic_cmpxchg(&dmabuf->reserved, 0, 1) != 0)) {
+		/**
+		 * Deadlock avoidance for multi-dmabuf reserving.
+		 */
+		if (use_sequence && dmabuf->seq_valid) {
+			/**
+			 * We've already reserved this one.
+			 */
+			if (unlikely(sequence == dmabuf->val_seq))
+				return -EDEADLK;
+			/**
+			 * Already reserved by a thread that will not back
+			 * off for us. We need to back off.
+			 */
+			if (unlikely(sequence - dmabuf->val_seq < (1 << 31)))
+				return -EAGAIN;
+		}
+
+		if (no_wait)
+			return -EBUSY;
+
+		spin_unlock(&dma_buf_reserve_lock);
+		ret = dma_buf_wait_unreserved(dmabuf, interruptible);
+		spin_lock(&dma_buf_reserve_lock);
+
+		if (unlikely(ret))
+			return ret;
+	}
+
+	if (use_sequence) {
+		/**
+		 * Wake up waiters that may need to recheck for deadlock,
+		 * if we decreased the sequence number.
+		 */
+		if (unlikely((dmabuf->val_seq - sequence < (1 << 31))
+			     || !dmabuf->seq_valid))
+			wake_up_all(&dmabuf->event_queue);
+
+		dmabuf->val_seq = sequence;
+		dmabuf->seq_valid = true;
+	} else {
+		dmabuf->seq_valid = false;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dma_buf_reserve_locked);
+
+int
+dma_buf_reserve(struct dma_buf *dmabuf, bool interruptible, bool no_wait,
+		bool use_sequence, u32 sequence)
+{
+	int ret;
+
+	spin_lock(&dma_buf_reserve_lock);
+	ret = dma_buf_reserve_locked(dmabuf, interruptible, no_wait,
+				     use_sequence, sequence);
+	spin_unlock(&dma_buf_reserve_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dma_buf_reserve);
+
+int
+dma_buf_wait_unreserved(struct dma_buf *dmabuf, bool interruptible)
+{
+	if (interruptible) {
+		return wait_event_interruptible(dmabuf->event_queue,
+			atomic_read(&dmabuf->reserved) == 0);
+	} else {
+		wait_event(dmabuf->event_queue,
+			   atomic_read(&dmabuf->reserved) == 0);
+		return 0;
+	}
+}
+EXPORT_SYMBOL_GPL(dma_buf_wait_unreserved);
+
+void dma_buf_unreserve_locked(struct dma_buf *dmabuf)
+{
+	atomic_set(&dmabuf->reserved, 0);
+	wake_up_all(&dmabuf->event_queue);
+}
+EXPORT_SYMBOL_GPL(dma_buf_unreserve_locked);
+
+void dma_buf_unreserve(struct dma_buf *dmabuf)
+{
+	spin_lock(&dma_buf_reserve_lock);
+	dma_buf_unreserve_locked(dmabuf);
+	spin_unlock(&dma_buf_reserve_lock);
+}
+EXPORT_SYMBOL_GPL(dma_buf_unreserve);
diff --git a/include/linux/dma-buf-mgr.h b/include/linux/dma-buf-mgr.h
new file mode 100644
index 0000000..df30ee4
--- /dev/null
+++ b/include/linux/dma-buf-mgr.h
@@ -0,0 +1,121 @@
+/*
+ * Header file for dma buffer sharing framework.
+ *
+ * Copyright(C) 2011 Linaro Limited. All rights reserved.
+ * Author: Sumit Semwal <sumit.semwal@ti.com>
+ *
+ * Many thanks to linaro-mm-sig list, and specially
+ * Arnd Bergmann <arnd@arndb.de>, Rob Clark <rob@ti.com> and
+ * Daniel Vetter <daniel@ffwll.ch> for their support in creation and
+ * refining of this idea.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __DMA_BUF_MGR_H__
+#define __DMA_BUF_MGR_H__
+
+#include <linux/dma-buf.h>
+#include <linux/dma-fence.h>
+#include <linux/list.h>
+
+/**
+ * struct dmabufmgr_validate - reservation structure for a dma-buf
+ * @head: list entry
+ * @refcount: refcount
+ * @reserved: internal use: signals if reservation is successful
+ * @shared: whether shared or exclusive access was requested
+ * @bo: pointer to a dma-buf to reserve
+ * @priv: pointer to user-specific data
+ * @num_fences: number of fences to wait on
+ * @num_waits: amount of waits queued
+ * @fences: fences to wait on
+ * @wait: dma_fence_cb that can be passed to dma_fence_add_callback
+ *
+ * Based on struct ttm_validate_buffer, but unrecognisably modified.
+ * num_fences and fences are only valid after dmabufmgr_reserve_buffers
+ * is called.
+ */
+struct dmabufmgr_validate {
+	struct list_head head;
+	struct kref refcount;
+
+	bool reserved;
+	bool shared;
+	struct dma_buf *bo;
+	void *priv;
+
+	unsigned num_fences, num_waits;
+	struct dma_fence *fences[DMA_BUF_MAX_SHARED_FENCE];
+	struct dma_fence_cb wait[DMA_BUF_MAX_SHARED_FENCE];
+};
+
+/**
+ * dmabufmgr_validate_init - initialize a dmabufmgr_validate struct
+ * @val: [in] pointer to dmabufmgr_validate
+ * @list: [in] pointer to list to append val to
+ * @bo: [in] pointer to dma-buf
+ * @priv: [in] pointer to user-specific data
+ * @shared: [in] request shared or exclusive access
+ */
+static inline void
+dmabufmgr_validate_init(struct dmabufmgr_validate *val,
+			struct list_head *list, struct dma_buf *bo,
+			void *priv, bool shared)
+{
+	kref_init(&val->refcount);
+	list_add_tail(&val->head, list);
+	val->bo = bo;
+	val->priv = priv;
+	val->shared = shared;
+}
+
+extern void dmabufmgr_validate_free(struct kref *ref);
+
+/**
+ * dmabufmgr_validate_get - increase refcount on a dmabufmgr_validate
+ * @val: [in] pointer to dmabufmgr_validate
+ */
+static inline struct dmabufmgr_validate *
+dmabufmgr_validate_get(struct dmabufmgr_validate *val)
+{
+	kref_get(&val->refcount);
+	return val;
+}
+
+/**
+ * dmabufmgr_validate_put - decrease refcount on a dmabufmgr_validate
+ * @val: [in] pointer to dmabufmgr_validate
+ *
+ * Returns true if the caller removed last refcount on val,
+ * false otherwise.
+ */
+static inline bool
+dmabufmgr_validate_put(struct dmabufmgr_validate *val)
+{
+	return kref_put(&val->refcount, dmabufmgr_validate_free);
+}
+
+extern int
+dmabufmgr_reserve_buffers(struct list_head *list);
+extern void
+dmabufmgr_backoff_reservation(struct list_head *list);
+extern void
+dmabufmgr_fence_buffer_objects(struct dma_fence *fence, struct list_head *list);
+extern int
+dmabufmgr_wait_completed_cpu(struct list_head *list, bool intr,
+			     unsigned long timeout);
+
+#endif /* __DMA_BUF_MGR_H__ */
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index bd2e52c..8b14103 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -35,6 +35,11 @@ struct device;
 struct dma_buf;
 struct dma_buf_attachment;
 
+extern atomic_t dma_buf_reserve_counter;
+extern spinlock_t dma_buf_reserve_lock;
+
+#define DMA_BUF_MAX_SHARED_FENCE 8
+
 /**
  * struct dma_buf_ops - operations possible on struct dma_buf
  * @attach: [optional] allows different devices to 'attach' themselves to the
@@ -122,6 +127,18 @@ struct dma_buf {
 	/* mutex to serialize list manipulation and attach/detach */
 	struct mutex lock;
 	void *priv;
+
+	/** event queue for waking up when this dmabuf becomes unreserved */
+	wait_queue_head_t event_queue;
+	atomic_t reserved;
+
+	/** These require dma_buf_reserve to be called before modification */
+	bool seq_valid;
+	u32 val_seq;
+	struct dma_fence *fence_excl;
+	struct dma_fence *fence_shared[DMA_BUF_MAX_SHARED_FENCE];
+	u32 fence_shared_count;
 };
 
 /**
@@ -183,5 +200,12 @@ int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
 		 unsigned long);
 void *dma_buf_vmap(struct dma_buf *);
 void dma_buf_vunmap(struct dma_buf *, void *vaddr);
+
+int dma_buf_reserve_locked(struct dma_buf *, bool intr, bool no_wait,
+			   bool use_seq, u32 seq);
+int dma_buf_reserve(struct dma_buf *, bool intr, bool no_wait,
+		    bool use_seq, u32 seq);
+int dma_buf_wait_unreserved(struct dma_buf *, bool interruptible);
+void dma_buf_unreserve_locked(struct dma_buf *);
+void dma_buf_unreserve(struct dma_buf *);
 
 #endif /* __DMA_BUF_H__ */
Linaro-mm-sig mailing list Linaro-mm-sig@lists.linaro.org http://lists.linaro.org/mailman/listinfo/linaro-mm-sig
Hey Dan,
Op 16-08-12 01:12, Daniel Vetter schreef:
Hi Maarten,
Ok, here comes the promised review (finally!), but it's rather a high-level thingy. I've mostly thought about how we could create a neat api with the following points. For a bit of clarity, I've grouped the different considerations a bit.
<snip>
Thanks, I have significantly reworked the api based on your comments.
Documentation is currently lacking, and will get updated again for the final version.
Full patch series also includes some ttm changes to make use of dma-reservation, with the intention of moving out fencing from ttm too, but that requires more work.
For the full series see: http://cgit.freedesktop.org/~mlankhorst/linux/log/?h=v10-wip
My plan is to add a pointer for dma_reservation to a dma-buf, so all users of dma-reservation can perform reservations across multiple devices as well. Since the default for ttm likely will mean only a few buffers are shared I didn't want to complicate the abi for ttm much further so only added a pointer that can be null to use ttm's reservation_object structure.
The major difference with ttm is that each reservation object gets its own lock for fencing and reservations, but they can be merged:
spin_lock(obj->resv)
__dma_object_reserve()
grab a ref to all obj->fences
spin_unlock(obj->resv)

spin_lock(obj->resv)
assign new fence to obj->fences
__dma_object_unreserve()
spin_unlock(obj->resv)
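To make that calling sequence concrete, here is a compile-and-run userspace sketch of the same pattern: a pthread mutex stands in for the per-object reservation lock, and a toy refcounted struct stands in for dma_fence. All names are made up for illustration; this is not the proposed kernel API, and it skips contention and the ticket/seqno ordering entirely.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

struct res_fence { int refcount; };	/* toy stand-in for dma_fence */

struct res_object {
	pthread_mutex_t lock;		/* stands in for obj->resv's lock */
	int reserved;			/* stands in for the reserved atomic */
	struct res_fence *fence_excl;
};

/* reserve: under the lock, mark reserved and grab a ref to the old fence */
static struct res_fence *res_reserve(struct res_object *obj)
{
	struct res_fence *old;

	pthread_mutex_lock(&obj->lock);
	assert(!obj->reserved);		/* no contention in this sketch */
	obj->reserved = 1;
	old = obj->fence_excl;
	if (old)
		old->refcount++;	/* caller now holds a reference */
	pthread_mutex_unlock(&obj->lock);
	return old;
}

/* unreserve: under the lock, install the new fence and drop the reservation */
static void res_unreserve(struct res_object *obj, struct res_fence *fence)
{
	pthread_mutex_lock(&obj->lock);
	if (fence)
		fence->refcount++;	/* the object keeps its own reference */
	obj->fence_excl = fence;
	obj->reserved = 0;
	pthread_mutex_unlock(&obj->lock);
}
```

The point of the per-object lock is exactly this narrow scope: it is only held across the fence pointer manipulation, never across command submission.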
There's only one thing about fences I haven't been able to map properly yet. vmwgfx has sync_obj_flush, but as far as I can tell it has not much to do with sync objects; it is rather a generic 'flush before release'. Maybe one of the vmwgfx devs could confirm whether that call is really needed there? And if so, whether there could be some other way to do it, because it seems the ttm_bo_wait call before it would be enough; if not, it might help more to move the flush to some other call.
PS: For ttm devs some of the code may look familiar. I don't know if the kernel accepts an I-told-you-so tag or not, but if it does you might want to add them now. :-)
PPS: I'm aware that I still need to add a signaled op to fences
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index 030f705..7da9637 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -129,6 +129,8 @@ X!Edrivers/base/interface.c
 !Edrivers/base/dma-fence.c
 !Iinclude/linux/dma-fence.h
 !Iinclude/linux/dma-seqno-fence.h
+!Edrivers/base/dma-reservation.c
+!Iinclude/linux/dma-reservation.h
 !Edrivers/base/dma-coherent.c
 !Edrivers/base/dma-mapping.c
 </sect1>
diff --git a/drivers/base/Makefile b/drivers/base/Makefile
index 6e9f217..b26e639 100644
--- a/drivers/base/Makefile
+++ b/drivers/base/Makefile
@@ -10,7 +10,7 @@ obj-$(CONFIG_CMA) += dma-contiguous.o
 obj-y			+= power/
 obj-$(CONFIG_HAS_DMA)	+= dma-mapping.o
 obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
-obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o dma-fence.o
+obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o dma-fence.o dma-reservation.o
 obj-$(CONFIG_ISA)	+= isa.o
 obj-$(CONFIG_FW_LOADER)	+= firmware_class.o
 obj-$(CONFIG_NUMA)	+= node.o
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index 24e88fe..3c84ead 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -25,8 +25,10 @@
 #include <linux/fs.h>
 #include <linux/slab.h>
 #include <linux/dma-buf.h>
+#include <linux/dma-fence.h>
 #include <linux/anon_inodes.h>
 #include <linux/export.h>
+#include <linux/dma-reservation.h>
 
 static inline int is_dma_buf_file(struct file *);
 
@@ -40,6 +42,9 @@ static int dma_buf_release(struct inode *inode, struct file *file)
 	dmabuf = file->private_data;
 
 	dmabuf->ops->release(dmabuf);
+
+	if (dmabuf->resv == (struct dma_reservation_object *)&dmabuf[1])
+		dma_reservation_object_fini(dmabuf->resv);
 	kfree(dmabuf);
 	return 0;
 }
@@ -94,6 +99,8 @@ struct dma_buf *dma_buf_export(void *priv, const struct dma_buf_ops *ops,
 {
 	struct dma_buf *dmabuf;
 	struct file *file;
+	size_t alloc_size = sizeof(struct dma_buf);
+	alloc_size += sizeof(struct dma_reservation_object);
 
 	if (WARN_ON(!priv || !ops || !ops->map_dma_buf
@@ -105,13 +112,15 @@ struct dma_buf *dma_buf_export(void *priv, const struct dma_buf_ops *ops,
 		return ERR_PTR(-EINVAL);
 	}
 
-	dmabuf = kzalloc(sizeof(struct dma_buf), GFP_KERNEL);
+	dmabuf = kzalloc(alloc_size, GFP_KERNEL);
 	if (dmabuf == NULL)
 		return ERR_PTR(-ENOMEM);
 
 	dmabuf->priv = priv;
 	dmabuf->ops = ops;
 	dmabuf->size = size;
+	dmabuf->resv = (struct dma_reservation_object *)&dmabuf[1];
+	dma_reservation_object_init(dmabuf->resv);
 
 	file = anon_inode_getfile("dmabuf", &dma_buf_fops, dmabuf, flags);
diff --git a/drivers/base/dma-reservation.c b/drivers/base/dma-reservation.c
new file mode 100644
index 0000000..e7cf4fa
--- /dev/null
+++ b/drivers/base/dma-reservation.c
@@ -0,0 +1,321 @@
+/*
+ * Copyright (C) 2012 Canonical Ltd
+ *
+ * Based on ttm_bo.c which bears the following copyright notice,
+ * but is dual licensed:
+ *
+ * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ **************************************************************************/
+/*
+ * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
+ */
+
+#include <linux/dma-fence.h>
+#include <linux/dma-reservation.h>
+#include <linux/export.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+
+atomic64_t dma_reservation_counter = ATOMIC64_INIT(0);
+EXPORT_SYMBOL_GPL(dma_reservation_counter);
+
+int
+__dma_object_reserve(struct dma_reservation_object *obj, bool intr,
+		     bool no_wait, dma_reservation_ticket_t *ticket)
+{
+	int ret;
+	u64 sequence = ticket ? ticket->seqno : 0;
+
+	while (unlikely(atomic_cmpxchg(&obj->reserved, 0, 1) != 0)) {
+		/**
+		 * Deadlock avoidance for multi-dmabuf reserving.
+		 */
+		if (sequence && obj->sequence) {
+			/**
+			 * We've already reserved this one.
+			 */
+			if (unlikely(sequence == obj->sequence))
+				return -EDEADLK;
+			/**
+			 * Already reserved by a thread that will not back
+			 * off for us. We need to back off.
+			 */
+			if (unlikely(sequence - obj->sequence < (1ULL << 63)))
+				return -EAGAIN;
+		}
+
+		if (no_wait)
+			return -EBUSY;
+
+		spin_unlock(&obj->lock);
+		ret = dma_object_wait_unreserved(obj, intr);
+		spin_lock(&obj->lock);
+
+		if (unlikely(ret))
+			return ret;
+	}
+
+	/**
+	 * Wake up waiters that may need to recheck for deadlock,
+	 * if we decreased the sequence number.
+	 */
+	if (sequence && unlikely((obj->sequence - sequence < (1ULL << 63)) ||
+				 !obj->sequence))
+		wake_up_all(&obj->event_queue);
+
+	obj->sequence = sequence;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(__dma_object_reserve);
+
+int
+dma_object_reserve(struct dma_reservation_object *obj, bool intr,
+		   bool no_wait, dma_reservation_ticket_t *ticket)
+{
+	int ret;
+
+	spin_lock(&obj->lock);
+	ret = __dma_object_reserve(obj, intr, no_wait, ticket);
+	spin_unlock(&obj->lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dma_object_reserve);
+
+int
+dma_object_wait_unreserved(struct dma_reservation_object *obj, bool intr)
+{
+	if (intr) {
+		return wait_event_interruptible(obj->event_queue,
+			atomic_read(&obj->reserved) == 0);
+	} else {
+		wait_event(obj->event_queue,
+			   atomic_read(&obj->reserved) == 0);
+		return 0;
+	}
+}
+EXPORT_SYMBOL_GPL(dma_object_wait_unreserved);
+
+void
+__dma_object_unreserve(struct dma_reservation_object *obj,
+		       dma_reservation_ticket_t *ticket)
+{
+	atomic_set(&obj->reserved, 0);
+	wake_up_all(&obj->event_queue);
+}
+EXPORT_SYMBOL_GPL(__dma_object_unreserve);
+
+void
+dma_object_unreserve(struct dma_reservation_object *obj,
+		     dma_reservation_ticket_t *ticket)
+{
+	spin_lock(&obj->lock);
+	__dma_object_unreserve(obj, ticket);
+	spin_unlock(&obj->lock);
+}
+EXPORT_SYMBOL_GPL(dma_object_unreserve);
+
+/**
+ * dma_ticket_backoff - cancel a reservation
+ * @ticket: [in] a dma_reservation_ticket
+ * @entries: [in] the list of dma_reservation_entry entries to unreserve
+ *
+ * This function cancels a previous reservation done by
+ * dma_ticket_reserve. This is useful in case something
+ * goes wrong between reservation and committing.
+ *
+ * This should only be called after dma_ticket_reserve returns success.
+ *
+ * Please read Documentation/dma-buf-synchronization.txt
+ */
+void
+dma_ticket_backoff(struct dma_reservation_ticket *ticket,
+		   struct list_head *entries)
+{
+	struct list_head *cur;
+
+	if (list_empty(entries))
+		return;
+
+	list_for_each(cur, entries) {
+		struct dma_reservation_object *obj;
+
+		dma_reservation_entry_get(cur, &obj, NULL);
+
+		dma_object_unreserve(obj, ticket);
+	}
+	dma_reservation_ticket_fini(ticket);
+}
+EXPORT_SYMBOL_GPL(dma_ticket_backoff);
+
+static void
+dma_ticket_backoff_early(struct dma_reservation_ticket *ticket,
+			 struct list_head *list,
+			 struct dma_reservation_entry *entry)
+{
+	list_for_each_entry_continue_reverse(entry, list, head) {
+		struct dma_reservation_object *obj;
+
+		dma_reservation_entry_get(&entry->head, &obj, NULL);
+		dma_object_unreserve(obj, ticket);
+	}
+	dma_reservation_ticket_fini(ticket);
+}
+
+/**
+ * dma_ticket_reserve - reserve a list of dma_reservation_entry
+ * @ticket: [out] a dma_reservation_ticket
+ * @entries: [in] a list of entries to reserve.
+ *
+ * Do not initialize ticket, it will be initialized by this function.
+ *
+ * XXX: Nuke rest
+ * The caller will have to queue waits on those fences before calling
+ * dmabufmgr_fence_buffer_objects, with either hardware specific methods,
+ * dma_fence_add_callback will, or dma_fence_wait.
+ *
+ * As such, by incrementing refcount on dma_reservation_entry before calling
+ * dma_fence_add_callback, and making the callback decrement refcount on
+ * dma_reservation_entry, or releasing refcount if dma_fence_add_callback
+ * failed, the dma_reservation_entry will be freed when all the fences
+ * have been signaled, and only after the last ref is released, which should
+ * be after dmabufmgr_fence_buffer_objects. With proper locking, when the
+ * list_head holding the list of dma_reservation_entry's becomes empty it
+ * indicates all fences for all dma-bufs have been signaled.
+ *
+ * Please read Documentation/dma-buf-synchronization.txt
+ */
+int
+dma_ticket_reserve(struct dma_reservation_ticket *ticket,
+		   struct list_head *entries)
+{
+	struct list_head *cur;
+	int ret;
+
+	if (list_empty(entries))
+		return 0;
+
+retry:
+	dma_reservation_ticket_init(ticket);
+
+	list_for_each(cur, entries) {
+		struct dma_reservation_entry *entry;
+		struct dma_reservation_object *bo;
+		bool shared;
+
+		entry = dma_reservation_entry_get(cur, &bo, &shared);
+
+		ret = dma_object_reserve(bo, true, false, ticket);
+		switch (ret) {
+		case 0:
+			break;
+		case -EAGAIN:
+			dma_ticket_backoff_early(ticket, entries, entry);
+			ret = dma_object_wait_unreserved(bo, true);
+			if (unlikely(ret != 0))
+				return ret;
+			goto retry;
+		default:
+			dma_ticket_backoff_early(ticket, entries, entry);
+			return ret;
+		}
+
+		if (shared &&
+		    bo->fence_shared_count == DMA_BUF_MAX_SHARED_FENCE) {
+			WARN_ON_ONCE(1);
+			dma_ticket_backoff_early(ticket, entries, entry);
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dma_ticket_reserve);
+
+/**
+ * dma_ticket_commit - commit a reservation with a new fence
+ * @ticket: [in] the dma_reservation_ticket returned by
+ * dma_ticket_reserve
+ * @entries: [in] a linked list of struct dma_reservation_entry
+ * @fence: [in] the fence that indicates completion
+ *
+ * This function will call dma_reservation_ticket_fini, no need
+ * to do it manually.
+ *
+ * This function should be called after a hardware command submission is
+ * completed successfully. The fence is used to indicate completion of
+ * those commands.
+ *
+ * Please read Documentation/dma-buf-synchronization.txt
+ */
+void
+dma_ticket_commit(struct dma_reservation_ticket *ticket,
+		  struct list_head *entries, struct dma_fence *fence)
+{
+	struct list_head *cur;
+
+	if (list_empty(entries))
+		return;
+
+	if (WARN_ON(!fence)) {
+		dma_ticket_backoff(ticket, entries);
+		return;
+	}
+
+	list_for_each(cur, entries) {
+		struct dma_reservation_object *bo;
+		bool shared;
+
+		dma_reservation_entry_get(cur, &bo, &shared);
+
+		spin_lock(&bo->lock);
+
+		if (!shared) {
+			int i;
+			for (i = 0; i < bo->fence_shared_count; ++i) {
+				dma_fence_put(bo->fence_shared[i]);
+				bo->fence_shared[i] = NULL;
+			}
+			bo->fence_shared_count = 0;
+			if (bo->fence_excl)
+				dma_fence_put(bo->fence_excl);
+
+			bo->fence_excl = fence;
+		} else {
+			if (WARN_ON(bo->fence_shared_count >=
+				    ARRAY_SIZE(bo->fence_shared))) {
+				spin_unlock(&bo->lock);
+				continue;
+			}
+
+			bo->fence_shared[bo->fence_shared_count++] = fence;
+		}
+		dma_fence_get(fence);
+
+		__dma_object_unreserve(bo, ticket);
+		spin_unlock(&bo->lock);
+	}
+	dma_reservation_ticket_fini(ticket);
+}
+EXPORT_SYMBOL_GPL(dma_ticket_commit);
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index bd2e52c..dee44dd 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -122,6 +122,7 @@ struct dma_buf {
 	/* mutex to serialize list manipulation and attach/detach */
 	struct mutex lock;
 	void *priv;
+	struct dma_reservation_object *resv;
 };
 
 /**
diff --git a/include/linux/dma-reservation.h b/include/linux/dma-reservation.h
new file mode 100644
index 0000000..b8798c1
--- /dev/null
+++ b/include/linux/dma-reservation.h
@@ -0,0 +1,170 @@
+/*
+ * Header file for dma buffer sharing framework.
+ *
+ * Copyright(C) 2011 Linaro Limited. All rights reserved.
+ * Author: Sumit Semwal <sumit.semwal@ti.com>
+ *
+ * Many thanks to linaro-mm-sig list, and specially
+ * Arnd Bergmann <arnd@arndb.de>, Rob Clark <rob@ti.com> and
+ * Daniel Vetter <daniel@ffwll.ch> for their support in creation and
+ * refining of this idea.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __DMA_RESERVATION_H__
+#define __DMA_RESERVATION_H__
+
+#define DMA_BUF_MAX_SHARED_FENCE 8
+
+#include <linux/dma-fence.h>
+
+extern atomic64_t dma_reservation_counter;
+
+struct dma_reservation_object {
+	wait_queue_head_t event_queue;
+	spinlock_t lock;
+
+	atomic_t reserved;
+
+	u64 sequence;
+	u32 fence_shared_count;
+	struct dma_fence *fence_excl;
+	struct dma_fence *fence_shared[DMA_BUF_MAX_SHARED_FENCE];
+};
+
+typedef struct dma_reservation_ticket {
+	u64 seqno;
+} dma_reservation_ticket_t;
+
+/**
+ * struct dma_reservation_entry - reservation structure for a
+ * dma_reservation_object
+ * @head: list entry
+ * @obj_shared: pointer to a dma_reservation_object to reserve
+ *
+ * Bit 0 of obj_shared is set to bool shared, as such pointer has to be
+ * converted back, which can be done with dma_reservation_entry_get.
+ */
+struct dma_reservation_entry {
+	struct list_head head;
+	unsigned long obj_shared;
+};
+
+static inline void
+__dma_reservation_object_init(struct dma_reservation_object *obj)
+{
+	init_waitqueue_head(&obj->event_queue);
+	spin_lock_init(&obj->lock);
+}
+
+static inline void
+dma_reservation_object_init(struct dma_reservation_object *obj)
+{
+	memset(obj, 0, sizeof(*obj));
+	__dma_reservation_object_init(obj);
+}
+
+static inline void
+dma_reservation_object_fini(struct dma_reservation_object *obj)
+{
+	int i;
+
+	BUG_ON(waitqueue_active(&obj->event_queue));
+	BUG_ON(atomic_read(&obj->reserved));
+
+	if (obj->fence_excl)
+		dma_fence_put(obj->fence_excl);
+	for (i = 0; i < obj->fence_shared_count; ++i)
+		dma_fence_put(obj->fence_shared[i]);
+}
+
+static inline void
+dma_reservation_ticket_init(struct dma_reservation_ticket *t)
+{
+	do {
+		t->seqno = atomic64_inc_return(&dma_reservation_counter);
+	} while (unlikely(!t->seqno));
+}
+
+/**
+ * dma_reservation_ticket_fini - end a reservation ticket
+ * @t: [in] dma_reservation_ticket that completed all reservations
+ *
+ * This currently does nothing, but should be called after all reservations
+ * made with this ticket have been unreserved. It is likely that in the future
+ * it will be hooked up to perf events, or aid in debugging in other ways.
+ */
+static inline void
+dma_reservation_ticket_fini(struct dma_reservation_ticket *t)
+{ }
+
+/**
+ * dma_reservation_entry_init - initialize and append a dma_reservation_entry
+ * to the list
+ * @entry: entry to initialize
+ * @list: list to append to
+ * @obj: dma_reservation_object to initialize the entry with
+ * @shared: whether shared or exclusive access is requested
+ */
+static inline void
+dma_reservation_entry_init(struct dma_reservation_entry *entry,
+			   struct list_head *list,
+			   struct dma_reservation_object *obj, bool shared)
+{
+	entry->obj_shared = (unsigned long)obj | !!shared;
+	list_add_tail(&entry->head, list);
+}
+
+static inline struct dma_reservation_entry *
+dma_reservation_entry_get(struct list_head *list,
+			  struct dma_reservation_object **obj, bool *shared)
+{
+	struct dma_reservation_entry *e =
+		container_of(list, struct dma_reservation_entry, head);
+	unsigned long val = e->obj_shared;
+
+	if (obj)
+		*obj = (struct dma_reservation_object *)(val & ~1);
+	if (shared)
+		*shared = val & 1;
+	return e;
+}
+
+extern int
+__dma_object_reserve(struct dma_reservation_object *obj,
+		     bool intr, bool no_wait,
+		     dma_reservation_ticket_t *ticket);
+
+extern int
+dma_object_reserve(struct dma_reservation_object *obj,
+		   bool intr, bool no_wait,
+		   dma_reservation_ticket_t *ticket);
+
+extern void
+__dma_object_unreserve(struct dma_reservation_object *,
+		       dma_reservation_ticket_t *ticket);
+
+extern void
+dma_object_unreserve(struct dma_reservation_object *,
+		     dma_reservation_ticket_t *ticket);
+
+extern int
+dma_object_wait_unreserved(struct dma_reservation_object *, bool intr);
+
+extern int dma_ticket_reserve(struct dma_reservation_ticket *,
+			      struct list_head *entries);
+extern void dma_ticket_backoff(struct dma_reservation_ticket *,
+			       struct list_head *entries);
+extern void dma_ticket_commit(struct dma_reservation_ticket *,
+			      struct list_head *entries, struct dma_fence *);
+
+#endif /* __DMA_RESERVATION_H__ */
Hi, Maarten, please see some comments inline.
On 08/22/2012 01:50 PM, Maarten Lankhorst wrote:
Hey Dan,
Op 16-08-12 01:12, Daniel Vetter schreef:
Hi Maarten,
Ok, here comes the promised review (finally!), but it's rather a high-level thingy. I've mostly thought about how we could create a neat api with the following points. For a bit of clarity, I've grouped the different considerations a bit.
<snip>
Thanks, I have significantly reworked the api based on your comments.
Documentation is currently lacking, and will get updated again for the final version.
Full patch series also includes some ttm changes to make use of dma-reservation, with the intention of moving out fencing from ttm too, but that requires more work.
For the full series see: http://cgit.freedesktop.org/~mlankhorst/linux/log/?h=v10-wip
My plan is to add a pointer for dma_reservation to a dma-buf, so all users of dma-reservation can perform reservations across multiple devices as well. Since the default for ttm likely will mean only a few buffers are shared, I didn't want to complicate the abi for ttm much further, so I only added a pointer that can be null to use ttm's reservation_object structure.
The major difference with ttm is that each reservation object gets its own lock for fencing and reservations, but they can be merged:
TTM previously had a lock on each buffer object which protected sync_obj and sync_obj_arg. However, when fencing multiple buffers, say 100 buffers or so in a single command submission, that meant 100 locks / unlocks that weren't really necessary: just updating the sync_obj and sync_obj_arg members is a pretty quick operation, whereas locking may be a pretty slow operation, so those locks were removed for efficiency. The reason a single lock (the lru lock) is used to protect reservation is that a TTM object that is being reserved needs to be *atomically* taken off LRU lists, since processes performing LRU eviction don't take a ticket when evicting, and may thus cause deadlocks. It might be possible to fix this within TTM by requiring a ticket for all reservations, but then that ticket needs to be passed down the call chain to all functions that may perform a reservation. It would perhaps be simpler (but perhaps not so fair) to use the current thread info pointer as a ticket sequence number.
spin_lock(obj->resv)
__dma_object_reserve()
grab a ref to all obj->fences
spin_unlock(obj->resv)

spin_lock(obj->resv)
assign new fence to obj->fences
__dma_object_unreserve()
spin_unlock(obj->resv)
There's only one thing about fences I haven't been able to map properly yet. vmwgfx has sync_obj_flush, but as far as I can tell it has not much to do with sync objects, and is rather a generic 'flush before release'. Maybe one of the vmwgfx devs could confirm whether that call is really needed there? And if so, whether there could be some other way to do that, because it seems the ttm_bo_wait call before it would be enough; if not, it might help more to move the flush to some other call.
The fence flush should be interpreted as an operation for fencing mechanisms that aren't otherwise required to signal in finite time, and where the time from flush to signal might be substantial. TTM is then supposed to issue a fence flush when it knows ahead of time that it will soon perform a periodic poll for a buffer to be idle, but not block waiting for the buffer to be idle. The delayed buffer delete mechanism is, I think, the only user currently. For hardware that always signals fences immediately, the flush mechanism is not needed.
PS: For ttm devs some of the code may look familiar, I don't know if the kernel accepts I-told-you-so tag or not, but if it does you might want to add them now. :-)
PPS: I'm aware that I still need to add a signaled op to fences
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index 030f705..7da9637 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -129,6 +129,8 @@ X!Edrivers/base/interface.c
 !Edrivers/base/dma-fence.c
 !Iinclude/linux/dma-fence.h
 !Iinclude/linux/dma-seqno-fence.h
+!Edrivers/base/dma-reservation.c
+!Iinclude/linux/dma-reservation.h
 !Edrivers/base/dma-coherent.c
 !Edrivers/base/dma-mapping.c
     </sect1>
diff --git a/drivers/base/Makefile b/drivers/base/Makefile
index 6e9f217..b26e639 100644
--- a/drivers/base/Makefile
+++ b/drivers/base/Makefile
@@ -10,7 +10,7 @@ obj-$(CONFIG_CMA) += dma-contiguous.o
 obj-y			+= power/
 obj-$(CONFIG_HAS_DMA)	+= dma-mapping.o
 obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
-obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o dma-fence.o
+obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o dma-fence.o dma-reservation.o
 obj-$(CONFIG_ISA)	+= isa.o
 obj-$(CONFIG_FW_LOADER)	+= firmware_class.o
 obj-$(CONFIG_NUMA)	+= node.o
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index 24e88fe..3c84ead 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -25,8 +25,10 @@
 #include <linux/fs.h>
 #include <linux/slab.h>
 #include <linux/dma-buf.h>
+#include <linux/dma-fence.h>
 #include <linux/anon_inodes.h>
 #include <linux/export.h>
+#include <linux/dma-reservation.h>

 static inline int is_dma_buf_file(struct file *);

@@ -40,6 +42,9 @@ static int dma_buf_release(struct inode *inode, struct file *file)
 	dmabuf = file->private_data;

 	dmabuf->ops->release(dmabuf);
+	if (dmabuf->resv == (struct dma_reservation_object *)&dmabuf[1])
+		dma_reservation_object_fini(dmabuf->resv);
+
 	kfree(dmabuf);
 	return 0;
 }
@@ -94,6 +99,8 @@ struct dma_buf *dma_buf_export(void *priv, const struct dma_buf_ops *ops,
 {
 	struct dma_buf *dmabuf;
 	struct file *file;
+	size_t alloc_size = sizeof(struct dma_buf);
+
+	alloc_size += sizeof(struct dma_reservation_object);

 	if (WARN_ON(!priv || !ops || !ops->map_dma_buf
@@ -105,13 +112,15 @@ struct dma_buf *dma_buf_export(void *priv, const struct dma_buf_ops *ops,
 		return ERR_PTR(-EINVAL);
 	}

-	dmabuf = kzalloc(sizeof(struct dma_buf), GFP_KERNEL);
+	dmabuf = kzalloc(alloc_size, GFP_KERNEL);
 	if (dmabuf == NULL)
 		return ERR_PTR(-ENOMEM);

 	dmabuf->priv = priv;
 	dmabuf->ops = ops;
 	dmabuf->size = size;
+	dmabuf->resv = (struct dma_reservation_object *)&dmabuf[1];
+	dma_reservation_object_init(dmabuf->resv);

 	file = anon_inode_getfile("dmabuf", &dma_buf_fops, dmabuf, flags);
diff --git a/drivers/base/dma-reservation.c b/drivers/base/dma-reservation.c
new file mode 100644
index 0000000..e7cf4fa
--- /dev/null
+++ b/drivers/base/dma-reservation.c
@@ -0,0 +1,321 @@
+/*
+ * Copyright (C) 2012 Canonical Ltd
+ *
+ * Based on ttm_bo.c which bears the following copyright notice,
+ * but is dual licensed:
+ *
+ * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ **************************************************************************/
+/*
+ * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
+ */
+
+#include <linux/dma-fence.h>
+#include <linux/dma-reservation.h>
+#include <linux/export.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+
+atomic64_t dma_reservation_counter = ATOMIC64_INIT(0);
+EXPORT_SYMBOL_GPL(dma_reservation_counter);
+
+int
+__dma_object_reserve(struct dma_reservation_object *obj, bool intr,
+		     bool no_wait, dma_reservation_ticket_t *ticket)
+{
+	int ret;
+	u64 sequence = ticket ? ticket->seqno : 0;
+
+	while (unlikely(atomic_cmpxchg(&obj->reserved, 0, 1) != 0)) {
+		/**
+		 * Deadlock avoidance for multi-dmabuf reserving.
+		 */
+		if (sequence && obj->sequence) {
+			/**
+			 * We've already reserved this one.
+			 */
+			if (unlikely(sequence == obj->sequence))
+				return -EDEADLK;
+			/**
+			 * Already reserved by a thread that will not back
+			 * off for us. We need to back off.
+			 */
+			if (unlikely(sequence - obj->sequence < (1ULL << 63)))
+				return -EAGAIN;
+		}
+
+		if (no_wait)
+			return -EBUSY;
+
+		spin_unlock(&obj->lock);
+		ret = dma_object_wait_unreserved(obj, intr);
+		spin_lock(&obj->lock);
+
+		if (unlikely(ret))
+			return ret;
+	}
+
+	/**
+	 * Wake up waiters that may need to recheck for deadlock,
+	 * if we decreased the sequence number.
+	 */
+	if (sequence && unlikely((obj->sequence - sequence < (1ULL << 63)) ||
+				 !obj->sequence))
+		wake_up_all(&obj->event_queue);
+
+	obj->sequence = sequence;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(__dma_object_reserve);
Since this function and the corresponding unreserve is exported, it should probably be documented (this holds for TTM as well) that they need memory barriers to protect data, since IIRC the linux atomic_xxx operations do not necessarily order memory reads and writes. For the corresponding unlocked dma_object_reserve and dma_object_unreserve, the spinlocks should take care of that.
/Thomas
Hey,
Op 22-08-12 14:52, Thomas Hellstrom schreef:
Hi, Maarten, please see some comments inline.
On 08/22/2012 01:50 PM, Maarten Lankhorst wrote:
Hey Dan,
Op 16-08-12 01:12, Daniel Vetter schreef:
Hi Maarten,
Ok, here comes the promised review (finally!), but it's rather a high-level thingy. I've mostly thought about how we could create a neat api with the following points. For a bit of clarity, I've grouped the different considerations a bit.
<snip>
Thanks, I have significantly reworked the api based on your comments.
Documentation is currently lacking, and will get updated again for the final version.
Full patch series also includes some ttm changes to make use of dma-reservation, with the intention of moving out fencing from ttm too, but that requires more work.
For the full series see: http://cgit.freedesktop.org/~mlankhorst/linux/log/?h=v10-wip
My plan is to add a pointer for dma_reservation to a dma-buf, so all users of dma-reservation can perform reservations across multiple devices as well. Since the default for ttm likely will mean only a few buffers are shared I didn't want to complicate the abi for ttm much further so only added a pointer that can be null to use ttm's reservation_object structure.
The major difference with ttm is that each reservation object gets its own lock for fencing and reservations, but they can be merged:
TTM previously had a lock on each buffer object which protected sync_obj and sync_obj_arg, however when fencing multiple buffers, say 100 buffers or so in a single command submission, it meant 100 locks / unlocks that weren't really necessary, since just updating the sync_obj and sync_obj_arg members is a pretty quick operation, whereas locking may be a pretty slow operation, so those locks were removed for efficiency.
Speaking of which, mind if I kill sync_obj_arg? Only user is again vmwgfx and it always seems to pass the same for flags, namely DRM_VMW_FENCE_FLAG_EXEC.
The reason a single lock (the lru lock) is used to protect reservation is that a TTM object that is being reserved *atomically* needs to be taken off LRU lists, since processes performing LRU eviction don't take a ticket when evicting, and may thus cause deadlocks; It might be possible to fix this within TTM by requiring a ticket for all reservation, but then that ticket needs to be passed down the call chain for all functions that may perform a reservation. It would perhaps be simpler (but perhaps not so fair) to use the current thread info pointer as a ticket sequence number.
Yeah, that's why the ttm patch for ttm_bo_reserve_locked always calls dma_object_reserve with no_wait set to true. :) It does its own EBUSY handling for the no_wait case, so there should be no functional changes.
I've been toying with the idea of always requiring a sequence number, I just didn't in the current patch yet since it would mean converting every driver, so for a preliminary patch based on an unmerged api it was not worth the time.
spin_lock(obj->resv)
__dma_object_reserve()
grab a ref to all obj->fences
spin_unlock(obj->resv)

spin_lock(obj->resv)
assign new fence to obj->fences
__dma_object_unreserve()
spin_unlock(obj->resv)
There's only one thing about fences I haven't been able to map yet properly. vmwgfx has sync_obj_flush, but as far as I can tell it has not much to do with sync objects, but is rather a generic 'flush before release'. Maybe one of the vmwgfx devs could confirm whether that call is really needed there? And if so, if there could be some other way do that, because it seems to be the ttm_bo_wait call before that would be enough, if not it might help more to move the flush to some other call.
The fence flush should be interpreted as an operation for fencing mechanisms that aren't otherwise required to signal in finite time, and where the time from flush to signal might be substantial. TTM is then supposed to issue a fence flush when it knows ahead of time that it will soon perform a periodical poll for a buffer to be idle, but not block waiting for the buffer to be idle. The delayed buffer delete mechanism is, I think, the only user currently. For hardware that always signal fences immediately, the flush mechanism is not needed.
So if I understand it correctly it is the same as I'm doing in fences with dma_fence::enable_sw_signals? Great, I don't need to add another op then. Although it looks like I should export a function to manually enable it for those cases. :)
<snip quoted __dma_object_reserve(), identical to the function in the patch above>
Since this function and the corresponding unreserve is exported, it should probably be documented (this holds for TTM as well) that they need memory barriers to protect data, since IIRC the linux atomic_xxx operations do not necessarily order memory reads and writes. For the corresponding unlocked dma_object_reserve and dma_object_unreserve, the spinlocks should take care of that.
The documentation is still lacking, but they require the spinlocks to be taken by the caller, else things explode. It's meant for updating fence and ending reservation atomically.
~Maarten
On 08/22/2012 03:32 PM, Maarten Lankhorst wrote:
Hey,
Op 22-08-12 14:52, Thomas Hellstrom schreef:
Hi, Maarten, please see some comments inline.
On 08/22/2012 01:50 PM, Maarten Lankhorst wrote:
Hey Dan,
Op 16-08-12 01:12, Daniel Vetter schreef:
Hi Maarten,
Ok, here comes the promised review (finally!), but it's rather a high-level thingy. I've mostly thought about how we could create a neat api with the following points. For a bit of clarity, I've grouped the different considerations a bit.
<snip>
Thanks, I have significantly reworked the api based on your comments.
Documentation is currently lacking, and will get updated again for the final version.
Full patch series also includes some ttm changes to make use of dma-reservation, with the intention of moving out fencing from ttm too, but that requires more work.
For the full series see: http://cgit.freedesktop.org/~mlankhorst/linux/log/?h=v10-wip
My plan is to add a pointer for dma_reservation to a dma-buf, so all users of dma-reservation can perform reservations across multiple devices as well. Since the default for ttm likely will mean only a few buffers are shared I didn't want to complicate the abi for ttm much further so only added a pointer that can be null to use ttm's reservation_object structure.
The major difference with ttm is that each reservation object gets its own lock for fencing and reservations, but they can be merged:
TTM previously had a lock on each buffer object which protected sync_obj and sync_obj_arg, however when fencing multiple buffers, say 100 buffers or so in a single command submission, it meant 100 locks / unlocks that weren't really necessary, since just updating the sync_obj and sync_obj_arg members is a pretty quick operation, whereas locking may be a pretty slow operation, so those locks were removed for efficiency.
Speaking of which, mind if I kill sync_obj_arg? Only user is again vmwgfx and it always seems to pass the same for flags, namely DRM_VMW_FENCE_FLAG_EXEC.
I guess so, although I've always thought it to be a great idea :), but nobody really understands or cares what it's for.
Which is: a single fence might have multiple definitions of 'signaled', depending on the user. Consider an awkward GPU with a single command stream that feeds multiple engines. The command parser signals when it has parsed the commands, the 2D engine signals when it is done with the 2D commands it has been fed, the 3D engine signals when the 3D engine is done, and finally the flush engine signals when all rendered data is flushed. Depending on which engines touch a buffer, each buffer may have a different view of when the attached fence is signaled.
But anyway. No in-tree driver is using it, (the old unichrome driver did), and I guess the same functionality can be implemented with multiple fences attached to a single buffer, so feel free to get rid of it.
The reason a single lock (the lru lock) is used to protect reservation is that a TTM object that is being reserved *atomically* needs to be taken off LRU lists, since processes performing LRU eviction don't take a ticket when evicting, and may thus cause deadlocks; It might be possible to fix this within TTM by requiring a ticket for all reservation, but then that ticket needs to be passed down the call chain for all functions that may perform a reservation. It would perhaps be simpler (but perhaps not so fair) to use the current thread info pointer as a ticket sequence number.
Yeah, that's why the ttm patch for ttm_bo_reserve_locked always calls dma_object_reserve with no_wait set to true. :) It does its own EBUSY handling for the no_wait case, so there should be no functional changes.
I need to look a bit deeper into the TTM patches, but as long as nothing breaks I've nothing against it using dma reservation objects. OTOH, it might be worthwhile thinking about the 'dma' prefix, since the reservation objects may find use elsewhere as well, for example for vmwgfx resources, that really have little to do with dma-buffers or buffers at all.
I've been toying with the idea of always requiring a sequence number, I just didn't in the current patch yet since it would mean converting every driver, so for a preliminary patch based on a unmerged api it was not worth the time.
spin_lock(obj->resv)
__dma_object_reserve()
grab a ref to all obj->fences
spin_unlock(obj->resv)

spin_lock(obj->resv)
assign new fence to obj->fences
__dma_object_unreserve()
spin_unlock(obj->resv)
There's only one thing about fences I haven't been able to map yet properly. vmwgfx has sync_obj_flush, but as far as I can tell it has not much to do with sync objects, but is rather a generic 'flush before release'. Maybe one of the vmwgfx devs could confirm whether that call is really needed there? And if so, if there could be some other way do that, because it seems to be the ttm_bo_wait call before that would be enough, if not it might help more to move the flush to some other call.
The fence flush should be interpreted as an operation for fencing mechanisms that aren't otherwise required to signal in finite time, and where the time from flush to signal might be substantial. TTM is then supposed to issue a fence flush when it knows ahead of time that it will soon perform a periodical poll for a buffer to be idle, but not block waiting for the buffer to be idle. The delayed buffer delete mechanism is, I think, the only user currently. For hardware that always signal fences immediately, the flush mechanism is not needed.
So if I understand it correctly it is the same as I'm doing in fences with dma_fence::enable_sw_signals? Great, I don't need to add another op then. Although it looks like I should export a function to manually enable it for those cases. :)
Again, i need to look a bit deeper into the enable_sw_signals stuff.
<snip quoted __dma_object_reserve(), identical to the function in the patch above>
Since this function and the corresponding unreserve is exported, it should probably be documented (this holds for TTM as well) that they need memory barriers to protect data, since IIRC the linux atomic_xxx operations do not necessarily order memory reads and writes. For the corresponding unlocked dma_object_reserve and dma_object_unreserve, the spinlocks should take care of that.
The documentation is still lacking, but they require the spinlocks to be taken by the caller, else things explode. It's meant for updating fence and ending reservation atomically.
OK.
/Thomas
~Maarten
On Wed, Aug 22, 2012 at 02:52:10PM +0200, Thomas Hellstrom wrote:
Hi, Maarten, please see some comments inline.
On 08/22/2012 01:50 PM, Maarten Lankhorst wrote:
Hey Dan,
Op 16-08-12 01:12, Daniel Vetter schreef:
Hi Maarten,
Ok, here comes the promised review (finally!), but it's rather a high-level thingy. I've mostly thought about how we could create a neat api with the following points. For a bit of clarity, I've grouped the different considerations a bit.
<snip>
Thanks, I have significantly reworked the api based on your comments.
Documentation is currently lacking, and will get updated again for the final version.
Full patch series also includes some ttm changes to make use of dma-reservation, with the intention of moving out fencing from ttm too, but that requires more work.
For the full series see: http://cgit.freedesktop.org/~mlankhorst/linux/log/?h=v10-wip
My plan is to add a pointer for dma_reservation to a dma-buf, so all users of dma-reservation can perform reservations across multiple devices as well. Since the default for ttm likely will mean only a few buffers are shared I didn't want to complicate the abi for ttm much further so only added a pointer that can be null to use ttm's reservation_object structure.
The major difference with ttm is that each reservation object gets its own lock for fencing and reservations, but they can be merged:
TTM previously had a lock on each buffer object which protected sync_obj and sync_obj_arg, however when fencing multiple buffers, say 100 buffers or so in a single command submission, it meant 100 locks / unlocks that weren't really necessary, since just updating the sync_obj and sync_obj_arg members is a pretty quick operation, whereas locking may be a pretty slow operation, so those locks were removed for efficiency. The reason a single lock (the lru lock) is used to protect reservation is that a TTM object that is being reserved *atomically* needs to be taken off LRU lists, since processes performing LRU eviction don't take a ticket when evicting, and may thus cause deadlocks; It might be possible to fix this within TTM by requiring a ticket for all reservation, but then that ticket needs to be passed down the call chain for all functions that may perform a reservation. It would perhaps be simpler (but perhaps not so fair) to use the current thread info pointer as a ticket sequence number.
While discussing this stuff with Maarten I've read through the generic mutex code, and I think we could adapt the ideas from in there (which would boil down to a single atomice op for the fastpath for both reserve and unreserve, which even have per-arch optimized asm). So I think we can make the per-obj lock as fast as it's possible, since the current ttm fences already use that atomic op.
For passing the reservation_ticket down the callstacks I guess with a common reservation systems used for shared buffers (which is the idea here) we can make a good case to add a pointer to the current thread info. Especially for cross-device reservations through dma_buf I think that would simplify the interfaces quite a bit.
Wrt the dma_ prefix I agree it's not a stellar name, but since the intention is to use this together with dma_buf and dma_fence to facilitate cross-device madness it does fit somewhat ...
Fyi I hopefully get around to play with Maarten's patches a bit, too. One of the things I'd like to add to the current reservation framework is lockdep annotations. Since if we use this across devices it's way too easy to nest reservations improperly, or to create deadlocks because one thread grabs another lock while holding reservations, while another tries to reserve buffers while holding that lock.
spin_lock(obj->resv)
__dma_object_reserve()
grab a ref to all obj->fences
spin_unlock(obj->resv)

spin_lock(obj->resv)
assign new fence to obj->fences
__dma_object_unreserve()
spin_unlock(obj->resv)
There's only one thing about fences I haven't been able to map yet properly. vmwgfx has sync_obj_flush, but as far as I can tell it has not much to do with sync objects, but is rather a generic 'flush before release'. Maybe one of the vmwgfx devs could confirm whether that call is really needed there? And if so, if there could be some other way do that, because it seems to be the ttm_bo_wait call before that would be enough, if not it might help more to move the flush to some other call.
The fence flush should be interpreted as an operation for fencing mechanisms that aren't otherwise required to signal in finite time, and where the time from flush to signal might be substantial. TTM is then supposed to issue a fence flush when it knows ahead of time that it will soon perform a periodical poll for a buffer to be idle, but not block waiting for the buffer to be idle. The delayed buffer delete mechanism is, I think, the only user currently. For hardware that always signal fences immediately, the flush mechanism is not needed.
Hm, atm we only call back to the driver for dma_fences when adding a callback (or waiting for the fence in a blocking fashion). I guess we could add another interface that just does this call, without adding any callback - as a heads-up of sorts for drivers where making a fence signal in time is expensive and/or should be done as early as possible if timely signaling is required. -Daniel
On Fri, Aug 10, 2012 at 04:57:43PM +0200, Maarten Lankhorst wrote:
Documentation says that code requiring dma-buf should add it to select, so inline fallbacks are not going to be used. A link error will make it obvious what went wrong, instead of silently doing nothing at runtime.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com
I've botched updating these more than once when creating new dma-buf code. Hence
Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
On Fri, Aug 10, 2012 at 2:32 PM, Daniel Vetter daniel@ffwll.ch wrote:
On Fri, Aug 10, 2012 at 04:57:43PM +0200, Maarten Lankhorst wrote:
Documentation says that code requiring dma-buf should add it to select, so inline fallbacks are not going to be used. A link error will make it obvious what went wrong, instead of silently doing nothing at runtime.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com
I've botched it more than once to update these when creating new dma-buf code. Hence
Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
yeah, I think the fallbacks date back to when it was a user configurable option, rather than something select'd by drivers using dmabuf, and we just never went back to clean up. Let's drop the fallbacks.
Reviewed-by: Rob Clark rob.clark@linaro.org
-- Daniel Vetter Mail: daniel@ffwll.ch Mobile: +41 (0)79 365 57 48